\section{Introduction} The extraction of physical laws from experimental data, often in the form of differential and partial differential equations, may be critical to science and engineering applications where governing equations are unknown. Time-series data collected from experiments, or as the macroscopic aggregate of small-scale behavior, often obeys unknown governing equations that are parametrized by time-evolving parameters $\mu(t)$. In some cases, it may be possible to derive physical laws from first principles using data as well as some knowledge of the system, but there are many cases where this is elusive, such as the large-scale networked dynamical systems of the power grid and the brain, or chemical kinetics of, for example, the Belousov-Zhabotinsky reaction. Recently, there has been a substantial research effort towards automating the process of data-driven model discovery in order to identify interpretable expressions for the dynamics in the form of ordinary and partial differential equations (ODEs and PDEs): \begin{equation} u_t= N(u, u_x, u_{xx}, \dots ,\mu(t)),\quad t\in [0,T], \label{eq:overview} \end{equation} where $N(\cdot)$ characterizes the evolution of the system and its parametric dependencies through the parameter $\mu(t):[0,T]\rightarrow \R$. Although a number of automated discovery techniques have been developed for discovering the right-hand side of (\ref{eq:overview}) with constant parameters $\mu(t)=\mu_0$, none have demonstrated the capability to infer the governing equations when the parametrization $\mu(t)$ has explicit time dependence. By imposing group sparsity techniques, we develop a mathematical architecture that allows us to explicitly disambiguate the dynamical evolution (\ref{eq:overview}) from its parametric dependencies $\mu(t)$. 
This is a critical innovation in model discovery as most realistic systems do indeed have time-dependent parametric dependencies that must be concurrently extracted during the model discovery process. Figure~\ref{fig:param} demonstrates two prototypical parametric dependencies: (a) a PDE model whose constant parameters change at fixed points in time and (b) a PDE model that depends continuously on the parameter $\mu(t)$. Our proposed model discovery method provides a principled approach to efficiently handle these two parametric cases, thus advancing the field of model discovery by disambiguating the dynamics from parametric time dependence. Even if the governing equations are known, parametric dependencies in PDEs complicate numerical simulation schemes and challenge one's ability to produce accurate future state predictions. Characterizing parametric dependence is also a critical task for model reduction in both time-independent~\cite{quarteroni2015reduced,hesthaven2016certified} and time-dependent PDEs~\cite{benner2015survey,benner2017model}. Thus the ability to explicitly extract the parametric dependence of a spatio-temporal system is necessary for accurate, quantitative characterization of PDEs. \begin{figure}[t] \centering \begin{overpic}[width=0.56\textwidth]{param_fig} \put(-6,17){${\mu}(t)$} \put(3,30){(a)} \put(57,30){(b)} \put(18,-3){\small Time $t$} \put(73,-3){\small Time $t$} \put(65,8){$u_t \!=\! N({u},{\mu}(t))$} \put(14,8){$u_t \!=\! N({u},{\mu}(t))$} \end{overpic} \caption{Two prototypical scenarios for parametric model discovery: (a) Parameters $\mu(t)$ are piecewise constant and change at fixed points in time, (b) The underlying PDE model $u_t \!=\! N({u},{\mu}(t))$ depends on the continuously varying parameter ${\mu}(t)$.} \label{fig:param} \vspace{-.25in} \end{figure} More broadly, system identification using machine learning methods has emerged as a viable alternative to expert knowledge and first-principles derivations. 
It is important to separate the field of system identification into two distinct categories: (i) methods that accurately reflect observed dynamics using black box functions (e.g. neural networks), and (ii) methods that recover closed form and interpretable expressions for the dynamics in the form of ordinary and partial differential equations (ODEs and PDEs). This duality reflects the two cultures narrative of machine learning and classical statistics made popular by Leo Breiman~\cite{breiman2001statistical}. On one hand, the researcher may assume a specific model for the data with a known mechanism, while on the other the researcher is interested in algorithmic models that, while not necessarily reflecting the true mechanism, are accurate in prediction. While several recent works have made progress from both viewpoints, we focus on the former. The terms in a differential equation often have physical interpretations and motivation, e.g. diffusion and advection are ubiquitous in many physical systems and are characterized by prototypical expressions in a PDE. For systems where first-principles derivations prove intractable, we may gain insights into the underlying physics of the system based on the terms in the identified PDE. Further, we view the process of extracting closed form equations as being more generalizable than fitting black box models to a specific dataset. Specifically, altering initial conditions will not change governing equations, but may break a machine learned black box solver. Research towards the automated inference of dynamical systems from data is not new \cite{Crutchfield1987cs}. Methods for extracting linear systems from time series data include the eigensystem realization algorithm (ERA) \cite{juang1985eigensystem} and Dynamic Mode Decomposition (DMD) \cite{Rowley2009jfm,schmid2010dynamic,tu2013dynamic,Brunton2015jcd,kutz2016dynamic,askham2018variable}. Identification of nonlinear systems has, until very recently, relied on black box methods. 
These include NARMAX \cite{chen1989representations}, neural networks \cite{gonzalez1998identification}, equation free methods \cite{Kevrekidis2003cms,kevrekidis2004equation,kevrekidis2009equation}, and Laplacian spectral analysis \cite{Giannakis2012pnas}. There has also been considerable recent work towards data-driven approximation of the Koopman operator~\cite{Mezic2005nd,Budivsic2012chaos,Mezic2013arfm} via extensions of DMD \cite{williams2015data}, diffusion maps \cite{giannakis2017data}, delay-coordinates \cite{brunton2017chaos, Arbabi2016arxiv, das2017delay} and neural networks \cite{Yeung2017arxiv,Takeishi2017nips,Wehmeyer2017arxiv,Mardt2017arxiv,Otto2017arxiv,lusch2017deep}. The use of genetic algorithms for nonlinear system identification \cite{Bongard2007pnas, Schmidt2009science} allowed for the derivation of physical laws in the form of ordinary differential equations. Genetic algorithms are highly effective in learning complex functional forms but are slower computationally than simple regression. Sparsity promoting methods have been used previously in dynamical systems~\cite{Schaeffer2013pnas, mackey2014compressive, caflisch2013pdes}, and sparse regression has been leveraged to identify parsimonious ordinary differential equations from a large library of allowed functional forms~\cite{Brunton2016,chen2017network}. Much work building on the sparse regression framework has followed and includes inferring rational functions \cite{Mangan2016}, the use of information criteria for model validation~\cite{mangan2017model}, constrained regression for conservation laws \cite{loiseau2018constrained}, model discovery with highly corrupt data \cite{tran2017exact}, the learning of bifurcation parameters \cite{schaeffer2017learning_group}, stochastic dynamics \cite{boninsegna2017sparse}, weak forms of the observed dynamics \cite{schaeffer2017sparse}, and regression with small amounts of data \cite{schaeffer2017extracting, kaiser2017sparse}. 
In contrast to sparse regression, a neural network based approach was proposed to identify ordinary differential equations, sacrificing some interpretability for a richer class of allowed functional forms \cite{raissi2018multistep}. Sparse regression based methods for PDEs were first used in \cite{rudy2017data,schaeffer2017learning}. These methods were demonstrated on a large class of PDEs and have the benefit of being highly interpretable, but struggle with numerical differentiation of noisy data. In Rudy {\em et al.}~\cite{rudy2017data} the noise was addressed by testing with only a small degree of noise (large SNR), while in Schaeffer {\em et al.}~\cite{schaeffer2017learning} noise was added to the time derivative after it was computed from clean data. Alternatively, Gaussian processes were used to determine linear PDEs \cite{raissi2017machine} and nonlinear PDEs known up to a set of coefficients \cite{raissi2017hidden}. Using Gaussian process regression requires less data than sparse regression and naturally manages noise, but the method is only applicable to PDEs with a known structure. Reference \cite{long2017pde} makes a substantial contribution by using neural networks to accurately learn partial differential equations with non-constant coefficients. A neural network is constructed that mimics a forward Euler timestepping scheme, and the accuracy of potential models is evaluated based on their future state prediction accuracy. While seemingly more robust than sparse regression, the method in \cite{long2017pde} does not penalize extraneous terms in the learned PDE and thus falls short of producing optimally parsimonious models. Furthermore, \cite{long2017pde} only tests the method on a nonlinear problem using a relatively strong Ansatz. Neural networks were also used in \cite{raissi2017physics1} and \cite{raissi2017physics2} to solve and to estimate parameters in partial differential equations with known terms to a high degree of accuracy. 
However, similar to \cite{raissi2017hidden}, it is assumed that the PDE is known up to a set of coefficients. A more sophisticated neural network approach was used in \cite{raissi2018deep} to learn dynamics of systems with unknown terms. However, the approach in \cite{raissi2018deep} does not give closed form representations of the dynamics, and the resulting neural network model therefore does not give insights into the underlying physics. In this work, we present a sparse regression framework for identifying PDEs with non-constant coefficients, something that none of the previous PDE discovery methods are equipped to do. Specifically, we allow for variation in the value of a coefficient across time or space, but maintain that the active terms in the PDE must be consistent. This is an important innovation in practice, as the parameters of physical systems often vary during the measurement process, so the parametric dependencies must be disambiguated from the PDE itself. Our method extends the sparse regression frameworks first proposed for PDE discovery~\cite{rudy2017data,schaeffer2017learning} by using group sparsity and results in a more parsimonious and interpretable model than neural networks. We are still limited by the accuracy of numerical differentiation and by the library terms in the sparse regression. Numerical differentiation using neural networks as shown in \cite{raissi2018deep} appears promising as a method for obtaining more accurate time derivatives from noisy data. The limitation based on terms included in the library seems more permanent: any closed-form model expression must be representable with a finite set of building blocks. Here we use only monomials in our data and its derivatives, since these are the common terms seen in physics, but there is no fundamental restriction on the terms that may be included in the library. \section{Methods} The parametric discovery method relies on several foundational mathematical tools. 
In the following subsections, we will discuss the identification of constant coefficient equations as well as regression methods for group sparsity. Finally, we combine these ideas to show how one may identify parametric PDEs and suggest a methodology for model selection that balances accuracy against the number of active terms in the PDE. \subsection{Identification of constant coefficient partial differential equations} Several recent methods have been proposed for the identification of constant coefficient partial differential equations from data. In this work we expand on the sparse regression framework, PDE-FIND, used in \cite{rudy2017data}. We briefly describe the method here and refer the reader to the original paper for details. The PDE-FIND algorithm provides a principled technique for discovering the underlying PDE from spatial time series measurements alone using a library of candidate functions for the PDE and sparse regression. For the identification of constant coefficient PDEs, we have a dataset ${\bf U}$, which is a discretization of a function $u(x,t)$ that we assume satisfies a PDE of the form given in (\ref{eq:overview}): \begin{equation} u_t= N(u, u_x, u_{xx}, \dots) = \sum_{j=1}^d N_j(u, u_x, u_{xx}, \dots) \xi_j \, . \label{eq:const_coeff_PDE} \end{equation} We assume that the nonlinear expression $N(\cdot )$ may be expanded as a sum of simple monomial basis functions $N_j$ of $u$ and its derivatives. Note that this sum is not unique and that we can include extra basis functions by simply setting the corresponding $\xi_j$ to be zero. PDE-FIND constructs an overcomplete library of many possible monomial basis functions and regresses to find $\xi$; sparsity is used to ensure that the coefficients of basis functions that do not appear in the PDE are set to zero in the sum. 
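To make the regression concrete, the sketch below assembles a small candidate library from manufactured data obeying the Burgers-type identity $u_t = -uu_x + 0.1 u_{xx}$ and recovers the two active coefficients by least squares; the function name and column ordering here are our own choices for illustration, not those of the released PDE-FIND code:

```python
import numpy as np

def build_library(u, u_x, u_xx):
    """Candidate library Theta(U): powers of u times derivatives.

    Column order (ours): 1, u, u^2, u_x, u*u_x, u^2*u_x,
                         u_xx, u*u_xx, u^2*u_xx.
    """
    ones = np.ones_like(u)
    return np.column_stack([ones, u, u**2,
                            u_x, u * u_x, u**2 * u_x,
                            u_xx, u * u_xx, u**2 * u_xx])

rng = np.random.default_rng(0)
# Stand-ins for u and its derivatives sampled over the (x, t) grid;
# in practice these come from numerical differentiation of the data.
u, u_x, u_xx = rng.standard_normal((3, 2000))
u_t = -u * u_x + 0.1 * u_xx      # manufactured Burgers-type identity

Theta = build_library(u, u_x, u_xx)
xi, *_ = np.linalg.lstsq(Theta, u_t, rcond=None)
# xi recovers -1 on the u*u_x column and 0.1 on the u_xx column;
# sparsity-promoting regression then zeroes the remaining entries.
```

With noiseless data the least squares solution already coincides with the true coefficients; the sparsity step matters once noise spreads energy across the inactive columns.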
To be more precise, given a dataset ${\bf U} \in \R^{n \times m}$ representing $m$ timesteps of a PDE discretized with $n$ gridpoints, we numerically differentiate in both $x$ and $t$ to form the linear regression problem given by (\ref{eq:PDE_FIND}) \begin{equation} \underbrace{ \begin{pmatrix} u_t(x_1,t_1) \\ u_t(x_2,t_1) \\ \vdots \\ u_t(x_n,t_m) \\ \end{pmatrix}}_{{\bf U}_t} = \underbrace{ \begin{pmatrix} 1 & u(x_1,t_1) & u_x(x_1,t_1) & \hdots & u^3u_{xxx}(x_1,t_1) \\ 1 & u(x_2,t_1) & u_x(x_2,t_1) & \hdots & u^3u_{xxx}(x_2,t_1) \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & u(x_n,t_m) & u_x(x_n,t_m) & \hdots & u^3u_{xxx}(x_n,t_m) \\ \end{pmatrix}}_{\mathbf{\Theta} ({\bf U})} \xi\label{eq:PDE_FIND} \end{equation} which is a large, overdetermined linear system of equations ${\bf A}{\bf x}={\bf b}$. Note that here we have shown a problem where derivatives up to third order are multiplied by powers of $u$ up to cubic order, but one could include arbitrarily many library functions. Solving for $\xi$ and ensuring sparsity gives the PDE. PDE-FIND has been shown to accurately identify several partial differential equations from data alone. The sparsity constraint is a regularizer for the linear regression~\cite{rudy2017data}. \subsection{Group Sparsity} In a typical sparse regression, we seek a sparse solution to the linear system of equations ${\bf A}{\bf x}={\bf b}$. Accuracy of the predictor, $\|{\bf A}{\bf x}-{\bf b}\|$, is balanced against the number of nonzero coefficients in ${\bf x}$. Thus the sparse regularization enforces a solution ${\bf x}$ (the variable $\boldsymbol{\xi}$ in (\ref{eq:PDE_FIND})) with many zeros. In this paper, we use the notion of group sparsity to find time series representing each parameter in the PDE, rather than single values. We group collections of terms in ${\bf x}$ together and seek solutions to ${\bf A}{\bf x}={\bf b}$ that minimize the number of groups with nonzero coefficients. 
One well-studied method for solving regression problems with group sparsity is the group LASSO (GLASSO)~\cite{friedman2010note} \begin{equation} \label{eq:group_lasso} \hat{\bf x} = \underset{\bf w}{\mbox{arg\,min}} \frac{1}{2n} \left\|{\bf b} - \sum_{g \in \mathcal{G}} \mathbf{A}^{(g)} {\bf w}^{(g)}\right\|_2^2 + \lambda \sum_{g \in \mathcal{G}} \|{\bf w}^{(g)}\|_2 . \end{equation} Here $\mathcal{G}$ is a collection of groups, each of which contains a subset of the indices enumerating the columns of ${\bf A}$ and coefficients in ${\bf x}$. Note that the second term in the GLASSO corresponds to a convex relaxation of the number of groups containing a nonzero value. The concept of group sparsity has been used in several previous methods for identifying dynamics given by ordinary differential equations \cite{chen2017network, schaeffer2017learning_group}. As will be shown below, we find that the GLASSO performs poorly in the case of identifying PDEs. We instead use a sequential thresholding method based on ridge regression, similar to the method used in \cite{rudy2017data}, but adapted for group sparsity. A sequential thresholding method was also used in \cite{schaeffer2017learning_group} for group sparsity, but for ordinary rather than partial differential equations. Our method, which we call Sequential Grouped Threshold Ridge Regression (SGTR), is summarized in Algorithm \ref{SGTR}. 
\begin{center} \begin{minipage}{0.8\textwidth} \begin{algorithm}[H] \caption{SGTR($\mathbf{A}, {\bf b}, \mathcal{G}, \lambda, \epsilon, \text{maxit}, f({\bf x}) = \|{\bf x} \|_2$) } \label{SGTR} \vspace{1 mm} \# Solves ${\bf x} \approx \mathbf{A}^{-1}{\bf b}$ with sparsity imposed on groups in $\mathcal{G}$\\ \vspace{1 mm} \# Initialize coefficients with ridge regression\\ ${\bf x} = \mbox{arg\,min}_{\bf w} \|{\bf b} - \mathbf{A}{\bf w}\|_2^2 + \lambda \|{\bf w} \|_2^2$ \vspace{3 mm} \# Threshold groups with small $f$ and repeat for $iter = 1,\hdots,\text{maxit}$:\\ \hspace{1 cm} \# Remove groups with sufficiently small $f({\bf x}^{(g)})$ \hspace{1 cm} $\mathcal{G} = \{g \in \mathcal{G} : f({\bf x}^{(g)}) > \epsilon\}$\\ \hspace{1 cm} \# Refit these groups (note this sets ${\bf x}^{(g)}=0$ for $g \not\in \mathcal{G}$) \hspace{1 cm} ${\bf x} = \mbox{arg\,min}_{\bf w} \|{\bf b} - \sum_{g \in \mathcal{G}}\mathbf{A}^{(g)} {\bf w}^{(g)}\|_2^2 + \lambda \|{\bf w}\|_2^2$ \vspace{3 mm} \# Get unbiased estimates of coefficients after finding sparsity\\ ${\bf x}^{(\mathcal{G})} = \mbox{arg\,min}_{\bf w} \|{\bf b} - \sum_{g \in \mathcal{G}}\mathbf{A}^{(g)} {\bf w}^{(g)}\|_2^2$\\ return ${\bf x}$ \end{algorithm} \end{minipage} \end{center} \vspace{3 mm} Throughout the training, $\mathcal{G}$ tracks the groups that have nonzero coefficients, and it is pared down as we threshold coefficients with sufficiently small relevance, as measured by $f$. We use the 2-norm of the coefficients in each group for $f$, but one could also consider arbitrary functions. In particular, for problems where the coefficients within each group have a natural ordering, as they do in our case as time series or spatial functions, one could consider smoothness or other properties of the functions. In practice, we normalize each column of ${\bf A}$ and ${\bf b}$ so that differences in scale between the groups do not affect the result of the algorithm. 
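The algorithm above translates to a few lines of NumPy. The following is a sketch under our own naming conventions, with the ridge solves done via the regularized normal equations, followed by a small noiseless demonstration:

```python
import numpy as np

def ridge(A, b, lam):
    """Ridge regression via the regularized normal equations."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

def sgtr(A, b, groups, lam, eps, maxit=10):
    """Sequential Grouped Threshold Ridge Regression (SGTR) sketch.

    Solves A x ~ b with group sparsity: groups whose coefficient
    2-norm falls below eps are zeroed, the rest are refit, and the
    surviving coefficients are debiased with unregularized least squares.
    """
    x = ridge(A, b, lam)
    active = list(groups)
    keep = np.concatenate(active)
    for _ in range(maxit):
        # remove groups with sufficiently small 2-norm
        active = [g for g in active if np.linalg.norm(x[g]) > eps]
        if not active:
            return np.zeros(A.shape[1])
        keep = np.concatenate(active)
        x = np.zeros(A.shape[1])
        x[keep] = ridge(A[:, keep], b, lam)
    # debias: unregularized least squares on the surviving groups
    x[keep] = np.linalg.lstsq(A[:, keep], b, rcond=None)[0]
    return x

# Demo: noiseless toy system whose second and third groups are inactive.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 6))
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
b = A @ np.array([2.0, -1.0, 0.0, 0.0, 0.0, 0.0])
x_hat = sgtr(A, b, groups, lam=1e-6, eps=0.5)
```

On this toy system the two inactive groups are pruned on the first pass and the surviving coefficients are recovered exactly by the final debiasing step.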
For the GLASSO we always perform an unregularized least squares regression on the nonzero coefficients after the sparsity pattern has been discovered to debias the coefficients. We found SGTR to outperform the GLASSO for the problem of correctly identifying the active terms in parametric PDEs. \subsection{Data-driven identification of parametric partial differential equations} In the identification of parametric PDEs, we consider equations of the form \begin{equation} \label{eq:parametric_PDE} u_t = N(u, u_x, \hdots, \mu(t)) = \sum_{j=1}^d N_j(u, u_x, \hdots) \xi_j (t). \end{equation} Note that this equation is similar to (\ref{eq:const_coeff_PDE}) but has time-varying parametric dependence. To capture spatial variation in the coefficients, we simply replace $\xi(t)$ with $\xi(x)$. The PDE is assumed to contain a small number of active terms, each with a time-varying coefficient $\xi_j (t)$. We seek to solve two problems: (i) determine which coefficients are nonzero and (ii) find the values of the coefficients for each $\xi_j$ at each timestep or spatial location for which we have data. For time dependent problems, we construct a separate regression for each timestep, allowing for variation in the PDE between timesteps. Similar to the PDE-FIND method, we construct a library of candidate functions for the PDE using monomials in our data and its derivatives so that \begin{equation} \label{eq:single_timestep_library} \mathbf{\Theta} \left(u^{(j)}\right) = \begin{pmatrix} \vline & \vline & \vline & \vline \\ 1 & u^{(j)} & \hdots & u^3u^{(j)}_{xxx} \\ \vline & \vline & \vline & \vline \end{pmatrix} \end{equation} where the set of $m$ equations is given by \begin{equation} \label{eq:parametric_PDE_FIND_setup_2} u_t^{(j)} = \mathbf{\Theta} \left(u^{(j)}\right)\xi^{(j)} ,\,\, j = 1,\hdots, m \, . 
\end{equation} Our goal is to solve the set of equations given by (\ref{eq:parametric_PDE_FIND_setup_2}) with the constraint that each $\xi^{(j)}$ is sparse and that they all share the same sparsity pattern. That is, we want a fixed set of active terms in the PDE. To do this, we consider the set of equations as a single linear system and use group sparsity. Expressing the system of equations for the parametric equation as a single linear system, we get the block diagonal structure given by \begin{equation} \label{eq:parametric_PDE_FIND_setup} \begin{pmatrix} u_t^{(1)}\\ u_t^{(2)}\\ \vdots \\ u_t^{(m)} \end{pmatrix} = \underbrace{\begin{pmatrix} \mathbf{\Theta} \left(u^{(1)}\right) \\ & \mathbf{\Theta} \left(u^{(2)}\right) \\ && \ddots \\ &&& \mathbf{\Theta} \left(u^{(m)}\right) \end{pmatrix}}_{\tilde{\mathbf{\Theta}}} \begin{pmatrix} \xi^{(1)}\\ \xi^{(2)}\\ \vdots \\ \xi^{(m)} \, \end{pmatrix}. \end{equation} We solve (\ref{eq:parametric_PDE_FIND_setup}) using SGTR with columns of the block diagonal library matrix grouped by their corresponding term in the PDE. Thus for $m$ timesteps and $d$ candidate functions in the library, groups are defined as $\mathcal{G} = \left\{ \{j + d\cdot i : i = 0,\hdots, m-1\} : j = 1,\hdots, d\right\}$. This ensures a sparse solution to the PDE while also allowing an arbitrary time series for each coefficient. To obtain the correct level of sparsity, we separate 20\% of the data from each timestep to use as a validation set and search over the sparsity parameter in the SGTR algorithm using cross-validation. For problems with spatial, rather than temporal, variation in the coefficients, we simply group by spatial rather than time coordinate. A similar block diagonal structure is obtained but with $n$ blocks of size $m \times d$ rather than $m$ blocks of size $n \times d$. The groups are defined by $\mathcal{G} = \left\{ \{j + d\cdot i : i = 0,\hdots, n-1\} : j = 1,\hdots, d\right\}$. 
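Assembling the block diagonal system and its groups is mechanical. A minimal sketch (the function name is ours) built on SciPy, followed by a small demonstration:

```python
import numpy as np
from scipy.linalg import block_diag

def assemble_parametric_system(Thetas, u_ts):
    """Stack per-timestep libraries Theta(u^(j)) into one block-diagonal
    system, and build one index group per candidate term that collects
    that term's coefficient at every timestep.

    Thetas: list of m arrays, each n x d.
    u_ts:   list of m length-n time-derivative vectors.
    """
    m, d = len(Thetas), Thetas[0].shape[1]
    A = block_diag(*Thetas)                # shape (m*n) x (m*d)
    b = np.concatenate(u_ts)
    # group j (0-indexed here) holds term j's coefficient at all timesteps
    groups = [np.arange(m) * d + j for j in range(d)]
    return A, b, groups

# Demo with m = 3 timesteps, n = 4 gridpoints, d = 2 candidate terms.
rng = np.random.default_rng(0)
Thetas = [rng.standard_normal((4, 2)) for _ in range(3)]
u_ts = [rng.standard_normal(4) for _ in range(3)]
A, b, groups = assemble_parametric_system(Thetas, u_ts)
```

For spatially varying coefficients the same assembly applies with the roles of $n$ and $m$ exchanged.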
Since we are evaluating the relevance of groups based on their norm, it is important to consider differences in the scale of the candidate functions. For example, if $u \sim \mathcal{O}(10^{-2})$ then a cubic function will be $\mathcal{O}(10^{-6})$, and relatively large coefficients multiplying this data may not have a large effect on the dynamics, yet will not be removed by the hard threshold due to their size. To remedy this, we normalize each candidate function represented in $\mathbf{\Theta}$ as well as each $u_t^{(j)}$ to have unit length prior to the group thresholding algorithm and then correct for the normalization after we have discovered the correct sparsity pattern. \subsection{Model Selection} For each example we test both the GLASSO and SGTR using an exhaustive range of parameter values. Let $\tilde{\mathbf{\Theta}}$ denote the block diagonal matrix $\mathbf{\Theta}$ shown in equation \eqref{eq:parametric_PDE_FIND_setup} but with all columns normalized to have unit length and $\tilde{u}_t$ be the vector of all time derivatives, each having been normalized to unit length so that $\|\tilde{u}_t\| = \sqrt{m}$. For the GLASSO, we find the minimal value of $\lambda$ that will set all coefficients to zero, which is given by \begin{equation} \label{eq:lam_max} \lambda_{\mbox{max}} = \underset{g \in \mathcal{G}}{\mbox{max}} \,\frac{1}{n}\|\tilde{\mathbf{\Theta}}^{(g)^T}\tilde{u}_t\|_2 . \end{equation} We check 50 logarithmically spaced values of $\lambda$ between $10^{-5} \lambda_{\mbox{max}}$ and $\lambda_{\mbox{max}}$. For SGTR, we search over the range of tolerances between $\epsilon_{min}$ and $\epsilon_{max}$ defined as \begin{equation} \label{eq:max_min_tol} \epsilon_{\mbox{max}/\mbox{min}} = \underset{g \in \mathcal{G}}{\mbox{max}/\mbox{min}}\, \|\xi_{\text{ridge}}^{(g)}\|_2, \end{equation} where $\xi_{\text{ridge}} = (\tilde{\mathbf{\Theta}}^T\tilde{\mathbf{\Theta}} + \lambda I)^{-1}\tilde{\mathbf{\Theta}}^T\tilde{u}_t$. 
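Both search ranges follow directly from the normalized system. A short sketch of \eqref{eq:lam_max} and \eqref{eq:max_min_tol} with the 50-point logarithmic grids (function names are ours; $n$ is taken as the number of rows of the normalized library):

```python
import numpy as np

def glasso_lambda_grid(Theta_n, ut_n, groups, num=50):
    """Grid from 1e-5 * lambda_max to lambda_max, logarithmically spaced."""
    n = Theta_n.shape[0]
    lam_max = max(np.linalg.norm(Theta_n[:, g].T @ ut_n) for g in groups) / n
    return np.geomspace(1e-5 * lam_max, lam_max, num)

def sgtr_tolerance_grid(Theta_n, ut_n, groups, lam=1e-5, num=50):
    """Grid from eps_min to eps_max: the smallest and largest group
    norms of the ridge estimate on the normalized system."""
    d = Theta_n.shape[1]
    xi_ridge = np.linalg.solve(Theta_n.T @ Theta_n + lam * np.eye(d),
                               Theta_n.T @ ut_n)
    norms = [np.linalg.norm(xi_ridge[g]) for g in groups]
    return np.geomspace(min(norms), max(norms), num)

# Demo on a small normalized system.
rng = np.random.default_rng(0)
Theta_n = rng.standard_normal((100, 6))
Theta_n /= np.linalg.norm(Theta_n, axis=0)
ut_n = rng.standard_normal(100)
ut_n /= np.linalg.norm(ut_n)
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
lams = glasso_lambda_grid(Theta_n, ut_n, groups)
tols = sgtr_tolerance_grid(Theta_n, ut_n, groups)
```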
Note that by definition, $\epsilon_{min}$ is the minimum tolerance that has any effect on the sparsity of the predictor and $\epsilon_{max}$ is the minimum tolerance that guarantees all coefficients to be zero. A set of 50 intermediate tolerances, equally spaced on a logarithmic scale, is tested between $\epsilon_{min}$ and $\epsilon_{max}$. To select the optimal model generated via each method, we evaluate the models using the AIC-inspired loss function \begin{equation} \label{eq:AIC} \mathcal{L}(\xi) = N \ln\left(\dfrac{\|\tilde{\mathbf{\Theta}}\xi-\tilde{u}_t\|_2^2}{N} + \epsilon \right) + 2k \end{equation} where $k$ is the number of nonzero coefficients in the identified PDE, $\|\xi\|_0/m$, and $N$ is the number of rows in $\mathbf{\Theta}$, which is equal to the size of our original dataset $u$. Equation \eqref{eq:AIC} is closely related to the Akaike Information Criterion (AIC) \cite{akaike1974new}. Typically, the mean square error of a linear model is used to evaluate goodness of fit, but in our case there is error in computing the time derivative $u_t$, so we assume that any linear model which perfectly fits the data is overfit. We have added $\epsilon = 10^{-5}$ to the mean square error of each model as a floor in order to avoid overfitting. Without this addition, our algorithm selects insufficiently parsimonious representations of the dynamics. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{aic_example} \caption{Example of loss function evaluated for a number of candidate models for the parametric Burgers' equation. Library included derivatives up to fourth order multiplying powers of $u$ up to cubic. Left: 50 models obtained via SGTR algorithm using values of $\epsilon$ between $\epsilon_{min}$ and $\epsilon_{max}$. 
Right: 50 models obtained via GLASSO for $\lambda$ between $10^{-5}\lambda_{max}$ and $\lambda_{max}$.} \label{fig:aic_fig} \end{figure} Figure~\ref{fig:aic_fig} illustrates the loss function from equation \eqref{eq:AIC} evaluated on models derived from 50 values of $\epsilon$ and $\lambda$ using SGTR and GLASSO respectively. Initially, a low penalty in each algorithm yields a model that is overfit to the data given our sparsity criterion. For an intermediate value of $\epsilon$ or $\lambda$, a more parsimonious but still predictive model is obtained. For sufficiently high values, the model is too sparse and is no longer predictive. \section{Computational Results of Parametric PDE Discovery} We test our method for the discovery of parametric partial differential equations on four canonical models: Burgers' equation with a time-varying nonlinear term, the Navier-Stokes equation for vorticity with a jump in Reynolds number, a spatially dependent advection equation, and a spatially dependent Kuramoto-Sivashinsky equation. In each case the method is also tested after introducing white noise with mean magnitude equal to 1\% of the $L^2$-norm of the dataset. The method is able to accurately identify the dynamics in each case except the Kuramoto-Sivashinsky equation, where the inclusion of a fourth order derivative makes numerical evaluation with noise highly challenging. A comparison with GLASSO regression is also given for a number of the examples. \subsection{Burgers' Equation with Diffusive Regularization} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{burgers_solution} \caption{Left: dataset for identification of the parametric diffusive Burgers' equation. Here the PDE was evolved on the interval $[-8,8]$ with periodic boundary conditions and $t \in [0,10]$. Right: Coefficients for the terms in the parametric Burgers' equation. 
The diffusion was held constant at 0.1 while the nonlinear advection coefficient is given by $a(t)=-(1+\sin(t)/4)$.} \label{fig:parametric_burgers_solution} \end{figure} To test the parametric discovery of PDEs, we consider a solution of Burgers' equation with a sinusoidally oscillating coefficient $a(t)$ for the nonlinear advection term \begin{equation} \label{eq:burgers} \begin{aligned} u_t &= a(t) uu_x + 0.1 u_{xx}\\[.1in] a(t) &= -\left(1 + \frac{\sin(t)}{4}\right) \end{aligned} \end{equation} where a small amount of diffusion is added to regularize the evolution dynamics. The time dependent Burgers' equation was solved numerically using a spectral method on the interval $[-8,8]$ with periodic boundary conditions and $t \in [0,10]$ with $n = 256$ grid points and $m = 256$ time steps. We search for parsimonious representations of the dynamics by including powers of $u$ up to cubic order, which can be multiplied by derivatives of $u$ up to fourth order. For the noise-free dataset we use the discrete Fourier transform for computing derivatives. For the noisy dataset, we use polynomial interpolation to smooth the derivatives~\cite{rudy2017data}. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{parametric_burgers_coeffs} \caption{Time series discovered for the coefficients of the parametric Burgers' equation. Top row: SGTR method, which correctly identifies the two terms. Bottom row: GLASSO method, which adds several additional (incorrect) terms to the model. The left panels are noise-free, while the right panels contain 1\% noise. This parametric dependency is illustrated in Fig.~\ref{fig:param}b.} \label{fig:parametric_burgers} \end{figure} The resulting time series for the identified nonzero coefficients are shown in Fig.~\ref{fig:parametric_burgers}. SGTR correctly identified the active terms in the PDE for both the noise-free and noisy datasets, whereas GLASSO failed in both cases to produce the correct PDE and its parametric dependencies. 
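For reference, a dataset of this type can be generated with a short pseudo-spectral integrator. The sketch below uses FFT derivatives and an adaptive Runge-Kutta solver on the grid quoted above; the Gaussian initial condition is our own assumption rather than the one used for the figures:

```python
import numpy as np
from scipy.integrate import solve_ivp

# u_t = a(t) u u_x + 0.1 u_xx on [-8, 8], periodic, a(t) = -(1 + sin(t)/4).
n, L = 256, 16.0
x = -8.0 + L * np.arange(n) / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)    # spectral wavenumbers

def rhs(t, u):
    u_hat = np.fft.fft(u)
    u_x = np.fft.ifft(1j * k * u_hat).real
    u_xx = np.fft.ifft(-(k**2) * u_hat).real
    return -(1 + np.sin(t) / 4) * u * u_x + 0.1 * u_xx

u0 = np.exp(-(x + 2)**2)                      # assumed Gaussian pulse
sol = solve_ivp(rhs, (0, 10), u0, t_eval=np.linspace(0, 10, 256),
                method="RK45", rtol=1e-6, atol=1e-9)
U = sol.y                                     # n x m dataset for regression
```

The columns of $U$ then provide the per-timestep snapshots from which the candidate library and time derivatives are computed.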
\subsection{Navier-Stokes: Flow around a cylinder} We consider the fluid flow around a circular cylinder by simulating the Navier-Stokes vorticity equation \begin{equation} \label{eq:navier_stokes} \omega_t + \mathbf{u}\cdot \nabla \omega = \dfrac{1}{\nu (t)} \Delta \omega. \end{equation} Data is generated using the Immersed Boundary Projection Method (IBPM) \cite{taira:07ibfs,taira:fastIBPM} with $n_x = 449$ and $n_y=199$ spatial points in $x$ and $y$ respectively, and 1000 timesteps with $dt = 0.02$. The Reynolds number is adjusted halfway through the simulation from $\nu = 100$ initially to $\nu = 75$. This is representative of the fluid velocity exhibiting a sudden decrease midway through the data collection. Our library of candidate functions is constructed using up to second order derivatives of the vorticity, multiplied by up to quadratic functions of the data. To keep the size of the machine learning problem tractable, we subsample 1000 random spatial locations from the wake of the cylinder to construct our library at every tenth timestep~\cite{rudy2017data}. For the noise-free dataset, far fewer points are needed to accurately identify the dynamics. We suspect that with a more careful treatment of the numerical differentiation in the case of noisy data, such as that used in \cite{raissi2018deep}, the same would be true for the dataset with artificial noise; however, such work is not the focus of this paper. The identified time series for the Navier-Stokes equation are shown in Fig.~\ref{fig:parametric_ns}. SGTR and GLASSO both correctly identify the active terms in the PDE. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{navier_stokes_solution} \caption{Left: dataset for identification of the parametric Navier-Stokes equation (\ref{eq:navier_stokes}). Right: coefficients for the Navier-Stokes equation exhibiting a jump in Reynolds number from 100 to 75 at $t=10$. 
This parametric dependency is illustrated in Fig.~\ref{fig:param}a.} \label{fig:parametric_ns_solution} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{parametric_ns_coeffs} \caption{Identified time series for the coefficients of the Navier-Stokes equation. Distinct axes are used to highlight the jump in Reynolds number. Left: no noise. Right: 1\% noise.} \label{fig:parametric_ns} \end{figure} \subsection{Spatially Dependent Advection-Diffusion Equation} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{advection_solution} \caption{Left: dataset for identification of the spatially dependent advection-diffusion equation. Right: spatial dependence of the PDE. In this case, the loadings $\xi_j(t)$ in (\ref{eq:parametric_PDE}) are replaced by $\xi_j(x)$.} \label{fig:advection_solution} \end{figure} The advection-diffusion equation is a simple model for the transport of a physical quantity in a velocity field with diffusion. Here, we adapt the equation to have a spatially dependent velocity \begin{equation} \label{eq:advection} u_t = (c(x) u)_x + \epsilon u_{xx} = c(x) u_x + c'(x)u+ \epsilon u_{xx} \end{equation} which models transport through a spatially varying vector field due to $c=c(x)$. The PDE is solved on a periodic domain $[-L,L]$ with $L=5$, $\epsilon = 0.1$, and $c(x) = -1.5 + \cos(2\pi x /L)$ using a spectral method with $n = 256$ and $m = 256$. The library consists of powers of $u$ up to cubic, multiplied by derivatives of $u$ up to fourth order. Results for the advection-diffusion equation are shown in Fig.~\ref{fig:spatial_advection}. In the noise-free and noisy datasets, both SGTR and GLASSO correctly identify the active terms in the PDE. \begin{figure}[h] \centering \includegraphics[width=1.0\textwidth]{spatial_advection_coeffs} \caption{Spatial dependence of the advection-diffusion equation. Left: no noise. Right: 1\% noise.
Both SGTR and GLASSO correctly identified the active terms.} \label{fig:spatial_advection} \end{figure} \subsection{Spatially Dependent Kuramoto-Sivashinsky Equation} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{ks_solution} \vspace{-.2in} \caption{Left: dataset for identification of the spatially dependent Kuramoto-Sivashinsky equation. Right: parametric dependency of the governing equations.} \label{fig:ks_solution} \end{figure} We now test the method on a Kuramoto-Sivashinsky equation with spatially varying coefficients \begin{equation} \label{eq:KS} u_t = a(x) uu_x + b(x) u_{xx} + c(x) u_{xxxx} . \end{equation} We use a periodic domain $[-L,L]$ with $L=20$ and coefficients $a(x) = 1 + \sin(2\pi x /L) /4$, $b(x) = -1 + e^{-(x-2)^2/5}/4$ and $c(x) = -1 - e^{-(x+2)^2/5}/4$. The equation is solved numerically to $t = 200$ using $n=512$ grid points and $m=1024$ timesteps. The second half of the dataset is used, so as to consider only the region where the dynamics exhibit spatio-temporal chaos, resulting in a dataset containing 512 snapshots of 512 gridpoints. Since the Kuramoto-Sivashinsky equation involves a fourth-order derivative, it is very difficult to identify correctly from noisy data: accurately computing the fourth derivative of noisy measurements is exceptionally hard. Indeed, our method fails to correctly identify the active terms when 1\% noise is added. With 0.01\% noise the correct terms were identified, but with substantial error in the coefficient values. We suspect that this shortcoming could at least be partially remedied by a more careful treatment of the numerical differentiation such as in \cite{bruno2012numerical}. The results of our parametric identification are shown in Fig.~\ref{fig:spatial_ks}. \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{spatial_ks_coeffs} \vspace{-.2in} \caption{Spatial dependence of the Kuramoto-Sivashinsky equation. Top row: SGTR. Bottom row: GLASSO. Left: no noise. Right: 0.01\% noise.
SGTR detects the correct sparsity with significant parameter error. GLASSO does not identify the parsimonious model, nor does it accurately predict the parametric values.} \label{fig:spatial_ks} \end{figure} \section{Discussion} We have presented a method for identifying governing laws for physical systems which exhibit either spatially or temporally dependent behavior. The method builds on a growing body of work in the applied mathematics and machine learning community that seeks to automate the process of discovering physical laws. To the best of our knowledge, our method is the first approach for deriving parsimonious PDE expressions of spatio-temporal systems in the case of non-constant coefficients. Specifically, we can disambiguate between the governing PDE evolution and its parametric dependencies. In all examples, the SGTR algorithm outperformed GLASSO in correctly identifying the active terms in the PDE. Errors from the latter were generally in the form of extra terms added with small coefficient values throughout the time series. It may seem reasonable to threshold these time series after the discovery algorithm, but doing so assumes that a term's importance in the PDE is directly related to its magnitude, an assumption which we do not make given the normalization prior to sparse regression. In this work we have split the data into distinct timesteps or spatial locations in order to find PDE models for each subset of the data, resulting in coefficients that can vary in space or time. However, with a sufficiently fine grid, it seems feasible that one could bin the data by areas localized in space and time to determine a coefficient varying in both space and time, with some loss of resolution. This same result may be achievable in a more stable manner by introducing a sparsity term to the work in \cite{long2017pde}.
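To make the group-sparsity comparison concrete, here is a minimal, hypothetical sketch of sequential group-thresholded ridge regression in the spirit of SGTR. The coefficient time series of each candidate term is treated as one group, and whole groups are pruned by their aggregate norm; details of the actual method (column normalization, threshold selection) are omitted, and the relative threshold rule below is an illustrative assumption.

```python
import numpy as np

def group_thresholded_ridge(Thetas, dudts, lam=1e-5, tol=0.1, iters=10):
    """Sketch of sequentially thresholded ridge regression with group
    thresholding: a candidate term is kept or dropped across ALL timesteps
    at once, based on the norm of its coefficient time series."""
    m = len(Thetas)                      # number of timesteps
    D = Thetas[0].shape[1]               # number of candidate library terms
    active = np.arange(D)
    Xi = np.zeros((D, m))                # coefficient time series xi_j(t)
    for _ in range(iters):
        # Ridge regression at each timestep, restricted to active terms.
        for t in range(m):
            A, b = Thetas[t][:, active], dudts[t]
            Xi[active, t] = np.linalg.solve(
                A.T @ A + lam * np.eye(len(active)), A.T @ b)
        # Group threshold: prune terms whose entire time series is small.
        norms = np.linalg.norm(Xi, axis=1)
        keep = norms >= tol * norms.max()
        Xi[~keep, :] = 0.0
        active = np.where(keep)[0]
    return Xi
```

Unlike GLASSO's convex group penalty, which shrinks all groups and can leave small spurious terms, hard group thresholding removes them outright, which is consistent with the behavior observed in the experiments above.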
As is the case with other sparse regression methods for identifying dynamical systems, this method is constrained by the ability of the user to accurately differentiate data. For ordinary differential equations, this may be circumvented by looking at the weak form of the dynamics \cite{schaeffer2017extracting}, but doing so for PDEs seems difficult since there are derivatives that need to be evaluated with respect to multiple variables. We find the automatic differentiation approach used in \cite{raissi2018deep} promising and suspect that the inclusion of neural network based differentiation could radically improve the ability of our method to identify dynamics from noisy data. With sufficient knowledge of the data it may also be possible to obtain better estimates through tuning the polynomial based differentiation \cite{bruno2012numerical}. Automating the identification of closed form physical laws from data will hopefully boost scientific progress in areas where deriving the same laws from first principles proves intractable. There are several limitations to many methods proposed in the field thus far. In particular, current methods have generally studied equations of the form $u_t = N(u,x,t)$ but many equations in physics are not in this class. Indeed, if measuring a system with parametric dependencies, then past methods are unable to disambiguate between the evolution dynamics and its parametric dependencies $\mu(t)$, thus greatly limiting model discovery. There is also a trade-off between methods that are able to derive parsimonious representations, but which are limited to a finite set of library elements, and those that use black box models to represent larger classes of possible functions. The researcher may also find difficulties in attempting to infer dynamics from the wrong set of measurements. For example, one could not derive the Schr\"{o}dinger equation by looking only at measurements of intensity.
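As an illustration of the polynomial-based differentiation mentioned above, the following sketch fits a local polynomial in a sliding window and differentiates the fit analytically; the degree and window width are generic tuning parameters, not values taken from this paper.

```python
import numpy as np

def poly_derivative(u, x, deg=4, width=5):
    """Estimate du/dx from noisy samples u(x): fit a degree-`deg` polynomial
    to the `width` neighbors on each side of every point, then evaluate the
    derivative of the fit. Larger windows suppress noise but blur features."""
    n = len(x)
    du = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        coeffs = np.polyfit(x[lo:hi], u[lo:hi], deg)
        du[i] = np.polyval(np.polyder(coeffs), x[i])
    return du
```

Higher-order derivatives can be obtained by differentiating the local fit repeatedly, at the cost of rapidly growing noise amplification.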
While not addressing these issues, this work takes a step towards generalizing the class of equations which may be accurately identified via machine learning methods.\\ \noindent\footnotesize{Code: \href{https://github.com/snagcliffs/parametric-discovery}{https://github.com/snagcliffs/parametric-discovery} } \begin{appendix} \end{appendix} \bibliographystyle{siamplain}
\subsection{Fully Connected Neural Networks} \label{subsec:conserved-fully-connected} We first formally define a fully connected feed-forward neural network with $N$ ($N\ge2$) layers. Let $\mathbf{W}^{(h)} \in \mathbb{R}^{n_h \times n_{h-1}}$ be the weight matrix in the $h$-th layer, and define $\vect{w} = ( \mathbf{W}^{(h)} )_{h=1}^N $ as a shorthand of the collection of all the weights. Then the function $f_{\vect{w}}: \mathbb{R}^d \to \mathbb{R}^p$ ($d = n_0, p = n_N$) computed by this network can be defined recursively: $f_{\vect{w}}^{(1)}(\vect{x}) = \mathbf{W}^{(1)}\vect{x}$, $f_{\vect{w}}^{(h)}(\vect{x}) = \mathbf{W}^{(h)} \phi_{h-1}(f_{\vect{w}}^{(h-1)}(\vect{x}))$ ($h = 2, \ldots, N$), and $f_{\vect{w}}(\vect{x}) = f_{\vect{w}}^{(N)}(\vect{x})$, where each $\phi_h$ is an activation function that acts coordinate-wise on vectors.\footnote{We omit the trainable bias weights in the network for simplicity, but our results can be directly generalized to allow bias weights.} We assume that each $\phi_h$ ($h\in[N-1]$) is \emph{homogeneous}, namely, $\phi_h(x) = \phi_h'(x)\cdot x$ for all $x$ and all elements of the sub-differential $\phi_h'(\cdot) $ when $\phi_h$ is non-differentiable at $x$. This property is satisfied by functions like ReLU $\phi(x) = \max\{x, 0\}$, Leaky ReLU $\phi(x) = \max\{x, \alpha x\}$ ($0<\alpha<1$), and linear function $\phi(x) = x$. Let $\ell: \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}_{\ge0}$ be a differentiable loss function. Given a training dataset $\left\{ (\vect{x}_i, \vect{y}_i) \right\}_{i=1}^m \subset \mathbb{R}^d\times\mathbb{R}^p$, the training loss as a function of the network parameters $\vect{w}$ is defined as \begin{align} L(\vect{w}) = \frac1m \sum_{i=1}^m \ell\left( f_{\vect{w}}(\vect{x}_i), \vect{y}_i \right). 
\label{eqn:nn_loss} \end{align} We consider gradient descent with infinitesimal step size (also known as gradient flow) applied on $L(\vect{w})$, which is captured by the differential inclusion: \begin{equation} \label{eqn:gf-nn} \frac{\d\mathbf{W}^{(h)}}{\mathrm{d}t} \in - \frac{\partial L(\vect{w})}{\partial \mathbf{W}^{(h)}}, \qquad h = 1, \ldots, N, \end{equation} where $t$ is a continuous time index, and $\frac{\partial L(\vect{w})}{\partial \mathbf{W}^{(h)}}$ is the Clarke sub-differential \citep{clarke2008nonsmooth}. If curves ${\mathbf{W}}^{(h)} = {\mathbf{W}}^{(h)} (t)$ ($h\in[N]$) evolve with time according to \eqref{eqn:gf-nn} they are said to be a solution of the gradient flow differential inclusion. Our main result in this section is the following invariance imposed by gradient flow. \begin{thm}[Balanced incoming and outgoing weights at every neuron] \label{thm:conserved-neuron} For any $h\in[N-1]$ and $i\in[n_h]$, we have \begin{equation} \label{eqn:conserved-neuron} \frac{\d}{\mathrm{d}t} \left( \|\mathbf{W}^{(h)}[i, :]\|^2 - \|\mathbf{W}^{(h+1)}[:,i]\|^2 \right) = 0. \end{equation} \end{thm} Note that $\mathbf{W}^{(h)}[i, :]$ is a vector consisting of network weights coming into the $i$-th neuron in the $h$-th hidden layer, and $\mathbf{W}^{(h+1)}[:,i]$ is the vector of weights going out from the same neuron. Therefore, Theorem~\ref{thm:conserved-neuron} shows that gradient flow exactly preserves the difference between the squared $\ell_2$-norms of incoming weights and outgoing weights at any neuron. Taking sum of \eqref{eqn:conserved-neuron} over $i\in[n_h]$, we obtain the following corollary which says gradient flow preserves the difference between the squares of Frobenius norms of weight matrices. \begin{cor}[Balanced weights across layers] \label{cor:conserved-F-norm} For any $h\in[N-1]$, we have \begin{equation*} \frac{\d}{\mathrm{d}t} \left( \|\mathbf{W}^{(h)}\|_F^2 - \|\mathbf{W}^{(h+1)}\|_F^2 \right) = 0. 
\end{equation*} \end{cor} Corollary~\ref{cor:conserved-F-norm} explains why, in practice, trained multi-layer models usually have similar magnitudes on all the layers: if we use a small initialization, $\|\mathbf{W}^{(h)}\|_F^2 - \|\mathbf{W}^{(h+1)}\|_F^2$ is very small at the beginning, and Corollary~\ref{cor:conserved-F-norm} implies this difference remains small at all times. This finding also partially explains why gradient descent converges. Although an objective function like \eqref{eqn:nn_loss} may not be smooth over the entire parameter space, given that $\|\mathbf{W}^{(h)}\|_F^2 - \|\mathbf{W}^{(h+1)}\|_F^2$ is small for all $h$, the objective function may still be smooth in the region traversed by the iterates. Under this condition, standard theory shows that gradient descent converges. We believe this finding serves as a key building block for understanding first order methods for training deep neural networks. For linear activation, we have the following stronger invariance than Theorem~\ref{thm:conserved-neuron}: \begin{thm}[Stronger balancedness property for linear activation] \label{thm:conserved-linear} If for some $h\in[N-1]$ we have $\phi_h(x) = x$, then \begin{equation*} \frac{\d}{\mathrm{d}t} \left( \mathbf{W}^{(h)} (\mathbf{W}^{(h)})^\top - (\mathbf{W}^{(h+1)})^\top \mathbf{W}^{(h+1)} \right) = \mathbf{0}. \end{equation*} \end{thm} This result was known for linear networks \citep{arora2018optimization}, but the proof there relies on the entire network being linear, while Theorem~\ref{thm:conserved-linear} only needs two consecutive layers to have no nonlinear activation in between. While Theorem~\ref{thm:conserved-neuron} shows the invariance in a node-wise manner, Theorem~\ref{thm:conserved-linear} shows that for linear activation we can derive a layer-wise invariance. Inspired by this strong invariance, in Section~\ref{sec:mf} we prove gradient descent with positive step sizes preserves this invariance approximately for matrix factorization.
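The conservation law in Corollary~\ref{cor:conserved-F-norm} is easy to verify numerically. The sketch below (illustrative only; the data, architecture, and step size are arbitrary choices, not from the paper) trains a two-layer ReLU network with explicit back-propagation and small-step gradient descent, and tracks $\|\mathbf{W}^{(1)}\|_F^2 - \|\mathbf{W}^{(2)}\|_F^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n1, p, m = 5, 8, 3, 20        # input dim, hidden width, output dim, samples
X = rng.normal(size=(d, m))
Y = rng.normal(size=(p, m))
W1 = 0.1 * rng.normal(size=(n1, d))
W2 = 0.1 * rng.normal(size=(p, n1))

def loss_and_grads(W1, W2):
    # Forward pass: f(x) = W2 relu(W1 x); squared loss averaged over samples.
    H = W1 @ X
    A = np.maximum(H, 0)
    R = W2 @ A - Y
    loss = 0.5 * np.mean(np.sum(R ** 2, axis=0))
    # Backward pass (explicit back-propagation).
    gW2 = (R @ A.T) / m
    gH = (W2.T @ R) * (H > 0)    # ReLU sub-gradient (taken as 0 at the kink)
    gW1 = (gH @ X.T) / m
    return loss, gW1, gW2

eta = 1e-3                        # small step size approximates gradient flow
diff0 = np.sum(W1 ** 2) - np.sum(W2 ** 2)
loss0 = loss_and_grads(W1, W2)[0]
for _ in range(2000):
    loss, g1, g2 = loss_and_grads(W1, W2)
    W1 -= eta * g1
    W2 -= eta * g2
diffT = np.sum(W1 ** 2) - np.sum(W2 ** 2)
```

With step size $\eta$, the first-order change of the difference cancels exactly (this is the content of the theorem), so the per-step drift is $O(\eta^2)$: the loss decreases while `diffT - diff0` remains tiny.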
\subsection{Convolutional Neural Networks} \label{subsec:cnn} Now we show that the conservation property in Corollary~\ref{cor:conserved-F-norm} can be generalized to convolutional neural networks. In fact, we can allow \emph{arbitrary sparsity pattern and weight sharing structure} within a layer; convolutional layers are a special case. \paragraph{Neural networks with sparse connections and shared weights.} We use the same notation as in Section~\ref{subsec:conserved-fully-connected}, with the difference that some weights in a layer can be \emph{missing} or \emph{shared}. Formally, the weight matrix $\mathbf W^{(h)} \in \mathbb{R}^{n_h\times n_{h-1}}$ in layer $h$ ($h\in [N]$) can be described by a vector $\vect{v}^{(h)} \in \mathbb{R}^{d_h}$ and a function $g_h: [n_h]\times[n_{h-1}] \to [d_h]\cup\{0\}$. Here $\vect{v}^{(h)}$ consists of the actual \emph{free parameters} in this layer and $d_h$ is the number of free parameters (e.g. if there are $k$ convolutional filters in layer $h$ each with size $r$, we have $d_h = r\cdot k$). The map $g_h$ represents the sparsity and weight sharing pattern: \begin{align*} \mathbf W^{(h)}[i, j] = \begin{cases} 0, & g_h(i, j) = 0, \\ \vect{v}^{(h)}[k], & g_h(i, j) =k > 0. \end{cases} \end{align*} Denote by $\vect{v} = \left( \vect{v}^{(h)} \right)_{h=1}^N$ the collection of all the parameters in this network, and we consider gradient flow to learn the parameters: \begin{equation*} \frac{\d\vect{v}^{(h)}}{\mathrm{d}t} \in - \frac{\partial L(\vect{v})}{\partial \vect{v}^{(h)}}, \qquad h = 1, \ldots, N. \end{equation*} The following theorem generalizes Corollary~\ref{cor:conserved-F-norm} to neural networks with sparse connections and shared weights: \begin{thm}\label{thm:cnn} For any $h\in[N-1]$, we have \begin{equation*} \frac{\d}{\mathrm{d}t} \left( \|\vect{v}^{(h)}\|^2 - \|\vect{v}^{(h+1)}\|^2 \right) = 0. 
\end{equation*} \end{thm} Therefore, for a neural network with arbitrary sparsity pattern and weight sharing structure, gradient flow still balances the magnitudes of all layers. \subsection{Proof of Theorem~\ref{thm:conserved-neuron}} \label{subsec:proof_main} The proofs of all theorems in this section are similar. They are based on the use of the chain rule (i.e. back-propagation) and the property of homogeneous activations. Below we provide the proof of Theorem~\ref{thm:conserved-neuron} and defer the proofs of other theorems to Appendix~\ref{sec:proof-conserved}. \begin{proof}[Proof of Theorem~\ref{thm:conserved-neuron}] First we note that we can without loss of generality assume $L$ is the loss associated with one data sample $(\vect{x}, \vect{y}) \in \mathbb{R}^d\times\mathbb{R}^p$, i.e., $L(\vect{w}) = \ell(f_{\vect{w}}(\vect{x}), \vect{y})$. In fact, for $L(\vect{w}) = \frac1m \sum_{k=1}^m L_k(\vect{w})$ where $L_k(\vect{w}) = \ell\left( f_{\vect{w}}(\vect{x}_k), \vect{y}_k \right)$, for any single weight $\mathbf W^{(h)}[i, j]$ in the network we can compute $\frac{\d}{\mathrm{d}t} (\mathbf{W}^{(h)}[i,j])^2 = 2 \mathbf{W}^{(h)}[i,j] \cdot \frac{\d \mathbf{W}^{(h)}[i,j]}{\mathrm{d}t} = -2 \mathbf{W}^{(h)}[i,j] \cdot \frac{\partial L(\vect{w})}{\partial \mathbf{W}^{(h)}[i,j]} = -2 \mathbf{W}^{(h)}[i,j] \cdot \frac1m \sum_{k=1}^m \frac{\partial L_k(\vect{w})}{\partial \mathbf{W}^{(h)}[i,j]}$, using the sharp chain rule of differential inclusions for tame functions \citep{drusvyatskiy2015curves,davis2018stochastic}. Thus, if we can prove the theorem for every individual loss $L_k$, we can prove the theorem for $L$ by taking average over $k\in[m]$. Therefore in the rest of proof we assume $L(\vect{w}) = \ell(f_{\vect{w}}(\vect{x}), \vect{y})$. For convenience, we denote $\vect{x}^{(h)} = f_{\vect{w}}^{(h)}(\vect{x})$ ($h\in[N]$), which is the input to the $h$-th hidden layer of neurons for $h\in[N-1]$ and is the output of the network for $h=N$. 
We also denote $\vect{x}^{(0)} = \vect{x}$ and $\phi_0(x)=x$ ($\forall x$). Now we prove \eqref{eqn:conserved-neuron}. Since $\mathbf{W}^{(h+1)}[k,i]$ ($k\in[n_{h+1}]$) can only affect $L(\vect{w})$ through $\vect{x}^{(h+1)}[k]$ , we have for $k\in[n_{h+1}]$, \begin{equation*} \frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h+1)}[k,i] } = \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)}[k] } \cdot \frac{\partial \vect{x}^{(h+1)}[k]}{\partial \mathbf{W}^{(h+1)}[k,i]} = \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)}[k] } \cdot \phi_{h}(\vect{x}^{(h)}[i]), \end{equation*} which can be rewritten as \begin{equation*} \frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h+1)}[:,i] } = \phi_{h}(\vect{x}^{(h)}[i]) \cdot \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)} }. \end{equation*} It follows that \begin{equation} \label{eqn:proof-conserved-neuron-1} \begin{aligned} \frac{\d}{\mathrm{d}t} \|\mathbf{W}^{(h+1)}[:,i]\|^2 &= 2 \left\langle \mathbf{W}^{(h+1)}[:,i], \frac{\d}{\mathrm{d}t} \mathbf{W}^{(h+1)}[:,i] \right\rangle = -2 \left\langle \mathbf{W}^{(h+1)}[:,i], \frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h+1)}[:,i] } \right\rangle \\ = -2 \phi_{h}(\vect{x}^{(h)}[i]) \cdot \left\langle \mathbf{W}^{(h+1)}[:,i], \frac{\partial L(\vect{w})}{ \partial \vect{x}^{(h+1)} } \right\rangle. \end{aligned} \end{equation} On the other hand, $\mathbf{W}^{(h)}[i, :]$ only affects $L(\vect{w})$ through $\vect x^{(h)}[i]$. 
Using the chain rule, we get \begin{align*} \frac{ \partial L(\vect{w}) }{ \partial \mathbf{W}^{(h)}[i, :] } &= \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h)}[i] } \cdot \phi_{h-1} (\vect x^{(h-1)})= \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_h'(\vect x^{(h)}[i]) \cdot \phi_{h-1} (\vect x^{(h-1)}), \end{align*} where $\phi'$ is interpreted as a set-valued mapping whenever it is applied at a non-differentiable point.\footnote{More precisely, the equalities should be an inclusion whenever there is a sub-differential, but as we see in the next display the ambiguity in the choice of sub-differential does not affect later calculations.} It follows that\footnote{This holds for any choice of element of the sub-differential, since $\phi'(x) x = \phi(x)$ holds at $x=0$ for any choice of sub-differential.} \begin{align*} & \frac{\d}{\mathrm{d}t} \|\mathbf{W}^{(h)}[i, :]\|^2 = 2 \left\langle \mathbf{W}^{(h)}[i, :], \frac{\d}{\mathrm{d}t} \mathbf{W}^{(h)}[i, :] \right\rangle = -2 \left\langle \mathbf{W}^{(h)}[i, :], \frac{\partial L(\vect{w})}{ \partial \mathbf{W}^{(h)}[i, :] } \right\rangle \\ =\,& -2 \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_h'(\vect x^{(h)}[i]) \cdot \left\langle \mathbf{W}^{(h)}[i, :], \phi_{h-1} (\vect x^{(h-1)}) \right\rangle \\ =\,& -2 \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_h'(\vect x^{(h)}[i]) \cdot \vect x^{(h)}[i] = -2 \left\langle \frac{ \partial L(\vect{w}) }{ \partial \vect x^{(h+1)} } , \mathbf{W}^{(h+1)}[:, i] \right\rangle \cdot \phi_{h}(\vect{x}^{(h)}[i]). \end{align*} Comparing the above expression to \eqref{eqn:proof-conserved-neuron-1}, we finish the proof. 
\end{proof} \section{Introduction} \label{sec:intro} \input{intro.tex} \section{The Auto-Balancing Properties in Deep Neural Networks} \label{sec:conserved} \input{conserved.tex} \section{Gradient Descent Converges to Global Minimum for Asymmetric Matrix Factorization} \label{sec:mf} \input{mf.tex} \section{Empirical Verification} \label{sec:exp} \input{exp.tex} \section{Conclusion and Future Work} \label{sec:con} \input{conclusion.tex} \section*{Acknowledgements} \label{sec:ack} \input{ack.tex} \subsection{Related Work} \label{sec:rel} \input{rel.tex} \subsection{Paper Organization} The rest of the paper is organized as follows. In Section~\ref{sec:conserved}, we present our main theoretical result on the implicit regularization property of gradient flow for optimizing neural networks. In Section~\ref{sec:mf}, we analyze the dynamics of randomly initialized gradient descent for asymmetric matrix factorization problem with unregularized objective function~\eqref{eqn:intro_mf_obj}. In Section~\ref{sec:exp}, we empirically verify the theoretical result in Section~\ref{sec:conserved}. We conclude and list future directions in Section~\ref{sec:con}. Some technical proofs are deferred to the appendix. \subsection{Notation} We use bold-faced letters for vectors and matrices. For a vector $\vect x$, denote by $\vect x[i]$ its $i$-th coordinate. For a matrix $\vect A$, we use $\mathbf A[i, j]$ to denote its $(i, j)$-th entry, and use $\mathbf A[i, :]$ and $\mathbf A[:, j]$ to denote its $i$-th row and $j$-th column, respectively (both as column vectors). We use $\norm{\cdot}_2$ or $\norm{\cdot}$ to denote the Euclidean norm of a vector, and use $\norm{\cdot}_F$ to denote the Frobenius norm of a matrix. We use $\langle \cdot, \cdot \rangle$ to denote the standard Euclidean inner product between two vectors or two matrices. Let $[n] = \{1, 2, \ldots, n\}$. \subsection{The General Rank-$r$ Case} First we consider the general case of $r\ge1$. 
Our main theorem below says that if we use a random small initialization $(\mathbf U_0, \mathbf V_0)$, and set step sizes $\eta_t$ to be appropriately small, then gradient descent \eqref{eqn:mf-gd-dynamics} will converge to a solution close to the global minimum of \eqref{eqn:mf}. To our knowledge, this is the first result showing that gradient descent with random initialization directly solves the un-regularized asymmetric matrix factorization problem~\eqref{eqn:mf}. \begin{thm}\label{thm:mf-main} Let $0<\epsilon < \norm{\mathbf M^*}_F$. Suppose we initialize the entries in $\mathbf U_0$ and $\mathbf V_0$ i.i.d. from $\mathcal{N}(0, \frac{\epsilon}{\mathrm{poly}(d)})$ ($d = \max\{d_1, d_2\}$), and run \eqref{eqn:mf-gd-dynamics} with step sizes $\eta_t = \frac{\sqrt{\epsilon/r}}{100(t+1) \norm{\mathbf M^*}_F^{3/2}}$ ($t=0,1,\ldots$).\footnote{The dependency of $\eta_t$ on $t$ can be $\eta_t = \Theta\left( t^{-(1/2+\delta)} \right)$ for any constant $\delta \in (0, 1/2]$.} Then with high probability over the initialization, $\lim_{t\to\infty}(\mathbf U_t, \mathbf V_t) = (\bar{\mathbf U}, \bar{\mathbf V})$ exists and satisfies $\norm{\bar{\mathbf U} \bar{\mathbf V}^\top - \mathbf M^*}_F \le \epsilon$. \end{thm} \paragraph{Proof sketch of Theorem~\ref{thm:mf-main}.} First let's imagine that we are using infinitesimal step size in GD. Then according to Theorem~\ref{thm:conserved-linear} (viewing problem~\eqref{eqn:mf} as learning a two-layer linear network where the inputs are all the standard unit vectors in $\mathbb{R}^{d_2}$), we know that $\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V$ will stay invariant throughout the algorithm. Hence when $\mathbf U$ and $\mathbf V$ are initialized to be small, $\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V$ will stay small forever. 
Combined with the fact that the objective $f(\mathbf U, \mathbf V)$ is decreasing over time (which means $\mathbf U \mathbf V^\top$ cannot be too far from $\mathbf M^*$), we can show that $\mathbf U$ and $\mathbf V$ will always stay bounded. Now we are using positive step sizes $\eta_t$, so we no longer have the invariance of $\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V$. Nevertheless, by a careful analysis of the updates, we can still prove that $\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t$ is small, the objective $f(\mathbf U_t, \mathbf V_t)$ decreases, and $\mathbf U_t$ and $\mathbf V_t$ stay bounded. Formally, we have the following lemma: \begin{lem} \label{lem:mf-balance} With high probability over the initialization $(\mathbf U_0, \mathbf V_0)$, for all $t$ we have: \begin{enumerate}[(i)] \item Balancedness: $\norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F \le \epsilon$; \item Decreasing objective: $f(\mathbf U_{t}, \mathbf V_{t}) \le f(\mathbf U_{t-1}, \mathbf V_{t-1}) \le \cdots \le f(\mathbf U_{0}, \mathbf V_{0}) \le 2\norm{\mathbf M^*}_F^2$; \item Boundedness: $\norm{\mathbf U_{t}}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V_t}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$. \end{enumerate} \end{lem} Now that we know the GD algorithm automatically constrains $(\mathbf U_t, \mathbf V_t)$ in a bounded region, we can use the smoothness of $f$ in this region and a standard analysis of GD to show that $(\mathbf U_t, \mathbf V_t)$ converges to a stationary point $(\bar{\mathbf U}, \bar{\mathbf V})$ of $f$ (Lemma~\ref{lem:mf-convergence}). Furthermore, using the results of \citep{lee2016gradient, panageas2016gradient} we know that $(\bar{\mathbf U}, \bar{\mathbf V})$ is almost surely not a strict saddle point. 
Then the following lemma implies that $(\bar{\mathbf U}, \bar{\mathbf V})$ has to be close to a global optimum since we know $\norm{\bar{\mathbf U}^\top \bar{\mathbf U} - \bar{\mathbf V}^\top \bar{\mathbf V}}_F \le \epsilon$ from Lemma~\ref{lem:mf-balance} (i). This would complete the proof of Theorem~\ref{thm:mf-main}. \begin{lem} \label{lem:mf-strict-saddle} Suppose $(\mathbf{U}, \mathbf{V})$ is a stationary point of $f$ such that $\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F \le \epsilon$. Then either $\norm{\mathbf U \mathbf V^\top - \mathbf M^*}_F \le \epsilon$, or $(\mathbf{U}, \mathbf{V})$ is a strict saddle point of $f$. \end{lem} The full proof of Theorem~\ref{thm:mf-main} and the proofs of Lemmas~\ref{lem:mf-balance} and \ref{lem:mf-strict-saddle} are given in Appendix~\ref{sec:proof-mf}. \subsection{The Rank-$1$ Case} We have shown in Theorem~\ref{thm:mf-main} that GD with small and diminishing step sizes converges to a global minimum for matrix factorization. Empirically, it is observed that a constant step size $\eta_t \equiv \eta$ is enough for GD to converge quickly to global minimum. Therefore, some natural questions are how to prove convergence of GD with a constant step size, how fast it converges, and how the discretization affects the invariance we derived in Section~\ref{sec:conserved}. While these questions remain challenging for the general rank-$r$ matrix factorization, we resolve them for the case of $r=1$. Our main finding is that with constant step size, the norms of two layers are always within a constant factor of each other (although we may no longer have the stronger balancedness property as in Lemma~\ref{lem:mf-balance}), and we utilize this property to prove the \emph{linear convergence} of GD to a global minimum. 
When $r=1$, the asymmetric matrix factorization problem and its GD dynamics become $$ \min_{\vect{u} \in \mathbb{R}^{d_1}, \vect{v} \in \mathbb{R}^{d_2}} \frac12 \norm{\vect{u}\vect{v}^\top - \mathbf{M}^*}_F^2 $$ and \begin{align*} \vect{u}_{t+1} = \vect{u}_t - \eta (\vect{u}_t\vect{v}_t^\top - \mathbf{M}^*)\vect{v}_t, \qquad \vect{v}_{t+1} = \vect{v}_t - \eta\left(\vect{v}_t\vect{u}_t^\top - \mathbf{M}^{*\top} \right)\vect{u}_t. \end{align*} Here we assume $\mathbf{M}^*$ has rank $1$, i.e., it can be factorized as $\mathbf{M}^* = \sigma_1 \vect{u}^*\vect{v}^{*\top}$ where $\vect{u}^*$ and $\vect{v}^*$ are unit vectors and $\sigma_1>0$. Our main theoretical result is the following. \begin{thm}[Approximate balancedness and linear convergence of GD for rank-$1$ matrix factorization] \label{thm:rank_1} Suppose $\vect{u}_0 \sim \mathcal{N}(\vect 0,\delta\mathbf{I})$, $\vect{v}_0 \sim \mathcal{N}(\vect 0,\delta \mathbf I)$ with $\delta = c_{init} \sqrt{\frac{\sigma_1}{d}} $ ($d = \max\{d_1, d_2\}$) for some sufficiently small constant $c_{init} >0$, and $\eta = \frac{c_{step}}{\sigma_1}$ for some sufficiently small constant $ c_{step} >0$. Then with constant probability over the initialization, for all $t$ we have $c_0\le \frac{\abs{\vect{u}_t^\top \vect u^*}}{\abs{\vect{v}_t^\top \vect{v}^*}}\le C_0$ for some universal constants $ c_0,C_0>0$. Furthermore, for any $0<\epsilon< 1$, after $t = O\left( \log\frac{d}{\epsilon} \right)$ iterations, we have $\norm{\vect{u}_t\vect{v}_t^\top - \mathbf{M}^*}_F \le \epsilon \sigma_1$. \end{thm} Theorem~\ref{thm:rank_1} shows for $\vect{u}_t$ and $\vect{v}_t$, their strengths in the signal space, $\abs{\vect{u}_t^\top \vect{u}^*}$ and $\abs{\vect{v}_t^\top \vect{v}^*}$, are of the same order. This approximate balancedness helps us prove the linear convergence of GD. We refer readers to Appendix~\ref{sec:proof-rank-1} for the proof of Theorem~\ref{thm:rank_1}. 
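The behavior in Theorem~\ref{thm:rank_1} can also be observed directly. The sketch below (with illustrative constants for $c_{init}$ and $c_{step}$, chosen for this demo rather than taken from the analysis) runs the rank-$1$ GD dynamics above on a random rank-$1$ target:

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 30, 20
u_star = rng.normal(size=d1)
u_star /= np.linalg.norm(u_star)
v_star = rng.normal(size=d2)
v_star /= np.linalg.norm(v_star)
sigma1 = 2.0
M = sigma1 * np.outer(u_star, v_star)   # rank-1 target M* = sigma1 u* v*^T

# Small random initialization and constant step size (demo constants).
delta = 0.1 * np.sqrt(sigma1 / max(d1, d2))
u = delta * rng.normal(size=d1)
v = delta * rng.normal(size=d2)
eta = 0.1 / sigma1

for _ in range(2000):
    R = np.outer(u, v) - M              # residual u v^T - M*
    u, v = u - eta * (R @ v), v - eta * (R.T @ u)

err = np.linalg.norm(np.outer(u, v) - M, "fro")
ratio = np.linalg.norm(u) / np.linalg.norm(v)
```

In typical runs the iterates escape the saddle at the origin, converge linearly to the global minimum, and the two factor norms stay within a constant factor of each other, matching the approximate balancedness in the theorem.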
\subsection{Proof of Lemma~\ref{lem:mf-balance}} Recall the following three properties we want to prove in Lemma~\ref{lem:mf-balance}, which we call $\mathcal{A}(t)$, $\mathcal{B}(t)$ and $\mathcal{C}(t)$, respectively: \begin{align*} \mathcal{A}(t):\qquad & \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F \le \epsilon, \\ \mathcal{B}(t):\qquad & f(\mathbf U_{t}, \mathbf V_{t}) \le f(\mathbf U_{t-1}, \mathbf V_{t-1}) \le \cdots \le f(\mathbf U_{0}, \mathbf V_{0}) \le 2 \norm{\mathbf M^*}_F^2, \\ \mathcal{C}(t):\qquad & \norm{\mathbf U_{t}}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V_t}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F. \end{align*} We use induction to prove these statements. For $t=0$, we can make the Gaussian variance in the initialization sufficiently small such that with high probability we have $$\norm{\mathbf U_{0}}_F^2 \le \epsilon,\qquad \norm{\mathbf V_0}_F^2 \le \epsilon, \qquad \norm{\mathbf U_0^\top \mathbf U_0 - \mathbf V_0^\top \mathbf V_0}_F \le \frac\epsilon2.$$ From now on we assume they are all satisfied. Then $\mathcal{A}(0)$ is already satisfied, $\mathcal{C}(0)$ is satisfied because $\epsilon < \norm{\mathbf M^*}_F$, and $\mathcal{B}(0)$ can be verified by $f(\mathbf U_{0}, \mathbf V_{0}) = \frac12 \norm{\mathbf U_0 \mathbf V_0^\top - \mathbf M^*}_F^2 \le \norm{\mathbf U_0 \mathbf V_0^\top}_F^2 + \norm{\mathbf M^*}_F^2 \le \norm{\mathbf U_0}_F^2 \norm{\mathbf V_0^\top}_F^2 + \norm{\mathbf M^*}_F^2 \le \epsilon^2+\norm{\mathbf M^*}_F^2\le 2\norm{\mathbf M^*}_F^2$. To prove $\mathcal{A}(t)$, $\mathcal{B}(t)$ and $\mathcal{C}(t)$ for all $t$, we prove the following three claims. Since we have $\mathcal{A}(0)$, $\mathcal{B}(0)$ and $\mathcal{C}(0)$, if the following claims are all true, the proof will be completed by induction. 
\begin{enumerate}[(i)] \item $\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(0), \ldots, \mathcal{C}(t) \implies \mathcal{A}(t+1)$; \item $\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(t) \implies \mathcal{B}(t+1)$; \item $\mathcal{A}(t), \mathcal{B}(t) \implies \mathcal{C}(t)$. \end{enumerate} \begin{claim} $\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(0), \ldots, \mathcal{C}(t) \implies \mathcal{A}(t+1)$. \end{claim} \begin{proof} Using the update rule \eqref{eqn:mf-gd-dynamics} we can calculate \begin{align*} & \mathbf U_{t+1}^\top \mathbf U_{t+1} - \mathbf V_{t+1}^\top \mathbf V_{t+1} \\ =\, & \left(\mathbf U_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*) \mathbf V_t\right)^\top \left(\mathbf U_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*) \mathbf V_t\right) \\ & - \left(\mathbf V_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*)^\top \mathbf U_t\right)^\top \left(\mathbf V_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*)^\top \mathbf U_t\right) \\ =\, & \mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t + \eta_t^2 \left( \mathbf V_t^\top \mathbf R_t^\top \mathbf R_t \mathbf V_t - \mathbf U_t^\top \mathbf R_t \mathbf R_t^\top \mathbf U_t \right), \end{align*} where $\mathbf R_t = \mathbf U_t \mathbf V_t^\top - \mathbf M^*$; note that the first-order terms in $\eta_t$ cancel exactly.
Then we have \begin{equation} \label{eqn:mf-inproof-1} \begin{aligned} &\norm{\mathbf U_{t+1}^\top \mathbf U_{t+1} - \mathbf V_{t+1}^\top \mathbf V_{t+1}}_F\\ \le\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + \eta_t^2 \left( \norm{\mathbf V_t^\top \mathbf R_t^\top \mathbf R_t \mathbf V_t }_F + \norm{ \mathbf U_t^\top \mathbf R_t \mathbf R_t^\top \mathbf U_t}_F \right) \\ \le\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + \eta_t^2 \left( \norm{\mathbf V_t}_F^2 \norm{\mathbf R_t}_F^2 + \norm{\mathbf U_t}_F^2 \norm{\mathbf R_t}_F^2 \right) \\ =\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + 2\eta_t^2 \left( \norm{\mathbf V_t}_F^2 + \norm{\mathbf U_t}_F^2 \right) f(\mathbf U_t, \mathbf V_t) \\ \le\,& \norm{\mathbf U_t^\top \mathbf U_t - \mathbf V_t^\top \mathbf V_t}_F + 2\eta_t^2 \cdot 10\sqrt{r} \norm{\mathbf M^*}_F \cdot 2 \norm{\mathbf M^*}_F^2, \end{aligned} \end{equation} where the last line is due to $\mathcal{B}(t)$ and $\mathcal{C}(t)$. Since we have $\mathcal{B}(t')$ and $\mathcal{C}(t')$ for all $t' \le t$, \eqref{eqn:mf-inproof-1} is still true when substituting $t$ with any $t'\le t$. Summing these inequalities and noting $\norm{\mathbf U_0^\top \mathbf U_0 - \mathbf V_0^\top \mathbf V_0}_F \le \frac{\epsilon}{2}$, we get \begin{align*} &\norm{\mathbf U_{t+1}^\top \mathbf U_{t+1} - \mathbf V_{t+1}^\top \mathbf V_{t+1}}_F \\ \le\,& \norm{\mathbf U_0^\top \mathbf U_0 - \mathbf V_0^\top \mathbf V_0}_F + 40\sqrt{r} \norm{\mathbf M^*}_F^3 \sum_{i=0}^t \eta_i^2 \\ \le\,& \frac\epsilon2 + 40\sqrt{r} \norm{\mathbf M^*}_F^3 \sum_{i=0}^t \frac{1}{(i+1)^2} \cdot \frac{\epsilon/r}{100^2\norm{\mathbf M^*}_F^3}\\ \le\,&\epsilon. \end{align*} Therefore we have proved $\mathcal{A}(t+1)$. \end{proof} \begin{claim} $\mathcal{B}(0), \ldots, \mathcal{B}(t), \mathcal{C}(t) \implies \mathcal{B}(t+1)$.
\end{claim} \begin{proof} Note that we only need to show $f(\mathbf U_{t+1}, \mathbf V_{t+1}) \le f(\mathbf U_{t}, \mathbf V_{t})$. We prove this using the standard analysis of gradient descent, for which we need the smoothness of the objective function $f$ (Lemma~\ref{lem:mf-local-smooth}). We first need to bound $\norm{\mathbf U_t}_F$, $\norm{\mathbf V_t}_F$, $\norm{\mathbf U_{t+1}}_F$ and $\norm{\mathbf V_{t+1}}_F$. We know from $\mathcal{C}(t)$ that $\norm{\mathbf U_{t}}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$ and $\norm{\mathbf V_t}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$. We can also bound $\norm{\mathbf U_{t+1}}_F^2$ and $\norm{\mathbf V_{t+1}}_F^2$ easily from the GD update rule: \begin{align*} &\norm{\mathbf U_{t+1}}_F^2 \\ =\,& \norm{\mathbf U_t - \eta_t (\mathbf U_t \mathbf V_t^\top - \mathbf M^*) \mathbf V_t}_F^2 \\ \le\,& 2 \norm{\mathbf U_t}_F^2 + 2\eta_t^2 \norm{\mathbf U_t \mathbf V_t^\top - \mathbf M^*}_F^2 \norm{\mathbf V_t}_F^2 \\ \le\,& 2\cdot 5\sqrt{r} \norm{\mathbf M^*}_F + 2 \eta_t^2 \cdot 2f(\mathbf U_t, \mathbf V_t) \cdot 5\sqrt{r} \norm{\mathbf M^*}_F\\ \le\,& 10\sqrt{r} \norm{\mathbf M^*}_F + 2 \cdot \frac{\epsilon/r}{100^2 (t+1)^2 \norm{\mathbf M^*}_F^3} \cdot 4\norm{\mathbf M^*}_F^2 \cdot 5\sqrt{r} \norm{\mathbf M^*}_F & \text{(using $\mathcal{B}(t)$)} \\ \le \,& 10\sqrt{r} \norm{\mathbf M^*}_F + \frac{\epsilon}{100} \\ \le\, & 11\sqrt{r} \norm{\mathbf M^*}_F. & \text{(using $\epsilon < \norm{\mathbf M^*}_F$)} \end{align*} The same argument gives $\norm{\mathbf V_{t+1}}_F^2 \le 11\sqrt{r} \norm{\mathbf M^*}_F$. Let $\beta = (66\sqrt{r}+2) \norm{\mathbf M^*}_F $. From Lemma~\ref{lem:mf-local-smooth}, $f$ is $\beta$-smooth over $\mathcal{S} = \{(\mathbf U, \mathbf V): \norm{\mathbf U}_F^2 \le 11\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V}_F^2 \le 11\sqrt{r} \norm{\mathbf M^*}_F \}$. Also note that $\eta_t < \frac{1}{\beta}$ by our choice.
Then using smoothness we have \begin{equation} \label{eqn:mf-obj-decrease} \begin{aligned} &f(\mathbf U_{t+1}, \mathbf V_{t+1}) \\ \le\,& f(\mathbf U_{t}, \mathbf V_{t}) + \left\langle \nabla f(\mathbf U_t, \mathbf V_t), \begin{pmatrix} \mathbf U_{t+1}\\ \mathbf V_{t+1} \end{pmatrix} - \begin{pmatrix} \mathbf U_{t}\\ \mathbf V_{t} \end{pmatrix} \right\rangle + \frac{\beta}{2} \norm{ \begin{pmatrix} \mathbf U_{t+1}\\ \mathbf V_{t+1} \end{pmatrix} - \begin{pmatrix} \mathbf U_{t}\\ \mathbf V_{t} \end{pmatrix} }_F^2 \\ =\,& f(\mathbf U_{t}, \mathbf V_{t}) - \eta_t \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 + \frac{\beta}{2}\eta_t^2 \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 \\ \le\,& f(\mathbf U_{t}, \mathbf V_{t}) - \frac{\eta_t}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2. \end{aligned} \end{equation} Therefore we have shown $\mathcal{B}(t+1)$. \end{proof} \begin{claim} $\mathcal{A}(t), \mathcal{B}(t) \implies \mathcal{C}(t)$. \end{claim} \begin{proof} From $\mathcal{B}(t)$ we know $\frac12 \norm{\mathbf U_t \mathbf V_t^\top - \mathbf M^*}_F^2 \le 2 \norm{\mathbf M^*}_F^2$ which implies $\norm{\mathbf U_t \mathbf V_t^\top}_F \le 3\norm{\mathbf M^*}_F$. Therefore it suffices to prove \begin{equation} \label{eqn:mf-inproof-toshow-1} \norm{\mathbf U \mathbf V^\top}_F \le 3\norm{\mathbf M^*}_F, \norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F \le \epsilon \implies \norm{\mathbf U}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F, \norm{\mathbf V}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F. \end{equation} Now we prove \eqref{eqn:mf-inproof-toshow-1}. Consider the SVD $\mathbf U = \mathbf\Phi \mathbf\Sigma \mathbf{\Psi}^\top$, where $\mathbf \Phi \in \mathbb{R}^{d_1\times d_1}$ and $\mathbf{\Psi} \in \mathbb{R}^{r\times r}$ are orthogonal matrices, and $\mathbf\Sigma \in \mathbb{R}^{d_1\times r}$ is a diagonal matrix. Let $\sigma_i = \mathbf \Sigma[i, i]$ ($i\in[r]$) which are all the singular values of $\mathbf U$. 
Define $\widetilde{\mathbf V} = \mathbf V \mathbf\Psi$. Then we have \begin{align*} 3\norm{\mathbf M^*}_F \ge \norm{\mathbf U \mathbf V^\top}_F = \norm{\mathbf\Phi \mathbf\Sigma \mathbf{\Psi}^\top \mathbf{\Psi} \widetilde{\mathbf V}^\top }_F = \norm{\mathbf \Sigma \widetilde{\mathbf V}^\top }_F = \sqrt{\sum_{i=1}^r \sigma_i^2 \norm{\widetilde{\mathbf V}[:,i]}^2} \end{align*} and \begin{align*} \epsilon &\ge \norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F = \norm{ \mathbf\Psi \mathbf\Sigma^\top \mathbf{\Phi}^\top \mathbf\Phi \mathbf\Sigma \mathbf{\Psi}^\top - \mathbf{\Psi} \widetilde{\mathbf V}^\top \widetilde{\mathbf V} \mathbf{\Psi}^\top }_F = \norm{ \mathbf\Sigma^\top \mathbf\Sigma - \widetilde{\mathbf V}^\top \widetilde{\mathbf V} }_F \\ &\ge \sqrt{\sum_{i=1}^r \left( \sigma_i^2 - \norm{\widetilde{\mathbf V}[:,i]}^2 \right)^2}. \end{align*} Using the above two inequalities we get \begin{align*} \sum_{i=1}^r \sigma_i^4 &\le \sum_{i=1}^r \left( \sigma_i^4 + \norm{\widetilde{\mathbf V}[:,i]}^4 \right) = \sum_{i=1}^r \left( \sigma_i^2 - \norm{\widetilde{\mathbf V}[:,i]}^2 \right)^2 + 2 \sum_{i=1}^r \sigma_i^2 \norm{\widetilde{\mathbf V}[:,i]}^2 \\ &\le \epsilon^2 + 2 \left(3\norm{\mathbf M^*}_F\right)^2 \le 19 \norm{\mathbf M^*}_F^2. \end{align*} Then by the Cauchy-Schwarz inequality we have \begin{align*} \norm{\mathbf U}_F^2 = \sum_{i=1}^r \sigma_i^2 \le \sqrt{r \sum_{i=1}^r \sigma_i^4} \le \sqrt{r\cdot 19 \norm{\mathbf M^*}_F^2} \le 5\sqrt{r} \norm{\mathbf M^*}_F. \end{align*} Similarly, we also have $\norm{\mathbf V}_F^2 \le 5\sqrt{r} \norm{\mathbf M^*}_F$. Therefore we have proved \eqref{eqn:mf-inproof-toshow-1}. \end{proof} \subsection{Convergence to a Stationary Point} With the balancedness and boundedness properties in Lemma~\ref{lem:mf-balance}, it is then standard to show that $(\mathbf U_t, \mathbf V_t)$ converges to a stationary point of $f$. 
\begin{lem} \label{lem:mf-convergence} Under the setting of Theorem~\ref{thm:mf-main}, with high probability $\lim_{t\to\infty}(\mathbf U_t, \mathbf V_t) = (\bar{\mathbf U}, \bar{\mathbf V})$ exists, and $(\bar{\mathbf U}, \bar{\mathbf V})$ is a stationary point of $f$. Furthermore, $(\bar{\mathbf U}, \bar{\mathbf V})$ satisfies $\norm{\bar{\mathbf U}^\top \bar{\mathbf U} - \bar{\mathbf V}^\top \bar{\mathbf V}}_F\le \epsilon$. \end{lem} \begin{proof} We assume the three properties in Lemma~\ref{lem:mf-balance} hold, which happens with high probability. Then from \eqref{eqn:mf-obj-decrease} we have \begin{equation}\label{eqn:mf-obj-decrease-2} \begin{aligned} f(\mathbf U_{t+1}, \mathbf V_{t+1}) &\le f(\mathbf U_{t}, \mathbf V_{t}) - \frac{\eta_t}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 \\ &= f(\mathbf U_{t}, \mathbf V_{t}) - \frac{1}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F \norm{\begin{pmatrix} \mathbf U_{t+1} \\ \mathbf V_{t+1} \end{pmatrix} - \begin{pmatrix} \mathbf U_{t} \\ \mathbf V_{t} \end{pmatrix}}_F . \end{aligned} \end{equation} Under the above descent condition, the result of \cite{absil2005convergence} says that the iterates either diverge to infinity or converge to a fixed point. According to Lemma~\ref{lem:mf-balance}, $\{(\mathbf U_t, \mathbf V_t)\}_{t=1}^\infty$ are all bounded, so they have to converge to a fixed point $(\bar{\mathbf U}, \bar{\mathbf V})$ as $t\to\infty$. Next, from \eqref{eqn:mf-obj-decrease-2} we know that $\sum_{t=1}^\infty \frac{\eta_t}{2} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F^2 \le f(\mathbf U_0, \mathbf V_0)$ is bounded. Notice that $\eta_t$ scales like $1/t$, so $\sum_t \eta_t = \infty$ and we must have $\liminf_{t\to\infty} \norm{ \nabla f(\mathbf U_t, \mathbf V_t) }_F = 0$. Then according to the smoothness of $f$ in a bounded region (Lemma~\ref{lem:mf-local-smooth}) we conclude $ \nabla f(\bar{\mathbf U}, \bar{\mathbf V}) = \mat0$, i.e., $(\bar{\mathbf U}, \bar{\mathbf V})$ is a stationary point.
The second part of the lemma follows directly from Lemma~\ref{lem:mf-balance}~(i). \end{proof} \subsection{Proof of Lemma~\ref{lem:mf-strict-saddle}} The main idea of the proof is similar to that of \cite{ge2017no}. We want to find a direction $\mathbf \Delta$ such that either $[\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta)$ is negative or $(\mathbf U, \mathbf V)$ is close to a global minimum. We show that this is possible when $\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F \le \epsilon$. First we define some notation. Take the SVD $\mathbf M^* = \mathbf \Phi^* \mathbf \Sigma^* \mathbf \Psi^{*\top}$, where $\mathbf \Phi^* \in \mathbb{R}^{d_1\times r}$ and $\mathbf \Psi^* \in \mathbb{R}^{d_2\times r}$ have orthonormal columns and $\mathbf \Sigma^* \in \mathbb{R}^{r\times r}$ is diagonal. Denote $\mathbf U^* = \mathbf \Phi^* (\mathbf \Sigma^*)^{1/2}$ and $\mathbf V^* = \mathbf \Psi^* (\mathbf \Sigma^*)^{1/2}$. Then we have $\mathbf U^* \mathbf V^{*\top} = \mathbf M^*$ (i.e., $(\mathbf U^*, \mathbf V^*)$ is a global minimum) and $\mathbf U^{*\top} \mathbf U^* = \mathbf V^{*\top} \mathbf V^*$. Let $\mathbf M = \mathbf U \mathbf V^\top$, $\mathbf W = \begin{pmatrix} \mathbf U\\ \mathbf V \end{pmatrix}$ and $\mathbf W^* = \begin{pmatrix} \mathbf U^*\\ \mathbf V^* \end{pmatrix}$. Define $$\mathbf{R} = \mathrm{argmin}_{\mathbf R' \in \mathbb{R}^{r\times r} \text{, orthogonal}} \norm{\mathbf W - \mathbf W^* \mathbf R'}_F$$ and \[ \mathbf\Delta = \mathbf W - \mathbf W^* \mathbf R. \] We will show that $\mathbf\Delta$ is the desired direction.
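As an aside, the minimizer $\mathbf R$ above is the classical orthogonal Procrustes problem, whose closed-form solution comes from an SVD of $\mathbf W^{*\top}\mathbf W$. The following sketch (with arbitrary small dimensions) computes it and checks that it beats random orthogonal matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 10, 3
W = rng.standard_normal((n, r))
W_star = rng.standard_normal((n, r))

# Orthogonal Procrustes: argmin_R ||W - W* R||_F over orthogonal R
# is R = A B^T, where W*^T W = A S B^T is an SVD.
A, S, Bt = np.linalg.svd(W_star.T @ W)
R = A @ Bt
best = np.linalg.norm(W - W_star @ R)

# Sanity check: no random orthogonal matrix should do better.
for _ in range(200):
    Q, _ = np.linalg.qr(rng.standard_normal((r, r)))
    assert np.linalg.norm(W - W_star @ Q) >= best - 1e-9
```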
Recall \eqref{eqn:mf-hessian}: \begin{equation} \label{eqn:mf-hessian-to-bound} \begin{aligned} \ [\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta) = 2 \left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle + \norm{\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top }_F^2, \end{aligned} \end{equation} where $ \mathbf \Delta = \begin{pmatrix} \mathbf \Delta_{\mathbf U} \\ \mathbf{\Delta}_{\mathbf V} \end{pmatrix}, \mathbf \Delta_{\mathbf U} \in \mathbb{R}^{d_1\times r}, \mathbf \Delta_{\mathbf V} \in \mathbb{R}^{d_2\times r}$. We consider the two terms in \eqref{eqn:mf-hessian-to-bound} separately. For the first term in \eqref{eqn:mf-hessian-to-bound}, we have: \begin{claim} \label{claim:mf-hessian-to-bound-1} $\left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle = -\norm{\mathbf M - \mathbf M^*}_F^2$. \end{claim} \begin{proof} Since $(\mathbf U, \mathbf V)$ is a stationary point of $f$, we have the first-order optimality condition: \begin{equation} \label{eqn:mf-inproof-first-order-opt-cond} \frac{\partial f(\mathbf U, \mathbf V)}{\partial \mathbf U} = (\mathbf M - \mathbf M^*) \mathbf V = \mat0, \qquad \frac{\partial f(\mathbf U, \mathbf V)}{\partial \mathbf V} = (\mathbf M - \mathbf M^*)^\top \mathbf U = \mat0. \end{equation} Note that $\mathbf \Delta_{\mathbf U} = \mathbf U - \mathbf U^* \mathbf R$ and $\mathbf \Delta_{\mathbf V} = \mathbf V - \mathbf V^* \mathbf R$. 
We have \begin{align*} &\left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle \\ =\,& \left\langle \mathbf M - \mathbf M^*, (\mathbf U - \mathbf U^* \mathbf R) (\mathbf V - \mathbf V^* \mathbf R)^\top \right\rangle \\ =\,& \left\langle \mathbf M - \mathbf M^*, \mathbf M - \mathbf U^* \mathbf R \mathbf V^\top - \mathbf U \mathbf R^\top \mathbf V^{*\top} + \mathbf M^* \right\rangle \\ =\,& \left\langle \mathbf M - \mathbf M^*, \mathbf M^*\right\rangle \\ =\,& \left\langle \mathbf M - \mathbf M^*, \mathbf M^* - \mathbf M \right\rangle \\ =\,& -\norm{\mathbf M - \mathbf M^*}_F^2, \end{align*} where we have used the following consequences of \eqref{eqn:mf-inproof-first-order-opt-cond}: \begin{align*} &\left\langle \mathbf M - \mathbf M^*, \mathbf M \right\rangle = \left\langle \mathbf M - \mathbf M^*, \mathbf U \mathbf V^\top \right\rangle = 0, \\ &\left\langle \mathbf M - \mathbf M^*, \mathbf U^* \mathbf R \mathbf V^\top \right\rangle = 0, \\ &\left\langle \mathbf M - \mathbf M^*, \mathbf U \mathbf R^\top \mathbf V^{*\top} \right\rangle = 0. \end{align*} \end{proof} The second term in \eqref{eqn:mf-hessian-to-bound} has the following upper bound: \begin{claim} \label{claim:mf-hessian-to-bound-2} $\norm{\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top}_F^2 \le \norm{\mathbf M - \mathbf M^*}_F^2 + \frac12 \epsilon^2 $.
\end{claim} \begin{proof} We make use of the following identities, all of which can be directly verified by plugging in definitions: \begin{equation} \label{eqn:mf-inproof-identity-1} \mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top = \mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top + \mathbf M - \mathbf M^*, \end{equation} \begin{equation} \label{eqn:mf-inproof-identity-2} \norm{\mathbf{\Delta} \mathbf{\Delta}^\top}_F^2 = 4 \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top}_F^2 + \norm{\mathbf\Delta_{\mathbf U}^\top\mathbf\Delta_{\mathbf U} - \mathbf\Delta_{\mathbf V}^\top \mathbf\Delta_{\mathbf V}}_F^2, \end{equation} \begin{equation} \label{eqn:mf-inproof-identity-3} \begin{aligned} \norm{\mathbf W \mathbf W^\top - \mathbf W^* \mathbf W^{*\top}}_F^2 =\ & 4\norm{\mathbf M - \mathbf M^*}_F^2 - 2 \norm{\mathbf U^\top \mathbf U^* - \mathbf V^\top \mathbf V^*}_F^2 \\&+ \norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F^2 + \norm{\mathbf U^{*\top} \mathbf U^* - \mathbf V^{*\top} \mathbf V^*}_F^2. \end{aligned} \end{equation} We also need the following inequality, which is \citep[Lemma 6]{ge2017no}: \begin{equation} \label{eqn:mf-inproof-rong-ineq} \norm{\mathbf{\Delta} \mathbf{\Delta}^\top}_F^2 \le 2 \norm{\mathbf W \mathbf W^\top - \mathbf W^* \mathbf W^{*\top}}_F^2. 
\end{equation} Now we can prove the desired bound as follows: \begin{align*} & \norm{\mathbf U \mathbf\Delta_{\mathbf V}^\top + \mathbf\Delta_{\mathbf U} \mathbf V^\top}_F^2 \\ =\, & \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top + \mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-identity-1}) \\ =\,& \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top }_F^2 + 2 \left\langle \mathbf M - \mathbf M^*, \mathbf\Delta_{\mathbf U} \mathbf{\Delta}_{\mathbf V}^\top \right\rangle + \norm{\mathbf M - \mathbf M^*}_F^2 \\ =\,& \norm{\mathbf\Delta_{\mathbf U} \mathbf\Delta_{\mathbf V}^\top }_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\text{Claim~\ref{claim:mf-hessian-to-bound-1}})\\ \le\, & \frac14 \norm{\mathbf{\Delta} \mathbf{\Delta}^\top}_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-identity-2}) \\ \le\, & \frac12 \norm{\mathbf W \mathbf W^\top - \mathbf W^* \mathbf W^{*\top}}_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-rong-ineq}) \\ =\, & 2\norm{\mathbf M - \mathbf M^*}_F^2 - \norm{\mathbf U^\top \mathbf U^* - \mathbf V^\top \mathbf V^*}_F^2 + \frac12\norm{\mathbf U^\top \mathbf U - \mathbf V^\top \mathbf V}_F^2 \\& + \frac12\norm{\mathbf U^{*\top} \mathbf U^* - \mathbf V^{*\top} \mathbf V^*}_F^2 - \norm{\mathbf M - \mathbf M^*}_F^2 & (\eqref{eqn:mf-inproof-identity-3}) \\ \le\,& \norm{\mathbf M - \mathbf M^*}_F^2 + \frac12 \epsilon^2, \end{align*} where in the last line we have used $\mathbf U^{*\top} \mathbf U^* = \mathbf V^{*\top} \mathbf V^*$ and $\norm{\mathbf U^{\top} \mathbf U - \mathbf V^{\top} \mathbf V}_F \le \epsilon$. \end{proof} Using Claims~\ref{claim:mf-hessian-to-bound-1} and \ref{claim:mf-hessian-to-bound-2}, we obtain an upper bound on \eqref{eqn:mf-hessian-to-bound}: \begin{align*} [\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta) \le -\norm{\mathbf M - \mathbf M^*}_F^2 + \frac12 \epsilon^2.
\end{align*} Therefore, we have either $\norm{\mathbf U \mathbf V^\top - \mathbf M^*}_F = \norm{\mathbf M - \mathbf M^*}_F \le \epsilon$ or $[\nabla^2 f(\mathbf U, \mathbf V)] (\mathbf \Delta, \mathbf \Delta) \le -\frac12 \epsilon^2 <0$. In the latter case, $(\mathbf U, \mathbf V)$ is a strict saddle point of $f$. This completes the proof of Lemma~\ref{lem:mf-strict-saddle}. \subsection{Finishing the Proof of Theorem~\ref{thm:mf-main}} Theorem~\ref{thm:mf-main} is a direct corollary of Lemma~\ref{lem:mf-convergence}, Lemma~\ref{lem:mf-strict-saddle}, and the fact that gradient descent does not converge to a strict saddle point almost surely \citep{lee2016gradient, panageas2016gradient}. \subsection{Proof Sketch of Theorem~\ref{thm:rank_1}} To analyze the convergence, we define four key quantities (in this sketch, $\vect{x}_t$ and $\vect{y}_t$ denote the iterates $\vect{u}_t$ and $\vect{v}_t$, and $\vect{u}$ and $\vect{v}$ denote the unit vectors $\vect{u}^*$ and $\vect{v}^*$): \begin{align*} \alpha_t = \vect{u}^\top\vect{x}_t,\quad \alpha_{t,\perp} = \norm{\vect{u}_\perp^\top\vect{x}_t}_2, \quad \beta_{t} = \vect{v}^\top \vect{y}_t, \quad \beta_{t,\perp} = \norm{\vect{v}_\perp^\top \vect{y}_t}_2 \end{align*} where $\vect{u}_\perp^\top$ and $\vect{v}^\top_{\perp}$ denote the projection matrices onto the orthogonal complements of $\vect{u}$ and $\vect{v}$, respectively. Notice that $\norm{\vect{x}_t}_2^2 = \alpha_t^2 + \alpha_{t,\perp}^2$ and $\norm{\vect{y}_t}_2^2 = \beta_t^2 + \beta_{t,\perp}^2$. Note that at a global minimum we have $\beta_{t,\perp} = 0$, $\alpha_{t,\perp}=0$ and $\alpha_t \beta_t = \sigma_1$. It turns out we can write out an explicit formula for the dynamics of these quantities.
\begin{align*} \alpha_{t+1} = \left(1-\eta\left(\beta_t^2 +\beta_{t,\perp}^2\right)\right)\alpha_t + \eta\sigma_1\beta_t, \quad &\beta_{t+1} = \left(1-\eta\left(\alpha_t^2+\alpha_{t,\perp}^2\right)\right)\beta_t + \eta\sigma_1 \alpha_t\\ \alpha_{t+1,\perp} = \left(1-\eta\left(\beta_t^2+\beta_{t,\perp}^2\right)\right)\alpha_{t,\perp}, \quad &\beta_{t+1,\perp} = \left(1-\eta\left(\alpha_t^2+\alpha_{t,\perp}^2\right)\right)\beta_{t,\perp} \end{align*} Since we use Gaussian initialization, by standard random matrix theory, with high probability the following hold for these quantities: \[ \alpha_0 \asymp \frac{c_{init}\sqrt{\sigma_1}}{\sqrt{d}}, \quad \alpha_{0,\perp} \asymp c_{init} \sqrt{\sigma_1}, \quad \beta_0 \asymp\frac{c_{init}\sqrt{\sigma_1}}{\sqrt{d}}, \quad\beta_{0,\perp} \asymp c_{init} \sqrt{\sigma_1}. \] We also assume that the signal is positive at initialization, $\alpha_0 > 0$ and $\beta_0 > 0$; since the dynamics are invariant under flipping the signs of both $\vect{x}_t$ and $\vect{y}_t$, this holds with probability $0.5$ under Gaussian initialization. To prove $c_0\le \frac{\abs{\vect{u}^\top\vect{x}_t}}{\abs{\vect{v}^\top \vect{y}_t}}\le C_0$ for all iterations, we cannot analyze the quantity $\frac{\abs{\vect{u}^\top\vect{x}_t}}{\abs{\vect{v}^\top \vect{y}_t}}$ alone; the rate of convergence to the global minimum also plays an important role in controlling $\frac{\abs{\vect{u}^\top\vect{x}_t}}{\abs{\vect{v}^\top \vect{y}_t}}$. We divide the dynamics into two stages. \begin{thm}[Stage 1: Escaping from the $(\vect{0},\vect{0})$ Saddle Point]\label{thm:rank_1_stage_1} Let $T := \min\left\{t : \alpha_t^2 + \beta_t^2 > c_1\sigma_1\right\}$ for $c_1 = 1/2$.
If $\eta \le \frac{c_{step}}{\sigma_1}$, then for $t=1,\ldots,T-1$, the following hold:\begin{itemize} \item Magnitudes in the complement space remain small: $\xi_t \le \xi_0$; \item Growth of magnitude in the signal space: $\abs{\alpha_{t+1}+\beta_{t+1}} \ge \left(1+\frac{\eta\sigma_1}{3}\right) \abs{\alpha_t+\beta_t}$; \item Bounded ratio between two layers: $\abs{\alpha_t - \beta_t} \le c_{diff}\abs{\alpha_t+\beta_t}$ where $c_{diff} \triangleq \frac{\abs{\alpha_0-\beta_0}}{\abs{\alpha_0+\beta_0}}$. \end{itemize} Furthermore, after at most $T_1 = O\left(\frac{1}{\eta\sigma_1}\log\left(\frac{\sigma_1}{\alpha_0+\beta_0}\right)\right)$ iterations, $T_1 \le T$, we have $\alpha_{T_1} \beta_{T_1} \ge c \sigma_1$ for some small constant $0 < c < 1$. \end{thm} In this stage, the strength in the noise space remains small ($\xi_t \le \xi_0$) and the strength in the signal space is growing ($\abs{\alpha_{t+1}+\beta_{t+1}} \ge \left(1+\frac{\eta\sigma_1}{3}\right) \abs{\alpha_t+\beta_t}$). Further, because $\abs{\alpha_t - \beta_t} \le c_{diff}\abs{\alpha_t+\beta_t}$, we know $\frac{1-c_{diff}}{1+c_{diff}}\le \frac{\alpha_t}{\beta_t} \le \frac{1+c_{diff}}{1-c_{diff}}$, which is our desired result. Here we consider $\abs{\alpha_t - \beta_t} / \abs{\alpha_t + \beta_t}$ instead of directly bounding $\frac{\alpha_t}{\beta_t}$ because the dynamics of $\abs{\alpha_t - \beta_t} / \abs{\alpha_t + \beta_t}$ are much simpler to analyze. This will be apparent in the proof. Now we enter stage 2, which is essentially a local convergence phase. The following theorem characterizes the behavior of the strengths in the signal and noise spaces in this stage.
\begin{thm}[Stage 2: Local Convergence]\label{thm:rank_1_stage_2} Suppose that at the $T_1$-th iteration, the following hold: \begin{align*} \alpha_{T_1}\beta_{T_1} \ge c\sigma_1, \quad \alpha_{T_1,\perp}^2+\beta_{T_1,\perp}^2 \le &10c_{init} \sigma_1,\quad \abs{\alpha_{T_1}-\beta_{T_1}} \le c_{diff}\abs{\alpha_{T_1}+\beta_{T_1}} \end{align*} where $c > 1/5$, $c_{init} < 1/100$, $c_{diff}< 1$ and $\eta \le \frac{c_{step}}{\sigma_1}$ for $c_{step} \le 1/100$. Let $\tau_t = \min\{\alpha_t,\beta_t\}$ and $\tau_{min} \triangleq \frac{\tau_{T_1}}{2}$. Then for all $t = T_1+1,T_1+2,\ldots$, the following hold: \begin{align*} \tau_t \ge \prod_{i=T_1+1}^{t}\left(1-\eta\xi_0\left(1-\eta\tau_{min}^2\right)^{i-T_1}\right)\tau_{T_1}, \quad \xi_t \le \left(1-\eta\tau_{min}^2\right)^{t-T_1} \xi_0, \quad \alpha_t\beta_t \le \sigma_1. \end{align*} \end{thm} In Theorem~\ref{thm:rank_1_stage_2}, $\tau_t$ characterizes the smaller signal strength in $\vect{x}_t$ and $\vect{y}_t$ and $\xi_t$ characterizes the strength in the noise space. Note that even though $\tau_t$ may decrease, with some algebra, Theorem~\ref{thm:rank_1_stage_2} shows $\tau_t$ is uniformly bounded below by $\Omega\left(\sqrt{\sigma_1}\right)$ for all $t \ge T_1$. Combining this with the fact that $\alpha_t\beta_t \le \sigma_1$, we know $c_0\le \frac{\alpha_t}{\beta_t} \le C_0$ for all $t \ge T_1$. Similar to Theorem~\ref{thm:rank_1_stage_1}, here we consider $\tau_t$ and $\alpha_t\beta_t$ instead of directly analyzing $\frac{\alpha_t}{\beta_t}$ because the dynamics of $\tau_t$ and $\alpha_t\beta_t$ are easier to analyze. A direct corollary is the following local convergence rate result, which is proved by examining the dynamics of $\alpha_t\beta_t$.
\begin{cor}\label{cor:rank_1local_convergence_rate} Under the same assumptions as in Theorem~\ref{thm:rank_1_stage_2}, after $T_2 = O\left(\frac{1}{\eta \sigma_1}\log\left(\frac{1}{\epsilon}\right)\right)$ additional iterations, we have $ \norm{\vect{x}_{T_1+T_2}\vect{y}_{T_1+T_2}^\top - \mathbf{M}^*}_F^2 \le \epsilon^2 \sigma_1^2.$ \end{cor} Theorem~\ref{thm:rank_1_stage_1}, Theorem~\ref{thm:rank_1_stage_2} and Corollary~\ref{cor:rank_1local_convergence_rate} together imply Theorem~\ref{thm:rank_1}.
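As a closing numerical illustration of the two-stage analysis (illustrative only; the constants and the initialization are arbitrary choices, and we take $\xi_t = \sqrt{\alpha_{t,\perp}^2 + \beta_{t,\perp}^2}$, our reading of the notation), iterating the scalar dynamics directly shows the complement strength decaying monotonically while $\alpha_t\beta_t$ climbs to $\sigma_1$:

```python
import math

sigma1, eta = 1.0, 0.01
alpha = beta = 0.01            # small positive signal components
alpha_p = beta_p = 0.1         # strength in the complement (noise) space

xi = [math.hypot(alpha_p, beta_p)]
for t in range(20000):
    A = alpha ** 2 + alpha_p ** 2  # ||x_t||^2
    B = beta ** 2 + beta_p ** 2    # ||y_t||^2
    alpha, beta = ((1 - eta * B) * alpha + eta * sigma1 * beta,
                   (1 - eta * A) * beta + eta * sigma1 * alpha)
    alpha_p, beta_p = (1 - eta * B) * alpha_p, (1 - eta * A) * beta_p
    xi.append(math.hypot(alpha_p, beta_p))

print(alpha * beta, xi[-1])
```

With this symmetric initialization the iterates first grow geometrically out of the saddle at $(\vect{0},\vect{0})$ and then contract to $\alpha_t\beta_t \approx \sigma_1$, matching the stage-1 and stage-2 descriptions above.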
\section{Introduction} \label{sec:Introduction} Mean field games were introduced by \cite{LasryLions.06a,LasryLions.06b,LasryLions.07} and \cite{HuangMalhameCaines.07, HuangMalhameCaines.06} to overcome the notorious intractability of $n$-player games. Two key simplifications are made. First, agents interact symmetrically through the empirical distribution of their states. Second, by formally letting $n\to\infty$, one passes to a representative agent whose actions do not affect this distribution because each individual agent becomes negligible. Thus, the mean field game is seen as an approximation of the $n$-player game for large $n$. We refer to the lecture notes~\cite{Cardaliaguet.13} and the monographs~\cite{BensoussanFrehseYam.13,CarmonaDelaRue.17a,CarmonaDelaRue.17b} and their extensive references for further background. In this paper, we conduct a case study of an $n$-player game of optimal stopping where multiple equilibria may occur naturally. We formulate an associated mean field game and highlight that certain mean field equilibria are limits of $n$-player equilibria while others are not, and study how to distinguish them. Equilibria that are not limit points are questionable from the point of view of applications, at least if they are motivated as ``$n$-player games with large $n$.'' Several ways of connecting $n$-player and mean field games have been studied in the literature. In many cases it is easier to establish the reverse direction, namely that a given mean field equilibrium induces an \emph{approximate} Nash equilibrium in the $n$-player game for large $n$. This goes back to \cite{HuangMalhameCaines.06} and is by now established in some generality, see in particular \cite{Lacker.14} for diffusion control, \cite{CarmonaDelarueLacker.17} for games of timing or~\cite{CecchinFischer.18} for finite state games (but see also \cite{CampiFischer.18} for a counterexample in a degenerate case with absorption). 
It then follows, conversely, that mean field equilibria are limits of approximate $n$-player equilibria. However, we emphasize that approximate and actual Nash equilibria may look quite different, and in particular one cannot expect in general that there is a true Nash equilibrium in the proximity of an approximate one. The convergence of $n$-player Nash equilibria to the mean field limit is often more delicate. The deep result of \cite{CardaliaguetDelarueLasryLions.15} shows convergence for a class of (closed-loop) games where agents choose drifts of diffusions. In their setting, the mean field game has a unique equilibrium as a consequence of the so-called monotonicity condition~\cite{LasryLions.06a} which postulates that it is disadvantageous for agents' states to be close to one another. In a related but different (open-loop) framework, and without imposing uniqueness, \cite{Fischer.14} obtains convergence under the assumption that the limiting measure flow is deterministic. More comprehensively, \cite{Lacker.14} shows that $n$-player equilibria converge to a weak notion of mean field equilibria which can include mixtures of deterministic equilibria, for a general class of diffusion-control games. A corresponding result for games of timing is established in \cite{CarmonaDelarueLacker.17}. Most recently, \cite{Lacker.18b} provides results along the lines of~\cite{Lacker.14} for the closed-loop case. Convergence has also been shown in a number of more specific problems, for instance stationary mean field games~\cite{LasryLions.06a}, linear-quadratic problems~\cite{Bardi.12} or a game of Poissonian control~\cite{NutzZhang.17}, among others. However, to the best of our knowledge, the question of which mean field equilibria are limit points of (true) $n$-player equilibria has not been emphasized as such in the literature.
We can mention the parallel work~\cite{CecchinDaiPraFischerPelino.18} on a two-state game: the game has unique $n$-player equilibria and these converge to a mean field equilibrium as expected; however, a second, less plausible mean field solution can appear for certain parameter values and this solution is not a limit. Another interesting parallel work \cite{DelarueFoguenTchuendom.18} studies several approaches to selecting an equilibrium in a linear-quadratic mean field game with multiple equilibria, including the convergence of $n$-player equilibria. Different approaches are shown to select different equilibria. From the perspective of mean field games, being a limit point of $n$-player equilibria can be seen as a stability property of equilibria with respect to the number of players. We are not aware of a systematic study in this direction (but see~\cite{BrianiCardaliaguet.18} for a recent investigation of a different stability property that is potentially related). Since mean field equilibria are often motivated as ``large~$n$'' equilibria, it seems desirable to understand the phenomenon in some generality and at least establish sufficient conditions. A general formulation and investigation of this stability seem wide open at this time, whence our focus on a case study in the present paper. \subsection{Synopsis} We start by introducing an $n$-player game of optimal stopping inspired by~\cite{Bertucci.17,BouveretDumitrescuTankov.18,CarmonaDelarueLacker.17,Nutz.16} and the literature on bank-runs following~\cite{DiamondDybvig.83}. In addition to their i.i.d.\ signals, players observe how many other players have already stopped. A crucial feature is that whenever an agent leaves the game, staying in the game becomes less attractive for the remaining agents. For instance, this may reflect that the bank is more likely to default if other clients withdraw their savings.
In particular, the game satisfies the opposite of Lasry and Lions' monotonicity condition, or \emph{strategic complementarity} in Economics terminology~\cite{BulowGeanakoplosKlemperer.85}. Indeed, the model exhibits a ``flocking'' or ``herding'' behavior where groups of agents can collectively decide to stop or not. We will see that these choices can naturally give rise to multiple equilibria; more precisely, they parametrize the full range of $n$-player equilibria. Next, we review the mean field version of the game which was introduced in~\cite{Nutz.16} without discussing the $n$-player game. Enhancing slightly a result of~\cite{Nutz.16}, mean field equilibria are described by a simple equation: for any equilibrium, the proportion $\rho(t)$ of agents that have stopped by time~$t$ is a zero of a deterministic function $g_{t}$ on $[0,1]$ as in Figure~\ref{fig:intro}. More generally, any equilibrium $t\mapsto \rho(t)$ is characterized as an increasing, right-continuous selection of such zeros. In Figure~\ref{fig:intro}, we can distinguish several types of zeros: increasing-transversal~($i$), tangential~($t$) and decreasing-transversal~($d$).
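These three types can be read off from the local sign change of $g_t$ across the zero. Purely as an illustration (the function $g$ below is a made-up stand-in, not the model's actual $g_t$), the classification can be mechanized as follows:

```python
# Hypothetical stand-in for g_t with one zero of each type:
# a decreasing-transversal zero at 0.2, a tangential one at 0.5,
# and an increasing-transversal one at 0.8.
def g(u):
    return (u - 0.2) * (u - 0.5) ** 2 * (u - 0.8)

def classify(z, g, h=1e-4):
    """Classify a zero z of g by the sign change of g across it."""
    left, right = g(z - h), g(z + h)
    if left < 0 < right:
        return "increasing-transversal"
    if left > 0 > right:
        return "decreasing-transversal"
    return "tangential"

types = {z: classify(z, g) for z in (0.2, 0.5, 0.8)}
print(types)
```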
\begin{figure}[b] \begin{center} \begin{tikzpicture}[scale=1.1] \draw[line width=1pt,-latex] (-5,0) to (5,0) node[right=2pt] {$u$}; \fill (-4.25,0) circle (1.5pt) node[above=1pt] at (-4.25,0) {0}; \fill (4.25,0) circle (1.5pt) node[above=1pt] at (4.25,0) {1}; \draw[line width=1.5pt] (-4.25,-.5) to[out=0,in=225] (-3.25,0) to[out=45,in=135] (-1.75,0) to[out=-55,in=180] (-.575,0) to[out=0,in=235] (.5,0) to[out=45,in=135] (2,0) to[out=-45,in=225] (3.5,0) to[out=45,in=190] (4.25,.5) node[above=1pt] {$g_{t}(u)$}; \fill (-3.25,0) circle (2pt) node[above=2pt]{$i$}; \fill (-1.75,0) circle (2pt) node[above=2pt]{$d$}; \fill (-.575,0) circle (2pt) node[above=2pt]{$t$}; \fill (.5,0) circle (2pt) node[above=2pt]{$i$}; \fill (2,0) circle (2pt) node[above=2pt]{$d$}; \fill (3.5,0) circle (2pt) node[above=2pt]{$i$}; \end{tikzpicture} \end{center} \caption{Types of mean field equilibria at a fixed time $t$} \label{fig:intro} \end{figure} These types are related to how concentrated the distribution of the agents' signals is in a neighborhood of the zero, relative to the strength of interaction. Intuitively, tangential solutions are delicate in that they may disappear if Figure~\ref{fig:intro} is perturbed, whereas the transversal solutions are stable in this sense. We then turn to our main question and study which mean field equilibria are limits of $n$-player equilibria. Roughly, the main result is that \begin{enumerate} \item Increasing-transversal solutions are limits of $n$-player equilibria, \item decreasing-transversal solutions \emph{fail} to be limits, \item tangential solutions can but need not be limits. \end{enumerate} Specifically, we first consider the minimal and maximal equilibria, corresponding to the left- and right-most solutions in Figure~\ref{fig:intro}. The $n$-player game also has such extremal equilibria and these yield natural candidates for sequences converging to their mean field counterparts. 
After introducing appropriate notions for dynamic equilibria, we show that this convergence indeed holds, under the condition that the solutions are increasing-transversal (on a sufficiently large set of times $t$). However, we also find that if the minimal (say) solution is tangential, the minimal $n$-player equilibria can converge to a mixture of mean field equilibria and then the minimal mean field equilibrium may fail to be a proper limit. (The minimal and maximal solutions can be increasing-transversal or tangential, but never decreasing-transversal.) This also yields a novel example of how randomization can emerge in mean field games. Second, we study the convergence to a general mean field equilibrium, possibly somewhere in the middle of Figure~\ref{fig:intro}. In that case, there are no obvious candidates for the $n$-player approximations and more abstract arguments need to be used. We show by a fixed point construction that all increasing-transversal solutions are limits of $n$-player equilibria. Quite surprisingly, however, (``strongly'') decreasing-transversal solutions fail to be limits despite appearing stable in Figure~\ref{fig:intro}. In fact, these solutions merely occur as parts of mixtures that are limits, and the weight within these mixtures can be bounded by a monotone function of the slope in Figure~\ref{fig:intro}. It turns out that some fairly detailed asymptotic statistics, such as the expected number of $n$-player equilibria, can be analyzed in our model---which is unusual for mean field games. The remainder of this paper is organized as follows. In Section~\ref{se:basicSetup}, we introduce the game of optimal stopping. Section~\ref{se:nPlayer} describes the Nash equilibria of the $n$-player version and Section~\ref{se:MFG} covers the analogue for the mean field game.
The results on the convergence to the minimal and maximal equilibria are relatively direct and established in Section~\ref{se:convergenceExtremal}, whereas the more abstract results on the convergence to general equilibria are reported in Section~\ref{se:convergenceGeneral}. \section{Description of the Game}\label{se:basicSetup} Let $(I,\mathcal{I},\lambda)$ be a probability space representing the agents; we shall be interested in the $n$-player case with a finite $I$ and the mean field case with an atomless space. Let $(\Omega,\mathcal{G},P)$ be another probability space, equipped with a right-continuous filtration $\mathbb{G}=(\mathcal{G}_{t})_{t\in\mathbb{R}_{+}}$ and an exponentially distributed random variable $\mathcal{E}$ which is independent of $\mathbb{G}$. Given an agent $i\in I$, let $\alpha^{i}\geq0$ be a $\mathbb{G}$-progressively measurable process which is locally integrable and consider the random time $$ \theta^{i} = \inf\bigg\{t:\, \int_{0}^{t} \alpha^{i}_{s}\,ds = \mathcal{E}\bigg\}. $$ As in~\cite{Nutz.16}, one may think of $\theta^{i}$ as the time when agent $i$ expects the default of her bank. We fix a parameter $r\in\mathbb{R}$, interpreted as the interest rate paid by the bank (and assumed to be constant for simplicity). Following~\cite{Nutz.16}, we suppose that $\alpha^{i}$ is increasing\footnote{Increase is to be understood in the non-strict sense throughout the paper.} and that \begin{equation}\label{eq:intCond} \text{ $\inf\{t: \, \alpha^{i}_{t}-r\geq 0\}<\infty$\quad $P$-a.s.} \; \end{equation} Denoting by $\mathcal{T}$ the set of all $\mathbb{G}$-stopping times, we then consider the optimal stopping problem \begin{equation}\label{eq:optStopProblem} \sup_{\tau\in\mathcal{T}} E\big[e^{r\tau} \mathbf{1}_{\{\theta^{i}>\tau\}\cup \{\theta^{i}=\infty\}}\big] \end{equation} which we assume to have a finite value. 
Thus, if the default occurs after stopping, $\theta^{i}>\tau$, we may think of the agent as accruing interest on an initial unit investment until~$\tau$, but as losing everything if $\theta^{i}<\tau$. If the stopping time \begin{equation}\label{eq:optStopTime} \tau^{i} := \inf\{t: \, \alpha^{i}_{t}\geq r\} \in\mathcal{T} \end{equation} is a.s.\ finite, then $\tau^{i}$ is optimal and in fact the minimal solution of~\eqref{eq:optStopProblem}; cf.\ \cite[Lemma~2.1]{Nutz.16}. The solution is unique if, for instance, $\alpha^{i}$ is strictly increasing, but not in general. We assume that agents choose~\eqref{eq:optStopTime} in the case of non-uniqueness, which can be motivated e.g.\ as a preference for early stopping when other things are equal. This convention is not essential, but it simplifies our exposition and allows us to focus on the multiplicity of equilibria due to inherent game-theoretic aspects, as it avoids ambiguity at the level of the individual agents. The processes $\alpha^{i}$ will depend on the proportion $\rho(t)$ of players who have already stopped, thus inducing an interaction among the agents. Since, given~$\rho$, the optimal stopping times are completely determined by~\eqref{eq:optStopTime}, we shall simply say that an equilibrium is a $\mathbb{G}$-adapted process $\rho$ such that \begin{equation*}\label{eq:equilibDef} \rho(t) = \lambda\{i:\, \tau^{i}\leq t\}, \end{equation*} where it is tacitly assumed that the above set is $\lambda$-measurable. \section{The $n$-Player Game} \label{se:nPlayer} In this section, we formulate the $n$-player version of the ``toy model'' mean field game in \cite[Section~4]{Nutz.16}. Indeed, fix $n\in\mathbb{N}$ and take $I=\{1,\dots,n\}$ to be a set with $n$ elements, equipped with the normalized counting measure.
Each player $i$ observes an idiosyncratic signal $Y^i_t\geq 0$ which is right-continuous, progressively measurable, increasing and such that $\{Y^{i}\}_{i\in I}$ are pairwise i.i.d.\ with the common c.d.f.\ $$ y\mapsto F_t(y):=P\{Y_t^i\leq y\}. $$ Moreover, for a fixed interaction constant\footnote{We could more generally consider processes $\alpha^{i}$ which are nonlinear functions of $Y^{i}$ and $\rho^{-i}$ and possibly a common noise, as in~\cite{Nutz.16}. However, the increased generality does not seem to lead to additional insights regarding the main questions of this paper, so we have chosen to use the simplified ``toy model'' in our exposition. The constant $c$ could in fact be normalized to $1$ by changing $Y^{i}$ and $r$, but we find it useful to represent the strength of interaction explicitly.} $c>0$, $$ \alpha^i(t)=Y_t^i+c\rho_{n}^{-i}(t), \quad \mbox{where}\quad \rho_n^{-i}(t)=\frac{\#\{j\neq i:\, \tau^j\leq t\}}{n} $$ is the fraction of other players\footnote{Once again, we have decided to exclude player~$i$ in order to focus on the game-theoretic aspect of multiplicity. If player~$i$ considers her own action, i.e., uses $\rho$ instead of $\rho^{-i}$, then non-uniqueness can occur without other agents' involvement, simply because of the direct feedback on the state process.} (from the perspective of $i$) that have already stopped, according to $(\tau^{j})_{j\neq i}$. Specializing from the previous section, an $n$-player equilibrium boils down to the process $\rho_{n}(t)=\#\{j:\,\tau^j\leq t\}/n$ where the $\tau^{j}$ are as in~\eqref{eq:optStopTime}.
In particular, if $\rho_{n}$ is an equilibrium and $(t,\omega)$ is such that $\rho_n(t)(\omega)=k/n$, then as the stopping times satisfy $\tau^i=\inf\{t:\alpha^i(t)\geq r\}$, we must have\footnote{We will often abbreviate $\#\{i\in I:\, \dots\}$ to $\#\{\dots\}$ in what follows.} \begin{equation}\label{eq:NplayerEquilibCond} \#\{Y_t^i(\omega)+c\, \frac{k-1}{n}\geq r\}=k \quad\mbox{and}\quad \#\{Y_t^i(\omega)+c\, \frac{k}{n}< r\}=n-k. \end{equation} This condition is also sufficient, in the sense made precise in Remark~\ref{rk:NplayerEquilibCondSuff}. Next, we sketch the structure of all equilibria $\rho_n(t)=\#\{i: \tau^i\leq t\}/n$ of this game by a recursive construction, starting with $K=\emptyset$. \begin{enumerate} \item[1.] Suppose that at a given stopping time $t_{0}$, a group $K\subsetneq I$ of agents has already stopped. Then every remaining agent $i\notin K$ examines her criterion $$ \theta_{K}^{i} = \inf\{t:\, Y^{i}_{t} + c\, \frac{\#K}{n}\geq r\}. $$ If $\theta_{K}^{i}\leq t_{0}$, then player $i$ must stop immediately. We add $i$ to the set $K$ and repeat Step 1 until no further players are forced to stop. (By the monotonicity in $\#K$, it does not matter in which order the agents are processed.) \item[2.] Beyond individual players forced to stop, a group $J\subseteq K^{c}$ of agents may be able to ``coordinate'' and stop together.\footnote{While we are using suggestive language here, it should be noted that these are simply different configurations which may be equilibria. We are not trying to model a mechanism by which players ``find'' an equilibrium.} Indeed, suppose that $$ \theta_{K}^{J,i} = \inf\{t:\, Y^{i}_{t} + c\, \frac{\#K+\#J-1}{n}\geq r\} $$ satisfies $\theta_{K}^{J,i}\leq t_{0}$ for all $i\in J$. Then it is optimal for all these agents to stop as a group, and they may or may not ``choose'' to do so. If they stop, we add $J$ to $K$ and repeat the procedure starting with Step 1. \item[3.]
After all remaining groups of agents have decided whether to stop at time $t_{0}$, we increment time until there exists a group or individual agent wanting to stop, and start again at Step 1. \end{enumerate} The multiplicity of equilibria of this game arises because of the choices taken by the groups $J$ in Step 2, as well as the order in which the groups are processed. Next, we describe two of these equilibria in detail. The first one is the minimal equilibrium and corresponds to the groups~$J$ in Step 2 always choosing not to stop. This is equivalent to all players remaining in the game until their own optimality criterion forces them to quit. \begin{proposition} \label{pr:CharacterizationOfNMinEquilibrium} There exists an $n$-player equilibrium $\rho^m_n$ such that \begin{equation}\label{eq:CharNMinEquilib} \rho^m_n(t)=\frac{k}{n}\Longleftrightarrow\left\{ \begin{aligned} &\#\{Y_t^i+c\, \frac{k}{n}\geq r\}=k \\ &\#\{Y_t^i+c\, \frac{k-l}{n}\geq r\}\geq k-l+1,\quad 1\leq l\leq k. \end{aligned} \right. \end{equation} This equilibrium is minimal; i.e., $\rho^m_n(t)\leq\rho_n(t)$ for any $n$-player equilibrium~$\rho_n$. \end{proposition} \begin{proof} The construction is iterative. Given a set $K\subsetneq I$ corresponding to players who have already stopped, we can consider for all $i\notin K$ the stopping times $$ \theta_{K}^{i} = \inf\{t:\, Y^{i}_{t} + c\, \frac{\#K}{n}\geq r\} $$ with the corresponding order statistics $\theta_{K}^{(1)}\leq\theta_{K}^{(2)} \leq \dots$. We define $\theta_{K}=\theta_{K}^{(1)}$ and let $i_{K}$ denote a corresponding minimizing agent. We note that agent $i$ must stop at $\theta_{K}^{i}$, even if no further agents $j\notin K$ choose to stop, and that $i_{K}$ is the first of the agents $i\notin K$ subject to this event. To define the equilibrium, start with $K_{0}=\emptyset$ and set $\tau^{i}=\theta_{K_{0}}\equiv \theta_{K_{0}}^{(1)}$ on $\{i=i_{K_{0}}\}$.
Next, set $K_{1}=\{i_{K_{0}}\}$ and $\tau^{i}=\max\{\theta_{K_{1}},\theta_{K_{0}}\}$ on $\{i=i_{K_{1}}\}$, and continue inductively setting $K_{k}=K_{k-1}\cup \{i_{K_{k-1}}\}$ and $\tau^{i}=\max\{\theta_{K_{k}},\tau^{i_{K_{k-1}}}\}$ on $\{i=i_{K_{k}}\}$ for $k=2,\dots,n-1$. (The maximum needs to be taken since all the $\alpha^{j}$ are increased after player $i_{K_{k-1}}$ stops.) Setting $\rho^m_n(t)=\#\{i: \tau^i\leq t\}/n$, we have by construction that $\rho^m_n$ is an equilibrium with corresponding optimal stopping times $(\tau^{i})$ and that~\eqref{eq:CharNMinEquilib} holds. To see the minimality, let $\rho_{n}$ be any $n$-player equilibrium and consider $(t,\omega)$ such that $\rho_n(t)(\omega)=k/n$. Let $k'$ be such that $\rho^m_n(t)(\omega)=k'/n$. If we had $k' > k$, then~\eqref{eq:CharNMinEquilib} would imply $\#\{Y_t^i(\omega)+c\, \frac{k}{n}\geq r\}\geq k+1$ and hence $\#\{Y_t^i(\omega)+c\, \frac{k}{n}<r\}\leq n-k-1$, a contradiction to~\eqref{eq:NplayerEquilibCond}. Thus, $k'\leq k$ and we have shown that $\rho^m_n\leq \rho_n$. \end{proof} \begin{remark}\label{rk:minimalFromGivenEquilibrium} Let $\rho$ be an $n$-player equilibrium and $t_{0}$ a stopping time. There exists an equilibrium which is minimal among all $n$-player equilibria~$\varrho$ such that $\varrho=\rho$ on $[0,t_{0}]$. Indeed, it is obtained by agents stopping as in $\rho$ until $t_{0}$, whereas from $t_{0}$ onwards we apply the construction in the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} starting with $K=\{i:\,\tau^{i}\leq t_{0}\}$. We call this $\varrho$ the \emph{minimal extension} of $\rho$ after~$t_{0}$. \end{remark} The second extremal equilibrium is maximal and corresponds to players coordinating their actions such as to stop as early as possible. As seen in the construction below, this is equivalent to all players constantly seeking (maximally large) groups of collaborators so that immediate simultaneous stopping is optimal for all agents in the group. 
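As an aside, the iterative construction in the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} is easy to simulate at a single fixed time. The following Python sketch is ours and purely illustrative (the signal distribution and the values of $n$, $c$, $r$ are hypothetical): agents stop one at a time as in Step 1, which yields the least fixed point of $k\mapsto\#\{Y_t^i+c\,k/n\geq r\}$, and the first condition of~\eqref{eq:CharNMinEquilib} is then checked.

```python
import numpy as np

def minimal_equilibrium_fraction(y, c, r):
    """Minimal n-player equilibrium at a fixed time: starting from an
    empty set K, repeatedly stop any remaining agent i whose criterion
    Y^i + c*#K/n >= r forces her out (Step 1 of the recursion); the
    groups J of Step 2 never choose to stop."""
    n = len(y)
    stopped = np.zeros(n, dtype=bool)
    k = 0
    while True:
        forced = ~stopped & (y + c * k / n >= r)
        if not forced.any():
            return k / n
        stopped[np.flatnonzero(forced)[0]] = True  # order is irrelevant
        k += 1

rng = np.random.default_rng(0)
n, c, r = 200, 0.5, 1.0              # hypothetical parameter values
y = rng.uniform(0.0, 1.2, size=n)    # hypothetical signal values Y_t^i
rho_m = minimal_equilibrium_fraction(y, c, r)
k = round(rho_m * n)
# first condition of the characterization: #{Y_t^i + c*k/n >= r} = k
assert int(np.sum(y + c * k / n >= r)) == k
```

Since each iteration stops exactly one agent, the loop terminates after at most $n$ steps, and the order in which forced agents are processed does not affect the result, in line with the remark in Step 1.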
\begin{proposition} \label{pr:CharacterizationOfNMaxEquilibrium} There exists an $n$-player equilibrium $\rho^M_n$ such that \begin{equation}\label{eq:CharNMaxEquilib} \rho^M_n(t)=\frac{k}{n}\Longleftrightarrow\left\{ \begin{aligned} &\#\{Y_t^i+c\, \frac{k-1}{n}\geq r\}=k \\ &\#\{Y_t^i+c\, \frac{k+l-1}{n}\geq r\}\leq k+l-1,\quad 1\leq l\leq n-k. \end{aligned} \right. \end{equation} This equilibrium is maximal; i.e., $\rho^M_n(t)\geq\rho_n(t)$ for any $n$-player equilibrium~$\rho_n$. \end{proposition} \begin{proof} Given a set $K\subsetneq I$ of size $k=\#K$ corresponding to players who have already stopped, we can consider for $1\leq l \leq n-k$ the stopping times $$ \theta_{K}^{l} = \inf\{t:\, \#\{i\notin K:\, Y_t^i+c\, \frac{k+l-1}{n}\geq r\}\geq l\}; $$ intuitively, this is the first time an additional group $J$ of $\#J=l$ agents can collectively stop. If $\theta_{K}^{(1)}\leq\dots\leq \theta_{K}^{(n-k)}$ are the corresponding order statistics (ties are split by assigning the lower rank to the larger index $l$), pick $l$ such that $\theta_{K}^{l}=\theta_{K}^{(1)}$ and let $J=J(K)$ be the set of $i\notin K$ such that $\{Y^{i}_{\theta_{K}^{l}}\}_{i\in J}$ are the $l$ largest elements in $\{Y^{i}_{\theta_{K}^{l}}\}_{i\in K^{c}}$; we think of $J$ as the $l$ most pessimistic agents remaining at time $\theta_{K}^{l}$ and denote $\theta_{K}:=\theta_{K}^{l}$. To define the equilibrium, start with $K_{0}=\emptyset$ and set $\tau^{i}=\theta_{\emptyset}$ for $i\in J(\emptyset)$. Next, set $K_{1}=J(\emptyset)$ and $\tau^{i}=\theta_{K_{1}}$ for $i\in J(K_{1})$, and continue inductively with $K_{2}=J(K_{1})\cup K_{1}$. Setting $\rho^M_n(t)=\#\{i: \tau^i\leq t\}/n$, we have by construction that $\rho^M_n$ is an equilibrium with corresponding optimal stopping times $(\tau^{i})$ and that~\eqref{eq:CharNMaxEquilib} holds. To see the maximality, let $\rho_{n}$ be any $n$-player equilibrium and consider $(t,\omega)$ such that $\rho_n(t)(\omega)=k/n$. Again, $\rho_{n}$ must satisfy~\eqref{eq:NplayerEquilibCond}.
Let $k'$ be such that $\rho^M_n(t)(\omega)=k'/n$. If we had $k' < k$, then~\eqref{eq:CharNMaxEquilib} would imply that $\#\{Y_t^i(\omega)+c\, \frac{k-1}{n}\geq r\}\leq k-1$, contradicting~\eqref{eq:NplayerEquilibCond}. \end{proof} The following observations will be used in Section~\ref{se:convergenceGeneral} when we construct $n$-player equilibria converging to a given mean field equilibrium. \begin{remark}\label{rk:cutAndPaste} (i) Consider $n$-player equilibria $\rho$ and $\rho'$, a stopping time $t_{0}$ and assume that $\rho(t_{0})\leq \rho'(t_{0})$. Then there exists an $n$-player equilibrium $\varrho$ such that $$ \varrho\mathbf{1}_{[0,t_{0})}=\rho \mathbf{1}_{[0,t_{0})} \quad \mbox{and}\quad \varrho\mathbf{1}_{[t_{0},\infty)}=\rho'\mathbf{1}_{[t_{0},\infty)}. $$ Indeed, let $I_{0}$ be the set of agents that have stopped by time $t_{0}$ in equilibrium~$\rho$ and let $I_{1}$ be the analogue for $\rho'$. By~\eqref{eq:optStopTime} we necessarily have $I_{0}\subseteq I_{1}$. The equilibrium $\varrho$ is obtained by following the stopping times of $\rho$ on $[0,t_{0})$. At $t_{0}$, all agents in the group $J=I_{1}\setminus I_{0}$ stop (and this must be optimal as $\rho'$ is an equilibrium). After that the remaining agents act as in~$\rho'$. (ii) Extending the above, consider $n$-player equilibria $\rho$ and $\rho'$, stopping times $t_{0} \leq t_{1}$ and assume that $\rho(t_{0})\leq \rho'(t_{1})$. Then there exists an $n$-player equilibrium $\varrho$ such that \begin{equation}\label{eq:cutPastProperty} \varrho\mathbf{1}_{[0,t_{0})}=\rho \mathbf{1}_{[0,t_{0})} \quad \mbox{and}\quad \varrho\mathbf{1}_{[t_{1},\infty)}=\rho'\mathbf{1}_{[t_{1},\infty)}. \end{equation} Indeed, let $\rho_{1}$ be the minimal extension of $\rho$ after $t_{0}$ (cf.\ Remark~\ref{rk:minimalFromGivenEquilibrium}). Let $I_{0}$ be the set of agents that have stopped by time $t_{0}$ in equilibrium $\rho$ and let $I_{1}$ be the set of agents that have stopped by time $t_{1}$ in equilibrium $\rho'$. 
Again, we observe that $I_{0}\subseteq I_{1}$, due to~\eqref{eq:optStopTime} and the increase of $Y^{i}$. Moreover, $I_{1}$ must include all agents that stop in the construction of the minimal extension on $[t_{0},t_{1}]$. As a result, $\rho_{1}(t_{1})\leq \rho'(t_{1})$, and now the claim follows by applying~(i). (iii) A last generalization is that when $\rho(t_{0})\leq \rho'(t_{1})$ merely holds on some set $A\in\mathcal{G}_{t_{1}}$, then we can still construct an $n$-player equilibrium $\varrho$ satisfying~\eqref{eq:cutPastProperty} on~$A$. Indeed, $\varrho$ is found as in (ii) except that on $A^{c}$, agents continue to stop according to $\rho_{1}$ after $t_{1}$. \end{remark} \begin{remark}\label{rk:NplayerEquilibCondSuff} (i) The necessary condition~\eqref{eq:NplayerEquilibCond} is sufficient in the following sense. Fix $n$ and a stopping time $t_{0}$, and suppose there exists a $\mathcal{G}_{t_{0}}$-measurable random variable $k$ satisfying~\eqref{eq:NplayerEquilibCond} at $t_{0}$; i.e., \begin{equation*} \#\{Y_{t_{0}}^i+c\, \frac{k-1}{n}\geq r\}=k \quad\mbox{and}\quad \#\{Y_{t_{0}}^i+c\, \frac{k}{n}< r\}=n-k. \end{equation*} Then there exists an $n$-player equilibrium $\varrho$ such that $\varrho(t_{0})=k/n$. To construct $\varrho$, let agents stop as in the minimal equilibrium $\rho^{m}_{n}$ up to time $t_{0}$. By the argument at the end of the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium}, we must have $\rho^{m}_{n}(t_{0})\leq k/n$. At $t_{0}$, all remaining agents $i$ with $Y^{i}_{t_{0}} + c\, \frac{k-1}{n}\geq r$ stop, so that $\varrho(t_{0})=k/n$. After that, the remaining agents follow the construction in the proof of Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} starting with $K=\{i:\,\tau^{i}\leq t_{0}\}$. (ii) A variant of this holds when~\eqref{eq:NplayerEquilibCond} is satisfied on some set $A\in \mathcal{G}_{t_{0}}$, with the conclusion that $\varrho(t_{0})=k/n$ holds only on $A$.
Indeed, we construct $\varrho$ as above on $A$, whereas on $A^{c}$ we use $\rho^{m}_{n}$. (iii) For later use, we observe that if this construction is applied for two times $t_{0}\leq t_{1}$ and corresponding random variables $k_{0}\leq k_{1}$, the resulting equilibria satisfy $\varrho_{0}\leq\varrho_{1}$. \end{remark} \section{The Mean Field Game} \label{se:MFG} The game considered in this section is the ``toy model'' mean field game of \cite[Section~4]{Nutz.16}. Indeed, $(I, \mathcal{I}, \lambda)$ is an atomless probability space and we work on a so-called Fubini extension $(I\times\Omega,\Sigma,\mu)$ of the product $(I\times\Omega,\mathcal{I}\times\mathcal{G},\lambda\times P)$; see \cite[Section~3]{Nutz.16}. For each $i\in I$, let $Y^i_t\geq 0$ be a right-continuous, increasing, $\mathbb{G}$-progressively measurable process such that for each $t\geq 0$, $(i,\omega)\mapsto Y_t^i(\omega)$ is $\Sigma$-measurable and $Y_t^i,\ i\in I$ are $\lambda$-essentially pairwise i.i.d.; see also \cite[Definition 3.1]{Nutz.16}. Working on a Fubini extension ensures that such processes exist, as well as the validity of an Exact Law of Large Numbers. In all that follows, we assume that the c.d.f.\ $y\mapsto F_t(y)=P\{Y_t^i\leq y\}$ is continuous. Since $\lambda$ is atomless, each individual agent has zero mass and hence does not influence the state process $\rho(t)=\lambda\{i: \tau^i\leq t\}$. In particular, we do not distinguish $\rho$ and $\rho^{-i}$ and simply set $\alpha^i(t)=Y_t^i+c\rho(t)$. We recall that $\rho$ is an equilibrium if $\rho(t) = \lambda\{i:\, \tau^{i}\leq t\}$ where $\tau^{i}$ is as in~\eqref{eq:optStopTime} for $\lambda$-a.e.\ $i\in I$. Such a process may be random (see also~\cite{Nutz.16}). 
However, as is common in the mean field game literature, we pay special attention to equilibria which are deterministic due to the infinite number of players.\footnote{Note that the key message of this paper, namely that some mean field equilibria are not limits of $n$-player equilibria, is only amplified if more mean field equilibria are considered.} The following is an improved version of \cite[Proposition 4.1]{Nutz.16} with necessary and sufficient conditions. \begin{proposition}\label{pr:MFGequilib} A real function $\rho: \mathbb{R}_{+}\to[0,1]$ is a mean field game equilibrium if and only if it is increasing, right-continuous and \begin{equation}\label{eq:master} \rho(t)+F_t(r-c\rho(t))=1, \quad t\geq0. \end{equation} \end{proposition} \begin{proof} Suppose that $\rho$ is a mean field game equilibrium; then $\rho$ is clearly increasing. Since $Y_t^i,\ i\in I$ are $\lambda$-essentially pairwise i.i.d., the Exact Law of Large Numbers (e.g., \cite[Section~3]{Nutz.16}) states that $\lambda\{i:Y^i_t\leq u\}=F_{t}(u)$ for all $u$. Using also~\eqref{eq:optStopTime} and that $y\mapsto F_{t}(y)$ is continuous, we have \begin{equation}\label{eq:proofRhoRC} \rho(t)=\lambda\{i:\tau^i\leq t\}=\lambda\{i:Y^i_t+c\rho(t+)\geq r\}=1-F_t(r-c\rho(t+)). \end{equation} Recall that $Y^{i}$ has right-continuous paths. Using again the continuity of $F_{t}$, this implies that \begin{equation}\label{eq:jointRC} \mbox{$(t,u)\mapsto F_t(r-cu)$ is jointly right-continuous.} \end{equation} It follows that $t\mapsto 1-F_t(r-c\rho(t+))$ is right-continuous, and thus the left-hand side of~\eqref{eq:proofRhoRC} must also be right-continuous. That is, $\rho(t)=\rho(t+)$, and then~\eqref{eq:proofRhoRC} becomes~\eqref{eq:master}. Conversely, suppose that $\rho$ is a function with the stated properties.
Defining the corresponding optimal stopping times $\tau^{i}$ as in~\eqref{eq:optStopTime}, the Exact Law of Large Numbers shows that $$ \lambda\{i:\, \tau^{i}\leq t\}=\lambda\{i:\, Y^{i}_{t} + c\rho(t)\geq r\} =1-F_{t}(r-c\rho(t))=\rho(t); $$ that is, $\rho$ is an equilibrium. \end{proof} The following notions will be crucial in determining the convergence to the mean field limit. \begin{definition}\label{de:transversalDef} Fix $t\geq0$. A solution $u\in[0,1]$ of $u+F_t(r-cu)=1$ is called \emph{left-increasing-transversal} (or left-transversal for short) if \begin{equation}\label{eq:leftTrans} \mbox{for all $\varepsilon>0$ there is $u' \in(u-\varepsilon,u)$ such that $u'+F_t(r-cu')<1$} \end{equation} and \emph{right-increasing-transversal} (or right-transversal) if \begin{equation}\label{eq:rightTrans} \mbox{for all $\varepsilon>0$ there is $u' \in(u,u+\varepsilon)$ such that $u'+F_t(r-cu')>1$.} \end{equation} It is called \emph{increasing-transversal} if both~\eqref{eq:leftTrans} and~\eqref{eq:rightTrans} hold, and \emph{decreasing-transversal} if these hold with the inequality signs reversed. \end{definition} For instance, in Figure~\ref{fig:Comparison of special solutions}, $u^{m}$ is left-increasing-transversal and $u^{mrt}, u^{M}$ are right-increasing-transversal, but only $u^{Mlt}$ is increasing-transversal. A decreasing-transversal solution is also depicted. Next, we introduce a quartet of solutions that will be important in Section~\ref{se:convergenceExtremal}.
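To make Definition~\ref{de:transversalDef} concrete, the zeros of $u\mapsto u+F_t(r-cu)-1$ can be located and classified numerically. The sketch below is an illustration of ours with a hypothetical logistic c.d.f.\ $F_t$, chosen steep enough relative to $c$ that three zeros appear; all zeros in this example are transversal, so testing the sign of the function just left and right of each zero suffices (a tangential zero would fail both sign tests).

```python
import numpy as np

def F(y, m=0.5, s=0.05):
    # hypothetical logistic c.d.f.; small s means concentrated signals
    return 1.0 / (1.0 + np.exp(-(y - m) / s))

r, c = 1.0, 1.0                      # hypothetical parameter values
g = lambda u: u + F(r - c * u) - 1.0

def zeros_and_types(g, eps=1e-4, grid=10_001):
    """Locate the zeros of g on [0, 1] and label each as 'i'
    (increasing-transversal) or 'd' (decreasing-transversal) from the
    sign of g just left and right of the zero."""
    us = np.linspace(0.0, 1.0, grid)
    vals = g(us)
    roots = [u for u, v in zip(us, vals) if v == 0.0]
    for a, b, va, vb in zip(us[:-1], us[1:], vals[:-1], vals[1:]):
        if va * vb < 0:              # sign change: refine by bisection
            lo, hi = a, b
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
            roots.append(0.5 * (lo + hi))
    return [(u, "i" if g(u - eps) < 0.0 < g(u + eps) else "d")
            for u in sorted(roots)]

sols = zeros_and_types(g)            # pattern: i, d, i
```

With these parameters, the classifier reports two increasing-transversal zeros surrounding a decreasing-transversal one, a pattern of the type shown in Figure~\ref{fig:intro}.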
\begin{figure}[h] \begin{center} \begin{tikzpicture} \draw[line width=1pt,-latex] (-5,0) to (5,0) node[right=2pt] {$u$}; \fill (-4.2,0) circle (2pt) node[above=2pt] at (-4.2,0) {0}; \fill (4.2,0) circle (2pt) node[above=2pt] at (4.2,0) {1}; \draw[line width=1.5pt] (-5,-1) to[out=50,in=180] (-2.8,0) to[out=0,in=180] (-1.44,0) to[out=0,in=180] (-0.7,.4) to[out=0,in=180] (0.7,-.5) to[out=0,in=180] (2.1,.4) to[out=0,in=180] (2.8,0) to[out=0,in=210] (5,1.2) node[above=2pt] {$u+F_t(r-cu)$}; \fill (-2.8,0) circle (2pt) node[above=2pt]{$u^{m}$}; \fill (-1.44,0) circle (2pt) node[below=2pt]{$u^{mrt}$}; \fill (1.44,0) circle (2pt) node[above=8pt,left]{$u^{Mlt}$}; \fill (2.89,0) circle (2pt) node[below=2pt]{$u^{M}$}; \end{tikzpicture} \end{center} \caption{Solutions $u^m$, $u^{mrt}$, $u^{Mlt}$ and $u^M$} \label{fig:Comparison of special solutions} \end{figure} \begin{lemma}\label{le:solutionsEx} Fix $t\geq0$. The equation $u+F_t(r-cu)=1$ has a minimal solution $u^{m}\in[0,1]$, a maximal solution $u^{M}\in[0,1]$, a minimal right-transversal solution $u^{mrt}\in[0,1]$, and a maximal left-transversal solution $u^{Mlt}\in[0,1]$. \end{lemma} \begin{proof} Since $G(u):=u+F_t(r-cu)<1$ for $u<0$ and $G(u)>1$ for $u>1$, the existence of $u^{m}$ and $u^{M}$ is immediate from the continuity of $G$. The fact that $G(u)<1$ for all $u<u^{m}$ entails that $u^{m}$ is left-transversal, and since it follows directly from the definition that the set of left-transversal solutions is stable under increasing limits, it follows that $u^{Mlt}$ exists. The argument for $u^{mrt}$ is similar. \end{proof} As illustrated in Figure~\ref{fig:Comparison of special solutions}, these four solutions may be distinct, and while $u^{m}$ is automatically left-transversal, it can happen that $u^{mrt}$ is not. Similarly for $u^{M}$ and $u^{Mlt}$. We can also note that $u^{mrt}\leq u^{Mlt}$ may fail, say if the graph is replaced by a flat stretch on $[u^{m},u^{M}]$. 
But in more generic cases, and in particular whenever $u^{m}$ and $u^{M}$ are not local extrema, the quartet describes at most two distinct solutions $u^m=u^{mrt} \leq u^{Mlt}=u^M$ and these are then increasing-transversal. In view of Lemma~\ref{le:solutionsEx} we may define, given $t\geq0$, \begin{equation}\label{eq:partMFGequlibDefs} \rho^{m}(t)=u^{m},\quad\rho^{M}(t)=u^{M},\quad\rho^{mrt}(t)=u^{mrt},\quad\rho^{Mlt}(t)=u^{Mlt}. \end{equation} Using the increase of $Y_{t}$ and~\eqref{eq:jointRC}, one can check that $\rho^{m},\rho^{M},\rho^{mrt},\rho^{Mlt}$ are increasing, $\rho^{M}$ and $\rho^{mrt}$ are right-continuous, and $\rho^{m}$ and $\rho^{Mlt}$ are left-continuous (but not continuous in general). \begin{corollary}\label{co:minMaxEquilib} (i) If $\rho: \mathbb{R}_{+}\to[0,1]$ is any increasing function such that~\eqref{eq:master} holds, then $\rho(t+)$ is an equilibrium. (ii) The functions $t\mapsto \rho^m(t+)$ and $t\mapsto\rho^M(t)$ are the minimal and maximal equilibria of the mean field game; i.e., they are equilibria and any other equilibrium $\rho$ satisfies $\rho^m(t+)\leq \rho(t)\leq \rho^M(t)$ for all $t\geq0$. \end{corollary} \begin{proof} (i) If $\rho$ is any increasing function such that~\eqref{eq:master} holds, then the joint right-continuity in~\eqref{eq:jointRC} implies that $\rho(t+)+F_t(r-c\rho(t+))=1$ for all $t\geq0$. It now follows from Proposition~\ref{pr:MFGequilib} that $\rho(t+)$ is an equilibrium. (ii) Both $\rho^m(t+)$ and $\rho^M(t)$ are equilibria by~(i). If $\rho$ is any equilibrium, then it is necessarily right-continuous by Proposition~\ref{pr:MFGequilib} and thus $\rho^m\leq \rho\leq \rho^M$ implies $\rho^m(t+)\leq \rho(t)\leq \rho^M(t)$ for all $t\geq0$. \end{proof} \section{Convergence to Extremal Equilibria} \label{se:convergenceExtremal} The main goal of the last two sections is to understand which mean field equilibria are limits of $n$-player equilibria. 
In brief, we will see that mean field equilibria described by increasing-transversal solutions of~\eqref{eq:master} (on a sufficiently large set of times $t$) are such limits, whereas other equilibria need not be proper limits of $n$-player equilibria; they merely occur as parts of mixtures which are limits. In this section, we focus on the convergence to the minimal and maximal mean field equilibria; the less straightforward interior case is treated in the next section. As a first step, we relate limits of arbitrary $n$-player equilibria to mean field equilibria at a fixed time. We will see in Example~\ref{ex:twoType} that such limits need not be deterministic mean field equilibria as defined in the preceding section, hence the following result relates limits to mixtures of equilibria. This is in line with the results of~\cite{CarmonaDelarueLacker.17,Lacker.14} stating that $n$-player equilibria converge to ``weak'' equilibria of the mean field game, while also illustrating that randomization can indeed occur in a quite natural example. Given a closed set $A\subseteq\mathbb{R}$, we say that a sequence $(\xi_n)$ of random variables is \emph{asymptotically concentrated} on $A$ if $\lim_{n\to \infty}P(\xi_n\in A_{\varepsilon})=1$ for all $\varepsilon>0$, where $A_{\varepsilon}=\{x\in\mathbb{R}:\, d(x,A)<\varepsilon\}$ is the open $\varepsilon$-neighborhood of $A$. When $(\xi_n)$ is uniformly bounded, as will be the case below, this is equivalent to every weak cluster point of $(\xi_n)$ being concentrated on $A$. Moreover, for $t\geq0$, we denote the solutions of~\eqref{eq:master} by $$ \mathcal{U}(t)=\{u\in[0,1]: u+F_{t}(r-cu)=1\}. $$ \begin{proposition} \label{pr:convergenceGeneralEquilib} Fix $t\geq 0$ and let $(\rho_{n})_{n\geq1}$ be a sequence of $n$-player equilibria. Then $\rho_n(t)$ is asymptotically concentrated on $\mathcal{U}(t)$.
\end{proposition} \begin{proof} We first show that for any interval $[u_{0},u_{1}]\subseteq [0,1]$ such that $u\mapsto u+F_t(r-cu)$ is strictly smaller than $1$ on $[u_{0},u_{1}]$, \begin{equation}\label{eq:emptyIntSupport} P(u_{0} + \varepsilon' \leq \rho_n(t)\leq u_{1}-\varepsilon')\to 0 \quad\mbox{for all}\quad \varepsilon'>0. \end{equation} Indeed, let $u_{0}< u_{1}$ be as above. By increasing the value of $u_{1}$ if necessary, we may assume without loss of generality that $u\mapsto u+F_t(r-cu)$ attains its maximum over $[u_{0},u_{1}]$ at $u_{1}$. Given $0<\varepsilon<u_{1}-u_{0}$, we can then choose by continuity some $u \in (u_{1}-\varepsilon,u_{1})$ such that \begin{equation}\label{eq:leftGlobalMax} u'+F_t(r-cu')\leq u+F_t(r-cu)<1 \quad\mbox{for all}\quad u_{0}\leq u'\leq u. \end{equation} Furthermore, setting $$ \varepsilon_n(x)=\frac{\#\{Y_t^i+cx\geq r\}}{n}-(1-F_t(r-cx)),\quad x\in\mathbb{R} $$ and $\varepsilon_n=\sup_{x\in \mathbb{R}}\{|\varepsilon_n(x)|\}$, we have $\varepsilon_n\to 0$ a.s.\ by the uniform convergence in the Glivenko--Cantelli theorem. Let $X_i=\mathbf{1}_{\{Y_t^i+cu\geq r\}}$, then \begin{equation}\label{eq:epsNdef} \frac{X_1+\cdots+X_n}{n}=1-F_t(r-cu)+\varepsilon_n(u). \end{equation} Denote by $[x]$ the largest integer $k\leq x$. For any $[u_{0} n]+1 \leq l \leq [u n]$, let $Z_i^l=\mathbf{1}_{\{Y_t^i+c\, \frac{l}{n}\geq r\}}$, then similarly $$ \frac{Z_1^l+\cdots+Z_n^l}{n}=1-F_t(r-c\, \tfrac{l}{n})+\varepsilon_n(\tfrac{l}{n}). $$ On the event $\{Z_1^l+\cdots+Z_n^l =l\}$, we then have $$1+\varepsilon_n(\tfrac{l}{n})=\frac{l}{n}+F_t(r-c\, \tfrac{l}{n})\leq u+F_t(r-cu)$$ by~\eqref{eq:leftGlobalMax} and thus $$ \frac{X_1+\cdots+X_n}{n}=1-F_t(r-cu)+\varepsilon_n(u)\leq u-\varepsilon_n(\tfrac{l}{n})+\varepsilon_n(u)\leq u+2\varepsilon_n. 
$$ Combining this observation with~\eqref{eq:NplayerEquilibCond}, we have for all $[u_{0} n]+1 \leq l \leq [u n]$ that \begin{align*} \Big\{\rho_n(t)=\frac{l}{n}\Big\}&\subseteq \Big\{\#\big\{Y_t^i+c\, \tfrac{l}{n}\geq r\big\}=l\Big\}\\ &=\Big\{\frac{Z_1^l+\cdots+Z_n^l}{n}=\frac{l}{n}\Big\}\\ &\subseteq \Big\{\frac{X_1+\cdots+X_n}{n}\leq u+2\varepsilon_n\Big\}. \end{align*} Hence, $$\Big\{ \frac{[u_{0} n]+1}{n} \leq \rho_n(t)\leq \frac{[u n]}{n}\Big\}\subseteq \Big\{\frac{X_1+\cdots+X_n}{n}\leq u+2\varepsilon_n\Big\}$$ and thus \begin{align*} P\Big(\frac{[u_{0} n]+1}{n} \leq \rho_n(t)\leq \frac{[u n]}{n}\Big) &\leq P\Big(\frac{X_1+\cdots+X_n}{n}\leq u+2\varepsilon_n\Big)\\ &= P\Big(u+F_t(r-cu) \geq 1-2\varepsilon_n+\varepsilon_n(u)\Big)\to0 \end{align*} by~\eqref{eq:epsNdef} and~\eqref{eq:leftGlobalMax}. Since $\varepsilon>0$ was arbitrary, this shows~\eqref{eq:emptyIntSupport}. In a symmetric way, one can show the analogue of~\eqref{eq:emptyIntSupport} for intervals where $u\mapsto u+F_t(r-cu)$ is strictly larger than $1$. Since for any $\varepsilon>0$ the complement of $\mathcal{U}(t)_{\varepsilon}$ consists of finitely many intervals of one of these two types, the claim follows. \end{proof} Next, we narrow down the asymptotic support for the minimal and maximal $n$-player equilibria $\rho^m_n$ and $\rho^M_n$. We will see in Section~\ref{se:negativeResultsExtremal} that the following result is optimal and the limiting support is not a singleton in general. We recall the notation introduced in~\eqref{eq:partMFGequlibDefs}. \begin{lemma} \label{le:convergenceMinMaxEquilib} Fix $t\geq 0$. \begin{enumerate} \item The minimal $n$-player equilibrium $\rho^m_n(t)$ is asymptotically concentrated on $[\rho^m(t),\rho^{mrt}(t)]\cap \mathcal{U}(t)$. \item The maximal $n$-player equilibrium $\rho^M_n(t)$ is asymptotically concentrated on $[\rho^{Mlt}(t),\rho^M(t)]\cap \mathcal{U}(t)$. 
\end{enumerate} \end{lemma} \begin{proof} (i) In view of Proposition~\ref{pr:convergenceGeneralEquilib} and the definition of $\rho^{m}(t)$, it suffices to show that \begin{equation}\label{eq:proofMinMaxConv} P(\rho^m_n(t)\geq \rho^{mrt}(t)+\varepsilon')\to 0 \quad\mbox{for all}\quad \varepsilon'>0. \end{equation} Let $\varepsilon>0$. As $\rho^{mrt}(t)$ is right-transversal, we can find $u \in (\rho^{mrt}(t),\rho^{mrt}(t)+\varepsilon)$ such that $1-F_t(r-cu)<u$. For $n$ large enough, we then have $\rho^{mrt}(t)< [u n]/n\leq u$. Let $X_i=\mathbf{1}_{\{Y_t^i+cu\geq r\}}$, then $$ \frac{X_1+\cdots+X_n}{n}\to EX_i=1-F_t(r-cu)\quad\mbox{a.s.} $$ by the Law of Large Numbers. Hence, $$ \frac{X_1+\cdots+X_n}{n}-\frac{[u n]}{n}\to 1-F_t(r-cu)-u<0\quad\mbox{a.s.} $$ Using also~\eqref{eq:CharNMinEquilib}, we conclude that \begin{align*} P(\rho^m_n(t)\geq u)&\leq P\Big(\rho^m_n(t)\geq \frac{[u n]}{n}\Big)\\ &\leq P\Big(\#\big\{Y_t^i+c\, \frac{[u n]}{n}\geq r\big\}\geq [u n]\Big)\\ &\leq P\Big(\frac{\#\{Y_t^i+cu\geq r\}}{n}\geq \frac{[u n]}{n}\Big)\\ &=P\Big(\frac{X_1+\cdots+X_n}{n}-\frac{[u n]}{n}\geq 0\Big)\to 0. \end{align*} As $\varepsilon>0$ was arbitrary, the above implies~\eqref{eq:proofMinMaxConv}. (ii) The arguments are similar to~(i) and therefore omitted. \end{proof} Next, we introduce an appropriate notion of convergence for dynamic equilibria as required for our main results. Note that given an increasing function, its right-continuous and left-continuous versions (and all functions between these) differ only by the allocation of the function value at the (countably many) jumps. The fact that mean field equilibria are right-continuous, cf.\ Proposition~\ref{pr:MFGequilib}, reflects the fact that agents stopping at time $t$ are counted as having left the game at time $t$, whereas left-continuity would correspond to counting them as leaving immediately after~$t$.
Since this difference is not fundamental, it seems reasonable to consider limits ``up to taking right-continuous versions.'' This has been accomplished by notions of so-called Fatou convergence, e.g.~\cite{Kramkov.96, Zitkovic.02}, in other areas of stochastic analysis. For increasing functions $\varphi_{n},\varphi$ on $\mathbb{R}_{+}$, we have that $(\liminf_{n}\varphi_{n})(t+)=(\limsup_{n}\varphi_{n})(t+) = \varphi(t+)$ holds for all $t\in\mathbb{R}_{+}$ if and only if $\lim \varphi_{n}(t)=\varphi(t)$ for all $t$ in a dense subset $D\subseteq\mathbb{R}_{+}$. This motivates the following. \begin{definition}\label{de:Fatou} A sequence $(\rho_{n})_{n\geq1}$ of $n$-player equilibria \emph{Fatou converges in probability} to a mean field equilibrium $\rho$ if there exists a dense set $D\subseteq\mathbb{R}_{+}$ such that $\rho_{n}(t)\to\rho(t)$ in probability for all $t\in D$. \end{definition} We note that by a diagonalization procedure, Fatou convergence in probability implies Fatou convergence a.s.\ along a subsequence $(n_{k})$, where the a.s.\ convergence is defined by direct analogy to the above. In particular, it then follows that the right-continuous versions of $\liminf_{k}\rho_{n_{k}}$ and $\limsup_{k}\rho_{n_{k}}$ coincide with $\rho$ a.s. With these notions in place, we can establish the convergence of extremal equilibria in the increasing-transversal case. (Note that the extremal equilibria cannot be decreasing-transversal; they are either increasing-transversal or tangential.) \begin{theorem}\label{th:convUnderH} Suppose that for all $t$ in a dense subset $D\subseteq \mathbb{R}_{+}$, the minimal solution $u\in[0,1]$ of $u+F_t(r-cu)=1$ is increasing-transversal. Then the minimal $n$-player equilibria $\rho^{m}_{n}$ Fatou converge in probability to the minimal mean field equilibrium as $n\to\infty$. The analogous assertion holds for the maximal equilibria $\rho^{M}_{n}$. \end{theorem} \begin{proof} By the hypothesis, $\rho^m(t)=\rho^{mrt}(t)$ for $t\in D$. 
Thus, Lemma~\ref{le:convergenceMinMaxEquilib} implies that $\lim \rho^{m}_{n}(t)=\rho^m(t)=\rho^{mrt}(t)$ in probability for $t\in D$. The analogue holds for $\rho^{M}_{n}$. \end{proof} Next, we discuss the transversality condition in more detail. In fact, if uniqueness holds for the mean field game, the condition is automatically satisfied and we conclude the following. \begin{corollary}\label{co:uniqueness} The following are equivalent: \begin{enumerate} \item the mean field game has a unique equilibrium $\rho$, \item the equation $u+F_t(r-cu)=1$, $u\in [0,1]$, has a unique solution for a dense set of $t\in\mathbb{R}_{+}$. \end{enumerate} In that case, any sequence $(\rho_{n})_{n\geq1}$ of $n$-player equilibria Fatou converges in probability to $\rho$. \end{corollary} \begin{proof} If (i) holds, then $\rho^{m}(t+)=\rho^{M}(t)$ for all $t\geq0$ by Corollary~\ref{co:minMaxEquilib}, and~(ii) follows since $\rho^{m}(t+)=\rho^{m}(t)$ except at the (countably many) jumps of $\rho^{m}$. The converse holds because equilibria are right-continuous; cf.\ Proposition~\ref{pr:MFGequilib}. Finally, if $u+F_t(r-cu)=1$ has a unique solution, this solution is necessarily increasing-transversal since $u+F_t(r-cu)<1$ for $u<0$ and $u+F_t(r-cu)>1$ for $u>1$. \end{proof} While we will see below that the transversality condition in Theorem~\ref{th:convUnderH} cannot be dropped, we can argue that this condition holds for a generic choice of signals $Y^{i}$. More generally, we discuss the following hypothesis (again, note that the extremal solutions can never be decreasing-transversal). \begin{definition}\label{de:H} We say that Hypothesis~(H) holds if for all $t$ in a dense subset of $\mathbb{R}_{+}$, any solution $u\in[0,1]$ of $u+F_t(r-cu)=1$ is increasing-transversal or decreasing-transversal.
\end{definition} While this hypothesis does not hold for all choices of $Y^{i}$, the exceptional set is small in the sense that a ``typical'' $F_{t}$ will not have a local extremum of $u\mapsto u+F_t(r-cu)$ at a solution of $u+F_t(r-cu)=1$, so that the latter must be transversal. As $t$ varies over $\mathbb{R}_{+}$, the non-transversal case is somewhat more likely to occur, but typically at only finitely many $t$ so that the hypothesis still holds. There seems to be no obvious way to quantify this. However, we state the following result which confirms the general intuition and shows that Hypothesis~(H) is always valid after a small perturbation of~$Y^{i}$. \begin{proposition} \label{prop:generic} For every $\delta>0$ there exists $0\leq \varepsilon\leq\delta$ such that after replacing $Y^{i}_{t}$ with $Y^{i}_t+\varepsilon$, Hypothesis~(H) is satisfied. \end{proposition} \begin{proof} Let us first observe that for any real function $f(x)$, the set of local minimum values $S=\{f(x):\, x$ is a local minimizer of $f\}$ is countable. Indeed, for every $s\in S$ there is an open interval $I_s$ with rational endpoints such that $s=\min\{f(x):x\in I_s\}$. If $s,t\in S$ and $I_s=I_t$, then $s=t$, showing that $I: S\to \mathbb{Q}\times\mathbb{Q}$ is injective. For fixed $t\geq 0$, denote by $S(t)$ the set of all local minimum and maximum values of $u\mapsto u+F_t(r-cu)-1$; then $\cup_{t\in \mathbb{Q}_{+}}S(t)$ is again countable. Thus, we can find a sequence $a_{k}\downarrow 0$ with $a_k \notin \cup_{t\in \mathbb{Q}_{+}}S(t)$. Set $\varepsilon_k=ca_k$ and note that $\varepsilon_k\leq\delta$ for $k$ large enough. Then, passing from $Y_{t}$ to $Y_t^{\varepsilon_k}=Y_t+\varepsilon_k$, the function under consideration is $$ u\mapsto u+F_t^{\varepsilon_k}(r-cu)=u+F_t(r-cu-\varepsilon_k)=(u+a_k)+F_t(r-c(u+a_k))-a_k. $$ By the construction of $a_k$, we know that 1 is not a local extremum value of this function for $t$ in the dense set $\mathbb{Q}_{+}$. However, if a solution of $u+F_t^{\varepsilon_k}(r-cu)=1$ failed to be transversal, then 1 would be the value at a local extremum.
\end{proof} \subsection{Counterexamples} \label{se:negativeResultsExtremal} In this section, we illustrate that the assertion of Theorem~\ref{th:convUnderH} may fail without the transversality condition, and more generally that the intervals in Lemma~\ref{le:convergenceMinMaxEquilib} cannot be improved. The examples presented here are essentially static, meaning that $Y^{i}_{t}$ does not depend on $t$. For purely technical reasons, namely to ensure the finiteness of the optimal stopping times~\eqref{eq:optStopTime} as assumed throughout, we introduce a time horizon $T\in(0,\infty)$ at which $Y^{i}_{t}$ jumps to a value larger than $r$, thus ensuring that all players stop. In the first example, we allow for atoms in the distribution of $Y^{i}_{t}$ to obtain an analytically tractable example. We argue below that the atoms are not essential to the observed phenomenon. \begin{example} \label{ex:twoType} Let $r=c=1$ and let $Y^i_{t}=Y^i_{0}$, $0\leq t< T$ be constant i.i.d.\ processes such that $\Law(Y^i_t)=\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_2$ for all $0\leq t<T$, and set $Y^{i}_{t}=2$ for $t\geq T$. Then the law of the minimal $n$-player equilibrium $\rho^m_n(t)$ converges to $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ for all $0\leq t <T$. \end{example} \begin{proof} Proposition~\ref{pr:CharacterizationOfNMinEquilibrium} yields two cases for every $\omega$. If strictly less than $n/2$ of the realizations $\{Y_0^i(\omega),\, i=1,\dots, n\}$ equal 2, all players~$i$ with $Y_0^i(\omega)=2$ stop at $t=0$ and those with $Y_0^i(\omega)=1/2$ do not stop before $T$. If instead $n/2$ or more of the realizations equal 2, then all agents stop at $t=0$. It follows that the law of $\rho^m_n(t)\equiv\rho^m_n(0)$ converges to $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ as $n\to\infty$.
\end{proof} The limit law $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ can be seen as a mixture of the deterministic mean field equilibria $\rho^{m}(t)\equiv \frac{1}{2}$ and $\rho^{mrt}(t)\equiv 1$. In fact, with an appropriate definition allowing for randomized equilibria, this mixture is itself an equilibrium. However, a remarkable conclusion is that there are no $n$-player equilibria converging to the minimal equilibrium $\rho^{m}$. \begin{corollary}\label{co:twoTypeMinNotLimit} In the context of Example~\ref{ex:twoType}, $\rho^{m}(t)$ is not a weak accumulation point of $n$-player equilibria, for any $0\leq t<T$. \end{corollary} \begin{proof} Suppose that there exists a subsequence $\rho_{k}=\rho_{n_{k}}$ of $n_{k}$-player equilibria such that $\rho_{k}(t)\to\rho(t)=1/2$ weakly. Then $\rho_{k}(t)\geq\rho_{k}^{m}(t)$ and $\Law(\rho_{k}^{m}(t))\to \frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ yield a contradiction. \end{proof} It may be useful to contrast this with the fact that $\rho^{m}$ is a limit of \emph{approximate} Nash equilibria. To wit, if all players~$i$ with $Y_0^i(\omega)=2$ stop at $t=0$ whereas those with $Y_0^i(\omega)=1/2$ do not stop until $T$, we obtain an approximate Nash equilibrium converging to $\rho^{m}$ as $n\to\infty$. The following example is a smooth version of Example~\ref{ex:twoType} where $Y^{i}_{t}$ admits a density; see also Figure~\ref{fig:cdf of different density functions}(b). It is not analytically tractable but the qualitative behavior is the same. 
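Example~\ref{ex:twoType} is simple enough to check numerically. The following is a minimal Monte Carlo sketch (Python with numpy; the function and variable names, sample size and seed are our own choices, not from the text). Writing $G(k)=\#\{i: Y^i+ck/n\geq r\}$, it scans each sample for the smallest $k$ with $G(k)=k=G(k-1)$, i.e.\ the minimal $n$-player equilibrium, and tabulates the resulting values of $\rho^m_n(0)$.

```python
import numpy as np

# Numerical check of Example ex:twoType: r = c = 1 and
# Law(Y^i) = (1/2) delta_{1/2} + (1/2) delta_2 at time t = 0.
rng = np.random.default_rng(0)
r = c = 1.0

def minimal_equilibrium(y):
    """Smallest k with G(k) = k = G(k-1), where G(k) = #{i : y_i + c*k/n >= r}."""
    n = y.size
    ys = np.sort(y)
    ks = np.arange(-1, n + 1)
    # #{y_i >= r - c*k/n} via binary search in the sorted sample
    G = n - np.searchsorted(ys, r - c * ks / n, side="left")
    good = np.flatnonzero((G[1:] == ks[1:]) & (G[:-1] == ks[1:]))
    # good is nonempty for this law: k = n always satisfies both conditions
    return ks[1:][good[0]] / n

n, samples = 2000, 400
rho = np.array([minimal_equilibrium(rng.choice([0.5, 2.0], size=n))
                for _ in range(samples)])
frac_at_one = (rho == 1.0).mean()
```

Across samples, roughly half of the runs return $\rho^m_n(0)=1$ (everybody stops) and the rest return $\#\{Y^i=2\}/n$, which concentrates near $1/2$, consistent with the limit law $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$.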
\begin{figure} \centering \subfigure[$\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_2$]{\label{fig:fft:a} \begin{minipage}[c]{0.3\textwidth} \centering \begin{tikzpicture} \draw[-latex] (0,0) -- (3,0); \draw[-latex] (0,0) -- (0,3); \node[below,left] at (0,0) {0}; \draw[dashed] (0,2.5) -- (2.5,0); \draw[very thick] (0,1.25) -- (1.25,1.25) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{}; \draw[dotted] (1.25,1.25) -- (1.25,0); \draw[very thick] (1.25,0) node[minimum width=3pt,inner sep=0,draw, fill=white,circle]{} -- (2.5,0) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{}; \end{tikzpicture} \end{minipage} } \subfigure[$4\, \mathbf{1}_{[\frac{3}{8},\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:b} \begin{minipage}[c]{0.3\textwidth} \centering \begin{tikzpicture} \draw[-latex] (0,0) -- (3,0); \draw[-latex] (0,0) -- (0,3); \node[below,left] at (0,0) {0}; \draw[dashed] (0,2.5) -- (2.5,0); \draw[dotted] (1.25,1.25) -- (1.25,0); \draw[very thick] (0,1.25) -- (1.25,1.25) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{}; \draw[very thick] (1.25,1.25) -- (1.5625,0) -- (2.5,0) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{}; \end{tikzpicture} \end{minipage} } \subfigure[$\mathbf{1}_{[0,\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:c} \begin{minipage}[c]{0.3\textwidth} \centering \begin{tikzpicture} \draw[-latex] (0,0) -- (3,0); \draw[-latex] (0,0) -- (0,3); \node[below,left] at (0,0) {0}; \draw[dashed] (0,2.5) -- (2.5,0); \draw[dotted] (1.25,1.25) -- (1.25,0); \draw[very thick] (0,1.25) -- (1.25,1.25) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{}; \draw[very thick] (1.25,1.25) -- (2.5,0) node[minimum width=2.5pt,inner sep=0,draw, fill,circle]{}; \end{tikzpicture} \end{minipage} } \caption{Graphs of $F_t(1-u)$ (solid) and $1-u$ (dashed)} \label{fig:cdf of different density functions} \end{figure} \begin{example} \label{ex:twoTypeDensity} Let $r=c=1$ and let $Y^i_{t}=Y^{i}_{0}$, $0\leq t<
T$ be i.i.d.\ processes such that the law of $Y^i_t$ has the density $f_{t}(y)=4\, \mathbf{1}_{[\frac{3}{8},\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$ for all $0\leq t<T$, and let $Y^i_{t}=2+X^{i},$ $t\geq T$, where $X^{i}$ are i.i.d.\ with a continuous distribution on $[0,1]$. Then the simulation of $\rho^{m}_{n}(t)$, cf.\ Figure~\ref{fig:fft:I}, suggests that the law of $\rho^m_n(t)$ again converges to $\frac{1}{2}\delta_{\frac{1}{2}}+\frac{1}{2}\delta_{1}$ for $0\leq t<T$, which is a mixture of the deterministic mean field equilibria $\rho^{m}(t)\equiv \frac{1}{2}$ and $\rho^{mrt}(t)\equiv 1$. \end{example} In the third example, the mean field game admits a continuum of solutions; see also Figure~\ref{fig:cdf of different density functions}(c). \begin{example} \label{ex:twoTypeDensityInterpol} Consider the setting of Example~\ref{ex:twoTypeDensity} with density $f_{t}(y)=\mathbf{1}_{[0,\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$. In this case, we again have $\rho^{m}(t)\equiv \frac{1}{2}$ and $\rho^{mrt}(t)\equiv 1$, but now all values in between also correspond to mean field equilibria. The simulation of $\rho^{m}_{n}(t)$, cf.~Figure \ref{fig:fft:II}, illustrates that the law of $\rho^m_n(t)$ converges to a mixture of all these equilibria. \end{example} \begin{figure}[thb] \centering \subfigure[density $4\, \mathbf{1}_{[\frac{3}{8},\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:I} \begin{minipage}[c]{0.4\textwidth} \centering \includegraphics[width=2.2in,height=1.7in]{simulation1.jpg} \end{minipage} } \subfigure[density $\mathbf{1}_{[0,\frac{1}{2}]}(y)+\mathbf{1}_{[\frac{3}{2},2]}(y)$]{\label{fig:fft:II} \begin{minipage}[c]{0.5\textwidth} \centering \includegraphics[width=2.2in,height=1.7in]{simulation2.jpg} \end{minipage} } \caption{Simulations for $n$-player minimal equilibria ($n=10'000$).
Locations $k/n$ of equilibria with $k$ stopped players on the $x$-axis, number of samples with that equilibrium on the $y$-axis.} \label{fig:simulations for $n$-player game minimal equilibrium} \end{figure} When the minimal mean field equilibrium is not increasing-transversal, the preceding examples illustrate that it need not be the limit of the minimal $n$-player equilibria. The final example shows that both cases are possible: it may be the limit even if it is not increasing-transversal. \begin{example} \label{ex:twoTypeDensityGood} Consider the setting of Example~\ref{ex:twoTypeDensity} with density $f_{t}(y)=2\,\mathbf{1}_{[1/2,1]}(y)$. In this case, we easily compute that $\rho^{m}(t)\equiv 0$ and $\rho^{mrt}(t)\equiv 1$. Nevertheless, $\rho^{m}_{n}(t)\equiv0$ due to $Y^{i}_{t}<r$ a.s., and thus $\rho^{m}_{n}(t)\to\rho^{m}(t)$. \end{example} \section{Convergence to General Equilibria}\label{se:convergenceGeneral} Theorem~\ref{th:convUnderH} shows that if the minimal and maximal mean field equilibria are increasing-transversal (on a dense set), then they are the limits of the minimal and maximal $n$-player equilibria. Indeed, the latter are obvious candidates for sequences converging to these mean field equilibria. For mean field equilibria that are not extremal, there are no obvious candidates for the approximating $n$-player equilibria. The following result shows that increasing-transversal equilibria are still limits; however, the approximating $n$-player equilibria have no simple description. We will see in Section~\ref{se:decrasingTransversal} that the analogue for decreasing-transversal solutions fails. \subsection{Increasing-Transversal Equilibria} \begin{theorem}\label{th:convToIncreasing} Let $\rho$ be a mean field equilibrium. Suppose that for all $t$ in a dense subset $D\subseteq \mathbb{R}_{+}$, the solution $u:=\rho(t)$ of $u+F_t(r-cu)=1$ is increasing-transversal. 
Then there exist $n$-player equilibria $(\rho_{n})_{n\geq1}$ which Fatou converge in probability to $\rho$ as $n\to\infty$. \end{theorem} The first step of the proof is to solve a static version of the problem. This will be accomplished by a fixed point argument for monotone functions. \begin{lemma}\label{le:convToIncreasingStatic} Let $t\geq0$, let $u\in[0,1]$ be an increasing-transversal solution of $u+F_t(r-cu)=1$ and let $\varepsilon,\delta>0$. There are $n_{0}\in\mathbb{N}$ and $A\in\mathcal{G}_{t}$ with $P(A)>1-\varepsilon$ such that for all $n\geq n_{0}$ and $\omega\in A$, there exists $k(\omega)\in\mathbb{N}$ such that $|u-k(\omega)/n|\leq \delta$ and~\eqref{eq:NplayerEquilibCond} holds; i.e., \begin{equation*} \#\{Y_t^i(\omega)+c\, \frac{k(\omega)-1}{n}\geq r\}=k(\omega) \quad\!\mbox{and}\!\quad \#\{Y_t^i(\omega)+c\, \frac{k(\omega)}{n}< r\}=n-k(\omega). \end{equation*} Moreover, $k(\omega)$ can be chosen as a measurable function of $Y^{1}_{t}(\omega),\dots, Y^{n}_{t}(\omega)$. \end{lemma} \begin{proof} Since $u$ is increasing-transversal, there are points $u_{0},u_{1}\in\mathbb{R}$ such that $u-\delta/2 \leq u_{0} < u < u_{1} \leq u+ \delta/2$ and $$ u_{0}< 1 - F_{t}(r-cu_{0}) \leq 1 - F_{t}(r-cu_{1}) < u_{1}, $$ where the inequality in the middle is due to the monotonicity of $F_{t}$. The Glivenko--Cantelli theorem then implies that the event $A_{n}$ consisting of all~$\omega$ such that $$ [nu_{0}]\leq \#\{Y_t^i(\omega)+c\, \frac{[nu_{0}]-1}{n}\geq r\} \leq \#\{Y_t^i(\omega)+c\, \frac{[nu_{1}]}{n}\geq r\} \leq [nu_{1}] $$ satisfies $P(A_{n})\to 1$. For fixed $n$ and $\omega\in A_{n}$, consider the integer-valued function $$ k\mapsto G(k):=\#\{Y_t^i(\omega)+c\, \tfrac{k}{n}\geq r\}. $$ By the above, $G$ maps $\{[nu_{0}]-1,[nu_{0}],\dots,[nu_{1}]\}$ into $\{[nu_{0}],\dots,[nu_{1}]\}$. Moreover, $G$ is monotone increasing.
Lemma~\ref{le:fixedpoint} below then yields the existence of $[nu_{0}]\leq k \leq [nu_{1}]$ such that $G(k-1)=G(k)=k$, which is exactly~\eqref{eq:NplayerEquilibCond}. By the choice of $u_{0},u_{1}$ we also have $|u-k/n|\leq \delta$ for $n$ large. Moreover, it is clear from the proof of Lemma~\ref{le:fixedpoint} that $k$ is a measurable function of $Y^{1}_{t},\dots, Y^{n}_{t}$. \end{proof} \begin{lemma}\label{le:fixedpoint} Let $x_{0}<x_{1}<\dots < x_{N}$ be real numbers for some $N\geq 1$. Let $J=\{x_{1},\dots,x_{N}\}$ and $J_{0}=\{x_{0}\}\cup J$. If $f: J_{0}\to J$ is monotone increasing, there exists $k\in\{1,\dots,N\}$ such that $f(x_{k-1})=f(x_{k})=x_{k}$. \end{lemma} \begin{proof} Since $f$ is monotone and maps $J$ into $J$, it must have a fixed point in~$J$. We claim that the minimal $k\in\{1,\dots,N\}$ such that $f(x_{k})=x_{k}$ has the desired property. Indeed, if $k=1$, then $f(x_{0})\in J$ gives $f(x_{0})\geq x_{1}$, while monotonicity gives $f(x_{0})\leq f(x_{1})=x_{1}$; hence $f(x_{0})=f(x_{1})=x_{1}$ and the proof is complete. If $k>1$, we observe that $f(x_{l-1})\geq x_{l}$ for all $1\leq l\leq k$. Indeed, $f(x_{1})\geq x_{2}$ since $f(x_{1})\in J$ and $x_{1}$ is not a fixed point, but then $f(x_{2})\geq x_{3}$ since $x_{2}$ is not a fixed point and $f$ is monotone, and so on. In particular, $f(x_{k-1})\geq x_{k}$ and thus $f(x_{k-1})=f(x_{k})=x_{k}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:convToIncreasing}] Fix $N\in\mathbb{N}$ and let $t_{1}<\dots<t_{N}$ be in $D$. For $n$ large enough, Lemma~\ref{le:convToIncreasingStatic} allows us to find sets $A_{l}\in\mathcal{G}_{t_{l}}$ with $P(A_{l})>1-N^{-2}$ and random variables $k_{l}$ satisfying $|\rho(t_{l})-k_{l}/n|\leq \delta:=1/N$ and~\eqref{eq:NplayerEquilibCond} on~$A_{l}$, for $1\leq l \leq N$. Following Remark~\ref{rk:NplayerEquilibCondSuff}, we can construct $n$-player equilibria $\rho^{l}_{n}$ such that $\rho^{l}_{n}(t_{l})=k_{l}/n$ on $A_{l}$.
Next, we argue that these $\rho^{l}_{n}$ can be chosen such that \begin{equation}\label{eq:FPordered} \rho^{1}_{n}(t_{1})\leq \cdots \leq \rho^{j}_{n}(t_{j}) \mbox{ on } A_{1}\cap\cdots\cap A_{j},\quad 1\leq j\leq N. \end{equation} Indeed, we have $\rho(t_{l})\leq\rho(t_{l+1})$ since $\rho$ is increasing. If $\rho(t_{l})<\rho(t_{l+1})$, then we can ensure $\rho^{l}_{n}(t_{l})\leq \rho^{l+1}_{n}(t_{l+1})$ on $A_{l}\cap A_{l+1}$ simply by choosing $\delta<|\rho(t_{l})-\rho(t_{l+1})|/2$ in Lemma~\ref{le:convToIncreasingStatic}. If $\rho(t_{l})=\rho(t_{l+1})$, we can observe that if the construction in the proof of Lemma~\ref{le:convToIncreasingStatic} is executed twice with $t_{l}$ and $t_{l+1}$, then by choosing the same parameters $u_{0},u_{1}$ the corresponding functions $G_{l}$ and $G_{l+1}$ satisfy $G_{l}\leq G_{l+1}$ since $Y^{i}$ is increasing. This implies that the corresponding minimal fixed points produced by the proof of Lemma~\ref{le:fixedpoint} satisfy $\rho^{l}_{n}(t_{l})\leq \rho^{l+1}_{n}(t_{l+1})$. In view of~\eqref{eq:FPordered}, we can use Remark~\ref{rk:cutAndPaste}(iii) to construct from the equilibria $(\rho^{l}_{n})_{1\leq l\leq N}$ another $n$-player equilibrium $\varrho_{n}$ with the property that $\varrho_{n}(t_{l})= \rho^{l}_{n}(t_{l})$ for all $1\leq l \leq N$ on $A^{N}:=\cap_{l=1}^{N}A_{l}$. To summarize, $\varrho_{n}$ satisfies $|\rho(t_{l})-\varrho_{n}(t_{l})|\leq 1/N$ for all $1\leq l \leq N$ on the set $A^{N}$ which has probability $P(A^{N})\geq 1-N^{-1}$. By letting $t_{1},\dots,t_{N}$ exhaust a countable dense subset $D'\subseteq D\subseteq \mathbb{R}_{+}$ as $N\to\infty$, this shows that there exist $n$-player equilibria $(\varrho_{n})_{n\geq1}$ such that $\varrho_{n}(t)\to\rho(t)$ in probability for all $t\in D'$ and the proof is complete. \end{proof} \begin{remark}\label{rk:convergenceToMixtures} The construction leading to Theorem~\ref{th:convToIncreasing} is pathwise and thus extends beyond deterministic mean field equilibria.
For instance, let $\rho^{1},\rho^{2}$ be such equilibria satisfying the assumption of Theorem~\ref{th:convToIncreasing}, let $\lambda\in[0,1]$ and suppose that the $n$-player game admits a set $A\in\mathcal{G}_{0}$ with $P(A)=\lambda$. Then we can apply the construction separately on $A$ and $A^{c}$ to find $n$-player equilibria $\rho_{n}$ converging to the mixture $\lambda \delta_{\rho^{1}} + (1-\lambda)\delta_{\rho^{2}}$ on a dense set. In the same vein, convergence to more general mixtures could be analyzed. \end{remark} \subsection{Decreasing-Transversal Equilibria} \label{se:decrasingTransversal} Let us begin with a simulation and then establish that the observations correspond to a general result. \begin{figure}[bh] \centering \begin{tikzpicture}[scale=0.95] \draw[-latex] (0,0) -- (4,0); \draw[-latex] (0,0) -- (0,4); \node[below] at (0,0) {\small 0}; \node[below] at (3,0) {\small 1}; \node[left] at (0,3) {\small 1}; \node[below] at (1.5,0) {\small $0.5$}; \draw[dashed] (0,3) -- (3,0); \draw[very thick,domain=0:1.5] plot(\x,{3*(1-2*((\x)/3)^2)}); \draw[very thick,domain=1.5:3] plot(\x,{6*(1-(\x)/3)^2}); \draw[dotted] (1.5,0) -- (1.5,1.5); \fill (0,3) circle (2pt); \fill (3,0) circle (2pt); \fill (1.5,1.5) circle (2pt); \end{tikzpicture} \hspace{.5em} \includegraphics[scale=.55]{simulation3.jpg} \caption{C.d.f.\ and simulation of Example~\ref{ex:tent}. The decreasing-transversal equilibrium at $0.5$ can only be approximated in about 12.5\% of the samples.} \label{fig:tent} \end{figure} \begin{example} \label{ex:tent} Let $r=c=1$ and let $Y^i_{t}=Y^i_{0}$, $0\leq t< T$ be constant i.i.d.\ processes such that $\Law(Y^i_t)$ has the tent-shaped probability density $f(x)=2-4|x-1/2|$, $x\in[0,1]$. As illustrated in Figure~\ref{fig:tent} (left panel), the corresponding equation~\eqref{eq:master} has a decreasing-transversal solution at $u=1/2$ and increasing-transversal solutions at $u=0$ and $u=1$.
For the game with $n=10'000$ players, the histogram in Figure~\ref{fig:tent} shows the values of $k/n$ such that $k$ satisfies the equilibrium conditions~\eqref{eq:NplayerEquilibCond}. The simulation illustrates the convergence to the equilibria at $u=0,1$ as proved in Theorem~\ref{th:convToIncreasing} but also suggests that $u=1/2$ is not a limit of $n$-player equilibria; indeed, only about 12.5\% of the samples allow for an $n$-player equilibrium with $k/n$ close to~$1/2$. In Proposition~\ref{pr:expectedNumber}, we will establish an asymptotic upper bound which yields $e^{-2}\approx 13.5\%$ in this example. \end{example} In the remainder of this section we assume that $F_{t}$ admits a continuous density~$f_{t}$. Let $x\in[0,1]$ be a solution of $u+F_{t}(r-cu)=1$. We say that $x$ is \emph{strongly decreasing-transversal} if $\partial_{u}|_{u=x}[u+F_{t}(r-cu)]<0$ or equivalently $$ f_{t}(r-cx)>c^{-1}. $$ We note that $x$ is then necessarily in $(0,1)$ and decreasing-transversal in the sense of Definition~\ref{de:transversalDef}; the only difference (given the continuity assumption) is that we exclude the case where $u+F_{t}(r-cu)$ has a vanishing derivative at $x$ (see also Remark~\ref{rk:tangentialDecreasingCase}). Intuitively, when $f_{t}(r-cx)$ is large, there are many similar agents (in terms of values of $Y^{i}$ and relative to the interaction constant $c$) close to such a state. As a result, these agents may tend to coordinate and either all stop or all not stop: it may be impossible to break up the group\footnote{Clearly, this intuition does not explain the phase-transition character of the phenomenon. To gather the intuition for a large density, it may be useful to consider the limiting case of an atom in~$F_{t}$: all agents corresponding to the atom make the same stopping decision.} and create an $n$-player equilibrium close to $x$. 
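The $12.5\%$ figure from Example~\ref{ex:tent} can be reproduced by a short Monte Carlo experiment. The sketch below (Python with numpy; the variable names, sample counts and the window $\varepsilon=0.1$ are our own choices) repeatedly draws $n$ signals from the tent density, which coincides with the triangular law on $[0,1]$ with mode $1/2$, lists all $k$ satisfying $G_{n,t}(k)=k=G_{n,t}(k-1)$ and records how often some $k/n$ falls within $\varepsilon$ of the decreasing-transversal solution $1/2$.

```python
import numpy as np

# Monte Carlo for Example ex:tent: tent density f(x) = 2 - 4|x - 1/2| on [0,1],
# i.e. the triangular(0, 1/2, 1) law; r = c = 1.
rng = np.random.default_rng(1)
r = c = 1.0
n, samples, eps = 10_000, 200, 0.1

hits = 0
for _ in range(samples):
    ys = np.sort(rng.triangular(0.0, 0.5, 1.0, size=n))
    ks = np.arange(-1, n + 1)
    # G(k) = #{i : Y^i + c*k/n >= r}, computed by binary search
    G = n - np.searchsorted(ys, r - c * ks / n, side="left")
    eq = (G[1:] == ks[1:]) & (G[:-1] == ks[1:])   # G(k) = k = G(k-1)
    if np.any(np.abs(ks[1:][eq] / n - 0.5) < eps):
        hits += 1
frac = hits / samples  # fraction of samples with an equilibrium near 1/2
```

In our runs the observed fraction lands in the vicinity of the reported $12.5\%$, below the asymptotic bound $e^{-\alpha}/|1-\alpha|=e^{-2}\approx 13.5\%$ obtained for $\alpha=cf_t(r-c/2)=2$.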
\begin{theorem}\label{th:convToDecreasing} Let $\rho$ be a mean field equilibrium and suppose that the set $$ \{t\geq0:\, \rho(t) \mbox{ is strongly decreasing-transversal}\} $$ has nonempty interior.\footnote{Note that the condition is nonempty interior rather than the set being nonempty. This corresponds to the fact that convergence in probability on a dense set of times~$t$ is sufficient for Fatou convergence; cf.\ Definition~\ref{de:Fatou}.} Then there does not exist a sequence of $n$-player equilibria $\rho_{n}$ Fatou converging to $\rho$ in probability. \end{theorem} This theorem follows from Corollary~\ref{co:decreasingLessOne} below, which shows non-existence with positive probability at any fixed time $t$ where $\rho(t)$ is strongly decreasing-transversal. For brevity, we set $$ G_{n,t}(k)=\#\{Y_t^i+c\, \tfrac{k}{n}\geq r\} $$ so that the $n$-player equilibrium conditions~\eqref{eq:NplayerEquilibCond} can be expressed concisely as $G_{n,t}(k)=k=G_{n,t}(k-1)$. Moreover, we introduce \begin{align*} \mathcal{K}_{n,t} =\{0\leq k\leq n: G_{n,t}(k)=k=G_{n,t}(k-1)\}. \end{align*} Roughly speaking, we think of $\mathcal{K}_{n,t}(\omega)$ as the set of all $k$ such that $k/n=\rho_{n}(t)(\omega)$ for some $n$-player equilibrium $\rho_{n}(t)$. (This is not quite meaningful since equilibria can always be altered on nullsets.) More precisely, we have that if $\rho_{n}$ is a given equilibrium, then $n\rho_{n}(t)\in \mathcal{K}_{n,t}$ a.s.\ by~\eqref{eq:NplayerEquilibCond}. In particular, we will use below that $\{|x-\rho_{n}(t)|<\varepsilon\}\subseteq \{\exists\, k\in \mathcal{K}^{}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}$~a.s. for all $x\in[0,1]$ and $\varepsilon>0$. Finally, we also introduce the superset \begin{align*} \mathcal{K}^{*}_{n,t} &=\{0\leq k\leq n: G_{n,t}(k)=k\} \supseteq\mathcal{K}_{n,t} \end{align*} which has no direct interpretation in terms of our game but is conveniently related to crossings of empirical distribution functions (see the proof below).
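To make these definitions concrete, the following small sketch (Python with numpy; the sample size, seed and helper names are arbitrary choices of ours) computes $G_{n,t}$ and the sets $\mathcal{K}^{*}_{n,t}$ and $\mathcal{K}_{n,t}$ for a single sample drawn from the tent density of Example~\ref{ex:tent}.

```python
import numpy as np

# One sample from the tent density of Example ex:tent (r = c = 1):
# compute G_{n,t}(k) = #{i : Y_t^i + c*k/n >= r} and the sets
# K*_{n,t} = {k : G(k) = k} and K_{n,t} = {k : G(k) = k = G(k-1)}.
rng = np.random.default_rng(7)
r = c = 1.0
n = 100
ys = np.sort(rng.triangular(0.0, 0.5, 1.0, size=n))

def G(k: int) -> int:
    # #{Y^i >= r - c*k/n} via binary search in the sorted sample
    return int(n - np.searchsorted(ys, r - c * k / n, side="left"))

K_star = {k for k in range(n + 1) if G(k) == k}
K = {k for k in K_star if G(k - 1) == k}
```

By construction $\mathcal{K}_{n,t}\subseteq\mathcal{K}^{*}_{n,t}$, and here $k=0$ always belongs to $\mathcal{K}_{n,t}$ since $Y^{i}_{t}<r$ a.s., matching the increasing-transversal solution $u=0$.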
\begin{proposition}\label{pr:NairSheppKlass} Fix $t\geq0$ and let $x\in(0,1)$ satisfy $x+F_{t}(r-cx)=1$. Let $\alpha:=cf_{t}(r-cx)$ and assume that $\alpha>1$. Then $$ \lim_{\varepsilon\to0}\lim_{n\to\infty} P(\exists\, k\in \mathcal{K}^{*}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon) = \frac{1-\theta}{\alpha-1}<1 $$ where $\theta\in(0,1)$ is defined through $\theta e^{-\theta}=\alpha e^{-\alpha}$. \end{proposition} \begin{proof} We first observe the local nature of the claim. Indeed, introducing the uniform random variables $U^{i}=F_{t}(Y_t^{i})$ we see that the event \begin{align*} A_{n,\varepsilon} &=\{\exists\, k\in \mathcal{K}^{*}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}\\ &= \{ \exists\, 0\leq k\leq n:\, \#\{Y_t^i+c\, \tfrac{k}{n}\geq r\}=k,\;|x-\tfrac{k}{n}|<\varepsilon\}\\ &= \{ \exists\, 0\leq k\leq n:\, \#\{U^i\geq F_{t}(r-c\, \tfrac{k}{n})\}=k,\;|x-\tfrac{k}{n}|<\varepsilon\} \end{align*} depends only on the values of $F_{t}$ in an $\varepsilon$-neighborhood of $x$. In particular, for $\varepsilon$ small enough, we may change $F_{t}$ outside that neighborhood to guarantee that the set of solutions of $u+F_{t}(r-cu)=1$ is $\{0,x,1\}$. Considering the c.d.f.\ $G(u)=1-F_{t}(r-cu)$, the proposition can be rephrased as the probability of having no crossings of the empirical distribution of $G$ and the (theoretical) uniform distribution near $x$: \begin{align*} A_{n,\varepsilon} =\{ \exists\, s\in[0,1]:\, \tfrac{1}{n}\#\{G^{-1}(U^i)\leq s\}=s,\;|x-s|<\varepsilon\}. \end{align*} (To see this identity, note that $\tfrac{1}{n}\#\{G^{-1}(U^i)\leq s\}=s$ implies $s=k/n$ for some $0\leq k\leq n$.) Following~\cite{NairSheppKlass.86}, this problem can be related to boundary-crossing probabilities of Poisson processes, which turn out to be computable. In particular, after changing $F_{t}$ as outlined above, the conditions of~\cite[Theorem~1]{NairSheppKlass.86} are satisfied for $G$; noting that $\alpha=G'(x)$, that theorem yields the result.
\end{proof} In view of $\mathcal{K}_{n,t}\subseteq \mathcal{K}^{*}_{n,t}$, we have the following consequence (see also Figure~\ref{fig:bounds}). \begin{corollary}\label{co:decreasingLessOne} Fix $t\geq0$ and let $x\in[0,1]$ satisfy $x+F_{t}(r-cx)=1$. If~$x$ is strongly decreasing-transversal, then $$ \lim_{\varepsilon\to0}\lim_{n\to\infty} P(\exists\, k\in \mathcal{K}^{}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon)<1. $$ \end{corollary} \begin{figure}[bth] \centering \includegraphics[scale=.60]{bounds.jpg} \caption{Bounds for the probability of finding an $n$-player equilibrium near~$x$ as in Corollary~\ref{co:decreasingLessOne}. The dashed and dashed-dotted lines are the upper bounds derived from Proposition~\ref{pr:NairSheppKlass} and Proposition~\ref{pr:expectedNumber}, respectively. The solid line is the lower bound from Proposition~\ref{pr:limitProbLowerBound}.} \label{fig:bounds} \end{figure} \begin{remark}\label{rk:weakerEquilibCond} One can ask if the non-existence result is related to the convention made in Section~\ref{se:nPlayer} that players do not consider their own impact on the state process. To address this question, we can drop the first equation in the equilibrium conditions~\eqref{eq:NplayerEquilibCond} and keep only the second (which seems uncontroversial); i.e., $\#\{Y_t^i+c\, \frac{k}{n}< r\}=n-k$. This corresponds to the definition of $\mathcal{K}^{*}_{n,t}$ and Proposition~\ref{pr:NairSheppKlass} shows that non-existence holds even under this condition alone. \end{remark} \begin{remark}\label{rk:tangentialDecreasingCase} Heuristics suggest that in the tangential case of a decreasing-transversal $x$ with $\alpha=1$, the limiting probability is $1$; i.e., the equilibrium is in fact a limit of $n$-player equilibria. The tangential case is less important because it generically does not occur, in the same sense as discussed below Definition~\ref{de:H}. We do not provide a rigorous result. 
\end{remark} In our next result, we determine the asymptotic expected number of equilibria close to $x$ (for both increasing- and decreasing-transversal cases). Importantly, it implies that this number is positive with positive probability. When $\alpha>1$ is not close to $1$, it also yields a fairly accurate upper bound for the probability of not finding an $n$-player equilibrium close to~$x$ (cf.~Example~\ref{ex:tent}) since the probability of finding more than one solution is small. On the other hand, we see that as $\alpha\to1$, the expected number of solutions tends to infinity, and in particular the probability of finding many solutions becomes large\footnote{In fact, one can show that $\lim_{\alpha\to1}\limsup_{n\to\infty} P(\#\{k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}=j)=0$ for all finite $j\geq0$ when $\varepsilon>0$ is small enough.}. \begin{proposition}\label{pr:expectedNumber} Fix $t\geq0$ and let $x\in(0,1)$ satisfy $x+F_{t}(r-cx)=1$. Let $\alpha:=cf_{t}(r-cx)$ and assume that $\alpha\neq1$. Then $$ \lim_{\varepsilon\to0}\lim_{n\to\infty} E[\#\{k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon\}]= \frac{e^{-\alpha}}{|1-\alpha|}. $$ In particular, $$ \limsup_{\varepsilon\to0}\limsup_{n\to\infty} P(\exists\, k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon)\leq \frac{e^{-\alpha}}{|1-\alpha|}. $$ \end{proposition} One consequence of Proposition~\ref{pr:expectedNumber} is that non-uniqueness is indeed the typical case for the $n$-player game, as claimed in the Introduction: under the stated smoothness assumption on $F_{t}$, we typically have at least one mean field equilibrium corresponding to $0\neq\alpha<1$ and then the proposition and Lemma~\ref{le:convToIncreasingStatic} imply that there is more than one $n$-player equilibrium, for large~$n$. \begin{proof}[Proof of Proposition~\ref{pr:expectedNumber}.] We may assume that $c=1$, and we drop the index $t$ everywhere. 
We denote $$ \alpha(z)=f(r-z) $$ and recall that $x\in(0,1)$ and $\alpha=\alpha(x)\neq1$. Fix $\varepsilon>0$ and denote $$ x_{-}=x-\varepsilon,\quad x_{+}=x+\varepsilon, $$ $$ F_{-}=F(r-x-\varepsilon),\quad F_{+}=F(r-x+\varepsilon), $$ $$ \alpha_{-}=\inf_{|z-x|<\varepsilon} \alpha(z),\quad \alpha_{+}=\sup_{|z-x|<\varepsilon} \alpha(z), $$ $$ m(x)=\inf_{|z-x|\leq\varepsilon} z(1-z),\quad M(x)=\sup_{|z-x|\leq\varepsilon} z(1-z). $$ We assume that $\varepsilon$ is small enough such that $x_{\pm}\in(0,1)$ and $1\notin[\alpha_{-},\alpha_{+}]$. \vspace{.5em} \emph{Step 1: Bounds for $P(k\in\mathcal{K}_{n})$.} Fix $n$ and let $U^{i}=F(Y^{i})$, $1\leq i\leq n$, so that $(U^{i})$ are i.i.d.\ $\Unif[0,1]$, and let $U^{(1)}\geq \dots\geq U^{(n)}$ be the associated reverse order statistics. Noting that $U^{(k)}=U_{(n-k+1)}$ for the usual (increasing) order statistics $U_{(\cdot)}$, we have that $U^{(k)}\sim \Beta(n-k+1,k)$ and $U^{(k+1)}=U^{(k)}W_{k}^{\frac{1}{n-k}}$ where $W_{k}\sim \Unif[0,1]$ is independent; cf.\ \cite[Section~4]{ArnoldEtAl.13}. Moreover, we note that $k\in\mathcal{K}_{n}$ is equivalent to \begin{equation}\label{eq:orderStat1} U^{(k)}\geq F(r-\tfrac{k-1}{n})=:F_{k-1} \quad\mbox{and}\quad U^{(k+1)}\leq F(r-\tfrac{k}{n})=:F_{k}. \end{equation} As a result, for any deterministic integer $1\leq k\leq n$, \begin{align} P(k\in\mathcal{K}_{n}) &= P\big(U^{(k+1)}\leq F_{k},\, U^{(k)}\geq F_{k-1}\big)\nonumber\\ &= \int_{F_{k-1}}^{1} P\big(U^{(k+1)}\leq F_{k}\big|U^{(k)}=z\big)\, dP(U^{(k)}=z)\nonumber\\ &= \int_{F_{k-1}}^{1} P\big(W\leq (F_{k}/z)^{n-k}\big|U^{(k)}=z\big)\, dP(U^{(k)}=z)\nonumber\\ &= \frac{n!}{(n-k)!(k-1)!} \int_{F_{k-1}}^{1} F_{k}^{n-k}(1-z)^{k-1}\,dz\nonumber\\ &= \binom{n}{k}F_{k}^{n-k}(1-F_{k-1})^{k} \label{eq:orderStat2} \end{align} where $dP(U^{(k)}=z)$ indicates integration with respect to the law of $U^{(k)}$. We may observe that this quantity is reminiscent of a binomial distribution except that the success probability changes with $k$. 
Next, we use Taylor's theorem to find that \begin{equation}\label{eq:alphaTaylor} F_{k-1}=F(r-\tfrac{k}{n}+\tfrac{1}{n})=F(r-\tfrac{k}{n})+\alpha_{k}/n=F_{k}+\alpha_{k}/n \end{equation} where $\alpha_k=\alpha(\eta_k)$ with $\eta_k \in [\frac{k-1}{n},\frac{k}{n}]$ and in particular $\alpha_{k}\in[\alpha_{-},\alpha_{+}]$. Now suppose that $|x-\tfrac{k}{n}|<\varepsilon$. Then $k\geq nx_{-}$, and using also $F_{k}\geq F_{-}$, \begin{align*} P(k\in\mathcal{K}_{n}) &= \binom{n}{k}F_{k}^{n-k}(1-F_{k}-\alpha_{k}/n)^{k}\\ &= \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}\left(1-\frac{\alpha_{k}}{(1-F_{k})n}\right)^{k}\\ &\leq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}\left(1-\frac{\alpha_{-}}{(1-F_{-})n}\right)^{nx_{-}}. \end{align*} The fact that $$ (1-y) \leq e^{-y} \leq (1-y)(1+o(y)) $$ as $y\to0$ applied with $y=w/n$ yields $$ (1-\tfrac{w}{n})^{n} \leq e^{-w} \leq (1-\tfrac{w}{n})^{n}\,(1+O(1/n)) $$ as $n\to\infty$, uniformly over $w$ in a compact interval. This leads us to the upper bound \begin{align}\label{eq:proofStep1Conclusion} P(k\in\mathcal{K}_{n}) &\leq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k} e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}}. \end{align} Similarly, we have the lower bound \begin{align*} P(k\in\mathcal{K}_{n}) &\geq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}\left(1-\frac{\alpha_{+}}{(1-F_{+})n}\right)^{nx_{+}}\\ &\geq \binom{n}{k}F_{k}^{n-k}(1-F_{k})^{k}e^{-{\frac{\alpha_{+}x_{+}}{1-F_{+}}}}\,(1+O(1/n)). \end{align*} \vspace{.5em} \emph{Step~2: Decay away from $x$.} Let us recall Robbins' version~\cite{Robbins.55} of the Stirling approximation, \begin{equation}\label{eq:robbins} \sqrt{2\pi n} (\tfrac{n}{e})^{n} e^{\tfrac{1}{12n+1}} \leq n! \leq \sqrt{2\pi n} (\tfrac{n}{e})^{n} e^{\tfrac{1}{12n}}, \end{equation} showing in particular that $n! = \sqrt{2\pi n}\, (\tfrac{n}{e})^{n}\,(1+O(1/n))$. 
Since $n-k$ and $k$ are comparable to $n$ when $|x-\tfrac{k}{n}|<\varepsilon$, we have $$ \binom{n}{k}= (1+O(1/n)) \frac {\sqrt{2\pi n}\, (\tfrac{n}{e})^{n}} {\sqrt{2\pi (n-k)}\, (\tfrac{n-k}{e})^{(n-k)}\, \sqrt{2\pi k}\, (\tfrac{k}{e})^{k}} $$ uniformly over all $k$ such that $|x-\tfrac{k}{n}|<\varepsilon$. This shows that \begin{align}\label{eq:proofZsum} Z_{n,\varepsilon} &:=\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \binom{n}{k}F^{n-k}_{k}(1-F_{k})^{k}\nonumber\\ & =(1+O(1/n))\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \frac{1}{\sqrt{2\pi n(1-\tfrac{k}{n})\tfrac{k}{n}}}\, \frac{F^{n-k}_{k}(1-F_{k})^{k}}{(1-\tfrac{k}{n})^{n-k}(\tfrac{k}{n})^{k}}\nonumber\\ &\leq \frac{1+O(1/n)}{\sqrt{m(x)}}\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \frac{1}{\sqrt{2\pi n}}\, \frac{F^{n-k}_{k}(1-F_{k})^{k}}{(1-\tfrac{k}{n})^{n-k}(\tfrac{k}{n})^{k}}. \end{align} Our next goal is to estimate the summand above. We introduce the function $$ \varphi(z)=(1-z)^{n-k}z^{k} $$ so that \begin{equation}\label{eq:proofFraction} \frac{F^{n-k}_{k}(1-F_{k})^{k}}{(1-\tfrac{k}{n})^{n-k}(\tfrac{k}{n})^{k}}=\frac{\varphi(1-F_{k})}{\varphi(\tfrac{k}{n})} \end{equation} is the term in question. We can use Taylor's theorem similarly as above to find $$ F_{k}=F(r-\tfrac{k}{n})=F(r-x+x-\tfrac{k}{n})=F(r-x)+\tilde{\alpha}_{k}(x-\tfrac{k}{n}) $$ where $\tilde{\alpha}_{k}\in[\alpha_{-},\alpha_{+}]$. As $F(r-x)=1-x$, this equality can be rewritten as $$ F_{k}=1-\tfrac{k}{n}+(\tilde{\alpha}_{k}-1)(x-\tfrac{k}{n}). $$ Introducing also $$ \psi(z)=\log\varphi(z)=(n-k)\log(1-z)+ k \log z, $$ we have $$ \psi'(z) = -\frac{n-k}{1-z}+\frac{k}{z},\quad \psi''(z) = -n\left[\frac{1-\tfrac{k}{n}}{(1-z)^{2}}+\frac{\tfrac{k}{n}}{z^{2}}\right]<0 $$ and then $\psi'(k/n)=0$ shows that $\psi$ and $\varphi$ have a global maximum at~$k/n$. 
Taylor's theorem at the second order yields $$ \psi(1-F_{k})-\psi(\tfrac{k}{n})=\psi(\tfrac{k}{n}-(\tilde{\alpha}_{k}-1)(x-\tfrac{k}{n}))-\psi(\tfrac{k}{n})=\frac{\psi''(\xi_{k})}{2} (\tilde{\alpha}_{k}-1)^{2}(x-\tfrac{k}{n})^{2} $$ for a suitable number $\xi_{k}$ between $\tfrac{k}{n}$ and $\tfrac{k}{n}-(\tilde{\alpha}_{k}-1)(x-\tfrac{k}{n})$. Therefore, we have $|\xi_{k}-x|<A\varepsilon$, with $A=\max\{\alpha_+,1\}$. Using the above formula for $\psi''(z)$ and setting $$ \Gamma_{\varepsilon}=\inf_{\substack{|p-x|<\varepsilon\\|z-x|<A\varepsilon}} \left[\frac{1-p}{(1-z)^{2}}+\frac{p}{z^{2}}\right], $$ we arrive at $$ \psi(1-F_{k})-\psi(\tfrac{k}{n})\leq -\frac{n}{2} \Gamma_{\varepsilon} (\tilde{\alpha}_{k}-1)^{2}(x-\tfrac{k}{n})^{2}. $$ Setting also $\alpha_{*}=\alpha_{-}$ if $\alpha>1$ and $\alpha_{*}=\alpha_{+}$ if $\alpha<1$, exponentiating leads us to the desired estimate $$ \frac{\varphi(1-F_{k})}{\varphi(\tfrac{k}{n})} \leq \exp \left(-\frac{n}{2} \Gamma_{\varepsilon} (\alpha_{*}-1)^{2}(x-\tfrac{k}{n})^{2}\right) $$ and plugging this into~\eqref{eq:proofZsum} we have that \begin{align*} Z_{n,\varepsilon} &\le \frac{1+O(1/n)}{\sqrt{m(x)}}\sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} \frac{1}{\sqrt{2\pi n}}\, \exp \left(-\frac{n}{2} \Gamma_{\varepsilon} (\alpha_{*}-1)^{2}(x-\tfrac{k}{n})^{2}\right). \end{align*} Set $$ w_{k}= \sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|(\tfrac{k}{n}-x) $$ and note that $$ \Delta w: = w_{k}-w_{k-1}=\frac{1}{\sqrt{n}}\sqrt{\Gamma_{\varepsilon}}|\alpha_{*}-1|. 
$$ The above sum can then be written as \begin{align*} Z_{n,\varepsilon} & \le \frac{1+O(1/n)}{\sqrt{m(x)}}\sum_{k:\, |w_{k}|<\sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|\varepsilon} \frac{1}{\sqrt{2\pi n}}\, e^{-w_{k}^{2}/2}\\ &= \frac{1+O(1/n)}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|} \sum_{k:\, |w_{k}|<\sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|\varepsilon} \frac{1}{\sqrt{2\pi}}\, e^{-w_{k}^{2}/2} \Delta w \end{align*} which suggests comparison with a Gaussian integral $\int_{\mathbb{R}}\frac{1}{\sqrt{2\pi}}\, e^{-w^{2}/2}\,dw=1$. Indeed, after subtracting the two largest summands neighboring the origin, the sum can be seen as a Riemann sum which is entirely below the integral. These two summands are $O(1/\sqrt{n})$ so that $$ \sum_{k:\, |w_{k}|<\sqrt{n\Gamma_{\varepsilon}}|\alpha_{*}-1|\varepsilon} \frac{1}{\sqrt{2\pi}}\, e^{-w_{k}^{2}/2} \Delta w \leq 1+O(1/\sqrt{n}) $$ and finally \begin{align*} Z_{n,\varepsilon} & \leq \frac{1}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|} \, (1+O(1/\sqrt{n})). \end{align*} \vspace{.5em} \emph{Step 3: Conclusion.} Recalling~\eqref{eq:proofStep1Conclusion} we have \begin{align*} E[\#\{k\in \mathcal{K}_{n}:\, |x-\tfrac{k}{n}|<\varepsilon\}] & = \sum_{k:\, |x-\tfrac{k}{n}|<\varepsilon} P(k\in\mathcal{K}_{n}) \\ & \leq e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}} Z_{n,\varepsilon}\\ & \leq e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}} \frac{1}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|} \, (1+O(1/\sqrt{n})) \end{align*} and hence \begin{align*} \limsup_{n\to\infty} E[\#\{k\in \mathcal{K}_{n}:\, |x-\tfrac{k}{n}|<\varepsilon\}] & \leq e^{-{\frac{\alpha_{-}x_{-}}{1-F_{-}}}} \frac{1}{\sqrt{m(x)\Gamma_{\varepsilon}}}\frac{1}{|\alpha_{*}-1|}. \end{align*} As $\varepsilon\to0$ we have $x_{-}\to x$, $\alpha_{-}\to\alpha$, $\alpha_{*}\to\alpha$, $F_{-}\to F(r-x)=1-x$ and $$ m(x)\to x(1-x),\quad \Gamma_{\varepsilon}\to \frac{1}{1-x}+\frac{1}{x}=\frac{1}{x(1-x)}. 
$$ Thus, \begin{align*} \limsup_{\varepsilon\to0}\limsup_{n\to\infty} E[\#\{k\in \mathcal{K}_{n}:\, |x-\tfrac{k}{n}|<\varepsilon\}] & \leq \frac{e^{-\alpha}}{|\alpha-1|}. \end{align*} The matching lower bound follows similarly after replacing $\alpha_{-}$ by $\alpha_{+}$, $F_{-}$ by $F_{+}$, and so on. \end{proof} \begin{remark}\label{rk:rootNconvergence} The above proof offers insight into the speed of convergence of $n$-player equilibria. Specifically, the estimates entail that if $\varepsilon_{n}\downarrow0$ is such that $\varepsilon_{n}\sqrt{n}\to \beta\in[0,\infty]$, then \begin{align*} E[\#\{k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon_{n}\}] \to \frac{e^{-\alpha}}{|1-\alpha|} \mu\bigg(-\frac{|\alpha-1|}{\sqrt{x(1-x)}}\beta,\frac{|\alpha-1|}{\sqrt{x(1-x)}}\beta\bigg) \end{align*} where $\mu$ is the standard Gaussian distribution. Thus, a ball of radius $r_{n}/\sqrt{n}$ around $x$, where $r_{n}\to\infty$ arbitrarily slowly, will asymptotically contain all $n$-player equilibria converging to $x$, and this is optimal in the sense that if $\limsup r_{n}<\infty$ the ball will miss some solutions. \end{remark} In our final result we complement the upper bound in Proposition~\ref{pr:expectedNumber} by a lower bound. The gap between the bounds vanishes for large $\alpha$; see also Figure~\ref{fig:bounds}. \begin{proposition}\label{pr:limitProbLowerBound} Fix $t\geq0$, let $x\in(0,1)$ satisfy $x+F_{t}(r-cx)=1$ and suppose that $\alpha:=cf_{t}(r-cx)>1$. Then $$ \liminf_{\varepsilon\to0}\liminf_{n\to\infty} P(\exists\, k\in \mathcal{K}_{n,t}:\, |x-\tfrac{k}{n}|<\varepsilon)\geq L(\alpha)>0 $$ where $$ L(\alpha) = \frac{e^{-\alpha}}{\big(\alpha-1\big)\left(1+2\sqrt{\frac{2}{|a_0|}}\left\{1-\Phi\left(\sqrt{2|a_0|}\right)\right\}\right)} $$ with $a_0:=1-\alpha+\log(\alpha)<0$ and $\Phi$ is the standard normal c.d.f. 
\end{proposition} Since the lower bound is strictly positive, we can interpret the result as stating that~$x$ is necessarily part of a mixture which is itself a limit of $n$-player equilibria. In summary, when $x$ is strongly decreasing-transversal, we cannot find $n$-player equilibria converging to $x$ at time~$t$, but at least we can find $n$-player equilibria converging to a randomized mean field equilibrium which charges~$x$. \begin{proof}[Proof of Proposition~\ref{pr:limitProbLowerBound}] We use the notation from the proof of Proposition~\ref{pr:expectedNumber} and suppress~$t$. Let $\mathcal{K}= \mathcal{K}_{n,t}$ and $X=X_{n,\varepsilon}=\#\{k\in \mathcal{K}_{n,t}:\, |x-\frac{k}{n}|\le\varepsilon\}$. Set $\mu=E[X]$ and let $$ A=A_{n,\varepsilon}=\{|X-c\mu|\ge c\mu\} $$ for a constant $c>0$ to be chosen later. Clearly $P(X=0)\le P(A)$. Using the Markov inequality $$ P(|X-c\mu|\ge c\mu)\le \frac{E\left((X-c\mu)^2\right)}{c^2 \mu^2}= \frac{E\left(X^2\right)}{c^2 \mu^2}-\frac{2}{c}+1= \frac{2}{c}\left(\frac{E\left(X^2\right)}{2c \mu^2}-1\right)+1 $$ and choosing $c=\theta\frac{E[X^2]}{2\mu^2}$ for some $\theta>1$, we obtain that $$ P(X=0)\le1-\frac{4\mu^2(\theta-1)}{\theta^2E[X^2]}. $$ Optimizing over the right-hand side, we note that $\theta=2$ yields the best bound, so we choose $c=\frac{E[X^2]}{\mu^2}$ and conclude that \begin{equation}\label{eq:prooflimitProbLower1} P(X=0)\le 1-\frac{\mu^2}{E[X^2]}= 1-\frac{E[X]^{2}}{E[X^2]}. \end{equation} Since we have already determined the limit of~$E[X]$ in Proposition~\ref{pr:expectedNumber}, our goal is to find an upper bound for~$E[X^2]$. To that end, we first compute $$ P(k\in \mathcal{K}, j\in \mathcal{K})=P(U^{(k+1)}\le F_k, U^{(k)}\ge F_{k-1},U^{(j+1)}\le F_j, U^{(j)}\ge F_{j-1}) $$ for $k<j$; recall the notation of~\eqref{eq:orderStat1}. In fact, this probability is zero for $j=k+1$, so we focus on $k+2\leq j$. 
Conditionally on $U^{(k+1)}=h< U^{(k)}=u$, the pair $(U^{(j)},U^{(j+1)})$ has the same distribution as $(hV^{(j-(k+1))},hV^{(j-k)})$ where $V^{(\ell)}$ are the reverse order statistics of an i.i.d.\ sample $V_1,\dots,V_{n-(k+1)}$ of size $n-(k+1)$ and distribution~$\Unif[0,1]$. Thus, we have \begin{align}\label{eq:V1} &P\left(U^{(j+1)}\le F_j, U^{(j)}\ge F_{j-1} \big| U^{(k+1)}=h, U^{(k)}=u\right)\nonumber\\ &=P\left(V^{(j-(k+1))}\le \frac{F_j}{h}, V^{(j-k)}\ge \frac{F_{j-1}}{h}\right). \end{align} Clearly $P(V^{(j-k)}\ge \frac{F_{j-1}}{h})=0$ if $F_{j-1} \ge h$, so we only need to consider the case $h\in [F_{j-1},F_k]$. Using the formula developed in~\eqref{eq:orderStat2}, we obtain \begin{align}\label{eq:V2} &P\left(U^{(j+1)}\le F_j, U^{(j)}\ge F_{j-1} \big| U^{(k+1)}=h, U^{(k)}=u\right)\nonumber\\ &=\binom{n-(k+1)}{j-(k+1)} \left(\frac{F_j}h\right)^{n-j}\, \left(1-\frac{F_{j-1}}h\right)^{j-(k+1)}. \end{align} As above~\eqref{eq:orderStat1}, the joint density of $U^{(k)}$ and $U^{(k+1)}$ can be computed using the fact that $U^{(k)}\sim\Beta(n-k+1,k)$ and $U^{(k+1)}=W_{k}^{\frac{1}{n-k}}U^{(k)}$ where $W_{k}\sim\Unif[0,1]$ is independent of $U^{(k)}$: $$ dP\left(U^{(k)}=u, U^{(k+1)}=h\right)=k(n-k)\binom{n}{k}(1-u)^{k-1} h^{n-(k+1)} \, \mathbf{1}_{0\le h\le u\le 1}\, du \, dh. $$ Integrating with respect to this density and using the appropriate restrictions, we deduce that \begin{align*} P(k\in \mathcal{K}, j\in \mathcal{K}) &=k(n-k)\binom{n}{k} \binom{n-(k+1)}{j-(k+1)} F_j^{n-j} \\ &\quad\,\times\int_{F_{k-1}}^1 (1-u)^{k-1} \, du \int_{F_{j-1}}^{F_k} (h-F_{j-1})^{j-(k+1)}\, dh\\ &=\binom{n}{k} (1-F_{k-1})^k \, \frac{n-k}{j-k}\binom{n-(k+1)}{j-(k+1)} F_j^{n-j} (F_k-F_{j-1})^{j-k}\\ &=\binom{n}{k}(1-F_{k-1})^k F_k^{n-k} \, \binom{n-k}{j-k} \left(\frac{F_j}{F_k}\right)^{n-j} \left(1-\frac{F_{j-1}}{F_k}\right)^{j-k}\\ &\le \binom{n}{k}(1-F_{k-1})^k F_k^{n-k} \, \binom{n-k}{j-k} \left(\frac{F_j}{F_k}\right)^{n-j} \left(1-\frac{F_{j}}{F_k}\right)^{j-k}. 
\end{align*} By a repeated application of~\eqref{eq:alphaTaylor} we have that $\frac{F_{j}}{F_k}=1-\frac{\alpha_j (j-k)}{n F_k}$ for some $\alpha_j \in [\alpha_-,\alpha_+]$ and hence the last two terms above satisfy \begin{align*} \left(\frac{F_j}{F_k}\right)^{n-j} \left(1-\frac{F_{j}}{F_k}\right)^{j-k} &\leq \left[1-\frac{\alpha_j(j-k)}{n F_k}\right]^{n-j} \left[\frac{\alpha_j(j-k)}{n F_k}\right]^{j-k}\\ &\leq \exp\left(-\alpha_-(j-k)\frac{n-j}{nF_k}\right) (\alpha_+)^{j-k} \left(\frac{j-k}{nF_k}\right)^{j-k}\\ &\leq \exp\left(-\alpha_-(j-k)\frac{1-x_+}{F_+}\right) (\alpha_+)^{j-k} \left(\frac{j-k}{nF_k}\right)^{j-k}. \end{align*} On the other hand, Stirling's approximation as in~\eqref{eq:robbins} yields \begin{align*} &\binom{n-k}{j-k} \left(\frac{j-k}{nF_k}\right)^{j-k} \!\!\!\!\!\! = \frac{(n-k)!}{(n-j)! (j-k)! }\left(\frac{j-k}{nF_k}\right)^{j-k}\!\!\!\!\! \le \left(\frac{n-k}{nF_k}\right)^{j-k}\! \frac{(j-k)^{j-k}}{(j-k)! }\\ &\le \left(\frac{1-x_-}{F_-}\right)^{j-k}\!\!\!\!\!(j-k)^{j-k}\!\left[\left(\frac{(j-k)}{e}\right)^{j-k}\!\!\!\!\!\sqrt{2\pi(j-k)} \exp\left(\frac1{12(j-k)+1}\right)\right]^{-1}\\ &\le \left(\frac{1-x_-}{F_-}\right)^{j-k}\frac{e^{j-k}}{\sqrt{2\pi(j-k)}}\,. \end{align*} As a result, we obtain the upper bound \begin{equation}\label{eq:combEstimate} P(k\in \mathcal{K}, j\in \mathcal{K})\le \binom{n}{k}(1-F_{k-1})^k F_k^{n-k} \frac{1}{\sqrt{2\pi(j-k)}} \exp(a(j-k)) \end{equation} where $$ a=a(\alpha,\varepsilon):=1-\alpha_-\frac{1-x_+}{F_+}+\log(\alpha_+)+\log\left(\frac{1-x_-}{F_-}\right). 
$$ In the following, all sums run over indices $k$ and $j$ with $|x-k/n|\le \varepsilon$ and $|x-j/n|\le \varepsilon$; with this convention, we can express the second moment of $X$ as \begin{align*} E[X^{2}] & = E\left[\left(\sum\nolimits_{k}\mathbf{1}_{k\in\mathcal{K}}\right)\left(\sum\nolimits_{j}\mathbf{1}_{j\in\mathcal{K}}\right)\right] = E\left[\sum\nolimits_{k,j}\mathbf{1}_{k\in\mathcal{K}}\mathbf{1}_{j\in\mathcal{K}}\right] \\ & = E\left[\sum\nolimits_{k}\mathbf{1}_{k\in\mathcal{K}} + 2\sum\nolimits_{k<j}\mathbf{1}_{k\in\mathcal{K}}\mathbf{1}_{j\in\mathcal{K}}\right]\\ &= \sum\nolimits_{k} P(k\in\mathcal{K}) + 2\sum\nolimits_{k<j} P(k\in\mathcal{K},\,j\in\mathcal{K}). \end{align*} Thus, \eqref{eq:combEstimate} leads to \begin{align*} E[X^2] &=\sum_{k:\, |x-k/n|\le \varepsilon} P(k\in \mathcal{K})+ 2\sum_{\substack{k,j:\, j\ge k+2, \\ |x-k/n|\le \varepsilon, \\ |x-j/n|\le \varepsilon}} P(k\in \mathcal{K}, j\in \mathcal{K})\nonumber\\ &\le E[X]+ \frac{2}{\sqrt{2\pi}}\, E[X] \sum\limits_{\ell=2}^{n(x_+-x_-)} \frac{1}{\sqrt{\ell}} \, e^{a\ell}. \end{align*} Note that $a_{0}:=\lim_{\varepsilon\downarrow 0}a(\alpha,\varepsilon)=1-\alpha+\log(\alpha)$ is strictly negative since $\alpha>1$. Thus, $a=a(\alpha,\varepsilon)<0$ for $\varepsilon$ small enough, so that $\frac{1}{\sqrt{\ell}} \, e^{a\ell}$ is summable. More precisely, $$ \frac{1}{\sqrt{2\pi}} \sum_{\ell=2}^\infty \frac{1}{\sqrt{\ell}} \, e^{a\ell}\le \frac{1}{\sqrt{2\pi}} \int_1^\infty \frac{1}{\sqrt{\ell}} e^{a\ell} \, d\ell=\sqrt{\frac{2}{|a|}} \frac{1}{\sqrt{2\pi}} \int_{\sqrt{2|a|}}^\infty e^{\frac{-z^2}{2}} \, dz. $$ Recalling also that $\lim_{\varepsilon \to 0}\lim_{n\to \infty} E[X]=\frac{e^{-\alpha}}{|\alpha-1|}=:H(\alpha)$ by Proposition~\ref{pr:expectedNumber}, we deduce that \begin{align*} \limsup_{\varepsilon \to 0}\limsup_{n\to \infty} E[X^2] \le H(\alpha)\left(1+2\sqrt{\frac{2}{|a_0|}}\left(1-\Phi\left(\sqrt{2|a_0|}\right)\right)\right) \end{align*} and combining this with~\eqref{eq:prooflimitProbLower1} yields the claim. \end{proof}
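The quantities appearing in Figure~\ref{fig:bounds} are elementary to evaluate numerically. The following Python sketch (ours, for $\alpha>1$) computes the lower bound $L(\alpha)$, the expected-number bound $e^{-\alpha}/|1-\alpha|$, and the bound $(1-\theta)/(\alpha-1)$ of Proposition~\ref{pr:NairSheppKlass}, obtaining $\theta$ by bisection and expressing $\Phi$ via the error function:

```python
from math import erf, exp, log, sqrt

def theta(alpha):
    """Solve theta * exp(-theta) = alpha * exp(-alpha) for theta in (0,1), alpha > 1."""
    target = alpha * exp(-alpha)
    lo, hi = 0.0, 1.0  # t -> t e^{-t} is increasing on (0,1)
    for _ in range(80):
        mid = (lo + hi) / 2
        if mid * exp(-mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def Phi(z):
    """Standard normal c.d.f. via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def upper_nsk(alpha):
    """Upper bound (1 - theta)/(alpha - 1) from the Nair-Shepp-Klass proposition."""
    return (1.0 - theta(alpha)) / (alpha - 1.0)

def upper_expected(alpha):
    """Expected-number upper bound e^{-alpha}/|1 - alpha|."""
    return exp(-alpha) / abs(1.0 - alpha)

def lower_L(alpha):
    """Lower bound L(alpha) with a_0 = 1 - alpha + log(alpha) < 0 for alpha > 1."""
    a0 = 1.0 - alpha + log(alpha)
    corr = 1.0 + 2.0 * sqrt(2.0 / abs(a0)) * (1.0 - Phi(sqrt(2.0 * abs(a0))))
    return exp(-alpha) / ((alpha - 1.0) * corr)

for alpha in (1.5, 2.0, 3.0, 5.0):
    print(alpha, lower_L(alpha), upper_expected(alpha), upper_nsk(alpha))
```

For $\alpha=2$, for instance, this gives $L(2)\approx0.064$, $e^{-2}\approx0.135$ and $(1-\theta)/(\alpha-1)\approx0.594$, reproducing the ordering of the three curves in Figure~\ref{fig:bounds} at that point.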
\section{Introduction} Behavioral Programming (BP) is a recently developed programming and modeling paradigm~\cite{harel2010programming}. Its inherent concurrent and modular nature offers a dynamic approach for the design and creation of modern software systems. Moreover, when written according to simple guidelines, behavioral programs become executable models, which can be formally analyzed and verified. Thus, BP creates new opportunities for model-driven engineering, and for software engineering in general. Since its introduction in 2010, various libraries supporting BP have been developed~\cite{bpjecoop,harel2014scaling,Weinstock15,wwmerlang}. However, as each library was developed for a different purpose, they differ on various aspects of BP, which often makes them semantically incompatible with each other. This fractured landscape hinders BP-related research and dissemination, as it makes sharing and corroborating ideas harder, and code reuse prohibitively cumbersome. This paper presents an attempt to support the existing body of work done in BP under consistent, unified semantics. We propose a generalized version of BP as a common denominator, and define ways in which it can be refined to recover the semantics of the existing variants. Additionally, we propose a communication protocol between BP models (b-programs) and traditional software systems, which allows embedding b-programs as components in heterogeneous systems. We present BPjs: a software tool based on our proposed definitions. BPjs can analyze and execute behavioral programs, either standalone or embedded in traditional systems. This, as explained in the paper, allows for methodologies that combine a layer of formally verified components with layers of software that are certified only by testing. The generalized BP presented in this paper, as implemented by BPjs, can serve as a common foundation for research in BP and in system engineering. 
In particular, it creates an easy and practical way of embedding models written using BP in regular systems written in Java. This opens new opportunities for model-driven engineering, in research, industry, and, as we have demonstrated during the 2018 fall semester, in the classroom. The rest of this paper is organized as follows: Section~\ref{sec:bp-intro} provides a brief introduction to behavioral programming. Section~\ref{sec:bpjs} introduces BPjs, its design, and how it can be used for execution and verification of b-programs. Section~\ref{sec:sample_programs} introduces and briefly discusses four sample applications written in BPjs. Section~\ref{sec:evaluating_bpjs} discusses our experience using BPjs in various contexts, including research and teaching. Section~\ref{sec:related_work} discusses how the work presented in this paper relates to other works in BP, modeling, and verification. Section~\ref{sec:discussions} discusses various aspects of BPjs, ranging from software engineering in the context of BP to BPjs-specific issues. Finally, Section~\ref{sec:summary} concludes. The code used in this paper is available on-line, either in~\cite{bpjs-code-appendix} or as part of the BPjs code repository. BPjs itself is open-sourced, and available on GitHub and Maven Central\footnote{See BPjs' project site at https://github.com/bthink-BGU/bpjs}. \section{Behavioral Programming 101} \label{sec:bp-intro} Behavioral Programming (BP)~\cite{bp:site}, a variant of scenario-based programming, was introduced by Harel, Marron and Weiss in 2010~\cite{harel2010programming}. Under BP, programs are composed of threads of behavior, called b-threads. B-threads run in parallel, coordinating a sequence of events via a synchronization protocol, as follows: during program execution, when a b-thread wants to synchronize with its peers, it submits a statement to a central event arbiter, and then blocks. 
The statement that each b-thread submits declares which events it requests to be selected, which events it waits for (but does not request), and which events it would like to block. When all b-threads have submitted their statements, the arbiter selects a single event that was requested and not blocked. It then unblocks the b-threads that requested or waited for that event. The rest of the b-threads remain at their state, until an event they requested or waited for is selected. This mechanism allows each b-thread to independently take care of a single scenario-based requirement, such as forbidding the scenario ``\code{TurnACOff}; less than 1 minute passes; \code{TurnACOn}''. The corresponding b-thread can, for example, wait for the event \code{TurnACOff} and then block the \code{TurnACOn} event for the next minute (e.g., by counting 60 \code{SecondTick} events). \section{Introducing BPjs} \label{sec:bpjs} The tool we present in this paper is called BPjs. It is a platform aiming to support almost all\footnote{The only exception to this rule is the distributed event selection supported by BPC~\cite{harel2014scaling}.} of the body of work created so far in behavioral programming under one roof. To this end, BPjs defines a generalized version of BP with well-defined extension points and external interfaces. Extensions that use these interfaces can aid future work in BP, both by framing internal BP topics (``what are useful event selection strategies'') and external ones (``how can a BP subsystem be embedded in another system''). Thus, BPjs can serve as a common platform for researching and disseminating ideas in BP\@. Such a platform was hitherto unavailable. BPjs allows b-programs to be embedded in existing software systems. Sending data from a host application to the b-program is done by enqueuing events to the b-program's external event queue. 
Sending data from the b-program to the host is done via a listener interface, which informs the host which event was selected, as well as other program life-cycle events. A super-step-based mechanism similar to the one proposed in~\cite{harel2014scaling} takes care of embedding the events within the run of the program in a systematic way. BPjs' generalized definition of BP standardizes two aspects that vary between former implementations: the algorithm that selects events, and the type of the events themselves. The event selection algorithm is abstracted under the concept of \emph{event selection strategy}. Given a b-program synchronization point, an event selection strategy calculates which events in it are selectable, which one of them is to be selected for the current calculation, and how this selection affects incoming events from the environment. Events in BPjs are standardized: they have a name (string), and optional data, whose type is unrestricted. Subsection~\ref{subs:event-selection-strategies} expands on event selection strategies in BPjs. BPC~\cite{harel2014scaling} also offered a generalized form of BP --- see Section~\ref{sec:related_work} for a short comparison between the two frameworks. To the best of our knowledge, with the exception of distributed execution, BPjs can support all use-cases implemented by former research in BP\@. Porting those tools and algorithms to BPjs is left to future work, and to the community. Next, we describe the static structure of BPjs, and then look at how it can be used to execute and verify b-programs. These descriptions are all high-level ones. For low-level technical descriptions, we refer the reader to the project's website~\cite{bpjs:site}. BPjs' design is informed by our experience building BP libraries for Java~\cite{harel2010programming}, C~\cite{harel2012non}, JavaScript~\cite{ashrov2015use}, Blockly~\cite{marron2012decentralized}, and other languages. 
It differs from previous BP code in that it is designed to serve as a common platform, rather than to fit specific research. Where previous code makes ad-hoc assumptions about various BP aspects, such as the type of events, or the way events are being selected, BPjs uses abstraction or generalization to support that specific case within its general framework. One abstraction example is our design decision to use the Strategy design pattern for selecting events --- previous code used hard-coded strategies (except for BPC~\cite{harel2014scaling}). BPjs is implemented as a Java library that runs code written in JavaScript. It uses the Mozilla Rhino\footnote{An open source JavaScript engine written and maintained by the Mozilla foundation. Rhino is available at https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino} JavaScript engine to execute regular JavaScript code, and custom code for handling synchronization calls. BPjs supports the aforementioned extension and generalization using common design patterns, such as Strategy~\cite{GoFBook} and Observer~\cite{GoFBook}. \begin{figure} \centering \includegraphics[width=3.5in]{images/main-classes.pdf} \caption{Main classes of the BPjs platform (some details omitted for brevity). BPjs design maintains a strict division between model, execution and analysis. Classes in the \code{model} package are used to describe a b-program, but not to execute or analyze it. Model execution is done using classes in the \code{execution} package. The \code{analysis} package is used to verify and otherwise analyze a b-program.} \label{fig:bpjs-main-classes} \end{figure} The main classes of BPjs are listed in \figurename{~\ref{fig:bpjs-main-classes}}. The \code{BProgram} class, which models a single b-program, is responsible for maintaining the state of a b-program: b-threads, global scope (presumably used only for read-only data), and an external event queue. 
A \code{BProgram} instance has a single \code{EventSelectionStrategy} object, which it uses for event selection during synchronization points. Perhaps surprisingly, the \code{BProgram} class does not have a \code{run} method. Nor does it have a \code{verify} one. This is because a b-program is somewhere on the continuum between a runnable program and a model; exactly where is up to the b-program developers. In order to run a \code{BProgram} instance, we pass it to a \code{BProgramRunner}. In order to verify it, we pass it to a \code{DfsBProgramVerifier}, along with a formal specification and some additional information (see Subsection~\ref{sub:verification} below). In this sense we say that a \code{BProgram} is an executable model --- we can run it by advancing its states sequentially, and we can also analyze it by traversing the states back and forth. In addition to synchronizing, b-threads can call \emph{assert}, passing it a boolean expression as a parameter. If said expression evaluates to \code{false}, the b-program is considered in violation of its requirements. For added convenience, assertions also accept a string parameter, which allows programmers to explain what went wrong in a human-readable form. Assertions are common in programming languages, but incorporating them into BP is an addition introduced by BPjs. \subsection{Program Execution} \label{sub:program_execution} The structure of an application running a b-program using BPjs is shown in \figurename{~\ref{fig:bpjs-runtime-stack}}. The b-program runs atop a b-program runner. Runners take a single b-program, and execute its setup code. Then, iteratively, they collect b-thread synchronization statements, use the b-program's event selection strategy to select an event that is requested and not blocked, and advance the b-threads that requested or waited for said event to their next synchronization point. 
Additionally, runners maintain a list of b-program listeners, which is the protocol introduced by BPjs for monitoring a running b-program. A b-program runner can be embedded in a host application. A host application can send data to the b-program by putting events in its external event queue. It can monitor the b-program by registering a b-program listener with the b-program runner. For example, consider a bath controller whose logic is implemented using BPjs. When an \code{add\_hot\_water} event is selected, the host application is informed via the listener interface, and instructs its actuators to physically open the water stream. If its sensors detect that the bath is about to overflow, the host application enqueues an \code{overflow\_imminent} event into the b-program's event queue, so the b-program can respond. Section~\ref{sec:sample_programs} contains more examples. Failed assertions cause an executing b-program to terminate. The host application is informed of the failed assertion, and may decide to shut down as well, drop into a ``safe mode'', or even restart the b-program under different settings, which may prevent the problem. \begin{figure} \centering \includegraphics[width=3in]{images/bpjs-blocks-runtime.pdf} \caption{BPjs program stack during b-program execution. The b-program and its event selection strategy run on top of a b-program runner, which handles program execution and communication with the host application. The host application reads data from the b-program by listening to the events it selects. It sends data to the b-program by placing events in its external event queue.} \label{fig:bpjs-runtime-stack} \end{figure} \subsection{Verification} \label{sub:verification} The structure of an application analyzing a b-program is shown in \figurename{~\ref{fig:bpjs-analysis-stack}}. The b-program and event strategy are directly analyzed, with no modification or transformation. 
Additional b-threads are added to the analyzed model in order to simulate the system's environment, and to detect requirement violations. Other b-threads can be added to limit the verification search space, which is typically rather large (as is the normal case in model checking). Subsection~\ref{sub:sample-prog:mazes} shows an example of such a b-thread. \begin{figure} \centering \includegraphics[width=3in]{images/bpjs-blocks-verification.pdf} \caption{BPjs program stack during b-program verification. The b-program and its event selection strategy are analyzed by a BPjs verifier. Additional BP code (white rectangles) models the system's formal specification and environment. In some cases the search space can be reduced by adding b-threads modeling assumptions or specific domain knowledge.} \label{fig:bpjs-analysis-stack} \end{figure} During verification, a model-checker traverses the system's state graph. When it discovers a new state, it checks that state for violations. There are two types of violations a verifier can detect: violating states, indicated by a failed assertion in some b-thread, and deadlocks, where all requested events are blocked. The event sequence leading to the invalid state is used as a counterexample. Such sequences can later be used to fix a b-program, either manually, or automatically by an algorithm~\cite{harel2012non}. In BP, a \emph{program state} is defined as the states of all b-threads participating in a b-program, immediately after all of them have reached a synchronization point, and are thus paused, waiting for an event to be selected. The state of a single b-thread is defined as its stack, heap, and program counter. This definition of program state differs from the ``classic'' model checking definition, which considers each possible memory state to be a separate program state.
However, BP semantics ensure that, as long as all inter-b-thread communication is done via events, stepping between synchronization points can be considered atomic~\cite{bpmc}. This allows BPjs program analyzers to look at a significantly smaller number of states, compared to, e.g.\ Java Pathfinder~\cite{Havelund:JavaPathfinder}. \figurename{~\ref{fig:state-graph}} shows an example b-program state graph. Currently, BPjs uses depth-first search to traverse b-program state graphs. In~\cite{Weinstock15}, Weinstock used a very early version of BPjs to traverse a state space using A*. Concurrent and distributed state exploration are other feasible approaches. BPjs enables them by storing program state in a serializable form. Implementing such analyzers is left to future work. The assert statement and the deadlock detector, combined with observer b-threads, allow specification (and hence, verification) of all safety properties of a b-program. However, they do not allow for specification of liveness properties. For example, in Subsection~\ref{sub:sample-prog:dining_philosophers} below, we cannot express with a simple assertion that a philosopher eats infinitely often. To this end, BPjs allows for more general verification procedures. Specifically, one can extend \code{DfsBProgramVerifier} to also look for violating cycles in the state graph. For example, it is possible to specify that certain states are `hot' using the \code{assert} statement. Then, the generalized verifier may look for `hot cycles' --- reachable cycles whose states are all hot. This approach, which is similar in nature to the one taken by Live Sequence Charts (LSC)~\cite{Marron12:LSCRef}, allows for verification of all $\omega$-regular properties and more. \begin{figure} \centering \includegraphics[width=3in]{images/state-graph.pdf} \caption{State graph of a sample b-program. Each square represents a single program state.
Each state contains the b-threads participating in the b-program (top row), requested events (row $R$) and blocked events (row $B$). The program transitions between states by selecting events which are requested and not blocked (as marked on the edges). The graph is of a general shape --- we cannot assume the absence of cycles, bi-partitioning, or other simplifying properties.} \label{fig:state-graph} \end{figure} \section{Sample Programs} \label{sec:sample_programs} This section illustrates a few aspects of creating systems with BPjs, using example programs. The full code, and a more detailed analysis of these examples, are available in BPjs' on-line documentation~\cite{bpjs:site}. Here we limit ourselves to a high-level discussion focusing on specific aspects of interest. \subsection{Hot/Cold Bath} \label{sub:sample-prog:hot_cold_bath} The hot/cold example, first introduced in~\cite{harel2010programming}, is the ``hello world'' program of Behavioral Programming, used as an example in numerous papers and tools to demonstrate how a basic b-program is written using a specific BP library. This program models a computerized bath controller, tasked with filling a bath with six parts water: three cold, and three hot. Said controller defines two events: \code{COLD}, and \code{HOT}. When the controller selects \code{COLD}, one part of cold water is added to the bath; when it selects \code{HOT}, one part of hot water is added to it. Listing~\ref{lst:hotcold-basic} shows an implementation of such a controller. Looking at its code, we can see that it is a regular JavaScript program. All BP-related calls are done by calling methods on the \nhcode{bp} object, which is defined globally. First, \nhcode{bp} is used to register two b-threads, \code{add-hot} and \code{add-cold}. B-threads in BPjs are regular JavaScript functions, labeled with a name.
Then, when the regular JavaScript pass is over and the two b-threads have been registered, BPjs begins the b-program execution, where b-threads synchronize and events are selected. Synchronization is done by invoking \nhcode{bp.sync} with a synchronization statement. BPjs allows passing partial statements, that is, if a b-thread does not wait for any events at a given point, the \nhcode{waitFor} field of the statement can be omitted. The events in this listing, \code{HOT} and \code{COLD}, are defined in the host application, and are programmatically added to the b-program's scope before it runs. This pattern is often used when an external system (in this case, the faucet actuators) has to respond to an event selected by a b-program. \begin{lstlisting}[ float, label={lst:hotcold-basic}, mathescape=true, caption={A controller for filling a bath with three parts cold water, and three parts hot water. The order in which the water parts are added is left to the discretion of the event selection strategy.} ] bp.registerBThread("add-hot", function(){ bp.sync({request:HOT}); bp.sync({request:HOT}); bp.sync({request:HOT}); }); bp.registerBThread("add-cold", function(){ bp.sync({request:COLD}); bp.sync({request:COLD}); bp.sync({request:COLD}); }); \end{lstlisting} The code in Listing~\ref{lst:hotcold-basic} allows the bath to become too hot or too cold while being filled (for example, when \code{add-hot} runs to completion while \code{add-cold} remains blocked at its first synchronization point). To prevent these unbalanced scenarios, we can add an additional b-thread, such as the one in Listing~\ref{lst:hotcold-balancer}, to the bath controller b-program. The \code{control-temp} b-thread ensures the bath water temperature is never too hot, by blocking the addition of each part of hot water until a part of cold water is added. 
It is interesting to note that this b-thread can be added and removed without affecting the other b-threads, thus creating a modular functionality for the bath controller. \begin{lstlisting}[ float, label={lst:hotcold-balancer}, caption={A b-thread that maintains a safe water temperature by ensuring that cold water is added before hot water.} ] bp.registerBThread("control-temp", function() { while ( true ) { bp.sync({waitFor:COLD, block:HOT}); bp.sync({waitFor:HOT, block:COLD}); } }); \end{lstlisting} \subsection{Dining Philosophers} \label{sub:sample-prog:dining_philosophers} The Dining Philosophers is a classic concurrent programming challenge\footnote{The Dining Philosophers problem was first presented by Edsger Dijkstra in 1965, as an exam exercise.}. A group of philosophers is dining at a round table. For utensils, they have chopsticks --- a single chopstick between every two plates. In order to eat, each philosopher has to obtain both chopsticks adjacent to her. This setting poses a mutual exclusivity challenge, as a chopstick may only be held by a single philosopher at any given moment. This problem is illustrated in Figure~\ref{fig:dining-philosophers}. \begin{figure} \centering \includegraphics[width=2.5in]{images/dining-philosophers} \caption{Dijkstra's Dining Philosophers problem. Adjacent philosophers share the chopstick between them. In order to eat, a philosopher has to pick up the sticks to her left and right. Since a stick can be used by at most a single philosopher at a time, this setting poses mutual exclusion challenges.} \label{fig:dining-philosophers} \end{figure} In this subsection, we look at a re-implementation of the Dining Philosophers b-program from~\cite{bpmc}, using BPjs. As modeled, philosophers use a naive algorithm to synchronize: they pick the chopstick to their right, then the one to their left, eat, and finally release the sticks in reverse order. This is a simple algorithm, but it can reach a deadlock.
The code in Listing~\ref{lst:dp-philosopher} shows a function for adding a philosopher b-thread to a b-program. Functions like these can be viewed as parametrized b-thread templates, as they create different b-threads based on the parameters they are invoked with. B-thread templates can be used to reduce code duplication, or to generate a heterogeneous b-thread population, e.g. by randomizing template parameters according to a required distribution. \begin{lstlisting}[ float, label={lst:dp-philosopher}, caption={A function for adding a philosopher to the dining philosophers b-program. A dining philosopher repeatedly attempts to pick the chopstick to her right, then the one to her left, and then releases them in reverse order. } ] function addPhil(philNum) { bp.registerBThread("Phil"+philNum, function() { while (true) { // Request to pick the right stick bsync({ request: bp.Event("Pick"+philNum+"R") }); // Request to pick the left stick bsync({ request: bp.Event("Pick"+philNum+"L") }); // Request to release the left stick bsync({ request: bp.Event("Rel"+philNum+"L") }); // Request to release the right stick bsync({ request: bp.Event("Rel"+philNum+"R") }); } }); }; \end{lstlisting} Enforcing restrictions on chopstick usage, such as mutual exclusivity, is done by chopstick b-threads\footnote{The code for the chopstick b-threads, as well as the full program, are available on-line at~\cite{bpjs:site}.}. Each chopstick is modeled by a single b-thread, that blocks events based on its current state. For example, after chopstick \#2 is picked up by philosopher \#2 (that is, after event \code{Pick2R} was selected), it prevents philosopher \#3 from picking it up, by blocking event \code{Pick3L}. This block is lifted after philosopher \#2 releases the chopstick (\code{Rel2R}). 
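The deadlock this protocol can reach --- every philosopher holding her right stick and waiting for her left --- can be demonstrated with a small, BPjs-independent search over a simplified model of the protocol. The state encoding and function names below are our own, for illustration only; in the b-program itself, the deadlock is found by the verifier, as discussed next.

```javascript
// Simplified, BPjs-independent model of n dining philosophers who each
// pick their right stick, then their left, then release both.
// held[p]: 0 = no sticks, 1 = right stick only, 2 = both sticks.
// owner[s]: index of the philosopher holding stick s, or -1 if free.
function findDiningDeadlock(n) {
  const seen = new Set();
  const stack = [{ held: Array(n).fill(0), owner: Array(n).fill(-1) }];
  while (stack.length > 0) {
    const st = stack.pop();
    const key = st.held.join("") + "|" + st.owner.join(",");
    if (seen.has(key)) continue;
    seen.add(key);
    let moves = 0;
    for (let p = 0; p < n; p++) {
      const right = p, left = (p + 1) % n;
      let next = null;
      if (st.held[p] === 0 && st.owner[right] === -1) {        // pick right
        next = clone(st); next.held[p] = 1; next.owner[right] = p;
      } else if (st.held[p] === 1 && st.owner[left] === -1) {  // pick left
        next = clone(st); next.held[p] = 2; next.owner[left] = p;
      } else if (st.held[p] === 2) {                           // release both
        next = clone(st); next.held[p] = 0;
        next.owner[right] = -1; next.owner[left] = -1;
      }
      if (next !== null) { moves++; stack.push(next); }
    }
    if (moves === 0) return st; // no philosopher can act: deadlock found
  }
  return null; // no reachable deadlock
}

function clone(st) {
  return { held: st.held.slice(), owner: st.owner.slice() };
}
```

For any $n \ge 2$, the search returns the circular-wait state in which every philosopher holds exactly her right stick.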
Since we use b-thread template functions for adding philosophers and chopsticks to the model, creating the actual simulation boils down to a single loop calling these functions (see Listing~\ref{lst:dp-create}). \begin{lstlisting}[ float, label={lst:dp-create}, mathescape=true, caption={Creating a dining philosophers model.} ] for (var i=1; i<=PHILOSOPHER_COUNT; i++) { addStick(i); addPhil(i); } \end{lstlisting} The dining philosophers b-program described here can serve both as a simulation program and as a model to be checked. For simulation purposes, this b-program is run, and its event log can be analyzed, e.g.\ to get statistics about stick wait times. For verification, the b-program is passed to a verifier for deadlock detection. In both cases, the exact same code is used --- no translation is necessary when transitioning between code execution and verification. In terms of model checking performance, using a 2.6 GHz laptop with 16 GB of RAM, BPjs finds a deadlocking counterexample for 20 dining philosophers in about 2 seconds. For 30 dining philosophers, finding a counterexample takes slightly more than 4 seconds. \subsection{Mazes and Model Checking} \label{sub:sample-prog:mazes} In this subsection, we apply techniques presented in~\cite{SemanticVariationsExe16}, using BPjs, for solving mazes. To this end, we will define a simple domain specific language (DSL) for describing mazes. In our DSL, mazes are created using ASCII drawings (see the example in \figurename{~\ref{fig:maze}}). Each character in such a drawing has specific semantics: spaces denote places a maze walker can walk into, \code{s} denotes the starting point for the walker, and \code{t} denotes a target cell. All other characters denote walls.
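The character classification at the heart of this DSL can be sketched in a few lines of plain JavaScript. This is an illustrative sketch, not the actual parser: the result structure and function name are our own, and in the real program each classified cell leads to registering the corresponding b-threads rather than to collecting coordinates.

```javascript
// Illustrative classifier for the maze DSL: scans an ASCII drawing and
// records each cell kind. In the full program, each classification would
// trigger registration of the matching b-threads (space, start, target).
function parseMaze(lines) {
  const cells = { spaces: [], start: null, target: null };
  lines.forEach((line, row) => {
    for (let col = 0; col < line.length; col++) {
      const ch = line[col];
      if (ch === " ") {
        cells.spaces.push([col, row]);
      } else if (ch === "s") {
        cells.start = [col, row];
        cells.spaces.push([col, row]); // a start cell is also a space cell
      } else if (ch === "t") {
        cells.target = [col, row];
        cells.spaces.push([col, row]); // a target cell is also a space cell
      }
      // any other character denotes a wall, and gets no b-threads
    }
  });
  return cells;
}
```

For example, parsing the drawing \code{["\#\#\#", "\#s\#", "\# \#", "\#t\#", "\#\#\#"]} classifies three space cells, with the start at the top and the target at the bottom.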
\begin{figure} \centering \includegraphics[width=2.5in]{images/maze} \caption{A maze described using our maze-description DSL (left), and its solution, found by verifying the b-program that models it (right).} \label{fig:maze} \end{figure} Parsing a maze written in this DSL amounts to traversing an ASCII drawing, and adding appropriate b-threads for each of its characters. Listing~\ref{lst:maze-cell} shows the b-thread template used to generate b-threads for space cells. Under our model, space cells attract the maze walker when it enters a cell adjacent to them. To do this, they repeatedly wait for an entry event to any of their neighboring cells, and then request an entry event with their own coordinates. Note the parameters passed to \nhcode{waitFor}: these are not events; they are event \emph{sets}. An event set is a predicate, accepting an event and returning \code{true} iff that event is a member of the set. Event sets can be defined statically, like \code{anyEntrance}, or dynamically, by composing a new predicate function based on a set of parameters. This is the case in \code{adjacentCellEntries}, used in line 8. \begin{lstlisting}[ float, label={lst:maze-cell}, mathescape=true, caption={A b-thread template for adding a space cell at \code{row}, \code{col}. Space cells wait for the maze walker to enter a cell adjacent to them, and then request that it enters them. If the event selection strategy chose another cell for the maze walker to enter, the entry request is removed.} ] var anyEntrance = bp.EventSet("AnyEntrance", function(e){ return e.name.indexOf("Enter") === 0; }); function addSpaceCell( col, row ) { bp.registerBThread("cell(c:"+col+" r:"+row+")", function() { while ( true ) { bsync({waitFor: adjacentCellEntries(col, row)}); bsync({request: enterEvent(col, row), waitFor: anyEntrance}); } } ); } \end{lstlisting} Target cells are regular space cells, with an additional trait: when the maze walker enters them, \code{TARGET\_FOUND\_EVENT} is selected. 
To achieve this, the parser adds an additional b-thread for each target cell. This b-thread waits for the relevant entry event, and then requests \code{TARGET\_FOUND\_EVENT}, while blocking all other events. \begin{comment} \begin{lstlisting}[ float, label={lst:maze-target}, mathescape=true, caption={B-thread template for announcing a target located at \code{(row, col)} was found. In order to ensure that the announcement recognizes the correct cell, the \nhcode{bsync} requesting the announcement event blocks all other events.} ] bp.registerBThread("target", function () { bsync({ waitFor: enterEvent(col, row) }); bsync({ request: TARGET_FOUND_EVENT, block: bp.allExcept(TARGET_FOUND_EVENT) }); }); \end{lstlisting} \end{comment} Similarly, a start cell is a space cell with an additional behavior: when the maze program starts, it already has a maze walker in it. Thus, when handling a start cell, the maze parser generates an additional b-thread that requests an entry event with that cell's coordinates when the b-program starts. Parsing a maze description results in a b-program modeling the maze. Running the b-program will make the maze walker perform a valid random walk through the maze. It will start at the start cell and will not pass through walls, and if it stumbles upon the target cell, \code{TARGET\_FOUND\_EVENT} will be fired. To solve a maze efficiently, we can use verification. Our formal requirement will be that \code{TARGET\_FOUND\_EVENT} is never fired. A b-thread modeling this requirement will \nhcode{waitFor} this event, and then call \nhcode{bp.ASSERT(false)}, to indicate that a requirement has been violated. After adding this b-thread to the generated b-program, we can pass it to a b-program verifier. If the maze has a solution, the verifier will return a counterexample, with the path to the target cell encoded in its event trace. Verification of a maze b-program can take a long time and generate an arbitrarily long counterexample.
This is because of the random walk: while a counterexample will be found, the length of the path it uses to get to the target cell is unbounded. We can solve this issue by adding a b-thread that prevents the maze walker from entering the same cell twice (see Listing~\ref{lst:mazes-once}). This thread turns the random walk into a walk that enters a new cell at every step, until it gets terminally stuck. The verification process examines all such possible walks, and returns one that ends up at the target cell. This is an example of a \emph{simplifier} b-thread, which reduces the search space required for verification. Note that adding this b-thread eliminates an infinite number of maze solutions: all paths from start to target that visit a cell more than once. However, for each eliminated solution, there exists a solution where each cell along the path is visited only once. This solution, which is a local minimum, remains in the search space. Thus, the verification is sound, even though it does not traverse the entire state space. \begin{lstlisting}[ float, label={lst:mazes-once}, caption={A b-thread preventing the maze walker from entering a cell twice. During verification, this b-thread reduces the search space, while maintaining the soundness of the verification result.} ] bp.registerBThread("onlyOnce", function(){ var block = []; while (true) { var evt = bsync({waitFor: anyEntrance, block: block}); block.push(evt); } }); \end{lstlisting} \subsection{Tic Tac Toe} \label{sub:sample-prog:tic_tac_toe} In this subsection, we use a TicTacToe program to demonstrate the usage of a priority-based event selection strategy, model execution and analysis, and environment simulation. The b-program presented here is borrowed, with modifications, from~\cite{bpjecoop}, where it was used to demonstrate the concept of aligning b-threads with system requirements. Our Tic Tac Toe program plays against a human on a $3 \times 3$ board, using classic Tic Tac Toe rules.
It is made of two groups of b-threads. The first group enforces the rules of the game, such as ``players take turns'' (using a b-thread similar to the temperature-regulating b-thread in Listing~\ref{lst:hotcold-balancer}), or ``a square can be marked only once'' (using a b-thread similar to the simplifier thread in Listing~\ref{lst:mazes-once}). B-threads from this group are also responsible for detecting when a player wins, or if a game ends in a tie. The second group forms the strategy of the computer player. It is composed of numerous b-threads that are responsible for defending against opponent attacks, and taking the initiative when possible. Listing~\ref{lst:ttt-strategy} shows two of these b-threads. Assume that \code{line} holds an array of three squares that forms a line on the board (i.e. a row, column, or diagonal). If player O has already placed $O$s in the first two squares, the b-thread \code{AddThirdO} suggests placing an $O$ in the remaining square, winning the game for player O. Similarly, if player X has already placed $X$s in the first two cells of that line, b-thread \code{PreventThirdX} requests putting an $O$ at the third square, in order to prevent the opponent from winning. The calls to \nhcode{bp.sync} where the aforementioned events are requested use an additional parameter --- the priority of the request. Here, adding the third $O$ has a higher priority than preventing a third $X$. This is because the strategy prefers an immediate win over preventing a loss and continuing the game. The priority semantics of the second \nhcode{bp.sync} parameter are defined by the event selection strategy used by the Tic Tac Toe b-program. From BPjs' point of view, the second parameter has no defined semantics --- it is a hint to the b-program's event selection strategy. It becomes a part of the calling b-thread's synchronization statement, and BPjs passes it to the event selection strategy as-is.
This allows BPjs to support multiple event selection strategies in a fully pluggable way. \begin{lstlisting}[ float, label={lst:ttt-strategy}, caption={Two b-threads that are part of the player strategy. \code{AddThirdO} is responsible for completing lines of two $O$s into a triplet (and, thus, winning the game). \code{PreventThirdX} is responsible for preventing the opponent from doing the same. The usage of requests with priorities (lines 5 and 12) ensures that, when required to choose between completing a row and blocking an opponent, the strategy will always prefer the former.} ] bp.registerBThread("AddThirdO", function() { while (true) { bp.sync({waitFor:[O(line[0].x, line[0].y)]}); bp.sync({waitFor:[O(line[1].x, line[1].y)]}); bp.sync({request:[O(line[2].x, line[2].y)]}, 50); } }); bp.registerBThread("PreventThirdX", function() { while (true) { bp.sync({waitFor:[X(line[0].x, line[0].y)]}); bp.sync({waitFor:[X(line[1].x, line[1].y)]}); bp.sync({request:[O(line[2].x, line[2].y)]}, 40); } }); \end{lstlisting} The Tic Tac Toe b-program can be used in two contexts: as a controller part in an interactive application, and as a model being verified. In a GUI context, a host GUI application runs the b-program internally. User clicks are translated to \code{X(x,y)} events and placed in the b-program's external event queue. The host application listens to events selected by its b-program, and updates its UI accordingly (e.g. drawing an $X$ in the top-right square when \code{X(2,0)} is selected). In a verification context, we use the same b-program, and add b-threads that model requirements. Listing~\ref{lst:ttt-x-never-win} shows a b-thread modeling the requirement ``X should never win''. Verifying our strategy against this requirement ensures that we have created a good Tic Tac Toe strategy\footnote{Tic Tac Toe does not have a strategy that guarantees victory. There exists a strategy which guarantees not losing the game, though.}. 
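The effect of the priority hints used by \code{AddThirdO} and \code{PreventThirdX} can be illustrated with a small, self-contained sketch of priority-aware selection. This is an illustrative model, not BPjs' actual strategy interface; the function name and statement encoding are our own.

```javascript
// Illustrative priority-aware event selection: each synchronization
// statement may carry a priority hint; among events that are requested
// and not blocked, pick one requested at the highest priority.
function selectByPriority(statements) {
  const blocked = new Set(statements.flatMap(s => s.block || []));
  let best = null, bestPriority = -Infinity;
  for (const s of statements) {
    const priority = s.priority === undefined ? 0 : s.priority;
    for (const e of s.request || []) {
      if (!blocked.has(e) && priority > bestPriority) {
        best = e;
        bestPriority = priority;
      }
    }
  }
  return best; // null when no event is selectable
}
```

Given the two strategy statements, \code{\{request: ["O(2,2)"], priority: 50\}} and \code{\{request: ["O(0,2)"], priority: 40\}}, the winning move \code{O(2,2)} is selected over the defensive one, matching the preference described above.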
\begin{lstlisting}[ float, label={lst:ttt-x-never-win}, caption={A b-thread declaring that a state where X has won is illegal. This b-thread is a direct translation of the requirement ``X should never win''.} ] bp.registerBThread("R1:XShouldNotWin", function(){ bsync({waitFor:bp.Event(XWin)}); bp.ASSERT(false, "X won."); }); \end{lstlisting} Having added a formal specification, we now need to validate the strategy against all possible opponents. Ostensibly, this is a daunting task. However, because the verifier examines every option whenever a program can advance in more than one way, we can perform the required verification by adding a single b-thread, shown in Listing~\ref{lst:ttt-simulated-player}. The \code{SimulatedOpponent} b-thread constantly requests placing $X$s in all squares on the board. The game rule b-threads enforce that those requests are only honored during X's turn, and for non-populated squares. Thus, for each of X's possible turns, the verifier tests the consequences of placing an $X$ in each of the available locations. \begin{lstlisting}[ float, label={lst:ttt-simulated-player}, caption={A b-thread that simulates all possible behaviors of player X during model checking. The fact that this b-thread requests all squares, combined with the model checker state scanning, results in the model checker traversing the program execution sub-tree for every legal choice of placing an X on the board. Requests for placing an X in illegal locations are blocked by the game rules b-threads.} ] bp.registerBThread("SimulatedOpponent", function(){ while (true) { bsync({request:[X(0,0),...,X(2,2)]},10); } }); \end{lstlisting} \section{Evaluating BPjs} \label{sec:evaluating_bpjs} BPjs is designed to support multiple BP variants, be used in different settings, and maintain a non-intimidating, developer-friendly nature. In this section we look at our experience using BPjs, and try to assess whether these goals were achieved.
In its current form, BPjs was first used in~\cite{SemanticVariationsExe16} to run specifications written in LSC~\cite{DH01a}. To that end, the authors embedded BPjs in a command-line application. The application parsed LSC diagrams into b-programs written in JavaScript, and passed these programs to BPjs for execution. Internally, our group used BPjs to program a robot in Robocode~\cite{Robocode2004}, and in a desktop GUI application (the Tic Tac Toe example presented in this paper). Recently, a team of students and researchers embedded BPjs in a web server in order to implement a graphical, web-based rule engine. The system was developed during a 28-hour hackathon, and went on to win first prize\footnote{First place in HackBGU, https://bit.ly/2riUjB0}. Lastly, we are currently building a prototype of on-board satellite control software based on BPjs. This project was funded by the Israel Innovation Authority, after a board of external experts examined BPjs and BP, and found them fitting for satellite control. Thus, we feel confident saying that BPjs can be used successfully in different settings. In order to test how approachable BPjs is, we have used it as a teaching environment in an undergraduate course for computer science (CS) and information systems engineering (ISE) students. Students chose between implementing a project in BPjs or in an IoT environment. Out of 42 students, 20 chose to work with BPjs. Implemented projects included a web-based PacMan game, a Blockly based interface for behavioral programming, an implementation of strategies in computer games, and more. Students were instructed to first read the tutorials and only then approach the authors with questions regarding BPjs. Except for a single case where students implemented their own event selection strategy, the tutorials proved to be enough for students to implement their projects.
In the course feedback, a 3rd year CS student writes: ``the whole idea of this system is very interesting for me, because it is actually a different approach to problems than the approach we, as students in CS, are used to have. The way BP engine works \ldots might be strange in the beginning, but when I got into it - it looked really logical and obvious''. Another student writes: ``Using the decision engine based on request, wait-for, and block, was initially hard to understand, but after a few examples I was able to understand it and enjoy using it. I found this way of thinking to be interesting and challenging''. One student concluded his feedback saying: ``Finally, I can say that this system is revolutionary in the way it sees and solves problems, but at the same time really friendly to the user.'' At the end of the course, a student survey was conducted, yielding the following results: \begin{center} \begin{tabular}{ | p{4.5cm} | p{2.5cm} |} \hline \textbf{Question} & \textbf{Answer Average (0-5)} \\ \hline \hline The material was presented in a clear and organized manner. & 4.7 \\ \hline I found the material and the way it was presented interesting. & 4.6 \\ \hline The use of the tool motivated me to be creative. & 4.7 \\ \hline I found the material taught in class relevant. & 4.5 \\ \hline Overall satisfaction. & 4.6 \\ \hline \end{tabular} \end{center} \section{Related Work} \label{sec:related_work} The closest relative to BPjs is BPj. Introduced in~\cite{harel2010programming}, the paper that also introduced Behavioral Programming, BPj allows writing behavioral programs using Java. Unlike BPjs, BPj was never designed as a general framework. Over time, a few mutually incompatible versions emerged, as researchers adapted it to fit specific needs. For technical reasons, BPj does not support verification beyond Java 5. Like BPjs, BPj is open source software.
However, its infrastructure is not adjusted to modern open source standards: it uses an IDE-specific project structure, and was developed in a closed repository with the source code available as a zip archive. Our work on BPj informed many of our decisions while working on BPjs --- in particular, the decisions to use a modular design, an IDE-agnostic project format, and an open, collaboration-friendly code repository. BPC~\cite{harel2014scaling} is a framework for writing behavioral programs in C++. Like BPjs, it offers customizable events and event selection. BPC also supports time constraints, a feature not currently supported by BPjs. BPC supports limited verification, based on examining the state graph of each b-thread, and then composing and analyzing the program graph using an external tool. A key difference between BPC and BPjs is that BPC was not planned to be embedded in host applications, but rather to be run standalone. However, as demonstrated in Section~\ref{sec:evaluating_bpjs}, the ability to embed a b-program in a larger system opens many interesting use cases, for which BPC cannot be used. Both BPC and BPj use a single system thread per b-thread, which makes b-threads expensive. This is an issue, since under the BP paradigm, b-programs should be able to consist of many small b-threads. In BPjs, on the other hand, a single system thread can run multiple b-threads, which makes b-threads much cheaper. This is a technical difference, but it enables BPjs to run larger b-programs using fewer resources. An obvious difference between BPjs, BPC, and BPj is the language used for writing b-threads. While BPC and BPj use a programming language, complete with static typing and strict class definitions, BPjs uses a lenient scripting language. We claim that scripting languages are generally a better choice for BP, because the code for b-threads is normally quite short.
Short programs do not require much code organization, and so the syntactic overhead imposed by programming languages in order to organize code brings no benefit to b-threads. Scripting languages allow writing cleaner b-thread code, as they skip a lot of boilerplate. PlayEngine~\cite{HM03}, PlayGO~\cite{playgo}, and Scenario Tools~\cite{greenyer2017scenariotools} support BP by implementing Live Sequence Charts (LSC)~\cite{DH01a}. BPjs, as well as BPC and BPj, offer more generality, but lack the slick visual interface these tools have. Looking more broadly at executable modeling languages, Executable UML~\cite{mellor2002executable} models general programs using diagrams with executable semantics, and an action language. In~\cite{SemanticVariationsExe16} we have shown how BPjs can be used to define executable semantics of diagrammatic languages, and applied the proposed technique to UML's Sequence Diagrams. In this capacity, BPjs can provide a common meta-language for executable UML diagrams. JavaPathfinder (JPF)~\cite{JPF} performs verification directly on Java code. Like BPjs, JPF uses its own runtime mechanism to execute the analyzed program. Another similarity is that both JPF and BPjs are designed to be very modular. The main difference between BPjs and JPF is that BPjs assumes it is verifying a behavioral program, and thus can limit the states it examines to synchronization points only. JPF cannot make this assumption, and thus has to process a significantly larger number of states. \section{Discussions} \label{sec:discussions} This section briefly discusses options and questions raised by BPjs regarding Behavioral Programming and software engineering in general. These discussions are not meant to be exhaustive, but rather to serve as starting points for a conversation.
\subsection{Event Selection Strategies} \label{subs:event-selection-strategies} Behavioral Programming requires that at every synchronization point, only events that are requested and not blocked can be selected. The decision of which event to select is left to the implementation. Indeed, different BP implementations choose their events in different ways. The original implementation~\cite{harel2010programming} prioritized events according to the b-thread that requested them. Subsequent versions used randomized selection, among other strategies. BPjs introduces a unified interface for event selection algorithms, which, to the best of our knowledge, supports all existing algorithms. This interface defines two methods: the first accepts the state of a b-program at a synchronization point, and returns a set of selectable events. The second method accepts a set of selectable events and a b-program's state, and performs the actual selection. The former method is used during verification, while execution uses both. This separation allows event selection algorithms to use randomized event selection when more than a single event is selectable. In order to support all existing strategies, BPjs allows b-threads to pass a meta-data object to \nhcode{bp.sync}, in addition to the usual statement declaring which events are requested, waited-for, and blocked. The additional object may guide the event selection strategy's choice. BPjs itself includes four different event selection strategies, each of which was found useful, either by us or by prior work. \begin{enumerate} \item \emph{Simple} Selects a random event that is requested and not blocked. \item \emph{Prioritized B-Threads} Maintains a priority for each b-thread, and selects a non-blocked event requested by the highest-priority b-thread that requested such an event. \item \emph{Prioritized Synchronizations} B-threads may add a priority to their \nhcode{bp.sync} statements.
Given a set of events that are requested and not blocked, this strategy will randomly select an event with the maximum priority. The Tic Tac Toe example presented in Subsection~\ref{sub:sample-prog:tic_tac_toe} makes use of this strategy. \item \emph{Ordered Events} BPjs allows a b-thread to request multiple events by passing an event array to \nhcode{bp.sync}. Other strategies treat this array as a set of events; this strategy treats it as a list. Thus, for each b-thread, it will only consider selecting the first requested event that is not blocked. \end{enumerate} Many other options for event selection spring to mind. For example, events can be selected using a majority vote (calculated using \nhcode{request}s or \nhcode{waitFor}s). Algorithms can use planning, heuristics, or consult machine learning algorithms. Event selection algorithms can use analysis as a way of informing event selection decisions. The basic idea, similar to the one used by many chess playing programs, is to examine the likely outcomes of selecting each of the selectable events at a given moment. To achieve this, an event selection strategy can perform a limited verification session, starting at the state in question, and selecting each possible event. The verification does not have to be exhaustive --- it may be enough to see what states are reachable within a limited number of steps, and how likely the program is to actually reach them. Given this data and a scoring algorithm for each reachable state, the event selection strategy will have valuable information to work with when deciding which event to select. \paragraph*{Hierarchies} Event selection strategies can be ordered by hierarchy, according to the event sequences they allow. An event selection strategy $s_{child}$ is under event selection strategy $s_{parent}$ if all event sequences made possible by $s_{child}$ are also possible under $s_{parent}$. Thus, if a system was verified under $s_{parent}$, it is also valid under $s_{child}$.
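As an illustration, the two-method event selection interface and the Simple strategy described above can be sketched as follows. This is a minimal Python sketch, not BPjs's actual Java API; all class, method, and field names here are hypothetical.

```python
import random
from collections import namedtuple

# A b-thread's declaration at a synchronization point (hypothetical shape).
SyncStatement = namedtuple("SyncStatement", ["requested", "waited_for", "blocked"])

class EventSelectionStrategy:
    """Sketch of the two-method interface: one method computes the
    selectable events at a synchronization point (used by both verification
    and execution), the other performs the actual selection (execution only)."""

    def selectable_events(self, sync_state):
        # An event is selectable if some b-thread requests it
        # and no b-thread blocks it.
        requested = set().union(*(s.requested for s in sync_state))
        blocked = set().union(*(s.blocked for s in sync_state))
        return requested - blocked

    def select(self, selectable, sync_state):
        raise NotImplementedError

class SimpleStrategy(EventSelectionStrategy):
    """Picks a random requested-and-not-blocked event."""

    def select(self, selectable, sync_state):
        return random.choice(sorted(selectable))

# Two b-threads: one requests "hot"; the other requests "cold" and blocks "hot".
state = [SyncStatement({"hot"}, set(), set()),
         SyncStatement({"cold"}, set(), {"hot"})]
strategy = SimpleStrategy()
selectable = strategy.selectable_events(state)  # {"cold"}
chosen = strategy.select(selectable, state)     # "cold"
```

During verification only the first method matters, so a randomized `select` implementation does not change the set of states that model checking explores.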
The Simple event selection strategy, which selects an event that is ``requested and not blocked'', is at the top of this hierarchy, as it allows all event sequences. Thus, if a system was verified with the Simple event selection strategy, it will be valid with \emph{any} event selection strategy. Event selection strategies that use analysis, as described above, present an interesting combination of forward execution and model checking, creating a smart runtime. Of course, as verification can take time, an event selection strategy may choose to use verification only when it has enough time (e.g. in a drone, with no obstacles close by), and fall back to a faster event selection algorithm when a decision has to be made quickly. \subsection{Development Process of a Behavioral System} \label{sub:development_process_in_bpjs} The process of developing a software system using BPjs differs from the regular process of developing a system, in that developers may directly verify the system being developed against a set of formal requirements. Conveniently, the requirements and the system are described using the same language, tools, and formalisms. A system developed in BPjs is composed of b-threads. Normally, each b-thread assumes one of the following roles: \begin{enumerate} \item\emph{Model b-threads} These b-threads form the core of the system. The equivalents of source code in regular software development, these b-threads will ship as part of the production system once it is ready. \item\emph{Requirement b-threads} These b-threads form the formal requirements of the system. They follow the progression of the system while making assertions about it. When such an assertion fails, the system is considered to be in violation of the requirement. Depending on the verification done before shipping the system, requirement b-threads may or may not be part of the complete product.
Requirement b-threads against which the b-program was validated can be removed from the system, as the developers can safely assume that none of the requirements they represent are violated. Requirement b-threads that were not validated against can ship as part of the finished product, and halt the system in case a violation occurs. This is a form of monitoring, or ``runtime verification''~\cite{runtimeVerification}. \item\emph{Environment b-threads} These b-threads are used during the verification process to simulate the environment the system interacts with (Listing~\ref{lst:ttt-simulated-player} is an example of such a b-thread). These b-threads are not part of the shipped product. In this capacity, they are similar to unit testing code in mainstream software development. \item\emph{Assumptions b-threads} These b-threads allow for a faster verification process by narrowing the number of execution possibilities. They are important, as verification of a non-trivial system can take a long time. However, they should be used with care, since they may eliminate cases that might occur during execution. Such b-threads should be written in a way that makes the scenarios they eliminate very clear. One example of such a b-thread is the \code{onlyOnce} b-thread of the mazes example (Listing~\ref{lst:mazes-once}). This b-thread eliminates an infinite number of runs by preventing the maze walker from entering the same cell twice. However, said elimination does not invalidate the verification process, since for each path from \code{S} to \code{T} that contains repeated entries to cells, we can build an equivalent simple path that also arrives at the target cell. Thus, if there is a path from \code{S} to \code{T} in the given maze, \code{onlyOnce} would not prevent the maze walker from discovering it.
\end{enumerate} \subsection{Software Engineering Practices} \label{sub:software_engineering_practices} A BPjs-based Java application is not a model, as it includes event handlers, possibly a custom event selection strategy, and other mainstream programming constructs. However, when used properly, BPjs allows the core decision-making algorithms of said application to be a model. Thus, BPjs enables software engineers to verify (rather than test) core parts of the systems they develop. A fully idiomatic b-program, where: \begin{itemize} \item B-threads communicate through b-events only \item Reading inputs is done only through the external events queue \item Outputs are generated only through event listeners \end{itemize} is a model, since it can be formally analyzed, and, in particular, verified. \paragraph*{Iterative Process for Event Selection Strategies} As discussed in Subsection~\ref{subs:event-selection-strategies}, a good event selection strategy can greatly contribute to an application, both during the development process and at runtime. However, creating a good event selection strategy is not always trivial, especially during the first phases of a project. Building upon the hierarchical ordering of event selection strategies presented earlier in this section, we can propose an iterative development approach: first, start with a general strategy that might allow inefficient event selection; in subsequent iterations, refine the strategy. As long as the set of event traces possible under the refined strategy is a subset of the event traces possible under the basic strategy, the verified properties of the system will hold. Corollary: it is always possible to start with the Simple strategy and refine it as the project progresses. \section{Summary} \label{sec:summary} This paper presented a consistent, unified semantics for BP. The proposed semantics supports almost all of the existing body of work on BP.
Such a semantics was hitherto unavailable, which we believe hindered research in this area. Additionally, this paper introduced BPjs, a generic platform for developing software systems based on the proposed BP semantics. BPjs is intended to be a ``blue collar'' BP platform, to borrow a phrase from James Gosling's introduction of Java~\cite{gosling1997feel}. While its creation required novel ideas and generalizations, its usage should be easy and down to earth. As such, it can be used to introduce students and practitioners to BP, and to model-driven design in general. BPjs can be improved in many ways, which we leave to future work (by us and others). These include better performance, lower memory requirements, new event selection strategies, improved debugging and logging tools, and a stronger verifier. Porting parts of the substantial body of work already developed using BP is also a worthwhile undertaking. By providing a common environment that is both extensible and easy to use, BPjs allows researchers and industry to share and re-use tools and code, similar, perhaps, to ROS~\cite{quigleyros} in robotics, or to R~\cite{r-lang} and Zelig~\cite{zelig:paper,zelig:manual} in statistics. We hope BPjs becomes a boring part of BP research, a piece of software that makes creating the exciting parts easier. \section*{Acknowledgment} This type of project relies on the generosity of companies that provide free, high-quality infrastructure for open source projects. We thank GitHub, Sonatype (Maven Central), Travis CI GmbH, Coveralls.io, Readthedocs.io, and javadoc.io, whose excellent services are used in the development of BPjs. The authors would like to thank David Harel and Assaf Marron for discussions, guidance, and hosting a presentation of an early version of this work at the Weizmann Institute of Science. We also thank Aviran Saadon, Achiya Elyasaf, and Meytal Genah for using BPjs in their research, providing critical input, and bearing with us while we fixed the bugs they found.
Finally, we thank Moshe Weinstock for technical contributions during the early stages of the project, and Attila Szegedi for crucial work regarding Rhino continuations. \bibliographystyle{plainnat}
\section{Introduction} In the last two decades the chemistry of nitrogen, the fifth most abundant element in the Universe, has raised interest in the context of understanding the formation of interstellar materials and of our own Solar System. In particular, the isotopic ratio $^{14}$N/$^{15}$N seems to represent an important diagnostic tool to follow the evolutionary process from the primitive Solar Nebula (where measurements indicate $^{14}\text{N}/^{15}\text{N}\approx440-450$, \citealt{Marty11, Fouchet04}) up to the present. The materials of the Solar System, from meteorites to the Earth's atmosphere, are enriched in $^{15}$N, with the exception of Jupiter's atmosphere. Measurements of N$_2$ in the terrestrial atmosphere led to the result of $^{14}$N/$^{15}$N $\approx 272$ \citep{Nier50}, and carbonaceous chondrites show isotopic ratios as low as 50 \citep{Bonal10}, suggesting that at the origin of the Solar System multiple nitrogen reservoirs were present \citep{Hily-Blant17}. \par The nitrogen isotopic ratio has also been measured in different cold environments of the interstellar medium (ISM), and the results show a remarkable spread. \cite{Gerin09} found $^{14}$N/$^{15}$N $=350-810$ in NH$_3$ in a sample of low-mass dense cores and protostars, while using HCN, \cite{Hily-Blant13} found $^{14}$N/$^{15}$N $=140-360$ in prestellar cores\footnote{The reader must be aware that isotopic ratios measured using HCN (or HNC) depend on the value assumed for the $^{12}$C/$^{13}$C ratio \citep{Roueff15}.}. Using N$_2$H$^+$, \cite{Bizzocchi13} report a value of $^{14}$N/$^{15}$N $=1000\pm200$ in L1544. In high-mass star-forming regions, the ratio spans a range from 180 up to 1300 in N$_2$H$^+$ \citep{Fontani15}, and from 250 to 650 in HCN and HNC \citep{Colzi18a}\footnote{A summary of the measured isotopic ratios can be found in \cite{Wirstrom16}.}.
\par From the theoretical point of view, these results, and especially the very high ratio of L1544, are difficult to explain. The first chemical models addressing the N-fractionation \citep{Terzieva00} suggested that diazenylium (N$_2$H$^+$) should experience a modest enrichment in $^{15}$N through the ion-neutral reactions: \begin{align} \label{Nreac1} \text{N}_2\text{H}^+ + {^{15}\text{N} }& \rightleftharpoons \text{N}^{15} \text{NH}^+ + \text{N} \; ,\\ \label{Nreac2} \text{N}_2\text{H}^+ + {^{15}\text{N}} & \rightleftharpoons {^{15} \text{N}} \text{NH}^+ + \text{N} \; . \end{align} A further development of the chemical network made by Charnley and Rodgers led to the so-called superfractionation theory. According to it, extremely high enhancements in $^{15}$N are expected in molecules such as N$_2$H$^+$ or NH$_3$ when CO is highly depleted in the gas phase \citep{CharnleyRodgers02,RodgersCharnley08}. Recently, however, based on ab initio calculations, \cite{Roueff15} suggested that reactions \eqref{Nreac1} and \eqref{Nreac2} do not occur in cold environments due to the presence of an entrance barrier. As a consequence, no fractionation is expected and the $^{14}$N/$^{15}$N ratio in diazenylium should be close to the protosolar value of $\approx440$, which is assumed to be valid in the local ISM according to the most recent results \citep[e.g.][]{Colzi18a,Colzi18b}. This value, however, can be considered an upper limit, since other recent works suggest a lower value for the elemental N-isotopic ratio in the solar neighborhood (e.g. $^{14}\text{N}/^{15}\text{N}\approx 300$, \citealt{Kahane18}, or $^{14}\text{N}/^{15}\text{N}\approx 330$, \citealt{Hily-Blant17}). None of these values is consistent with the anti-fractionation seen, for instance, by \cite{Bizzocchi13}.
More recently, \cite{Wirstrom17} included the newest rate coefficients from \cite{Roueff15} in a chemical model that takes into account also spin-state reactions, but their predictions fail to reproduce high depletion levels, as well as the high fractionation measured in HCN and HNC. \par So far, the observational evidence of anti-fractionation in low-mass star-forming regions has been sparse due to the difficulty of such investigations, which require very long integration times ($\gtrsim 8\,$h). Diazenylium presents a further complication. Often, in fact, the N$_2$H$^+$ (1-0) emission is optically thick and presents hyperfine excitation anomalies that deviate from the Local Thermodynamic Equilibrium (LTE) conditions \citep{Daniel06, Daniel13}. Thus, a fully non-LTE radiative transfer approach must be adopted, requiring knowledge of the physical structure of the observed source. This method has so far been applied to only a few sources at early stages, besides L1544. One is the core Barnard 1b, in which isotopic ratios of $\text{N}_2\text{H}^+ / \text{N}^{15} \text{NH}^+ = 400^{+100}_{-65}$ and $\text{N}_2\text{H}^+ / ^{15}\text{NNH}^+ >600$ were measured by \cite{Daniel13}. A second study was performed in the 16293E core in L1689N, and resulted in $^{14}\text{N}/^{15}\text{N} = 300^{+170}_{-100}$ \citep{Daniel16}. These two sources, however, are not truly representative of the prestellar phases. Barnard 1b in fact hosts two extremely young sources with bipolar outflows \citep{Gerin15}. 16293E, in turn, is located very close to the Class 0 protostar IRAS 16293-2422, and it is slightly warmer than typical prestellar cores ($T_{\text{dust}} = 16\,$K, \citealt{Stark04}). We can thus say that the L1544 $^{14}$N/$^{15}$N ratio in N$_2$H$^+$ appears peculiarly high, raising the suspicion that it could represent an isolated, pathological case. \par In this paper, we present the analysis of three more objects: L183, L429, and L694-2.
These are all \textit{bona fide} prestellar cores according to \cite{Crapsi05}, due to their centrally peaked column density profiles and high levels of deuteration. As in the case of L1544, we modeled their physical conditions and used a non-LTE code for the radiative transfer of the N$_2$H$^+$, N$^{15}$NH$^+$, and $^{15}$NNH$^+$ emission. Our results confirm the depletion of $^{15}$N in diazenylium in this kind of source. \section{Observations} The observations towards the three prestellar cores L183, L429 and L694-2 were carried out with the Institut de Radioastronomie Millim\'etrique (IRAM) 30m telescope, located at Pico Veleta (Spain), during three different sessions. The telescope pointing was checked frequently on planets (Uranus, Mars, Saturn) or a nearby bright source (W3OH), and was found to be accurate within $4''$. We used the EMIR receiver in the E090 configuration mode. The tuning frequencies for the three observed transitions are listed in Table \ref{Lines}. The hyperfine rest frequencies of $^{15}$NNH$^+$ and N$^{15}$NH$^+$ were taken from \cite{Dore09}. The single pointing observations were performed using the frequency-switching mode. We used the VESPA backend with a spectral resolution of $20\,$kHz, corresponding to $0.06\,$km$\,$s$^{-1}$ at $90\,$GHz. We observed simultaneously the vertical and horizontal polarizations, and averaged them to obtain the final spectra. \par \begin{table}[h] \renewcommand{\arraystretch}{1.4} \centering \caption{Rest frequencies of the observed transitions and $1\sigma$ uncertainties.
} \label{Lines} \begin{tabular}{ccc} \hline Species & Line & Frequency (MHz) \\ \hline N$_2$H$^+$ & $J = 1 \rightarrow 0$ & $93173.3991 \pm 0.0004$\tablefootmark{a} \\ N$^{15}$NH$^+$ & $J = 1 \rightarrow 0$ & $91205.6953 \pm 0.0006$\tablefootmark{b} \\ $^{15}$NNH$^+$ & $J = 1 \rightarrow 0$ & $90263.8360 \pm 0.0004$\tablefootmark{b} \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{From our calculations based on the spectroscopic constants of \cite{Cazzoli12} } \\ \tablefoottext{b}{From our calculations based on the spectroscopic constants of \cite{Dore09}} } \end{table} L694-2 was observed in good weather conditions during July 2011, integrating for $1.15\,$h on the N$_2$H$^+$ (1-0) transition, $8.9\,$h on N$^{15}$NH$^+$ (1-0), and $9.0\,$h on $^{15}$NNH$^+$ (1-0). L183 was observed in July 2012 in good to excellent weather conditions. The total integration times were $11.25\,$min ($\mathrm{N_2H^+}$) and $4.3\,$h ($\mathrm{N^{15}NH^+}$). L429 was observed during two different sessions (July 2012 and July 2017) in average weather conditions. We integrated for a total of $23\,$min on N$_2$H$^+$ (1-0) and $5.7\,$h on N$^{15}$NH$^+$ (1-0). We also observed for $1.2\,$h at the $^{15}$NNH$^+$ (1-0) frequency, but we did not detect any signal. For all the sources, we pointed at the millimetre dust peak \citep{Crapsi05}. The core coordinates, together with their distances and locations, are summarised in Table \ref{sources}. \par Complementary Herschel SPIRE data, used to obtain the density maps of the sources (see Sec. \ref{PhysMod}), were taken from the \textit{Herschel Science Archive}. The observation IDs are: 1342203075 (L183), 1342239787 (L429), and 1342230846 (L694-2). We selected the highest processing-level data, already zero-point calibrated and imaged (SPG version: v14.1.0).
Figure \ref{Cores350} shows the three cores as seen with the Herschel SPIRE instrument at $350\, \mu$m, as well as the positions of the single-pointing observations performed with IRAM. \begin{figure}[h] \centering \includegraphics[scale = 0.13]{Cores_350_SPIRE.jpeg} \caption{The three prestellar cores as seen in dust thermal emission at $350\, \mu$m with the Herschel SPIRE camera, in units of MJy$\,$sr$^{-1}$. From top to bottom: L183, L429, L694-2. The scale bar is indicated in the bottom-right corner of each panel. The white circles represent the positions of the IRAM pointings and the size of the beam. \label{Cores350}} \end{figure} \begin{table*} \renewcommand{\arraystretch}{1.4} \centering \caption{Source coordinates, distances, and locations.} \label{sources} \begin{tabular}{cccc} \hline Source & Coordinates\tablefootmark{a} & Distance (pc)\tablefootmark{b} & Location \\ \hline L183 & 15$^h$54$^m$8.32$^s$, -2$^{\degree}$52$'$23.0$''$ & 110 & High lat. cloud \\ L429 & 18$^h$17$^m$6.40$^s$, -8$^{\degree}$14$'$0.0$''$ & 200 & Aquila Rift \\ L694-2 & 19$^h$41$^m$4.50$^s$, 10$^{\degree}$57$'$2.0$''$ & 250 & Isolated core \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{Coordinates are expressed as RA, Dec (J2000)} \\ \tablefoottext{b}{Distances taken from: \cite{Crapsi05} (L429, L694-2), \cite{Pagani04} (L183). } } \end{table*} \section{Results} The obtained spectra are shown in the left panels of Figure \ref{L183} (L183), Figure \ref{L429} (L429), and Figure \ref{L694-2} (L694-2). The data were processed using the GILDAS\footnote{Available at \url{http://www.iram.fr/IRAMFR/GILDAS/}.} software, and calibrated in main beam temperature $T_{\text{MB}}$ using the telescope forward and main-beam efficiencies ($F_{\text{eff}}=0.95$ and $\eta_{\text{MB}}=0.80$, respectively) at the observed frequencies. The typical rms is $10-20\,$mK for N$_2$H$^+$ (1-0), and $3-4\,$mK for the spectra of the rarer isotopologues, resulting in good to high-quality detections.
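For reference, this calibration scales antenna temperatures $T_A^*$ by the ratio of the forward efficiency to the main-beam efficiency, $T_{\text{MB}} = (F_{\text{eff}}/\eta_{\text{MB}})\,T_A^*$; a minimal sketch (the function name is illustrative):

```python
def to_main_beam(t_antenna, f_eff=0.95, eta_mb=0.80):
    """Convert antenna temperature T_A* (K) to main-beam temperature T_MB
    using the forward (F_eff) and main-beam (eta_MB) efficiencies."""
    return t_antenna * f_eff / eta_mb

# With the efficiencies quoted above, the scaling factor is 0.95 / 0.80 = 1.1875.
scale = to_main_beam(1.0)
```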
The minimum signal-to-noise ratio (SNR) is $\approx 8$, whilst the maximum is $\approx 150$. \par The CLASS package of GILDAS was first used to spectrally fit the data. We used the HFS fitting routine, which models the hyperfine structure of the analyzed transition assuming LTE. Especially in the case of the N$_2$H$^+$ (1-0) line, this routine is not able to reproduce the observed data, because the LTE conditions are not fulfilled. This is due not only to optical depth effects, which are taken into account in the CLASS routines, but also to the fact that the excitation temperature is not the same for all the hyperfine transitions. A more refined approach that uses non-LTE analysis is therefore needed to compute reliable column densities, and will be discussed later (see Sec. \ref{Analysis}). Nevertheless, the CLASS analysis provides reliable results for the local standard of rest velocity ($V_{\text{LSR}}$), whose values are summarized in Table \ref{Vlsr}. Different isotopologues give generally consistent results, within $3\sigma$, for each source. The values derived from N$_2$H$^+$ (1-0) are also in agreement with the literature ones \citep{Crapsi05}. Table \ref{Vlsr} also summarizes the total linewidth (FWHM) obtained with the CLASS fitting routine. \begin{figure*}[h] \centering \includegraphics[scale = 0.1]{L183_BestFit_tot.jpeg} \caption{Observed spectra (black) and modeled ones (red) in L183, for N$_2$H$^+$ (top) and N$^{15}$NH$^+$ (bottom). The modeling was performed with MOLLIE as described in Sec. \ref{Analysis}. The left panels show the entire acquired spectra, while the right ones are zoom-ins of the grey-shaded velocity range. \label{L183}} \end{figure*} \begin{figure*}[h] \centering \includegraphics[scale = 0.18]{L429_BestFit_tot.jpeg} \caption{Observed spectra (black) and modeled ones (red) in L429, for N$_2$H$^+$ (top) and N$^{15}$NH$^+$ (bottom). The modeling was performed with MOLLIE as described in Sec.
\ref{Analysis}, and includes the infall velocity profile. The left panels show the entire acquired spectra, while the right ones are zoom-ins of the grey-shaded velocity range. \label{L429}} \end{figure*} \begin{table}[!h] \renewcommand{\arraystretch}{1.4} \centering \footnotesize \caption{$V_{\text{LSR}}$ estimated from the CLASS HFS fitting routine.} \label{Vlsr} \begin{tabular}{cccc} \hline Source & Line & $V_{\text{LSR}}$ (km$\,$s$^{-1}$) & FWHM (km$\,$s$^{-1}$) \\ \hline \multirow{2}{*}{L183} & N$_2$H$^+$ (1-0) & $2.4145\pm0.0004$ & $0.222\pm 0.001$ \\ & N$^{15}$NH$^+$ (1-0) & $2.390\pm0.009$ & $0.28 \pm 0.03$ \\ \hline \multirow{2}{*}{L429} & N$_2$H$^+$ (1-0) & $6.7141\pm 0.0006$ & $0.401 \pm 0.001$ \\ & N$^{15}$NH$^+$ (1-0) & $6.77\pm0.02$ & $ 0.41\pm0.06$ \\ \hline \multirow{3}{*}{L694-2} & N$_2$H$^+$ (1-0) & $9.5577\pm 0.00014$ & $0.2635 \pm 0.0004 $ \\ & N$^{15}$NH$^+$ (1-0) & $9.562\pm0.007$ & $0.32 \pm 0.03$ \\ & $^{15}$NNH$^+$ (1-0) & $9.563\pm0.011$ & $0.30 \pm 0.02$ \\ \hline \end{tabular} \end{table} \begin{figure*}[h] \centering \includegraphics[scale = 0.17]{L694-2_BestFit_tot.jpeg} \caption{Observed spectra (black) and modeled ones (red) in L694-2, for N$_2$H$^+$ (top panel), N$^{15}$NH$^+$ (middle panel), and $^{15}$NNH$^+$ (bottom panel). The modeling was performed with MOLLIE as described in Sec. \ref{Analysis}. The left panels show the entire acquired spectra, while the right ones are zoom-ins of the grey-shaded velocity range. \label{L694-2}} \end{figure*} \section{Analysis \label{Analysis}} Our aim is to derive the column density of the different isotopologues, and to compute from their ratios the values of the corresponding $\mathrm{^{14}N/^{15}N}$, assuming that they are tracing the same regions. For the previously mentioned reasons, this is not possible using a standard LTE analysis (such as the one presented in Appendix A of \citealt{Caselli02b}), as already shown for instance in the analysis of L1544 by \cite{Bizzocchi13}.
Therefore, we used a non-LTE method, based on the radiative transfer code MOLLIE \citep{Keto90, Keto04}. MOLLIE can produce synthetic spectra of a molecule arising from a source with a given physical model. In particular, it is able to treat the case of overlapping transitions, and thus can properly model the crowded N$_2$H$^+$ (1-0) pattern. In what follows, we first describe the construction of the core models, and then present the analysis of the observed spectra using MOLLIE. The case of L429, which presents peculiar issues, is treated separately. \subsection{Source physical models \label{PhysMod}} MOLLIE is able to treat genuine 3D source models. Nevertheless, for the sake of simplicity, we chose to model the cores in our sample as spherically symmetric (1D). As one can see in Figure \ref{Cores350}, this assumption holds reasonably well for the densest parts of all cores\footnote{L694-2 was modeled as an elongated cylinder with the axis almost along the line of sight by \cite{Harvey03b,Harvey03a}, but for the sake of simplicity, and given the relative roundness of the source at high density (shown in Figure \ref{Cores350}), we adopted a 1D model.}. For L183, a more sophisticated, 2D model has already been developed by our team (Lattanzi et al., in prep.). This consists of a cylinder with its axis lying on the plane of the sky. In order to be consistent with the analysis of the other two cores, we decided to average this model in concentric annuli on the plane of the sky to obtain a one-dimensional profile.\par The simplest 1D model consists of a radial volume density profile and a radial temperature profile. We thus assume that the gas kinetic temperature and the dust temperature are equal.
This is strictly true only when gas and dust are coupled ($n>10^{4-5} \,$cm$^{-3}$, \citealt{Goldsmith01}), but we do not have for all the sources enough information on the spatial distribution of the gas temperature, which would require maps of NH$_3$ (1,1) and (2,2) with JVLA (see \citealt{Crapsi07}). On the other hand, the available continuum data allow us to determine reliable values for the dust temperature with a resolution of $\approx 40''$. \par The volume density profile is derived from the analysis of the Herschel SPIRE maps at $250\, \mu$m, $350\, \mu$m, and $500\, \mu$m, as follows. Since we are interested in the core properties, we filtered out the contribution of the diffuse, surrounding material with a background subtraction. We computed the average flux of each map in the surrounding of the cores, at a distance of $\approx500-800''$. This was assumed to be the background contribution, and was subtracted from the SPIRE images pixel by pixel. Then, the background-subtracted SPIRE maps were fitted {simultaneously} using a modified black body emission, in order to obtain the dust column density map of the source {(for a complete description of the procedure, see for instance Appendix B of \citealt{Redaelli17})}. We adopted the optically thin approximation, and a gas-to-dust ratio of 100 \citep{Hildebrand83} to derive the H$_2$ column density. The dust opacity is assumed to scale with the frequency as \begin{equation} \kappa_{\nu} = \kappa_{250\mu\text{m}} \left( \frac{\nu}{\nu_{250\mu\text{m}}}\right )^{\beta} \: , \end{equation} where {$\kappa_{250\mu\text{m}} = 0.1 \,$cm$^{2}\,$g$^{-1}$} is the reference value at $250\, \mu$m \citep{Hildebrand83}, and $\beta = 2.0$, a suitable value for low-mass star-forming regions \citep{Chen16, Chacon-Tanarro17, Bracco17}. {From this procedure one also gets the line-of-sight averaged dust temperature map of each source. 
These data, however, were not used in the following analysis.} \par {The obtained column density map was averaged in concentric annuli starting from the densest pixel, and then a Plummer profile was fitted to the obtained points according to:} \begin{equation} N(r) = \frac{N(\text{H}_2)_{\text{peak}}}{\left[ 1+ \left( \frac{r}{r_0}\right)^2\right]^{\frac{p-1}{2}}} \; . \end{equation} {The obtained best-fit values of the free parameters (the characteristic radius $r_0$, the power-law parameter $p$, and the central column density $N(\text{H}_2)_{\text{peak}}$) can be used to derive the volume density profile $n(r)$, following \cite{Arzoumanian11}:} \begin{equation} n(r) = \frac{n_0}{\left[ 1+ \left( \frac{r}{r_0}\right)^2\right]^{\frac{p}{2}}} \; . \end{equation} Table \ref{PlummerPara} summarizes the best-fit values of the Plummer-profile fitting of each source. The values obtained for the $p$ exponent are in the range $\approx 2-3.5$, quite consistent with those found for other cores using similar power-law profile shapes \citep[e.g.][]{Tafalla04, Pagani07}. The profiles obtained with this method typically show $n_0 \lesssim \, \text{a few} \, 10^5\,$cm$^{-3}$, and fall below $10^5\,$cm$^{-3}$ within the central $\approx 3500\,$AU. They thus fail to reach the high volume densities typical of prestellar core centres. {In fact, the integrated $n(r)$ profiles along the line of sight result in column density values lower by a factor of 2-4 compared to the results of \cite{Crapsi05}, although the dust opacity value used in that work is consistent with ours within 15\%. This is} due to the poor angular resolution of the SPIRE maps, which were all convolved to the beam size of the $500\, \mu$m map ($\approx 38''$). The central regions of dense cores are in fact better traced with millimetre dust emission observations performed with large telescopes, which allow us to better probe their cold and concentrated structure.
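As a minimal sketch of the annulus-averaging and Plummer-profile fitting described above (not the actual pipeline), the snippet below recovers the profile parameters from a noiseless, hypothetical set of annulus-averaged column densities with L183-like values; a coarse grid search stands in for whatever minimizer was actually used:

```python
import numpy as np

def plummer_column(r, N_peak, r0, p):
    # N(r) = N_peak / [1 + (r/r0)^2]^((p-1)/2), radii in arcsec
    return N_peak / (1.0 + (r / r0) ** 2) ** ((p - 1.0) / 2.0)

# Hypothetical annulus-averaged column densities (cm^-2), L183-like parameters
r = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
N_obs = plummer_column(r, N_peak=4.0e22, r0=29.0, p=1.95)

# Coarse grid search over (r0, p); for each candidate shape the amplitude
# N_peak has a closed-form least-squares solution
best = None
for r0 in np.linspace(10.0, 50.0, 81):
    for p in np.linspace(1.5, 3.5, 81):
        shape = 1.0 / (1.0 + (r / r0) ** 2) ** ((p - 1.0) / 2.0)
        N_peak = (shape @ N_obs) / (shape @ shape)
        resid = np.sum((N_obs - N_peak * shape) ** 2)
        if best is None or resid < best[0]:
            best = (resid, N_peak, r0, p)

_, N_peak_fit, r0_fit, p_fit = best
# The volume density profile then follows as n(r) = n0 / [1 + (r/r0)^2]^(p/2)
```

With real data one would fit the noisy annulus averages with a proper least-squares routine and propagate the parameter uncertainties quoted in Table \ref{PlummerPara}.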
In order to correct for this, we artificially increased the density in the central part ($5-10\%$ of the total core radius $r_c$), until the column density derived from this profile was consistent with the value obtained from $1.2\,$mm observations. The inserted density profile was taken from the central part of the profile developed through hydrodynamical simulations for L1544 in \cite{Keto15}, a model known to reproduce prestellar core properties well. \par \begin{figure}[h] \centering \includegraphics[scale = 0.1]{Cores_PhysicalModel_new.jpeg} \caption{The volume density profile (blue dots) and the dust temperature profile (red dots) for the three cores (from top to bottom: L183, L429, L694-2), {as a function of radius, in both pc and arcsec}. The vertical, dashed lines represent the radius within which the density was artificially increased (see text for details). \label{CoresModels}} \end{figure} The volume density profile derived in the previous paragraph is used as an input for the Continuum Radiative Transfer code CRT \citep{Juvela01, Juvela05} to derive the dust temperature ($T_{\text{dust}}$) profile. CRT is a Monte Carlo code that computes the emerging emission and the dust temperature, given a background radiation field. For the latter, we used the standard interstellar radiation field of \cite{Black94}. {Since we want to model cores embedded in a parental cloud, the background radiation field has to be attenuated. Our team has tabulated values for the \cite{Black94} model with attenuations corresponding to visual extinctions of $A_V=1, \; 2, \; \text{and} \;10\,$mag. We tested all three options, and found that the first two generally yield temperatures that are too warm. We thus decided to assume that the radiation impinging on the cores is attenuated by an ambient cloud whose thickness corresponds to a visual extinction of $A_V=10\,$mag.
The external radiation field can still be multiplied by a factor $k$, in order to correctly reproduce the emitted surface brightness. To determine this parameter, we tested a number of values in the range $k=0.5- 5.0$, and for each one we computed the synthetic flux emitted at the SPIRE wavelengths at the cores' centres. We adopted the model that provides the best agreement with the observations\footnote{When the observations were available, we also simulated the flux at mm wavelengths.}. Typically, we needed to increase the external radiation field by a factor of 2-3, suggesting that the assumed thickness of the ambient cloud was too large, and that the correct attenuation lies somewhere between $2$ and $10\,$mag. This assumption is reasonable, as the H$_2$ column density derived from the SPIRE maps around the cores is usually $\approx 5\times 10^{21}\,$cm$^{-2}$, although we do not know the three-dimensional structure of the cloud.} The dust opacities are taken from \cite{OssenkopfHenning94} for unprocessed dust grains covered by thin icy mantles. This choice ensures that the dust opacity values used in this part are consistent with that of \cite{Hildebrand83}, used for fitting the SPIRE maps. \par The models built so far are static, i.e. the velocity field is zero everywhere. However, we know that many cores show hints of infall or expansion motions \citep{Lee01}. The velocity field can heavily affect the spectral features, and, when possible, it must be taken into account. For L183, in Sec. \ref{MOLLIE} we will show that the static model is good enough to properly reproduce the observations. For L694-2, we used the infall profile derived in \cite{Lee07} using high spatial resolution HCN data. L429 represents a more difficult case, and will be treated in detail in a dedicated subsection.
\begin{table}[h] \renewcommand{\arraystretch}{1.4} \centering \caption{Summary of the best-fit values for the parameters of the Plummer profiles, for each source.} \label{PlummerPara} \begin{tabular}{ccc} \hline $n_0\, (\text{cm}^{-3})$ & $p$ & $r_0 ('')$ \\ \hline \multicolumn{3}{c}{L183} \\ $(4.1 \pm0.7)10^5$ & $1.95\pm0.10$ & $29\pm 2$ \\ \hline \multicolumn{3}{c}{L429} \\ $(4.1 \pm0.7)10^5$ & $1.86\pm0.08$ & $16 \pm1$ \\ \hline \multicolumn{3}{c}{L694-2} \\ $(1.8 \pm0.5)10^5$ & $3.32\pm0.56$ & $41\pm6$ \\ \hline \end{tabular} \end{table} \subsection{Spectral modeling with MOLLIE \label{MOLLIE}} The physical models developed in Sec. \ref{PhysMod} are used as inputs for MOLLIE. The structure of each core is modeled with three nested grids of increasing resolution towards the centre, each composed of 48 cells, onto which the profiles of the physical quantities are interpolated. The collisional coefficients used are from \cite{Lique15}, who computed them for the main isotopologue and the most abundant collisional partner, $p$-H$_2$. Collisions with $o$-H$_2$ are thus neglected, which is a reasonable assumption given that the ortho-to-para ratio (OPR) in dense cores is expected to be very low (in L1544, $\text{OPR}\approx 10^{-3}$, \citealt{Kong15}). The collisional coefficients for N$^{15}$NH$^+$ and $^{15}$NNH$^+$ have been derived from those of N$_2$H$^+$ following the method described in Appendix \ref{HypRates}. \par Our fit procedure has two free parameters, the turbulent velocity dispersion, $\sigma_{\text{turb}}$, and the molecular abundance $X_{\text{st}}$ with respect to H$_2$, assumed to be radially constant. Since MOLLIE requires very long computational times to fully sample the parameter space and find the best-fit values, we proceeded with a limited parameter space sampling. We first set the $\sigma_{\text{turb}}$ value, testing $\approx 5$ values on the N$_2$H$^+$ (1-0) spectra. This value is then kept fixed also for the N$^{15}$NH$^+$ and $^{15}$NNH$^+$ (1-0) lines.
Then we produced $8$ synthetic spectra for each transition, varying the abundance each time, and convolving them to the $27''$ IRAM beam. The results were compared to the observations using a simple $\chi^2$ analysis, i.e. computing: \begin{equation} \chi^2 = \sum_i \left\{ \frac{\left( T_{\text{MB,obs}}^i - T_{\text{MB,mod}}^i \right)^2}{\sigma_{\text{obs}}^2} \right\} \: , \label{chi2} \end{equation} where $T_{\text{MB,obs}}^i$ and $T_{\text{MB,mod}}^i$ are the main beam temperatures in the $i$-th velocity channel for the observed spectrum and the modeled one, respectively, and $\sigma_{\text{obs}}$ is the rms of the observations. The sum is computed excluding signal-free channels. In order to evaluate the uncertainties, we fitted a polynomial function to the $\chi^2$ distribution and set the lower/upper limits on $X_{\text{st}}$ according to $\chi^2$ variations of $\approx 20 \%$ for the N$_2$H$^+$ (1-0) spectra and $\approx 15 \%$ for the other isotopologues. We chose these two different limits due to different opacity effects. In fact, the N$_2$H$^+$ (1-0) lines are optically thick, and thus changes in the molecular abundance lead to smaller changes in the resulting spectra compared to the optically thin $^{15}$NNH$^+$ and N$^{15}$NH$^+$ lines. Since the $\chi^2$ distribution is usually asymmetric, so are the error bars. In order to evaluate the column densities, we integrated the product $n(\text{H}_2) \cdot X_{\text{st}}$ (convolved to the IRAM beam) along the line of sight crossing the centre of the model sphere. {In Appendix \ref{chi2App}, we report the curves for the $\chi^2$ in the analysed sources.} \par Figures \ref{L183} and \ref{L694-2} show the best-fit spectra (in red), obtained as just described, in comparison with the observed ones (black curve) for L183 and L694-2, respectively. The overall agreement is good, and most of the spectral features are well reproduced, as seen in the right panels, which show a zoom-in of the main component.
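The channel-by-channel $\chi^2$ comparison above can be written compactly as follows; the Gaussian toy spectra and the $3\sigma$ signal mask are assumptions for illustration only:

```python
import numpy as np

def chi2(T_obs, T_mod, sigma_obs, signal_mask):
    # Sum over the non-signal-free channels of (T_obs - T_mod)^2 / sigma_obs^2
    d = (T_obs - T_mod)[signal_mask]
    return np.sum(d ** 2) / sigma_obs ** 2

# Toy spectra: a Gaussian line over 100 velocity channels, rms = 0.05 K
v = np.linspace(-2.0, 2.0, 100)
T_obs = 1.0 * np.exp(-v**2 / (2 * 0.2**2))
T_mod = 0.9 * np.exp(-v**2 / (2 * 0.2**2))
mask = T_obs > 3 * 0.05        # exclude signal-free channels

print(chi2(T_obs, T_mod, 0.05, mask))
```

The asymmetric error bars then follow by evaluating $\chi^2$ on the abundance grid, fitting a polynomial to it, and reading off where it rises by the quoted $15-20\%$.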
\subsection{The analysis of L429} L429 represents a more difficult case to model. As one can see in Figure \ref{L429} and from the last column of Table \ref{Vlsr}, the N$_2$H$^+$ (1-0) line is almost a factor of two broader than in the other two sources. This may be due to the fact that this core is located in a more active environment, the Aquila Rift, but it can also hint at multiple components along the line of sight. Moreover, concerning its velocity field, \cite{Lee01} listed L429 among the ``strong infall candidates'', while in \cite{Sohn07} the analyzed HCN spectra show both infall and expansion features. A full characterization of the dynamical state of the source and its velocity profile would require high quality, high spatial resolution maps of molecular emission, which is beyond the scope of this paper. As a first step, we tried to fit the observed spectra by increasing $\sigma_{\text{turb}}$. The static model is, however, unable to reproduce the hyperfine intensity ratios, and thus we adopted the infall profile of L694-2. The agreement with the observations improved significantly, meaning that a velocity field is indeed required to model the spectra. Due to the difficulties in analyzing this source, the $\chi^2$ analysis previously described is not suitable, {because the $\chi^2$ distribution presents an irregular shape and its minimum corresponds to a clearly wrong solution, since it is not possible to simultaneously reproduce the intensities of all the hyperfine components}. We therefore determined $X_{\text{st}}$ in the same way as for $\sigma_{\text{turb}}$, testing multiple values. We then assigned the uncertainty to this value using the largest relative uncertainties found in the other two sources ($24\%$ for N$_2$H$^+$ (1-0) and $25\%$ for N$^{15}$NH$^+$ (1-0)).
\subsection{Obtained results} Table \ref{Results} summarizes the values of $\sigma_{\text{turb}}$, $X_{\text{st}}$, and column density $N_{\text{mol}}$ for each line in the observed sample. As a sanity check, since the rare isotopologue transitions are optically thin and do not present intensity anomalies, we also derived their column densities using the LTE approach of \cite{Caselli02b}, focusing on the main component only. The results of this analysis are shown in the 6th column of Table \ref{Results} ($N_{\text{mol}}^{\text{LTE}}$). One can note that these values are consistent with the ones derived through the non-LTE method. {The L183 physical structure and N$_2$H$^+$ emission have been previously modelled by \cite{Pagani07}. It is interesting to note that their best-fit profiles for both density and temperature are close to ours, even though their model is warmer in the outskirts of the source. Furthermore, despite a different abundance profile, their derived N$_2$H$^+$ column density is consistent with our value.} \par \begin{table*} \renewcommand{\arraystretch}{1.6} \centering \caption{Parameters and results of the modelling with MOLLIE.} \label{Results} \begin{tabular}{c|cccccc} \hline Source & Line & $\sigma_{\text{turb}}$/km$\,$s$^{-1}$ & $X_{\text{st}}/10^{-13}$ & $N_{\text{mol}}/10^{10}$cm$^{-2}$ & $N_{\text{mol}}^{\text{LTE}}/10^{10}$cm$^{-2}$ & $^{14}$N/$^{15}$N \\ \hline L183 &N$_2$H$^+$ & 0.12 & $ 2.50^{+0.25}_{-0.60} \, 10^{3}$ & $1.29^{+0.13}_{-0.31} \, 10^{3}$ & - & \\ &N$^{15}$NH$^+$ & 0.12 & $ 3.75^{+0.95}_{-0.75} $ & $ 1.93^{+0.49}_{-0.38} $ & $ 2.24^{+0.54}_{-0.54} $ & $ 670^{+150}_{-230}$ \\ \hline L429 & N$_2$H$^+$ & 0.23 & $4.5 \,10^3$ & $1.82^{+0.44}_{-0.44}\,10^{3}$ & - & \\ &N$^{15}$NH$^+$ & 0.23 & $5.5$ & $2.5^{+0.63}_{-0.63}$ & $ 2.46^{+0.44}_{-0.44} $ & $730^{+250}_{-250}$ \\ \hline L694-2 & N$_2$H$^+$ & 0.12 & $ 3.50^{+0.60}_{-0.35} \, 10^{3}$ & $ 1.26^{+0.22}_{-0.12} \, 10^{3}$ & - & \\ & N$^{15}$NH$^+$ & 0.12 & $
6.00^{+1.00}_{-1.00} $ & $ 2.17^{+0.36}_{-0.36} $ & $ 2.74^{+0.43}_{-0.43} $ & $ 580^{+140}_{-110}$ \\ & $^{15}$NNH$^+$ & 0.12 & $ 5.00^{+0.90}_{-1.20} $ & $ 1.81^{+0.32}_{-0.44} $ & $ 2.13^{+0.37}_{-0.37} $ & $ 700^{+210}_{-140}$ \\ \hline L1544\tablefootmark{a} & N$_2$H$^+$ &0.075 & $ 5.50^{+1.25}_{-0.75} \, 10^{3}$ & $ 1.73^{+0.39}_{-0.24} \, 10^{3}$ & - & \\ & N$^{15}$NH$^+$ & 0.075 & $6.00^{+1.00}_{-1.40} $ & $ 1.89^{+0.31}_{-0.44} $ & $ 2.45^{+0.57}_{-0.57} $ & $ 920^{+300}_{-200}$ \\ & $^{15}$NNH$^+$ & 0.075 & $ 5.50^{+0.95}_{-0.70} $ & $ 1.73^{+0.30}_{-0.22} $ & $ 2.02^{+0.28}_{-0.28} $& $ 1000^{+260}_{-220}$ \\ \hline \end{tabular} \tablefoot{\tablefoottext{a}{The values for L1544 are based on the data shown in \cite{Bizzocchi13}. The non-LTE modeling uses the updated collisional rates, while the LTE results were derived adopting revised excitation temperature values.}} \end{table*} With the molecular column densities found with the fully non-LTE analysis, we can infer the isotopic ratio by dividing the main isotopologue column densities by those of the corresponding rare isotopologues. Uncertainties are propagated using standard error calculation. The results are summarized in the last column of Table \ref{Results}. \section{Discussion} Figure \ref{Ratios} shows a summary of the obtained isotopic ratios. Since the collisional rates for the N$_2$H$^+$ system have changed since the analysis of \cite{Bizzocchi13}, we re-modeled the literature data for this source. The new results are: $^{14}\text{N}/^{15}\text{N} = 920^{+300}_{-200}$ (using $\mathrm{N^{15}NH^+}$) and $^{14}\text{N}/^{15}\text{N} = 1000^{+260}_{-220}$ (using $\mathrm{^{15}NNH^+}$). They are also shown in Figure \ref{Ratios}. \begin{figure}[h] \centering \includegraphics[scale = 0.5]{Ratios_comparison.pdf} \caption{The $^{14}$N/$^{15}$N values obtained in the sample presented in this paper and re-computed for L1544, with error bars determined with the method described in the main text.
Red points refer to measurements of N$^{15}$NH$^+$, while blue ones to $^{15}$NNH$^+$. The solid line represents the average value found in the whole sample ($= 770$), while the dashed curve is the protosolar nebula value (440). \label{Ratios}} \end{figure} These values are perfectly consistent with the already mentioned literature value of $^{14}\text{N}/^{15}\text{N} = 1000 \pm 200$ of \cite{Bizzocchi13}. More recently, \cite{DeSimone18} re-computed the nitrogen isotopic ratio from diazenylium in L1544, and found $^{14}\text{N}/^{15}\text{N} = 228-408$, a result inconsistent with ours. The IRAM data analysed by those authors, though, have a spectral resolution of $50\,$kHz (more than twice as coarse as ours). Furthermore, the authors used a standard LTE analysis, which is not suitable for this case, as already mentioned. This point has been studied in detail in \cite{Daniel06, Daniel13}, where the authors showed that in typical core conditions ($T \approx 10\,$K, $n \approx 10^{4-6}\,$cm$^{-3}$) the hypothesis of identical excitation temperature for all hyperfine components of the N$_2$H$^+$ (1-0) transition is not valid. Due to radiative trapping effects, in fact, the hyperfine intensity ratios deviate from those predicted by LTE calculations. \par Figure \ref{Ratios} shows that the computed values are consistent within the source sample. {Due to the large uncertainties, we can conclude that the isotopic ratios are only marginally inconsistent with the value of 440, representative of the protosolar nebula (black, dashed curve). Nevertheless, the trend is clear. Despite the fact that L1544 still presents the highest values in the sample, its case is now clearly not an isolated and pathological one. This larger sample thus supports the hypothesis that diazenylium is $^{15}$N-depleted in cold prestellar cores.} Instead of the ``super-fractionation'' predicted by some chemistry models, N$_2$H$^+$ seems to experience ``anti-fractionation'' in these objects.
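The ratio computation with standard first-order error propagation can be sketched as below; the example numbers are the L694-2 N$^{15}$NH$^+$ entries from Table \ref{Results}, with symmetrized uncertainties (a simplification, since the quoted errors are asymmetric):

```python
import math

def isotopic_ratio(N_main, dN_main, N_rare, dN_rare):
    # R = N_main / N_rare, with first-order propagation for a quotient:
    # dR/R = sqrt((dN_main/N_main)^2 + (dN_rare/N_rare)^2)
    R = N_main / N_rare
    dR = R * math.sqrt((dN_main / N_main) ** 2 + (dN_rare / N_rare) ** 2)
    return R, dR

# L694-2: N(N2H+) = 1.26e13 cm^-2, N(N15NH+) = 2.17e10 cm^-2
R, dR = isotopic_ratio(1.26e13, 0.17e13, 2.17e10, 0.36e10)
# R ~ 580, comparable to the tabulated 580 (+140/-110)
```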
As already stressed, this trend cannot be understood within the framework of current chemical models. \cite{Roueff15} predict that the $^{14}$N/$^{15}$N ratio should be close to the protosolar value ($\approx 400$). \cite{Wirstrom17} came to very similar conclusions. In both chemical networks, the physical model assumes a gas temperature of $10\,$K, which can be up to 40\% higher than the values found for the central parts of the cores ($6-7$\,K). However, further calculations have shown that lowering the temperature by $2-3\,$K does not produce significant differences in the results (Roueff, private communication). {\cite{Visser18} highlighted how isotope-selective photodissociation is a key mechanism to understand the nitrogen isotopic chemistry in protoplanetary disks. We can speculate that different levels of selective photodissociation in different environments could reproduce the variety of N-isotopic ratios that are observed. It will indeed be worthwhile to investigate this point further from both the observational and theoretical points of view.} \par From the L1544 and L694-2 results, there seems to be tentative evidence that N$^{15}$NH$^+$ is more abundant than $^{15}$NNH$^+$. This can be explained by the theory, according to which the proton transfer reaction \begin{equation} ^{15}\text{NN}\text{H}^+ + {\text{N}^{15}\text{N} } \rightleftharpoons \text{N}^{15} \text{NH}^+ +{\text{N}^{15}\text{N} } \; , \end{equation} tends to shift the relative abundance of the two isotopologues, slightly favouring N$^{15}$NH$^+$ because the N$^{15}$NH$^+$ zero-point energy is lower than that of $^{15}$NNH$^+$ by $8.1\,$K \citep[see reaction RF2 in][]{Wirstrom17}. {It is interesting that the same trend is also found in a very different environment such as OMC-2 FIR4, a young protocluster hosting several protostellar objects.
In this source, \cite{Kahane18} measured lower values for the isotopic ratio, but in agreement with our results found that $^{15}$NNH$^+$ is less abundant than $\mathrm{N^{15}NH^+}$.} However, we emphasise that higher quality observations and larger samples are needed to confirm this point. \section{Conclusions} We have analyzed the diazenylium isotopologue spectra in three prestellar cores, L183, L429, and L694-2, in order to derive nitrogen isotopic ratios. Since LTE conditions are not fulfilled, especially for the N$_2$H$^+$ (1-0) transition, we have used a fully non-LTE radiative transfer approach, implemented in the numerical code MOLLIE. We have carefully derived the physical models of the sources, computing their volume density and dust temperature profiles. With these, we were able to produce synthetic spectra to be compared with our observations, in order to derive the best-fit values for the molecular abundances and column densities. Using the same method, we have also re-computed the isotopic ratio of L1544. The difference with respect to the literature value of \cite{Bizzocchi13}, due to changes in the molecular collisional rates, is well within the uncertainties. \par In our sample of four cores, we derived $^{14}$N/$^{15}$N values in the range $630-1000$. Within the confidence range given by our uncertainty estimates, all our results are inconsistent with the value $\approx 400$ predicted by current theoretical models. L1544 still presents higher depletion levels than the other sources, but in general all the cores are anti-fractionated. The theoretical basis of such a trend is at the moment not understood. A deep revision of our knowledge of nitrogen chemistry is required in order to understand the chemical pathways that lead to such low abundances of N$^{15}$NH$^+$ and $^{15}$NNH$^+$ compared to the main isotopologue. \begin{acknowledgements} We thank the anonymous referee, whose comments helped to improve the quality of the manuscript.
\end{acknowledgements}
\section{Introduction} A common feature of online social networks (OSNs) is the possibility for individuals to continuously interact among themselves, by sharing content and expressing opinions or ratings on different topics \cite{Amelkin2017, Tempo2017}. We address such a context by considering a network scenario in which nodes can mutually rate, i.e., can give/receive a score to/from other ``neighboring'' nodes, and aim at learning their own (or their neighbors') state. The state may indicate a social orientation, an influencing level, or the belonging to a thematic community. Due to the large-scale nature of OSNs, centralized solutions exhibit limitations both in terms of computational burden and privacy preservation, hence distributed solutions are needed. In recent years, great interest has been devoted to distributed schemes in which nodes aim at estimating a common parameter, e.g., by means of Maximum Likelihood (ML) approaches \cite{barbarossa2007decentralized,schizas2008consensus,chiuso2011gossip}, or performing simultaneous estimation and classification \cite{chiuso2011gossip,fagnani2014distributed}. In \cite{coluccia2013distributed,coluccia2014hierarchical,coluccia2016bayesian} a more general Bayesian framework is considered, in which nodes estimate local parameters, rather than reaching consensus on a common one. In particular, an Empirical Bayes approach is proposed in which the parameters of the prior distribution, called \emph{hyperparameters}, are estimated through a distributed algorithm. The estimated hyperparameters are then combined with local measurements to obtain the Minimum Mean Square Error (MMSE) estimator of the local parameters.
In the recent literature on distributed social learning, agents aim at estimating a \emph{common} unobservable state from noisy observations through non-Bayesian schemes in which each agent processes its own and its neighbors' beliefs \cite{jadbabaie2012non,shahrampour2013exponentially,lalitha2014social,molavi2016foundations,nedic2017fast}; see also \cite{nedic2016tutorial} for a tutorial. A different batch of references investigates interpersonal influences in groups of individuals and the emergence of asymptotic opinions \cite{mirtabatabaei2012opinion,mirtabatabaei2014reflected,friedkin2016network}; see \cite{frasca2015distributed,Tempo2017} for a tutorial on opinion formation in social networks. The problem of self-rating in a social environment is discussed in \cite{li2016self}, where agents can perform a predefined task, but with different abilities. In the present paper, we set up a learning problem in a network context in which each node needs to classify its own local state based on observations coming from the interaction with other nodes. Interactions among nodes are expressed by evaluations that a node performs on other ones, modeled through a weighted digraph that will be referred to as the \emph{score graph}. This general scenario captures a wide variety of contexts arising from social relationships, where nodes have only a partial knowledge of the world. Specifically, in Section~\ref{sec:setup} we devise a Bayesian probabilistic framework wherein, however, both the parameters of the observation model and the hyperparameters of the prior distribution are allowed to be unknown. In order to solve this interaction-based learning problem, we propose in Section~\ref{sec:algorithm} a learning approach combining a local Bayesian classifier with a joint parameter-hyperparameter Maximum Likelihood estimation approach.
Since the ML estimator is computationally intractable even for moderately small networks, we resort to the conceptual tool of graphical models to identify a relaxation of the probabilistic model that leads to a distributed estimator. In Section~\ref{sub:community_discovery} we validate the performance of the proposed distributed estimator via Monte Carlo simulations. \section{Bayesian framework for interaction-based learning}\label{sec:setup} In this section, we set up the interaction-based learning problem in which agents of a network interact with each other according to a score graph. To learn its own state, each node can use observations associated with incoming or outgoing edges. We propose a Bayesian probabilistic model with unknown parameters, which need to be estimated to solve the learning problem. \subsection{Interaction network model} We consider a \emph{network of agents} able to perform evaluations of other agents. The result of each evaluation is a score given by the evaluating agent to the evaluated one. Such an interaction is described by a \emph{score graph}. Formally, we let $\{1,\dots,N\}$ be the set of agent identifiers and $G_S = (\{1,\dots,N\},E_S)$ a digraph such that $(i,j)\in E_S$ if agent $i$ evaluates agent $j$. We denote by $n$ the total number of edges in the graph, and assume that each node has at least one incoming edge in the score graph, that is, there is at least one agent evaluating it. Let $\mathcal{C}$ and $\mathcal{R}$ be the sets of possible state and score values, respectively. Being finite sets, we can assume $\mathcal{C} = \{c_1,\dots,c_C\}$ and $\mathcal{R} = \{r_1,\dots,r_R\}$, where $C$ and $R$ are the cardinalities of the two sets, respectively.
Consistently, in the network we consider the following quantities: \begin{itemize} \item $x_i \in \mathcal{C}$, unobservable \emph{state} (or community) of agent $i$; \item $y_{ij} \in \mathcal{R}$, \emph{score} (or evaluation result) of the evaluation performed by agent $i$ on agent $j$. \end{itemize} An example of score graph with associated state and score values is shown in Fig.~\ref{fig:score-graph-example}. \begin{figure} \centering \includegraphics[scale=0.8]{fig-score-graph-example} \caption{Example of a score graph $G_S$.} \label{fig:score-graph-example} \end{figure} Besides the evaluation capability, the agents have also \emph{communication} and \emph{computation} functionalities. Agents communicate according to a time-dependent directed \emph{communication graph} $t\mapsto G_{\texttt{cmm}}(t) = (\{1,\dots,N\}, E_{\texttt{cmm}}(t))$, where the edge set $E_{\texttt{cmm}}(t)$ describes the communication among agents: $(i,j)\in E_{\texttt{cmm}}(t)$ if agent $i$ communicates to $j$ at time $t\in\mathbb{Z}_{\geq 0}$. We introduce the notation $N_{{\texttt{cmm}},i}^{I}(t)$ and $N_{{\texttt{cmm}},i}^{O}(t)$ for the in- and out-neighborhoods of node $i$ at time $t$ in the communication graph. We will require these neighborhoods to include the node $i$ itself; formally, we have \begin{align*} N_{{\texttt{cmm}},i}^{I}(t) &= \{j : (j,i)\in E_{\texttt{cmm}}(t)\}\cup\{i\},\\ N_{{\texttt{cmm}},i}^{O}(t) &= \{j : (i,j)\in E_{\texttt{cmm}}(t)\}\cup\{i\}. \end{align*} For the communication graph we assume the following: \begin{assumption}\label{ass:connectivity} \! There exists an integer $Q\ge1$ such that the graph $\bigcup_{\tau=t Q}^{(t+1)Q-1} \!\!G_{\texttt{cmm}}(\tau)\!$ is strongly connected $\forall \, t\!\ge\!0$. \end{assumption} We point out that in general the (time-dependent) communication graph, modeling the distributed computation, is not necessarily related to the (fixed) score graph.
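Assumption \ref{ass:connectivity} can be verified numerically for a given communication-graph sequence; the sketch below (helper names are our own) checks strong connectivity of the union graph over each window of length $Q$ via forward and backward reachability:

```python
def strongly_connected(nodes, edges):
    # A digraph is strongly connected iff some node s reaches every node
    # and every node reaches s (forward + backward DFS from s).
    def reach(adj, s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    fwd, bwd = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        bwd.setdefault(v, []).append(u)
    s = next(iter(nodes))
    return reach(fwd, s) == set(nodes) == reach(bwd, s)

def assumption_holds(nodes, edge_seq, Q):
    # Union graph over every window [tQ, (t+1)Q - 1] must be strongly connected
    return all(
        strongly_connected(nodes, set().union(*edge_seq[t0:t0 + Q]))
        for t0 in range(0, len(edge_seq), Q)
    )

# Toy check: a 3-node ring revealed one edge per time step, Q = 3
nodes = {1, 2, 3}
edge_seq = [{(1, 2)}, {(2, 3)}, {(3, 1)}] * 2
print(assumption_holds(nodes, edge_seq, 3))
```

Note that each window is allowed to reveal only a few edges at a time, as long as their union over the window is strongly connected.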
We just assume that when the distributed algorithm starts, each node $i$ knows the scores received from its in-neighbors in the score graph. \subsection{Bayesian probabilistic model} We consider the score $y_{ij}$, $(i,j)\in E_S$, as the (observed) realization of a random variable denoted by $Y_{ij}$; likewise, each state value $x_{i}$, $i\in\{1,\dots,N\}$, is the (unobserved) realization of a random variable $X_i$. To highlight the conditional dependencies among the random variables involved in the score graph, we resort to the tool of graphical models and in particular of Bayesian networks \cite{koller2009probabilistic}. Specifically, we introduce the \emph{Score Bayesian Network} with $N+n$ nodes $X_i$, $i=1,\dots,N$, and $Y_{ij}$, $(i,j)\in E_S$, and $2n$ (conditional dependency) arrows defined as follows. For each $(i,j)\in E_S$, we have $X_i\rightarrow Y_{ij}\leftarrow X_j$, indicating that $Y_{ij}$ conditionally depends on $X_i$ and $X_j$. In Fig.~\ref{fig:graphical-model-example} we represent the Score Bayesian Network related to the score graph in Fig.~\ref{fig:score-graph-example}. \begin{figure} \centering \includegraphics[scale=0.8]{fig-graphical-model-example} \caption{The Score Bayesian Network related to the score graph in Fig. \ref{fig:score-graph-example}.} \label{fig:graphical-model-example} \end{figure} Denoting by $\bm{Y}_{E_S}$ the vector of all the random variables $Y_{ij}$, $(i,j)\in E_S$, the joint distribution factorizes as \[ \mathbb{P}(\bm{Y}_{E_S},X_1,\dots,X_N)\! = \!\Big(\prod_{(i,j)\in E_S}\!\!\!\!\mathbb{P}(Y_{ij}|X_i,X_j)\Big)\Big(\prod_{i=1}^N\mathbb{P}(X_i)\Big). \] We assume that the $Y_{ij}$, $(i,j)\in E_S$, are ruled by a conditional probability distribution $\mathbb{P}(Y_{ij} | X_i,X_j; \bm{\theta})$, depending on a \emph{parameter} vector $\bm{\theta}$ whose components take values in a given set $\Theta$.
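As a toy illustration of the factorization above (the uniform probability tables and all names are our own assumptions), the joint can be evaluated in log form:

```python
import math

# Toy instance: C = 2 communities, R = 2 scores, uniform distributions
states = (0, 1)
log_p_state = {c: math.log(0.5) for c in states}
log_p_score = {(h, l, m): math.log(0.5)
               for h in (0, 1) for l in states for m in states}

def log_joint(x, y):
    # log P(Y = y, X = x) = sum_{(i,j) in E_S} log P(y_ij | x_i, x_j)
    #                     + sum_i log P(x_i)
    total = sum(log_p_state[xi] for xi in x.values())
    total += sum(log_p_score[(y_ij, x[i], x[j])]
                 for (i, j), y_ij in y.items())
    return total

x = {1: 0, 2: 1}   # node states
y = {(1, 2): 1}    # agent 1 scores agent 2 with score 1
print(log_joint(x, y))   # 3 * log(0.5): one edge factor + two node factors
```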
For notational purposes, we define the tensor \begin{equation}\label{eq:p_hlm} p_{h|\ell,m}(\bm{\theta}) := \mathbb{P}(Y_{ij} = r_h | X_i = c_\ell,X_j = c_m; \bm{\theta}), \end{equation} where $r_h\in\mathcal{R}$ and $c_\ell, c_m \in \mathcal{C}$. From the definition of probability distribution, we have the constraint $\bm{\theta}\in\mathcal{S}_{\Theta}$ with \begin{align*} \mathcal{S}_{\Theta} := \Big\{\bm{\theta}\in\Theta : p_{h | \ell,m}(\bm{\theta})\in[0,1], \sum_{h = 1}^{R} p_{h | \ell,m}(\bm{\theta}) = 1\Big\}. \end{align*} We model $X_i$, $i=1,\dots,N$, as identically distributed random variables ruled by a probability distribution $\mathbb{P}(X_i ; \bm{\gamma})$, depending on a \emph{hyperparameter} vector $\bm{\gamma}$ whose components take values in a given set $\Gamma$. Again, we introduce the notation \begin{equation}\label{eq:p_l} p_\ell(\bm{\gamma}) := \mathbb{P}(X_i = c_\ell; \bm{\gamma}), \end{equation} and, analogously to $\bm{\theta}$, we have the constraint $\bm{\gamma} \in \mathcal{S}_{\Gamma}$ with \[ \mathcal{S}_{\Gamma} := \Big\{\bm{\gamma}\in\Gamma : p_\ell(\bm{\gamma}) \in [0,1], \sum_{\ell = 1}^{C} p_\ell(\bm{\gamma}) = 1 \Big\}. \] We assume that $p_{h | \ell,m}$ and $p_\ell$ are continuous functions, and that each node knows $p_{h | \ell,m}$, $p_\ell$, and the scores received from its in-neighbors and given to its out-neighbors in $G_S$. An example is discussed in the next subsection, while the problem of jointly estimating the \emph{parameter-hyperparameter} $(\bm{\theta},\bm{\gamma})$ will be addressed in Section~\ref{sec:algorithm}; the latter will then be a building block of the (distributed) learning scheme. \subsection{Example: social ranking scenario}\label{subsec:example_scenarios} A relevant scenario is user profiling in OSNs. In social relationships, in fact, people naturally tend to aggregate into groups based on some affinity; this is also found in OSN contexts.
For instance, consider a thread on a dedicated subject, wherein each member can express her/his preferences by assigning to other members' posts a score from $1$ to $R$ indicating an increasing level of appreciation for that post.
%
To model the distribution of scores, we propose the following variant of the so-called Mallows $\phi$-model \cite{mallows1957nonnull}:
\begin{equation}
p_{h|\ell,m}(\theta) = \frac{1}{\psi_{\ell,m}(\theta)} e^{-\big(\frac{(r_R - r_h) / r_R - d(c_\ell,c_m) / c_C}{\theta}\big)^2},
\label{eq:mallow}
\end{equation}
where $r_h = h$ ($h = 1,\dots,R$), $c_\ell = \ell$ ($\ell = 1,\dots,C$), $\theta\in\mathbb{R}_{>0}$ is a dispersion parameter, $\psi_{\ell,m}(\theta)$ is a normalizing constant, and $d$ is a semi-distance, i.e., $d \ge 0$ and $d(c_\ell,c_m) = 0$ if and only if $c_\ell=c_m$.
%
Informally, the ``farther'' a given community $c_\ell$ is from another community $c_m$, the higher the distance $d(c_\ell,c_m)$, and thus the lower the score. In many cases the resulting subgroups reflect some hierarchy in the population: basic examples are forums or working teams. Thus, we consider a scenario in which each person belongs to a community reflecting some degree of expertise about a given topic or field. In particular, we have $C$ ordered communities, with the $\ell$th community given by $c_\ell = \ell$. That is, for example, a person in the community $c_1$ is a \emph{newbie}, while a person in $c_C$ is a \emph{master}.
%
Since climbing in the hierarchy can be regarded as the result of several ``promotion'' events, a possible probabilistic model for the communities is a binomial distribution $\mathcal{B}(C - 1,\gamma)$, where $\gamma\in[0,1]$ represents the probability of being promoted, i.e.,
%
\begin{align*}
p_\ell(\gamma) &= \binom{C - 1}{c_\ell - 1}\gamma^{c_\ell - 1}(1 - \gamma)^{C - 1 - (c_\ell - 1)}.
\end{align*}
We will refer to this set-up as the \emph{social-ranking model}.
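To make the social-ranking model concrete, the two distributions above can be evaluated numerically. The following sketch (with hypothetical sizes and parameter values) implements the score distribution \eqref{eq:mallow}, with $d(c_\ell,c_m)=|\ell-m|$, and the binomial prior:

```python
import math

C, R = 6, 3              # communities and score levels (example sizes)
theta, gamma = 0.2, 0.3  # hypothetical dispersion / promotion parameters

def p_score(h, l, m, theta):
    """Mallows-variant p_{h|l,m}(theta): r_h = h, c_l = l, d(c_l,c_m) = |l-m|."""
    w = lambda k: math.exp(-((((R - k) / R) - abs(l - m) / C) / theta) ** 2)
    return w(h) / sum(w(k) for k in range(1, R + 1))  # psi_{l,m}(theta) normalizes

def p_state(l, gamma):
    """Binomial prior p_l(gamma) = B(C-1, gamma) over the community index."""
    return math.comb(C - 1, l - 1) * gamma ** (l - 1) * (1 - gamma) ** (C - l)
```

Both functions return probability vectors summing to one, as required by $\mathcal{S}_{\Theta}$ and $\mathcal{S}_{\Gamma}$; in particular, when $\ell = m$ the highest score $r_R$ is the most likely one.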
%
\section{Interaction-based distributed learning}\label{sec:algorithm}
In this section we describe the proposed distributed learning scheme.
%
Without loss of generality, we focus on a set-up in which a node wants to self-classify. The same scheme also applies to a scenario in which a node wants to classify its neighbors, provided it knows their given and received scores. The section is structured as follows.
%
First, we derive a local Bayesian classifier, assuming that an estimate of the parameter-hyperparameter $(\bm{\theta},\bm{\gamma})$ is available.
%
Then, based on a combination of plain ML and Empirical Bayes estimation approaches, we derive a joint parameter-hyperparameter estimator.
%
Finally, we propose a suitable relaxation of the Score Bayesian Network which leads to a distributed estimator, based on proper distributed optimization algorithms.
%
\subsection{Bayesian classifiers (given parameter-hyperparameter)}
Each node can self-classify (i.e., learn its own state) if an estimate $(\bm{\hat{\theta}}, \bm{\hat{\gamma}})$ of the parameter-hyperparameter $(\bm{\theta}, \bm{\gamma})$ is available. Before discussing in detail how this estimate can be obtained in a distributed way, we develop a \emph{decentralized} MAP self-classifier
%
which uses only single-hop information, i.e., the scores it gives to and receives from neighbors.
%
Formally, let $\bm{y}_{N_i}$ be the vector of (observed) scores that agent $i$ receives from in-neighbors and gives to out-neighbors, i.e., the stack vector of $y_{\idxji}$ with $(j,i)\in E_S$ and $y_{ij}$ with $(i,j)\in E_S$. Consistently, let ${\bm{Y}\!}_{N_i}$ be the corresponding random vector. For each agent $i=1,\dots,N$, we define
\begin{equation*}
u_i(c_\ell) := \mathbb{P}(X_i = c_\ell | {\bm{Y}\!}_{N_i} = \bm{y}_{N_i}; \bm{\hat{\gamma}}, \bm{\hat{\theta}}), \quad \ell = 1,\dots,C.
\end{equation*}
%
The \emph{soft classifier} of $i$ is the probability vector $\bm{u}_i := (u_i(c_1),\dots,u_i(c_C))$ (whose components are nonnegative and sum to $1$). In Fig.~\ref{fig:soft-classifier} we depict a pie-chart representation of an example vector $\bm{u}_i$.
\begin{figure}
\centering
\includegraphics[scale=0.7]{fig-soft-classifier}
\caption{Example of outcome of the soft classifier of an agent $i$, for $C = 4$: $\bm{u}_i = (0.5,0.25,0.15,0.1)$.}
\label{fig:soft-classifier}
\end{figure}
From the soft classifier we can define the classical \emph{Maximum A-Posteriori probability (MAP) classifier} as the argument corresponding to the maximum component of $\bm{u}_i$, i.e.,
\begin{equation*}
\hat{x}_i := \argmax_{c_\ell\in\mathcal{C}} u_i(c_\ell).
\end{equation*}
The main result here is to show how to efficiently compute the MAP classifiers. First, we define
\begin{align*}
N_i^{\leftrightarrow}&:=\{j:(j,i)\in E_S,(i,j)\in E_S\},\\
N_i^{\leftarrow}&:=\{j:(j,i)\in E_S,(i,j)\notin E_S\},\\
N_i^{\rightarrow}&:=\{j:(i,j)\in E_S, (j,i)\notin E_S\},
\end{align*}
and for each $h,k=1,\dots,R$ we introduce the quantities:
\begin{align*}
n_i^{\leftrightarrow}(h,k) &:= |\{j\in N_i^{\leftrightarrow}: y_{ij}=r_h, y_{ji}=r_k\}|,\\
n_i^{\leftarrow}(h) &:= |\{j\in N_i^{\leftarrow}: y_{ji}=r_h\}|,\\
n_i^{\rightarrow}(h) &:= |\{j\in N_i^{\rightarrow}: y_{ij}=r_h\}|.
\end{align*}
\begin{theorem}\label{thm:MAP_characterization}
Let $i\in\{1,\dots,N\}$ be an agent of the score graph.
Then, the components of the vector $\bm{u}_i$ are given by
\begin{equation*}
u_{i}(c_\ell) = \frac{v_{i}(c_\ell)}{Z_{i}}
\end{equation*}
where $Z_{i} = \sum_{\ell = 1}^C v_{i}(c_\ell)$ is a normalizing constant, and $v_{i}(c_\ell) = p_\ell(\bm{\hat{\gamma}})\pi_i^{\leftrightarrow}(c_\ell)\pi_i^\leftarrow(c_\ell)\pi_i^\rightarrow(c_\ell)$ with
\begin{align*}
\pi_i^{\leftrightarrow}(c_\ell) &= \prod_{h,k=1}^R\Big(\sum_{m=1}^C p_{k|m,\ell}(\bm{\hat{\theta}})p_{h|\ell,m}(\bm{\hat{\theta}})p_{m}(\bm{\hat{\gamma}})\Big)^{n_i^{\leftrightarrow}(h,k)},\\
\pi_i^\leftarrow(c_\ell) &= \prod_{h = 1}^R\Big(\sum_{m=1}^C p_{h|m,\ell}(\bm{\hat{\theta}})p_m(\bm{\hat{\gamma}})\Big)^{n_i^\leftarrow(h)},\\
\pi_i^\rightarrow(c_\ell) &= \prod_{h = 1}^R\Big(\sum_{m=1}^C p_{h|\ell,m}(\bm{\hat{\theta}})p_m(\bm{\hat{\gamma}})\Big)^{n_i^\rightarrow(h)}.
\end{align*}
\oprocend
\end{theorem}
The proof is given in \cite{sasso2017interaction}.
%
\subsection{Joint Parameter-Hyperparameter ML estimation (JPH-ML)}
Classification requires that at each node an estimate $(\bm{\hat{\theta}}, \bm{\hat{\gamma}})$ of the parameter-hyperparameter $(\bm{\theta}, \bm{\gamma})$ is available.
%
In this regard, a few remarks about $\bm{\theta}$ and $\bm{\gamma}$ are now in order. Depending on both the application and the network context, these parameters may be known, or (partially) unknown to the nodes. If both of them are known, we are in a pure Bayesian set-up in which, as just shown, each node can independently self-classify with no need of cooperation. The case of unknown $\bm{\theta}$ (and known $\bm{\gamma}$) falls into a Maximum-Likelihood framework, while the case of unknown $\bm{\gamma}$ (and known $\bm{\theta}$) can be addressed by an \emph{Empirical Bayes} approach.
%
In this paper we consider a general scenario in which both of them can be unknown.
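For concreteness, the classifier of Theorem~\ref{thm:MAP_characterization} can be computed directly from the counts $n_i^{\leftrightarrow}$, $n_i^{\leftarrow}$, $n_i^{\rightarrow}$. Below is a minimal sketch, where \texttt{p\_score} and \texttt{p\_state} are assumed callables implementing $p_{h|\ell,m}(\bm{\hat{\theta}})$ and $p_\ell(\bm{\hat{\gamma}})$:

```python
# Minimal sketch of the soft and MAP classifiers of Theorem 1.
# n_bi[(h,k)], n_in[h], n_out[h] hold n_i^{<->}(h,k), n_i^{<-}(h), n_i^{->}(h).
def soft_classifier(n_bi, n_in, n_out, p_score, p_state, C):
    v = []
    for l in range(1, C + 1):
        val = p_state(l)                # prior factor p_l(gamma_hat)
        for (h, k), n in n_bi.items():  # pi_i^{<->}(c_l)
            val *= sum(p_score(k, m, l) * p_score(h, l, m) * p_state(m)
                       for m in range(1, C + 1)) ** n
        for h, n in n_in.items():       # pi_i^{<-}(c_l)
            val *= sum(p_score(h, m, l) * p_state(m) for m in range(1, C + 1)) ** n
        for h, n in n_out.items():      # pi_i^{->}(c_l)
            val *= sum(p_score(h, l, m) * p_state(m) for m in range(1, C + 1)) ** n
        v.append(val)
    Z = sum(v)                          # normalizing constant Z_i
    u = [x / Z for x in v]              # soft classifier u_i
    x_hat = 1 + max(range(C), key=lambda j: u[j])  # MAP community index
    return u, x_hat
```

Note that only single-hop counts enter the computation, so each node can run this locally once $(\bm{\hat{\theta}}, \bm{\hat{\gamma}})$ is available.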
%
Our goal is then to compute, in a distributed way, an estimate of the \emph{parameter-hyperparameter} $(\bm{\theta},\bm{\gamma})$ and use it for the classification at each node.
%
In the following we show how to compute it in a distributed way by following a mixed Empirical Bayes and Maximum Likelihood approach.
%
The \emph{Joint Parameter-Hyperparameter Maximum Likelihood (\jphml/) estimator} can be defined as
\begin{equation}
\label{eq:MLE}
(\bm{\hat{\theta}}_{\text{\tiny ML}}, \bm{\hat{\gamma}}_{\text{\tiny ML}}) := \argmax_{(\bm{\theta}, \bm{\gamma})\in \mathcal{S}_{\Theta}\times\mathcal{S}_{\Gamma}} L({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma})
\end{equation}
where ${\bm{y}\!}_{E_S}$ is the vector of all scores $y_{\idxji}$, $(j,i)\in E_S$, and
\begin{equation}
\label{eq:likelihood}
L({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma}) = \mathbb{P}({\bm{Y}\!}_{E_S} = {\bm{y}\!}_{E_S} \, ; \, \bm{\theta}, \bm{\gamma})
\end{equation}
is the \emph{likelihood function}.
%
Notice that $\bm{\theta}$ is directly linked to the observables ${\bm{y}\!}_{E_S}$; the hyperparameter $\bm{\gamma}$ is instead related to the unobservable states. While one could readily obtain the likelihood function for the sole estimation of $\bm{\theta}$ from the distribution of scores, the presence of $\bm{\gamma}$ requires marginalizing over all the unobservable state (random) variables.
%
By the law of total probability
\begin{align}
\begin{split}
&L({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma}) = \\
&\hspace{5pt}\sum_{\ell_1=1}^C\! \cdots\! \sum_{\ell_N = 1}^C \!\mathbb{P}({\bm{Y}\!}_{E_S}\!=\! {\bm{y}\!}_{E_S}, X_{1}\!=\!c_{\ell_1},\dots,X_{N}\!=\!c_{\ell_N}).
\end{split}
\label{eq:LF_deriv_1}
\end{align}
Denoting by $N^{I}_{i}$ the set of in-neighbors of agent $i$ in the score graph (assumed to be non-empty), the probability in \eqref{eq:LF_deriv_1} can be written as the product of the conditional probability of the scores, i.e.,
\begin{align*}
\mathbb{P}({\bm{Y}\!}_{E_S}= {\bm{y}\!}_{E_S}\,|\, &X_{1}=c_{\ell_1},\dots, X_{N}=c_{\ell_N}) = \\
&\prod_{i = 1}^N\prod_{j\in N^{I}_{i}} \mathbb{P}(Y_{\idxji} = y_{\idxji}| X_{j} = c_{\ell_j}, X_{i} = c_{\ell_i})
\end{align*}
multiplied by the prior probability of the states, i.e.,
\begin{align*}
\mathbb{P}(X_{1}=c_{\ell_1},\dots, X_{N}=c_{\ell_N}) = \prod_{i=1}^{N}\mathbb{P}(X_{i}=c_{\ell_i}).
\end{align*}
Thus, the likelihood function turns out to be
\begin{align*}
L({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma}) \!=\! \sum_{\ell_1=1}^C \!\cdots \!\sum_{\ell_N = 1}^C\prod_{i=1}^N p_{\ell_i}(\bm{\gamma})\prod_{j\in N^{I}_{i}}p_{h_{ji}\,|\, \ell_j,\ell_i }(\bm{\theta})
\end{align*}
where $h_{\idxji}$ is the index of the score element $r_{h_{\idxji}} \in \mathcal{R}=\{r_1,\ldots,r_R\}$ associated with the score $y_{\idxji}$, i.e., $y_{\idxji}= r_{h_{\idxji}}$.
\subsection{Distributed JPH Node-based Relaxed estimation (JPH-NR)}
From the equations above it is apparent that the likelihood function couples the information at all nodes, so problem~\eqref{eq:MLE} is not amenable to distributed solution. To make it distributable, we propose a relaxation approach. To this aim we introduce, instead of $L({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma})$, a \emph{Node-based Relaxed (NR) likelihood} $L_{NR}({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma})$.
%
Let $\bm{y}_{N_i^I}$ be the vector of (observed) scores that agent $i$ receives from its in-neighbors and ${\bm{Y}\!}_{N_i^I}$ the corresponding random vector. Then,
\begin{align}
L_{NR}({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma}) := \prod_{i = 1}^N \mathbb{P}({\bm{Y}\!}_{N_i^I} = {\bm{y}\!}_{N_i^I} \, ; \, \bm{\theta},\bm{\gamma}).
\end{align}
%
This relaxation can be interpreted as follows. We imagine that each node has a virtual state, independent of its true state, every time it evaluates another node. Thus, in the Score Bayesian Network, besides the state variables $X_i$, $i = 1,\dots,N$, there will be additional variables $X_i^{\rightarrow j}$ for each $j$ with $(i,j)\in E_S$.
%
To clarify this model, Figs.~\ref{fig:score-graph-middle}-\ref{fig:graphical-model-middle} depict the node-based relaxed graph and the corresponding graphical model for the same example given in Figs.~\ref{fig:score-graph-example}-\ref{fig:graphical-model-example}.
\begin{figure}
\centering
\includegraphics{fig-score-graph-middle}
\caption{Node-based relaxation of the score graph in Fig. \ref{fig:score-graph-example}, with virtual nodes indicating the virtual states of each node.}
\label{fig:score-graph-middle}
\end{figure}
\begin{figure}
\centering
\includegraphics{fig-graphical-model-middle}
\caption{Node-based relaxation of the score Bayesian network, corresponding to the relaxed score graph in Fig. \ref{fig:score-graph-middle}.}
\label{fig:graphical-model-middle}
\end{figure}
Since ${\bm{Y}\!}_{N_i^I}$, $i=1,\dots,N$, are not independent, clearly $L \neq L_{NR}$.
%
However, as will appear from the numerical performance assessment reported in Section~\ref{sub:community_discovery}, this choice yields reasonably small estimation errors. Using this virtual independence among ${\bm{Y}\!}_{N_i^I}$, $i=1,\dots,N$, we define the \emph{JPH-NR estimator} as
\begin{equation}
\label{eq:MLE-NR}
(\bm{\hat{\theta}}_{\text{\tiny NR}}, \bm{\hat{\gamma}}_{\text{\tiny NR}}) := \argmax_{(\bm{\theta}, \bm{\gamma})\in \mathcal{S}_{\Theta}\times\mathcal{S}_{\Gamma}} L_{NR}({\bm{y}\!}_{E_S};\bm{\theta}, \bm{\gamma}).
\end{equation}
The next result characterizes the structure of \JPHNR/ \eqref{eq:MLE-NR}.
% \begin{proposition} \label{prop:loglike_NR} The \JPHNR/ estimator based on the node-based relaxation of the score Bayesian network is given by \begin{equation} \label{eq:NRopt} (\bm{\hat{\theta}}_{\text{\tiny \textnormal{NR}}}, \bm{\hat{\gamma}}_{\text{\tiny \textnormal{NR}}}) = \argmax_{(\bm{\theta}, \bm{\gamma})\in \mathcal{S}_{\Theta}\times\mathcal{S}_{\Gamma}} \sum_{i=1}^{N} g(\bm{\theta},\bm{\gamma};\bm{n}_i) \end{equation} with $\bm{n}_i \!=\! [n_i^{(1)} \cdots n_i^{(R)}]^\top$, \!\! $n_i^{(h)} \!:=\! |\{j\in N^I_i: \! y_{\idxji}\! =\! r_h\}|$, \!\! and \begin{align}\label{eq:g} g(\bm{\theta},\bm{\gamma};\bm{n}_i) &=\nonumber\\ \log\Big(\sum_{\ell = 1}^C& p_\ell(\bm{\gamma})\prod_{h = 1}^R \Big(\sum_{m=1}^C p_{h \,|\, m, \ell}(\bm{\theta}) p_m(\bm{\gamma})\Big)^{n_i^{(h)}}\Big). \end{align} \oprocend \end{proposition} The proof is given in \cite{sasso2017interaction}. Proposition~\ref{prop:loglike_NR} ensures that the \JPHNR/ estimator can be computed by solving an optimization problem that has a separable cost (i.e., the sum of $N$ local costs). Available distributed optimization algorithms for asynchronous networks can be adopted to this aim, e.g. \cite{carli2015analysis}, \cite{nedic2015distributed}, \cite{di2016next}. % \section{Distributed learning for social ranking}\label{sub:community_discovery} \begin{figure} \centering \includegraphics[scale=0.7]{fig-ranking-C6-R3-sigma0-rmse} \caption{\JPHNR/ RMSE of the estimates of $(\theta,\gamma)$ as a function of the number of edges $n$.} \label{fig:ranking-100-theta} \end{figure} % We report numerical results for the social ranking model described in Section~\ref{subsec:example_scenarios} with $C=6$, $R=3$ and $N = 300$. % We adopt in \eqref{eq:mallow} the semi-distance $ d(c_\ell,c_m) = |c_\ell - c_m| = |\ell - m|$. The true values of parameter-hyperparameter are $\theta = \frac{1}{5}$ and $\gamma = \frac{3}{10}$. 
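A single (centralized) trial of this estimation can be sketched as follows, with a plain grid search standing in for the distributed optimization algorithms cited above; the model sizes match the experiment, while the score counts and search grids are hypothetical:

```python
import math

C, R = 6, 3  # communities and score levels, as in the experiment

def p_score(h, l, m, theta):
    # Mallows-variant conditional score distribution of the social-ranking model
    w = lambda k: math.exp(-((((R - k) / R) - abs(l - m) / C) / theta) ** 2)
    return w(h) / sum(w(k) for k in range(1, R + 1))

def p_state(l, gamma):
    # binomial prior B(C-1, gamma) over the community index
    return math.comb(C - 1, l - 1) * gamma ** (l - 1) * (1 - gamma) ** (C - l)

def g(theta, gamma, n_i):
    """Local cost g(theta, gamma; n_i) with n_i = (n_i^{(1)}, ..., n_i^{(R)})."""
    total = 0.0
    for l in range(1, C + 1):
        term = p_state(l, gamma)
        for h in range(1, R + 1):
            term *= sum(p_score(h, m, l, theta) * p_state(m, gamma)
                        for m in range(1, C + 1)) ** n_i[h - 1]
        total += term
    return math.log(total)

def jph_nr(counts, thetas, gammas):
    """Grid-search maximizer of the separable NR cost sum_i g(theta, gamma; n_i)."""
    return max(((t, gm) for t in thetas for gm in gammas),
               key=lambda p: sum(g(p[0], p[1], n) for n in counts))
```

Since the cost is a sum of local terms, each node contributes only its own $g(\bm{\theta},\bm{\gamma};\bm{n}_i)$, which is exactly the structure exploited by the distributed solvers.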
Monte Carlo simulations have been run to test the performance of the \JPHNR/ estimator, with $1000$ trials for each point.
%
Fig.~\ref{fig:ranking-100-theta} reports the RMSE for the estimation of $(\theta,\gamma)$ as a function of the number of edges. It is worth noting that the estimation errors decrease as the number of edges increases, since more data are available.
%
\begin{figure}
\centering
\includegraphics[scale=0.7]{fig-ranking-C6-R3-sigma0-misclassification}
\caption{Misclassification rate as function of the number of edges $n$ increasing from $N$ (cyclic graph) to $N^2-N$ (complete graph), with $N = 300$, $\gamma = \frac{3}{10}$, $\theta = \frac{1}{5}$, $C = 6$ and $R = 3$.}
\label{fig:ranking-100}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.7]{fig-ranking-soft}
\caption{Soft classifier representation of a particular score graph.}
\label{fig:ranking-soft}
\end{figure}
The impact of estimation errors on the learning performance is shown in Fig.~\ref{fig:ranking-100}: when few edges are available the inferential relationship between scores and states is ``weaker,'' hence more data are needed for good learning. As a benchmark, the curve corresponding to the ``oracle'' classifier, which uses the true values of $\gamma$ and $\theta$, is also reported. Remarkably, the performance of the proposed estimator is very close to that of the benchmark.
%
Finally, we report an additional case to highlight the usefulness of the soft classifier. We considered a network of $N = 10$ agents, divided into $C = 3$ communities, in which the maximum score is $R = 3$. The related score graph $G_S$ is shown in Fig. \ref{fig:ranking-soft}. We drew the states and scores in the given score graph according to the previous distributions, and then used the social-ranking model to solve the learning problem as before, by means of the \JPHNR/ estimator. The contour of a node has a color which indicates the true state of the node.
Inside the node we have represented the outcome of the soft classification, i.e., the output of the local self-classifier, as a pie-chart. The colors used are: red for state $c_1$, blue for state $c_2$, gray for state $c_3$. Moreover, each edge is depicted with a different pattern based on its evaluation result $r_h$: solid lines are related to scores equal to $3$, dash-dot lines are related to scores equal to $2$, while dotted lines are related to scores equal to $1$. We assigned to each node a symbol $\checkmark$ or $\times$ indicating whether the MAP classifier correctly decided for the true state or not. Fig. \ref{fig:ranking-soft} shows a realization with three misclassification errors; remarkably, all of them correspond to a lower confidence level given by the soft classifier, which is an important indication that there is not enough information to reasonably trust the decision. It can be observed that the edge patterns jointly determine the decision. Indeed, the only gray-state node is correctly classified thanks to the predominant number of dotted edges incident on it, and similarly for the blue-state nodes, which mostly have solid incoming edges. When a mix of scores is available, clearly there is more uncertainty and the learning may fail, as for two of the red-state nodes.
%
\section{Conclusion}\label{sec:conclusions}
In this paper we have proposed a novel probabilistic framework for distributed learning, which is particularly relevant to emerging contexts such as cyber-physical systems and social networks. In the proposed set-up, nodes of a network want to learn their (unknown) state; differently from a classical set-up, the information does not come from (noisy) measurements of the state but rather from observations produced by the interaction with other nodes. For this problem we have proposed a hierarchical (Bayesian) framework in which the parameters of the interaction model as well as the hyperparameters of the prior distributions may be unknown.
Node classification is performed by means of a local Bayesian classifier that uses parameter-hyperparameter estimates, obtained by combining plain ML and Empirical Bayes estimation in a joint scheme. The resulting estimator is very general but, unfortunately, not amenable to distributed computation. Therefore, by relying on the conceptual tool of graphical models, we have proposed an approximate ML estimator that exploits a proper relaxation of the conditional dependencies among the involved random variables. Remarkably, the approximated likelihood function leads to distributed estimation algorithms. To demonstrate the application of the proposed schemes, we have addressed an example scenario from user profiling in social networks, for which Monte Carlo simulations are reported. Results show that the proposed distributed learning scheme, although based on a relaxation of the exact likelihood function, exhibits performance very close to that of the ideal classifier that has perfect knowledge of all parameters.
\bibliographystyle{IEEEtran}
\section{Mechanical model}
Consider an inverted planar pendulum in a gravitational field with its pivot point moving along a horizontal line in the plane of the pendulum. The law of motion $\xi(t)$ of the pivot point is given and the pendulum moves in the presence of dry friction. Let $l$ be the length of the pendulum and $m$ its mass. Let $r = (r_x, r_y)$ be the radius vector of the point mass of the pendulum, where $r_x$, $r_y$ are its components in the axes of an orthogonal coordinate system $Oxy$, with $Oy$ the vertical axis. The general equation of motion has the form
$$
m \ddot r = F_{grav} + F_{fric} + N.
$$
Here $F_{grav} = -mg\cdot e_y$ is the applied force of gravity, $F_{fric}$ is the force of dry friction, and $N$ is the constraint force arising from the holonomic constraint $(r_x - \xi(t))^2 + r_y^2 = l^2$. By $g$ we denote the gravitational acceleration. We assume that the force of dry friction is Coulomb friction. In our model we consider the case in which the Stribeck effect can be ignored, and the difference between the dynamic and static friction coefficients is negligibly small. Therefore, $F_{fric}$ is as follows \cite{popov2010, ivanov2009bifurcations}
$$
F_{fric} = -\mu |N| \frac{v}{|v|}, \mbox{ if } v \ne 0, \quad |F_{fric}| \leqslant \mu |N|, \mbox{ if } v = 0.
$$
Here $\mu > 0$ is the dry friction coefficient, $N$ is the normal reaction force, and $v$ is the relative velocity of the point mass: $v = (\dot r_x - \dot \xi)e_x + \dot r_y e_y$. Let $q$ be the angle between the horizontal line and the rod ($q = 0$ or $q = \pi$ are the horizontal positions). It is not hard to obtain that $|N| = m |\ddot \xi \cos q -l\dot q^2 + g\sin q|$. Therefore, for $v \ne 0$, the equation of motion can be presented as follows
\begin{equation}
\label{eq1}
\begin{aligned}
&\dot q = p,\\
&\dot p = \frac{\ddot\xi}{l}\sin q -\frac{\mu}{l}\left| \ddot\xi \cos q - lp^2 + g\sin q \right|\frac{p}{|p|} - \frac{g}{l}\cos q.
\end{aligned}
\end{equation}
When $p = 0$, $|F_{fric}|$ can take any value between zero and $\mu |N|$. Therefore, the motion of the system cannot be described by an ordinary differential equation. A possible and convenient solution to this problem is to consider a differential inclusion corresponding to the considered model of dry friction, which we do in the next section. The one-dimensional inverted pendulum --- being a simple yet strongly non-linear system --- has been considered by many authors (see, for instance, \cite{kapitsa1954pendulum, bardin1995stability, butikov2001dynamic, seyranian2006stability}). Many of these results deal with the smooth system of a pendulum with a vertically oscillating pivot. Unlike these cases, we consider a non-smooth system with dry friction and the pivot point moving horizontally, and show that there always exists a solution along which the inverted pendulum never falls below the horizontal line.
\section{Main result}
The system (\ref{eq1}) and similar systems can be presented in the following form
\begin{equation}
\label{eq2}
\dot x = f(x, t),
\end{equation}
where $f$ is a piecewise continuous function in a domain $G \subset \mathbb{R}^{n+1}$ and $M \subset G$ is a set of measure zero of points of discontinuity of $f$. Following \cite{filippov2013differential}, consider a differential inclusion associated with the above equation (\ref{eq2})
\begin{equation}
\label{eq3}
\dot x \in F(x,t),
\end{equation}
where $F \colon G \to 2^{\mathbb{R}^{n}}$ is a set-valued function defined as follows: for any point $(x,t) \in G$, the set $F(x,t)$ is the smallest convex closed set containing all the limit values of $f(x^*,t)$, $(x^*,t) \notin M$, $x^* \to x$.
\begin{definition}
A solution of the differential inclusion (\ref{eq3}) is an absolutely continuous function $x \colon I \to \mathbb{R}^n$ defined on an interval or on a segment $I$ for which (\ref{eq3}) is satisfied almost everywhere.
\end{definition}
Below, by a solution of (\ref{eq1}) we mean a solution of the corresponding differential inclusion with the right-hand side denoted by $\Phi = \Phi(q,p,t)$ (in our case, $M$ is the plane $p = 0$). We also assume that $\ddot \xi$ in (\ref{eq1}) is a Lipschitz function. First, we show that the existence of solutions and their continuous dependence on initial data follow directly from the general properties of $\Phi$.
\begin{definition}
Let $A, B \subset \mathbb{R}^n $ be non-empty closed sets. Then
$$
\beta(A,B) = \sup\limits_{a \in A}\rho(a, B).
$$
Here $\rho(a, B) = \inf\limits_{b \in B} \rho(a,b)$ and $\rho(a,b)$ is the Euclidean distance in $\mathbb{R}^n$.
\end{definition}
\begin{definition}
A set-valued function $F$ is called upper semicontinuous at $x \in \mathbb{R}^n$ if $\beta(F(y),F(x)) \to 0$ as $y \to x$. A function is called upper semicontinuous on a set $G$ if it is upper semicontinuous at each point of $G$.
\end{definition}
It is not hard to see that $\Phi$ is upper semicontinuous in $\mathbb{R} / 2\pi\mathbb{Z} \times \mathbb{R} \times \mathbb{R}$.
\begin{theorem}\cite{filippov2013differential}
Let $F$ be an upper semicontinuous set-valued function in a domain $G \subset \mathbb{R}^{n+1}$, and for all $(x, t) \in G$, let $F(x, t)$ be a non-empty, bounded, closed and convex set. Then for any point $(x_0, t_0) \in G$ there exists a local solution of the problem
$$
\dot x \in F(x,t), \quad x(t_0) = x_0.
$$
Moreover, if $G$ is closed and bounded, then every solution can be continued up to the boundary of $G$.
\end{theorem}
From this theorem, it follows that for the system (\ref{eq1}) solutions exist for all $t > t_0$. Indeed, the set-valued function $\Phi$ is periodic in $q$ and it can be shown that if $p > 0$ and
$$
p^2 > p_*^2 = \left( g + \max\limits_{t \in [t_0,t_1]}|\ddot \xi| \right) \left(1 + \frac{1}{\mu} \right)\frac{1}{l},
$$
then $\dot p < 0$ for any $t \in [t_0, t_1]$. Similarly, for $p<0$ and large $|p|$, we have $\dot p > 0$.
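This bound is easy to verify numerically. The sketch below (with a hypothetical pivot law $\xi(t) = a\sin(\omega t)$ and hypothetical parameter values) samples the right-hand side of (\ref{eq1}) and confirms that $\dot p < 0$ whenever $p$ slightly exceeds $p_*$, and symmetrically $\dot p > 0$ below $-p_*$:

```python
import math

# Hypothetical pendulum data and pivot law xi(t) = a*sin(w*t)
l, g_acc, mu = 1.0, 9.8, 0.3
a, w = 0.1, 5.0
xi_dd = lambda t: -a * w * w * math.sin(w * t)  # \ddot{xi}(t)
M = a * w * w                                   # max_t |\ddot{xi}(t)|

# p_*^2 = (g + max|xi''|)(1 + 1/mu)/l, as in the text
p_star = math.sqrt((g_acc + M) * (1 + 1 / mu) / l)

def p_dot(q, p, t):
    """Right-hand side of the p-equation of the pendulum system, valid for p != 0."""
    return (xi_dd(t) / l) * math.sin(q) \
        - (mu / l) * abs(xi_dd(t) * math.cos(q) - l * p * p
                         + g_acc * math.sin(q)) * math.copysign(1.0, p) \
        - (g_acc / l) * math.cos(q)

qs = [k * 2 * math.pi / 200 for k in range(200)]
ts = [j * 0.05 for j in range(200)]
worst_pos = max(p_dot(q, 1.01 * p_star, t) for q in qs for t in ts)
worst_neg = min(p_dot(q, -1.01 * p_star, t) for q in qs for t in ts)
```

The point of the estimate is that for $lp^2 > g + \max|\ddot\xi|$ the friction term dominates the gravity and inertia torques, so large velocities of either sign are always damped.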
Therefore, solutions can be continued to an arbitrarily long time interval. From the result below, it also follows that solutions of (\ref{eq1}) depend continuously on the initial data.
\begin{theorem} \cite{filippov2013differential}
Let $F$ be an upper semicontinuous set-valued function in a domain $G \subset \mathbb{R}^{n+1}$, and for all $(x, t) \in G$, let $F(x, t)$ be a non-empty, bounded, closed and convex set; $t_0 \in [a, b]$, and let all the solutions of the problem
$$
\dot x \in F(x,t), \quad x(t_0) = x_0
$$
exist for $a \leqslant t \leqslant b$ and their graphs lie in $G$. Then for any $\varepsilon > 0$ there exists $\delta > 0$, such that for any $t_0^* \in [a,b]$ and $x_0^*$, $|t_0^* - t_0|<\delta$ and $|x_0^* - x_0|<\delta$, each solution with initial conditions $t = t_0^*$, $x = x_0^*$ exists and differs (w.r.t. the uniform norm) from some solution with initial conditions $t = t_0$, $x = x_0$ by not more than $\varepsilon$.
\end{theorem}
\begin{definition}
We say that (\ref{eq3}) has a right-unique solution at a point $(x_0, t_0)$ if there exists $t_1 > t_0$ such that each two solutions of the differential inclusion satisfying the condition $x(t_0) = x_0$ coincide on $[t_0, t_1]$.
\end{definition}
Let us now show that, for given initial conditions, the solution of (\ref{eq2}) is right-unique. The following result was also proved in \cite{filippov2013differential}.
\begin{theorem}
Let a function $f(t,x)$ in a domain $G$ be discontinuous only on a set of measure zero. Let there exist a summable function $l(t)$ such that for almost all points $(x,t)$ and $(y,t)$ of the domain $G$ we have $|f(x,t)| \leqslant l(t)$, and for $|x - y| < \varepsilon_0$, $\varepsilon_0 > 0$, the following holds
\begin{equation}
\label{eq4}
(x-y)\cdot(f(x,t) - f(y,t)) \leqslant l(t) |x-y|^2.
\end{equation}
Then any solution of the corresponding differential inclusion (\ref{eq3}) is right-unique in the domain $G$.
\end{theorem}
\begin{remark}
As usual, we say that a function $l \colon \mathbb{R} \to \mathbb{R}$ is summable if it is Lebesgue integrable and
$$
\int\limits_K|l(t)|\, dt < \infty
$$
for any compact set $K$. Below we consider only constant functions $l(t) = l$, which are always summable.
\end{remark}
\begin{corollary}
The solutions of (\ref{eq1}) are right-unique.
\end{corollary}
\begin{proof}
Let $G$ be a bounded domain in $\mathbb{R}^{3}$; by $G^+$ we denote $\{ p > 0 \} \cap G$. Similarly, $G^- = \{ p < 0 \} \cap G$. By $f(q,p,t)$ we denote the right-hand side of the system (\ref{eq1}). Let $f^+(q,0,t)$ and $f^-(q,0,t)$ be the limiting values of the function $f$ at the point $(q,0,t)$ from $G^+$ and $G^-$, respectively. Let $n$ be a vector directed toward increasing values of $p$. From (\ref{eq1}) we have
$$
n \cdot f^+(q,0,t) = \frac{\ddot\xi}{l}\sin q -\frac{\mu}{l}\left| \ddot\xi \cos q + g\sin q \right| - \frac{g}{l}\cos q,
$$
and
$$
n \cdot f^-(q,0,t) = \frac{\ddot\xi}{l}\sin q + \frac{\mu}{l}\left| \ddot\xi \cos q + g\sin q \right| - \frac{g}{l}\cos q.
$$
Therefore, $n \cdot f^+(q,0,t) \leqslant n \cdot f^-(q,0,t)$. Let us now show that for almost all points $(q_1,p_1,t)$ and $(q_2, p_2,t)$, inequality (\ref{eq4}) holds. If both points are in $G^+$ or in $G^-$, then the inequality follows from the fact that the right-hand side is Lipschitz continuous (in this case, we can take $l(t)$ to be a constant). Now suppose that $(q_1,p_1,t) \in G^+$ and $(q_2, p_2,t) \in G^-$. By $(q, 0, t)$ we denote the point of intersection of the line segment connecting $(q_1,p_1,t)$ and $(q_2, p_2,t)$ with the plane $p = 0$. Since $f$ is Lipschitz continuous in $G^-$ and $G^+$, then for some constant $l$ we have the following inequalities:
\begin{equation*}
\begin{aligned}
&|f(q_1, p_1, t) - f^+(q, 0, t)| \leqslant l |(q_1, p_1,t) - (q, 0,t)|,\\
&|f^-(q, 0, t) - f(q_2, p_2, t)| \leqslant l |(q_2, p_2,t) - (q, 0,t)|.
\end{aligned}
\end{equation*}
From these inequalities and the fact that the points $(q_1,p_1,t)$, $(q,0,t)$, $(q_2,p_2,t)$ are on the same line, we have
\begin{equation*}
\begin{aligned}
|f(q_1, p_1, t) - f^+(q, 0, t) + f^-(q, 0, t) - f(q_2, p_2, t)| \leqslant l |(q_1, p_1,t) - (q_2, p_2,t)|.
\end{aligned}
\end{equation*}
Therefore,
\begin{equation*}
\begin{aligned}
((q_1, p_1,t) - (q_2, p_2,t))&\cdot (f(q_1, p_1, t) - f^+(q, 0, t) + f^-(q, 0, t) - f(q_2, p_2, t)) \\&\leqslant l |(q_1, p_1,t) - (q_2, p_2,t)|^2.
\end{aligned}
\end{equation*}
Note that $f^+(q, 0, t) - f^-(q, 0, t)$ is either zero or a negative multiple of $n$, while the $p$-component of $(q_1, p_1,t) - (q_2, p_2,t)$ is positive; therefore
$$
((q_1, p_1,t) - (q_2, p_2,t)) \cdot (f^+(q, 0, t) - f^-(q, 0, t)) \leqslant 0.
$$
Finally, if we sum the last two inequalities, we obtain (\ref{eq4}). When $G$ is not a bounded region, it can be represented as a union of bounded sets in which the solutions are right-unique.
\end{proof}
\begin{remark}
We note that the presented proof is similar to the one in \cite{filippov2013differential}, yet it covers a wider class of functions (in \cite{filippov2013differential}, $f$ is assumed to be twice-differentiable almost everywhere).
\end{remark}
We have shown that any solution of (\ref{eq1}) exists for all $t \geqslant t_0$, is right-unique and depends continuously on the initial conditions. From these properties, we obtain the following result.
\begin{proposition}\label{propo1}
There exist $q_0 \in [0, \pi]$ and $p_0$ such that the solution $(q(t), p(t))$ of (\ref{eq1}) with the initial conditions $q(t_0) = q_0$, $p(t_0) = p_0$ satisfies $q(t) \in [0, \pi]$ for all $t > t_0$.
\end{proposition}
\begin{proof}
First, consider (\ref{eq1}) in the domain $G = \{ 0 < q < \pi \}$. Any solution leaving $G$ can be continued up to the boundary of $G$. At the same time, a solution cannot leave $G$ at the points where $q = 0$ and $p > 0$ or where $q = \pi$ and $p < 0$.
Therefore, for any solution starting in $G$ there are three possibilities: it can never leave $G$; it can leave $G$ through the set $q = 0$, $p < 0$ or through the set $q = \pi$, $p > 0$; it can leave $G$ through the set $q = 0$, $p = 0$ or $q = \pi$, $p = 0$. Let us now consider a continuous curve $p = \sigma(q)$, $t = t_0$, $0 \leqslant q \leqslant \pi$, where $\sigma$ is a continuous function and $\sigma(0) < 0$, $\sigma(\pi) > 0$. Consider all the solutions starting at this curve. Suppose that all these solutions leave $G$. If some solution leaves $G$ through the set $q = 0$, $p < 0$ or $q = \pi$, $p > 0$, then all the solutions starting from close initial conditions also leave $G$ through close boundary points. This follows from the continuous dependence on the initial data. Now consider the case when some solution, starting at the considered curve, reaches the line $q = 0$, $p = 0$ for the first time at a moment $t = t^*$. This solution either stays in $q = 0$, $p = 0$ for all $t \geqslant t^*$ or leaves it at some $t = t^{**}$. If it stays on the line for all $t \geqslant t^*$, then it is the required solution. However, above we have supposed that all solutions leave $G$, i.e., our solution leaves the line $q = 0$, $p = 0$ at $t = t^{**}$, where $t^{**} = t^* + \sup \{ \Delta t \geqslant 0 \colon q(t^* + t) = 0, \, p(t^* + t) = 0,\quad \forall\, 0 \leqslant t \leqslant \Delta t \}$. There are two possibilities: the pendulum can start moving either outside or inside the set $G$. If it moves inside $G$, then the map between the curve and the boundary $\partial G$ may become discontinuous, because then there is a possibility for two close solutions to leave $G$ through the different components of the boundary ($q = 0$ and $q = \pi$). Below we prove that this is not the case. For small $|q|$ and $p > 0$, we have $\dot p < 0$.
Therefore, our solution can leave the line only to the set where $p \leqslant 0$, i.e., there exists $t^{***} > t^{**}$ such that $p(t) \leqslant 0$ for all $t \in [t^{**}, t^{***}]$. Moreover, for some $t \in [t^{**}, t^{***}]$ we have $p(t) < 0$ (if this were not true, then the solution would not leave the line at $t = t^{**}$). Since $\dot q = p$, we obtain that our solution, and solutions close to it, leave $G \cup \partial G$. Similarly, one can prove that if some solution reaches the line $p = 0$, $q = \pi$, it either stays in it forever or leaves $G \cup \partial G$. Consider the continuous map from $\partial G$ to the points $q = 0$, $p = \sigma(0)$ and $q = \pi$, $p = \sigma(\pi)$ that maps the two connected components of the boundary to these two points, respectively (to be more precise, it maps the plane $q = 0$ into the point $q = 0$, $p = \sigma(0)$, $t = t_0$ and the plane $q = \pi$ into the point $q = \pi$, $p = \sigma(\pi)$, $t = t_0$). We supposed that all solutions starting at the curve leave $G \cup \partial G$. Previously, we have shown that in this case there exists a continuous map between the curve $\sigma$ and the set $\partial G$: it assigns to every point of $\sigma$ the point of the first exit of the corresponding solution from $G$. Now we can consider the composition of the above two continuous maps. Finally, we obtain a continuous map from the connected curve onto the two-point set of its endpoints, which is impossible. This contradiction proves the proposition. \end{proof} Note that from the proof it also follows that there exist infinitely many solutions without falling. Indeed, a one-parameter family of such solutions can be obtained if we consider a family of non-intersecting curves $\sigma(q)$. Similarly, one can prove the following result, which contains sufficient conditions for the existence of a solution staying in $(0, \pi)$ for all $t \geqslant t_0$. \begin{proposition} Suppose that $\mu|\ddot \xi| < g$ for all $t \geqslant t_0$.
Then there exist $q_0 \in (0, \pi)$ and $p_0$ such that for the solution $(q(t), p(t))$ of (\ref{eq1}) with the corresponding initial conditions $q(t_0) = q_0$, $p(t_0) = p_0$, the following holds: $q(t) \in (0, \pi)$ for all $t > t_0$. \end{proposition} \begin{proof} The proof is analogous to the previous one. The only difference is that it is possible to show that solutions starting in $G$ cannot leave this set at the points where $q = 0$ or $q = \pi$ and $p = 0$. Indeed, suppose that for the solution $(q(t), p(t))$, for some $t^*$, we have $q(t^*) = 0$ and $p(t^*) = 0$. Since for all $t$ we have $-g/2l - \mu|\ddot \xi|/2l \leqslant -g/2l + \mu|\ddot \xi|/2l < 0$, any solution which reaches the plane $p=0$ at the point $q = 0$ leaves this plane. Moreover, we can conclude that this solution can reach the plane only from the region where $p > 0$. Taking into account the above inequalities, for small $\Delta t < 0$, we have $$ q(t^* + \Delta t) \leqslant -\frac{g}{2l}\Delta t^2 + \frac{\mu}{2l}|\ddot \xi|\Delta t^2 + o(\Delta t^2) < 0. $$ We obtain that the considered solution reaches the point $q = 0$ and $p = 0$ from the outside of $G$. Similarly, one can prove that solutions cannot leave $G$ at the point $q = \pi$, $p = 0$. \end{proof} \section{Conclusion} The presented proof is based on the topological ideas of the so-called Wa\.zewski method (see \cite{wazewski1948principe}, \cite{reissig1963qualitative}), which can also be applied to other pendulum-like systems. For instance, in \cite{polekhin2014examples}, it was proved that, if the motion of the pendulum is frictionless, then for any $\ddot \xi$ there exists a solution without falling. Note that this result agrees with Proposition \ref{propo1} from this paper if we put $\mu = 0$. A good overview of the attempts to prove the existence of a solution that always remains above the horizontal line in the system without friction can be found in \cite{Srzednicki2017}.
However, the system with dry friction is qualitatively different from the frictionless case: in the latter, there are no equilibrium points when $|\ddot\xi|\ne 0$. At the same time, if $\mu \ne 0$ and $|\ddot \xi|$ is relatively small, then there is a set of equilibrium points in a vicinity of $q = \pi/2$. These points can be considered as solutions without falling, yet from the proof it can be seen that there also exists at least one non-constant solution that never falls. The ideas used in the present paper can also be applied to systems with viscous friction and to more complex systems where the massive point moves on a two-dimensional surface. Further development of the ideas of the Wa\.zewski method can also be used to prove the existence of periodic solutions without falling in pendulum-like and general mechanical systems \cite{polekhin2015forced}, \cite{polekhin2016forced}, \cite{bolotin2015calculus}. Similar methods have been found useful in studying global controllability of an inverted pendulum \cite{Polekhin2017gsi, Polekhin2018}. In conclusion, we would like to note that the above results hold for a wider class of friction models. In particular, it is possible to consider various sufficiently smooth Stribeck curves. Therefore, we may expect the existence of a motion without falling (though possibly unstable) in a real system of an inverted pendulum with a horizontally moving pivot point.
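The Wa\.zewski-style argument is constructive in spirit: a solution without falling lies on the boundary between the initial velocities for which the pendulum falls to one side and those for which it falls to the other. The sketch below illustrates this with a bisection (shooting) search. The toy equation of motion, the parameter values, and the smoothed dry-friction term are our illustrative assumptions and not the system (\ref{eq1}) studied above.

```python
import math

def simulate(p0, mu=0.2, g=9.8, l=1.0, a=1.0, T=10.0, dt=1e-3):
    """Integrate the toy pendulum from q = pi/2 with initial velocity p0.

    Returns -1 if it falls to the q = 0 side, +1 for the q = pi side,
    and 0 if it stays in (0, pi) on the whole interval [0, T]."""
    q, p, t = math.pi / 2, p0, 0.0
    while t < T:
        xi_dd = a * math.sin(t)  # assumed pivot acceleration, not the paper's xi
        # toy equation of motion; the tanh term is a smoothed dry-friction force
        dp = (-g * math.cos(q) + xi_dd * math.sin(q)
              - mu * abs(xi_dd) * math.tanh(p / 1e-3)) / l
        q, p, t = q + dt * p, p + dt * dp, t + dt
        if q <= 0.0:
            return -1
        if q >= math.pi:
            return 1
    return 0

def find_nonfalling(lo=-5.0, hi=5.0):
    """Bisection on the initial velocity: a non-falling trajectory lies on the
    boundary between trajectories falling to the left and to the right."""
    assert simulate(lo) == -1 and simulate(hi) == 1
    for _ in range(60):
        mid = (lo + hi) / 2
        side = simulate(mid)
        if side == 0:
            return mid
        if side == -1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

The returned initial velocity yields a trajectory that stays in $(0, \pi)$ over the whole simulated horizon, mirroring the role of the connected curve $\sigma$ in the proof of Proposition \ref{propo1}.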
\subsection{Proof of Theorem \ref{theo:lo_exact}} \begin{theorem*} Let $T$ be the run time of a \oneone algorithm on $LO_k^f$ to optimize a bit string of length $n$. It then holds that \begin{align*} E(T) = E\left(T^n_k\right) \frac{\left(\frac{n}{n-1}\right)^n-1}{\left(\frac{n}{n-1}\right)^k-1} \in \Theta\left(E\left(T^n_k\right)\frac{n}{k}\right) \end{align*} where $T^n_k$ is the run time to optimize $f^k$ with a bit flip probability of $\frac{1}{n}$. \end{theorem*} \begin{proof} Let $T_i$ be the run time to optimize $f$ on the $i^{\text{th}}$ block $b_i$, given that all previous blocks are already optimized. By the definition of $LO_k^f$ we know that all blocks have to be optimized one after another from block $b_0$ to block $b_{\frac{n}{k}-1}$. Hence, \begin{align}\label{eq:gen_sum} E(T) = \sum_{i=0}^{\frac{n}{k}-1} E(T_i). \end{align} This still holds even if we optimize multiple blocks at once, because then the run time of the next blocks drops to $0$. Let $p_i$ be the probability of not flipping any bit in the first $i$ blocks $b_0, \dots, b_{i-1}$. If a bit flips within these first $i$ blocks, the fitness no longer depends on the block $b_i$ that we want to optimize; hence, whatever happened in block $b_i$ in this step will be discarded. Therefore, if we only look at steps where no bit flip happened in those leading blocks, we get a run time of $T^n_k$. We want to prove this intuition by using Wald's equation. Let $X_j \in \{0,1\}$ be an indicator variable that equals $1$ if no bit flip happened in the first $i$ blocks in step $j$, and $0$ otherwise. By definition of $LO_k^f$, all steps are discarded in which a bit flip in these blocks was made.
\begin{align*} &T^n_k = \sum_{j=1}^{T_i} X_j\\ \intertext{Using Wald's equation,} &\Rightarrow E\left(T^n_k\right) = E(T_i) E(X_j) = E(T_i) p_i\\ &\Rightarrow E(T_i) = \frac{E\left(T^n_k\right)}{p_i} = \left(\frac{n}{n-1}\right)^{ik}E\left(T^n_k\right) \end{align*} Using that $i$ has a maximum value of $\frac{n}{k}-1$, we can also conclude \begin{align}\label{eq:raw_bounds} E\left(T^n_k\right) \le E\left(T_i\right) = \left(\frac{n}{n-1}\right)^{ik}E\left(T^n_k\right) \le e E\left(T^n_k\right). \end{align} If we put that into Equation \eqref{eq:gen_sum} we get \begin{align*} E(T) &= \sum_{i=0}^{\frac{n}{k}-1} E(T_i)\\ &= \sum_{i=0}^{\frac{n}{k}-1} \left(\frac{n}{n-1}\right)^{ik}E\left(T^n_k\right)\\ &= E\left(T^n_k\right)\sum_{i=0}^{\frac{n}{k}-1} \left(\frac{n}{n-1}\right)^{ik}\\ &= E\left(T^n_k\right) \frac{\left(\frac{n}{n-1}\right)^n-1}{\left(\frac{n}{n-1}\right)^k-1}. \end{align*} It directly follows from Equations \eqref{eq:gen_sum} and \eqref{eq:raw_bounds} that \begin{align*} \frac{n}{k}E\left(T^n_k\right) \le E\left(T^n_k\right) \frac{\left(\frac{n}{n-1}\right)^n-1}{\left(\frac{n}{n-1}\right)^k-1} \le \frac{en}{k}E\left(T^n_k\right) \end{align*} and hence, \begin{align*} E(T) = E\left(T^n_k\right) \frac{\left(\frac{n}{n-1}\right)^n-1}{\left(\frac{n}{n-1}\right)^k-1} = \Theta\left(E\left(T^n_k\right)\frac{n}{k}\right). \end{align*} \end{proof} \subsection{Proof of Corollary \ref{cor:lo_exact_bound}} \begin{corollary*} The expected run time of a \oneone on \LO is exactly \begin{align*} E(T_{LO}(n)) = \frac{\left(\frac{n}{n-1}\right)^{n-1}+\frac{1}{n}-1}{2}n^2. \end{align*} \end{corollary*} \begin{proof} We use Theorem \ref{theo:lo_exact} and set $k=1$, so that $E\left(T^n_k\right)$ is just the expected run time to get a $1$ on one bit that is initialized randomly and has a flip probability of $\frac{1}{n}$. In half of all cases we already start with a $1$ and are done in $0$ steps; otherwise, we have an expected run time of $n$.
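As an aside, the closed form stated in the corollary can be checked empirically. The following sketch (an illustration only, not part of the proof; all function names are ours) runs the \oneone with bit flip probability $\frac{1}{n}$ on \LO and compares the empirical mean run time with the exact expression:

```python
import random

def leading_ones(x):
    """Number of leading one-bits of the bit string x."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_one_ea(n, rng):
    """(1+1) EA with bit-flip probability 1/n; returns the number of mutation steps."""
    x = [rng.randrange(2) for _ in range(n)]
    fx = leading_ones(x)
    steps = 0
    while fx < n:
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # flip each bit w.p. 1/n
        fy = leading_ones(y)
        if fy >= fx:            # accept the offspring if it is not worse
            x, fx = y, fy
        steps += 1
    return steps

def exact_expectation(n):
    """Closed form ((n/(n-1))^(n-1) + 1/n - 1)/2 * n^2 from the corollary."""
    return ((n / (n - 1)) ** (n - 1) + 1 / n - 1) / 2 * n ** 2

rng = random.Random(1)
n, runs = 16, 1000
empirical = sum(one_one_ea(n, rng) for _ in range(runs)) / runs
```

For $n = 16$ the exact expression evaluates to roughly $217$, and the empirical mean over a thousand runs typically deviates from it by only a few percent.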
This fitness function nested in \LO gives us \LO itself. Obviously, $E(T^n_k) = \frac{n}{2}$, which leads to \begin{align*} E(T_{LO}(n)) &= E\left(T^n_k\right) \frac{\left(\frac{n}{n-1}\right)^n-1}{\left(\frac{n}{n-1}\right)^k-1}\\ &= \frac{n}{2} \frac{\left(\frac{n}{n-1}\right)^n-1}{\frac{n}{n-1}-1}\\ &= \frac{\left(\frac{n}{n-1}\right)^{n-1}+\frac{1}{n}-1}{2}n^2. \end{align*} \end{proof} \subsection{Proof of Lemma \ref{lem:fork_half}} \begin{lemma*} For any run of the \oneone on \Fork{n}{r} let $V$ denote the event that the valley string occurs as a best solution before the optimum. Then, $\Pr{(V)} = \frac{1}{2}$. \end{lemma*} \begin{proof} Let $p_S(V)$ be the probability of $V$ under the condition of starting with bit string $S$, and let $S'$ be $S$ reversed. It then holds that $p_S(V) = p_{S'}(\overline{V})$, due to the fact that reversing a string leaves its fitness unchanged, with the exception of the optimum and the valley, and the valley reversed is the optimum. Since these are the last strings of a run, they do not influence the probabilities of all other strings to occur before them. From the law of total probability we can conclude \begin{align*} \Pr{(V)} &= \sum_{S\in \{0,1\}^n} p_S(V)\frac{1}{2^n} = \sum_{S'\in \{0,1\}^n} p_{S'}(V)\frac{1}{2^n}\\ &= \sum_{S\in \{0,1\}^n} p_S(\overline{V})\frac{1}{2^n} = \Pr{(\overline{V})}. \end{align*} Hence, $\Pr{(V)} = \frac{1}{2}$. \end{proof} \subsection{Proof of Theorem \ref{theo:fork_general}} \begin{theorem*} The expected optimization time of the \oneone on \Fork{k}{r} with bit flip probability $\frac{1}{n}$ and $n\ge k$ is \begin{align*} E\left(T(k)\right) \in \Theta\left(n^{2r}\right). \end{align*} \end{theorem*} \begin{proof} First we show $E\left(T(k)\right) \in \Or\left(n^{2r}\right)$ using the fitness level argument. Assuming the worst case that every possible fitness level is visited, we get an upper bound on the expected run time by adding up the expected times needed to leave each level for a better one.
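The symmetry argument of Lemma \ref{lem:fork_half} can also be observed numerically. The sketch below is an illustration only: it reconstructs \Fork{k}{r} as \OM with two distinguished strings, the valley $0^r1^{k-r}$ (fitness $k+1$) and the optimum $1^{k-r}0^r$ (fitness $k+2$), which matches the properties used in these proofs but is not the formal definition from the main text.

```python
import random

def make_fork(k, r):
    """Fitness reconstructed from the proofs: OneMax plus two special strings.

    valley = 0^r 1^(k-r) with fitness k+1, optimum = 1^(k-r) 0^r with fitness k+2."""
    valley = tuple([0] * r + [1] * (k - r))
    optimum = tuple([1] * (k - r) + [0] * r)
    def fitness(x):
        t = tuple(x)
        if t == optimum:
            return k + 2
        if t == valley:
            return k + 1
        return sum(t)            # OneMax part: number of one-bits
    return fitness, valley, optimum

def valley_first(k, r, rng):
    """Run the (1+1) EA until the best solution is the valley or the optimum;
    return True if the valley occurred first."""
    fitness, valley, optimum = make_fork(k, r)
    x = [rng.randrange(2) for _ in range(k)]
    fx = fitness(x)
    while tuple(x) not in (valley, optimum):
        y = [b ^ (rng.random() < 1.0 / k) for b in x]
        fy = fitness(y)
        if fy >= fx:
            x, fx = y, fy
    return tuple(x) == valley

rng = random.Random(7)
runs = 1000
frac = sum(valley_first(8, 1, rng) for _ in range(runs)) / runs
```

Over many runs the fraction of runs in which the valley occurs first concentrates around $\frac{1}{2}$, as the lemma predicts.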
Let level $L_i$ be the set of all bit strings that yield the same fitness $i$, so that $\forall s_i \in L_i\colon f(s_i)=i$. Let $p_i$ be the probability of the \oneone for leaving $L_i$ for a higher level. By definition of \Fork{k}{r}, for all $i<k$ there is always at least one $0$ that leads to a higher fitness when flipped, given that all current $1$s remain. For these levels $L_0$, \ldots, $L_{k-1}$, $i$ equals the number of $1$s in the bit string. Recall that $n, k\ge 2$. \begin{align*} \forall i < k: p_i \ge \frac{1}{n}\left(1-\frac{1}{n}\right)^i \ge \frac{1}{n}\left(1-\frac{1}{n}\right)^n \ge \frac{1}{4n} \end{align*} The only fitness levels not covered by this are levels with fitness between $k$ and $k+2$. We can leave out $k+2$, since there is no need to leave the optimum. Being in level $k$ means having the $1^k$ bit string. The only way to leave that level is by either flipping the first $r$ bits to get to level $k+1$ or flipping the last $r$ bits to get to level $k+2$. In both cases all other $k-r$ bits have to remain unchanged. \begin{align*} p_k \ge \left(\frac{1}{n}\right)^{r}\left(1-\frac{1}{n}\right)^{k-r} \ge \frac{1}{n^r}\left(1-\frac{1}{n}\right)^n \ge \frac{1}{4n^r} \end{align*} To leave level $k+1$, all $2r$ bits have to be flipped while keeping all other bits unchanged. \begin{align*} p_{k+1} \ge \left(\frac{1}{n}\right)^{2r}\left(1-\frac{1}{n}\right)^{k-2r} \;\ge\; \frac{1}{n^{2r}}\left(1-\frac{1}{n}\right)^n \;\ge\; \frac{1}{4n^{2r}} \end{align*} Using the fitness level argument we get \begin{align*} E\left(T(k)\right) &\le \sum_{i=0}^{k+1} \frac{1}{p_i}\\ &= \sum_{i=0}^{k-1} \frac{1}{p_i} + \frac{1}{p_k} + \frac{1}{p_{k+1}}\\ &\le 4\left(n^r + n^{2r} + \sum_{i=0}^{k-1} n\right)\\ &= 4\left(n^r + n^{2r} + kn\right) \intertext{and having $n \ge k\ge 2r \ge 2$ brings us to} &\le 12n^{2r} \;\in\; \Or\left(n^{2r}\right). \end{align*} Next, we will prove that $E\left(T(k)\right) \in \Omega\left(n^{2r}\right)$.
Let $V$ be the event that the valley occurs within a run. From the law of total expectation it follows that \begin{align*} E\left(T(k)\right) &= E\left(T(k) \; \middle| \; V\right)\Pr{(V)} + E\left(T(k) \; \middle| \; \overline{V}\right)\Pr{(\overline{V})}\\ &\ge E\left(T(k) \; \middle| \; V\right)\Pr{(V)}\\ \intertext{Using Lemma \ref{lem:fork_half} and the expected time of $n^{2r}$ to escape the valley, one gets} &= \frac{1}{2}E\left(T(k) \; \middle| \; V\right) \ge \frac{1}{2}n^{2r} \in \Omega\left(n^{2r}\right). \end{align*} \end{proof} \subsection{Proof of Theorem \ref{theo:single_fork_grouped}} \begin{theorem*} The expected optimization time of the \oneone on \LO with $k$-block \Fork{n}{r} is $E(T) \in \Theta\left(\frac{1}{k}n^{2r+1}\right)$. \end{theorem*} \begin{proof} We use Theorem \ref{theo:lo_exact} to obtain this statement. As discussed before, to fulfill the requirements, the $1^n$ bit string has to yield the highest fitness. However, this can easily be achieved by flipping the last $r$ bits in the definition of \Fork{n}{r}. All other properties remain the same, since $0$ and $1$ are interchangeable. Therefore, the run times do not change with this modified definition. It holds that $E(T) \in \Theta\left(E\left(T^n_k\right)\frac{n}{k}\right)$, where $T^n_k$ is the run time to optimize $f$ on a bit string of length $k$ with a bit flip probability of $\frac{1}{n}$. In the previous Theorem \ref{theo:fork_general} we have already shown that $E\left(T^n_k\right) \in \Theta\left(n^{2r}\right)$. Thus, $E(T) \in \Theta\left(\frac{1}{k}n^{2r+1}\right)$. \end{proof} \subsection{Proof of Lemma \ref{lem:all_allone}} \begin{lemma*} Let $\lambda$ be the number of islands running the \oneone optimizing \Fork{n}{r}. Let $T_{all}^\lambda$ denote the run time until all islands have reached the optimum, the valley, or $1^n$ as their best solution, regardless of the migration topology and policy.
If $\lambda$ is polynomial in $n$, \begin{align*} E\left(T_{all}^\lambda\right) \in \Or\left(n\log n\right). \end{align*} \end{lemma*} \begin{proof} For an upper bound we can assume that we just want to optimize \OM, since the valley and the optimum of \Fork{n}{r} are just exceptions here that make the run time even shorter. Further, we only consider the worst case that all islands work in isolation, since in \OM a higher fitness means being closer to $1^n$. The expected run time for a single island $E\left(T_{opt}\right)$ until it reaches the optimum, the valley or $1^n$ can be bounded from above by the expected number of steps it would take to simply optimize \OM, since the fact that in \Fork{n}{r} optimum and valley can be obtained earlier just decreases the run time. Let $p_c$ be the probability for one island to take longer than $cen(\ln\lambda+ \ln n)$ steps to optimize \OM, where $c$ is a constant. The multiplicative drift theorem \cite{Doerr2012} gives us the following tail bound \cite{Doerr2013}. \begin{align*} \Pr{\left(T_{opt}>en\left(c + 2\ln\lambda + \ln n\right)\right)} \le e^{-c-2\ln\lambda} = \frac{1}{e^c\lambda^2} \end{align*} Especially for $c\ge1$ we have \begin{align*} p_c = \Pr{\left(T_{opt}>cen(\ln\lambda+ \ln n)\right)} \le \frac{1}{e^c\lambda^2}. \end{align*} The run time for all $\lambda$ islands can then be bounded by \begin{align*} p^\lambda_c=\Pr{\left(T_{opt}^\lambda>cen(\ln\lambda+ \ln n)\right)} &= 1-(1-p_c)^\lambda\\ &\le 1-\left(1-p_c \lambda\right) = p_c\lambda \le \frac{1}{e^c\lambda} \end{align*} for $c\ge 1$. Let $a_i = ien(\ln\lambda+ \ln n)$.
Applying the law of total expectation gives us \begin{align*} E\left(T_{opt}^\lambda\right) &\le \sum_{i=0}^{\infty} E\left(T_{opt}^\lambda \; \middle| \; a_i< T_{opt}^\lambda\le a_{i+1}\right)\Pr{\left(T_{opt}^\lambda>a_i\right)}\\ &\le \sum_{i=0}^{\infty} a_{i+1}p^\lambda_i\\ \intertext{and since our bound on $p^\lambda_c$ only holds for $i\ge 1$, } &\le a_1 + \sum_{i=1}^{\infty}\frac{a_{i+1}}{e^i\lambda}\\ &= en(\ln\lambda+ \ln n) + \sum_{i=1}^{\infty}\frac{(i+1)en(\ln\lambda+ \ln n)}{e^i\lambda}\\ &\le en(\ln\lambda+ \ln n) + \frac{en(\ln\lambda+ \ln n)}{\lambda}\sum_{i=1}^{\infty}\frac{i+1}{e^i}\\ &\le en(\ln\lambda+ \ln n) + 2en\ln n\sum_{i=1}^{\infty}\frac{i+1}{e^i}\\ &\le en\ln\lambda+ 5en\ln n\\ &\in \Or\left(n\log n\right). \end{align*} The last step follows from the restriction of $\lambda$ being polynomial in $n$. \end{proof} \subsection{Proof of Theorem \ref{theo:isolated_fork}} In order to prove the theorem, we first give the following lemma. \begin{lemma}\label{lem:choose_sum_div} For $n\ge 1$, \begin{align*} \frac{1}{2^n}\sum_{k=1}^{n} \binom{n}{k}\frac{n}{k} \in \Theta\left(1\right).
\end{align*} \end{lemma} \begin{proof} \begin{align*} \frac{1}{2^n}\sum_{k=1}^{n} \binom{n}{k}\frac{n}{k} &= \frac{1}{2^n}\sum_{k=1}^{n} \binom{n}{k}\frac{n+1}{k+1}\cdot\frac{n}{n+1}\cdot \frac{k+1}{k}\\ &= \frac{1}{2^n}\sum_{k=1}^{n} \binom{n+1}{k+1}\frac{n}{n+1}\cdot \frac{k+1}{k}\\ &= \frac{1}{2^n}\sum_{k=1}^{n} \binom{n+1}{k+1}\Theta\left(1\right)\\ &= \frac{1}{2^n}\sum_{k=1}^{n-1} \left(\binom{n}{k} + \binom{n}{k+1}\right)\Theta\left(1\right) + \Theta\left(1\right)\\ &= \frac{1}{2^n}\sum_{k=1}^{n-1} \binom{n}{k}\Theta\left(1\right) + \frac{1}{2^n}\sum_{k=2}^{n} \binom{n}{k}\Theta\left(1\right) + \Theta\left(1\right)\\ &= \Theta\left(1\right) \end{align*} \end{proof} \begin{theorem*} For $\lambda \le n^r$ isolated islands the expected time to optimize \Fork{n}{r} is \begin{align*} E\left(T\right) \in \Or\left(n\log(n)+\frac{n^{2r}}{\lambda 2^\lambda} + \frac{n^r}{\lambda}\right). \end{align*} \end{theorem*} \begin{proof} First, assume that all islands have the optimum, the valley, or the $1^n$ string as their best solution. From Lemma \ref{lem:all_allone} we know that this state is reached after an expected number of $\Or\left(n\log n\right)$ steps. We further assume the worst case that none of the islands has found an optimum up to this point. As shown in Lemma \ref{lem:fork_half}, an island gets into the valley with probability $\frac{1}{2}$. If $i$ islands find the optimum before the valley, the upper bound on the run time to find the optimum is the minimum of \begin{enumerate} \item the run time it would take at least one of the $i$ islands to make the $n^r$-jump to the optimum and \item the run time it would take $\lambda-i$ islands to all get to the valley and one of them to get from there to the optimum.
\end{enumerate} It is well known that the expected value of the maximum of $\lambda$ independently sampled elements of the same geometric distribution with success probability $p$ can be bounded from above by $\frac{1}{p}\sum_{k=1}^{\lambda}\frac{1}{k} \le \frac{\log\lambda+1}{p}$ \cite{Eisenberg2008}. We can adapt this to our upper bound on the run time for $\lambda-i$ islands to get to the valley from $1^n$, since this is also geometrically distributed for each island. Further, when we observe the jumps to the valley or the optimum, respectively, we make use of Equation \eqref{eq:all_jump}. From the law of total expectation we can observe that \begin{align*} E\left(T\right)&\le \Or\left(n\log(n)\right)\\ &+ \sum_{i=0}^{\lambda}\left(\frac{1}{2}\right)^\lambda \binom{\lambda}{i} \min\left(\frac{n^r}{i}+1, 2n^r\log\lambda + \frac{n^{2r}}{\lambda-i}+1\right)\\ &\le \Or\left(n\log(n)\right)\\ &+\sum_{i=0}^{\lambda}\left(\frac{1}{2}\right)^\lambda \binom{\lambda}{i} \min\left(\frac{n^r}{i}, 2n^r\log\lambda + \frac{n^{2r}}{\lambda-i}\right). \end{align*} To eliminate the minimum in the inequality, we want to find out from which $i$ on we have $\frac{n^r}{i} \le 2n^r\log\lambda + \frac{n^{2r}}{\lambda-i}$. This already holds from $i = 1$ on: for $i = 0$ the first term is infinite, since zero islands cannot make progress at all, while for any $i \ge 1$ the first term is at most $n^r$ and hence smaller than the second one. Knowing this, one can resolve the minimum to get to \begin{align*} E\left(T\right) &\le \Or\left(n\log(n)\right)+\frac{2n^r\log\lambda}{2^\lambda} + \frac{n^{2r}}{\lambda2^\lambda} + \sum_{i=1}^{\lambda}\left(\frac{1}{2}\right)^\lambda \binom{\lambda}{i} \frac{n^r}{i}. \end{align*} The second term on the right side is always smaller than the first or the third one: for $\lambda \ge 2r\log n$ we have $\frac{2n^r\log\lambda}{2^\lambda} < n\log n$, and for $\lambda \le 2r\log n$ we have $\frac{2n^r\log\lambda}{2^\lambda} < \frac{n^{2r}}{\lambda2^\lambda}$.
Therefore, we only increase the right-hand side by doubling the two outer terms and leaving out the middle one. \begin{align*} E\left(T\right) &\le \Or\left(2n\log(n)\right) + \frac{2n^{2r}}{\lambda2^\lambda} + \frac{n^r}{2^\lambda}\sum_{i=1}^{\lambda} \binom{\lambda}{i}\frac{1}{i} \end{align*} Using Lemma \ref{lem:choose_sum_div} gives \begin{align*} E\left(T\right) \in \Or\left(n\log(n) + \frac{n^{2r}}{\lambda2^\lambda} + \frac{n^r}{\lambda}\right). \end{align*} \end{proof} \subsection{Proof of Lemma \ref{lem:worst_case_runtime}} \begin{lemma*} For $\lambda \in \Or\left(\frac{n^{2r-1}}{\log n}\right)$ islands and any migration topology and/or policy, the expected run time to optimize \Fork{n}{r} can be bounded by \begin{align*} E(T) \in \Or\left(\frac{n^{2r}}{\lambda}\right). \end{align*} \end{lemma*} \begin{proof} From Lemma \ref{lem:all_allone} we know that the expected number of steps until all islands have found either the valley, $1^n$ or the optimum is in $\Or\left(n\log n\right)$. If no island has found the optimum after that time, every island has a probability of at least $\frac{1}{4n^{2r}}$ of reaching the optimum in the next step. By using Equation \eqref{eq:all_jump}, the expected run time for one out of $\lambda$ islands to make the final jump to the optimum can be bounded by $\Or\left(\frac{n^{2r}}{\lambda}\right)$.\\ Therefore, we get an overall run time of $\Or\left(n\log n + \frac{n^{2r}}{\lambda}\right) = \Or\left(\frac{n^{2r}}{\lambda}\right)$. \end{proof} \subsection{Proof of Lemma \ref{lem:ov_bound}} \begin{lemma*} The probability $p_{ov}$ for a single island to generate the optimum or the valley on its way to $1^n$ while optimizing \Fork{k}{r} with \oneone$_{1/n}$ is \begin{align*} p_{ov} \in \Or\left(\frac{1}{k^{r-1}}\right)\;\;\text{ for }k\le n.
\end{align*} \end{lemma*} \begin{proof} We start by investigating the expected number of times $E(J)$ that the \oneone generates an individual with $r$ zeros and $k-r$ ones, even if this individual is discarded. Recall that the optimum and the valley also have this number of zeros and ones. Let $S_i$ be the number of jumps that the \oneone makes from fitness level $k-i$ to the desired level of exactly $r$ zeros. It follows that \begin{align*} E(J) &\le E(1 + S_r + S_{r-1}+\dots + S_1)\\ &= 1+\sum_{i=1}^r E(S_i). \end{align*} The ``$+1$'' comes into play when the level with $r$ zeros is reached from a lower fitness level, which can only happen once by the definition of the \oneone. The next step is to determine $E(S_i)$. Let $T_i$ denote the number of steps the \oneone, while optimizing \OM, stays on fitness $k-i$. Since $T_i$ is geometrically distributed, $E(T_i)$ equals the reciprocal of the probability $p_i$ that we leave the fitness level $k-i$ for a higher one. This is the case, for instance, if one arbitrary zero is flipped to one while all other bits remain the same. It follows that \begin{align*} p_i &\ge \binom{i}{1}\frac{1}{n}\left(1-\frac{1}{n}\right)^{k-1} \quad\ge\quad \frac{i}{n}\left(1-\frac{1}{n}\right)^{n-1} \quad\ge\quad \frac{1}{en} \end{align*} and hence, $E(T_i) \le en$. Further, let $X_{i, j}$ be the indicator variable that equals $1$ if a bit string with exactly $r$ zeros is generated in mutation step $j$ while the \oneone is at fitness $k-i$, else $0$. We can observe that $S_i = \sum_{j=1}^{T_i} X_{i, j}$. Using Wald's equation we get $E\left(S_i\right) = E(T_i)E(X_{i, 1})$. Recall that for Wald's equation, $X_{i,j}$ has to have the same distribution for every $j$ and a fixed arbitrary $i$. Hence, \begin{align} E(J) &\le 1+\sum_{i=1}^r E(S_i) \quad=\quad 1+\sum_{i=1}^r E(T_i)E(X_{i, 1})\nonumber\\ &= 1+\sum_{i=1}^{r-1} E(T_i)E(X_{i, 1})+E(T_r)E(X_{r, 1}) \label{eq:wald_num_of_VO}. \end{align} Next, we determine $E(X_{i, 1})$.
In the sum, $i=r$ is a special case, because this level already is the level we want to get to, and at least two bits have to be flipped to get another bit string on the same level. To distinguish these situations, we investigate two cases. \begin{itemize} \item Case 1: $i\le r-1$. Let $Z$ be the number of zeros flipped to one in a step. The more zeros are flipped to one, the more ones have to be flipped to zero in the same step, since the number of zeros has to increase to exactly $r$. \begin{align*} E(X_{i,1}) &= \sum_{h=0}^\infty E\left(X_{i,1} \; \middle| \; Z=h\right)\Pr{(Z=h)}\\ &\le \sum_{h=0}^\infty E\left(X_{i,1} \; \middle| \; Z=0\right)\Pr{(Z=h)}\\ &= E\left(X_{i,1} \; \middle| \; Z=0\right)\sum_{h=0}^\infty \Pr{(Z=h)}\\ &= E\left(X_{i,1} \; \middle| \; Z=0\right)\\ &= \left(\frac{1}{n}\right)^{r-i} \binom{k-i}{r-i} \left(1-\frac{1}{n}\right)^{k-r}\\ \intertext{Knowing that $r<k$ by definition of \Fork{k}{r} leads to} &\le \left(\frac{1}{n}\right)^{r-i} \binom{k}{r-i}\\ &\le \left(\frac{k}{n}\right)^{r-i} \quad\le\quad \frac{k}{n}. \end{align*} \item Case 2: $i=r$. We sum over all possible numbers of zeros that flip to one. The same number of ones has to flip to zero. \begin{align*} E(X_{r,1}) &\le \sum_{h=1}^{r} \left(\frac{1}{n}\right)^{2h}\binom{r}{h}\binom{n-r}{h}\\ &\le \sum_{h=1}^{r} \left(\frac{1}{n}\right)^2\binom{r}{1}\binom{n-r}{1}\\ &= r^2(n-r)\left(\frac{1}{n}\right)^2\\ &\le \frac{r^2}{n} \end{align*} \end{itemize} If we now put $E(T_i)$ and $E(X_{i,1})$ into Equation \eqref{eq:wald_num_of_VO}, we get \begin{align*} E(J) &\le 1+\sum_{i=1}^{r-1} \left(en \frac{k}{n}\right)+en\frac{r^2}{n}\\ &\le 1 + erk + er^2. \end{align*} Since this is an upper bound on the expected number of bit strings with exactly $r$ zeros that are seen during one run of the \oneone, we can get an upper bound on the probability that we see the valley ($p_v$) or the optimum ($p_o$), respectively, by \begin{align*} p_o = p_v &\le \frac{1 + erk + er^2}{\binom{k}{r}}.
\end{align*} Since $r$ is constant, there is a constant $c$ so that the following holds. \begin{align*} p_o = p_v &\le \frac{1 + erk + er^2}{ck^r}\\ &= \frac{1}{ck^r} + \frac{erk}{ck^r} + \frac{er^2}{ck^r}\\ &\le \frac{d}{k^{r-1}}\quad\text{for a constant $d>0$} \end{align*} Applying the union bound we finally get \begin{align*} p_{ov} &\le p_o + p_v \le \frac{2d}{k^{r-1}}\\ &\in \Or\left(\frac{1}{k^{r-1}}\right). \end{align*} \end{proof} \subsection{Proof of Lemma \ref{lem:probability_Q}} \begin{lemma*} Under the condition that at least one of $\lambda \in \Or\left(n^{r-1}\right)$ islands has $1^n$ and all others have the valley as their best solution, the probability of the event $Q$ that one island finds the valley and a migration is made before the optimum is found by any island satisfies \begin{align*} \frac{c}{2} \frac{n^r}{n^r + \tau\lambda} \;\le\; \Pr{(Q)} \;\le\; \frac{n^r + \frac{\lambda}{2}}{2n^r + \tau\lambda} \end{align*} for a constant $0<c<1$. \end{lemma*} \begin{proof} Starting with the examination of the lower bound, we look at the island $\Lambda$ that comes up with the valley or the optimum first. With probability $\frac{1}{2}$ it finds the valley before the optimum (Lemma~\ref{lem:fork_half}). For $Q$ to happen, no island may find the optimum until the next migration. We already know from Lemma~\ref{lem:ov_bound} that $\left(1-\frac{1}{n^r}\right)^\lambda$ is a lower bound on the probability that none of the $\lambda$ islands finds the optimum in the next step. This has to hold for all steps until a migration event occurs. In the following sum, $i$ represents the index of the step in which $\Lambda$ shares the valley with all other islands.
Thus, \begin{align*} \Pr{(Q)}&= \frac{1}{2} \sum_{i=1}^{\infty} \left(1-\frac{1}{\tau}\right)^{i-1} \frac{1}{\tau}\left(1-\frac{1}{n^r}\right)^{\lambda i}\\ &= \frac{\left(1-\frac{1}{n^r}\right)^\lambda}{2\tau} \sum_{i=0}^{\infty} \left(\left(1-\frac{1}{\tau}\right)\left(1-\frac{1}{n^r}\right)^\lambda\right)^i\\ &\ge \frac{\left(1-\frac{1}{n^r}\right)^\lambda}{2\tau} \sum_{i=0}^{\infty} \left(\left(1-\frac{1}{\tau}\right)\left(1-\frac{\lambda}{n^r}\right)\right)^i. \intertext{Since $\lambda \in \Or\left(n^{r-1}\right)$, there is a constant $0<c<1$ such that $c \le \left(1-\frac{1}{n^r}\right)^\lambda$. Hence,} &\ge \frac{c}{2\tau} \sum_{i=0}^{\infty} \left(\left(1-\frac{1}{\tau}\right)\left(1-\frac{\lambda}{n^r}\right)\right)^i\\ &= \frac{c}{2\tau} \frac{1}{1-\left(1-\frac{1}{\tau}\right)\left(1-\frac{\lambda}{n^r}\right)}\\ &= \frac{c}{2\tau} \frac{1}{\frac{1}{\tau} + \frac{\lambda}{n^r}-\frac{\lambda}{\tau n^r}}\\ &\ge \frac{c}{2} \frac{1}{1 + \frac{\tau\lambda}{n^r}}\\ &= \frac{c}{2} \frac{n^r}{n^r + \tau\lambda}, \end{align*} which proves the lower bound. For the upper bound, we assume that all islands that come up with the valley during the whole run already start with the valley. This also means that the next new individual found by any island is the optimum. The probability that the number of islands that get the valley is at least $\frac{\lambda}{2} \ge r\log n$ is at most $\left(\frac{1}{2}\right)^{r\log n} = \frac{1}{n^r}$, since $\frac{1}{2}$ is the probability to get the valley instead of the optimum (Lemma \ref{lem:fork_half}). By using Lemma \ref{lem:worst_case_runtime} and the law of total expectation, we see that this case can be ignored, as the resulting term does not dominate the bound. Thus, we now assume that we start with at most $\frac{\lambda}{2}$ valleys and that the next change on an island will be the optimum. Since all islands always migrate at the same time, we get an upper bound on $\Pr{(Q)}$ similarly to the lower bound.
\begin{align*} \Pr{(Q)}&= \frac{1}{2} \sum_{i=1}^{\infty} \left(1-\frac{1}{\tau}\right)^{i-1} \frac{1}{\tau}\left(1-\frac{1}{n^r}\right)^{\left(\lambda-\frac{\lambda}{2}\right) i}\\ &\le \frac{1}{2} \sum_{i=1}^{\infty} \left(1-\frac{1}{\tau}\right)^{i-1} \frac{1}{\tau}\left(1-\frac{1}{n^r}\right)^{\frac{\lambda i}{2}}\\ &= \frac{\left(1-\frac{1}{n^r}\right)^{\frac{\lambda}{2}}}{2\tau} \sum_{i=0}^{\infty} \left(\left(1-\frac{1}{\tau}\right)\left(1-\frac{1}{n^r}\right)^{\frac{\lambda}{2}}\right)^i\\ &\le \left(\frac{1}{2\tau}\right) \frac{1}{1-\left(1-\frac{1}{\tau}\right)\left(1-\frac{1}{n^r}\right)^{\frac{\lambda}{2}}}\\ &\le \left(\frac{1}{2\tau}\right) \frac{1}{1-\left(\frac{\tau-1}{\tau}\right)\left(\frac{n^r}{n^r+\frac{\lambda}{2}}\right)}\\ &= \left(\frac{1}{2\tau}\right) \frac{1}{1-\left(\frac{\tau n^r - n^r}{\tau n^r + \frac{\tau\lambda}{2}}\right)}\\ &= \frac{n^r + \frac{\lambda}{2}}{2n^r + \tau\lambda}. \end{align*} \end{proof} \subsection{Proof of Theorem \ref{theo:fork_comlete_lower_bound}} \begin{theorem*} The expected run time of $2r\log n \le \lambda \in \Or\left(\frac{n^{r-1}}{\log n}\right)$ islands optimizing \Fork{n}{r} on a complete graph is $E(T) \in \Omega\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2}\right)$. \end{theorem*} \begin{proof} For this proof, to get a lower bound, we assume that we start with all islands having $1^n$ or the valley as their best solution. Let the event that this state is ever reached be called $U$. To show that our assumption leads to a lower bound, we now have to prove that $\Pr{(U)}$ is at least constant. For $U$ to happen, no island is allowed to find the optimum until $U$ is satisfied. By Lemma~\ref{lem:ov_bound}, the probability for one island to generate the optimum during this time is at most $\frac{b}{n^{r-1}}$ for a constant $b>0$. We can conclude that $\Pr{(U)} \ge \left(1-\frac{b}{n^{r-1}}\right)^\lambda$.
From the limitation $\lambda \in \Or\left(n^{r-1}\right)$ we can derive $\Pr{(U)} \ge d$ for a constant $0<d\le1$. This confirms that we can derive the lower bound under the assumption we wanted to make. For the event $Q$ as defined in Lemma~\ref{lem:probability_Q}, we use that $E(T) = E(T \;|\; Q)\Pr{(Q)} + E(T \;|\; \overline{Q})\Pr{(\overline{Q})}$ and will show the values of both summands in the following. From that lemma we also obtain the bounds on $\Pr{(Q)}$. If $Q$ happens, meaning all islands get the valley as their solution, all islands will have to make the jump to the optimum from the valley. The expected overall run time under this condition therefore is at least $\frac{n^{2r}}{2\lambda}$ (Equation \eqref{eq:all_jump}). Therefore, for this event we get a total of $ E\left(T \; \middle| \; Q\right)\Pr{(Q)} \ge \frac{n^{2r}}{2\lambda}\frac{dc}{2} \left(\frac{n^r}{n^r + \tau\lambda}\right) \;\in\; \Omega\left(\frac{n^{3r}}{\lambda n^r + \tau\lambda^2}\right) $. We now continue with the other part, $E\left(T \; \middle| \; \overline{Q}\right)\Pr{(\overline{Q})}$. The expected run time until one of $\lambda$ islands comes up with the optimum is $E{(T \;|\;\overline{Q})} \in \Omega\left(\frac{n^r}{\lambda}\right)$ (Equation~\eqref{eq:all_jump}), which leads us to $ E{(T \;|\;\overline{Q})}\Pr{(\overline{Q})} \in \Omega\left(\frac{\tau n^r}{n^r + \tau\lambda}\right) $. If we finally add both cases of $Q$ and $\overline{Q}$ we get \begin{align*} E(T) &= E{(T \;|\; Q)}\Pr{(Q)} + E{(T \;|\;\overline{Q})}\Pr{(\overline{Q})} \;\in\; \Omega\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2}\right).
\end{align*}\qed \end{proof} \subsection{Proof of Theorem \ref{theo:fork_comlete_upper_bound}} \begin{theorem*} The expected run time of $2\log n \le \lambda \in \Or\left(\frac{n^{r-1}}{\log n}\right)$ islands optimizing \Fork{n}{r} on a complete graph is $ E(T) \in \Or\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2} + \frac{n^{2r+1}\log n}{\tau\lambda}\right) $. \end{theorem*} \begin{proof} To derive an upper bound we assume for the whole proof that no optimum is generated until all islands have $1^n$ or the valley as their best individual. Similar to the proof for the lower bound we want to split the run time by the event $U$, that all islands come to a state where at least one island has $1^n$ and all others the valley as their best individual. Unlike before, we cannot ignore the case of $\overline{U}$, since we want an upper bound. We start with $\overline{U}$. The only two reasons for $\overline{U}$ to happen are that \begin{enumerate}[nosep] \item all islands create the valley on their own or \item at least one island finds the valley during the first $\Or\left(n\log n\right)$ steps and migrates it to all others \end{enumerate} (1): The probability for this is not more than $\left(\frac{1}{2}\right)^{2r\log n} = \frac{1}{n^{2r}}$, because we have at least $2r\log n$ islands. As in Theorem \ref{theo:fork_comlete_lower_bound} we can ignore that case because of Lemma \ref{lem:worst_case_runtime} and the law of total expectation. (2): To bound the probability of this event, we assume the worst case that some islands already have the valley from the beginning. We know that after $\Or\left(n\log n\right)$ steps, the decision of $U$ or $\overline{U}$ has been made (Lemma \ref{lem:all_allone}). The probability to migrate during that time is for a constant $d>0$ \begin{align*} \Pr{(\overline{U})} &\le \sum_{i=0}^{dn\log n} \left(1-\frac{1}{\tau}\right)^i\frac{1}{\tau} \;\le\; \frac{2dn\log n}{\tau + dn\log n}\;\le\; \frac{2dn\log n}{\tau}.
\end{align*} The last step uses an inequality introduced by Badkobeh et al.~\cite{Badkobeh2015}. The run time in that case would be in $\Or\left(\frac{n^{2r}}{\lambda}\right)$, using Lemma~\ref{lem:worst_case_runtime}. It follows that $ E{(T\;|\;\overline{U})}\Pr{(\overline{U})} \in \Or\left(\frac{n^{2r+1}\log n}{\tau\lambda}\right) $. It remains to show the run time under the condition of $U$. Therefore, for the rest of the proof we will assume that everything happens under the condition of $U$. As for the lower bound, we split the run time again into the two cases $Q$ and $\overline{Q}$ defined as in Lemma~\ref{lem:probability_Q}. Starting with $Q$, we get an expected run time by using Lemma \ref{lem:worst_case_runtime} which results in $E(T \;|\; Q) \in \Or\left(\frac{n^{2r}}{\lambda}\right)$. We already showed the bounds on $\Pr{(Q)}$ in Lemma~\ref{lem:probability_Q}, especially $\Pr{(Q)} \le \frac{n^r+ \frac{\lambda}{2}}{2n^r + \tau\lambda}$. Putting both together leads to $ E(T \;|\; Q)\Pr{(Q)} \in \Or\left(\frac{n^{3r}}{\lambda n^r + \tau\lambda^2}\right) $. In the case of $\overline{Q}$, the run time it takes these islands to get to the optimum is in $\Or\left(n\log n + \frac{n^r}{\lambda}\right)$. This follows from Equation~\eqref{eq:all_jump} and the fact that we have to make $\Or\left(n\log n\right)$ steps to make $U$ happen in the first place (Lemma \ref{lem:all_allone}). Again by looking at Lemma~\ref{lem:probability_Q}, we see that $\Pr{(\overline{Q})} \le \frac{n^r(2-c) + 2\tau\lambda}{2n^r + 2\tau\lambda}$. If $\tau\lambda \le n^r$, we can observe that $\Pr{(\overline{Q})} \in \Or(\Pr{(Q)})$ and hence $E(T \;|\; \overline{Q})\Pr{(\overline{Q})} \in \Or\left(E(T \;|\; Q)\Pr{(Q)}\right)$. In this case the bound we already derived is enough. The other case is that $n^r < \tau\lambda$.
Therefore, $\Pr{(\overline{Q})} \le \frac{\tau\lambda(4-c)}{2n^r + 2\tau\lambda}$ and thus $ E(T \;|\; \overline{Q})\Pr{(\overline{Q})} \in \Or\left(\frac{\tau n^r + \tau\lambda n\log n}{n^r + \tau\lambda}\right) \;\in\; \Or\left(\frac{\tau n^r}{n^r + \tau\lambda}\right) $. The last step holds because $\lambda$ is in $\Or\left(\frac{n^{r-1}}{\log n}\right)$. Finally we add up the three results to obtain \begin{align*} E(T) &= E{(T\;|\;\overline{U})}\Pr{(\overline{U})} + E(T \;|\; Q)\Pr{(Q)} + E(T \;|\; \overline{Q})\Pr{(\overline{Q})}\\ &\in \Or\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2} + \frac{n^{2r+1}\log n}{\tau\lambda}\right). \end{align*} Recall that this holds because $Q$ and $\overline{Q}$ were considered under the condition of $U$.\qed \end{proof} \subsection{Proof of Lemma \ref{lem:ring_constant_valleys_until_allone}} \begin{lemma*} If $b\ge 1+\frac{r}{\epsilon}$ is constant and $\lambda \in \Or\left(n^{r-1-\epsilon}\right)$, where $\epsilon>0$ is a constant, it holds that the expected run time to optimize \Fork{n}{r} on any island is $ E(T) \le E{\left(T \;\middle|\; V_b\right)} + \Or\left(n\right) $. \end{lemma*} \begin{proof} We know that $E(T) = E{(T \;|\; \overline{V_b})}\Pr{(\overline{V_b})}+E{\left(T \;\middle|\; V_b\right)}\Pr{\left(V_b\right)}$, so what is left to prove is that $E{(T \;|\; \overline{V_b})}\Pr{(\overline{V_b})} \in \Or(n)$. We use Lemma \ref{lem:ov_bound} to derive that $\Pr{(\overline{V_b})} \le \left(\frac{1}{n^{r-1}}\right)^b\binom{\lambda}{b} \le \left(\frac{\lambda}{n^{r-1}}\right)^b$. For a constant $m>0$, the expected worst case run time when having too many valleys is at most $\frac{mn^{2r}}{\lambda}$ (Lemma \ref{lem:worst_case_runtime}).
Hence, for a constant $d>0$ so that $\lambda \le dn^{r-1-\epsilon}$, \begin{align*} E{\left(T \;\middle|\; \overline{V_b}\right)}\Pr{(\overline{V_b})} & \leq \frac{mn^{2r}}{\lambda}\left(\frac{\lambda}{n^{r-1}}\right)^b \;=\; mn^{2r}\frac{\lambda^{b-1}}{n^{br-b}} \;\le\; md^{b-1} n^{r+1+\epsilon-b\epsilon}. \end{align*} Since $b\ge 1+\frac{r}{\epsilon}$, we get $E{\left(T \;\middle|\; \overline{V_b}\right)}\Pr{(\overline{V_b})}\le md^{b-1} n\;\in\; \Or\left(n\right)$.\qed \end{proof} \subsection{Proof of Lemma \ref{lem:ring_constant_migration}} \begin{lemma*} Let $\epsilon>0$, $b\ge 1+\frac{r}{\epsilon}$ and $c\ge 7rb$ be constants and $\lambda \in \Or\left(n^{r-1-\epsilon}\right)$. When $\frac{1}{\tau}$ denotes the migration probability and $\tau \in \Omega\left(n\log n\right)$, the expected run time to optimize \Fork{n}{r} on any island is $ E(T) \le E{\left(T \;\middle|\; V_b\cap B_c\right)} + \Or\left(\frac{n^r}{\lambda}\right). $ \end{lemma*} \begin{proof} We already know from Lemma \ref{lem:ring_constant_valleys_until_allone} that $E(T) \le E{\left(T \;\middle|\; V_b\right)} + \Or\left(n\right)$. From now on we assume for simplicity that all probabilities and expected values are under the condition of $V_b$. If we can prove that $E{(T \;|\; \overline{B_c})}\Pr{(\overline{B_c})} \in \Or\left(\frac{n^r}{\lambda}\right)$, we are done since $\Or(n) \subseteq \Or\left(\frac{n^r}{\lambda}\right)$. We bound $E{(T \;|\; \overline{B_c})}$ by the worst-case run time of $\Or\left(\frac{n^{2r}}{\lambda}\right)$ steps (Lemma \ref{lem:worst_case_runtime}). Thus, there is a constant $m>0$ so that this worst case run time is $E{(T \;|\; \overline{B_c})} \le \frac{mn^{2r}}{\lambda}$. Next we examine $\Pr{(\overline{B_c})}$. We first look at one island that came up with the valley on its own and sends this individual to its neighboring islands. If it migrates at least once, two more islands adopt the valley.
From there on, to let one more island know the solution, one of the two islands has to migrate. Then this new island becomes the one that has to migrate and so on. This way there will be a cluster of adjacent islands on the Ring where each has the valley as its best solution. With probability $\frac{1}{\tau}$ a migration is made. Because a cluster has only two outermost islands, the number of new islands reached in one migration is $2$. We will use $d>0$ as a constant to express that the run time for all islands to get $1^n$, the optimum or the valley is at most $dn\log n$ (see Lemma~\ref{lem:all_allone}). The expected number of migrations made during that time is $\frac{dn\log n}{\tau}$. After the $dn\log n$ steps we have exactly $c\log n$ valleys, if the $b$ valleys migrate $\frac{c\log n-b}{2b}$ times. The probability that the $b$ found valleys spread $S\ge\frac{c\log n-b}{2b}$ times is \begin{align*} \Pr\left(\overline{B_c}\right)&=\Pr\left(S \ge \frac{c\log n-b}{2b}\right) \;=\; \Pr\left(S \ge \frac{dn\log n}{\tau}+\frac{c\log n-b}{2b}-\frac{dn\log n}{\tau}\right)\\ &= \Pr\left(S \ge \left(1+\frac{c\tau\log n-\tau b}{2bdn\log n}-1\right)\frac{dn\log n}{\tau}\right)\\ &< e^{-\frac{\left(\frac{c\tau\log n-\tau b}{2bdn\log n}-1\right)\left(\frac{dn\log n}{\tau}\right)}{3}} \;=\; e^{-\frac{c\log n-b}{6b}+\frac{dn\log n}{3\tau}}\\ &= e^{-\frac{c\log n}{6b}+\frac{1}{6}+\frac{dn\log n}{3\tau}}.
\end{align*} Since $\tau \in \Omega\left(n\log n\right)$, for $n$ large enough we get $\Pr\left(\overline{B_c}\right) \le e^{-\frac{c\log n}{7b}} \;=\; \frac{1}{n^{\frac{c}{7b}}} \;\le\; \frac{1}{n^r}$ because of $c\ge 7rb$ and therefore, $ E{(T \;|\; \overline{B_c})}\Pr{(\overline{B_c})} \le \frac{mn^{2r}}{\lambda}\left(\frac{1}{n^r}\right) \;=\; \Or\left(\frac{n^r}{\lambda}\right) $.\qed \end{proof} \subsection{Proof of Theorem \ref{theo:ring_fork}} \begin{theorem*} If we have $\tau \in \Omega(n\log n)$, $12r\log n \le \lambda \in \Or\left(n^{r-1-\epsilon}\right)$ and $\lambda^2 \tau \ge 17r^2n^r\log^2 n$, then the expected optimization time for \Fork{n}{r} on a Ring topology is $ E(T) \in \Or\left(\frac{n^r}{\lambda}\right). $ \end{theorem*} \begin{proof} From Lemma \ref{lem:ring_constant_migration} we know that $E(T) \le E{\left(T \;\middle|\; V_b\cap B_c\right)} + \Or\left(\frac{n^r}{\lambda}\right)$, where $b>0$ and $c\ge 7br$ are constant. What is left to show is that $E{\left(T \;\middle|\; V_b\cap B_c\right)} \in \Or\left(\frac{n^r}{\lambda}\right)$. Therefore we can prove the theorem by assuming that we start with $c\log n$ islands having the valley and the rest having $1^n$ as their solution. Of course we have to add $\Or\left(n\log n\right)$ to the bound in the end, which is the expected time to get to that state (Lemma~\ref{lem:all_allone}).\\ \indent We get an upper bound if we assume the worst case that all islands that would generate the valley from now on before the optimum is found already start with the valley, while all others have $1^n$ as solution so far. The probability that this number of islands is at least $r\log n$ is $\left(\frac{1}{2}\right)^{r\log n} = \frac{1}{n^r}$.
From the law of total expectation and Lemma \ref{lem:worst_case_runtime} it follows that we can ignore that case since it does not exceed the bounds we want to prove.\\ \indent For this reason we assume that already $8r\log n$ islands have the valley as their best solution from the beginning on. We will show now that the time that is left for the other islands until they adopt the valley by migration is enough so that at least one island finds the optimum. We get an upper bound if we assume that the $8r\log n$ islands with the valley are distributed evenly such that there are $8r\log n$ many clusters of non-valley islands.\\ \indent We now examine how many islands adopt the valley after $\frac{2rn^r\log n}{\lambda}$ steps. Since we expect $2r\log n$ islands to adopt the valley each $\tau$ steps, the expected value of valley islands after this time is $\frac{4r^2n^r\log^2 n}{\tau\lambda} + 8r\log n \le \frac{\lambda}{4}$ for $n$ large enough. To get this we make use of $\tau\lambda^2 \ge 17r^2n^r\log^2 n$. Using Chernoff bounds, the probability to lose more than double this number is at most $\Pr{\left(X \ge (1+1)\frac{\lambda}{4}\right)} < \frac{1}{e^{\frac{\lambda}{12}}} \le \frac{1}{e^{\frac{12r\log n}{12}}} = \frac{1}{n^r}$. Again by using the law of total expectation and Lemma \ref{lem:worst_case_runtime} we can ignore that case. Therefore we know that, after $\frac{2rn^r\log n}{\lambda}$ steps, we still have at least $\frac{\lambda}{2}$ non-valley islands left. Hence, the number of evaluations that were made during that time is at least $rn^r\log n$. The probability to not find the optimum during that time is at most $ p \le \left(1-\frac{1}{n^r}\right)^{rn^r\log n} \le \left(1-\frac{1}{n^r}\right)^{rn^r\ln n} \le \frac{1}{n^r}.
$ We now use that \begin{align*} E(T) &= E\left(T \;\middle|\; T<\frac{2rn^r\log n}{\lambda}\right)\Pr{\left(T<\frac{2rn^r\log n}{\lambda}\right)}\\ &+E\left(T \;\middle|\; T\ge\frac{2rn^r\log n}{\lambda}\right)\Pr{\left(T\ge\frac{2rn^r\log n}{\lambda}\right)}\\ &\le E\left(T \;\middle|\; T<\frac{2rn^r\log n}{\lambda}\right)\cdot 1 \;+\; \frac{n^{2r}}{\lambda }\cdot \frac{1}{n^r}. \end{align*} We already know that the number of islands during $T<\frac{2rn^r\log n}{\lambda}$ steps is at least $\frac{\lambda}{2}$. Hence, $ E(T) \le E\left(T \;\middle|\; T<\frac{2rn^r\log n}{\lambda}\right) + \frac{n^r}{\lambda} \;\le\; \frac{2n^r}{\lambda} + \frac{n^{r}}{\lambda} \;=\; \frac{3n^r}{\lambda} \in \Or\left(\frac{n^r}{\lambda}\right). $ If we finally add $\Or\left(n\log n\right)$ for the run time until they all found $1^n$ or the valley in the first place, we get $\Or\left(n\log n + \frac{n^r}{\lambda}\right)$. This also matches the bound we still have to add because of Lemma \ref{lem:ring_constant_migration}. The term $n\log n$ is dominated by $\frac{n^r}{\lambda}$ because of the restriction on $\lambda$, which leads to an upper bound of $\Or\left(\frac{n^r}{\lambda}\right)$.\qed \end{proof} \subsection{Run time bounds on \Fork{n}{r}} To get a lower bound on the run time we want to concentrate on the case that all islands come to a state where every island has $1^n$ as its solution. That is why in the next lemma we show how likely it is that the optimum or the valley is generated before that state is reached. \begin{lemma}\label{lem:ov_bound} The probability $p_{ov}$ for a single island to generate the optimum or the valley during its way to $1^n$ while optimizing \Fork{k}{r} with \oneone$_{1/n}$ is $p_{ov} \in \Or\left(\frac{1}{k^{r-1}}\right)\;\;\text{ for }k\le n$. \end{lemma} Next we give a lemma that we use for the upper and the lower bound, which considers the event that an island finds the valley and broadcasts it, thus drowning diversity.
\begin{lemma}\label{lem:probability_Q} Under the condition of at least one of $\lambda \in \Or\left(n^{r-1}\right)$ islands having $1^n$ and all others the valley as solution, the probability for the event $Q$ that one island will find the valley and a migration will be made before the optimum is found by any island is $\frac{c}{2} \frac{n^r}{n^r + \tau\lambda} \;\le\; \Pr{(Q)} \;\le\; \frac{n^r + \frac{\lambda}{2}}{2n^r + \tau\lambda}$ for a constant $0<c<1$. \end{lemma} We continue with the lower bound for the complete topology, followed by the upper bound. The restriction for $\lambda$ to be in $\Or\left(\frac{n^{r-1}}{\log n}\right)$ in the next theorems is useful since the expected number of derived optima during the first $\Or(n\log n)$ steps is constant if we choose $\lambda \in \Theta\left(n^{r-1}\right)$. This follows from Lemma \ref{lem:all_allone} and Lemma~\ref{lem:ov_bound}. \begin{theorem}\label{theo:fork_comlete_lower_bound} The expected run time of $2r\log n \le \lambda \in \Or\left(\frac{n^{r-1}}{\log n}\right)$ islands optimizing \Fork{n}{r} on a complete graph is $E(T) \in \Omega\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2}\right)$. \end{theorem} \begin{theorem}\label{theo:fork_comlete_upper_bound} The expected run time of $2\log n \le \lambda \in \Or\left(\frac{n^{r-1}}{\log n}\right)$ islands optimizing \Fork{n}{r} on a complete graph is $ E(T) \in \Or\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2} + \frac{n^{2r+1}\log n}{\tau\lambda}\right) $. \end{theorem} \begin{corollary}\label{cor:RunTimeComplete} If we choose $\tau \in \Omega(n\log n)$ and $2r\log n \le \lambda \in \Or(n^{r-1-\epsilon})$ for $\epsilon > 0$ constant, then the optimization time for \Fork{n}{r} on a complete graph is $E(T) \in \Theta\left(\frac{n^{3r} + \tau\lambda n^r}{\lambda n^r + \tau\lambda^2}\right)$. \end{corollary} \begin{proof} We have already shown the lower bound in Theorem \ref{theo:fork_comlete_lower_bound}.
The second term of the upper bound in Theorem \ref{theo:fork_comlete_upper_bound} is dominated by the rest if $\tau \in \Omega(n\log n)$. Therefore it matches the lower bound.\qed \end{proof} \begin{corollary} The number of fitness evaluations to spread the optimum to all islands is in $\Theta\left(n^{1.5r}\right)$ for the best choice of parameters $\lambda$ and $\tau$. \end{corollary} \begin{proof} This can be shown by recalling that the number of evaluations to get there is in $\Omega\left(\frac{n^{3r} + \tau\lambda n^r}{n^r + \tau\lambda} + \tau\lambda\right)$ and in $\Or\left(\frac{n^{3r} + \tau\lambda n^r}{n^r + \tau\lambda} + \frac{n^{2r+1}\log n}{\tau} + \tau\lambda\right)$ (Theorem \ref{theo:fork_comlete_lower_bound} and \ref{theo:fork_comlete_upper_bound}). There is no way to choose $\tau\lambda$ to get below $\Theta\left(n^{1.5 r}\right)$.\qed \end{proof} \subsection{The \oneone} For the \oneone we can use Lemma~\ref{lem:fork_half} to see that it will be trapped with probability $1/2$, which leads to the following two theorems. \begin{theorem}\label{theo:fork_general} The expected optimization time of the \oneone on \Fork{k}{r} with bit flip probability $\frac{1}{n}$ and $n\ge k$ is $E\left(T(k)\right) \in \Theta\left(n^{2r}\right)$. \end{theorem} \begin{theorem}\label{theo:single_fork_grouped} The expected optimization time of the \oneone on \LO with $k$-block \Fork{n}{r} is $E(T) \in \Theta\left(\frac{1}{k}n^{2r+1}\right)$. \end{theorem} \subsection{Composite Fitness Function} Here we define composite fitness functions formally. Let $f = (f^n)_{n \in \natnum}$ be a family of fitness functions such that, for all $n \in \natnum$, $f^n: \{0,1\}^n \rightarrow \realnum_{\geq 0}$. For this construction, we suppose that $1^n$ is the unique optimum (if a different bit string is the unique optimum, we apply a corresponding bit mask to make $1^n$ the unique optimum, but will not mention it any further). 
With \emph{LeadingOnes with $k$-block $f$} we denote the fitness function which divides the input $\x$ into bit strings of length $k$ ($(\x_0 \x_1 \dots \x_{k-1}), (\x_k \x_{k+1} \dots \x_{2k-1}), \dots$). The blocks contribute to the total fitness using fitness function $f^k$, but only if all previous blocks have reached the unique optimum $1^k$. More formally, \vspace*{-0.3cm} \begin{align*} LO_k^f(\x) = \sum_{i=0}^{\frac{n}{k}-1} f^k(\x_{ik}\x_{ik+1} \dots\x_{ik+k-1}) \cdot \prod_{j=0}^{ik-1} \x_{j}. \end{align*} Similarly, \OM with $k$-block $f$ is defined as \begin{align*} OM_k^f(\x) = \sum_{i=0}^{\frac{n}{k}-1} f^k(\x_{ik}\x_{ik+1} \dots\x_{ik+k-1}). \end{align*} In this paper we only analyze run times for the \LO version. Note that it equals the $LOB_b$ function in the work of Jansen and Wiegand \cite{Jansen2003}. In general, other fitness functions can also be used as the outer function of this nesting method, as exemplified in our definition of \OM with $k$-block. \subsection{Run time of \LO with $k$-block $f$} In the following we develop a general approach for proving run times on \LO with $k$-block $f$. We make use of the observation that there is always exactly one block that contributes to the overall fitness besides all previous blocks that are already optimal. \begin{theorem} \label{theo:lo_exact} Let $T$ be the run time of a \oneone algorithm on $LO_k^f$ to optimize a bit string of length $n$. We have \begin{align*} E(T) = E\left(T^n_k\right) \frac{\left(\frac{n}{n-1}\right)^n-1}{\left(\frac{n}{n-1}\right)^k-1} \in \Theta\left(E\left(T^n_k\right)\frac{n}{k}\right) \end{align*} where $T^n_k$ is the run time to optimize $f^k$ with bit flip probability~$\frac{1}{n}$. \end{theorem} With this theorem as a tool we can now derive exact bounds on functions like \LO. The following equation was already shown by Sudholt \cite[Corollary 1]{Sudholt2010}.
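To make the block structure of $LO_k^f$ and $OM_k^f$ above concrete, the following small sketch evaluates both (illustrative Python, not part of the formal analysis; as an assumption for the example, OneMax is used as the inner fitness $f^k$, but any $f^k$ with unique optimum $1^k$ behaves the same way):

```python
def f_k(block):
    # example inner fitness f^k: OneMax counts the ones in the block
    return sum(block)

def lo_k(x, k):
    # LeadingOnes with k-block f: block i contributes f^k only if all
    # bits before block i are ones (the prefix product in the definition)
    total = 0
    for i in range(0, len(x), k):
        block = x[i:i + k]
        total += f_k(block)
        if 0 in block:          # prefix product is 0 for all later blocks
            break
    return total

def om_k(x, k):
    # OneMax with k-block f: every block contributes unconditionally
    return sum(f_k(x[i:i + k]) for i in range(0, len(x), k))
```

For instance, with $k=2$ the input $110111$ gets $LO_k^f$-value $3$ (the first block contributes $2$, the second contributes $1$, and the third is masked by the zero in the second block), while its $OM_k^f$-value is $5$.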
Here we use the approach of composite functions, which generalizes to many other fitness functions, but note that the underlying proof idea is the same. \begin{corollary}\label{cor:lo_exact_bound} The expected run time of a \oneone on \LO is exactly \begin{align*} E(T_{LO}(n)) = \frac{\left(\frac{n}{n-1}\right)^{n-1}+\frac{1}{n}-1}{2}n^2. \end{align*} \end{corollary} \subsection{Run time bounds on \Fork{n}{r}} The next lemma will help us to get to the final bounds. \begin{lemma}\label{lem:all_allone} Let $\lambda$ be the number of islands running the \oneone optimizing \Fork{n}{r}. Let $T_{all}^\lambda$ denote the run time for all islands to get to the optimum, valley or $1^n$ as their best solution respectively, regardless of the migration topology and policy. If $\lambda$ is polynomial in $n$, $E\left(T_{all}^\lambda\right) \in \Or\left(n\log n\right)$. \end{lemma} \begin{theorem}\label{theo:isolated_fork} For $\lambda \le n^r$ isolated islands the expected time to optimize \Fork{n}{r} is $E\left(T\right) \in \Or\left(n\log(n)+\frac{n^{2r}}{\lambda 2^\lambda} + \frac{n^r}{\lambda}\right)$. \end{theorem} \begin{corollary} If we use $r\log n \le \lambda \le n^r$ islands, $E\left(T\right) \in \Or\left(n\log(n)+\frac{n^r}{\lambda}\right)$. \end{corollary} \begin{corollary} The expected number of evaluations until all $\lambda \le n^r$ islands get the optimum is in $\Omega\left(\lambda n^{2r}\log \lambda\right)$. \end{corollary} \begin{proof} To achieve this, all islands have to find the optimum. Observe that we expect half of the islands to get trapped. The probability that there are more than half as many is, by Chernoff bounds, asymptotically at least constant.
Therefore we expect to need at least $\Omega\left(n^{2r}\log \lambda\right)$ rounds, which results in the given number of evaluations.\qed \end{proof} \begin{theorem}\label{theo:isolated_blocked_fork} For $k \le \frac{n}{\log\lambda}$, the expected run time of $\lambda$ islands optimizing \LO with $k$-block \Fork{n}{r} by running the \oneone can be bounded by $E(T) \in \Omega\left(\frac{n^{2r}}{\lambda}\right)$. \end{theorem} \begin{proof} Let $V$ denote the event that every island gets trapped in a valley at least once during a run. From Lemma \ref{lem:fork_half} one can derive that $\Pr{(V)} = \left(1-\frac{1}{2^\frac{n}{k}}\right)^\lambda$. Again we apply the law of total expectation and use the lower bound $\frac{E}{2\lambda}$ on the expected run time of $\lambda$ islands until one of them makes a jump, where $E$ is the expected number of steps for a single island (Equation \eqref{eq:all_jump}). \begin{align*} E(T) &= E\left(T \; \middle| \; V\right)\Pr{(V)} + E\left(T \; \middle| \; \overline{V}\right)\Pr{(\overline{V})} \;\ge\; \frac{n^{2r}}{2\lambda}\Pr{(V)} \;=\; \frac{n^{2r}}{2\lambda}\left(1-\frac{1}{2^\frac{n}{k}}\right)^\lambda \end{align*} Using that $k \le \frac{n}{\log(\lambda)}$ finally gives $E(T)\ge \frac{n^{2r}}{2\lambda}\left(1-\frac{1}{\lambda}\right)^\lambda \;\in\; \Omega\left(\frac{n^{2r}}{\lambda}\right)$.\qed \end{proof} \section{Introduction} \label{sec:intro} \input{intro} \section{Algorithms} \label{sec:islands} \input{islands} \section{Fitness Functions} \label{sec:groups} \input{groups} \section{No Migration} \label{sec:fork} \input{fork} \subsection{Independent Runs} \label{sec:isolated} \input{isolated} \section{Complete Topology} \label{sec:complete} \input{complete} \section{Ring Topology} \label{sec:ring} \input{ring} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \clearpage \bibliographystyle{splncs04} \subsection{Run time bounds on \Fork{n}{r}} \begin{lemma}\label{lem:ring_constant_valleys_until_allone} If $b\ge
1+\frac{r}{\epsilon}$ is constant and $\lambda \in \Or\left(n^{r-1-\epsilon}\right)$, where $\epsilon>0$ is a constant, it holds that the expected run time to optimize \Fork{n}{r} on any island is $ E(T) \le E{\left(T \;\middle|\; V_b\right)} + \Or\left(n\right) $. \end{lemma} Another event that could possibly lead to a high run time is when one of the found valleys is shared too quickly with all other islands. As before, we will show that this case is unlikely enough to not dominate the run time. \begin{lemma}\label{lem:ring_constant_migration} Let $\epsilon>0$, $b\ge 1+\frac{r}{\epsilon}$ and $c\ge 7rb$ be constants and $\lambda \in \Or\left(n^{r-1-\epsilon}\right)$. When $\frac{1}{\tau}$ denotes the migration probability and $\tau \in \Omega\left(n\log n\right)$, the expected run time to optimize \Fork{n}{r} on any island is $ E(T) \le E{\left(T \;\middle|\; V_b\cap B_c\right)} + \Or\left(\frac{n^r}{\lambda}\right). $ \end{lemma} From the previous lemmas we can derive that if we choose $\tau$ large enough and $\lambda$ small enough, we get an upper bound on the expected run time by just looking at the case of $B_c\cap V_b$ and adding $\Or\left(\frac{n^r}{\lambda}\right)$. \begin{theorem}\label{theo:ring_fork} For $\tau \in \Omega(n\log n)$, $12r\log n \le \lambda \in \Or\left(n^{r-1-\epsilon}\right)$ and $\lambda^2 \tau \ge 17r^2n^r\log^2 n$, the expected optimization time for \Fork{n}{r} on a Ring topology is $ E(T) \in \Or\left(\frac{n^r}{\lambda}\right). $ \end{theorem} The next corollary sums up our findings and shows performance for the optimal choice of parameters. \begin{corollary}\label{cor:RunTimeRing} The expected number of fitness evaluations to have all islands at the optimum is in $\Or\left(n^r\log^2 n\right)$ if $\tau$ and $\lambda$ are set appropriately. \end{corollary} \begin{proof} If we use the results from Theorem \ref{theo:ring_fork}, we see that there are $\Or\left(n^r + \tau\lambda^2\right)$ evaluations until all islands are informed.
If we consider that $\tau\lambda^2 \in \Or\left(n^r\log^2 n\right)$ we get the bound.\qed \end{proof}
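As an illustrative numerical aside (not part of the proofs above), the geometric series for $\Pr(Q)$ in Lemma~\ref{lem:probability_Q} can be summed in closed form and compared against the two bounds derived there; the parameter values below are arbitrary examples, and $c$ is instantiated as $\left(1-\frac{1}{n^r}\right)^\lambda$ exactly as in the lower-bound proof:

```python
# Closed form of Pr(Q) = (1/2) * sum_{i>=1} (1-1/tau)^(i-1) * (1/tau) * q^i
# with q = (1 - 1/n^r)^lambda; the proofs show
# (c/2) * n^r/(n^r + tau*lam)  <=  Pr(Q)  <=  (n^r + lam/2)/(2n^r + tau*lam).
def pr_q(n, r, tau, lam):
    q = (1.0 - 1.0 / n**r) ** lam
    return q / (2.0 * tau * (1.0 - (1.0 - 1.0 / tau) * q))

n, r, tau, lam = 50, 2, 1000, 10      # arbitrary example parameters
c = (1.0 - 1.0 / n**r) ** lam
lower = (c / 2.0) * n**r / (n**r + tau * lam)
upper = (n**r + lam / 2.0) / (2.0 * n**r + tau * lam)
p = pr_q(n, r, tau, lam)
assert lower <= p <= upper            # the closed form respects both bounds
```

For these values the closed form lies just above the lower bound, which matches the intuition that the lower bound is nearly tight when all $\lambda$ islands keep searching.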
\section{Introduction} \blfootnote{ \hspace{-0.65cm} * Equal contribution } \blfootnote{ \hspace{-0.65cm} This work is licensed under a Creative Commons Attribution 4.0 International License. License details: \url{http://creativecommons.org/licenses/by/4.0/} } The problem of obtaining a semantic embedding for a sentence that ensures that related sentences are closer and unrelated sentences are farther apart lies at the core of understanding languages. This is relevant for a wide variety of machine reading comprehension and related tasks such as sentiment analysis. Towards this problem, we propose a supervised method that uses a sequential encoder-decoder framework for paraphrase generation. The task of generating paraphrases is closely related to the task of obtaining semantic sentence embeddings. In our approach, we aim to ensure that the generated paraphrase embedding is close to the true corresponding sentence and far from unrelated sentences. The embeddings thus obtained yield state-of-the-art results for the paraphrase generation task. Our model consists of a sequential encoder-decoder that is further trained using a pairwise discriminator. The encoder-decoder architecture has been widely used for machine translation and machine comprehension tasks. In general, such a model incurs a `local' loss at each recurrent unit cell: it only ensures that a particular word token is present at an appropriate place. This, however, does not imply that the whole sentence is correctly generated. To ensure that the whole sentence is correctly encoded, we make further use of a pair-wise discriminator that encodes the whole sentence and obtains an embedding for it. We further ensure that this embedding is close to the desired ground-truth embedding while being far from other (sentences in the corpus) embeddings.
This model thus provides a `global' loss that ensures that the sentence embedding as a whole is close to other semantically related sentence embeddings. This is illustrated in Figure~\ref{fig:intro_fig}. We further evaluate the validity of the sentence embeddings by using them for the task of sentiment analysis. We observe that the proposed sentence embeddings result in state-of-the-art performance for both these tasks. Our contributions are: a) We propose a model for obtaining sentence embeddings for solving the paraphrase generation task using a pair-wise discriminator loss added to an encoder-decoder network. b) We show that these embeddings can also be used for the sentiment analysis task. c) We validate the model using standard datasets with a detailed comparison with state-of-the-art methods and also ensure that the results are statistically significant. \begin{figure*}[ht] \centering \includegraphics[width=0.95\columnwidth]{final_intro.png} \caption{Pairwise Discriminator based Encoder-Decoder for Paraphrase Generation: This is the basic outline of our model which consists of an LSTM encoder, decoder and discriminator. Here the encoders share the weights. The discriminator generates discriminative embeddings for the Ground Truth-Generated paraphrase pair with the help of `global' loss. Our model is jointly trained with the help of a `local' and `global' loss which we describe in section~\ref{methods}.} \label{fig:intro_fig} \end{figure*} \section{Related Work} Given the flexibility and diversity of natural language, it has always been a challenging task to represent text efficiently. Several hypotheses have been proposed for representing it.~\cite{harris_TF1954,Firth_SLA1957,Sahlgren_IJDS2008} proposed the distributional hypothesis to represent words, i.e., that words which occur in the same context have similar meanings.
One popular hypothesis is the bag-of-words (BOW) or Vector Space Model~\cite{Salton_ACM1975}, in which a text (such as a sentence or a document) is represented as the bag (multiset) of its words.~\cite{lin_ACM2001dirt} proposed an extended distributional hypothesis and~\cite{Deerwester_JASIS1990,Turney_ACM2003} proposed a latent relation hypothesis, in which a pair of words that co-occur in similar patterns tend to have a similar semantic relation. Word2Vec~\cite{Mikolov_ARXIV2013,mikolov_NIPS2013,Goldberg_ARXIV2014} is also a popular method for representing every unique word in the corpus in a vector space. Here, the embedding of every word is predicted based on its context (surrounding words). NLP researchers have also proposed phrase-level and sentence-level representations~\cite{mitchell_WOL2010,zanzotto_ACL2010,yessenalina_EMNLP2011,grefenstette_IWCS2013,mikolov_NIPS2013}.~\cite{socher_ICML2011,Kim_ARXIV2014,lin_EMNLP2015,Yin_ARXIV2015,Kalchbrenner_ARXIV2014} have analyzed several approaches to represent sentences and phrases: by a weighted average of all the words in the sentence, by combining the word vectors in an order given by a parse tree of a sentence, and by using matrix-vector operations. The major issue with BOW models and weighted averaging of word vectors is the loss of the semantic meaning of the words; the parse-tree approaches can only work for sentences because of their dependence on a sentence parsing mechanism.~\cite{Socher_EMNLP2013,Le_ICML2014} proposed a method to obtain a vector representation for paragraphs and use it for some text-understanding problems like sentiment analysis and information retrieval.
Many language models have been proposed for obtaining better text embeddings in machine translation~\cite{Sutskever_NIPS2014,cho_EMLNLP2014,vinyals_arXiv2015,wu_CoRR2016}, question generation~\cite{Du_ArXiv2017}, dialogue generation~\cite{shang_ICNLP2015,li_ARXIV2016,li_arxiv2017adversarial}, document summarization~\cite{rush_EMNLP2015}, text generation~\cite{zhang_arxiv2017adversarial,Hu_ICML2017toward,Yu_AAAI2017seqgan,Guo_arxiv2017long,liang_CORR2017recurrent,Reed_CVPR2016} and question answering~\cite{yin_IJCAI2016,miao_ICML2016}. For the paraphrase generation task,~\cite{Prakash_Arxiv2016neural} generated paraphrases using a stacked residual LSTM based network.~\cite{hasan_ClinicalNLP2016} proposed an encoder-decoder framework for this task.~\cite{gupta_AAAI2017} explored a VAE approach to generate paraphrase sentences using recurrent neural networks.~\cite{li_arXiv2017Paraphrase} used reinforcement learning for the paraphrase generation task. \label{sec:lit_surv} \section{Method}\label{methods} In this paper, we propose a text representation method for sentences based on an encoder-decoder framework using a pairwise discriminator for paraphrase generation, and we then fine-tune these embeddings for the sentiment analysis task. Our model is an extension of the \textit{seq2seq} model~\cite{Sutskever_NIPS2014} for learning better text embeddings. \subsection{Overview} \noindent \textbf{Task:} In the paraphrase generation problem, given an input sequence of words $X = [x_1,\dots, x_L]$, we need to generate another output sequence of words $Y = [q_1,\dots, q_T ]$ that has the same meaning as $X$. Here $L$ and $T$ are not fixed constants.
Our training data consists of $M$ pairs of paraphrases $ \{(X_i,Y_i)\}_{i=1}^M$ where $X_i$ and $Y_i$ are paraphrases of each other.\\ \\ Our method consists of three modules as illustrated in Figure~\ref{fig:main_figure}: the first is a Text Encoder which consists of LSTM layers, the second is an LSTM-based Text Decoder, and the last is an LSTM-based Discriminator module. These are shown in parts 1, 2 and 3 of Figure~\ref{fig:main_figure}, respectively. Our network with all three parts is trained end-to-end. The weight parameters of the encoder and discriminator modules are shared. Instead of using a separate discriminator, we share its weights with the encoder so that it learns embeddings based on the `global' as well as the `local' loss. After training, at test time we use the encoder to generate the sentence embedding and pass it to the decoder for generating paraphrases. These text embeddings can be further used for other NLP tasks such as sentiment analysis. \begin{figure*}[ht] \centering \includegraphics[width=1.0\columnwidth]{final_model_final.png} \caption{This is an overview of our model. It consists of 3 parts: 1) an LSTM-based Encoder module which encodes a given sentence, 2) an LSTM-based Decoder module which generates natural language paraphrases from the encoded embeddings, and 3) an LSTM-based pairwise Discriminator module which shares its weights with the Encoder module; the whole network is trained with the local and global losses.} \label{fig:main_figure} \end{figure*} \subsection{Encoder-LSTM} We use an LSTM-based encoder to obtain a representation for the input question $X_i$, which is represented as a matrix in which every row corresponds to the vector representation of each word. We use a one-hot vector representation for every word and obtain a word embedding $c_i$ for each word using a Temporal CNN~\cite{zhang_NIPS2015character,Palangi_TASLP2016} module that we parameterize through a function $G(X_i,W_e)$, where $W_e$ are the weights of the temporal CNN.
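As a simplified sketch of this step (the paper parameterizes the embedding with a temporal CNN $G(X_i, W_e)$; here a plain lookup matrix stands in for it, and all names and shapes are illustrative assumptions), multiplying a one-hot vector by an embedding matrix reduces to a row lookup:

```python
import numpy as np

def one_hot(index, vocab_size):
    """One-hot row vector for a word index."""
    v = np.zeros(vocab_size)
    v[index] = 1.0
    return v

def embed_sentence(word_indices, W_e):
    """Map each word of a sentence to its embedding c_i; a one-hot
    vector times the embedding matrix W_e is just a row lookup."""
    vocab_size, _ = W_e.shape
    return np.stack([one_hot(i, vocab_size) @ W_e for i in word_indices])

rng = np.random.default_rng(0)
W_e = rng.normal(size=(10, 4))      # toy vocabulary of 10 words, embedding dim 4
C = embed_sentence([2, 5, 7], W_e)  # a sentence of 3 words
assert C.shape == (3, 4)
assert np.allclose(C[0], W_e[2])    # one-hot product equals a row lookup
```

The resulting matrix plays the role of the per-word embedding input $C_i$ that is fed to the encoder LSTM below.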
This word embedding is fed to an LSTM-based encoder which produces an encoding of the sentence. We use an LSTM~\cite{hochreiter_NC1997} due to its capability of capturing long-term dependencies~\cite{Palangi_TASLP2016}. As the words are propagated through the network, the network collects more and more semantic information about the sentence. When the network reaches the last word (the $L^{th}$ word), the hidden state $h_{L}$ of the network provides a semantic representation of the whole sentence conditioned on all the input words $(x_1,\dots,x_{L})$. The question sentence encoding feature $f_i$ is obtained after passing through the LSTM, which is parameterized using the function $F(C_i, W_l)$, where $W_l$ are the weights of the LSTM. This is illustrated in part 1 of Figure~\ref{fig:main_figure}. \subsection{Decoder-LSTM} The role of the decoder is to predict the probability of a whole sentence, given the embedding of the input sentence ($f_i$). An RNN provides a natural way to condition on the previous state through a fixed-length hidden vector. The conditional probability of a sentence token at a particular time step is modeled using an LSTM as used in machine translation~\cite{Sutskever_NIPS2014}. At time step $t$, the conditional probability is denoted by $P( q_{t} | {f_i},q_0,..,q_{t-1})= P( q_{t} | {f_i},h_{t})$, where $h_{t}$ is the hidden state of the LSTM cell at time step $t$. $h_{t}$ is conditioned on all the previously generated words $(q_0,q_1..,q_{t-1})$, and $q_{t}$ is the next generated word. The generated question sentence feature $\hat{p_d} = \{\hat{p_1},\dots, \hat{p_T}\}$ is obtained by the decoder LSTM, which is parameterized using the function $D(f_i, W_{dl})$, where $W_{dl}$ are the weights of the decoder LSTM. The word with maximum probability at step $k$ of the decoder LSTM is fed as input to the LSTM cell at step $k+1$, as shown in Figure~\ref{fig:main_figure}. At $t=-1$, we feed the embedding of the input sentence obtained from the encoder module.
$\hat{Y_i}=\{\hat{q_0},\hat{q_1},...,\hat{q}_{T+1}\}$ are the predicted question tokens for the input $X_i$. Here, $\hat{q_0}$ and $\hat{q}_{T+1}$ are the special START and STOP tokens, respectively. The predicted question token ($\hat{q_t}$) is obtained by applying Softmax to the score vector $\hat{p_t}$. The question tokens at different time steps are given by the following equations, where LSTM refers to the standard LSTM cell equations: \begin{equation} \begin{split} & d_{-1}=\mbox{Encoder}(f_{i}) \\ & h_0=\mbox{LSTM}(d_{-1})\\ & d_t=W_d*q_t, \forall t\in \{0,1,2,\dots,T-1\} \\ & {h_{t+1}}=\mbox{LSTM}(d_t,h_{t}), \forall t\in \{0,1,2,\dots,T-1\}\\ & \hat{p}_{t+1} = W_v * h_{t+1} \\ & \hat{q}_{t+1} = \mbox{Softmax}(\hat{p}_{t+1})\\ & \text{Loss}_{t+1}=\text{loss}(\hat{q}_{t+1},q_{t+1}) \end{split} \end{equation} where $\hat{q}_{t+1}$ is the predicted question token and $q_{t+1}$ is the ground truth one. In order to capture local label information, we use the cross-entropy loss, which is given by the following equation: \begin{equation} L_{local}=\frac{-1}{T}\sum_{t=1}^{T} {q_{t} \log \text{P}(\hat{q_{t}}|{q_0,..q_{t-1}})} \end{equation} Here $T$ is the total number of sentence tokens, $\text{P}(\hat{q_{t}}|{q_0,..q_{t-1}})$ is the predicted probability of the sentence token, and $q_t$ is the ground truth token. \subsection{Discriminative-LSTM} The aim of the Discriminative-LSTM is to make the predicted sentence embedding $f_i^p$ and the ground truth sentence embedding $f_i^g$ indistinguishable, as shown in Figure~\ref{fig:main_figure}. Here we pass $\hat{p_d}$ to the shared encoder-LSTM to obtain $f_i^p$, and also pass the ground truth sentence to the shared encoder-LSTM to obtain $f_i^g$. The discriminator module estimates a loss function between the generated and ground truth paraphrases.
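The per-step prediction and the local cross-entropy loss above can be sketched as follows (a minimal NumPy illustration under our assumptions: the LSTM is abstracted away and `logits` stands in for the score vectors $\hat{p}_t$; this is not the paper's implementation):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def local_loss(logits, targets):
    """Cross-entropy L_local = -(1/T) * sum_t log P(q_t | ...), where
    logits[t] is the decoder score vector p_t and targets[t] the
    ground-truth token id (one-hot targets reduce to this form)."""
    T = len(targets)
    return -sum(np.log(softmax(logits[t])[targets[t]]) for t in range(T)) / T

rng = np.random.default_rng(1)
T, V = 4, 6                        # toy sentence length and vocabulary size
logits = rng.normal(size=(T, V))
targets = [3, 1, 0, 5]
loss = local_loss(logits, targets)
assert loss > 0.0                  # cross-entropy is non-negative

# a confident, correct prediction at every step drives the loss to ~0
sharp = np.full((T, V), -100.0)
for t, q in enumerate(targets):
    sharp[t, q] = 100.0
assert local_loss(sharp, targets) < 1e-6
```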
Typically, the discriminator is a binary classifier loss, but here we use a global loss, similar to~\cite{Reed_CVPR2016}, which acts on the last hidden state of the recurrent neural network (LSTM). The main objective of this loss is to bring the generated paraphrase embeddings closer to their ground truth paraphrase embeddings and farther from the other ground truth paraphrase embeddings (other sentences in the batch). Here our discriminator network ensures that the generated embedding can reproduce better paraphrases. We share the discriminator parameters with the encoder network to enforce learning of embeddings that minimize not only the local (cross-entropy) loss but also the global loss. Suppose the predicted embeddings of a batch are $e_p=[f_1^p,f_2^p,.. f_N^p]^T$, where $f_i^p$ is the sentence embedding of the $i^{th}$ sentence of the batch. Similarly the ground truth batch embeddings are $e_g=[f_1^g,f_2^g,.. f_N^g]^T$, where $N$ is the batch size and $f_i^p, f_i^g \in \mathds{R}^d$. The objective of the global loss is to maximize the similarity of the predicted embedding $f_i^p$ with the ground truth embedding $f_i^g$ of the $i^{th}$ sentence and to minimize the similarity of the $i^{th}$ predicted embedding, $f_i^p$, with the $j^{th}$ ground truth embedding, $f_j^g$ ($j\neq i$), in the batch. The loss is defined as \begin{equation} L_{global}= \sum_{i=1}^{N}\sum_{j=1, j\neq i}^{N} \max(0,((f_i^p\cdot f_j^g)- (f_i^p \cdot f_i^g)+1)) \end{equation} where the $j=i$ term is excluded as it is identically constant. The gradient of this loss, over the pairs for which the hinge is active, is given by \begin{equation} \bigg(\frac{dL}{de_p}\bigg)_i= \sum_{j=1, j\neq i}^{N} (f_j^g-f_i^g) \end{equation} \begin{equation} \bigg(\frac{dL}{de_g}\bigg)_i= \sum_{j=1, j\neq i}^{N} (f_j^p-f_i^p) \end{equation} \subsection{Cost function} \noindent Our objective is to minimize the total loss, that is, the sum of the local loss and the global loss over all training examples.
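A NumPy sketch of the pairwise global loss defined above and its gradient with respect to the predicted embeddings, under our reading that the $j=i$ term (identically 1) is excluded and the gradient sums only over pairs with an active hinge; the analytic gradient is checked against finite differences:

```python
import numpy as np

def global_loss(e_p, e_g):
    """L_global = sum_{i, j != i} max(0, f_i^p . f_j^g - f_i^p . f_i^g + 1)."""
    S = e_p @ e_g.T                            # S[i, j] = f_i^p . f_j^g
    margins = np.maximum(0.0, S - np.diag(S)[:, None] + 1.0)
    np.fill_diagonal(margins, 0.0)             # drop the constant i == j term
    return margins.sum()

def grad_e_p(e_p, e_g):
    """Analytic gradient w.r.t. e_p: the hinge indicator gates each
    (f_j^g - f_i^g) term of the closed-form expression."""
    S = e_p @ e_g.T
    active = (S - np.diag(S)[:, None] + 1.0) > 0
    np.fill_diagonal(active, False)
    g = np.zeros_like(e_p)
    for i in range(len(e_p)):
        for j in np.nonzero(active[i])[0]:
            g[i] += e_g[j] - e_g[i]
    return g

rng = np.random.default_rng(2)
N, d = 5, 8                                    # toy batch size and embedding dim
e_p, e_g = rng.normal(size=(N, d)), rng.normal(size=(N, d))

# central finite-difference check of the analytic gradient
num, eps = np.zeros_like(e_p), 1e-6
for i in range(N):
    for k in range(d):
        ep1, ep2 = e_p.copy(), e_p.copy()
        ep1[i, k] += eps
        ep2[i, k] -= eps
        num[i, k] = (global_loss(ep1, e_g) - global_loss(ep2, e_g)) / (2 * eps)
assert np.allclose(grad_e_p(e_p, e_g), num, atol=1e-4)
```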
The total loss is: \begin{equation} L_{total}= \frac{1}{M} \sum^{M}_{i=1} (L_{local} + L_{global}) \end{equation} where $M$ is the total number of examples, $L_{local}$ is the cross-entropy loss and $L_{global}$ is the global loss. \begin{table}[ht] \centering \begin{tabular}{|l|l|l|c|c|c|c|c|} \hline \bf Dataset &\bf Model & \bf BLEU1 & \bf BLEU2 & \bf BLEU3 & \bf BLEU4 &\bf ROUGE & \bf METEOR \\ \hline &ED-L (Baseline) &33.7 & 22.3 &18.0 & 12.1 &35.3 &14.3 \\ &EDD-G &40.7 &28.3 &21.1 &16.1 &39.7 &19.6 \\ 50K&EDD-LG &40.9 &28.6 &21.3 &16.1 &40.2 &19.8 \\ &EDD-LG(shared) &\textbf{41.1} &\textbf{29.0} &\textbf{21.5} &\textbf{16.5} &\textbf{40.6} &\textbf{20.1}\\ \hline &ED-L (Baseline) &35.1 & 25.4 &19.6 & 14.4 &37.4 &15.4 \\ &EDD-G &42.1 & 29.4 &21.6 & 16.4 &41.4 &20.4 \\ 100K&EDD-LG &44.2 &31.6 &22.1 &17.9 & 43.6 &22.1 \\ &EDD-LG(shared) & \textbf{45.7}& \textbf{32.4} &\textbf{23.8} &\textbf{17.9} &\textbf{44.9} &\textbf{23.1}\\ \hline \end{tabular} \caption{\label{score_tab_1}Analysis of variants of our proposed method on the Quora dataset, as mentioned in section~\ref{ablation_analysis}. Here L and G refer to the local and global losses, and `shared' denotes parameter sharing between the discriminator and encoder modules. As we can see, our proposed method EDD-LG(shared) clearly outperforms the other ablations on all metrics; a detailed analysis is present in section~\ref{ablation_analysis}.} \end{table} \section{Experiments} We perform experiments to better understand the behavior of our proposed embeddings. To achieve this, we benchmark Encoder Decoder Discriminator Local-Global (shared) (EDD-LG(shared)) embeddings on two text understanding problems, Paraphrase Generation and Sentiment Analysis. We use the Quora question pairs dataset \footnote{website: \url{https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs}} for paraphrase generation and the Stanford Sentiment Treebank dataset~\cite{Socher_EMNLP2013} for sentiment analysis.
In this section we describe the datasets, the experimental setup and the results of our experiments. \subsection{Paraphrase Generation} Paraphrase generation is an important problem in many NLP applications such as question answering, information retrieval, information extraction, and summarization. It involves generating sentences that convey the same meaning as a given sentence. \subsubsection{Dataset} We use the newly released Quora question pairs dataset for this task. It consists of over 400K potential question duplicate pairs. As pointed out in~\cite{gupta_AAAI2017}, the question pairs labeled with the binary value 1 are actual paraphrases of each other, while the remaining pairs are not. So, we choose all question pairs with binary value 1, giving a total of 149K question pairs. Some examples of generated question-paraphrase pairs are provided in Table~\ref{tab:paraphrase_samples}. More results are present in the appendix. \begin{table}[ht] \centering \begin{tabular}{|l|l|c|c|c|} \hline \bf Dataset &\bf Model & \bf BLEU1 & \bf METEOR & \bf TER \\ \hline &Unsupervised VAE~\cite{gupta_AAAI2017} & 8.3 &12.2 &83.7\\ &VAE-S~\cite{gupta_AAAI2017} &11.9 &17.4 &69.4\\ 50K&VAE-SVG~\cite{gupta_AAAI2017} &17.1 & 21.3 &63.1 \\ &VAE-SVG-eq~\cite{gupta_AAAI2017} &17.4 & \textbf{21.4} & 61.9\\ &EDD-G (\textbf{Ours}) &40.7 & 19.7 & 51.2\\ &EDD-LG(\textbf{Ours}) &40.9 & 19.8 & 51.0\\ &EDD-LG(shared)(\textbf{Ours}) &\textbf{41.1} & 20.1 & \textbf{50.8}\\ \hline &Unsupervised~\cite{gupta_AAAI2017} & 10.6&14.3 &79.9 \\ &VAE-S~\cite{gupta_AAAI2017} & 17.5& 21.6 &67.1\\ 100K&VAE-SVG~\cite{gupta_AAAI2017} &22.5 & 24.6 &55.7\\ &VAE-SVG-eq~\cite{gupta_AAAI2017} &22.9 & \textbf{24.7} &55.0 \\ &EDD-G (\textbf{Ours}) &42.1 & 20.4 & 49.9\\ &EDD-LG(\textbf{Ours}) &44.2 & 22.1 & 48.3\\ &EDD-LG(shared)(\textbf{Ours}) &\textbf{45.7} &23.1 & \textbf{47.5}\\ \hline \end{tabular} \caption{\label{score_tab_2}Analysis of Baselines and State-of-the-Art methods for paraphrase generation on
Quora dataset. As we can clearly see, our model outperforms the state-of-the-art methods by a significant margin in terms of BLEU and TER scores. A detailed analysis is present in section~\ref{sec:Baselines_analysis}. A lower TER score is better, whereas for the other metrics a higher score is better. Details of the metrics are present in the appendix.} \end{table} \subsubsection{Experimental Protocols} We follow the experimental protocols mentioned in~\cite{gupta_AAAI2017} for the Quora Question Pairs dataset. In our experiments, we divide the dataset into two parts of 145K and 4K question pairs, which we use as our training and testing sets. We further divide the training set into 50K and 100K dataset sizes and use the remaining 45K as our validation set. We also followed the dataset split mentioned in~\cite{li_arXiv2017Paraphrase} to calculate the accuracies on a different test set and provide the results on our project webpage. We trained our model end-to-end using the local loss (cross-entropy loss) and the global loss. We used the RMSPROP optimizer to update the model parameters and found the following hyperparameter values to work best for training the Paraphrase Generation Network: $\text{learning rate} = 0.0008$, $\text{batch size} = 150$, $\alpha = 0.99$, $\epsilon = 10^{-8}$. We used learning rate decay to decrease the learning rate on every epoch by a factor given by: \[\text{Decay\_factor}=\exp\left(\frac{\log(0.1)}{a \cdot b} \right)\] where $a=1500$ and $b=1250$ are set empirically. \begin{table}[ht] \centering \begin{tabular}{|>{\arraybackslash}m{0.035\textwidth}|>{\arraybackslash}m{0.25\textwidth}|>{\arraybackslash}m{0.3\textwidth}|>{\arraybackslash}m{0.25\textwidth}|} \hline S.No&\bf Original Question &\bf Ground Truth Paraphrase & \bf Generated Paraphrase \\ \hline 1&Is university really worth it? &Is college even worth it? &Is college really worth it? \\ \hline 2&Why India is against CPEC? &Why does India oppose CPEC? &Why India is against Pakistan?
\\ \hline 3&How can I find investors for my tech startup? &How can I find investors for my startup on Quora? &How can I find investors for my startup business? \\\hline 4&What is your view/opinion about surgical strike by the Indian Army?& What world nations think about the surgical strike on POK launch pads and what is the reaction of Pakistan?& What is your opinion about the surgical strike on Kashmir like? \\ \hline 5&What will be Hillary Clinton's strategy for India if she becomes US President? & What would be Hillary Clinton's foreign policy towards India if elected as the President of United States? & What will be Hillary Clinton's policy towards India if she becomes president? \\ \hline \end{tabular} \caption{\label{tab:paraphrase_samples}Examples of paraphrase generation on the Quora dataset. We observe that our model is able to understand abbreviations as well and can ask questions on that basis, as in the second example.} \end{table} \subsubsection{Ablation Analysis}\label{ablation_analysis} We experimented with different variations of our proposed method. We start with the baseline model, a simple encoder-decoder network trained with only the local loss (ED-L)~\cite{Sutskever_NIPS2014}. We then experimented with an encoder-decoder and a discriminator network trained with only the global loss (EDD-G) to distinguish the ground truth paraphrase from the predicted one. Another variation of our model uses both the global and local losses (EDD-LG); the discriminator is the same as in our proposed method, only the weight sharing is absent in this case. Finally, we make the discriminator share weights with the encoder and train this network with both losses (EDD-LG(shared)). The analyses are given in Table~\ref{score_tab_1}.
Among the ablations, the proposed EDD-LG(shared) method performs considerably better than the other variants in terms of the BLEU and METEOR metrics, achieving improvements of 8\% and 6\% in the respective scores over the baseline method for the 50K dataset, and improvements of 10\% and 7\% respectively for the 100K dataset. \subsubsection{Baseline and State-of-the-Art Method Analysis}\label{sec:Baselines_analysis} There has been relatively little work on this dataset, and the only work which we came across was that of~\cite{gupta_AAAI2017}. We compare our EDD-LG(shared) model with their VAE-SVG-eq, which is the current state-of-the-art on the Quora dataset, and also provide comparisons with the other methods proposed by them in Table~\ref{score_tab_2}. As we can see from the table, we achieve a significant improvement of 24\% in BLEU score and 11\% in TER score (a lower TER score is better) for the 50K dataset, and similarly 22\% in BLEU score and 7.5\% in TER score for the 100K dataset. \subsubsection{Statistical Significance Analysis} We analyse the statistical significance~\cite{Demvsar_JMLR2006} of our proposed embeddings against the different ablations and the state-of-the-art methods for the paraphrase generation task. The Critical Difference (CD) for the Nemenyi test~\cite{Fivser_PLOS2016} depends upon the given $\alpha$ (confidence level, which is 0.05 in our case) for the average ranks and on $N$ (the number of tested datasets). If the difference in the average ranks of two methods lies within CD, then they are not significantly different; otherwise, they are statistically different. Figure~\ref{fig:result_1_B} visualizes the post hoc analysis using the CD diagram. From the figure, it is clear that our embeddings work best and that the results are significantly different from those of the state-of-the-art methods.
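The CD computation itself can be sketched as follows (the $q_\alpha$ values below are quoted from the standard Nemenyi critical-value tables reproduced in~\cite{Demvsar_JMLR2006}, and the numbers in the final check are illustrative, not taken from our experiments):

```python
import math

# Critical values q_alpha of the Nemenyi test at alpha = 0.05
# (quoted from standard tables; verify against the reference).
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850, 7: 2.949}

def nemenyi_cd(k, n, q_table=Q_ALPHA_05):
    """Critical difference CD = q_alpha * sqrt(k*(k+1) / (6*N)) for k
    models compared on N datasets; average-rank gaps smaller than CD
    are not statistically significant."""
    return q_table[k] * math.sqrt(k * (k + 1) / (6.0 * n))

# More datasets shrink the critical difference.
assert nemenyi_cd(5, 20) < nemenyi_cd(5, 10)

# Illustrative check: two of five models with average ranks 1.2 and 4.5
# over 10 datasets differ by more than CD, hence significantly.
assert (4.5 - 1.2) > nemenyi_cd(5, 10)
```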
\begin{table}[ht] \centering \begin{tabular}{|>{\arraybackslash}m{0.55\textwidth}|>{\centering\arraybackslash}m{0.2\textwidth}|} \hline {\bf Model} & \bf Error Rate (Fine-Grained) \\\hline Naive Bayes~\cite{Socher_EMNLP2013} & 59.0 \\ SVMs~\cite{Socher_EMNLP2013} & 59.3 \\ Bigram Naive Bayes~\cite{Socher_EMNLP2013} & 58.1 \\ Word Vector Averaging~\cite{Socher_EMNLP2013} & 67.3\\ Recursive Neural Network~\cite{Socher_EMNLP2013} & 56.8 \\ Matrix Vector-RNN~\cite{Socher_EMNLP2013} & 55.6 \\ Recursive Neural Tensor Network~\cite{Socher_EMNLP2013} & 54.3 \\ Paragraph Vector~\cite{Le_ICML2014} & {51.3}\\ \hline EDD-LG(shared) (\bf Ours) & \bf 35.6 \\ \hline \end{tabular} \caption{\label{score_tab_4}Performance of our method compared to other approaches on the Stanford Sentiment Treebank dataset. The error rates of the other methods are as reported in~\cite{Le_ICML2014}.} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{SSA.png} \caption{The mean ranks of all the models on the basis of BLEU score are plotted on the x-axis. Here EDD-LG-S refers to our EDD-LG(shared) model; the others are the different variations of our model described in section~\ref{ablation_analysis}, and the models on the right are the different variations proposed in~\cite{gupta_AAAI2017}. A colored line connecting two models indicates that they are not significantly different from each other. CD $= 5.199$, $p = 0.0069$.} \label{fig:result_1_B} \end{figure} \subsection{Sentiment Analysis with Stanford Sentiment Treebank (SST) Dataset} \subsubsection{Dataset} This dataset consists of sentiment labels for different movie reviews and was first proposed by~\cite{Pang_ACL2005}.~\cite{Socher_EMNLP2013} extended this by parsing the reviews into subphrases and then fine-graining the sentiment labels for all the phrases of the movie reviews using Amazon Mechanical Turk.
The labels are classified into 5 sentiment classes, namely \{Very Negative, Negative, Neutral, Positive, Very Positive\}. This dataset contains a total of 126k phrases for the training set, 30k phrases for the validation set and 66k phrases for the test set. \subsubsection{Tasks and Baselines} In~\cite{Socher_EMNLP2013}, the authors propose two ways of benchmarking. We consider the 5-way fine-grained classification task where the labels are \{Very Negative, Negative, Neutral, Positive, Very Positive\}. The other axis of variation is in terms of whether we should label the entire sentence or all phrases in the sentence. In this work we only consider labeling all the phrases.~\cite{Socher_EMNLP2013} apply several methods to this dataset and we show their performance in Table~\ref{score_tab_4}. \begin{table}[ht] \small \centering \begin{tabular}{|>{\arraybackslash}m{0.075\textwidth}|>{\arraybackslash}m{0.73\textwidth}|m{0.11\textwidth}|} \hline \bf Phrase ID &\bf Phrase & \bf Sentiment \\ \hline 162970& The heaviest, most joyless movie &\\ 159901& Even by dumb action-movie standards, Ballistic : Ecks vs. Sever is a dumb action movie. &\\ 158280& Nonsensical, dull ``cyber-horror'' flick is a grim, hollow exercise in flat scares and bad acting &Very Negative\\ 159050& This one is pretty miserable, resorting to string-pulling rather than legitimate character development and intelligent plotting. &\\ 157130& The most hopelessly monotonous film of the year, noteworthy only for the gimmick of being filmed as a single unbroken 87-minute take. &\\ \hline 156368& No good jokes, no good scenes, barely a moment &\\ 157880& Although it bangs a very cliched drum at times &\\ 159269& They take a long time to get to its gasp-inducing ending.
& Negative\\ 157144& Noteworthy only for the gimmick of being filmed as a single unbroken 87-minute &\\ 156869& Done a great disservice by a lack of critical distance and a sad trust in liberal arts college bumper sticker platitudes&\\ \hline 221765& A hero can stumble sometimes. &\\ 222069& Spiritual rebirth to bruising defeat&\\ 218959& An examination of a society in transition &Neutral\\ 221444& A country still dealing with its fascist past& \\ 156757& Have to know about music to appreciate the film's easygoing blend of comedy and romance & \\ \hline 157663& A wildly funny prison caper. &\\ 157850& This is a movie that's got oodles of style and substance. & \\ 157879& Although it bangs a very cliched drum at times, this crowd-pleaser's fresh dialogue, energetic music, and good-natured spunk are often infectious. & Positive\\ 156756& You don't have to know about music to appreciate the film's easygoing blend of comedy and romance. &\\ 157382& Though of particular interest to students and enthusiast of international dance and world music, the film is designed to make viewers of all ages, cultural backgrounds and rhythmic ability want to get up and dance. &\\ \hline 162398& A comic gem with some serious sparkles. &\\ 156238& Delivers a performance of striking skill and depth &\\ 157290& What Jackson has accomplished here is amazing on a technical level. &Very Positive\\ 160925& A historical epic with the courage of its convictions about both scope and detail. &\\ 161048& This warm and gentle romantic comedy has enough interesting characters to fill several movies, and its ample charms should win over the most hard-hearted cynics.&\\ \hline \end{tabular} \caption{\label{score_tab_sup}Examples of sentiment classification on the test set of the Kaggle competition dataset.} \end{table} \subsubsection{Experimental Protocols} For the sentiment analysis task, we follow an experimental setup similar to that of~\cite{Socher_EMNLP2013}.
We treat every subphrase in the dataset as a separate sentence and learn their corresponding representations. We then feed these to a logistic regression to predict the movie ratings. During inference, we use a method similar to~\cite{Le_ICML2014}, in which we freeze the representation of every word and use it to construct a representation for the test sentences, which are then fed to a logistic regression for predicting the ratings. In order to train the sentiment classification model, we use RMSPROP to optimize the classification model parameters, and we found these hyperparameter values to work best in our case: $\text{learning rate} = 0.00009$, $\text{batch size} = 200$, $\alpha = 0.9$, $\epsilon = 10^{-8}$.\\ \subsubsection{Results} We report the error rates of the different methods in Table~\ref{score_tab_4}. We can clearly see that the performance of the bag-of-words and bag-of-n-grams models (the first four models in the table) is not up to the mark, whereas the more advanced methods (such as the Recursive Neural Network~\cite{Socher_EMNLP2013}) perform better on the sentiment analysis task. Our method outperforms all these methods by an absolute margin of 15.7\%, which is a significant increase considering the rate of progress on this task. We have also uploaded our models to the online competition on the Rotten Tomatoes dataset \footnote{website: \url{www.kaggle.com/c/sentiment-analysis-on-movie-reviews}} and obtained an accuracy of 62.606\% on their test set of 66K phrases. \\ We provide 5 examples for each sentiment in Table~\ref{score_tab_sup}. We can clearly see that our proposed embeddings are able to capture the complete meaning of smaller as well as larger sentences. For example, our model classifies `Although it bangs a very cliched drum at times' as Negative and `Although it bangs a very cliched drum at times, this crowd-pleaser's fresh dialogue, energetic music, and good-natured spunk are often infectious.'
as positive showing that it is able to understand the finer details of language. More results and visualisations showing the part of the phrase to which the model attends while classifying are present in the appendix. The link for the project website and code is provided here \footnote{Project website: \url{https://badripatro.github.io/Question-Paraphrases/}}. \section{Conclusion} In this paper we have proposed a sentence embedding using a sequential encoder-decoder with a pairwise discriminator. We have experimented with this text embedding method for paraphrase generation and sentiment analysis. We also provided experimental analysis which justifies that a pairwise discriminator outperforms the previous state-of-art methods for NLP tasks. We also performed ablation analysis for our method, and our method outperforms all of them in terms of BLEU, METEOR and TER scores. We plan to generalize this to other text understanding tasks and also extend the same idea in vision domain. \bibliographystyle{acl} \section{Introduction} Given a sentence, the problem of obtaining a semantic embedding that ensures that the related sentences are close and unrelated sentences are far lies at the core of understanding languages. This would be relevant for a wide variety of machine reading comprehension and related tasks such as sentiment analysis. Towards this problem, we propose a supervised method that uses a sequential encoder-decoder framework for paraphrase generation. The task of generating paraphrases is closely related to the task of obtaining semantic sentence embeddings. In our approach we aim to ensure that the generated paraphrase embedding should be close to the true corresponding sentence and far from unrelated sentences. The embeddings so obtained help us to obtain state-of-the-art results for paraphrase generation task. Our model consists of a sequential encoder-decoder model that is further trained using a pairwise discriminator. 
The encoder-decoder model has been widely used for machine translation and machine comprehension tasks. In general, the model only ensures a `local' loss that is incurred for each recurrent unit cell. This only ensures that a particular word token is present at an appropriate place. This however does not imply that the whole sentence is correctly generated. In order to ensure that the whole sentence is correctly encoded, we make further use of a pair-wise discriminator that encodes the whole sentence and obtains an embedding for it. We further ensure that this is close to the desired ground-truth embeddings while being far from other embeddings. This model thus provides a `global' loss that ensures the sentence embedding as a whole is close to other semantically related sentence embeddings. This is illustrated in figure~\ref{fig:intro_fig}. We further evaluate the validity of the sentence embedding by using them for the task of sentiment analysis. We observe that the proposed sentence embeddings result in state-of-the-art performance for both these tasks. To summarize through this paper we observe the following contributions: a) We propose a model for obtaining sentence embeddings for solving the paraphrase generation task using a pair-wise discriminator loss added to an encoder-decoder network. b) We show that these embeddings can also be used for the sentiment analysis task. c) We validate the model using standard datasets with detailed comparison with the state of the art and also ensure that the results are statistically significant. \begin{figure*}[ht] \centering \includegraphics[width=1.0\columnwidth]{COLING_2018/fig/coling_DGAN_intro_final.pdf} \caption{Pairwise Discriminator based Encoder-Decoder for Paraphrase Generation: This is the basic outline of our model which consists of an LSTM encoder, decoder and discriminator. Here the discriminator and encoder share weights and hence the name pairwise discriminator. 
Our model is jointly trained with the help of a `local' and `global' loss which we describe later.} \label{fig:intro_fig} \end{figure*} \section{Related Work} Given the flexibility and diversity of natural language, it has been always a challenging task to represent text efficiently. There have been several hypothesis proposed for representing the same.~\cite{harris_TF1954,Firth_SLA1957,Sahlgren_IJDS2008} proposed a distribution hypothesis to represent words, i.e words which occur in the same context have similar meanings. One popular hypothesis is the bag-of-words (BOW) or Vector Space Model~\cite{Salton_ACM1975}, in which a text (such as a sentence or a document) is represented as the bag (multiset) of its words.~\cite{lin_ACM2001dirt} proposed extended distributional hypothesis and~\cite{Deerwester_JASIS1990,Turney_ACM2003} proposed latent relation hypothesis, in which,a Pairs of words that co-occur in similar patterns similar pattern tend to have similar semantic relation. Word2Vec\cite{Mikolov_ARXIV2013,mikolov_NIPS2013,Goldberg_ARXIV2014} model is also a popular method for representing every unique word in the corpus in a vector space. Here, the embedding of every word is predicted based on its context (surrounding words). NLP researcher have also proposed phrase-level and sentence-level representations~\cite{mitchell_WOL2010,zanzotto_ACL2010,yessenalina_EMNLP2011,grefenstette_IWCS2013,mikolov_NIPS2013}.~\cite{socher_ICML2011,Kim_ARXIV2014,lin_EMNLP2015,Yin_ARXIV2015,Kalchbrenner_ARXIV2014} have analyzed several approaches to represent sentences and phrases by a weighted average of all the words in the sentences, combining the word vectors in an order given by a parse tree of a sentence and by using matrix-vector operations. 
The major issue with BOW models and weighted averaging of word vectors is the loss of the semantic meaning of the words; the parse-tree approaches can only work for sentences because of their dependence on a sentence parsing mechanism.~\cite{Socher_EMNLP2013,Le_ICML2014} proposed a method to obtain a vector representation for paragraphs and used it for text-understanding problems such as sentiment analysis and information retrieval. Many language models have been proposed for obtaining better text embeddings in machine translation~\cite{Sutskever_NIPS2014,cho_EMLNLP2014,vinyals_arXiv2015,wu_CoRR2016}, question generation~\cite{Du_ArXiv2017}, dialogue generation~\cite{shang_ICNLP2015,li_ARXIV2016,li_arxiv2017adversarial}, document summarization~\cite{rush_EMNLP2015}, text generation~\cite{zhang_arxiv2017adversarial,Hu_ICML2017toward,Yu_AAAI2017seqgan,Guo_arxiv2017long,liang_CORR2017recurrent,Reed_CVPR2016} and question answering~\cite{yin_IJCAI2016,miao_ICML2016}. For the paraphrase generation task,~\cite{Prakash_Arxiv2016neural} generated paraphrases using a stacked residual LSTM based network.~\cite{hasan_ClinicalNLP2016} proposed an encoder-decoder framework for this task.~\cite{gupta_AAAI2017} explored a VAE setup to generate paraphrase sentences using recurrent neural networks.~\cite{li_arXiv2017Paraphrase} used reinforcement learning for the paraphrase generation task. Siamese neural networks have been proposed for a number of metric learning tasks~\cite{yih_ACL2011learning,chen_NIPS2011extracting}. Also,~\cite{mueller_AAAI2016siamese,neculoiu2_RLNLP016learning} have proposed LSTM based Siamese networks for learning sentence similarity, which we adopt in our work. \label{sec:lit_surv} \section{Method} In this paper, we propose a text representation method for sentences and paragraphs based on the encoder-decoder framework with a pairwise discriminator for paraphrase generation, and then fine-tune these embeddings for the sentiment analysis task.
Our model is an extension of the seq2seq~\cite{Sutskever_NIPS2014} model for learning better text embeddings. \subsection{Overview} \noindent \textbf{Task:} We are solving the paraphrase generation problem. Given an input sequence of words $X = [x_1,..., x_L]$, we need to generate another output sequence of words $Y = [y_1,..., y_T ]$ that has the same meaning as $X$. Here $L$ and $T$ are not fixed constants. Our training data consist of $N$ paraphrase pairs $ \{(X_i,Y_i)\}_{i=1}^N$ where $X_i$ and $Y_i$ are paraphrases of each other.\\ \\ Our method consists of three modules, as illustrated in figure~\ref{fig:main_figure}: the first is a Text Encoder which consists of LSTM layers, the second is an LSTM based Text Decoder and the last is an LSTM based discriminator module. These are shown in parts 1, 2 and 3 of figure~\ref{fig:main_figure}. The weight parameters of the encoder and discriminator modules are shared. The network with all three parts is trained end-to-end. Our framework uses an encoder-decoder along with a discriminator module for paraphrase generation. Instead of taking a separate discriminator, we share its weights with the encoder so that it learns the embedding based on the `global' as well as the `local' loss. After training, at test time we use the encoder to generate the sentence embedding and pass it to the decoder for generating paraphrases. These text embeddings can be further used for other NLP tasks such as sentiment analysis. \begin{figure*}[ht] \centering \includegraphics[width=1.0\columnwidth]{COLING_2018/fig/coling_main6.pdf} \caption{This is an overview of our model.
It consists of 3 parts: 1) an LSTM based Encoder module which encodes a given sentence, 2) an LSTM based Decoder module which generates natural language paraphrases from the encoded embeddings and 3) an LSTM based pairwise Discriminator module which shares its weights with the Encoder module; the whole network is trained with cross entropy and Siamese losses.} \label{fig:main_figure} \end{figure*} \subsection{Encoder-LSTM} The input question $X_i$ is represented as a matrix in which every row corresponds to the vector representation of a word. Typically, the vectors are word embeddings. We use a one-hot vector representation for every word and obtain a word embedding $c_i$ for each word using a Temporal CNN~\cite{zhang_NIPS2015character,Palangi_TASLP2016} module that we parameterize through a function $G(x_i,W_c)$, where $W_c$ are the weights of the temporal CNN. This word embedding is fed to an encoder which provides the encoding features of the sentence. We use an LSTM~\cite{hochreiter_NC1997} due to its capability of capturing long-term dependencies~\cite{Palangi_TASLP2016}. As the words are propagated through the network, the network collects more and more semantic information about the sentence. When the network reaches the last word at position $L$, the hidden state $h_{L}$ of the network provides a semantic representation of the whole sentence conditioned on all the previously seen words $(q_0,q_1..,q_{t})$. The hidden state $h_{t+1}$ can be expressed as a combination of $h_t$ and $q_{t+1}$: $h_{t+1}= \tanh{(W_h*h_{t} + W_e * q_{t+1})}$, where $W_h$ and $W_e$ are the weights for the hidden state and the word embedding, respectively. The question sentence encoding feature $f_i$ is obtained after passing through the LSTM, which is parameterised using the function $F(C_i, W_l)$ where $W_l$ are the weights of the LSTM. This is illustrated in part 1 of figure~\ref{fig:main_figure}.
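As a rough illustration, the recurrence $h_{t+1}= \tanh(W_h h_t + W_e q_{t+1})$ above can be sketched in NumPy. This is a deliberately simplified stand-in: the dimensions and random weights are hypothetical, and the actual encoder uses a full LSTM cell with gates rather than this plain tanh recurrence.

```python
import numpy as np

def encode(word_embeddings, W_h, W_e):
    """Run the simplified recurrence h_{t+1} = tanh(W_h h_t + W_e q_{t+1})
    over a sentence and return the final hidden state h_L."""
    hidden_dim = W_h.shape[0]
    h = np.zeros(hidden_dim)
    for q in word_embeddings:           # q_{t+1}: embedding of the next word
        h = np.tanh(W_h @ h + W_e @ q)  # accumulate semantic information
    return h                            # h_L: sentence representation f_i

# Hypothetical sizes and weights, purely for illustration.
rng = np.random.default_rng(0)
embed_dim, hidden_dim, sent_len = 8, 16, 5
W_h = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
W_e = rng.normal(size=(hidden_dim, embed_dim)) * 0.1
sentence = rng.normal(size=(sent_len, embed_dim))
f_i = encode(sentence, W_h, W_e)
```

The final state $h_L$ serves as the fixed-length sentence feature $f_i$ that the decoder and discriminator consume.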
\subsection{Decoder-LSTM} The role of the decoder is to predict the probability of a whole sentence, given the embedding of the input sentence ($f_i$). An RNN provides a natural way to condition on the previous state value using a fixed length hidden vector. The conditional probability of a sentence token at a particular time step $q_{t+1}$ is modeled using an LSTM, as in machine translation~\cite{Sutskever_NIPS2014}. At time step $t+1$, the conditional probability is denoted by $P( q_{t+1} | {f_i},q_0,..,q_{t})= P( q_{t+1} | {f_i},h_{t+1})$, where $h_{t+1}$ is the hidden state of the LSTM cell at time step $t+1$. $h_{t+1}$ is conditioned on all the previously generated words $(q_0,q_1..,q_{t})$, and $q_{t+1}$ is the next generated word. $h_{t+1}$ can be expressed as a combination of $h_t$ and $q_{t+1}$: $h_{t+1}= \tanh{(W_h*h_{t} + W_d * q_{t+1})}$. The word with maximum probability in the distribution of the LSTM cell at step $k$ is input to the LSTM cell at step $k+1$, as shown in figure~\ref{fig:main_figure}. At $t=-1$, we feed the embedding of the input sentence obtained by the encoder module. $Y_i={q_0,...q_N}$ are the ground truth paraphrase words for the input $X_i$. Here, $q_0$ and $q_N$ are the special START and STOP tokens, respectively. The softmax probability for the predicted question token at different time steps is given by the following standard LSTM equations: \begin{equation} \begin{split} & p_{-1}=S_i=\mbox{Encoder}(f_{i}) \\ & h_0=\mbox{LSTM}(p_{-1})\\ & p_t=W_d*q_t, \forall t\in \{0,1,2,...N-1\} \\ & {h_{t+1}}=\mbox{LSTM}(p_t,h_{t}), \forall t\in \{0,1,2,...N-1\}\\ & o_{t+1} = W_o * h_{t+1} \\ & \hat{y}_{t+1} = P( q_{t+1} | f_i,h_{t})= \mbox{Softmax}(o_{t+1})\\ & Loss_{t+1}=loss(\hat{y}_{t+1},y_{t+1}) \end{split} \end{equation} where $\hat{y}_{t+1}$ is the predicted question token and $y_{t+1}$ is the ground truth one.
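The decoding loop implied by the equations above can be sketched as a greedy decoder: feed the sentence embedding at $t=-1$, then feed the argmax token of each step back into the next. Vocabulary size, weight matrices and the simplified tanh cell here are all hypothetical stand-ins for the learned LSTM decoder.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def greedy_decode(f_i, W_h, W_d, W_o, embed, start_id, stop_id, max_len=10):
    """Greedy decoding: h is seeded from the encoder embedding f_i, then the
    max-probability word at each step is fed to the next step."""
    h = np.tanh(W_h @ f_i)              # h_0 from the input-sentence embedding
    token, out = start_id, []
    for _ in range(max_len):
        p_t = W_d @ embed[token]        # p_t = W_d * q_t
        h = np.tanh(W_h @ h + p_t)      # h_{t+1} (simplified cell)
        y_hat = softmax(W_o @ h)        # P(q_{t+1} | f_i, h_t)
        token = int(np.argmax(y_hat))   # feed the argmax word forward
        if token == stop_id:
            break
        out.append(token)
    return out

# Illustrative sizes and random weights.
rng = np.random.default_rng(1)
V, d_e, d_h = 12, 6, 8
W_h = rng.normal(size=(d_h, d_h)) * 0.2
W_d = rng.normal(size=(d_h, d_e)) * 0.2
W_o = rng.normal(size=(V, d_h)) * 0.2
embed = rng.normal(size=(V, d_e))
tokens = greedy_decode(rng.normal(size=d_h), W_h, W_d, W_o, embed,
                       start_id=0, stop_id=1)
```

At training time the argmax feedback is replaced by the ground-truth token $q_t$, matching the equations above.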
In order to capture local label information, we use the cross entropy loss, given by the following equation: \begin{equation} L_{local}=\frac{-1}{N}\sum_{t=1}^{N} {y_{t} \log \text{P}(q_{t}\mid f_i, q_0,..,q_{t-1})} \end{equation} Here $N$ is the total number of sentence tokens, $\text{P}(q_{t}\mid f_i,q_0,..,q_{t-1})$ is the predicted probability of the sentence token, and $y_t$ is the ground truth token. \subsection{Discriminative-LSTM} The aim of the discriminator is to distinguish the predicted sentence embedding $\hat{p_i}$ from the non-matching ground truth embeddings $g_j$, while making it indistinguishable from the matching ground truth embedding, as shown in fig.~\ref{fig:main_figure}. In this sense, the discriminator estimates a loss function between generated and original paraphrases. Typically, the discriminator is a binary classifier loss, but here we define a Siamese loss, which acts on the last hidden state of the recurrent neural network and gives better performance. In order to learn the extrinsic sentence matching capability, we train the discriminator model using a Siamese loss. The main objective of this loss is to bring our generated paraphrase embedding closer to its ground truth paraphrase embedding and push it further from the other ground truth paraphrase embeddings (other sentences in the batch). Here our discriminator network ensures that the generated embedding can reproduce better paraphrases that are close enough to the ground truth sentence. We use the idea of sharing the discriminator parameters with the encoder network to enforce the learning of embeddings that minimize not only the local loss (cross entropy) but also the global loss. Suppose the embeddings of the predicted batch are $e_p=[f_1^p,f_2^p,.. f_N^p]^T$, where $f_i^p$ is the sentence embedding of the $i^{th}$ sentence of the batch. Similarly, the ground truth batch embeddings are $e_g=[f_1^g,f_2^g,.. f_N^g]^T$, where $N$ is the batch size and $f_i^p, f_i^g \in \mathds{R}^d$.
We define an adversarial loss whose objective is to maximize the similarity between the predicted sentence embedding $f_i^p$ and the ground truth embedding $f_i^g$ of the $i^{th}$ sentence, and to minimize the similarity between the $i^{th}$ predicted embedding $f_i^p$ and the $j^{th}$ ground truth embedding $f_j^g$ in a batch. The loss is defined as \begin{equation} L_{global}= \sum_{i=1}^{N}\sum_{\substack{j=1 \\ j\neq i}}^{N} \max(0,((f_i^p\cdot f_j^g)- (f_i^p \cdot f_i^g)+1)) \end{equation} The gradients of the loss function are given by \begin{equation} \bigg(\frac{dL}{de_p}\bigg)_i= \sum_{\substack{j=1 \\ j\neq i}}^{N} (f_j^g-f_i^g) \end{equation} \begin{equation} \bigg(\frac{dL}{de_g}\bigg)_i= \sum_{\substack{j=1 \\ j\neq i}}^{N} (f_j^p-f_i^p) \end{equation} \subsection{Cost function} \noindent Our objective is to minimize the total loss, that is, the sum of the local loss and the global loss over all training examples. The total loss is: \begin{equation} L_{total}= \frac{1}{M} \sum^{M}_{i=1} (L_{local} + L_{global}) \end{equation} where $M$ is the total number of examples, $L_{local}$ is the cross entropy loss and $L_{global}$ is the discriminative loss.
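A minimal NumPy sketch of this batch-level hinge loss follows. It excludes the trivial $j=i$ term (which only contributes a constant), consistent with the gradient formulas; the batch embeddings below are synthetic.

```python
import numpy as np

def global_loss(e_p, e_g):
    """Pairwise hinge loss with margin 1: pull each predicted embedding
    f_i^p toward its own ground truth f_i^g and push it away from the
    other ground-truth embeddings f_j^g (j != i) in the batch."""
    pos = np.sum(e_p * e_g, axis=1)            # f_i^p . f_i^g
    sim = e_p @ e_g.T                          # f_i^p . f_j^g for all i, j
    margins = np.maximum(0.0, sim - pos[:, None] + 1.0)
    np.fill_diagonal(margins, 0.0)             # drop the j == i terms
    return margins.sum()

# Synthetic batch: 4 sentence embeddings of dimension 5.
rng = np.random.default_rng(2)
e_g = rng.normal(size=(4, 5))
loss_rand = global_loss(rng.normal(size=(4, 5)), e_g)
```

The loss is zero exactly when every predicted embedding beats all wrong ground-truth embeddings by the margin, which is the batch-level "global" signal shared with the encoder.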
\begin{table}[ht] \centering \begin{tabular}{|l|l|l|c|c|c|c|c|} \hline \bf Dataset &\bf Model & \bf BLEU1 & \bf BLEU2 & \bf BLEU3 & \bf BLEU4 &\bf ROUGE & \bf METEOR \\ \hline &ED(Base Line) &33.7 & 22.3 &18.0 & 12.1 &35.3 &14.3 \\ &EDD &40.7 &28.3 &21.1 &16.1 &39.7 &19.6 \\ 50K&EDGAN &40.9 &28.6 &21.3 &16.1 &40.2 &19.8 \\ &EDD FULL &\textbf{41.1} &\textbf{29.0} &\textbf{21.5} &\textbf{16.5} &\textbf{40.6} &\textbf{20.1}\\ \hline &ED(Base Line) &35.1 & 25.4 &19.6 & 14.4 &37.4 &15.4 \\ &EDD &42.1 & 29.4 &21.6 & 16.4 &41.4 &20.4 \\ 100K&EDGAN &44.2 &43.7 &22.1 &17.9 & 43.6 &22.1 \\ &EDD FULL & \textbf{45.7}& \textbf{32.4} &\textbf{23.8} &\textbf{17.9} &\textbf{44.9} &\textbf{23.1}\\ \hline \end{tabular} \caption{\label{score_tab_1}Analysis of variants of our proposed method on the Quora dataset. Our proposed method EDD Full clearly outperforms the other ablations on all metrics; a detailed analysis is present in section~\ref{ablation_analysis}.} \end{table} \section{Experiments} We perform experiments to better understand the behavior of our proposed embedding. To achieve this, we benchmark EDDF embeddings on two text understanding problems, Paraphrase Generation and Sentiment Analysis. We use the Quora question pairs dataset \footnote{website: \url{https://data.quora.com/First-Quora-Dataset-Release-Question-Pairs}} for paraphrase generation and the Stanford Sentiment Treebank dataset~\cite{Socher_EMNLP2013} for sentiment analysis. In this section we describe the datasets, the experimental setup and the results of our experiments. \subsection{Paraphrase Generation} Paraphrase generation is an important problem in many NLP applications such as question answering, information retrieval, information extraction, and summarization. It involves generating sentences with similar meaning. \subsubsection{Dataset} We use the newly released Quora question pairs dataset for this task.
The dataset consists of over 400K lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the questions in the pair are truly duplicates of each other. Wherever the binary value is 1, the questions in the pair are not verbatim identical; rather, they are paraphrases of each other. So, for our study, we choose all question pairs with binary value 1. There are a total of 149K such questions. Some examples of questions and their generated paraphrases can be found in Table~\ref{tab:paraphrase_samples}. More results are present in the supplementary material. \begin{table}[ht] \centering \begin{tabular}{|l|l|c|c|c|} \hline \bf Dataset &\bf Model & \bf BLEU1 & \bf METEOR & \bf TER \\ \hline &Unsupervised VAE~\cite{gupta_AAAI2017} & 8.3 &12.2 &83.7\\ &VAE-S~\cite{gupta_AAAI2017} &11.9 &17.4 &69.4\\ 50K&VAE-SVG~\cite{gupta_AAAI2017} &17.1 & 21.3 &63.1 \\ &VAE-SVG-eq~\cite{gupta_AAAI2017} &17.4 & \textbf{21.4} & 61.9\\ &EDD (\textbf{Ours}) &40.7 & 19.7 & 51.2\\ &EDD GAN(\textbf{Ours}) &40.9 & 19.8 & 51.0\\ &EDD FULL(\textbf{Ours}) &\textbf{41.1} & 20.1 & \textbf{50.8}\\ \hline &Unsupervised~\cite{gupta_AAAI2017} & 10.6&14.3 &79.9 \\ &VAE-S~\cite{gupta_AAAI2017} & 17.5& 21.6 &67.1\\ 100K&VAE-SVG~\cite{gupta_AAAI2017} &22.5 & 24.6 &55.7\\ &VAE-SVG-eq~\cite{gupta_AAAI2017} &22.9 & \textbf{24.7} &55.0 \\ &EDD (\textbf{Ours}) &42.1 & 20.4 & 49.9\\ &EDD GAN(\textbf{Ours}) &44.2 & 22.1 & 48.3\\ &EDD FULL(\textbf{Ours}) &\textbf{45.7} &23.1 & \textbf{47.5}\\ \hline \end{tabular} \caption{\label{score_tab_2}Analysis of baselines and state-of-the-art methods for paraphrase generation on the Quora dataset. Our model outperforms the state-of-the-art methods by a significant margin in terms of BLEU and TER scores. A detailed analysis is present in section~\ref{sec:Baselines_analysis}.
A lower TER score is better, whereas for the other metrics a higher score is better. More details on the metrics are present in the supplementary material.} \end{table} \subsubsection{Experimental Protocols} We follow the experimental protocols mentioned in~\cite{gupta_AAAI2017} for the Quora dataset. In our experiments, we divide the dataset into two parts of 145K and 4K question pairs, which we use as our training and testing sets. We further divide the training set into 50K and 100K dataset sizes and use the remaining 45K as our validation set. We train our model end-to-end using the cross entropy and Siamese losses. We use the RMSPROP optimizer to update the model parameters and found the following hyper-parameter values to work best: $\text{learning rate} =0.0008, \text{batch size} = 150, \alpha = 0.99, \epsilon=1e-8$ to train the Paraphrase Generation Network. We use learning rate decay to decrease the learning rate on every epoch by a factor given by: \[\text{Decay\_factor}=\exp\left(\frac{\log(0.1)}{a*b} \right)\] where $a=1500$ and $b=1250$ are set empirically. \begin{table}[h] \centering \begin{tabular}{|>{\arraybackslash}m{0.035\textwidth}|>{\arraybackslash}m{0.3\textwidth}|>{\arraybackslash}m{0.3\textwidth}|>{\arraybackslash}m{0.3\textwidth}|} \hline S.No&\bf Original Question &\bf Ground Truth Paraphrase & \bf Generated Paraphrase \\ \hline 1&Is university really worth it? &Is college even worth it? &Is college really worth it? \\ \hline 2&Why India is against CPEC? &Why does India oppose CPEC? &Why India is against Pakistan? \\ \hline 3&How can I find investors for my tech startup? &How can I find investors for my startup on Quora? &How can I find investors for my startup business? \\\hline 4&What is your view/opinion about surgical strike by the Indian Army?& What world nations think about the surgical strike on POK launch pads and what is the reaction of Pakistan?& What is your opinion about the surgical strike on Kashmir like?
\\ \hline 5&What will be Hillary Clinton's strategy for India if she becomes US President? & What would be Hillary Clinton's foreign policy towards India if elected as the President of United States? & What will be Hillary Clinton's policy towards India if she becomes president? \\ \hline \end{tabular} \caption{\label{tab:paraphrase_samples}Examples of paraphrase generation on the Quora dataset. We observe that our model is able to understand abbreviations as well, and generates questions on the basis of that, as is the case with CPEC in the second example.} \end{table} \subsubsection{Ablation Analysis}\label{ablation_analysis} We experimented with different variations of our proposed method. We start with the baseline model, which we take as a simple encoder-decoder network (ED)~\cite{Sutskever_NIPS2014}. Further, we experimented with an encoder-decoder and a discriminator network (EDD) to distinguish the ground truth paraphrase from the predicted one. This method has only the global loss and not the local loss; the discriminator is the same as in our proposed method, only the weight sharing is absent in this case. Another variation of our model is EDGAN, which has a discriminator and is trained by alternating between the local and global losses, similar to GAN training. Finally, we make the discriminator share weights with the encoder and train this network with both losses (EDD Full). The analysis is given in table~\ref{score_tab_1}. Among the ablations, the proposed EDD Full method works considerably better than the other variants in terms of the BLEU and METEOR metrics, achieving improvements of 8\% and 6\% in the respective scores over the baseline method for the 50K dataset, and improvements of 10\% and 7\% for the 100K dataset. \subsubsection{Baseline and State-of-the-Art Method Analysis}\label{sec:Baselines_analysis} There has been relatively little work on this dataset, and the only work which we came across is that of~\cite{gupta_AAAI2017}.
We further compare our method EDD Full with their VAE-SVG-eq, which is the current state-of-the-art on the Quora dataset. We also provide comparisons with the other methods proposed by them in table~\ref{score_tab_2}. As we can see from the table, we achieve a significant increase of 24\% in BLEU score and 11\% in TER score for the 50K dataset, and similarly 22\% in BLEU score and 7.5\% in TER score for the 100K dataset. \subsubsection{Statistical Significance Analysis} We have analysed the statistical significance~\cite{Demvsar_JMLR2006} of our proposed embeddings against the different ablations and the state-of-the-art methods for the paraphrase generation task. The Critical Difference (CD) for the Nemenyi~\cite{Fivser_PLOS2016} test depends upon the given $\alpha$ (confidence level, which is 0.05 in our case) for the average ranks and $N$ (the number of tested datasets). If the difference in the ranks of two methods lies within CD, then they are not significantly different; otherwise, they are statistically different. Figure~\ref{fig:result_1_B} visualizes the post hoc analysis using the CD diagram. From the figure, it is clear that our embeddings work best and are significantly different from the state-of-the-art methods.
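For reference, the critical difference used by this test is $CD = q_\alpha \sqrt{k(k+1)/(6N)}$, where $k$ is the number of compared methods, $N$ the number of datasets, and $q_\alpha$ a tabulated constant (values below are from Demšar's 2006 survey cited above). A sketch follows; the concrete $k$ and $N$ in the usage line are illustrative, not the exact setup behind the reported CD.

```python
import math

# Critical values q_alpha for the two-tailed Nemenyi test at alpha = 0.05,
# indexed by the number of compared methods k (Demsar, 2006).
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728,
              6: 2.850, 7: 2.949, 8: 3.031, 9: 3.102, 10: 3.164}

def nemenyi_cd(k, n_datasets, alpha_table=Q_ALPHA_05):
    """CD = q_alpha * sqrt(k * (k + 1) / (6 * N)).  Two methods whose
    average ranks differ by less than CD are not significantly different."""
    return alpha_table[k] * math.sqrt(k * (k + 1) / (6.0 * n_datasets))

# Illustrative only: 4 methods compared over 5 datasets.
cd = nemenyi_cd(4, 5)
```

Methods joined by a line in the CD diagram of figure~\ref{fig:result_1_B} are exactly those whose average-rank gap falls below this threshold.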
\begin{table}[ht] \centering \begin{tabular}{|>{\arraybackslash}m{0.55\textwidth}|>{\centering\arraybackslash}m{0.2\textwidth}|} \hline {\bf Model} & \bf Error Rate (Fine-Grained) \\\hline Naive Bayes~\cite{Socher_EMNLP2013} & 59.0 \\ SVMs~\cite{Socher_EMNLP2013} & 59.3 \\ Bigram Naive Bayes~\cite{Socher_EMNLP2013} & 58.1 \\ Word Vector Averaging~\cite{Socher_EMNLP2013} & 67.3\\ Recursive Neural Network~\cite{Socher_EMNLP2013} & 56.8 \\ Matrix Vector-RNN~\cite{Socher_EMNLP2013} & 55.6 \\ Recursive Neural Tensor Network~\cite{Socher_EMNLP2013} & 54.3 \\ Paragraph Vector~\cite{Le_ICML2014} & {51.3}\\ \hline EDD FULL (\bf Ours) & \bf 35.6 \\ \hline \end{tabular} \caption{\label{score_tab_4}Performance of our method compared to other approaches on the Stanford Sentiment Treebank dataset. The error rates of the other methods are reported in~\cite{Le_ICML2014}.} \end{table} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{fig/SSA_EDD.png} \caption{The mean ranks of all the models on the basis of BLEU score are plotted on the x-axis. Here EDDF refers to our EDD Full model and the others are the different variations of our model described in section~\ref{ablation_analysis}; the models on the right are the different variations proposed in~\cite{gupta_AAAI2017}. The colored lines between two models represent that these models are not significantly different from each other. CD = 5.199, p = 0.0069.} \label{fig:result_1_B} \end{figure} \subsection{Sentiment Analysis with the Stanford Sentiment Treebank (SST) Dataset} \subsubsection{Dataset} This dataset was first proposed by~\cite{Pang_ACL2005}.~\cite{Socher_EMNLP2013} extended it by parsing the reviews into subphrases and then fine-graining the sentiment labels for all the phrases of the movie reviews using Amazon Mechanical Turk. The labels are classified into 5 sentiment classes, namely Very Positive, Positive, Neutral, Negative and Very Negative.
This dataset contains a total of 126k phrases for the training set, 30k phrases for the validation set and 66k phrases for the test set. \subsubsection{Tasks and Baselines} In~\cite{Socher_EMNLP2013}, the authors propose two ways of benchmarking. We consider the 5-way fine-grained classification task where the labels are \{Very Negative, Negative, Neutral, Positive, Very Positive\}. The other axis of variation is in terms of whether we should label the entire sentence or all phrases in the sentence. In this work we only consider labeling all the phrases.~\cite{Socher_EMNLP2013} apply several methods to this dataset and we show their performance in table~\ref{score_tab_4}. \begin{table}[ht] \small \centering \begin{tabular}{|>{\arraybackslash}m{0.1\textwidth}|>{\arraybackslash}m{0.73\textwidth}|m{0.11\textwidth}|} \hline \bf Phrase ID &\bf Phrase & \bf Sentiment \\ \hline 162970& The heaviest, most joyless movie &\\ 159901& Even by dumb action-movie standards, Ballistic : Ecks vs. Sever is a dumb action movie. &\\ 158280& Nonsensical, dull ``cyber-horror'' flick is a grim, hollow exercise in flat scares and bad acting &Very Negative\\ 159050& This one is pretty miserable, resorting to string-pulling rather than legitimate character development and intelligent plotting. &\\ 157130& The most hopelessly monotonous film of the year, noteworthy only for the gimmick of being filmed as a single unbroken 87-minute take. &\\ \hline 156368& No good jokes, no good scenes, barely a moment &\\ 157880& Although it bangs a very cliched drum at times &\\ 159269& They take a long time to get to its gasp-inducing ending. & Negative\\ 157144& Noteworthy only for the gimmick of being filmed as a single unbroken 87-minute &\\ 156869& Done a great disservice by a lack of critical distance and a sad trust in liberal arts college bumper sticker platitudes&\\ \hline 221765& A hero can stumble sometimes.
&\\ 222069& Spiritual rebirth to bruising defeat&\\ 218959& An examination of a society in transition &Neutral\\ 221444& A country still dealing with its fascist past& \\ 156757& Have to know about music to appreciate the film's easygoing blend of comedy and romance & \\ \hline 157663& A wildly funny prison caper. &\\ 157850& This is a movie that's got oodles of style and substance. & \\ 157879& Although it bangs a very cliched drum at times, this crowd-pleaser's fresh dialogue, energetic music, and good-natured spunk are often infectious. & Positive\\ 156756& You don't have to know about music to appreciate the film's easygoing blend of comedy and romance. &\\ 157382& Though of particular interest to students and enthusiast of international dance and world music, the film is designed to make viewers of all ages, cultural backgrounds and rhythmic ability want to get up and dance. &\\ \hline 162398& A comic gem with some serious sparkles. &\\ 156238& Delivers a performance of striking skill and depth &\\ 157290& What Jackson has accomplished here is amazing on a technical level. &Very Positive\\ 160925& A historical epic with the courage of its convictions about both scope and detail. &\\ 161048& This warm and gentle romantic comedy has enough interesting characters to fill several movies, and its ample charms should win over the most hard-hearted cynics.&\\ \hline \end{tabular} \caption{\label{score_tab_sup}Examples of sentiment classification on the test set of the Kaggle competition dataset.} \end{table} \subsubsection{Experimental Protocols} We follow the experimental protocols as described in~\cite{Socher_EMNLP2013}. To make use of the available labeled data, in our model each subphrase is treated as an independent sentence and we learn the representations for all the subphrases in the training set. After learning the vector representations for the training sentences and their subphrases, we feed them to a logistic regression to learn a predictor of the movie rating.
At test time, we freeze the vector representation for each word and learn the representations for the sentences using gradient descent. Once the vector representations for the test sentences are learned, we feed them through the logistic regression to predict the movie rating. In order to train the sentiment classification model, we use RMSPROP to optimize the classification model parameters, and we found the following hyperparameter values to work best for our case: $\text{learning rate} =0.00009, \text{batch size} = 200, \alpha = 0.9, \epsilon=1e-8$.\\ \subsubsection{Results} We report the error rates of the different methods in table~\ref{score_tab_4}. We can clearly see that the bag-of-words and bag-of-n-grams models (NB, SVM, BiNB) perform poorly, while the more advanced methods (such as the Recursive Neural Network~\cite{Socher_EMNLP2013}) perform much better on the task of fine-grained sentiment analysis. Our method outperforms all these methods by an absolute margin of 15.7\%, which is a significant improvement considering the rate of progress on this task. We have also uploaded our models to the online competition on the Rotten Tomatoes dataset \footnote{website: \url{www.kaggle.com/c/sentiment-analysis-on-movie-reviews}} and obtained an accuracy of 62.606\% on their test set of 66K phrases. \\ We also provide 5 examples for each sentiment in table~\ref{score_tab_sup}. We can clearly see that our proposed embeddings are able to capture the complete meaning of smaller as well as larger sentences. For example, our model classifies `Although it bangs a very cliched drum at times' as Negative and `Although it bangs a very cliched drum at times, this crowd-pleaser's fresh dialogue, energetic music, and good-natured spunk are often infectious.' as Positive, showing that it is able to understand the finer details of language. More results and some visualisations showing the part of the phrase to which the model attends while classifying are present in the supplementary material.
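The classification stage described above (frozen sentence embeddings fed to a logistic regression) can be sketched with multinomial logistic regression in NumPy. The embeddings here are synthetic stand-ins for the learned phrase representations, and the training loop is plain gradient descent rather than the RMSPROP setup used in the paper.

```python
import numpy as np

def train_softmax_classifier(X, y, n_classes, lr=0.5, epochs=300):
    """Multinomial logistic regression over frozen embeddings: the
    features X stay fixed, only W and b are learned."""
    n, d = X.shape
    W, b = np.zeros((d, n_classes)), np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                         # one-hot labels
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / n                              # cross-entropy gradient
        W -= lr * (X.T @ G)
        b -= lr * G.sum(axis=0)
    return W, b

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)

# Synthetic stand-ins for phrase embeddings with 5 sentiment classes.
rng = np.random.default_rng(3)
centers = rng.normal(size=(5, 16))
y = rng.integers(0, 5, size=200)
X = centers[y] + rng.normal(size=(200, 16)) * 0.1
W, b = train_softmax_classifier(X, y, n_classes=5)
acc = (predict(X, W, b) == y).mean()
```

Only `W` and `b` are updated, mirroring the protocol in which the sentence representations are learned once and then held fixed for the classifier.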
\section{Conclusion} In this paper we have proposed a sentence embedding method using a sequential encoder-decoder with a pairwise discriminator. We have experimented with this text embedding method for paraphrase generation and sentiment analysis. We also provided experimental analysis which shows that the pairwise discriminator outperforms the previous state-of-the-art methods for NLP tasks such as paraphrase generation and sentiment analysis. We also performed an ablation analysis for our method, and our full method outperforms all the ablations in terms of BLEU, METEOR and TER scores. We plan to generalise this to other text understanding tasks and also to extend the same idea to the vision domain. \bibliographystyle{acl}
\section{Introduction} A highlight in the theory of Drinfeld-Jimbo quantum groups is the construction of canonical bases by Lusztig and Kashiwara; cf. \cite{Lu93}. Let $(\mbf U, {\mbf U}^\imath)$ be a quantum symmetric pair of finite type \cite{Le99} (also cf. \cite{BK15}). A general theory of ($\imath$-)canonical bases arising from quantum symmetric pairs of finite type was developed recently by Bao and the first author in \cite{BW16}, where $\imath$-canonical bases on based $\mbf U$-modules and on the modified form of ${\mbf U}^\imath$ were constructed. One notable feature of the definition of quantum symmetric pairs is its dependence on parameters; see \cite{Le99, BK15, BW16} where various conditions on parameters are imposed at different levels of generalities for various constructions. In \cite{BWW18} it is shown that the unequal parameters of type B Hecke algebras correspond under the $\imath$-Schur duality to certain specializations of parameters in the type AIII/AIV quantum symmetric pairs. \vspace{3mm} For the remainder of the paper we let $\mbf U$ be the quantum group of $\mathfrak{sl}_2$ over $\mathbb Q(q)$ with generators $E, F, K^{\pm 1}$. Let ${\mbf U}^\imath$ be the $\mathbb Q(q)$-subalgebra of $\mbf U$ (with a parameter $\kappa$), which is a polynomial algebra in $t$, where \[ t = F+ q^{-1}EK^{-1} +\kappa K^{-1}. \] Then ${\mbf U}^\imath$ is a coideal subalgebra, and $(\mbf U, {\mbf U}^\imath)$ is an example of quantum symmetric pairs; cf. \cite{Ko93}. For the consideration of $\imath$-canonical bases (which will be referred to as $\imath$-divided powers from now on) the parameter $\kappa$ is taken to be a bar invariant element in $\mathcal A =\mathbb Z[q,q^{-1}]$; cf. \cite{BW16}. In contrast to the usual quantum $\mathfrak{sl}_2$ case where the divided powers are simple to define, finding explicit formulae for the $\imath$-divided powers is a highly nontrivial problem. 
Closed formulae for the $\imath$-divided powers in ${\mbf U}^\imath$ were known earlier only in two distinguished cases when $\kappa=0$ or $1$ (cf. \cite{BeW18}); this verified a conjecture in \cite{BW13} when $\kappa=1$. The goal of this paper is to establish closed formulae for the $\imath$-divided powers in ${\mbf U}^\imath$ as polynomials in $t$ and also viewed as elements in $\mbf U$ (via the embedding of ${\mbf U}^\imath$ to $\mbf U$), when $\kappa$ is an arbitrary $q$-integer which is clearly bar invariant. For arbitrary $\overline{\kappa} =\kappa \in \mathcal A$, we are able to present a closed formula only for the {\em second} $\imath$-divided power (note the first $\imath$-divided power is simply $t$ itself). \vspace{3mm} In contrast to the quantum group setting, there are {\em two} $\mathcal A$-forms, ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$ and ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$, for ${\mbf U}^\imath$ corresponding to the parities $\{\rm ev, \rm odd\}$ of highest weights of finite-dimensional simple $\mbf U$-modules \cite{BW13, BW16}. As a very special case of a main theorem in \cite{BW16}, ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$ (and respectively, ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$) admits $\imath$-canonical bases ($= \imath$-divided powers) for an arbitrary parameter $\overline{\kappa} =\kappa \in \mathcal A$, which are invariant with respect to a bar map (which fixes $t$ and hence is not a restriction of Lusztig's bar map on $\mbf U$ to ${\mbf U}^\imath$) and satisfy an asymptotic compatibility with the $\imath$-canonical bases on finite-dimensional simple $\mbf U$-modules. Computations by hand and by Mathematica have led us to make an ansatz for the formulae for the $\imath$-divided powers as polynomials in $t$ when $\kappa$ is an arbitrary $q$-integer.
In further discussions we need to separate the cases when $\kappa$ is an even or odd $q$-integer, and let us restrict ourselves to the case for the $q$-integer $\kappa$ being even in the remainder of the Introduction. Our ansatz is that the $\imath$-divided powers $\dvev{n}$ in ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$ and $\dv{n}$ in ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$ are given as follows: for $a\in \mathbb N$, \begin{align} \label{def:idp:evevKa} \dvev{n} = \begin{cases} \frac{t}{[2a]!} (t - [-2a+2])(t -[-2a+4]) \cdots (t -[2a-4]) (t - [2a-2]), & \text{if } n=2a, \\ \\ \frac{1}{[2a+1]!} (t - [-2a])(t -[-2a+2]) \cdots (t -[2a-2]) (t - [2a]), &\text{if } n=2a+1. \end{cases} \end{align} \begin{align} \label{def:dv:evoddK} {\small \dv{n} = \begin{cases} \frac{1}{[2a]!} (t - [-2a+1] ) (t - [-2a+3]) \cdots (t - [2a-3]) (t -[2a-1]), & \text{if } n=2a, \\ \\ \frac{t}{[2a+1]!} (t - [-2a+1] ) (t - [-2a+3]) \cdots (t - [2a-3]) (t -[2a-1]), &\text{if } n=2a+1. \end{cases} } \end{align} That is, the $\imath$-divided power formulas as polynomials in $t$ for $\kappa$ being even $q$-integers are the same as for $\kappa=0$ \cite{BW13, BeW18}. To verify the above formulae \eqref{def:idp:evevKa}-\eqref{def:dv:evoddK} are indeed the $\imath$-canonical bases as defined and established in \cite{BW16}, we need to verify 2 properties: (1) these polynomials in $t$ lie in the corresponding $\mathcal A$-forms of ${\mbf U}^\imath$; (2) they satisfy the asymptotic compatibility with $\imath$-canonical bases on finite-dimensional simple $\mbf U$-modules. To that end, we find the expansions for the polynomials in $t$ defined in \eqref{def:idp:evevKa}-\eqref{def:dv:evoddK} in terms of $E, F, K^{\pm 1}$ of $\mbf U$ (via the embedding ${\mbf U}^\imath \to \mbf U$); they are given by explicit triple sum formulae. Once these explicit formulae are found, they are verified by lengthy inductions. 
In the formulation of the expansion formulae, a sequence of degree $n$ polynomials $p^{(n)} (x) =p_n(x)/ [n]!$, defined recursively, arises naturally; see \S \ref{subsec:pn}. The presence of $p^{(n)} (x)$ makes the formulation of the expansion formula and its proof much more difficult than in the distinguished cases when $\kappa=0$ or $1$ treated in \cite{BeW18}. The polynomials $p^{(n)} (x)$ satisfy a crucial integrality property, in the sense that $p^{(n)} (\kappa)\in \mathcal A$, which leads to the integral property (1) above. To prove this integrality we express these polynomials in terms of another sequence of polynomials (which are a reincarnation of the $\imath$-divided powers viewed as polynomials) with some $q^2$-binomial coefficients. Since $p_0(0)=1$ and $p_n(0)=0$ for $n\ge 1$, our triple sum expansion formula reduces at $\kappa=0$ to a double sum formula in \cite{BeW18}. Applying the expansion formulae for the $\imath$-divided powers to the highest weight vectors of the finite-dimensional simple $\mbf U$-modules, we establish the asymptotic compatibility property (2) above explicitly. Note that this form of compatibility cannot be made as strong as the compatibility between $\imath$-divided powers on ${\mbf U}^\imath$ and simple $\mbf U$-modules when $\kappa=0$ or $1$ (as expected in \cite{BW16}). With Huanchen Bao's help, we compute the closed formulae for the second $\imath$-divided power for an arbitrary parameter $\overline{\kappa} =\kappa \in \mathcal A$; see Appendix~\ref{sec:genK}. It is an interesting open problem to find closed formulae for higher $\imath$-divided powers with such an arbitrary parameter $\kappa$.
%
\vspace{3mm}

The paper is organized as follows. There are four sections, depending on the parities of the weights and of the parameter $\kappa$. In Section~\ref{sec:evevK}, we recall the basics of the quantum symmetric pair $(\mbf U, {\mbf U}^\imath)$. Throughout the section we take $\kappa$ to be an arbitrary even $q$-integer.
We establish a key integral property of the polynomials $p^{(n)} (x)$, and use it to formulate and establish the expansion formula for the $\imath$-divided powers $\dvev{n}$ in ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$. We prove that $\{\dvev{n} | n\ge 0 \}$ form an $\imath$-canonical basis for ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$ by showing $\dvev{n} v^+_{2\la} $ is an $\imath$-canonical basis element on the finite-dimensional simple $\mbf U$-modules $L(2\lambda)$, for integers $\lambda \gg n$. In Section~\ref{sec:oddevK}, letting $\kappa$ be an arbitrary even $q$-integer, we establish the expansion formula for the $\imath$-divided powers $\dv{n}$ in ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$. We prove that $\{\dv{n} | n \ge 0\}$ form an $\imath$-canonical basis for ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$ by showing $\dv{n}v^+_{2\la+1} $ is an $\imath$-canonical basis element on the finite-dimensional simple $\mbf U$-modules $L(2\lambda+1)$, for integers $\lambda \gg n$. In Section~\ref{sec:evoddK}, we take $\kappa$ to be an arbitrary odd $q$-integer. We establish a key integral property of another sequence of polynomials $\mathfrak p^{(n)} (x)$, and use it to establish the expansion formula for the $\imath$-divided powers $\dvp{n}$ in ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$. We show that $\{\dvp{n} | n \ge 0\}$ form an $\imath$-canonical basis for ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$ and $\dvp{n}v^+_{2\la} $ is an $\imath$-canonical basis element on the finite-dimensional simple $\mbf U$-modules $L(2\lambda)$, for integers $\lambda \gg n$. In Section~\ref{sec:oddoddK}, we take $\kappa$ to be an arbitrary odd $q$-integer. We establish the expansion formula for the $\imath$-divided powers $\dvd{n}$ in ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$. 
We show that $\{\dvd{n} | n \ge 0\}$ form an $\imath$-canonical basis for ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$ and $\dvd{n}v^+_{2\la+1} $ is an $\imath$-canonical basis element on the finite-dimensional simple $\mbf U$-modules $L(2\lambda+1)$, for integers $\lambda \gg n$. In Appendix~\ref{sec:genK}, for arbitrary $\overline{\kappa} =\kappa \in \mathcal A$, we present closed formulae for the second $\imath$-divided powers in ${}_\mathcal A {\mbf U}^\imath_{\rm ev}$ and ${}_\mathcal A {\mbf U}^\imath_{\rm odd}$.

\vspace{.3cm}

{\bf Acknowledgement.} WW thanks Huanchen Bao for his insightful collaboration. The formula in the Appendix for the second $\imath$-divided power with arbitrary parameter $\kappa$ (which was obtained with help from Huanchen) was crucial to this project; to a large extent this paper grew out of exploring for which values of the parameter $\kappa$ reasonable formulae for higher divided powers can be obtained. The research of WW and the undergraduate research of CB are partially supported by a grant from the National Science Foundation. Mathematica was used intensively in this work.

\section{The $\imath$-divided powers $\dvev{n}$ for even weights and even $\kappa$} \label{sec:evevK}

\subsection{The quantum $\mathfrak{sl}_2$}

Recall that the quantum group $\mbf U=\mbf U_q(\mathfrak{sl}_2)$ is the $\mathbb Q(q)$-algebra generated by $F, E, K, K^{-1}$, subject to the relations:
\[ KK^{-1}=K^{-1}K=1, \; EF -FE =\frac{K-K^{-1}}{q-q^{-1}}, \; K E =q^2 E K, \; K F=q^{-2} FK. \]
There is an anti-involution $\varsigma$ of the $\mathbb Q$-algebra $\mbf U$:
\begin{align} \label{eq:vs} \varsigma: \mbf U \longrightarrow \mbf U, \qquad E\mapsto E, \quad F\mapsto F, \quad K\mapsto K, \quad q\mapsto q^{-1}. \end{align}
Let $\mathcal A =\mathbb Z[q,q^{-1}]$ and $\kappa \in \mathcal A$. Set
\begin{equation} \label{eq:Y} \check{E} :=q^{-1}EK^{-1}, \qquad h :=\frac{K^{-2}-1}{q^2-1}.
\end{equation}
Define, for $a\in \mathbb Z, n\ge 0$,
\begin{equation} \label{kbinom} \qbinom{h;a}{n} =\prod_{i=1}^n \frac{q^{4a+4i-4} K^{-2} -1}{q^{4i} -1}, \qquad [h;a]= \qbinom{h;a}{1}. \end{equation}
Then we have, for $a\in \mathbb Z, n\in \mathbb N$,
\begin{align} \label{FEk} \begin{split} F \check{E} - & q^{-2} \check{E} F =h, \\ \qbinom{h;a}{n} F = F \qbinom{h;a+1}{n}, & \qquad \qbinom{h;a}{n} \check{E} =\check{E} \qbinom{h;a-1}{n}. \end{split} \end{align}
It follows by definition that
\begin{equation} \label{kbinom2} \qbinom{h;a}{n} =\qbinom{h;1+a}{n} -q^{4a}K^{-2} \qbinom{h;1+a}{n-1}. \end{equation}
Let
\[ \check{E}^{(n)} =\check{E}^n/[n]!, \quad F^{(n)} =F^n/[n]!,\quad \text{ for } n\ge 1. \]
Then $\check{E}^{(n)}= q^{-n^2} E^{(n)} K^{-n}$. It is understood that $\check{E}^{(n)}=0$ for $n<0$ and $\check{E}^{(0)}=1$. The following formula holds for $n\ge 0$:
\begin{align} \label{FYn} F \check{E}^{(n)} &=q^{-2n} \check{E}^{(n)} F + \check{E}^{(n-1)} \frac{q^{3-3n} K^{-2} - q^{1-n} }{q^2-1}. \end{align}

\subsection{The coideal subalgebra ${\mbf U}^\imath$}

Recall $\check{E}$ from \eqref{eq:Y}. Let
\begin{align} \label{eq:t} t &=F + \check{E} +\kappa K^{-1}. \end{align}
Denote by ${\mbf U}^\imath$ the $\mathbb Q(q)$-subalgebra of $\mbf U$ generated by $t$. Then ${\mbf U}^\imath$ is a coideal subalgebra of $\mbf U$ and $(\mbf U, {\mbf U}^\imath)$ forms a quantum symmetric pair \cite{Ko93, Le99}; also cf. \cite{BW16}. Denote by $\dot{\mbf U}$ the modified quantum group of $\mathfrak{sl}_2$ \cite{Lu93}, which is the $\mathbb Q(q)$-algebra generated by $E\mathbf 1_\lambda, F\mathbf 1_\lambda$ and the idempotents $\mathbf 1_\lambda$, for $\lambda\in \mathbb Z$.
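Since the identity \eqref{kbinom2} involves only $K^{-2}$ and scalars, it can be checked symbolically by treating $K^{-2}$ as a single commuting variable. A minimal sketch in Python/sympy of this check, assuming the definition \eqref{kbinom} (the names \texttt{hbinom} and \texttt{Kinv2} are ours):

```python
from sympy import symbols, cancel

q, Kinv2 = symbols('q Kinv2')  # Kinv2 plays the role of K^{-2}

def hbinom(a, n):
    # [h;a choose n] = prod_{i=1}^n (q^{4a+4i-4} K^{-2} - 1)/(q^{4i} - 1)
    r = 1
    for i in range(1, n + 1):
        r *= (q**(4*a + 4*i - 4) * Kinv2 - 1) / (q**(4*i) - 1)
    return r

# check [h;a ch n] = [h;1+a ch n] - q^{4a} K^{-2} [h;1+a ch n-1]
for a in range(-2, 3):
    for n in range(1, 4):
        diff = (hbinom(a, n) - hbinom(a + 1, n)
                + q**(4*a) * Kinv2 * hbinom(a + 1, n - 1))
        assert cancel(diff) == 0
```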
Let ${}_\A{\dot{\mbf U}}$ (respectively, ${}_\A{\dot{\mbf U}}_{\rm{ev}}$, ${}_\A{\dot{\mbf U}}_{\text{odd}}$) be the $\mathcal A$-subalgebra of $\dot{\mbf U}$ generated by $E^{(n)} \mathbf 1_\lambda, F^{(n)} \mathbf 1_\lambda, \mathbf 1_\lambda$, for all $n\ge 0$ and for $\lambda\in \mathbb Z$ (respectively, for $\lambda$ even, for $\lambda$ odd). Note ${}_\A{\dot{\mbf U}} ={}_\A{\dot{\mbf U}}_{\rm{ev}} \oplus {}_\A{\dot{\mbf U}}_{\text{odd}}$. There is a natural left action of $\mbf U$ (and hence ${\mbf U}^\imath$) on $\dot{\mbf U}$ such that $K \mathbf 1_\lambda =q^\lambda \mathbf 1_\lambda$. For $\mu\in \mathbb N$, denote by $L(\mu)$ the finite-dimensional simple $\mbf U$-module of highest weight $\mu$, and denote by $L(\mu)_\mathcal A$ its Lusztig $\mathcal A$-form. Following \cite{BW16}, we introduce the following $\mathcal A$-subalgebras of ${\mbf U}^\imath$, for ${\rm p}\in \{ {\rm ev}, {\rm odd} \}$: \begin{align*} {}_\mathcal A {\mbf U}^\imath_{\rm p} &=\{x\in {\mbf U}^\imath~|~x u \in {}_\A{\dot{\mbf U}}_{\rm p}, \forall u\in {}_\A{\dot{\mbf U}}_{\rm p} \} \\ &=\big \{x\in {\mbf U}^\imath~|~x v \in L(\mu)_\mathcal A, \forall v\in L(\mu)_\mathcal A, \forall {\rm p}\equiv\mu\pmod{2} \big \}. \end{align*} By \cite{BW16}, ${}_\mathcal A {\mbf U}^\imath_{\rm p}$ is a free $\mathcal A$-submodule of ${\mbf U}^\imath$ such that ${\mbf U}^\imath =\mathbb Q(q) \otimes_\mathcal A {}_\mathcal A {\mbf U}^\imath_{\rm p}$, for ${\rm p}\in \{ {\rm ev}, {\rm odd} \}$. \subsection{Definition of $\dvev{n}$ for even $\kappa$} In this section~\ref{sec:evevK} we shall always take $\kappa$ to be an even $q$-integer, i.e., \begin{align} \label{eq:evenK} \kappa & =[2\ell], \quad \text{ for } \ell \in \mathbb Z. \end{align} We shall take the following as a definition of the $\imath$-divided powers $\dvev{n}$. \begin{definition} Set $\dvev{1} =t = F +\check{E} +\kappa K^{-1}$. 
The divided powers $\dvev{n}$, for $n\ge 1$, are defined by the recursive relations:
\begin{align} \label{eq:tt} \begin{split} t \cdot \dvev{2a-1} &=[2a] \dvev{2a}, \\ t \cdot \dvev{2a} &= [2a+1] \dvev{2a+1} + [2a] \dvev{2a-1}, \quad \text{ for } a\ge 1. \end{split} \end{align}
Equivalently, $\dvev{n}$ is defined by the closed formula \eqref{def:idp:evevKa}. \end{definition}
The bar involution $\psi_\imath$ on ${\mbf U}^\imath$, which fixes $t$ and sends $q\mapsto q^{-1}$, clearly fixes the above $\imath$-divided powers. The following is a variant of \cite[Lemma~2.2]{BeW18} (where $\kappa=0$) with the same proof.
\begin{lem} \label{lem:anti} The anti-involution $\varsigma$ on $\mbf U$ sends $ F\mapsto F, \check{E} \mapsto \check{E}, K^{-1} \mapsto K^{-1}, q \mapsto q^{-1}$; in addition, $\varsigma$ sends
\[ h\mapsto -q^{2} h, \quad \dvev{n} \mapsto \dvev{n}, \quad \qbinom{h;a}{n}\mapsto (-1)^n q^{2n(n+1)} \qbinom{h;1-a-n}{n}, \; \forall a\in\mathbb Z, n\in\mathbb N. \]
\end{lem}

\subsection{Polynomials $p_n(x)$ and $p^{(n)}(x)$} \label{subsec:pn}

\begin{definition} For $n\in \mathbb N$, the monic polynomial $p_n(x)$ in $x$ of degree $n$ is defined as
\begin{align} \label{def:pn} p_{n+1} =x p_n + q^{1-2n} [n] [n-1] p_{n-1}, \qquad p_0 =1. \end{align}
\end{definition}
Also set $p_n =0$ for all $n<0.$ Note that $p_n$ is an odd polynomial for $n$ odd while it is an even polynomial for $n$ even. These polynomials $p_n$ will appear in the expansion formula for the $\imath$-divided powers in $\mbf U$.
\begin{example} Here are $p_n(x)$, for the first few $n\ge 0$:
\begin{align*} p_0& =1, \qquad p_1 =x, \qquad p_2=x^2, \qquad p_3=x^3 +(q^{-4} +q^{-2})x, \\ p_4&=x^4 + (q^{-4} +q^{-2})(q^{-4} +q^{-2}+2)x^2, \\ p_5&=x^5 + (q^{-4} +q^{-2}) (q^{-8}+q^{-6}+3q^{-4} +2q^{-2}+3)x^3 \\ & \qquad \qquad + (q^{-4} +q^{-2})^2 (q^{-8}+q^{-6}+2q^{-4} +q^{-2}+1)x.
\end{align*} \end{example} Introduce the monic polynomial of degree $n$: \begin{align*} g_n(x) = \begin{cases} \prod_{i=0}^{m-1} (x^2-[2i]^2), & \text{ if } n=2m \text{ is even}; \\ x \prod_{i=1}^{m} (x^2-[2i]^2), & \text{ if } n=2m+1 \text{ is odd}. \end{cases} \end{align*} Define \begin{equation} \label{eq:divpg} p^{(n)}(x) =p_n(x)/[n]!, \qquad g^{(n)}(x) =g_n(x)/[n]!. \end{equation} The recursion for $p_n$ can be rewritten as \begin{equation} \label{eq:f(n)} [n+1]p^{(n+1)} =x p^{(n)} + q^{1-2n} [n-1] p^{(n-1)}. \end{equation} Our next goal is to show that $p^{(n)}([2\ell]) \in \mathcal A$ for all $\ell \in \mathbb Z$; cf. Proposition~\ref{prop:fg:ev}. This is carried out by relating $p^{(n)}$ to $g^{(n)}$ for various $n$, and showing that $g^{(n)}([2\ell]) \in \mathcal A$ for all $\ell \in \mathbb Z$. \begin{lem} \label{lem:ev:g-integral} For $\kappa =[2\ell]$ with $\ell \in \mathbb Z$ (see \eqref{eq:evenK}), we have $g^{(n)}(\kappa) \in \mathcal A$, for all $n\ge 0$. \end{lem} \begin{proof} We separate the cases for $n=2m+1$ and $n=2m$. Noting that \begin{equation} \label{eq:A} \kappa -[2i] =[2\ell]-[2i] = [\ell-i] (q^{\ell+i} +q^{-\ell-i}), \end{equation} we have \begin{align*} g^{(2m+1)}(\kappa) &= \frac1{[2m+1]!} \prod_{i=-m}^{m} (\kappa-[2i]) = \qbinom{\ell+m}{2m+1} \prod_{i=-m}^{m}(q^{\ell+i} +q^{-\ell-i} ) \in \mathcal A. \end{align*} Similarly, we have \begin{align*} g^{(2m)}(\kappa) &= \frac1{[2m]!} \kappa \prod_{i=1-m}^{m-1} (\kappa-[2i]) \\ &= \frac1{[2m]!} \prod_{i=1-m}^{m} (\kappa-[2i]) + \frac1{[2m-1]!} \prod_{i=1-m}^{m-1} (\kappa-[2i]) \\ &=\qbinom{\ell+m-1}{2m} \prod_{i=1-m}^{m}(q^{\ell+i} +q^{-\ell-i} ) + g^{(2m-1)}(\kappa) \in \mathcal A. \end{align*} The lemma is proved. \end{proof} \begin{prop} \label{prop:fg:ev} For $m \in \mathbb Z_{\ge 1}$, we have \begin{align*} p^{(2m)} &= \sum_{a=0}^{m-1} q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} g^{(2m-2a)}, \\ p^{(2m-1)} &= \sum_{a=0}^{m-1} q^{(3-2m)a} \qbinom{m-1}{a}_{q^2} g^{(2m-2a-1)}. 
\end{align*}
In particular, we have $p^{(n)} (\kappa) \in \mathcal A$, for all $\kappa =[2\ell]$ with $\ell \in \mathbb Z$ and all $n\ge 0$. \end{prop}
\begin{proof} It follows from these formulae for $p^{(n)}$ and Lemma~\ref{lem:ev:g-integral} that $p^{(n)}([2\ell]) \in \mathcal A$. It remains to prove these formulae. Recall
\begin{align} \label{recursive:f} x \cdot p^{(n)} = [n+1] p^{(n+1)} - q^{1-2n} [n-1] p^{(n-1)}, \quad \text{ for } n \ge 1. \end{align}
Also recall
\begin{align} \label{recursive:g} \begin{split} x \cdot g^{(2a-1)} &=[2a] g^{(2a)}, \\ x \cdot g^{(2a)} &= [2a+1] g^{(2a+1)} + [2a] g^{(2a-1)}, \quad \text{ for } a\ge 1. \end{split} \end{align}
We prove the formulae by induction on $m$, with the base cases for $p^{(1)}$ and $p^{(2)}$ being clear. We separate into two cases.

(1) Let us prove the formula for $p^{(2m+1)}$, assuming the formulae for $p^{(2m-1)}$ and $p^{(2m)}$. By \eqref{recursive:f}--\eqref{recursive:g} we have
\begin{align*} [2m+1]p^{(2m+1)} &= x p^{(2m)} +q^{1-4m} [2m-1] p^{(2m-1)} \\ &= \sum_{a=0}^{m-1}q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} x \cdot g^{(2m-2a)} \\ &\qquad +\sum_{a=0}^{m-1} q^{(3-2m)a+1-4m} [2m-1] \qbinom{m-1}{a}_{q^2} g^{(2m-2a-1)} \\ &= \sum_{a=0}^{m-1}q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} \left( [2m-2a+1]g^{(2m-2a+1)} +[2m-2a]g^{(2m-2a-1)} \right) \\ &\qquad +\sum_{a=0}^{m-1} q^{(3-2m)a+1-4m} [2m-1] \qbinom{m-1}{a}_{q^2} g^{(2m-2a-1)}.
\end{align*}
By combining the like terms for $g^{(2m-2a-1)}$ above and using the identity
\[ q^{(1-2m)a} [2m-2a] +q^{(3-2m)a+1-4m} [2m-1] =q^{(1-2m)(a+1)} [4m-2a-1], \]
we obtain
\begin{align*} [2m+1]p^{(2m+1)} &= \sum_{a=0}^{m-1}q^{(1-2m)a} [2m-2a+1] \qbinom{m-1}{a}_{q^2} g^{(2m-2a+1)} \\ &\qquad +\sum_{a=0}^{m-1} q^{(1-2m)(a+1)} [4m-2a-1] \qbinom{m-1}{a}_{q^2} g^{(2m-2a-1)} \\
%
&= [2m+1]\sum_{a=0}^{m}q^{(1-2m)a} \qbinom{m}{a}_{q^2} g^{(2m-2a+1)}, \end{align*}
where the last equation results from shifting the second summation index from $a\to a-1$ and using the identity
\[ [2m-2a+1] \qbinom{m-1}{a}_{q^2} + [4m-2a+1] \qbinom{m-1}{a-1}_{q^2} =[2m+1] \qbinom{m}{a}_{q^2}. \]

(2) We shall prove the formula for $p^{(2m+2)}$, assuming the formulae for $p^{(2m)}$ and $p^{(2m+1)}$. By \eqref{recursive:f}--\eqref{recursive:g} we have
\begin{align*} &[2m+2]p^{(2m+2)} \\ &= x p^{(2m+1)} +q^{-1-4m} [2m] p^{(2m)} \\ &= \sum_{a=0}^{m}q^{(1-2m)a} \qbinom{m}{a}_{q^2} x \cdot g^{(2m-2a+1)} +\sum_{a=0}^{m-1} q^{(1-2m)a-1-4m} [2m] \qbinom{m-1}{a}_{q^2} g^{(2m-2a)} \\
%
&= \sum_{a=0}^{m} q^{(1-2m)a} [2m-2a+2] \qbinom{m}{a}_{q^2} g^{(2m-2a+2)} +\sum_{a=0}^{m-1} q^{(1-2m)a-1-4m} [2m] \qbinom{m-1}{a}_{q^2} g^{(2m-2a)} \\ &= [2m+2] \sum_{a=0}^{m} q^{(-1-2m)a} \qbinom{m}{a}_{q^2} g^{(2m-2a+2)}, \end{align*}
where the last equation results from shifting the second summation index from $a\to a-1$ and using the following identity
\[ [2m-2a+2] \qbinom{m}{a}_{q^2} + q^{-2-2m} [2m] \qbinom{m-1}{a-1}_{q^2} = q^{-2a} [2m+2] \qbinom{m}{a}_{q^2}. \]
The proposition is proved. \end{proof}

\begin{rem} \label{rem:positive-p} Note that $p_n(x) \in \mathbb N [q^{-1}] [x]$ for all $n$. Therefore, for every non-negative even $q$-integer $\kappa$, we have $p_n (\kappa) \in \mathbb N [q,q^{-1}]$. \end{rem}

\subsection{Formulae for $\dvev{n}$ with even $\kappa$}

Recall $p^{(n)}(x)$ from \eqref{def:pn}--\eqref{eq:divpg}.
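The expansions of Proposition~\ref{prop:fg:ev} lend themselves to a symbolic sanity check for small $m$, directly from the recursions \eqref{def:pn} and the definition of $g_n$. A minimal sketch in Python/sympy (the helper names are ours; the $q^2$-binomial is taken in the balanced convention, consistent with $[n]$):

```python
from sympy import symbols, cancel

q, x = symbols('q x')

def qint(n, p=1):
    # balanced quantum integer [n] in the variable q^p
    Q = q**p
    return (Q**n - Q**(-n)) / (Q - Q**(-1))

def qfact(n):
    # quantum factorial [n]!
    r = 1
    for i in range(1, n + 1):
        r *= qint(i)
    return r

def qbinom2(n, k):
    # balanced q^2-binomial coefficient [n choose k]_{q^2}
    r = 1
    for i in range(1, k + 1):
        r *= qint(n - i + 1, p=2) / qint(i, p=2)
    return r

def p_poly(n):
    # p_n from p_{n+1} = x p_n + q^{1-2n} [n][n-1] p_{n-1}, p_0 = 1, p_{-1} = 0
    pm1, p0 = 0, 1
    for k in range(n):
        pm1, p0 = p0, x * p0 + q**(1 - 2*k) * qint(k) * qint(k - 1) * pm1
    return p0

def g_poly(n):
    # the monic polynomial g_n
    if n % 2 == 0:
        m, r = n // 2, 1
        for i in range(m):
            r *= x**2 - qint(2*i)**2
        return r
    m, r = (n - 1) // 2, x
    for i in range(1, m + 1):
        r *= x**2 - qint(2*i)**2
    return r

# verify the two expansions of the proposition for m = 1, 2, 3
for m in range(1, 4):
    ev = sum(q**((1 - 2*m)*a) * qbinom2(m - 1, a)
             * g_poly(2*m - 2*a) / qfact(2*m - 2*a) for a in range(m))
    od = sum(q**((3 - 2*m)*a) * qbinom2(m - 1, a)
             * g_poly(2*m - 2*a - 1) / qfact(2*m - 2*a - 1) for a in range(m))
    assert cancel(p_poly(2*m) / qfact(2*m) - ev) == 0
    assert cancel(p_poly(2*m - 1) / qfact(2*m - 1) - od) == 0
```

For instance, at $m=2$ the check confirms $p^{(3)} = g^{(3)} + q^{-1} g^{(1)}$ and $p^{(4)} = g^{(4)} + q^{-3} g^{(2)}$.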
\begin{thm} \label{thm:dvev:evKappa} Assume $\kappa$ is an even $q$-integer as in \eqref{eq:evenK}. Then we have, for $m\ge 1$,
\begin{align} \dvev{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a)} p^{(2m-b-2c)}(\kappa) \label{t2m:evev2} \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}, \notag \\ \dvev{2m-1} &= \sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a)} p^{(2m-b-2c-1)}(\kappa) \label{t2m-1:evev2} \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c+1} F^{(b-a)}. \notag \end{align}
\end{thm}
The proof of Theorem~\ref{thm:dvev:evKappa} will be given in \S\ref{subsec:proof:dvev:evKappa} below. Let us list some equivalent formulae here. By applying the anti-involution $\varsigma$ we convert the formulae in Theorem~\ref{thm:dvev:evKappa} into the following.
\begin{thm} \label{thm:dvev:evKappa2} Assume $\kappa$ is an even $q$-integer as in \eqref{eq:evenK}. Then we have, for $m\ge 1$,
\begin{align} \dvev{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} (-1)^c q^{3c+b(2m-b-2c) +a(b-a)} p^{(2m-b-2c)}(\kappa) \label{t2m:evevFhE} \\ &\qquad\qquad\qquad\quad \cdot F^{(b-a)} K^{b-2m+2c} \qbinom{h;m-c}{c} \check{E}^{(a)}, \notag \\
%
\dvev{2m-1} &= \sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} (-1)^c q^{c +b(2m-b-2c-1) +a(b-a)} p^{(2m-b-2c-1)}(\kappa) \label{t2m-1:evevFhE} \\ &\qquad\qquad\qquad\quad \cdot F^{(b-a)} K^{b-2m+2c+1} \qbinom{h;m-c}{c} \check{E}^{(a)}. \notag \end{align}
\end{thm}
\begin{proof} By Lemma~\ref{lem:anti}, the anti-involution $\varsigma$ sends $\qbinom{h;1-m}{c}\mapsto (-1)^c q^{2c(c+1)} \qbinom{h;m-c}{c}$, $q\mapsto q^{-1}$, while fixing $\kappa, K, \check{E}^{(a)}, F^{(a)}$ and $\dvev{n}$. The formulae \eqref{t2m:evevFhE}--\eqref{t2m-1:evevFhE} now follow by applying $\varsigma$ to the formulae \eqref{t2m:evev2}--\eqref{t2m-1:evev2} in Theorem~\ref{thm:dvev:evKappa}.
\end{proof} For $c\in \mathbb N, m\in\mathbb Z$, we define \begin{align} \label{cbinom} \cbinom{m}{c} &= \prod_{i=1}^c \frac{q^{4(m+i-1)} -1}{q^{-4i}-1} \in \mathcal A, \qquad \cbinom{m}{0}=1. \end{align} Note that $\cbinom{m}{c}$ are $q^2$-binomial coefficients up to some $q$-powers. Let $\lambda, m \in \mathbb Z$ with $m\ge 1$. We note that \begin{align} \label{eq:qbinom-hc} \qbinom{h;m-c}{c} \mathbf 1_{2\lambda} &= \prod_{i=1}^c \frac{q^{4(m-\lambda-c+i-1)} -1}{q^{4i} -1} \mathbf 1_{2\lambda} = (-1)^c q^{-2c(c+1)} \cbinom{m-\lambda-c}{c} \mathbf 1_{2\lambda}. \end{align} The following corollary is immediate from \eqref{eq:qbinom-hc}, Proposition~\ref{prop:fg:ev}, and Theorem~\ref{thm:dvev:evKappa}. \begin{cor} We have $\dvev{n} \in {}_\mathcal A {\mbf U}^\imath_{\rm{ev}}$, for all $n$. \end{cor} \begin{rem} Note $p_n(0)=0$ for $n>0$, and $p_0(0)=1$. The formulae in Theorems~\ref{thm:dvev:evKappa} and \ref{thm:dvev:evKappa2} in the special case for $\kappa=0$ recover the formulae in \cite[Theorem~2.5, Proposition~2.7]{BeW18}. \end{rem} For $n\in \mathbb N$, we denote \begin{align} \label{eq:divB} b^{(n)} = \sum_{a=0}^n q^{-a(n-a)} \check{E}^{(a)}F^{(n-a)}. \end{align} \begin{example} The formulae for $\dvev{n}$ in Theorem~\ref{thm:dvev:evKappa}, for $1\le n \le 3$, read as follows: \begin{align*} \dvev{1} &= F +\check{E} + \kappa K^{-1}, \\ \dvev{2} &= \check{E}^{(2)} +q^{-1} \check{E} F + F^{(2)} + q [h;0] + \kappa (q^{-1}K^{-1}F + q^{-1} \check{E} K^{-1}) + \kappa^2 \frac{K^{-2}}{[2]}, \\ \dvev{3} & = b^{(3)} + q^3[h;-1]F+ q^3 \check{E} [h;-1] \\ &\qquad + \big(q^{-2} \check{E}^{(2)}K^{-1} +q^{-3} \check{E} K^{-1}F + q^{-2} K^{-1}F^{(2)} + q^3 [h;-1] K^{-1} \big) \kappa \\ &\qquad + \frac{q^{-2}}{[2]} (\check{E} K^{-2} +K^{-2}F) \kappa^2 +\frac{\kappa^3 + (q^{-4} +q^{-2})\kappa}{[3]!} K^{-3}. 
\end{align*}
\end{example}

\subsection{The $\imath$-canonical basis for $\dot{\mbf U}^\imath_{\rm{ev}}$ for even $\kappa$} \label{sec:iCB:ev}

Denote by $L(\mu)$ the simple $\mbf U$-module of highest weight $\mu \in \mathbb N$, with highest weight vector $v^+_\mu$. Then $L(\mu)$ admits a canonical basis $\{ F^{(a)} v^+_\mu\mid 0\le a \le \mu\}$. Following \cite{BW13,BW16}, there exists a new bar involution $\psi_\imath$ on $L(\mu)$, which, in our current rank one setting, can be defined simply by requiring $\dvev{n} v^+_\mu$ to be $\psi_\imath$-invariant for all $n$. As a very special case of the general results in \cite[Corollary~F]{BW16} (also cf. \cite{BW13}), we know that the $\imath$-canonical basis $\{b^{\mu}_{a} \}_{0\le a \le \mu}$ of $L(\mu)$ exists and is characterized by the following two properties:

(iCB1) $b^{\mu}_{a}$ is $\psi_\imath$-invariant; \qquad (iCB2) $b^{\mu}_{a} \in F^{(a)} v^+_\mu + \sum_{0\le r< a} q^{-1} \mathbb Z[q^{-1}] F^{(r)} v^+_\mu$.

\begin{thm} \label{thm:iCB:ev} $\quad$
\begin{enumerate}
\item Let $n\in \mathbb N$. For each integer $\lambda \gg n$, the element $\dvev{n} v^+_{2\la} $ is an $\imath$-canonical basis element for $L(2\lambda)$.
\item The set $\{\dvev{n} \mid n \in \mathbb N \}$ forms the $\imath$-canonical basis for ${\mbf U}^\imath$ (and an $\mathcal A$-basis in ${}_\mathcal A {\mbf U}^\imath_{\rm{ev}}$).
\end{enumerate}
\end{thm}
\begin{proof} Let $\lambda, m \in \mathbb N$ with $m\ge 1$. We compute by the first formula \eqref{t2m:evevFhE} in Theorem~\ref{thm:dvev:evKappa2} and \eqref{eq:qbinom-hc} that
\begin{align} \dvev{2m} v^+_{2\la} &= \sum_{b=0}^{2m} \sum_{c \geq 0} (-1)^c q^{3c+b(2m-b-2c)} p^{(2m-b-2c)}(\kappa) F^{(b)} K^{b-2m+2c} \qbinom{h;m-c}{c} v^+_{2\la} \notag \\ &= \sum_{b=0}^{2m} \sum_{c \geq 0} q^{(b-2\lambda)(2m-b-2c)-2c^2+c} p^{(2m-b-2c)}(\kappa) \cbinom{m-\lambda-c}{c}F^{(b)} v^+_{2\la} \label{dvev:2m-mod}.
\end{align}
Similarly, we compute by the second formula \eqref{t2m-1:evevFhE} in Theorem~\ref{thm:dvev:evKappa2} that
\begin{align} \dvev{2m-1} v^+_{2\la} &= \sum_{b=0}^{2m-1} \sum_{c \geq 0} (-1)^c q^{c +b(2m-b-2c-1)} p^{(2m-b-2c-1)}(\kappa) F^{(b)} K^{b-2m+2c+1} \qbinom{h;m-c}{c} v^+_{2\la} \notag \\ &= \sum_{b=0}^{2m-1} \sum_{c \geq 0} q^{(b-2\lambda)(2m-b-2c-1)-2c^2-c} p^{(2m-b-2c-1)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la} . \label{dvev:2m-1-mod} \end{align}
Note that
\begin{equation} \label{eq:q-1} \cbinom{m-\lambda-c}{c} \in \mathbb N[q^{-1}], \qquad \text{ for }\lambda \ge m, c\ge 0. \end{equation}
Let $n\in \mathbb N$. Recall $\kappa =[2\ell]$, for $\ell \in \mathbb Z$. One computes that $\deg_q p^{(n)} (\kappa) =(2\ell-1)n - {n \choose 2}$ and hence $q^{(b-2\lambda)(n-b-2c)-2c^2 \pm c} p^{(n-b-2c)}(\kappa) \in q^{-1} \mathbb N[q^{-1}]$ when $\lambda \gg n>b+2c$. It follows by \eqref{dvev:2m-mod}, \eqref{dvev:2m-1-mod} and \eqref{eq:q-1} that
\begin{align} \label{eq:lattice} \dvev{n} v^+_{2\la} & \in F^{(n)} v^+_{2\la} + \sum_{b<n}q^{-1} \mathbb Z[q^{-1}] F^{(b)} v^+_{2\la} , \qquad \text{ for } \lambda \gg n. \end{align}
Therefore, the first statement follows by the characterization properties (iCB1)--(iCB2) of the $\imath$-canonical basis for $L(2\lambda)$. The second statement now follows from the definition of the $\imath$-canonical basis for ${\mbf U}^\imath$ using the projective system $\{ L(2\lambda) \}_{\lambda\in \mathbb N}$ with ${\mbf U}^\imath$-homomorphisms $L(2\lambda+2) \rightarrow L(2\lambda)$ (cf. \cite[\S6]{BW16}); in particular, the set $\{\dvev{n} \mid n \in \mathbb N \}$ forms an $\mathcal A$-basis for ${}_\mathcal A {\mbf U}^\imath_{\rm{ev}}$. The theorem is proved.
\end{proof}
\begin{rem} It follows by \eqref{dvev:2m-mod} that
\[ \dvev{2m} v^+_{2\la} \in F^{(2m)} v^+_{2\la} + q^{2m-2\lambda-1} \kappa F^{(2m-1)} v^+_{2\la} +\sum_{k\ge 2} \mathcal A \cdot F^{(2m-k)} v^+_{2\la} . \]
Recall $\kappa=[2\ell]$.
When $\ell -1 \ge \lambda -m \ge 0$, we have $q^{2m-2\lambda-1} \kappa \not \in q^{-1} \mathbb Z[q^{-1}]$, and therefore, the nonzero element $\dvev{2m} v^+_{2\la} $ is not an $\imath$-canonical basis element in $L(2\lambda)$. So we cannot expect a stronger form of Theorem~\ref{thm:iCB:ev} analogous to \cite[Theorem 2.10(1)]{BeW18} for the case $\kappa=0$ (i.e., $\ell=0$).
\end{rem}

\subsection{Proof of Theorem~\ref{thm:dvev:evKappa} } \label{subsec:proof:dvev:evKappa}

We prove the formulae for $\dvev{n}$ by induction on $n$, in two steps (1)--(2) below. The base cases when $n=1,2$ are clear.

(1) We shall prove the formula \eqref{t2m:evev2} for $\dvev{2m}$, assuming the formula \eqref{t2m-1:evev2} for $\dvev{2m-1}$. Recall $[2m] \dvev{2m} = t \cdot \dvev{2m-1}$, and $t= F+ \check{E} +\kappa K^{-1}$. Let us compute
\[ I=\check{E} \dvev{2m-1}, \qquad II=F \dvev{2m-1}, \qquad III=\kappa K^{-1} \dvev{2m-1}. \]
First we have
\begin{align*} I &=\sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a)} [a+1] p^{(2m-b-2c-1)}(\kappa) \\ & \qquad\qquad \cdot \check{E}^{(a+1)} \qbinom{h;1-m}{c} K^{b-2m+2c+1} F^{(b-a)} \\ &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -(b-1)(2m-b-2c) -(a-1)(b-a)} [a] p^{(2m-b-2c)}(\kappa) \\ & \qquad\qquad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}, \end{align*}
where the last equation is obtained by shifting indices $a\to a-1$, $b\to b-1$ (and then adding some zero terms to make the bounds of the summations uniform throughout). Using \eqref{FYn} we have
\begin{align*} II &=\sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a)} p^{(2m-b-2c-1)}(\kappa) \\ & \qquad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \qbinom{h;1-m}{c} K^{b-2m+2c+1} F^{(b-a)} =II^1 +II^2, \end{align*}
where $II^1$ and $II^2$ are the two natural summands associated to the plus sign.
By shifting index $b\to b-1$ and then adding some zero terms, we further have \begin{align*} II^1&=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c - (b-1)(2m-b-2c) -a(b-a-1)-2a+2b+4c-4m} [b-a] p^{(2m-b-2c)}(\kappa) \\ & \qquad \qquad \cdot \check{E}^{(a)} \qbinom{h;-m}{c} K^{b-2m+2c} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} II^2 &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2}+2c - (b+1)(2m-b-2c) - (a+1)(b-a)-2} p^{(2m-b-2c)}(\kappa) \\ & \qquad \qquad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} K^{b-2m+2c} F^{(b-a)}. \end{align*} Using \eqref{recursive:f} we also compute \begin{align*} III &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a) -2a} \kappa \cdot p^{(2m-b-2c-1)}(\kappa) \\ & \qquad\qquad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)} \\ &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a) -2a} [2m-b-2c] p^{(2m-b-2c)}(\kappa) \\ & \qquad\qquad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)} \\ & \quad+\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a) -2a -4m} [2m-b-2c] p^{(2m-b-2c)}(\kappa) \\ & \qquad\qquad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c-1} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) 
Collecting the formulae for $I, II^1, II^2$ and $III$ gives us \[ t \cdot \dvev{2m-1} = \sum_{0 \le a \le b \le 2m} \sum_{c\ge 0} p^{(2m-b-2c)}(\kappa) \check{E}^{(a)} H_{a,b,c} K^{b-2m+2c} F^{(b-a)}, \] where \begin{align*} H_{a,b,c} & := q^{\binom{2c}{2}+2c -(b-1)(2m-b-2c) -(a-1)(b-a)} [a] \qbinom{h;1-m}{c} \\ & + q^{\binom{2c}{2}+2c - (b-1)(2m-b-2c) -a(b-a-1)-2a+2b+4c-4m} [b-a] \qbinom{h;-m}{c} \\ & + q^{\binom{2c}{2}-2c+1 - (b+1)(2m-b-2c) - (a+1)(b-a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} \\ &+ q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a) -2a} [2m-b-2c] \qbinom{h;1-m}{c} \\ & - q^{\binom{2c}{2}+2c -b(2m-b-2c-1) -a(b-a) -2a -4m} [2m-b-2c] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} Recall $[2m] \dvev{2m} = t \cdot \dvev{2m-1}$. To prove the formula \eqref{t2m:evev2}, by the PBW basis theorem it suffices to prove the following identity, for all $a,b,c$: \begin{align} \label{eq:H} H_{a,b,c} = q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a)} [2m] \qbinom{h;1-m}{c}. \end{align} Thanks to $q^{2m-a} [a] -[2m] =q^{-a} [a-2m]$, we can combine the RHS with the first summand of LHS \eqref{eq:H}. Hence, after canceling out the $q$-powers $q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a)-a}$ on both sides, we see that \eqref{eq:H} is equivalent to the following identity, for all $a, b, c$: \begin{equation} \label{ABCD} A+B+C+D_1 +D_2=0, \end{equation} where \begin{align*} A &= [a-2m] \qbinom{h;1-m}{c}, \\ B &= q^{4c-2m+b} [b-a] \qbinom{h;-m}{c}, \\ C &= q^{2a-2m+1} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1}, \\ D_1 &= q^{2c+b-a} [2m-b-2c] \qbinom{h; 1-m}{c}, \\ D_2 &= - q^{2c-4m+b-a} [2m-b-2c] \qbinom{h; 1-m}{c-1} K^{-2}. \end{align*} Let us prove the identity \eqref{ABCD}. Using \eqref{kbinom2}, we can write $B=B_1+B_2$, where \begin{align*} B_1 &= q^{4c-2m+b} [b-a] \qbinom{h; 1-m}{c}, \quad B_2 = - q^{4c-6m+b} [b-a] \qbinom{h; 1-m}{c-1} K^{-2}. 
\end{align*} Noting that \[ \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} = q^{4m-4c-3a}\frac{( q^{4c-4m}K^{-2}-1)}{q^2-1} + q^{2m-2c-2a-1} [2m-2c-a], \] we rewrite $C=C_1+C_2$, where \begin{align*} C_1 &= q^{-2c+2m-a} [2c] \qbinom{h; 1-m}{c}, \quad C_2 = q^{-2c} [2m-2c-a] \qbinom{h; 1-m}{c-1}. \end{align*} A direct computation gives us \begin{align*} A+B_1+C_1+D_1 &= (q-q^{-1}) [2c] [2m-2c-a] \qbinom{h; 1-m}{c}, \\ C_2 +(B_2+D_2) &= C_2 - q^{2c-4m} [2m-2c-a] \qbinom{h; 1-m}{c-1} K^{-2} \\ &= - (q-q^{-1}) [2c] [2m-2c-a] \qbinom{h; 1-m}{c}. \end{align*} Summing up these two equations, we have $A+B+C+D_1+D_2=0,$ whence \eqref{ABCD}, completing Step~(1). \vspace{3mm} (2) Assuming the formulae for $\dvev{n}$ with $n\le 2m$, we shall now prove the following formula for $\dvev{2m+1}$ (obtained from \eqref{t2m-1:evev2} with $m$ replaced by $m+1$): \begin{align} \dvev{2m+1} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}+2c -b(2m-b-2c+1) -a(b-a)} p^{(2m-b-2c+1)}(\kappa) \label{t2m+1:evev2} \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \qbinom{h;-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \notag \end{align} The proof is based on the recursion $t \cdot \dvev{2m} =[2m+1] \dvev{2m+1} +[2m] \dvev{2m-1}$. Recall $t= F +\check{E} +\kappa K^{-1}$ and $\dvev{2m}$ from \eqref{t2m:evev2}. We shall compute \[ \texttt I=\check{E} \dvev{2m}, \qquad \texttt{II}=F \dvev{2m}, \qquad \texttt{III} =\kappa K^{-1} \dvev{2m}, \] respectively. First we have \begin{align*} \texttt{I} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a)} [a+1] p^{(2m-b-2c)}(\kappa) \check{E}^{(a+1)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}\\ &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -(b-1)(2m-b-2c+1) -(a-1)(b-a)} [a] p^{(2m-b-2c+1)}(\kappa) \\ &\qquad\qquad\qquad\qquad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}, \end{align*} where the last equation is obtained by shifting indices $a\to a-1$, $b\to b-1$. 
We also have \begin{align*} \texttt{II} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a)} p^{(2m-b-2c)}(\kappa) \\ & \qquad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)} =\texttt{II}_1 +\texttt{II}_2, \end{align*} where $\texttt{II}_1$ and $\texttt{II}_2$ are the two natural summands associated to the plus sign. By shifting the index $b\to b-1$, we have \begin{align*} \texttt{II}_1&=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -(b-1)(2m-b-2c+1) -a(b-a-1)-2a +2(b+2c-2m-1)} [b-a] p^{(2m-b-2c+1)}(\kappa) \\ &\qquad\qquad \qquad \cdot \check{E}^{(a)} \qbinom{h;-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} \texttt{II}_2 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} -(b+1)(2m-b-2c+1) -(a+1)(b-a)} p^{(2m-b-2c+1)}(\kappa) \\ & \qquad\qquad \qquad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} K^{b-2m+2c-1} F^{(b-a)}. \end{align*} Using \eqref{recursive:f} we also compute \begin{align*} \texttt{III} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a) -2a} [2m-b-2c+1] \\ &\qquad\qquad\qquad\qquad \cdot p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\ & \quad + \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a) -2a-4m} [2m-b-2c+1] \\ &\qquad\qquad\qquad\qquad \cdot p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \qbinom{h;1-m}{c-1} K^{b-2m+2c-3} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) 
Collecting the formulae for $\texttt{I}, \texttt{II}_1, \texttt{II}_2$, and $\texttt{III}$, we obtain \[ t \cdot \dvev{2m} = \sum_{0 \le a \le b \le 2m+1} \sum_{c\ge 0} p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} L_{a,b,c} K^{b-2m+2c-1} F^{(b-a)}, \] where \begin{align*} L_{a,b,c} := & q^{\binom{2c}{2} -(b-1)(2m-b-2c+1) -(a-1)(b-a)} [a] \qbinom{h;1-m}{c} \\ &+ q^{\binom{2c}{2} -(b-1)(2m-b-2c+1) -a(b-a-1)-2a +2(b+2c-2m-1)} [b-a] \qbinom{h;-m}{c} \\ &+ q^{\binom{2c}{2}-4c+3 -(b+1)(2m-b-2c+1) -(a+1)(b-a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} \\ &+ q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a) -2a} [2m-b-2c+1] \qbinom{h;1-m}{c} \\ &- q^{\binom{2c}{2} -b(2m-b-2c) -a(b-a) -2a-4m} [2m-b-2c+1] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} On the other hand, using \eqref{t2m-1:evev2} (with an index shift $c\to c-1$) and \eqref{t2m+1:evev2} we write \begin{align*} [2m+1] & \dvev{2m+1} +[2m] \dvev{2m-1} \\ &=\sum_{0 \le a \le b \le 2m+1} \sum_{c \geq 0} p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} R_{a,b,c} K^{b-2m+2c-1} F^{(b-a)}, \end{align*} where \begin{align*} R_{a,b,c} :=& q^{\binom{2c}{2}+2c -b(2m-b-2c+1) -a(b-a)} [2m+1] \qbinom{h;-m}{c} \\ &+ q^{\binom{2c}{2}-2c+1 -b(2m-b-2c+1) -a(b-a)} [2m] \qbinom{h;1-m}{c-1}. \end{align*} To prove the formula \eqref{t2m+1:evev2} for $\dvev{2m+1}$, it suffices to show that, for all $a,b,c$, \begin{equation} \label{L=R} L_{a,b,c}=R_{a,b,c}. \end{equation} Canceling the $q$-powers $q^{\binom{2c}{2}+2bc-b(2m-b+1)-a(b-a)}$ on both sides, we see that the identity \eqref{L=R} is equivalent to the following identity, for all $a,b,c$: \begin{align} \label{l=r} \begin{split} q^{2m-2c-a+1} [a] \qbinom{h; 1-m}{c} &+ q^{2c-2m+b-a-1} [b-a] \qbinom{h;-m}{c} \\ &+ q^{-2m-2c+a+2} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} \\ &+ q^{b -2a} [2m-b-2c+1] \qbinom{h;1-m}{c} \\ &- q^{b -2a -4m} [2m-b-2c+1] \qbinom{h;1-m}{c-1} K^{-2} \\ =& q^{2c} [2m+1] \qbinom{h;-m}{c} + q^{-2c+1} [2m] \qbinom{h;1-m}{c-1}.
\end{split} \end{align} By combining the second summand of LHS with the first summand of RHS as well as combining the third summand of LHS with the second summand of RHS, the identity \eqref{l=r} is reduced to the following equivalent identity, for all $a, b, c$: \begin{equation} \label{WXYZ} W+X+Y+Z_1+Z_2=0, \end{equation} where \begin{align*} W&= [a] \qbinom{h;1-m}{c}, \\ X&= q^{4c-2m+b-1} [-2m+b-a-1] \qbinom{h;-m}{c}, \\ Y&= \frac{q^{-4m-a+1}K^{-2}-q^{a+1}}{q^2-1} \qbinom{h;1-m}{c-1}, \\ Z_1&= q^{2c-2m+b -a-1} [2m-b-2c+1] \qbinom{h;1-m}{c}, \\ Z_2&= - q^{2c-6m+b-a-1} [2m-b-2c+1] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} Let us prove the identity \eqref{WXYZ}. Using \eqref{kbinom2}, we can write $X=X_1+X_2$, where \begin{align*} X_1 &=q^{4c-2m+b-1} [-2m+b-a-1] \qbinom{h; 1-m}{c}, \\ X_2 &= - q^{4c-6m+b-1} [-2m+b-a-1] \qbinom{h; 1-m}{c-1} K^{-2}. \end{align*} Noting that \[ \frac{q^{-4m-a+1}K^{-2}-q^{a+1}}{q^2-1} = q^{-4c-a+1}\frac{( q^{4c-4m}K^{-2}-1)}{q^2-1} + q^{-2c} [-2c-a], \] we rewrite $Y=Y_1+Y_2$, where \begin{align*} Y_1 &=q^{-2c-a} [2c] \qbinom{h; 1-m}{c}, \qquad Y_2 = q^{-2c} [-2c-a] \qbinom{h; 1-m}{c-1}. \end{align*} A direct computation shows that \begin{align*} W+X_1+Y_1+Z_1 &= - (q-q^{-1}) [2c+a] [2c] \qbinom{h; 1-m}{c}, \\ (X_2+Z_2) +Y_2 &= q^{2c-4m} [2c+a] \qbinom{h; 1-m}{c-1} K^{-2} +Y_2 \\ &= (q-q^{-1}) [2c+a] [2c] \qbinom{h; 1-m}{c}. \end{align*} Summing up these two equations, we obtain $W+X+Y+Z_1+Z_2=0$, whence \eqref{WXYZ}, completing Step~(2). The proof of Theorem~\ref{thm:dvev:evKappa} is completed. \qed \section{The $\imath$-divided powers $\dv{n}$ for odd weights and even $\kappa$} \label{sec:oddevK} In this section we shall always take $\kappa$ to be an even $q$-integer, i.e., \[ \kappa =[2\ell], \qquad \text{ for } \ell \in \mathbb Z. \] \subsection{Definition of $\dv{n}$ for even $\kappa$} We introduce some notations.
Set, for $n\ge 1, a\in \mathbb Z$, \begin{equation} \label{brace} \LR{h;a}{0}=1, \qquad \LR{h;a}{n}= \prod_{i=1}^n \frac{q^{4a+4i-4} K^{-2}-q^2}{q^{4i}-1}, \qquad \llbracket h;a\rrbracket = \LR{h;a}{1}. \end{equation} It follows from \eqref{brace} that, for $n\ge 0$ and $a\in \mathbb Z$, \begin{align} \label{eq:commLR} \begin{split} \LR{h;a}{n} F &= F \LR{h;a+1}{n}, \qquad \LR{h;a}{n} \check{E} =\check{E} \LR{h;a-1}{n}, \\ \LR{h;a}{n} &=\LR{h;1+a}{n} -q^{4a}K^{-2} \LR{h;1+a}{n-1}. \end{split} \end{align} We shall take the following as a definition of the $\imath$-divided powers $\dv{n}$. \begin{definition} Set $\dv{1} =t= F +\check{E} +\kappa K^{-1}$. The divided powers $\dv{n}$, for $n\ge 1$, are defined by the recursive relations: \begin{align} \label{eq:tt:oddevKa} \begin{split} t \cdot \dv{2a-1} &=[2a] \dv{2a}+ [2a-1] \dv{2a-2}, \\ t \cdot \dv{2a} &= [2a+1] \dv{2a+1} , \quad \text{ for } a\ge 1. \end{split} \end{align} Equivalently, $\dv{n}$ is defined by the closed formula \eqref{def:dv:evoddK}. \end{definition} The following lemma is a variant of \cite[Lemma~3.3]{BeW18} (where $\kappa=0$) with the same proof. \begin{lem} \label{lem:anti2} The anti-involution $\varsigma$ on $\mbf U$ fixes $F, \check{E}, K, \dvd{n}$, respectively. Moreover, $\varsigma$ sends $q \mapsto q^{-1}$, and \[ \LR{h;a}{n}\mapsto (-1)^n q^{2n(n-1)} \LR{h;2-a-n}{n}, \quad \forall a \in \mathbb Z,\; n\in \mathbb N. \] \end{lem} \subsection{Formulae of $\dv{n}$ with even $\kappa$} Recall the polynomials $p^{(n)}$, for $n\ge 0$, from \S\ref{subsec:pn}. \begin{thm} \label{thm:dv:evenKappa} Let $\kappa$ be an even $q$-integer. 
Then we have, for $m\ge 0$, \begin{align} \dv{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c) -a(b-a)} p^{(2m-b-2c)}(\kappa) \label{t2m:oddev} \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}, \notag \\% \dv{2m+1} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)} p^{(2m-b-2c+1)}(\kappa) \label{t2m+1:oddev} \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \notag \end{align} \end{thm} The proof of Theorem~\ref{thm:dv:evenKappa} will be given in \S\ref{subsec:proof:dv:evKappa} below. By applying the anti-involution $\varsigma$ we convert the formulae in Theorem~\ref{thm:dv:evenKappa} into the following. \begin{thm} \label{thm:dv:evenKappa2} Let $\kappa$ be an even $q$-integer. Then we have, for $m\ge 0$, \begin{align} \dv{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} (-1)^c q^{-c +b(2m-b-2c) +a(b-a)} p^{(2m-b-2c)}(\kappa) \label{t2m:oddevFhE} \\ &\qquad \qquad \qquad\qquad \cdot F^{(b-a)} K^{b-2m+2c} \LR{h;1+m-c}{c} \check{E}^{(a)}, \notag \\ \dv{2m+1} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} (-1)^c q^{c +b(2m-b-2c+1) +a(b-a)} p^{(2m-b-2c+1)}(\kappa) \label{t2m+1:oddevFhE} \\ &\qquad\qquad\qquad\qquad \cdot F^{(b-a)} K^{b-2m+2c-1} \LR{h;1+m-c}{c} \check{E}^{(a)}. \notag \end{align} \end{thm} \begin{proof} Recall from Lemma~\ref{lem:anti2} that the anti-involution $\varsigma$ on $\mbf U$ fixes $F, \check{E}, K, \dvd{n}, \kappa$ while sending $q \mapsto q^{-1}$, $\LR{h;1-m}{c}\mapsto (-1)^c q^{2c(c-1)} \LR{h;1+m-c}{c}$. The formulae \eqref{t2m:oddevFhE}--\eqref{t2m+1:oddevFhE} now follow from \eqref{t2m:oddev}--\eqref{t2m+1:oddev}. \end{proof} Let $\lambda, m \in \mathbb Z$ and $c\in \mathbb N$. Recall $\cbinom{m}{c}$ from \eqref{cbinom}.
We note that \begin{align} \label{eq:LRhc} \LR{h; 1+m-c}{c} \mathbf 1_{2\lambda+1} &= \prod_{i=1}^c \frac{q^{4m-4c+4i} K^{-2} -q^2}{q^{4i} -1} \mathbf 1_{2\lambda+1} = (-1)^c q^{-2c^2} \cbinom{m-\lambda-c}{c} \mathbf 1_{2\lambda+1}. \end{align} The following corollary is immediate from \eqref{eq:LRhc}, Proposition~\ref{prop:fg:ev}, and Theorem~\ref{thm:dv:evenKappa}. \begin{cor} We have $\dv{n} \in {}_\mathcal A {\mbf U}^\imath_{\rm{odd}}$, for all $n$. \end{cor} \begin{example} The formulae of $\dv{n}$, for $1\le n\le 3$, in Theorem~\ref{thm:dv:evenKappa} read as follows. \begin{align*} \dv{1} &= F +\check{E} + \kappa K^{-1}, \\% \dv{2} &= b^{(2)} + q\llbracket h;0 \rrbracket + \kappa (q^{-1}K^{-1}F + q^{-1} \check{E} K^{-1}) + \frac{\kappa^{2}}{[2]} K^{-2}, \\% \dv{3} & = b^{(3)} + q^{-1} \llbracket h;0 \rrbracket F + q^{-1} \check{E} \llbracket h;0 \rrbracket \\ & \qquad +(q^{-2} \check{E}^{(2)} K^{-1} + q^{-3} \check{E} K^{-1} F + q^{-2} K^{-1} F^{(2)} + q^{-1} \llbracket h;0 \rrbracket K^{-1})\kappa \\ & \qquad + \frac{q^{-2}}{[2]}(\check{E} K^{-2} + K^{-2} F)\kappa^2 + \frac{K^{-3}}{[3]!}\big( \kappa^3 + (q^{-4} + q^{-2})\kappa \big). \end{align*} \end{example} \begin{rem} The formula for $\dv{2m}$ is formally obtained from $\dvev{2m}$ in Theorem~\ref{thm:dvev:evKappa}, with $\qbinom{h;a}{c}$ replaced by $\LR{h;a}{c}$. The formula for $\dv{2m-1}$ is formally obtained from $\dvev{2m-1}$ in Theorem~\ref{thm:dvev:evKappa}, with $\qbinom{h;a}{c}$ replaced by $q^{-4c}\LR{h;a+1}{c}$. \end{rem} \begin{rem} Note $p_n(0)=0$ for $n>0$, and $p_0(0)=1$. The formulae in Theorems~\ref{thm:dv:evenKappa} and \ref{thm:dv:evenKappa2} in the special case for $\kappa=0$ recover the formulae in \cite[Theorem~3.1, Proposition~3.4]{BeW18}. \end{rem} \subsection{The $\imath$-canonical basis for $\dot{\mbf U}_{\text{odd}}$ for even $\kappa$} \label{sec:iCB:odd} Recall from \S\ref{sec:iCB:ev} the $\imath$-canonical basis on simple $\mbf U$-modules $L(\mu)$, for $\mu \in \mathbb N$.
\begin{thm} \label{thm:iCB:odd} $\quad$ \begin{enumerate} \item Let $n\in \mathbb N$. For each integer $\lambda \gg n$, the element $\dv{n} v^+_{2\la+1} $ is an $\imath$-canonical basis element for $L(2\lambda+1)$. \item The set $\{\dv{n} \mid n \in \mathbb N \}$ forms the $\imath$-canonical basis for ${\mbf U}^\imath$ (and an $\mathcal A$-basis in ${}_\mathcal A {\mbf U}^\imath_{\rm{odd}}$). \end{enumerate} \end{thm} \begin{proof} Recall $\cbinom{m}{c}$ from \eqref{cbinom} and $\LR{h; a}{c}$ from \eqref{brace}. Let $\lambda, m \in \mathbb N$. It follows by a direct computation using Theorem~\ref{thm:dv:evenKappa2} and \eqref{eq:LRhc} that \begin{align} \dv{2m} v^+_{2\la+1} &= \sum_{b=0}^{2m} \sum_{c \geq 0} (-1)^c q^{-c +b(2m-b-2c)} p^{(2m-b-2c)}(\kappa) \notag \\ &\qquad \qquad \qquad\qquad \cdot F^{(b)} K^{b-2m+2c} \LR{h;1+m-c}{c} v^+_{2\la+1} \notag \\ &= \sum_{b=0}^{2m} \sum_{c \geq 0} q^{-2c^2-c +(b-2\lambda-1)(2m-b-2c)} p^{(2m-b-2c)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la+1} . \label{t2m:oddevF} \end{align} Similarly using Theorem~\ref{thm:dv:evenKappa2} we have \begin{align} \dv{2m+1} v^+_{2\la+1} &= \sum_{b=0}^{2m+1} \sum_{c \geq 0} (-1)^c q^{c +b(2m-b-2c+1)} p^{(2m-b-2c+1)}(\kappa) \notag \\ &\qquad\qquad\qquad\qquad \cdot F^{(b)} K^{b-2m+2c-1} \LR{h;1+m-c}{c} v^+_{2\la+1} \notag \\ &= \sum_{b=0}^{2m+1} \sum_{c \geq 0} q^{-2c^2+c +(b-2\lambda-1)(2m-b-2c+1)} p^{(2m-b-2c+1)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la+1} . \label{t2m+1:oddevF} \end{align} By a similar argument for \eqref{eq:lattice}, using \eqref{t2m:oddevF}--\eqref{t2m+1:oddevF} we obtain \begin{align*} \dv{n} v^+_{2\la+1} & \in F^{(n)} v^+_{2\la+1} + \sum_{b<n}q^{-1} \mathbb Z[q^{-1}] F^{(b)} v^+_{2\la+1} , \qquad \text{ for } \lambda \gg n. \end{align*} The second statement follows now from the definition of the $\imath$-canonical basis for ${\mbf U}^\imath$ using the projective system $\{ L(2\lambda+1) \}_{\lambda\ge 0}$; cf. \cite[\S6]{BW16}. 
\end{proof} \subsection{Proof of Theorem~\ref{thm:dv:evenKappa}} \label{subsec:proof:dv:evKappa} We prove the formulae for $\dv{n}$ by induction on $n$, in two steps (1)--(2) below. The base cases when $n=1,2$ are clear. (1) We shall prove the formula \eqref{t2m+1:oddev} for $\dv{2m+1}$, assuming the formula \eqref{t2m:oddev} for $\dv{2m}$. Recall $[2m+1] \dv{2m+1} = t \cdot \dv{2m}$, and $t= F +\check{E} +\kappa K^{-1}$. Let us compute \[ I=\check{E} \dv{2m}, \qquad II=F \dv{2m}, \qquad III=\kappa K^{-1} \dv{2m}. \] First we have \begin{align*} I &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c) -a(b-a)} [a+1] p^{(2m-b-2c)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a+1)} \LR{h;1-m}{c} K^{b-2m+2c} F^{(b-a)} \\% &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - (b-1)(2m-b-2c+1) -(a-1)(b-a)} [a] p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}, \end{align*} where the last equation is obtained by shifting indices $a\to a-1$, $b\to b-1$ (and then adding some zero terms to make the bounds of the summations uniform throughout). Using \eqref{FYn} we have \begin{align*} II &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c) -a(b-a)} p^{(2m-b-2c)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \LR{h;1-m}{c} K^{b-2m+2c} F^{(b-a)} =II^1 +II^2, \end{align*} where $II^1$ and $II^2$ are the two natural summands associated to the plus sign. By shifting the index $b\to b-1$, we further have \begin{align*} II^1 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - (b+1)(2m-b-2c+1) -a(b-a+1)} [b-a] p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;-m}{c} K^{b-2m+2c-1} F^{(b-a)}.
\end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} II^2 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} - (b+1)(2m-b-2c+1) -(a+1)(b-a)} p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;1-m}{c-1} K^{b-2m+2c-1} F^{(b-a)}. \end{align*} Using \eqref{recursive:f} we also compute \begin{align*} III &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c) -a(b-a)-2a} \kappa \cdot p^{(2m-b-2c)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\%% &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c) -a(b-a)-2a} [2m-b-2c+1] \\ &\qquad \qquad \qquad\quad \cdot p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\% &\quad - \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} - (b+2)(2m-b-2c+2) -a(b-a)-2a+1} [2m-b-2c+1] \\ &\qquad \qquad \qquad\quad \cdot p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \LR{h;1-m}{c-1} K^{b-2m+2c-3} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) Collecting the formulae for $I, II^1, II^2$ and $III$ gives us \[ t \cdot \dv{2m} = \sum_{0 \le a \le b \le 2m+1} \sum_{c\ge 0} p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \mathcal H_{a,b,c} K^{b-2m+2c-1} F^{(b-a)}, \] where \begin{align*} \mathcal H_{a,b,c} := & \; q^{\binom{2c}{2} - (b-1)(2m-b-2c+1) -(a-1)(b-a)} [a] \LR{h;1-m}{c} \\ &+ q^{\binom{2c}{2} - (b+1)(2m-b-2c+1) -a(b-a+1)} [b-a] \LR{h;-m}{c} \\ &+ q^{\binom{2c-2}{2} - (b+1)(2m-b-2c+1) -(a+1)(b-a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;1-m}{c-1} \\ &+ q^{\binom{2c}{2} - b(2m-b-2c) -a(b-a)-2a} [2m-b-2c+1] \LR{h;1-m}{c} \\ &- q^{\binom{2c-2}{2} - (b+2)(2m-b-2c+2) -a(b-a)-2a+1} [2m-b-2c+1] \LR{h;1-m}{c-1} K^{-2}. \end{align*} Recall $[2m+1] \dv{2m+1} = t \cdot \dv{2m}$. 
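The reduction that follows repeatedly invokes the expansion identity in \eqref{eq:commLR}; that identity can be checked directly against the definition \eqref{brace} with exact rational arithmetic (a hedged sketch; `brace` is our name, and $K^{-2}$ is treated as a free scalar $u$):

```python
from fractions import Fraction as Fr

def brace(a, n, q, u):
    # the bracket from (brace), with u standing in for K^{-2}:
    # prod_{i=1}^n (q^{4a+4i-4} u - q^2) / (q^{4i} - 1)
    r = Fr(1)
    for i in range(1, n + 1):
        r *= (q**(4*a + 4*i - 4) * u - q**2) / (q**(4*i) - 1)
    return r

q, u = Fr(3), Fr(5, 7)
for a in range(-3, 4):
    for n in range(1, 5):
        lhs = brace(a, n, q, u)
        rhs = brace(a + 1, n, q, u) - q**(4*a) * u * brace(a + 1, n - 1, q, u)
        assert lhs == rhs
```

The check passes identically for any rational $q\neq 0,\pm1$ and $u$, consistent with the telescoping proof of \eqref{eq:commLR}.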
To prove the formula \eqref{t2m+1:oddev} for $\dv{2m+1}$, by the PBW basis theorem and the inductive assumption it suffices to prove the following identity, for all $a,b,c$: \begin{align} \label{eq:H3} \mathcal H_{a,b,c} = q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)} [2m+1] \LR{h;1-m}{c}. \end{align} Thanks to $q^{2m-a+1} [a] -[2m+1] = q^{-a} [a-2m-1]$, we can combine the RHS with the first summand of the LHS of \eqref{eq:H3}. Hence, after canceling out the $q$-powers $q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a) -a}$ on both sides, we see that \eqref{eq:H3} is equivalent to the following identity, for all $a, b, c$: \begin{equation} \label{ABCD3} \mathcal A+\mathcal B+\mathcal C+\mathcal D_1 +\mathcal D_2=0, \end{equation} where \begin{align*} \mathcal A &= [a-2m-1] \LR{h;1-m}{c}, \\ \mathcal B &= q^{4c-2m+b-1} [b-a] \LR{h;-m}{c}, \\ \mathcal C &= q^{2a-2m+2} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;1-m}{c-1}, \\ \mathcal D_1 &= q^{2c+b-a} [2m-b-2c+1] \LR{h;1-m}{c}, \\ \mathcal D_2 &= - q^{2c-4m+b-a} [2m-b-2c+1] \LR{h;1-m}{c-1} K^{-2}. \end{align*} Let us prove the identity \eqref{ABCD3}. Using \eqref{eq:commLR}, we can write $\mathcal B=\mathcal B_1+\mathcal B_2$, where \begin{align*} \mathcal B_1 &= q^{4c-2m+b-1} [b-a] \LR{h;1-m}{c}, \quad \mathcal B_2 = - q^{4c-6m+b-1} [b-a] \LR{h;1-m}{c-1} K^{-2}. \end{align*} Noting that \[ \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} = q^{4m-4c-3a}\frac{( q^{4c-4m}K^{-2}-q^2)}{q^2-1} + q^{2m-2c-2a} [2m-2c-a+1], \] we rewrite $\mathcal C=\mathcal C_1+\mathcal C_2$, where \begin{align*} \mathcal C_1 &= q^{2m-2c-a+1} [2c] \LR{h; 1-m}{c}, \quad \mathcal C_2 = q^{2-2c} [2m-2c-a+1] \LR{h; 1-m}{c-1}. \end{align*} A direct computation gives us \begin{align*} \mathcal A+\mathcal B_1+\mathcal C_1+\mathcal D_1 &= (q-q^{-1}) [2c] [2m-2c-a+1] \LR{h; 1-m}{c}, \\ \mathcal C_2 +(\mathcal B_2+\mathcal D_2) &= \mathcal C_2 - q^{2c-4m} [2m-2c-a+1] \LR{h; 1-m}{c-1} K^{-2} \\ &= - (q-q^{-1}) [2c] [2m-2c-a+1] \LR{h; 1-m}{c}.
\end{align*} Summing up these two equations, we have $\mathcal A+\mathcal B+\mathcal C+\mathcal D_1+\mathcal D_2=0,$ whence \eqref{ABCD3}, completing Step~(1). \vspace{3mm} (2) Assuming the formulae for $\dv{n}$ with $n\le 2m+1$, we shall now prove the following formula \eqref{t2m+2:oddev} for $\dv{2m+2}$ (obtained with $m$ replaced by $m+1$ in \eqref{t2m:oddev}): \begin{align} \dv{2m+2} &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c+2) -a(b-a)} p^{(2m-b-2c+2)}(\kappa) \label{t2m+2:oddev} \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;-m}{c} K^{b-2m+2c-2} F^{(b-a)}. \notag \end{align} The proof is based on the recursion $t \cdot \dv{2m+1} =[2m+2] \dv{2m+2} +[2m+1] \dv{2m}$. Recall $t= F +\check{E} +\kappa K^{-1}$ and $\dv{2m+1}$ from \eqref{t2m+1:oddev}. We shall compute \[ \texttt{I}=\check{E} \dv{2m+1}, \qquad \texttt{II}=F \dv{2m+1}, \qquad \texttt{III} =\kappa K^{-1} \dv{2m+1}, \] respectively. First, by shifting indices $a\to a-1$ and $b\to b-1$, we have \begin{align*} \texttt{I} &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)} [a+1] p^{(2m-b-2c+1)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a+1)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\ &=\sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - (b-1)(2m-b-2c+2) -(a-1)(b-a)} [a] p^{(2m-b-2c+2)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} Using \eqref{FYn} we also have \begin{align*} \texttt{II} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)} p^{(2m-b-2c+1)}(\kappa) \\ &\quad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} =\texttt{II}_1 +\texttt{II}_2, \end{align*} where $\texttt{II}_1$ and $\texttt{II}_2$ are the two natural summands associated to the plus sign.
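Step (1) above combined the RHS into the LHS of \eqref{eq:H3} via the elementary identity $q^{2m-a+1}[a]-[2m+1]=q^{-a}[a-2m-1]$; $q$-integer identities of this kind are quick to check mechanically (a Python sketch with exact rationals; names are ours):

```python
from fractions import Fraction as Fr

def qint(n, q):
    # balanced q-integer [n] = (q^n - q^{-n}) / (q - q^{-1}); note [-n] = -[n]
    return (q**n - q**(-n)) / (q - q**(-1))

q = Fr(2)
for m in range(0, 5):
    for a in range(-4, 5):
        assert q**(2*m - a + 1) * qint(a, q) - qint(2*m + 1, q) == q**(-a) * qint(a - 2*m - 1, q)
```

Such a sample-point check is not a proof, but it catches sign and exponent slips cheaply.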
By shifting the index $b\to b-1$, we have \begin{align*} \texttt{II}_1 &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - (b+1)(2m-b-2c+2) -a(b-a-1)-2a} [b-a] p^{(2m-b-2c+2)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;-m}{c} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} \texttt{II}_2 &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2}-2c - (b+1)(2m-b-2c+2) -(a+1)(b-a)+2} p^{(2m-b-2c+2)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;1-m}{c-1} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} Using \eqref{recursive:f} we also compute \begin{align*} \texttt{III} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)-2a} \kappa \cdot p^{(2m-b-2c+1)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-2} F^{(b-a)} \\ &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)-2a} [2m-b-2c+2] \\ &\qquad\qquad\qquad\quad \cdot p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-2} F^{(b-a)} \\% & - \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2}-2c - (b+2)(2m-b-2c+3) -a(b-a)-2a+3} [2m-b-2c+2] \\ &\qquad\qquad\qquad\quad \cdot p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \LR{h;1-m}{c-1} K^{b-2m+2c-4} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.)
Collecting the formulae for $\texttt{I}, \texttt{II}_1, \texttt{II}_2$, and $\texttt{III}$, we obtain \[ t \cdot \dv{2m+1} = \sum_{0 \le a \le b \le 2m+2} \sum_{c\ge 0} p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \mathcal L_{a,b,c} K^{b-2m+2c-2} F^{(b-a)}, \] where \begin{align*} \mathcal L_{a,b,c} := & \; q^{\binom{2c}{2}-2c - (b-1)(2m-b-2c+2) -(a-1)(b-a)} [a] \LR{h;1-m}{c} \\ &+q^{\binom{2c}{2}-2c - (b+1)(2m-b-2c+2) -a(b-a-1)-2a} [b-a] \LR{h;-m}{c} \\ &+q^{\binom{2c-2}{2}-2c - (b+1)(2m-b-2c+2) -(a+1)(b-a)+2} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;1-m}{c-1} \\ &+q^{\binom{2c}{2}-2c - b(2m-b-2c+1) -a(b-a)-2a} [2m-b-2c+2] \LR{h;1-m}{c} \\ &- q^{\binom{2c-2}{2}-2c - (b+2)(2m-b-2c+3) -a(b-a)-2a+3} [2m-b-2c+2] \LR{h;1-m}{c-1} K^{-2}. \end{align*} On the other hand, using \eqref{t2m:oddev} (with an index shift $c\to c-1$) and \eqref{t2m+2:oddev} we write \begin{align*} [2m+2] & \dv{2m+2} +[2m+1] \dv{2m} \\ &=\sum_{0 \le a \le b \le 2m+2} \sum_{c \geq 0} p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \mathcal R_{a,b,c} K^{b-2m+2c-2} F^{(b-a)}, \end{align*} where \begin{align*} \mathcal R_{a,b,c} &:= q^{\binom{2c}{2} - b(2m-b-2c+2) -a(b-a)} [2m+2] \LR{h;-m}{c} \\ &\qquad + q^{\binom{2c-2}{2} - b(2m-b-2c+2) -a(b-a)} [2m+1] \LR{h;1-m}{c-1}. \end{align*} To prove the formula \eqref{t2m+2:oddev} for $\dv{2m+2}$, it suffices to show that, for all $a,b,c$, \begin{equation} \label{L=R3} \mathcal L_{a,b,c}= \mathcal R_{a,b,c}. \end{equation} Canceling the $q$-powers $q^{\binom{2c}{2}-2c - (b-1)(2m-b-2c+2) -(a-1)(b-a)}$ on both sides, we see that the identity \eqref{L=R3} is equivalent to the following identity, for all $a,b,c$: \begin{align} \label{l=r3} \begin{split} [a] \LR{h;1-m}{c} &+q^{4c-4m+b-4} [b-a] \LR{h;-m}{c} \\ &+q^{2a-4m+1} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;1-m}{c-1} \\ &+q^{2c-2m+b-a-2} [2m-b-2c+2] \LR{h;1-m}{c} \\ &- q^{2c-6m+b-a-2} [2m-b-2c+2] \LR{h;1-m}{c-1} K^{-2} \\ &= q^{4c-2m+a-2} [2m+2] \LR{h;-m}{c} + q^{a-2m+1} [2m+1] \LR{h;1-m}{c-1}. 
\end{split} \end{align} By combining the second summand of LHS with the first summand of RHS as well as combining the third summand of LHS with the second summand of RHS, the identity \eqref{l=r3} is reduced to the following equivalent identity, for all $a, b, c$: \begin{equation} \label{WXYZ3} \mathcal W+\mathcal X+\mathcal Y+\mathcal Z_1+\mathcal Z_2=0, \end{equation} where \begin{align*} \mathcal W & =[a] \LR{h;1-m}{c}, \\ \mathcal X & =q^{4c-2m+b-2} [b-a-2m-2] \LR{h;-m}{c}, \\ \mathcal Y & = \frac{q^{-a-4m+1}K^{-2}-q^{a+3}}{q^2-1} \LR{h;1-m}{c-1}, \\ \mathcal Z_1 & =q^{2c-2m+b-a-2} [2m-b-2c+2] \LR{h;1-m}{c}, \\ \mathcal Z_2 & = - q^{2c-6m+b-a-2} [2m-b-2c+2] \LR{h;1-m}{c-1} K^{-2}. \end{align*} Let us finally prove the identity \eqref{WXYZ3}. Using \eqref{eq:commLR}, we can write $\mathcal X=\mathcal X_1+\mathcal X_2$, where \begin{align*} \mathcal X_1 &=q^{4c-2m+b-2} [b-a-2m-2] \LR{h; 1-m}{c}, \\ \mathcal X_2 &= - q^{4c-6m+b-2} [b-a-2m-2] \LR{h; 1-m}{c-1} K^{-2}. \end{align*} Noting that \[ \frac{q^{-a-4m+1}K^{-2}-q^{a+3}}{q^2-1} = q^{-4c-a+1}\frac{( q^{4c-4m}K^{-2}-q^2)}{q^2-1} + q^{2-2c} [-2c-a], \] we rewrite $\mathcal Y=\mathcal Y_1+\mathcal Y_2$, where \begin{align*} \mathcal Y_1 &=q^{-2c-a} [2c] \LR{h; 1-m}{c}, \qquad \mathcal Y_2 = q^{2-2c} [-2c-a] \LR{h; 1-m}{c-1}. \end{align*} A direct computation shows that \begin{align*} \mathcal W+\mathcal X_1+\mathcal Y_1+\mathcal Z_1 &= - (q-q^{-1}) [2c+a] [2c] \LR{h; 1-m}{c}, \\ (\mathcal X_2+\mathcal Z_2) +\mathcal Y_2 &= q^{2c-4m} [2c+a] \LR{h; 1-m}{c-1} K^{-2} +\mathcal Y_2 \\ &= (q-q^{-1}) [2c+a] [2c] \LR{h; 1-m}{c}. \end{align*} Summing up these two equations, we obtain $\mathcal W+\mathcal X+\mathcal Y+\mathcal Z_1+\mathcal Z_2=0$, whence \eqref{WXYZ3}, completing Step ~(2). The proof of Theorem~\ref{thm:dv:evenKappa} is completed. 
\qed \section{The $\imath$-divided powers $\dvp{n}$ for even weights and odd $\kappa$} \label{sec:evoddK} In this section we shall always take $\kappa$ to be an odd $q$-integer, i.e., \begin{align} \label{eq:oddK} \kappa & =[2\ell-1], \quad \text{ for } \ell \in \mathbb Z. \end{align} \subsection{Definition of $\dvp{n}$ for odd $\kappa$} We shall take the following as a definition of the $\imath$-divided powers $\dvp{n}$ with odd $\kappa$, which is formally identical to the formulae for $\dv{n}$ with even $\kappa$. \begin{definition} Set $\dvp{1} = t = F +\check{E} +\kappa K^{-1}$. The divided powers $\dvp{n}$, for $n\ge 1$, are defined by the recursive relations: \begin{align} \label{eq:tt:evoddKa} \begin{split} t \cdot \dvp{2a-1} &=[2a] \dvp{2a}+ [2a-1] \dvp{2a-2}, \\ t \cdot \dvp{2a} &= [2a+1] \dvp{2a+1} , \quad \text{ for } a\ge 1. \end{split} \end{align} \end{definition} Equivalently, we have the following closed formula for $\dvp{n}$, for $a\in\mathbb N$: \begin{align*} \dvp{n} = \begin{cases} \frac{1}{[2a]!} (t - [-2a+1] ) (t - [-2a+3]) \cdots (t - [2a-3]) (t -[2a-1]), & \text{if } n=2a, \\ \\ \frac{t}{[2a+1]!} (t - [-2a+1] ) (t - [-2a+3]) \cdots (t - [2a-3]) (t -[2a-1]), &\text{if } n=2a+1. \end{cases} \end{align*} Note the above formulae are formally the same as \eqref{def:dv:evoddK} for $\dv{n} \in {}_\mathcal A{\mbf U}^\imath_{\rm odd}$ for $\kappa$ an even $q$-integer. \subsection{Polynomials $\mathfrak p_n(x)$ and $\mathfrak p^{(n)}(x)$} \label{subsec:pn2} We define a sequence of polynomials $\mathfrak p_n(x)$ in a variable $x$, for $n\in \mathbb N$, by letting \begin{align} \label{eq:pn2} \mathfrak p_{n+1} =x \mathfrak p_n + q^{2-2n} [n] [n-2] \mathfrak p_{n-1}, \qquad \mathfrak p_0=1. \end{align} Note $\mathfrak p_n$ is a monic polynomial in $x$ of degree $n$. Also set $\mathfrak p_n=0$ for $n<0$. These polynomials $\mathfrak p_n$ will appear in the expansion formula for the $\imath$-divided powers in $\mbf U$.
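The recursion \eqref{eq:pn2} is straightforward to tabulate. A short sketch, evaluating $\mathfrak p_n$ at exact rational sample values of $q$ and $x$ and comparing against the first few closed forms (helper names are ours; a sample-point check, not a proof):

```python
from fractions import Fraction as Fr

def qint(n, q):
    # balanced q-integer [n] = (q^n - q^{-n}) / (q - q^{-1})
    return (q**n - q**(-n)) / (q - q**(-1))

def p(n, x, q):
    # evaluate p_n(x) via p_{n+1} = x p_n + q^{2-2n} [n][n-2] p_{n-1}, p_0 = 1
    p_prev, p_cur = Fr(0), Fr(1)   # p_{-1} = 0, p_0 = 1
    for k in range(n):
        p_prev, p_cur = p_cur, x * p_cur + q**(2 - 2*k) * qint(k, q) * qint(k - 2, q) * p_prev
    return p_cur

q, x = Fr(2), Fr(3)
assert p(1, x, q) == x
assert p(2, x, q) == x**2 - 1
assert p(3, x, q) == x**3 - x
assert p(4, x, q) == x**4 + (q**-4 * qint(3, q) - 1) * x**2 - q**-4 * qint(3, q)
assert p(2, Fr(1), q) == 0 and p(3, Fr(1), q) == 0   # p_n(1) = 0 for n >= 2
```

The same loop gives the normalized $\mathfrak p^{(n)}=\mathfrak p_n/[n]!$ after dividing by the $q$-factorial.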
\begin{example} Here are the polynomials $\mathfrak p_n(x)$, for $1\le n \le 5$: \begin{align*} & \mathfrak p_1(x) =x, \qquad \mathfrak p_2(x) =x^2-1, \qquad \mathfrak p_3(x) =x^3-x, \\ \mathfrak p_4(x) &=x^4 +(q^{-4}[3]-1)x^2 -q^{-4}[3], \quad \mathfrak p_5(x) = x (x^2 - 1) (x^2+q^{-5}[3]! + q^{-6}[5]). \end{align*} \end{example} A simple induction using the recursive formula \eqref{eq:pn2} shows that $\mathfrak p_n/(x-1) \in \mathbb N[q^{-1}][x]$, for $n\ge 2$. In particular, for $\kappa$ any positive odd $q$-integer, we always have $\mathfrak p_n (\kappa) \in \mathbb N[q,q^{-1}]$. Introduce the monic polynomials $\mathfrak g_n(x)$ of degree $n$: \begin{align} \label{eq:gn-ood} \mathfrak g_n(x) = \begin{cases} \prod_{i=1}^{m} (x^2-[2i-1]^2), & \text{ if } n=2m \text{ is even}; \\ x \prod_{i=1}^{m} (x^2-[2i-1]^2), & \text{ if } n=2m+1 \text{ is odd}. \end{cases} \end{align} Define \[ \mathfrak p^{(n)}(x) =\mathfrak p_n(x)/[n]!, \qquad \mathfrak g^{(n)}(x) =\mathfrak g_n(x)/[n]!. \] Then Equation \eqref{eq:pn2} implies that \begin{align} \label{eq:pn2div} x \cdot \mathfrak p^{(n)} =[n+1] \mathfrak p^{(n+1)}-q^{2-2n} [n-2] \mathfrak p^{(n-1)} \quad (n\ge 1). \end{align} Our next goal is to prove that $\mathfrak p^{(n)}(\kappa)$ are integral, i.e., $\mathfrak p^{(n)}([2\ell-1]) \in \mathcal A$, for all $n, \ell \in \mathbb Z$; see Proposition~\ref{prop:fg:odd}. This is achieved by relating $\mathfrak p^{(n)}$ to $\mathfrak g^{(n)}$ for varied $n$. \begin{lem} \label{lem:odd:g-integral} For $\kappa =[2\ell-1]$ as in \eqref{eq:oddK}, we have $\mathfrak g^{(n)} (\kappa) \in \mathcal A$, for all $n\in \mathbb N$. \end{lem} \begin{proof} We separate the cases for $n=2m+1$ and $n= 2m$. 
Noting that \begin{equation} \label{eq:A2} \kappa -[2i-1] =[2\ell-1]-[2i-1] = [\ell-i] (q^{\ell+i-1} +q^{-\ell-i+1}), \end{equation} we have \begin{align*} \mathfrak g^{(2m)}(\kappa) &= \frac1{[2m]!} \prod_{i=1-m}^{m} (\kappa-[2i-1]) = \qbinom{\ell+m-1}{2m} \prod_{i=1-m}^{m}(q^{\ell+i-1} +q^{-\ell-i+1} ) \in \mathcal A. \end{align*} Similarly, using \eqref{eq:A2} we have \begin{align*} \mathfrak g^{(2m+1)}(\kappa) &= \frac1{[2m+1]!} \kappa \prod_{i=1-m}^{m} (\kappa-[2i-1]) \\ &= \frac1{[2m+1]!} \prod_{i=-m}^{m} (\kappa-[2i-1]) - \frac1{[2m]!} \prod_{i=1-m}^{m} (\kappa-[2i-1]) \\ &=\qbinom{\ell+m}{2m+1} \prod_{i=-m}^{m}(q^{\ell+i-1} +q^{-\ell-i+1} ) - \qbinom{\ell+m-1}{2m} \prod_{i=1-m}^{m}(q^{\ell+i-1} +q^{-\ell-i+1} ) \in \mathcal A. \end{align*} The lemma is proved. \end{proof} \begin{prop} \label{prop:fg:odd} For $m \in \mathbb N$, we have \begin{align*} \mathfrak p^{(2m)} &= \sum_{a=0}^{m-1} q^{(3-2m)a} \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a)}, \\ \mathfrak p^{(2m+1)} &= \sum_{a=0}^{m-1} q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a+1)}. \end{align*} In particular, we have $\mathfrak p^{(n)} (\kappa) \in \mathcal A$, for all $n\in \mathbb N$ and all $\kappa =[2\ell-1]$ with $\ell \in \mathbb Z$. \end{prop} \begin{proof} It follows from these formulae for $\mathfrak p^{(n)}$ and Lemma~\ref{lem:odd:g-integral} that $\mathfrak p^{(n)} ([2\ell-1]) \in \mathcal A$. Let us prove these formulae. Note that \begin{align} \label{recursive:g2} \begin{split} x \cdot \mathfrak g^{(2a-1)} &=[2a] \mathfrak g^{(2a)} + [2a-1] \mathfrak g^{(2a-2)}, \\ x \cdot \mathfrak g^{(2a)} &= [2a+1] \mathfrak g^{(2a+1)}, \quad \text{ for } a\ge 1. \end{split} \end{align} We prove by induction on $m$, with the base cases for $\mathfrak p^{(1)}$ and $\mathfrak p^{(2)}$ being clear. We separate into two cases. (1) Let us prove the formula for $\mathfrak p^{(2m+1)}$, assuming the formulae for $\mathfrak p^{(2m-1)}$ and $\mathfrak p^{(2m)}$.
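Before the induction, the two claimed expansions of $\mathfrak p^{(n)}$ in terms of $\mathfrak g^{(n)}$ can be spot-checked numerically for small $m$ (a Python sketch with exact rationals at sample $q$, $x$; all helper names are ours):

```python
from fractions import Fraction as Fr

def qint(n, q):
    return (q**n - q**(-n)) / (q - q**(-1))

def qfact(n, q):
    r = Fr(1)
    for i in range(1, n + 1):
        r *= qint(i, q)
    return r

def qbinom(M, a, Q):
    # balanced Q-binomial coefficient; here used with Q = q^2
    r = Fr(1)
    for i in range(1, a + 1):
        r *= qint(M - a + i, Q) / qint(i, Q)
    return r

def p_div(n, x, q):
    # p^{(n)}(x) = p_n(x)/[n]! via the recursion (eq:pn2)
    pp, pc = Fr(0), Fr(1)
    for k in range(n):
        pp, pc = pc, x * pc + q**(2 - 2*k) * qint(k, q) * qint(k - 2, q) * pp
    return pc / qfact(n, q)

def g_div(n, x, q):
    # g^{(n)}(x) = g_n(x)/[n]! from the closed product (eq:gn-ood)
    m, r = n // 2, Fr(1)
    if n % 2:
        r = x
    for i in range(1, m + 1):
        r *= x**2 - qint(2*i - 1, q)**2
    return r / qfact(n, q)

q, x = Fr(2), Fr(3)
for m in range(1, 5):
    even = sum(q**((3 - 2*m)*a) * qbinom(m - 1, a, q**2) * g_div(2*m - 2*a, x, q)
               for a in range(m))
    odd = sum(q**((1 - 2*m)*a) * qbinom(m - 1, a, q**2) * g_div(2*m - 2*a + 1, x, q)
              for a in range(m))
    assert p_div(2*m, x, q) == even
    assert p_div(2*m + 1, x, q) == odd
```

Again, agreement at sample points is only a sanity check; the inductive argument below is what proves the expansions.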
By \eqref{eq:pn2div} and \eqref{recursive:g2} we have \begin{align*} [2m+1] \mathfrak p^{(2m+1)} &= x \mathfrak p^{(2m)} +q^{2-4m} [2m-2] \mathfrak p^{(2m-1)} \\ &= \sum_{a=0}^{m-1} q^{(3-2m)a} \qbinom{m-1}{a}_{q^2} x \cdot \mathfrak g^{(2m-2a)} \\&\qquad +\sum_{a=0}^{m-2} q^{(3-2m)a+2-4m} [2m-2] \qbinom{m-2}{a}_{q^2} \mathfrak g^{(2m-2a-1)} \\ &= \sum_{a=0}^{m-1} q^{(3-2m)a} [2m-2a+1] \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a+1)} \\&\qquad +\sum_{a=0}^{m-2} q^{(3-2m)a+2-4m} [2m-2] \qbinom{m-2}{a}_{q^2} \mathfrak g^{(2m-2a-1)} \\ &= \sum_{a=0}^{m-1} q^{(3-2m)a} [2m-2a+1] \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a+1)} \\&\qquad +\sum_{a=1}^{m-1} q^{(3-2m)(a-1)+2-4m} [2m-2] \qbinom{m-2}{a-1}_{q^2} \mathfrak g^{(2m-2a+1)} \\ &=[2m+1] \sum_{a=0}^{m-1} q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a+1)}, \end{align*} where the last equation results from shifting the second summation by $a\to a-1$ and using the identity \[ q^{2a} [2m-2a+1] \qbinom{m-1}{a}_{q^2} +q^{2a-2m-1} [2m-2] \qbinom{m-2}{a-1}_{q^2} =[2m+1] \qbinom{m-1}{a}_{q^2}. \] \vspace{3mm} (2) We shall prove the formula for $\mathfrak p^{(2m+2)}$, assuming the formulae for $\mathfrak p^{(2m+1)}$ and $\mathfrak p^{(2m)}$. By \eqref{eq:pn2div} and \eqref{recursive:g2} we have \begin{align*} &[2m+2] \mathfrak p^{(2m+2)} \\ &= x \mathfrak p^{(2m+1)} +q^{-4m} [2m-1] \mathfrak p^{(2m)} \\ &= \sum_{a=0}^{m-1} q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} x\cdot \mathfrak g^{(2m-2a+1)} + \sum_{a=0}^{m-1} q^{(3-2m)a-4m} [2m-1] \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a)} \\% &= \sum_{a=0}^{m-1} q^{(1-2m)a} \qbinom{m-1}{a}_{q^2} \left( [2m-2a+2] \mathfrak g^{(2m-2a+2)} +[2m-2a+1] \mathfrak g^{(2m-2a)} \right) \\& \qquad + \sum_{a=0}^{m-1} q^{(3-2m)a-4m} [2m-1] \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a)}. 
\end{align*} By combining the like terms for $\mathfrak g^{(2m-2a)}$ and using the identity \[ q^{(1-2m)a} [2m-2a+1] + q^{(3-2m)a-4m} [2m-1] =q^{(1-2m)(a+1)} [4m-2a], \] we have \begin{align*} [2m+2] \mathfrak p^{(2m+2)} &= \sum_{a=0}^{m-1} q^{(1-2m)a} [2m-2a+2] \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a+2)} \\& \qquad + \sum_{a=0}^{m-1} q^{(1-2m)(a+1)} [4m-2a] \qbinom{m-1}{a}_{q^2} \mathfrak g^{(2m-2a)} \\% &= [2m+2] \sum_{a=0}^{m} q^{(1-2m)a} \qbinom{m}{a}_{q^2} \mathfrak g^{(2m-2a+2)}, \end{align*} where the last equation results from shifting the second summation index from $a\to a-1$ and using the identity \[ [2m-2a+2] \qbinom{m-1}{a}_{q^2} + [4m-2a+2] \qbinom{m-1}{a-1}_{q^2} = [2m+2] \qbinom{m}{a}_{q^2}. \] The proposition is proved. \end{proof} \subsection{Formulae for $\dvp{n}$ with odd $\kappa$} \begin{thm} \label{thm:dvp:oddKappa} Let $\kappa$ be an odd $q$-integer. Then, for $m\ge 0$, we have \begin{align} \dvp{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c - b(2m-b-2c) -a(b-a)} \mathfrak p^{(2m-b-2c)}(\kappa) \label{t2m:evodd} \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}, \notag \\ \dvp{2m+1} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2} -b(2m-b-2c+1) -a(b-a)} \mathfrak p^{(2m-b-2c+1)}(\kappa) \label{t2m+1:evodd} \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \notag \end{align} \end{thm} The proof of Theorem~\ref{thm:dvp:oddKappa} will be given in \S\ref{subsec:proof:dvp:oddKappa} below. By applying the anti-involution $\varsigma$ we convert the formulae in Theorem~\ref{thm:dvp:oddKappa} as follows. \begin{thm} \label{thm:dvp:oddKappa2} Let $\kappa$ be an odd $q$-integer. 
Then we have, for $m\ge 1$, \begin{align} \dvp{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c\geq 0} (-1)^c q^{c +b(2m-b-2c) +a(b-a)} \mathfrak p^{(2m-b-2c)}(\kappa) \label{t2m:oddevKa} \\ &\qquad \qquad \qquad\quad \cdot F^{(b-a)} K^{b-2m+2c} \qbinom{h;m-c}{c} \check{E}^{(a)}, \notag \\ \dvp{2m+1} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} (-1)^c q^{3c +b(2m-b-2c+1) +a(b-a)} \mathfrak p^{(2m-b-2c+1)}(\kappa) \label{t2m+1:oddevKa} \\ &\qquad\qquad\qquad\quad \cdot F^{(b-a)} K^{b-2m+2c-1} \qbinom{h;m-c}{c} \check{E}^{(a)}. \notag \end{align} \end{thm} \begin{proof} By Lemma~\ref{lem:anti}, the anti-involution $\varsigma$ sends $\qbinom{h;1-m}{c}\mapsto (-1)^c q^{2c(c+1)} \qbinom{h;m-c}{c}$, $q\mapsto q^{-1}$, while fixing $\kappa, K, \check{E}^{(a)}, F^{(a)}$ and $\dvp{n}$. The formulae \eqref{t2m:oddevKa}--\eqref{t2m+1:oddevKa} now follow from \eqref{t2m:evodd}--\eqref{t2m+1:evodd} in Theorem~\ref{thm:dvp:oddKappa}. \end{proof} The following corollary is immediate from \eqref{eq:qbinom-hc}, Proposition~\ref{prop:fg:odd}, and Theorem~\ref{thm:dvp:oddKappa}. \begin{cor} We have $\dvp{n} \in {}_\mathcal A {\mbf U}^\imath_{\rm{ev}}$, for all $n$. \end{cor} \begin{rem} Note $\mathfrak p_n(1)=0$ for $n\ge 2$, and $\mathfrak p_0(1)=\mathfrak p_1(1)=1$. The formulae in Theorems~\ref{thm:dvp:oddKappa} and \ref{thm:dvp:oddKappa2} in the special case $\kappa=1$ recover the formulae in \cite[Theorem~4.1, Proposition~4.3]{BeW18}. \end{rem} \begin{example} The formulae of $\dvp{n}$, for $1\le n\le 3$, in Theorem~\ref{thm:dvp:oddKappa} read as follows.
\begin{align*} \dvp{1} &= F+ \check{E} + \kappa K^{-1}, \\ \dvp{2} &= b^{(2)} + q^3 [h;0]+\kappa( q^{-1}K^{-1}F +q^{-1}\check{E} K^{-1} ) + \frac{\kappa^{2}-1}{[2]} K^{-2}, \\ \dvp{3} &= b^{(3)} + q \check{E} [h;0] + q [h;0] F + \kappa( q^{-2} \check{E}^{(2)} K^{-1} + q^{-3} \check{E} K^{-1} F + q^{-2} K^{-1} F^{(2)} + q [h;0] K^{-1}) \\ & \qquad + q^{-2}\frac{\kappa^{2}-1}{[2]}(\check{E} K^{-2} + K^{-2} F) + \frac{\kappa^3 -\kappa}{[3]!} K^{-3}. \end{align*} \end{example} \subsection{The $\imath$-canonical basis for $\dot{\mbf U}_{\rm{ev}}$ with odd $\kappa$} \label{sec:iCB:evwt-oddK} Recall from \S\ref{sec:iCB:ev} the $\imath$-canonical basis on simple $\mbf U$-modules $L(\mu)$, for $\mu \in \mathbb N$. \begin{thm} \label{thm:iCB:evwt-oddK} $\quad$ \begin{enumerate} \item Let $n\in \mathbb N$. For each integer $\lambda \gg n$, the element $\dvp{n} v^+_{2\la} $ is an $\imath$-canonical basis element for $L(2\lambda)$. \item The set $\{\dvp{n} \mid n \in \mathbb N \}$ forms the $\imath$-canonical basis for ${\mbf U}^\imath$ (and an $\mathcal A$-basis in ${}_\mathcal A {\mbf U}^\imath_{\rm{ev}}$). \end{enumerate} \end{thm} \begin{proof} Let $\lambda, m \in \mathbb N$. Recall $\cbinom{m}{c}$ from \eqref{cbinom} and $\qbinom{h; a}{c}$ from \eqref{kbinom}. It follows by a direct computation using Theorem~\ref{thm:dvp:oddKappa2} and \eqref{eq:qbinom-hc} that \begin{align} \dvp{2m} v^+_{2\la} &= \sum_{b=0}^{2m} \sum_{c\geq 0} (-1)^c q^{c +b(2m-b-2c)} \mathfrak p^{(2m-b-2c)}(\kappa) F^{(b)} K^{b-2m+2c} \qbinom{h;m-c}{c} v^+_{2\la} \notag \\ &= \sum_{b=0}^{2m} \sum_{c \geq 0} q^{-2c^2-c +(b-2\lambda)(2m-b-2c)} \mathfrak p^{(2m-b-2c)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la} .
\label{t2m:evwt-oF} \end{align} Similarly using Theorem~\ref{thm:dvp:oddKappa2} we have \begin{align} \dvp{2m+1} v^+_{2\la} &= \sum_{b=0}^{2m+1} \sum_{c\geq 0} (-1)^c q^{3c +b(2m-b-2c+1)} \mathfrak p^{(2m-b-2c+1)}(\kappa) F^{(b)} K^{b-2m+2c-1} \qbinom{h;m-c}{c} v^+_{2\la} \notag \\ &= \sum_{b=0}^{2m+1} \sum_{c \geq 0} q^{-2c^2+c +(b-2\lambda)(2m-b-2c+1)} \mathfrak p^{(2m-b-2c+1)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la} . \label{t2m+1:evwt-oF} \end{align} By an argument similar to that for \eqref{eq:lattice}, using \eqref{t2m:evwt-oF}--\eqref{t2m+1:evwt-oF} we obtain \begin{align*} \dvp{n} v^+_{2\la} & \in F^{(n)} v^+_{2\la} + \sum_{b<n}q^{-1} \mathbb Z[q^{-1}] F^{(b)} v^+_{2\la} , \qquad \text{ for } \lambda \gg n. \end{align*} The second statement now follows from the definition of the $\imath$-canonical basis for ${\mbf U}^\imath$ using the projective system $\{ L(2\lambda) \}_{\lambda\ge 0}$; cf. \cite[\S6]{BW16}. \end{proof} \subsection{Proof of Theorem~\ref{thm:dvp:oddKappa} } \label{subsec:proof:dvp:oddKappa} We prove the formulae for $\dvp{n}$ by induction on $n$, in two steps (1)--(2) below. The base cases when $n=1,2$ are clear. (1) We shall prove the formula \eqref{t2m+1:evodd} for $\dvp{2m+1}$, assuming the formula \eqref{t2m:evodd} for $\dvp{2m}$. Recall $[2m+1] \dvp{2m+1} = t \cdot \dvp{2m}$, and $t= F + \check{E} +\kappa K^{-1}$. Let us compute \[ I=\check{E} \dvp{2m}, \qquad II=F \dvp{2m}, \qquad III=\kappa K^{-1} \dvp{2m}.
\] First we have \begin{align*} I &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c - b(2m-b-2c) -a(b-a)} [a+1] \mathfrak p^{(2m-b-2c)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a+1)} \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}, \\% &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c - (b-1)(2m-b-2c+1) -(a-1)(b-a)} [a] \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}, \end{align*} where the last equation is obtained by shifting indices $a\to a-1$, $b\to b-1$. Using \eqref{FYn} we have \begin{align*} II &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c - b(2m-b-2c) -a(b-a)} \mathfrak p^{(2m-b-2c)}(\kappa) \\ & \qquad\quad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \qbinom{h;1-m}{c} K^{b-2m+2c} F^{(b-a)} =II^1 +II^2, \\ \end{align*} where $II^1$ and $II^2$ are the two natural summands associated to the plus sign. By shifting index $b\to b-1$, we further have \begin{align*} II^1 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c -(b+1)(2m-b-2c+1) -a(b-a+1)} [b-a] \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \qbinom{h;-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} II^2 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c-2}{2}+2c - (b+1)(2m-b-2c+1) -(a+1)(b-a)-2} \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} K^{b-2m+2c-1} F^{(b-a)}. 
\end{align*} Using \eqref{eq:pn2div} we also compute \begin{align*} III &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c - b(2m-b-2c) -a(b-a)-2a} \kappa \cdot \mathfrak p^{(2m-b-2c)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\ &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c}{2}+2c - b(2m-b-2c) -a(b-a)-2a} [2m-b-2c+1] \\ &\qquad \qquad \qquad\quad \cdot \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\ &\quad - \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c\geq 0} q^{\binom{2c-2}{2}+2c - (b+2) (2m-b-2c+2) -a(b-a)-2a} [2m-b-2c] \\ &\qquad \qquad \qquad\quad \cdot \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \qbinom{h;1-m}{c-1} K^{b-2m+2c-3} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) Collecting the formulae for $I, II^1, II^2$ and $III$ gives us \[ t \cdot \dvp{2m} = \sum_{0 \le a \le b \le 2m+1} \sum_{c\ge 0} \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \mathscr H_{a,b,c} K^{b-2m+2c-1} F^{(b-a)}, \] where \begin{align*} \mathscr H_{a,b,c} := & \; q^{\binom{2c}{2}+2c - (b-1)(2m-b-2c+1) -(a-1)(b-a)} [a] \qbinom{h;1-m}{c} \\ &+q^{\binom{2c}{2}+2c -(b+1)(2m-b-2c+1) -a(b-a+1)} [b-a] \qbinom{h;-m}{c} \\ &+q^{\binom{2c-2}{2}+2c - (b+1)(2m-b-2c+1) -(a+1)(b-a)-2} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} \\ &+q^{\binom{2c}{2}+2c - b(2m-b-2c) -a(b-a)-2a} [2m-b-2c+1] \qbinom{h;1-m}{c} \\ &-q^{\binom{2c-2}{2}+2c - (b+2) (2m-b-2c+2) -a(b-a)-2a} [2m-b-2c] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} Recall $[2m+1] \dvp{2m+1} = t \cdot \dvp{2m}$. To prove the formula \eqref{t2m+1:evodd} for $\dvp{2m+1}$, by the PBW basis theorem and the inductive assumption it suffices to prove the following identity, for all $a,b,c$: \begin{align} \label{eq:H4} \mathscr H_{a,b,c} = q^{\binom{2c}{2} -b(2m-b-2c+1) -a(b-a)} [2m+1] \qbinom{h;1-m}{c}.
\end{align} Thanks to $q^{2m-a+1} [a] -[2m+1] = q^{-a} [a-2m-1]$, we can combine the RHS\eqref{eq:H4} with the first summand of LHS\eqref{eq:H4}. Hence, after canceling out the $q$-powers $q^{\binom{2c}{2}- b(2m-b-2c+1) -a(b-a) -a}$ on both sides, we see that \eqref{eq:H4} is equivalent to the following identity, for all $a, b, c$: \begin{equation} \label{ABCD4} \mathscr A+\mathscr B+\mathscr C+\mathscr D_1 +\mathscr D_2=0, \end{equation} where \begin{align*} \mathscr A &= [a-2m-1] \qbinom{h;1-m}{c}, \\ \mathscr B &= q^{4c-2m+b-1} [b-a] \qbinom{h;-m}{c}, \\ \mathscr C &= q^{2a-2m} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1}, \\ \mathscr D_1 &= q^{2c+b-a} [2m-b-2c+1] \qbinom{h;1-m}{c}, \\ \mathscr D_2 &= - q^{2c-4m+b-a-1} [2m-b-2c] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} Let us prove the identity \eqref{ABCD4}. Using \eqref{kbinom2}, we can write $\mathscr B=\mathscr B_1+\mathscr B_2$, where \begin{align*} \mathscr B_1 &= q^{4c-2m+b-1} [b-a] \qbinom{h;1-m}{c}, \quad \mathscr B_2 = - q^{4c-6m+b-1} [b-a] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} Noting that \[ q^{2a-2m}\frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} = q^{2m-4c-a}\frac{( q^{4c-4m}K^{-2}-1)}{q^2-1} + q^{-2c-1} [2m-2c-a], \] we rewrite $\mathscr C=\mathscr C_1+\mathscr C_2$, where \begin{align*} \mathscr C_1 &= q^{2m-2c-a-1} [2c] \qbinom{h; 1-m}{c}, \quad \mathscr C_2 = q^{-2c-1} [2m-2c-a] \qbinom{h; 1-m}{c-1}. \end{align*} A direct computation gives us \begin{align*} \mathscr A+\mathscr B_1+\mathscr C_1+\mathscr D_1 &=(1-q^{-2}) [2c] [2m-2c-a] \qbinom{h; 1-m}{c}, \\ \mathscr C_2 +(\mathscr B_2+\mathscr D_2) &= \mathscr C_2 - q^{2c-4m-1} [2m-2c-a] \qbinom{h; 1-m}{c-1} K^{-2} \\ &= - (1-q^{-2}) [2c] [2m-2c-a] \qbinom{h; 1-m}{c}. \end{align*} Summing up these two equations, we have $\mathscr A+\mathscr B+\mathscr C+\mathscr D_1+\mathscr D_2=0,$ whence \eqref{ABCD4}, completing Step~(1).
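For the reader's convenience, we record the elementary expansion behind the $q$-integer identity invoked in Step~(1) above; with $[n]=\frac{q^n-q^{-n}}{q-q^{-1}}$, we have
\begin{align*}
q^{2m-a+1} [a] -[2m+1]
&= \frac{(q^{2m+1}-q^{2m-2a+1})-(q^{2m+1}-q^{-2m-1})}{q-q^{-1}} \\
&= \frac{q^{-2m-1}-q^{2m-2a+1}}{q-q^{-1}}
= q^{-a} [a-2m-1].
\end{align*}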
\vspace{3mm} (2) Assuming the formulae for $\dvp{n}$ with $n\le 2m+1$, we shall now prove the following formula \eqref{t2m+2:evodd} for $\dvp{2m+2}$ (obtained with $m$ replaced by $m+1$ in \eqref{t2m:evodd}): \begin{align} \dvp{2m+2} &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} +2c- b(2m-b-2c+2) -a(b-a)} \mathfrak p^{(2m-b-2c+2)}(\kappa) \label{t2m+2:evodd} \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \qbinom{h;-m}{c} K^{b-2m+2c-2} F^{(b-a)}. \notag \end{align} The proof is based on the recursion $t \cdot \dvp{2m+1} =[2m+2] \dvp{2m+2} +[2m+1] \dvp{2m}$. Recall $t= F +\check{E} +\kappa K^{-1}$ and $\dvp{2m+1}$ from \eqref{t2m+1:evodd}. We shall compute \[ \texttt{I}=\check{E} \dvp{2m+1}, \qquad \texttt{II}=F \dvp{2m+1}, \qquad \texttt{III} =\kappa K^{-1} \dvp{2m+1}, \] respectively. First, by shifting indices $a\to a-1$ and $b\to b-1$, we have \begin{align*} \texttt{I} &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c+1) -a(b-a)} [a+1] \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a+1)} \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\ &=\sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - (b-1)(2m-b-2c+2) -(a-1)(b-a)} [a] \mathfrak p^{(2m-b-2c+2)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} Using \eqref{FYn} we also have \begin{align*} \texttt{II} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c+1) -a(b-a)} \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\quad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \qbinom{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)} =\texttt{II}_1 +\texttt{II}_2, \end{align*} where $\texttt{II}_1$ and $\texttt{II}_2$ are the two natural summands associated to the plus sign.
By shifting the index $b\to b-1$, we further have \begin{align*} \texttt{II}_1 &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - (b+1)(2m-b-2c+2) -a(b-a-1)-2a} [b-a] \mathfrak p^{(2m-b-2c+2)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \qbinom{h;-m}{c} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} \texttt{II}_2 &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} - (b+1)(2m-b-2c+2) -(a+1)(b-a)} \mathfrak p^{(2m-b-2c+2)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} Using \eqref{eq:pn2div} we also compute \begin{align*} \texttt{III} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c+1) -a(b-a)-2a} \kappa \cdot \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-2} F^{(b-a)} \\ &= \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c+1) -a(b-a)-2a} [2m-b-2c+2] \\ &\qquad\qquad\qquad\quad \cdot \mathfrak p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \qbinom{h;1-m}{c} K^{b-2m+2c-2} F^{(b-a)} \\% & - \sum_{b=0}^{2m+2} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} - (b+2)(2m-b-2c+3) -a(b-a)-2a+2} [2m-b-2c+1] \\ &\qquad\qquad\qquad\quad \cdot \mathfrak p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \qbinom{h;1-m}{c-1} K^{b-2m+2c-4} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) 
Collecting the formulae for $\texttt{I}, \texttt{II}_1, \texttt{II}_2$, and $\texttt{III}$, we obtain \[ t \cdot \dvp{2m+1} = \sum_{0 \le a \le b \le 2m+2} \sum_{c\ge 0} \mathfrak p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \mathscr L_{a,b,c} K^{b-2m+2c-2} F^{(b-a)}, \] where \begin{align*} \mathscr L_{a,b,c} := & \; q^{\binom{2c}{2} - (b-1)(2m-b-2c+2) -(a-1)(b-a)} [a] \qbinom{h;1-m}{c} \\ &+q^{\binom{2c}{2} - (b+1)(2m-b-2c+2) -a(b-a-1)-2a} [b-a] \qbinom{h;-m}{c} \\ &+q^{\binom{2c-2}{2} - (b+1)(2m-b-2c+2) -(a+1)(b-a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} \\ &+q^{\binom{2c}{2} - b(2m-b-2c+1) -a(b-a)-2a} [2m-b-2c+2] \qbinom{h;1-m}{c} \\ &- q^{\binom{2c-2}{2} - (b+2)(2m-b-2c+3) -a(b-a)-2a+2} [2m-b-2c+1] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} On the other hand, using \eqref{t2m:evodd} (with an index shift $c\to c-1$) and \eqref{t2m+2:evodd} we write \begin{align*} [2m+2] & \dvp{2m+2} +[2m+1] \dvp{2m} \\ &=\sum_{0 \le a \le b \le 2m+2} \sum_{c \geq 0} \mathfrak p^{(2m-b-2c+2)}(\kappa) \check{E}^{(a)} \mathscr R_{a,b,c} K^{b-2m+2c-2} F^{(b-a)}, \end{align*} where \begin{align*} \mathscr R_{a,b,c} &:= q^{\binom{2c}{2} +2c- b(2m-b-2c+2) -a(b-a)} [2m+2] \qbinom{h;-m}{c} \\ &\qquad + q^{\binom{2c-2}{2} +2c- b(2m-b-2c+2) -a(b-a)-2} [2m+1] \qbinom{h;1-m}{c-1}. \end{align*} To prove the formula \eqref{t2m+2:evodd} for $\dvp{2m+2}$, it suffices to show that, for all $a,b,c$, \begin{equation} \label{L=R4} \mathscr L_{a,b,c}= \mathscr R_{a,b,c}.
\end{equation} Canceling the $q$-powers $q^{\binom{2c}{2} - (b-1)(2m-b-2c+2) -(a-1)(b-a)}$ on both sides, we see that the identity \eqref{L=R4} is equivalent to the following identity, for all $a,b,c$: \begin{align} \label{l=r4} \begin{split} [a] \qbinom{h;1-m}{c} &+q^{4c-4m+b-4} [b-a] \qbinom{h;-m}{c} \\ &+q^{2a-4m-1} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \qbinom{h;1-m}{c-1} \\ &+q^{2c-2m+b-a-2} [2m-b-2c+2] \qbinom{h;1-m}{c} \\ &- q^{2c-6m+b-a-3} [2m-b-2c+1] \qbinom{h;1-m}{c-1} K^{-2} \\ &= q^{4c-2m+a-2} [2m+2] \qbinom{h;-m}{c} + q^{a-2m-1} [2m+1] \qbinom{h;1-m}{c-1}. \end{split} \end{align} By combining the second summand of LHS with the first summand of RHS as well as combining the third summand of LHS with the second summand of RHS, the identity \eqref{l=r4} is reduced to the following equivalent identity, for all $a, b, c$: \begin{equation} \label{WXYZ4} \mathscr W+\mathscr X+\mathscr Y+\mathscr Z_1+\mathscr Z_2=0, \end{equation} where \begin{align*} \mathscr W & =[a] \qbinom{h;1-m}{c}, \\ \mathscr X & =q^{4c-2m+b-2} [b-a-2m-2] \qbinom{h;-m}{c}, \\ \mathscr Y & = \frac{q^{-a-4m-1}K^{-2}-q^{a+1}}{q^2-1} \qbinom{h;1-m}{c-1}, \\ \mathscr Z_1 & =q^{2c-2m+b-a-2} [2m-b-2c+2] \qbinom{h;1-m}{c}, \\ \mathscr Z_2 & = - q^{2c-6m+b-a-3} [2m-b-2c+1] \qbinom{h;1-m}{c-1} K^{-2}. \end{align*} Let us finally prove the identity \eqref{WXYZ4}. Using \eqref{kbinom2}, we can write $\mathscr X=\mathscr X_1+\mathscr X_2$, where \begin{align*} \mathscr X_1 &=q^{4c-2m+b-2} [b-a-2m-2] \qbinom{h; 1-m}{c}, \\ \mathscr X_2 &= - q^{4c-6m+b-2} [b-a-2m-2] \qbinom{h; 1-m}{c-1} K^{-2}. \end{align*} Noting that \[ \frac{q^{-a-4m-1}K^{-2}-q^{a+1}}{q^2-1} = q^{-4c-a-1}\frac{( q^{4c-4m}K^{-2}-1)}{q^2-1} + q^{-2c-1} [-2c-a-1], \] we rewrite $\mathscr Y=\mathscr Y_1+\mathscr Y_2$, where \begin{align*} \mathscr Y_1 &=q^{-2c-a-2} [2c] \qbinom{h; 1-m}{c}, \qquad \mathscr Y_2 = -q^{-2c-1} [2c+a+1] \qbinom{h; 1-m}{c-1}. 
\end{align*} A direct computation shows that \begin{align*} \mathscr W+\mathscr X_1+\mathscr Y_1+\mathscr Z_1 &= - (1-q^{-2}) [2c+a+1] [2c] \qbinom{h; 1-m}{c}, \\ (\mathscr X_2+\mathscr Z_2) +\mathscr Y_2 &= q^{2c-4m-1} [2c+a+1] \qbinom{h; 1-m}{c-1} K^{-2} +\mathscr Y_2 \\ &= (1-q^{-2}) [2c+a+1] [2c] \qbinom{h; 1-m}{c}. \end{align*} Summing up these two equations, we obtain $\mathscr W+\mathscr X+\mathscr Y+\mathscr Z_1+\mathscr Z_2=0$, whence \eqref{WXYZ4}, completing Step~(2). The proof of Theorem~\ref{thm:dvp:oddKappa} is completed. \qed \section{The $\imath$-divided powers $\dvd{n}$ for odd weights and odd $\kappa$} \label{sec:oddoddK} In this section we always take $\kappa$ to be an odd $q$-integer, i.e., \begin{equation*} \kappa =[2\ell-1], \qquad \text{ for } \ell \in \mathbb Z. \end{equation*} \subsection{Definition of $\dvd{n}$ for odd $\kappa$} \begin{definition} Set $\dvd{1} =t= F +\check{E} +\kappa K^{-1}$. The divided powers $\dvd{n}$, for $n\ge 1$, are defined by the recursive relations: \begin{align} \label{eq:tt:evoddKa} \begin{split} t \cdot \dvd{2a-1} &=[2a] \dvd{2a}, \\ t \cdot \dvd{2a} &= [2a+1] \dvd{2a+1} + [2a] \dvd{2a-1}, \quad \text{ for } a\ge 1. \end{split} \end{align} \end{definition} Equivalently, we have the following closed formula for $\dvd{n}$: \begin{align} \label{def:dvd:evoddKa} \dvd{n} = \begin{cases} \frac{t}{[2a]!} (t - [-2a+2])(t -[-2a+4]) \cdots (t -[2a-4]) (t - [2a-2]), & \text{if } n=2a, \\ \\ \frac{1}{[2a+1]!} (t - [-2a])(t -[-2a+2]) \cdots (t -[2a-2]) (t - [2a]), &\text{if } n=2a+1. \end{cases} \end{align} Note that the formulae for $\dvd{n}$ with odd $\kappa$ are formally identical to the formulae for $\dvev{n}$ with even $\kappa$. \subsection{Formulae for $\dvd{n}$ with odd $\kappa$} Recall the polynomials $\mathfrak p^{(n)}$, for $n\ge 0$, from \S\ref{subsec:pn2}. \begin{thm} \label{thm:dvd:oddKappa} Assume $\kappa$ is an odd $q$-integer.
Then we have, for $m\ge 1$, \begin{align} \dvd{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)} \mathfrak p^{(2m-b-2c)}(\kappa) \label{t2moddodd} \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c} F^{(b-a)}, \notag \\ \dvd{2m-1} &= \sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c-1) -a(b-a)} \mathfrak p^{(2m-b-2c-1)}(\kappa) \label{t2m-1oddodd} \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c+1} F^{(b-a)}. \notag \end{align} \end{thm} The proof of Theorem~\ref{thm:dvd:oddKappa} will be given in \S\ref{subsec:proof:dvd:oddKappa} below. By applying the anti-involution $\varsigma$ we convert the formulae in Theorem~\ref{thm:dvd:oddKappa} as follows. \begin{thm} \label{thm:dvd:oddKappa2} Assume $\kappa$ is an odd $q$-integer. Then we have, for $m\ge 1$, \begin{align} \dvd{2m} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} (-1)^c q^{c +b(2m-b-2c) +a(b-a)} \mathfrak p^{(2m-b-2c)}(\kappa) \label{t2moddodd2} \\ &\qquad\qquad\qquad\quad \cdot F^{(b-a)} K^{b-2m+2c} \LR{h;m-c}{c} \check{E}^{(a)}, \notag \\% \dvd{2m-1} &= \sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} (-1)^c q^{-c+ b(2m-b-2c-1) +a(b-a)} \mathfrak p^{(2m-b-2c-1)}(\kappa) \label{t2m-1oddodd2} \\ &\qquad\qquad\qquad\quad \cdot F^{(b-a)} K^{b-2m+2c+1} \LR{h;m-c}{c} \check{E}^{(a)}. \notag \end{align} \end{thm} \begin{proof} Recall from Lemma~\ref{lem:anti2} that the anti-involution $\varsigma$ on $\mbf U$ fixes $F, \check{E}, K, \dvd{n}, \kappa$ while sending $q \mapsto q^{-1}$, $\LR{h;2-m}{c}\mapsto (-1)^c q^{2c(c-1)} \LR{h;m-c}{c}$. The formulae \eqref{t2moddodd2}--\eqref{t2m-1oddodd2} now follow by applying $\varsigma$ to the formulae \eqref{t2moddodd}--\eqref{t2m-1oddodd} in Theorem~\ref{thm:dvd:oddKappa}. \end{proof} The following corollary is immediate from \eqref{eq:LRhc}, Proposition~\ref{prop:fg:odd}, and Theorem~\ref{thm:dvd:oddKappa}. 
\begin{cor} We have $\dvd{n} \in {}_\mathcal A {\mbf U}^\imath_{\rm{odd}}$, for all $n$. \end{cor} \begin{rem} Note $\mathfrak p_n(1)=0$ for $n\ge 2$, and $\mathfrak p_0(1)=\mathfrak p_1(1)=1$. The formulae in Theorems~\ref{thm:dvd:oddKappa} and \ref{thm:dvd:oddKappa2} in the special case $\kappa=1$ recover the formulae in \cite[Theorem~5.1, Proposition~5.3]{BeW18}. \end{rem} \begin{example} The formulae of $\dvd{n}$, for $1\le n\le 3$, in Theorem~\ref{thm:dvd:oddKappa} read as follows. \begin{align*} \dvd{1} &= F +\check{E} + \kappa K^{-1}, \\ \dvd{2} &= \check{E}^{(2)} +q^{-1} \check{E} F + F^{(2)} + q^{-1} [[h;1]] + \kappa (q^{-1}K^{-1}F + q^{-1} \check{E} K^{-1}) + \frac{\kappa^2-1}{[2]} K^{-2}, \\ \dvd{3} & = b^{(3)} + q[[h;0]]F+ q \check{E} [[h;0]] \\ &\qquad + \big(q^{-2} \check{E}^{(2)}K^{-1} +q^{-3} \check{E} K^{-1}F + q^{-2} K^{-1}F^{(2)} + q [[h;0]] K^{-1} \big) \kappa \\ &\qquad + \frac{(\kappa^2-1)}{[2]} (q^{-2}\check{E} K^{-2} +q^{-2}K^{-2}F) +\frac{\kappa^3 -\kappa}{[3]!} K^{-3}. \end{align*} \end{example} \subsection{The $\imath$-canonical basis for $\dot{\mbf U}_{\text{odd}}$ with odd $\kappa$} \label{sec:iCB:oddwt-oddK} Recall from \S\ref{sec:iCB:ev} the $\imath$-canonical basis on simple $\mbf U$-modules $L(\mu)$, for $\mu \in \mathbb N$. \begin{thm} \label{thm:iCB:oddwt-oddK} $\quad$ \begin{enumerate} \item Let $n\in \mathbb N$. For each integer $\lambda \gg n$, the element $\dvd{n} v^+_{2\la+1} $ is an $\imath$-canonical basis element for $L(2\lambda+1)$. \item The set $\{\dvd{n} \mid n \in \mathbb N \}$ forms the $\imath$-canonical basis for ${\mbf U}^\imath$ (and an $\mathcal A$-basis in ${}_\mathcal A {\mbf U}^\imath_{\rm{odd}}$). \end{enumerate} \end{thm} \begin{proof} Let $\lambda, m \in \mathbb N$. Recall $\cbinom{m}{c}$ from \eqref{cbinom}.
It follows by a direct computation using Theorem~\ref{thm:dvd:oddKappa2} and \eqref{eq:LRhc} that \begin{align} \dvd{2m} v^+_{2\la+1} &= \sum_{b=0}^{2m} \sum_{c \geq 0} (-1)^c q^{c +b(2m-b-2c)} \mathfrak p^{(2m-b-2c)}(\kappa) F^{(b)} K^{b-2m+2c} \LR{h;m-c}{c} v^+_{2\la+1} \notag \\ &= \sum_{b=0}^{2m} \sum_{c \geq 0} q^{-2c^2+c +(b-2\lambda)(2m-b-2c)} \mathfrak p^{(2m-b-2c)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la+1} . \label{t2m:oddoF} \end{align} Similarly using Theorem~\ref{thm:dvd:oddKappa2} we have \begin{align} \dvd{2m-1} v^+_{2\la+1} &= \sum_{b=0}^{2m-1} \sum_{c \geq 0} (-1)^c q^{-c+ b(2m-b-2c-1)} \mathfrak p^{(2m-b-2c-1)}(\kappa) F^{(b)} K^{b-2m+2c+1} \LR{h;m-c}{c} v^+_{2\la+1} \notag \\ &= \sum_{b=0}^{2m-1} \sum_{c \geq 0} q^{-2c^2-c +(b-2\lambda)(2m-b-2c-1)} \mathfrak p^{(2m-b-2c-1)}(\kappa) \cbinom{m-\lambda-c}{c} F^{(b)} v^+_{2\la+1} . \label{t2m-1:oddoF} \end{align} By an argument similar to that for \eqref{eq:lattice}, using \eqref{t2m:oddoF}--\eqref{t2m-1:oddoF} we obtain \begin{align*} \dvd{n} v^+_{2\la+1} & \in F^{(n)} v^+_{2\la+1} + \sum_{b<n}q^{-1} \mathbb Z[q^{-1}] F^{(b)} v^+_{2\la+1} , \qquad \text{ for } \lambda \gg n. \end{align*} The second statement now follows from the definition of the $\imath$-canonical basis for ${\mbf U}^\imath$ using the projective system $\{ L(2\lambda+1) \}_{\lambda\ge 0}$; cf. \cite[\S6]{BW16}. \end{proof} \subsection{Proof of Theorem~\ref{thm:dvd:oddKappa} } \label{subsec:proof:dvd:oddKappa} We prove the formulae for $\dvd{n}$ by induction on $n$, in two steps (1)--(2) below. The base cases when $n=1,2$ are clear. (1) We shall prove the formula \eqref{t2moddodd} for $\dvd{2m}$, assuming the formula \eqref{t2m-1oddodd} for $\dvd{2m-1}$. Recall $[2m] \dvd{2m} = t \cdot \dvd{2m-1}$, and $t= F +\check{E} +\kappa K^{-1}$. Let us compute \[ I=\check{E} \dvd{2m-1}, \qquad II=F \dvd{2m-1}, \qquad III=\kappa K^{-1} \dvd{2m-1}.
\] We compute \begin{align*} I &= \sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c-1) -a(b-a)} [a+1] \mathfrak p^{(2m-b-2c-1)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a+1)} \LR{h;2-m}{c} K^{b-2m+2c+1} F^{(b-a)} \\ &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - (b-1)(2m-b-2c) -(a-1)(b-a)} [a] \mathfrak p^{(2m-b-2c)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c} F^{(b-a)}, \end{align*} where the last equation is obtained by shifting indices $a\to a-1$, $b\to b-1$. Using \eqref{FYn} we have \begin{align*} II &=\sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c-1) -a(b-a)}\mathfrak p^{(2m-b-2c-1)}(\kappa) \\ & \qquad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \LR{h;2-m}{c} K^{b-2m+2c+1} F^{(b-a)} =II^1 +II^2, \end{align*} where $II^1$ and $II^2$ are the two natural summands associated to the plus sign. By shifting the index $b\to b-1$ and then adding some zero terms, we obtain \begin{align*} II^1 &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - (b+1)(2m-b-2c) -a(b-a+1)} [b-a] \mathfrak p^{(2m-b-2c)}(\kappa) \\ &\qquad\qquad\qquad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$, $c\to c-1$ and then adding some zero terms, we also obtain \begin{align*} II^2 &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} - (b+1)(2m-b-2c) -(a+1)(b-a)}\mathfrak p^{(2m-b-2c)}(\kappa) \\ & \qquad\qquad\qquad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;2-m}{c-1} K^{b-2m+2c} F^{(b-a)}.
\end{align*} By the identity \eqref{eq:pn2div} we have \begin{align*} III &= \sum_{b=0}^{2m-1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c-1) -a(b-a)-2a} \kappa \cdot \mathfrak p^{(2m-b-2c-1)}(\kappa) \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c} F^{(b-a)} \\ &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} - b(2m-b-2c-1) -a(b-a)-2a} [2m-b-2c] \\ &\qquad\qquad\qquad\quad \cdot \mathfrak p^{(2m-b-2c)}(\kappa) \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c} F^{(b-a)} \\ &\quad - \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2} -(b+2)(2m-b-2c+1) -a(b-a)-2a +2} [2m-b-2c-1] \\ &\qquad\qquad\qquad\quad \cdot \mathfrak p^{(2m-b-2c)}(\kappa) \check{E}^{(a)} \LR{h;2-m}{c-1} K^{b-2m+2c-2} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) Collecting the formulae for $I, II^1, II^2, III$ gives us \[ t \cdot \dvd{2m-1} = \sum_{0 \le a \le b \le 2m} \sum_{c\ge 0} \mathfrak p^{(2m-b-2c)}(\kappa) \check{E}^{(a)} \mathfrak H_{a,b,c} K^{b-2m+2c} F^{(b-a)}, \] where \begin{align*} \mathfrak H_{a,b,c} := & \; q^{\binom{2c}{2} - (b-1)(2m-b-2c) -(a-1)(b-a)} [a] \LR{h;2-m}{c} \\ &+ q^{\binom{2c}{2} - (b+1)(2m-b-2c) -a(b-a+1)} [b-a] \LR{h;1-m}{c} \\ &+ q^{\binom{2c-2}{2} - (b+1)(2m-b-2c) -(a+1)(b-a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;2-m}{c-1} \\ &+ q^{\binom{2c}{2} - b(2m-b-2c-1) -a(b-a)-2a}[2m-b-2c] \LR{h;2-m}{c} \\ &- q^{\binom{2c-2}{2} -(b+2)(2m-b-2c+1) -a(b-a)-2a +2} [2m-b-2c-1] \LR{h;2-m}{c-1} K^{-2}. \end{align*} Recall $[2m] \dvd{2m} = t \cdot \dvd{2m-1}$. To prove the formula \eqref{t2moddodd} for $\dvd{2m}$, by the PBW basis theorem and the inductive assumption it suffices to prove the following identity, for all $a,b,c$: \begin{align} \label{eq:Hoo} \mathfrak H_{a,b,c} = q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)} [2m] \LR{h;2-m}{c}.
\end{align} Thanks to $q^{2m-a} [a] -[2m] =q^{-a} [a-2m]$, we can combine the RHS\eqref{eq:Hoo} with the first summand of the LHS\eqref{eq:Hoo}. Hence, after canceling out the $q$-powers $q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)-a}$ on both sides, we see that \eqref{eq:Hoo} is equivalent to the following identity, for all $a, b, c$: \begin{equation} \label{ABCD2} \mathfrak A+\mathfrak B+\mathfrak C+\mathfrak D_1 +\mathfrak D_2=0, \end{equation} where \begin{align*} \mathfrak A &= [a-2m] \LR{h;2-m}{c}, \\ \mathfrak B &= q^{b+4c-2m} [b-a] \LR{h;1-m}{c}, \\ \mathfrak C &= q^{2a-2m+3} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;2-m}{c-1}, \\ \mathfrak D_1 &= q^{2c+b-a} [2m-b-2c] \LR{h;2-m}{c}, \\ \mathfrak D_2 &= - q^{2c-4m+b-a+3} [2m-b-2c-1] \LR{h;2-m}{c-1} K^{-2}. \end{align*} Let us prove the identity \eqref{ABCD2}. Using \eqref{eq:commLR}, we can write $\mathfrak B=\mathfrak B_1+\mathfrak B_2$, where \begin{align*} \mathfrak B_1 &= q^{b+4c-2m} [b-a] \LR{h;2-m}{c}, \quad \mathfrak B_2 = - q^{b+4c-6m+4} [b-a] \LR{h;2-m}{c-1} K^{-2}. \end{align*} Noting that \[ \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} = q^{4m-4c-3a-4}\frac{( q^{4c-4m+4}K^{-2}-q^2)}{q^2-1} + q^{2m-2c-2a-2} [2m-2c-a-1], \] we rewrite $\mathfrak C=\mathfrak C_1+\mathfrak C_2$, where \begin{align*} \mathfrak C_1 &= q^{2m-a-2c-2} [2c] \LR{h;2-m}{c}, \qquad \mathfrak C_2 = q^{1-2c} [2m-2c-a-1] \LR{h;2-m}{c-1}. \end{align*} A direct computation gives us \begin{align*} \mathfrak A+\mathfrak B_1+\mathfrak C_1+\mathfrak D_1 &= (1-q^{-2}) [2c] [2m-2c-a-1] \LR{h;2-m}{c}, \\ \mathfrak C_2 +(\mathfrak B_2+\mathfrak D_2) &= \mathfrak C_2 - q^{2c-4m+3} [2m-2c-a-1] \LR{h;2-m}{c-1} K^{-2} \\ &= - (1-q^{-2}) [2c] [2m-2c-a-1] \LR{h;2-m}{c}. \end{align*} Summing up these two equations, we have $\mathfrak A+\mathfrak B+\mathfrak C+\mathfrak D_1+\mathfrak D_2=0,$ whence \eqref{ABCD2}, completing Step~(1).
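For completeness, the $q$-integer identity invoked in Step~(1) above is verified directly from $[n]=\frac{q^n-q^{-n}}{q-q^{-1}}$:
\begin{align*}
q^{2m-a} [a] -[2m]
= \frac{(q^{2m}-q^{2m-2a})-(q^{2m}-q^{-2m})}{q-q^{-1}}
= \frac{q^{-2m}-q^{2m-2a}}{q-q^{-1}}
= q^{-a} [a-2m].
\end{align*}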
\vspace{3mm} (2) Assuming the formulae for $\dvd{n}$ with $n\le 2m$, we shall now prove the following formula for $\dvd{2m+1}$ (obtained from \eqref{t2m-1oddodd} with $m$ replaced by $m+1$): \begin{align} \dvd{2m+1} &= \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2} -b(2m-b-2c+1) -a(b-a)} \mathfrak p^{(2m-b-2c+1)}(\kappa) \label{t2m+1oddodd} \\ &\qquad\qquad\qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \notag \end{align} We first need to compute $t \cdot \dvd{2m}$, where we recall $t= F +\check{E} +\kappa K^{-1}$ and $\dvd{2m}$ from \eqref{t2moddodd}. We shall compute \[ \texttt I=\check{E} \dvd{2m}, \qquad \texttt{II}=F \dvd{2m}, \qquad \texttt{III} =\kappa K^{-1} \dvd{2m}, \] respectively. First we have \begin{align*} \texttt{I} &=\sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)} [a+1] \mathfrak p^{(2m-b-2c)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a+1)} \LR{h;2-m}{c} K^{b-2m+2c} F^{(b-a)} \\ &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -(b-1)(2m-b-2c+1)-(a-1)(b-a)} [a] \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c-1} F^{(b-a)}, \end{align*} where the last equation is obtained by shifting the indices $a\to a-1$, $b\to b-1$. We also have \begin{align*} \texttt{II} &= \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)} \mathfrak p^{(2m-b-2c)}(\kappa) \\ &\qquad \cdot \left( q^{-2a} \check{E}^{(a)} F + \check{E}^{(a-1)} \frac{q^{3-3a}K^{-2}-q^{1-a}}{q^2-1} \right ) \LR{h;2-m}{c} K^{b-2m+2c} F^{(b-a)} =\texttt{II}_1 +\texttt{II}_2, \end{align*} where $\texttt{II}_1$ and $\texttt{II}_2$ are the two natural summands associated to the plus sign.
By shifting the index $b\to b-1$, we further have \begin{align*} \texttt{II}_1 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -(b+1)(2m-b-2c+1) -a(b-a+1)} [b-a] \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \qquad \qquad\quad \cdot \check{E}^{(a)} \LR{h;1-m}{c} K^{b-2m+2c-1} F^{(b-a)}. \end{align*} By shifting the indices $a\to a+1$, $b\to b+1$ and $c\to c-1$, we further have \begin{align*} \texttt{II}_2 &=\sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2}-2c -(b+1)(2m-b-2c+1)-(a+1)(b-a)+2} \mathfrak p^{(2m-b-2c+1)}(\kappa) \\ &\qquad \cdot \check{E}^{(a)} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;2-m}{c-1} K^{b-2m+2c-1} F^{(b-a)}. \end{align*} Using the identity \eqref{eq:pn2div} we also compute \begin{align*} \texttt{III} = & \sum_{b=0}^{2m} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)-2a} \kappa \cdot \mathfrak p^{(2m-b-2c)}(\kappa) \\ & \qquad\quad \cdot \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\% = & \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)-2a} [2m-b-2c+1] \\ & \qquad\quad \cdot \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \LR{h;2-m}{c} K^{b-2m+2c-1} F^{(b-a)} \\ & - \sum_{b=0}^{2m+1} \sum_{a=0}^{b} \sum_{c \geq 0} q^{\binom{2c-2}{2}-2c -(b+2)(2m-b-2c+2)-a(b-a)-2a+4} [2m-b-2c] \\ & \qquad\quad \cdot \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \LR{h;2-m}{c-1} K^{b-2m+2c-3} F^{(b-a)}. \end{align*} (Note that we have shifted the index $c\to c-1$ in the last summand above.) 
Collecting the formulae for $\texttt{I}, \texttt{II}_1, \texttt{II}_2, \texttt{III}$, we obtain an expression of the form \[ t \cdot \dvd{2m} = \sum_{0 \le a \le b \le 2m+1} \sum_{c\ge 0} \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \mathfrak L_{a,b,c} K^{b-2m+2c-1} F^{(b-a)}, \] where \begin{align*} \mathfrak L_{a,b,c} :=& \; q^{\binom{2c}{2}-2c -(b-1)(2m-b-2c+1)-(a-1)(b-a)} [a] \LR{h;2-m}{c} \\ &+q^{\binom{2c}{2}-2c -(b+1)(2m-b-2c+1) -a(b-a+1)} [b-a] \LR{h;1-m}{c} \\ &+q^{\binom{2c-2}{2}-2c -(b+1)(2m-b-2c+1)-(a+1)(b-a)+2} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;2-m}{c-1} \\ &+q^{\binom{2c}{2}-2c -b(2m-b-2c)-a(b-a)-2a} [2m-b-2c+1] \LR{h;2-m}{c} \\ &-q^{\binom{2c-2}{2}-2c -(b+2)(2m-b-2c+2)-a(b-a)-2a+4} [2m-b-2c] \LR{h;2-m}{c-1} K^{-2}. \end{align*} On the other hand, using \eqref{t2m-1oddodd} (with an index shift $c\to c-1$) and \eqref{t2m+1oddodd} we write \begin{align*} [2m+1] & \dvd{2m+1} +[2m] \dvd{2m-1} \\ & =\sum_{0 \le a \le b \le 2m+1} \sum_{c \geq 0} \mathfrak p^{(2m-b-2c+1)}(\kappa) \check{E}^{(a)} \mathfrak R_{a,b,c} K^{b-2m+2c-1} F^{(b-a)}, \end{align*} where \begin{align*} \mathfrak R_{a,b,c} :=& q^{\binom{2c}{2} -b(2m-b-2c+1) -a(b-a)} [2m+1] \LR{h;1-m}{c} \\ &\quad + q^{\binom{2c-2}{2} - b(2m-b-2c+1) -a(b-a)} [2m] \LR{h;2-m}{c-1}. \end{align*} To prove the formula \eqref{t2m+1oddodd} for $\dvd{2m+1}$, it suffices to show that, for all $a,b,c$, \begin{equation} \label{L=R2} \mathfrak L_{a,b,c}=\mathfrak R_{a,b,c}. 
\end{equation} Canceling the $q$-powers $q^{\binom{2c}{2}-2c -(b-1)(2m-b-2c+1)-(a-1)(b-a)}$ on both sides, we see that the identity \eqref{L=R2} is equivalent to the following identity, for all $a,b,c$: \begin{align} \label{l=r2} \begin{split} [a] \LR{h;2-m}{c} &+q^{4c-4m+b-2} [b-a] \LR{h;1-m}{c} \\ &+q^{2a-4m+3} \frac{q^{-3a}K^{-2}-q^{-a}}{q^2-1} \LR{h;2-m}{c-1} \\ &+q^{2c-2m+b-a-1} [2m-b-2c+1] \LR{h;2-m}{c} \\ &-q^{2c-6m+b-a+2} [2m-b-2c] \LR{h;2-m}{c-1} K^{-2} \\ =& \; q^{4c-2m+a-1} [2m+1] \LR{h;1-m}{c} + q^{a-2m+2} [2m] \LR{h;2-m}{c-1}. \end{split} \end{align} By combining the second summand of the LHS\eqref{l=r2} with the first summand of the RHS\eqref{l=r2} as well as combining the third summand of the LHS with the second summand of the RHS, the identity \eqref{l=r2} is reduced to the following equivalent identity, for all $a, b, c$: \begin{equation} \label{WXYZ2} \mathfrak W+ \mathfrak X+\mathfrak Y+\mathfrak Z_1+\mathfrak Z_2=0, \end{equation} where \begin{align*} \mathfrak W&= [a] \LR{h;2-m}{c}, \\ \mathfrak X&= q^{4c-2m+b-1} [b-a-2m-1] \LR{h;1-m}{c}, \\ \mathfrak Y&= \frac{q^{3-a-4m}K^{-2}-q^{3+a}}{q^2-1} \LR{h;2-m}{c-1}, \\ \mathfrak Z_1&= q^{2c-2m+b-a-1} [2m-b-2c+1] \LR{h;2-m}{c}, \\ \mathfrak Z_2&= -q^{2c-6m+b-a+2} [2m-b-2c] \LR{h;2-m}{c-1} K^{-2}. \end{align*} Let us finally prove the identity \eqref{WXYZ2}. Using \eqref{eq:commLR}, we can write $\mathfrak X=\mathfrak X_1+\mathfrak X_2$, where \begin{align*} \mathfrak X_1 &=q^{4c-2m+b-1} [b-a-2m-1] \LR{h; 2-m}{c}, \\ \mathfrak X_2 &= - q^{4c-6m+b+3} [b-a-2m-1] \LR{h; 2-m}{c-1} K^{-2}. \end{align*} Noting that \[ \frac{q^{3-a-4m}K^{-2}-q^{3+a}}{q^2-1} = q^{-4c-a-1}\frac{( q^{4c-4m+4}K^{-2}-q^2)}{q^2-1} + q^{1-2c} [-2c-a-1], \] we rewrite $\mathfrak Y=\mathfrak Y_1+\mathfrak Y_2$, where \begin{align*} \mathfrak Y_1 &=q^{-2c-a-2} [2c] \LR{h; 2-m}{c}, \qquad \mathfrak Y_2 = -q^{1-2c} [2c+a+1] \LR{h; 2-m}{c-1}. 
\end{align*} A direct computation shows that \begin{align*} \mathfrak W+\mathfrak X_1+\mathfrak Y_1+\mathfrak Z_1 &= -(1-q^{-2}) [2c+a+1] [2c] \LR{h; 2-m}{c}, \\ (\mathfrak X_2+\mathfrak Z_2) +\mathfrak Y_2 &= q^{-2m-a+1} [2c+a] \LR{h; 2-m}{c-1} K^{-2} +\mathfrak Y_2 \\ &= (1-q^{-2}) [2c+a+1] [2c] \LR{h; 2-m}{c}. \end{align*} Summing up these two equations, we obtain $\mathfrak W+\mathfrak X+\mathfrak Y+\mathfrak Z_1+\mathfrak Z_2=0$, whence \eqref{WXYZ2}, completing Step~(2). The proof of Theorem~\ref{thm:dvd:oddKappa} is completed. \qed
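Similarly, the splitting $\mathfrak Y=\mathfrak Y_1+\mathfrak Y_2$ used in Step~(2) rests on a scalar identity that can be verified by exact rational arithmetic; a minimal sketch (Python, with `k` again standing in for $K^{-2}$ as a commuting scalar, and the test values of $q$ and `k` chosen arbitrarily):

```python
from fractions import Fraction

def qint(n, q):
    """Quantum integer [n] = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - q**(-1))

q = Fraction(3, 2)   # generic rational q
k = Fraction(5, 7)   # stands in for K^{-2}, treated as a commuting scalar

for m in range(4):
    for a in range(-3, 4):
        for c in range(4):
            # splitting used to rewrite Y as Y_1 + Y_2
            lhs = (q**(3 - a - 4*m) * k - q**(3 + a)) / (q**2 - 1)
            rhs = (q**(-4*c - a - 1) * (q**(4*c - 4*m + 4) * k - q**2) / (q**2 - 1)
                   + q**(1 - 2*c) * qint(-2*c - a - 1, q))
            assert lhs == rhs
print("Step (2) scalar identity verified")
```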
\subsection{Introduction}\label{introduction} In 1972, May\textsuperscript{\protect\hyperlink{ref-May1972}{1}} first demonstrated that randomly assembled systems of sufficient complexity are almost inevitably unstable given infinitesimally small perturbations. Complexity in this case is defined by the size of the system (i.e., the number of potentially interacting components; \(S\)), its connectance (i.e., the probability that one component will interact with another; \(C\)), and the variance of interaction strengths (\(\sigma^{2}\))\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}. May's finding that the probability of local stability falls to near zero given a sufficiently high threshold of \(\sigma\sqrt{SC}\) is broadly relevant for understanding the dynamics and persistence of systems such as ecological\textsuperscript{\protect\hyperlink{ref-May1972}{1}--\protect\hyperlink{ref-Grilli2017}{6}}, neurological\textsuperscript{\protect\hyperlink{ref-Gray2008}{7},\protect\hyperlink{ref-Gray2009}{8}}, biochemical\textsuperscript{\protect\hyperlink{ref-Rosenfeld2009}{9},\protect\hyperlink{ref-MacArthur2010}{10}}, and socio-economic\textsuperscript{\protect\hyperlink{ref-May2008}{11}--\protect\hyperlink{ref-Bardoscia2017}{14}} networks. As such, identifying general principles that affect stability in complex systems is of wide-ranging importance. Randomly assembled complex systems can be represented as large square matrices (\(\mathbf{M}\)) with \(S\) components (e.g., networks of species\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}} or banks\textsuperscript{\protect\hyperlink{ref-Haldane2011}{12}}). One element of such a matrix, \(M_{ij}\), defines how component \(j\) affects component \(i\) in the system at a point of equilibrium\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}. 
Off-diagonal elements (\(i \neq j\)) therefore define interactions between components, while diagonal elements (\(i = j\)) define component self-regulation (e.g., carrying capacity in ecological communities). Traditionally, off-diagonal elements are assigned non-zero values with a probability \(C\), which are sampled from a distribution with variance \(\sigma^{2}\); diagonal elements are set to \(-1\)\textsuperscript{\protect\hyperlink{ref-May1972}{1},\protect\hyperlink{ref-Allesina2012}{2},\protect\hyperlink{ref-Allesina2015}{5}}. Local system stability is assessed using eigenanalysis on \(\mathbf{M}\), with the system being stable if the real parts of all eigenvalues (\(\lambda\)), and therefore the leading eigenvalue (\(\lambda_{max}\)), are negative (\(\Re(\lambda_{max}) < 0\))\textsuperscript{\protect\hyperlink{ref-May1972}{1},\protect\hyperlink{ref-Allesina2012}{2}}. In a large system (high \(S\)), eigenvalues are distributed uniformly\textsuperscript{\protect\hyperlink{ref-Tao2010}{15}} within a circle centred at \(\Re = -d\) (\(-d\) is the mean value of diagonal elements) and \(\Im = 0\), with a radius of \(\sigma\sqrt{SC}\)\textsuperscript{\protect\hyperlink{ref-May1972}{1},\protect\hyperlink{ref-Allesina2012}{2},\protect\hyperlink{ref-Allesina2015}{5}} (Fig. 1a). Local stability of randomly assembled systems therefore becomes increasingly unlikely as \(S\), \(C\), and \(\sigma\) increase. \clearpage \hrule \vspace{2mm} \textbf{Figure 1: Eigenvalue distributions of random complex systems.} Each panel shows the real (x-axis) and imaginary (y-axis) parts of \(S =\) 400 eigenvalues from random \(S \times S\) matrices. (\(\textbf{a}\)) A system represented by a matrix \(\mathbf{A}\), in which all elements are sampled from a normal distribution with \(\mu = 0\) and \(\sigma_{A} = 1/\sqrt{S}\). Points are uniformly distributed within the blue circle centred at the origin with a radius of \(\sigma_{A} \sqrt{S} = 1\). 
(\(\textbf{b}\)) The same system as in \(\textbf{a}\) after including variation in the response rates of \(S\) components, represented by the diagonal matrix \(\gamma\), such that \(\mathbf{M} = \gamma\mathbf{A}\). Elements of \(\gamma\) are randomly sampled from a uniform distribution from \(\min = 0\) to \(\max = 2\). Eigenvalues of \(\mathbf{M}\) are then distributed non-uniformly within the red circle centred at the origin with a radius of \(\sqrt{\sigma^{2}_{A}(1 + \sigma^{2}_{\gamma})S} \approx\) 1.15. (\(\textbf{c}\)) A different random system \(\mathbf{A}\) constructed from the same parameters as in \(\textbf{a}\), except with diagonal element values of \(-1\). (\(\textbf{d}\)) The same system \(\textbf{c}\) after including variation in component response rates, sampled from \(\mathcal{U}(0, 2)\) as in \(\textbf{b}\). \includegraphics{fig1.pdf} \vspace{2mm} \hrule May's\textsuperscript{\protect\hyperlink{ref-May1972}{1},\protect\hyperlink{ref-Allesina2012}{2}} stability criterion \(\sigma\sqrt{SC} < d\) assumes that the expected response rates (\(\gamma\)) of individual components to perturbations of the system are identical, but this is highly unlikely in any complex system. In ecological communities, for example, the rate at which population density changes following perturbation will depend on the generation time of organisms, which might vary by orders of magnitude among species. Species with short generation times will respond quickly (high \(\gamma\)) to perturbations relative to species with long generation times (low \(\gamma\)). Similarly, the speed at which individual banks respond to perturbations in financial networks, or individuals or institutions respond to perturbations in complex social networks, is likely to vary. The effect of such variance on stability has not been investigated in complex systems theory. 
Intuitively, variation in \(\gamma\) (\(\sigma^{2}_{\gamma}\)) might be expected to decrease system stability by introducing a new source of variation into the system and thereby increasing \(\sigma\). Here I show that, despite higher \(\sigma\), realistic complex systems (in which \(S\) is high but finite) are actually more likely to be stable if their individual component response rates vary. My results are robust across commonly observed network structures, including random\textsuperscript{\protect\hyperlink{ref-May1972}{1}}, small-world\textsuperscript{\protect\hyperlink{ref-Watts1998}{16}}, scale-free\textsuperscript{\protect\hyperlink{ref-Albert2002}{17}}, and cascade food web\textsuperscript{\protect\hyperlink{ref-Solow1998}{18},\protect\hyperlink{ref-Williams2000}{19}} networks. \subsection{Results}\label{results} \textbf{Component response rates of random complex systems}. Complex systems (\(\mathbf{M}\)) are built from two matrices, one modelling component interactions (\(\mathbf{A}\)) and a second modelling component response rates (\(\boldsymbol{\gamma}\)). Both \(\mathbf{A}\) and \(\boldsymbol{\gamma}\) are square \(S \times S\) matrices. Rows in \(\mathbf{A}\) define how a given component \(i\) is affected by each component \(j\) in the system, including itself (where \(i = j\)). Off-diagonal elements of \(\mathbf{A}\) are independent and identically distributed (i.i.d.), and diagonal elements are set to \(A_{ii} = -1\) as in May\textsuperscript{\protect\hyperlink{ref-May1972}{1}}. Diagonal elements of \(\boldsymbol{\gamma}\) are positive, and off-diagonal elements are set to zero (i.e., \(\boldsymbol{\gamma}\) is a diagonal matrix with positive support). The distribution of \(diag(\boldsymbol{\gamma})\) over \(S\) components thereby models the distribution of component response rates.
The dynamics of the entire system \(\mathbf{M}\) can be defined as follows\textsuperscript{\protect\hyperlink{ref-Patel2018}{20}}, \begin{equation} \label{defM} \mathbf{M} = \boldsymbol{\gamma} \mathbf{A}. \end{equation} Equation 1 thereby serves as a null model to investigate how variation in component response rate (\(\sigma^{2}_{\gamma}\)) affects complex systems. In the absence of such variation (\(\sigma^{2}_{\gamma} = 0\)), \(\boldsymbol{\gamma}\) is set to the identity matrix (diagonal elements all equal 1), and \(\mathbf{M} = \mathbf{A}\). Under these conditions, eigenvalues of \(\mathbf{M}\) are distributed uniformly\textsuperscript{\protect\hyperlink{ref-Tao2010}{15}} in a circle centred at \((-1, 0)\) with a radius of \(\sigma \sqrt{SC}\)\textsuperscript{\protect\hyperlink{ref-May1972}{1}} (Fig. 1a). \textbf{Effect of \(\mathbf{\sigma^{2}_{\gamma}}\) on \(\mathbf{M}\) (co)variation}. The value of \(\Re(\lambda_{max})\), and therefore system stability, can be estimated from five properties of \(\mathbf{M}\)\textsuperscript{\protect\hyperlink{ref-Tang2014b}{21}}. These properties include (1) system size (\(S\)), (2) mean self-regulation of components (\(d\)), (3) mean interaction strength between components (\(\mu\)), (4) the variance of between component interaction strengths (hereafter \(\sigma^{2}_{M}\), to distinguish from \(\sigma^{2}_{A}\) and \(\sigma^{2}_{\gamma}\)), and (5) the correlation of interaction strengths between components, \(M_{ij}\) and \(M_{ji}\) (\(\rho\))\textsuperscript{\protect\hyperlink{ref-Sommers1988}{22}}. Positive \(\sigma^{2}_{\gamma}\) does not change \(S\), nor does it necessarily change \(E[d]\) or \(E[\mu]\). What \(\sigma^{2}_{\gamma}\) does change is the total variation in component interaction strengths (\(\sigma^{2}_{M}\)), and \(\rho\). Introducing variation in \(\gamma\) increases the total variation in the system. 
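The variance inflation just described follows from the standard product-variance formula for independent variables, \(\mathrm{Var}(XY)=\mathrm{Var}(X)\mathrm{Var}(Y)+\mathrm{Var}(X)E[Y]^{2}+\mathrm{Var}(Y)E[X]^{2}\), and is easy to confirm by simulation; a minimal sketch (sample sizes and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
S, sigma_A = 1000, 0.4

A = rng.normal(0.0, sigma_A, size=(S, S))
gamma = rng.uniform(0.0, 2.0, size=S)       # E[gamma] = 1, Var[gamma] = 1/3
M = gamma[:, None] * A                      # M_ij = gamma_i * A_ij

off = ~np.eye(S, dtype=bool)                # off-diagonal mask
var_M = M[off].var()
predicted = sigma_A**2 * (1.0 + 1.0 / 3.0)  # sigma_A^2 (1 + sigma_gamma^2)
print(var_M, predicted)                     # empirical value close to prediction
```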
Variation in the off-diagonal elements of \(\mathbf{M}\) is described by the joint variation of two random variables, \begin{equation} \label{var_ref} \sigma^{2}_{M} = \sigma^{2}_{A}\sigma^{2}_{\gamma} + \sigma^{2}_{A}E[\gamma_{i}]^{2}+\sigma^{2}_{\gamma}E[A_{ij}]^{2}. \end{equation} Given \(E[\gamma_{i}] = 1\) and \(E[A_{ij}] = 0\), Eq. 2 can be simplified, \begin{equation} \sigma^{2}_{M} = \sigma^{2}_{A}(1 + \sigma^{2}_{\gamma}). \nonumber \end{equation} The increase in \(\sigma^{2}_{M}\) caused by \(\sigma^{2}_\gamma\) can be visualised from the eigenvalue spectra of \(\textbf{A}\) versus \(\textbf{M} = \boldsymbol{\gamma}\textbf{A}\) (Fig. 1). Given \(d = 0\) and \(C = 1\), the eigenvalues of \(\textbf{A}\) and \(\textbf{M}\) lie within circles of radius \(\sigma_{A}\sqrt{S}\) and \(\sigma_{M}\sqrt{S}\), respectively (Fig. 1a vs.~1b). If \(d \neq 0\), positive \(\sigma^{2}_\gamma\) changes the distribution of eigenvalues\textsuperscript{\protect\hyperlink{ref-Ahmadian2015}{23}--\protect\hyperlink{ref-Stone2017}{25}}, potentially affecting stability (Fig. 1c vs.~1d). Given \(\sigma^{2}_\gamma = 0\), \(\Re(\lambda_{max})\) increases linearly with \(\rho\) such that\textsuperscript{\protect\hyperlink{ref-Tang2014c}{26}}, \begin{equation} \Re(\lambda_{max}) \approx \sigma_{M}\sqrt{SC}\left(1 + \rho\right) - d. \nonumber \end{equation} If \(\rho < 0\), such as when \(\textbf{M}\) models a predator-prey system in which \(M_{ij}\) and \(M_{ji}\) have opposing signs, stability increases\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}. If diagonal elements of \(\boldsymbol{\gamma}\) vary independently, the magnitude of \(\rho\) is decreased because \(\sigma^{2}_{\gamma}\) increases the variance of \(M_{ij}\) without affecting the expected covariance between \(M_{ij}\) and \(M_{ji}\) (Fig. 2). \vspace{2mm} \hrule \vspace{2mm} \textbf{Figure 2: Complex system correlation versus stability with and without variation in component response rates}.
Each point represents 10000 replicate numerical simulations of a random complex system \(\mathbf{M} = \gamma \mathbf{A}\) with a fixed correlation between off-diagonal elements \(A_{ij}\) and \(A_{ji}\) (\(\rho\), x-axis). Where real parts of eigenvalues of \(\mathbf{M}\) are negative (y-axis), \(\mathbf{M}\) is stable (black dotted line). Blue circles show systems in the absence of variation in component response rates (\(\sigma^{2}_{\gamma} = 0\)). Red squares show systems in which \(\sigma^{2}_{\gamma} = 1/3\). Arrows show the range of real parts of leading eigenvalues observed. Because \(\gamma\) decreases the magnitude of \(\rho\), purple lines are included to link replicate simulations before (blue circles) and after (red squares) including \(\gamma\). The range of \(\rho\) values in which \(\gamma\) decreases the mean real part of the leading eigenvalue is indicated with grey shading. In all simulations, system size and connectance were set to \(S = 25\) and \(C = 1\), respectively. Off-diagonal elements of \(\textbf{A}\) were randomly sampled from \(A_{ij} \sim \mathcal{N}(0, 0.4^{2})\), and diagonal elements were set to \(-1\). Elements of \(\gamma\) were sampled as \(\gamma \sim \mathcal{U}(0, 2)\). \includegraphics[height=14cm,keepaspectratio]{fig2.pdf} \vspace{2mm} \hrule \textbf{Numerical simulations of random systems with and without \(\mathbf{\sigma^{2}_{\gamma}}\)}. I used numerical simulations and eigenanalysis to test how variation in \(\gamma\) affects stability in random matrices with known properties, comparing the stability of \(\textbf{A}\) versus \(\mathbf{M} = \gamma\mathbf{A}\). Values of \(\gamma\) were sampled from a uniform distribution where \(\gamma \sim \mathcal{U}(0, 2)\) and \(\sigma^{2}_{\gamma} = 1/3\) (see Supplementary Information for other \(\gamma\) distributions, which gave similar results).
In all simulations, diagonal elements were standardised to ensure that \(-d\) between individual \(\textbf{A}\) and \(\textbf{M}\) pairs was identical (also note that \(E[\gamma_{i}] = 1\)). First I focus on the effect of \(\gamma\) across values of \(\rho\), then for increasing system sizes (\(S\)) in random and structured networks. By increasing \(S\), the objective is to determine the effect of \(\gamma\) as system complexity increases toward the boundary at which stability is realistic for a finite system. \textbf{Simulation of random \(\mathbf{M}\) across \(\mathbf{\rho}\)}. Numerical simulations revealed that \(\sigma^{2}_{\gamma}\) results in a nonlinear relationship between \(\rho\) and \(\Re(\lambda_{max})\), which can sometimes increase the stability of the system. Figure 2 shows a comparison of \(\Re(\lambda_{max})\) across \(\rho\) values for \(\mathbf{A}\) (\(\sigma^{2}_{\gamma} = 0\)) versus \(\mathbf{M}\) (\(\sigma^{2}_{\gamma} = 1/3\)) given \(S = 25\), \(C = 1\), and \(\sigma_{A} = 0.4\). For \(-0.4 \leq \rho \leq 0.7\) (shaded region of Fig. 2), expected \(\Re(\lambda_{max})\) was lower in \(\mathbf{M}\) than \(\mathbf{A}\). For \(\rho \geq -0.1\), the lower bound of the range of \(\Re(\lambda_{max})\) values also decreased given \(\sigma^{2}_{\gamma}\), resulting in negative \(\Re(\lambda_{max})\) in \(\mathbf{M}\) for \(\rho = -0.1\) and \(\rho = 0\). Hence, across a wide range of system correlations, variation in the response rate of system components had a stabilising effect. The stabilising effect of \(\sigma^{2}_{\gamma}\) across \(\rho\) increased with increasing \(S\). Figure 3 shows numerical simulations of \(\mathbf{M}\) across increasing \(S\) given \(C = 1\) and \(\sigma_{A} = 0.2\) (\(\sigma_{A}\) has been lowered here to better illustrate the effect of \(S\); note that with \(\sigma_{A} = 0.2\), \(\sigma_{A}\sqrt{SC} = 1\) at \(S = 25\)).
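The replicate comparison underlying Fig. 2 can be sketched as follows; the pairwise construction of \((A_{ij}, A_{ji})\) with target correlation \(\rho\) uses a standard bivariate-normal trick, and the replicate count here is far smaller than the 10000 used above, so this is an illustration only (the paper's additional diagonal standardisation is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(7)

def build_A(S, sigma, rho, rng):
    """Random interaction matrix with Corr(A_ij, A_ji) = rho and diagonal -1."""
    A = np.zeros((S, S))
    for i in range(S):
        for j in range(i + 1, S):
            x, y = rng.normal(size=2)
            A[i, j] = sigma * x
            A[j, i] = sigma * (rho * x + np.sqrt(1 - rho**2) * y)
    np.fill_diagonal(A, -1.0)
    return A

def re_lmax(M):
    """Real part of the leading eigenvalue."""
    return np.linalg.eigvals(M).real.max()

S, sigma, rho, reps = 25, 0.4, 0.3, 200
lmax_A, lmax_M = [], []
for _ in range(reps):
    A = build_A(S, sigma, rho, rng)
    gamma = rng.uniform(0.0, 2.0, size=S)
    lmax_A.append(re_lmax(A))
    lmax_M.append(re_lmax(gamma[:, None] * A))

print(np.mean(lmax_A), np.mean(lmax_M))
```

Sweeping `rho` over a grid and plotting the two means against it reproduces the shape of Fig. 2.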
For relatively small systems (\(S \leq 25\)), \(\sigma^{2}_{\gamma}\) never decreased the expected \(\Re(\lambda_{max})\). But as \(S\) increased, the curvilinear relationship between \(\rho\) and \(\Re(\lambda_{max})\) decreased expected \(\Re(\lambda_{max})\) for \(\mathbf{M}\) given low magnitudes of \(\rho\). In turn, as \(S\) increased and systems became more complex, \(\sigma^{2}_{\gamma}\) increased the proportion of numerical simulations that were observed to be stable (see below). \textbf{Simulation of random \(\mathbf{M}\) across \(\mathbf{S}\)}. To investigate the effect of \(\sigma^{2}_{\gamma}\) on stability across systems of increasing complexity, I simulated random \(\mathbf{M = \gamma A}\) matrices at \(\sigma_{A} = 0.4\) and \(C = 1\) across \(S = \{2, 3, ..., 49, 50\}\). One million \(\mathbf{M}\) were simulated for each \(S\), and the stability of \(\mathbf{A}\) versus \(\mathbf{M}\) was assessed given \(\gamma \sim \mathcal{U}(0, 2)\) (\(\sigma^{2}_{\gamma} = 1/3\)). For all \(S > 10\), I found that the number of stable random systems was higher in \(\mathbf{M}\) than \(\mathbf{A}\) (Fig. 4; see Supplementary Information for full table of results), and that the difference between the probabilities of observing a stable system increased with an increase in \(S\). In other words, the potential for \(\sigma^{2}_{\gamma}\) to affect stability increased with increasing system complexity and was most relevant for systems on the cusp of being too complex to be realistically stable. For the highest values of \(S\), nearly all systems that were stable given varying \(\gamma\) would not have been stable given \(\gamma = 1\). I also simulated 100000 \(\mathbf{M}\) for three types of random networks that are typically interpreted as modelling three types of interspecific ecological interactions\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2},\protect\hyperlink{ref-Allesina2011}{27}}.
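The stability census across \(S\) can be sketched with a simple counting routine; the replicate count here is far below the one million used above, and the paper's per-matrix diagonal standardisation is omitted for brevity, so this is an illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def is_stable(M):
    """A system is locally stable if all eigenvalues have negative real part."""
    return np.linalg.eigvals(M).real.max() < 0

def count_stable(S, sigma, reps, vary_gamma, rng):
    """Count stable random systems, with or without response-rate variation."""
    n = 0
    for _ in range(reps):
        A = rng.normal(0.0, sigma, size=(S, S))
        np.fill_diagonal(A, -1.0)
        M = rng.uniform(0.0, 2.0, size=S)[:, None] * A if vary_gamma else A
        n += is_stable(M)
    return n

# For small S, sigma_A * sqrt(S) < 1 and most random systems are stable.
print(count_stable(5, 0.4, 200, False, rng), count_stable(5, 0.4, 200, True, rng))
```

Running this over a grid of `S` values and plotting the two counts gives the qualitative picture of Fig. 4.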
These interaction types are competitive, mutualist, and predator-prey, as modelled by off-diagonal elements that are constrained to be negative, positive, or paired such that if \(A_{ij} > 0\) then \(A_{ji} < 0\), respectively\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}} (but are otherwise identical to the purely random \(\mathbf{A}\)). As \(S\) increased, a higher number of stable \(\mathbf{M}\) relative to \(\mathbf{A}\) was observed for competitor and predator-prey, but not mutualist, systems. A higher number of stable systems was observed whenever \(S > 12\) and \(S > 40\) for competitive and predator-prey systems, respectively (note that \(\rho < 0\) for predator-prey systems, making stability more likely overall). The stability of mutualist systems was never affected by \(\sigma^{2}_{\gamma}\). The effect of \(\sigma^{2}_{\gamma}\) on stability did not change qualitatively across values of \(C\), \(\sigma_{A}\), or for different distributions of \(\gamma\) (see Supporting Information). \textbf{Simulation of structured \(\mathbf{M}\) across \(\mathbf{S}\)}. To investigate how \(\sigma^{2}_{\gamma}\) affects the stability of commonly observed network structures, I simulated one million \(\mathbf{M = \gamma A}\) for small-world\textsuperscript{\protect\hyperlink{ref-Watts1998}{16}}, scale-free\textsuperscript{\protect\hyperlink{ref-Albert2002}{17}}, and cascade food web\textsuperscript{\protect\hyperlink{ref-Solow1998}{18},\protect\hyperlink{ref-Williams2000}{19}} networks. In all of these networks, rules determining the presence or absence of an interaction between components \(i\) and \(j\) constrain the overall structure of the network. In small-world networks, interactions between components are constrained so that the expected degree of separation between any two components increases in proportion to \(\log(S)\)\textsuperscript{\protect\hyperlink{ref-Watts1998}{16}}. 
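The sign-constrained interaction types above can be generated by masking signs in pairs; a minimal sketch for the predator-prey case, whose paired opposite signs drive \(\rho\) below zero (independent half-normal magnitudes give an expected correlation of \(-2/\pi\); the seed and sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
S, sigma = 50, 0.4

# Predator-prey constraint: A_ij and A_ji take opposite signs in each pair.
A = np.zeros((S, S))
for i in range(S):
    for j in range(i + 1, S):
        a, b = np.abs(rng.normal(0.0, sigma, size=2))
        if rng.random() < 0.5:
            A[i, j], A[j, i] = a, -b
        else:
            A[i, j], A[j, i] = -a, b
np.fill_diagonal(A, -1.0)

iu = np.triu_indices(S, k=1)
pair_corr = np.corrcoef(A[iu], A.T[iu])[0, 1]   # Corr(A_ij, A_ji)
print(pair_corr)                                 # close to -2/pi ~ -0.64
```

Competitive and mutualist systems follow by forcing both members of each pair negative or both positive, respectively.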
In scale-free networks, the distribution of the number of components with which a focal component interacts follows a power law; a few components have many interactions while most components have few interactions\textsuperscript{\protect\hyperlink{ref-Albert2002}{17}}. In cascade food webs, species are ranked and interactions are constrained such that a species \(i\) can only feed on \(j\) if the rank of \(i > j\). \clearpage \hrule \vspace{2mm} \textbf{Figure 3: System correlation versus stability across different system sizes}. In each panel, 10000 random complex systems \(\mathbf{M} = \gamma \mathbf{A}\) are simulated for each correlation \(\rho = \{-0.90, -0.85, ..., 0.85, 0.90 \}\) between off-diagonal elements \(A_{ij}\) and \(A_{ji}\). Lines show the expected real part of the leading eigenvalues of \(\mathbf{M}\) (red squares; \(\sigma^{2}_{\gamma} = 1/3\)) versus \(\mathbf{A}\) (blue circles; \(\sigma^{2}_{\gamma} = 0\)) across \(\rho\), where negative values (below the dotted black line) indicate system stability. Differences between lines thereby show the effect of component response rate variation (\(\gamma\)) on system stability across system correlations and sizes (\(S\)). For all simulations, system connectance was \(C = 1\). Off-diagonal elements of \(\textbf{A}\) were randomly sampled from \(A_{ij} \sim \mathcal{N}(0, 0.2^{2})\), and diagonal elements were set to \(-1\). Elements of \(\gamma\) were sampled such that \(\gamma \sim \mathcal{U}(0, 2)\), so \(\sigma^{2}_{\gamma} = 1/3\). \includegraphics{fig3.pdf} \vspace{2mm} \hrule Network structure did not strongly modulate the effect that \(\sigma^{2}_{\gamma}\) had on stability. For comparable magnitudes of complexity, structured networks still had a higher number of stable \(\mathbf{M}\) than \(\mathbf{A}\). For random networks, \(\sigma^{2}_{\gamma}\) increased stability given \(S > 10\) (\(\sigma_{A} = 0.4\) and \(C = 1\)), and therefore complexity \(\sigma_{A} \sqrt{SC} \gtrapprox 1.26\). 
This threshold of complexity, above which more \(\mathbf{M}\) than \(\mathbf{A}\) were stable, was comparable for small-world networks, and slightly lower for scale-free networks (note that algorithms for generating small-world and scale-free networks necessarily led to varying \(C\); see Methods). Varying \(\gamma\) increased stability in cascade food webs for \(S > 27\), and therefore at relatively low complexity magnitudes compared to random predator-prey networks (\(S > 40\)). Overall, network structure did not greatly change the effect that \(\sigma^{2}_{\gamma}\) had on increasing the upper bound of complexity within which stability might reasonably be observed. \vspace{2mm} \hrule \vspace{2mm} \textbf{Figure 4: Stability of large complex systems with and without variation in component response rate (\(\boldsymbol{\gamma}\)).} The \(\log\) number of systems that are stable across different system sizes (\(S = \{2, 3, ..., 49, 50 \}\)) given \(C = 1\), and the proportion of systems for which variation in \(\gamma\) is critical for system stability. For each \(S\), 1 million complex systems are randomly generated. Stability of each complex system is tested given variation in \(\gamma\) by randomly sampling \(\gamma \sim \mathcal{U}(0, 2)\). Stability given \(\sigma^{2}_{\gamma}>0\) is then compared to stability in an otherwise identical system in which \(\gamma_{i} = E[\mathcal{U}(0, 2)]\) for all components. Blue and red bars show the number of stable systems in the absence and presence of \(\sigma^{2}_{\gamma}\), respectively. The black line shows the proportion of systems that are stable when \(\sigma^{2}_{\gamma}>0\), but would be unstable if \(\sigma^{2}_{\gamma}=0\) (i.e., the conditional probability that \(\mathbf{A}\) is unstable given that \(\mathbf{M}\) is stable).
\includegraphics{fig4.pdf} \vspace{2mm} \hrule \textbf{System feasibility given \(\mathbf{\sigma^{2}_{\gamma}}\)}. For complex systems in which individual system components represent the density of some tangible quantity, it is relevant to consider the feasibility of the system. Feasibility assumes that values of all components are positive at equilibrium\textsuperscript{\protect\hyperlink{ref-Grilli2017}{6},\protect\hyperlink{ref-Dougoud2018}{28},\protect\hyperlink{ref-Song2018}{29}}. This is of particular interest for ecological communities because population density (\(n\)) cannot take negative values, meaning that ecological systems need to be feasible for stability to be biologically realistic\textsuperscript{\protect\hyperlink{ref-Dougoud2018}{28}}. While my results are intended to be general to all complex systems, and not restricted to species networks, I have also performed a feasibility analysis on all matrices tested for stability. I emphasise that \(\gamma\) is not interpreted as population density in this analysis, but instead as a fundamental property of species life history such as expected generation time. Feasibility was unaffected by \(\sigma^{2}_{\gamma}\) and instead occurred with a fixed probability of \(1/2^{S}\), consistent with a recent proof by Serván et al.\textsuperscript{\protect\hyperlink{ref-Servan2018}{30}} (see Supplementary Information). Hence, for pure species-interaction networks, variation in component response rate (i.e., species generation time) does not affect stability at biologically realistic species densities. \textbf{Targeted manipulation of \(\boldsymbol{\gamma}\)}. To further investigate the potential of \(\sigma^{2}_{\gamma}\) to be stabilising, I used a genetic algorithm.
Genetic algorithms are heuristic tools that mimic evolution by natural selection, and are useful when the space of potential solutions (in this case, possible combinations of \(\gamma\) values leading to stability in a complex system) is too large to search exhaustively\textsuperscript{\protect\hyperlink{ref-Hamblin2013}{31}}. Generations of selection on \(\gamma\) value combinations to minimise \(\Re(\lambda_{max})\) demonstrated the potential for \(\sigma^{2}_{\gamma}\) to increase system stability. Across \(S = \{2, 3, ..., 39, 40\}\), sets of \(\gamma\) values were found that resulted in stable systems with probabilities that were up to four orders of magnitude higher than when \(\gamma = 1\) (see Supplementary Information), meaning that stability could often be achieved by manipulating \(S\) \(\gamma\) values rather than \(S \times S\) \(\mathbf{M}\) elements (i.e., by manipulating component response rates rather than interactions between components). \subsection{Discussion}\label{discussion} I have shown that the stability of complex systems might often be contingent upon variation in the response rates of their individual components, meaning that factors such as rate of trait evolution (in biological networks), transaction speed (in economic networks), or communication speed (in social networks) need to be considered when investigating the stability of complex systems. Variation in component response rate is more likely to be critical for stability in systems that are especially complex, and it can ultimately increase the probability that system stability is observed above that predicted by May's\textsuperscript{\protect\hyperlink{ref-May1972}{1}} classically derived \(\sigma \sqrt{SC}\) criterion. The logic outlined here is general, and potentially applies to any complex system in which individual system components can vary in their reaction rates to system perturbation.
It is important to recognise that variation in component response rate is not stabilising per se; that is, adding variation in component response rates to a particular system does not increase the probability that the system will be stable. Rather, highly complex systems that are observed to be stable are more likely to have varying component response rates, and for this variation to be critical to their stability (Fig. 4). This is caused by the shift to a non-uniform distribution of eigenvalues that occurs by introducing variation in \(\gamma\) (Fig. 1), which can sometimes cause all of the real components of the eigenvalues of the system matrix to become negative, but might also increase the real components of eigenvalues. My focus here is distinct from Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{24}}, who applied the same mathematical framework to investigate how a diagonal matrix \(\mathbf{X}\) (equivalent to \(\boldsymbol{\gamma}\) in my model) affects the stability of a community matrix \(\mathbf{M}\) given an interaction matrix \(\mathbf{A}\) within a generalised Lotka-Volterra model, where \(\mathbf{M} = \mathbf{XA}\). Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{24}} analytically demonstrated that the effect of \(\mathbf{X}\) on system stability decreases exponentially as system size becomes arbitrarily large (\(S \to \infty\)) for a given magnitude of complexity \(\sigma\sqrt{SC}\). My numerical results do not contradict this prediction because I did not scale \(\sigma = 1 / \sqrt{S}\), but instead fixed \(\sigma\) and increased \(S\) to thereby increase total system complexity (see Supplemental Information for results simulated across \(\sigma\) and \(C\)). 
Overall, I show that component response rate variation increases the upper bound of complexity at which stability can be realistically observed, meaning that highly complex systems are more likely than not to vary in their component response rates, and for this variation to be critical for system stability. Interestingly, while complex systems were more likely to be stable given variation in component response rate, they were not more likely to be feasible, meaning that stability was not increased when component values were also restricted to being positive at equilibrium. Feasibility is important to consider, particularly for the study of ecological networks of species\textsuperscript{\protect\hyperlink{ref-Grilli2017}{6},\protect\hyperlink{ref-Stone2017}{25},\protect\hyperlink{ref-Dougoud2018}{28},\protect\hyperlink{ref-Servan2018}{30}} because population densities cannot realistically be negative. My results therefore suggest that variation in the rate of population responses to perturbation (e.g., due to differences in generation time among species) is unlikely to be critical to the stability of purely multi-species interaction networks (see also Supplementary Information). Nevertheless, ecological interactions do not exist in isolation in empirical systems\textsuperscript{\protect\hyperlink{ref-Patel2018}{20}}, but instead interact with evolutionary, abiotic, or social-economic systems. The relevance of component response rate for complex system stability should therefore not be ignored in the broader context of ecological communities. The potential importance of component response rate variation was most evident from the results of simulations in which the genetic algorithm was used in an attempt to maximise the probability of system stability. 
The probability that some combination of component response rates could be found to stabilise the system was shown to be up to four orders of magnitude higher than the background probabilities of stability in the absence of any component response rate variation. Instead of manipulating the \(S \times S\) interactions between system components, it might therefore be possible to manipulate only the \(S\) response rates of individual system components to achieve stability. Hence, managing the response rates of system components in a targeted way could potentially facilitate the stabilisation of complex systems through a reduction in dimensionality. A general mathematical framework encompassing shifts in eigenvalue distributions caused by a diagonal matrix \(\boldsymbol{\gamma}\) has been investigated\textsuperscript{\protect\hyperlink{ref-Ahmadian2015}{23}} and recently applied to questions concerning species density and feasibility\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{24},\protect\hyperlink{ref-Stone2017}{25}}, but \(\boldsymbol{\gamma}\) has not been interpreted as rates of response of individual system components to perturbation. My model focuses on component response rates for systems of a finite size, in which complexity is high but not yet high enough to make the probability of stability unrealistically low for actual empirical systems. For this upper range of system size, randomly assembled complex systems are more likely to be stable if their component response rates vary (e.g., \(10 < S < 30\) for parameter values in Fig. 4). Variation in component response rate might therefore be critical for maintaining stability in many highly complex empirical systems. These results are broadly applicable for understanding the stability of complex networks across the physical, life, and social sciences. \subsection{Methods}\label{methods} \textbf{Component response rate (\(\boldsymbol{\gamma}\)) variation}. 
In a synthesis of eco-evolutionary feedbacks on community stability, Patel et al.\textsuperscript{\protect\hyperlink{ref-Patel2018}{20}} model a system that includes a vector of potentially changing species densities (\(\mathbf{n}\)) and a vector of potentially evolving traits (\(\mathbf{x}\)). For any species \(i\) or trait \(j\), change in species density (\(n_{i}\)) or trait value (\(x_{j}\)) with time (\(t\)) is a function of the vectors \(\mathbf{n}\) and \(\mathbf{x}\), \[\frac{dn_{i}}{dt} = n_{i}f_{i}(\mathbf{n}, \mathbf{x}),\] \[\frac{dx_{j}}{dt} = \epsilon g_{j}(\mathbf{n}, \mathbf{x}).\] In the above, \(f_{i}\) and \(g_{j}\) are functions that define the effects of all species densities and trait values on the density of a species \(i\) and the value of trait \(j\), respectively. Patel et al.\textsuperscript{\protect\hyperlink{ref-Patel2018}{20}} were interested in stability when the evolution of traits was relatively slow or fast in comparison with the change in species densities, and this is modulated in the above by the scalar \(\epsilon\). The value of \(\epsilon\) thereby determines the timescale separation between ecology and evolution, with high \(\epsilon\) modelling relatively fast evolution and low \(\epsilon\) modelling relatively slow evolution\textsuperscript{\protect\hyperlink{ref-Patel2018}{20}}. I use the same principle that Patel et al.\textsuperscript{\protect\hyperlink{ref-Patel2018}{20}} use to modulate the relative rate of evolution to modulate rates of component responses for \(S\) components. 
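The analyses in this paper were implemented in R; purely for illustration, the role of \(\epsilon\) as a timescale separator can be sketched in Python with a hypothetical one-species, one-trait toy system (the logistic growth function, the trait optimum, and all parameter values below are my own assumptions, not those of Patel et al.):

```python
import numpy as np

def simulate(epsilon, steps=20000, dt=1e-3):
    """Euler-integrate a toy eco-evolutionary system:
    dn/dt = n * f(n, x),  dx/dt = epsilon * g(n, x),
    with hypothetical forms f = r(x)*(1 - n/K) and g = (x_opt - x)."""
    n, x = 0.5, 1.0          # initial density and trait value (assumed)
    K, x_opt = 1.0, 0.2      # carrying capacity and trait optimum (assumed)
    for _ in range(steps):
        r = 1.0 + 0.1 * x                    # trait-dependent growth rate
        n += dt * n * r * (1.0 - n / K)      # ecological dynamics
        x += dt * epsilon * (x_opt - x)      # evolution, scaled by epsilon
    return n, x

# High epsilon (fast evolution) moves the trait much closer to its optimum
# than low epsilon (slow evolution) over the same time horizon.
n_slow, x_slow = simulate(epsilon=0.01)
n_fast, x_fast = simulate(epsilon=10.0)
print(abs(x_fast - 0.2) < abs(x_slow - 0.2))   # -> True
```

In both runs the density dynamics approach the same equilibrium; only the trait's progress toward its optimum differs, which is exactly the timescale separation that \(\epsilon\) controls.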
Following May\textsuperscript{\protect\hyperlink{ref-May1972}{1},\protect\hyperlink{ref-May1973}{32}}, the value of a component \(i\) at time \(t\) (\(v_{i}(t)\)) is affected by the value of \(j\) (\(v_{j}(t)\)) and \(j\)'s marginal effect on \(i\) (\(a_{ij}\)), and by \(i\)'s response rate (\(\gamma_{i}\)), \[\frac{dv_{i}(t)}{dt} = \gamma_{i} \sum_{j=1}^{S}a_{ij}v_{j}(t).\] In matrix notation\textsuperscript{\protect\hyperlink{ref-May1973}{32}}, \[\frac{d\mathbf{v}(t)}{dt} = \boldsymbol{\gamma} \mathbf{A}\mathbf{v}(t).\] In the above, \(\boldsymbol{\gamma}\) is a diagonal matrix in which elements correspond to individual component response rates. Therefore, \(\mathbf{M} = \boldsymbol{\gamma} \mathbf{A}\) defines the change in values of system components and can be analysed using the techniques of May\textsuperscript{\protect\hyperlink{ref-May1972}{1},\protect\hyperlink{ref-Ahmadian2015}{23},\protect\hyperlink{ref-May1973}{32}}. In these analyses, row means of \(\mathbf{A}\) are expected to be identical, but variation around this expectation will naturally arise due to random sampling of \(\mathbf{A}\) off-diagonal elements and finite \(S\). In simulations, the total variation in \(\mathbf{M}\) row means that is attributable to \(\mathbf{A}\) is small relative to that attributable to \(\boldsymbol{\gamma}\), especially at high \(S\). Variation in \(\boldsymbol{\gamma}\) specifically isolates the effects of differing component response rates, hence causing differences in expected \(\mathbf{M}\) row means. \textbf{Construction of random and structured networks}. I used the R programming language for all numerical simulations and analyses\textsuperscript{\protect\hyperlink{ref-Rproject}{33}}. Purely random networks were generated by sampling off-diagonal elements \(A_{ij}\) i.i.d. from \(\mathcal{N}(0, 0.4^{2})\) with probability \(C\) (unsampled elements were set to zero). Diagonal elements \(A_{ii}\) were set to \(-1\). 
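The simulations reported here were run in R; the following Python/NumPy sketch (an independent minimal reimplementation, not the author's code) shows the core construction of \(\mathbf{M} = \boldsymbol{\gamma}\mathbf{A}\) and the eigenvalue-based stability check, using the \(\sigma = 0.4\) off-diagonal sampling, \(-1\) diagonal, and \(\gamma \sim \mathcal{U}(0, 2)\) distribution described in this section:

```python
import numpy as np

def random_A(S, sigma=0.4, C=1.0, rng=None):
    """Random interaction matrix: off-diagonals ~ N(0, sigma^2) retained
    with probability C (zero otherwise); diagonal elements set to -1."""
    rng = rng or np.random.default_rng()
    A = rng.normal(0.0, sigma, size=(S, S))
    A *= rng.random((S, S)) < C          # enforce connectance C
    np.fill_diagonal(A, -1.0)
    return A

def is_stable(M):
    """Stable iff every eigenvalue has a negative real part."""
    return bool(np.max(np.linalg.eigvals(M).real) < 0)

rng = np.random.default_rng(1)
S = 12
A = random_A(S, rng=rng)
gamma = rng.uniform(0.0, 2.0, size=S)    # component response rates
M = np.diag(gamma) @ A                   # M = gamma A (row-wise scaling)
print(is_stable(A), is_stable(M))
```

Repeating the last four lines many times and tallying the four outcomes (A stable/unstable crossed with M stable/unstable) reproduces the kind of counts reported in the Supplementary Information tables.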
Elements of \(\boldsymbol{\gamma}\) were simulated i.i.d. from a distribution with positive support (typically \(\gamma \sim \mathcal{U}(0, 2)\)). Random \(\mathbf{A}\) matrices with correlated elements \(A_{ij}\) and \(A_{ji}\) were built using Cholesky decomposition. Competitor networks in which off-diagonal elements \(A_{ij} \leq 0\) were constructed by first building a random \(\mathbf{A}\), then flipping the sign of any elements in which \(A_{ij} > 0\). Similarly, mutualist networks were constructed by building a random \(\mathbf{A}\), then flipping the sign of elements where \(A_{ij} < 0\). Predator-prey networks were constructed by first building a random \(\mathbf{A}\), then flipping the sign of either \(A_{ij}\) or \(A_{ji}\) if \(A_{ij} \times A_{ji} > 0\). Small-world networks were constructed using the method of Watts and Strogatz\textsuperscript{\protect\hyperlink{ref-Watts1998}{16}}. First, a regular network\textsuperscript{\protect\hyperlink{ref-Watts1998}{16}} was created such that components were arranged in a circle. Each component was initially set to interact with its \(k/2\) closest neighbouring components on each side, where \(k\) was an even natural number (e.g., for \(k = 2\) the regular network forms a ring in which each component interacts with its two adjacent neighbours; see Supplemental Material for examples). Each interaction between a focal component and its neighbour was then removed and replaced with a probability of \(\beta\). In replacement, a new component was randomly selected to interact with the focal component; selection was done with equal probability among all but the focal component. The resulting small-world network was represented by a square \(S \times S\) binary matrix \(\mathbf{B}\) in which 1s represented interactions between components and 0s represented the absence of an interaction. A new random matrix \(\mathbf{J}\) was then generated with elements \(J_{ij}\) sampled i.i.d. from \(\mathcal{N}(0, 0.4^{2})\). 
To build the interaction matrix \(\mathbf{A}\), I used element-wise multiplication \(\mathbf{A} = \mathbf{J} \odot \mathbf{B}\), then set \(diag(\mathbf{A}) = -1\). I set \(k = S/12\) and simulated small-world networks across all combinations of \(S = \{24, 48, 72, 96, 120, 144, 168\}\) and \(\beta = \{0, 0.01, 0.1, 0.25, 1\}\). Scale-free networks were constructed using the method of Albert and Barabási\textsuperscript{\protect\hyperlink{ref-Albert2002}{17}}. First, a saturated network (all components interact with each other) of size \(m \leq S\) was created. New components were then added sequentially to the network; each newly added component was set to interact with \(m\) randomly selected existing components. When the system size reached \(S\), the distribution of the number of total interactions that components had followed a power-law tail\textsuperscript{\protect\hyperlink{ref-Albert2002}{17}}. The resulting network was represented by an \(S \times S\) binary matrix \(\mathbf{G}\), where 1s and 0s represent the presence and absence of an interaction, respectively. As with small-world networks, a random matrix \(\mathbf{J}\) was generated, and \(\mathbf{A} = \mathbf{J} \odot \mathbf{G}\). Diagonal elements were set to \(-1\). I simulated scale-free networks across all combinations of \(S = \{24, 48, 72, 96, 120\}\) and \(m = \{2, 3, ..., 11, 12\}\). Cascade food webs were constructed following Solow and Beet\textsuperscript{\protect\hyperlink{ref-Solow1998}{18}}. First, a random matrix \(\mathbf{A}\) was generated with off-diagonal elements sampled i.i.d so that \(A_{ij} \sim \mathcal{N}(0, 0.4^2)\). Each component in the system was ranked from \(1\) to \(S\). If component \(i\) had a higher rank than component \(j\) and \(A_{ij} < 0\), then \(A_{ij}\) was multiplied by \(-1\). If \(i\) had a lower rank than \(j\) and \(A_{ji} < 0\), then \(A_{ji}\) was multiplied by \(-1\). 
In practice, this resulted in a matrix \(\mathbf{A}\) with negative and positive values in the lower and upper triangles, respectively. Diagonal elements of \(\mathbf{A}\) were set to \(-1\) and \(C = 1\). I simulated cascade food webs for \(S = \{2, 3, ..., 59, 60\}\). \textbf{System feasibility}. Dougoud et al.\textsuperscript{\protect\hyperlink{ref-Dougoud2018}{28}} identify the following feasibility criterion for ecological systems characterised by \(S\) interacting species with varying densities in a generalised Lotka-Volterra model, \[\mathbf{n^{*}} = -\left(\theta \mathbf{I} + (CS)^{-\delta}\mathbf{J} \right)^{-1}\mathbf{r}.\] In the above, \(\mathbf{n^{*}}\) is the vector of species densities at equilibrium. Feasibility is satisfied if all elements in \(\mathbf{n^{*}}\) are positive. The matrix \(\mathbf{I}\) is the identity matrix, and the value \(\theta\) is the strength of intraspecific competition (diagonal elements). Diagonal values are set to \(-1\), so \(\theta = -1\). The variable \(\delta\) is a normalisation parameter that modulates the strength of interactions (\(\sigma\)) for \(\mathbf{J}\). Here, \(\delta = 0\) is assumed implicitly, corresponding to strong interactions. Hence, \((CS)^{-\delta} = 1\), so in the above, a diagonal matrix of -1s (\(\theta \mathbf{I}\)) is added to \(\mathbf{J}\), which has a diagonal of all zeros and an off-diagonal affecting species interactions (i.e., the expression \((CS)^{-\delta}\) relates to May's\textsuperscript{\protect\hyperlink{ref-May1972}{1}} stability criterion\textsuperscript{\protect\hyperlink{ref-Dougoud2018}{28}} by \(\frac{\sigma}{(CS)^{-\delta}}\sqrt{SC} < 1\), and hence for my purposes \((CS)^{-\delta} = 1\)). 
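With \(\theta = -1\) and \(\delta = 0\) (so \((CS)^{-\delta} = 1\)), this feasibility check can be sketched in Python/NumPy (an independent reimplementation, not the author's R code; the paper samples \(\mathbf{r}\) from \(\mathcal{N}(0, 0.4^{2})\), whereas a fixed \(\mathbf{r}\) is used here for determinism):

```python
import numpy as np

def feasible(A, r):
    """Feasibility: equilibrium n* = -A^{-1} r must be all-positive."""
    n_star = -np.linalg.solve(A, r)
    return bool(np.all(n_star > 0))

# Demo with theta = -1 and delta = 0, so A = theta*I + J; with no
# interactions (J = 0), n* = r, and feasibility holds iff all r > 0.
theta, S = -1.0, 3
J = np.zeros((S, S))
A = theta * np.eye(S) + J
r = np.array([0.3, 0.1, 0.5])
print(feasible(A, r))   # -> True
```

Using `np.linalg.solve` rather than explicitly inverting the matrix is the standard numerically stable way to evaluate \(-\mathbf{A}^{-1}\mathbf{r}\).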
Given \(\mathbf{A} = \theta\mathbf{I + J}\), the above criterion therefore reduces to the following (see also Serván et al.\textsuperscript{\protect\hyperlink{ref-Servan2018}{30}}), \[\mathbf{n^{*} = -A^{-1}r}.\] To check the feasibility criterion for \(\mathbf{M = \gamma A}\), I therefore evaluated \(\mathbf{-M^{-1}r}\) (\(\mathbf{r}\) elements were sampled i.i.d. from \(r \sim \mathcal{N}(0, 0.4^{2})\)). Feasibility is satisfied if all of the elements of the resulting vector are positive. \textbf{Genetic algorithm}. Ideally, to investigate the potential of \(\sigma^{2}_{\gamma}\) for increasing the proportion of stable complex systems, the search space of all possible \(diag(\boldsymbol{\gamma})\) vectors would be evaluated for each unique \(\mathbf{M = \gamma A}\). This is technically impossible because \(\gamma_{i}\) can take any real value between 0 and 2, but even rounding \(\gamma_{i}\) to reasonable values would result in a search space too large to explore in practice. Under these conditions, genetic algorithms are highly useful tools for finding practical solutions by mimicking the process of biological evolution\textsuperscript{\protect\hyperlink{ref-Hamblin2013}{31}}. In this case, the practical solution is finding vectors of \(diag(\boldsymbol{\gamma})\) that decrease the most positive real eigenvalue of \(\mathbf{M}\). The genetic algorithm used achieves this by initialising a large population of 1000 different potential \(diag(\boldsymbol{\gamma})\) vectors and allowing this population to evolve through a process of mutation, crossover (swapping \(\gamma_{i}\) values between vectors), selection, and reproduction until either a \(diag(\boldsymbol{\gamma})\) vector is found where all \(\Re(\lambda) < 0\) or some ``giving up'' criterion is met. For each \(S = \{2, 3, ..., 39, 40\}\), the genetic algorithm was run for 100000 random \(\mathbf{M = \gamma A}\) (\(\sigma_{A} = 0.4\), \(C = 1\)). 
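A heavily simplified Python sketch of such a genetic algorithm follows (an independent reimplementation, not the author's R code): it keeps the paper's mutation probability of 0.2, mutation effect size \(\mathcal{N}(0, 0.02^{2})\), and reflection at 0 with a cap at 2, but uses a much smaller population and omits crossover entirely.

```python
import numpy as np

def max_re_eig(M):
    """Largest real part among the eigenvalues of M."""
    return float(np.max(np.linalg.eigvals(M).real))

def evolve_gamma(A, pop_size=50, n_keep=10, generations=40,
                 mut_prob=0.2, mut_sd=0.02, rng=None):
    """Evolve diag(gamma) vectors to minimise Re(lambda_max) of M = gamma*A.

    Fitness is W = -Re(lambda_max); each generation the n_keep fittest
    vectors are cloned and mutated (crossover omitted for brevity)."""
    rng = rng or np.random.default_rng()
    S = A.shape[0]
    pop = rng.uniform(0.0, 2.0, size=(pop_size, S))
    for _ in range(generations):
        fitness = np.array([-max_re_eig(g[:, None] * A) for g in pop])
        order = np.argsort(fitness)[::-1]
        best = pop[order[:n_keep]].copy()
        if -fitness[order[0]] < 0:           # all Re(lambda) < 0: stable M
            break
        pop = np.repeat(best, pop_size // n_keep, axis=0)  # clone the fittest
        mutate = rng.random(pop.shape) < mut_prob
        pop[mutate] += rng.normal(0.0, mut_sd, size=int(mutate.sum()))
        pop = np.minimum(np.abs(pop), 2.0)   # reflect at 0, cap at 2
        pop[:n_keep] = best                  # elitism: keep parents unchanged
    return best[0]
```

Because the parents are carried over unchanged (elitism), the best fitness found can never decrease between generations, which mirrors the monotone selection pressure described in this section.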
The genetic algorithm was initialised with a population of 1000 different \(diag(\boldsymbol{\gamma})\) vectors with elements sampled i.i.d from \(\gamma \sim \mathcal{U}(0, 2)\). Eigenanalysis was performed on the \(\mathbf{M}\) resulting from each \(\boldsymbol{\gamma}\), and the 20 \(diag(\boldsymbol{\gamma})\) vectors resulting in \(\mathbf{M}\) with the lowest \(\Re(\lambda_{max})\) each produced 50 clonal offspring with subsequent random mutation and crossover between the resulting new generation of 1000 \(diag(\boldsymbol{\gamma})\) vectors. Mutation of each \(\gamma_{i}\) in a \(diag(\boldsymbol{\gamma})\) vector occurred with a probability of 0.2, resulting in a mutation effect of size \(\mathcal{N}(0, 0.02^{2})\) being added to generate the newly mutated \(\gamma_{i}\) (any \(\gamma_{i}\) values that mutated below zero were multiplied by \(-1\), and any values that mutated above 2 were set to 2). Crossover occurred between two sets of 100 \(diag(\boldsymbol{\gamma})\) vectors paired in each generation; vectors were randomly sampled with replacement among but not within sets. Vector pairs selected for crossover swapped all elements between and including two \(\gamma_{i}\) randomly selected with replacement (this allowed for reversal of vector element positions during crossover; e.g., \(\{\gamma_{4}, \gamma_{5}, \gamma_{6}, \gamma_{7}\} \to \{\gamma_{7}, \gamma_{6}, \gamma_{5}, \gamma_{4}\}\) ). The genetic algorithm terminated if a stable \(\mathbf{M}\) was found, 20 generations occurred, or if the mean \(\boldsymbol{\gamma}\) fitness increase between generations was less than 0.01 (where fitness was defined as \(W_{\gamma} = -\Re(\lambda_{max})\) for \(\mathbf{M}\)). \textbf{Acknowledgements:} I am supported by a Leverhulme Trust Early Career Fellowship (ECF-2016-376). Conversations with L. Bussière and N. Bunnefeld, and comments from J. J. Cusack and I. L. Jones, improved the quality of this work. 
\textbf{Supplementary Information:} Full tables of stability results for simulations across different system size (\(S\)) values, ecological community types, connectance (\(C\)) values, interaction strengths (\(\sigma\)), and \(\gamma\) distributions are provided as supplementary material. An additional table also shows results for how feasibility changes across \(S\). All code and simulation outputs are publicly available as part of the RandomMatrixStability package on GitHub (\url{https://github.com/bradduthie/RandomMatrixStability}). \textbf{Additional Information:} The author declares no competing interests. All work was carried out by A. Bradley Duthie, and all code and data are accessible on \href{https://github.com/bradduthie/RandomMatrixStability}{GitHub}. \textbf{References} \hypertarget{refs}{} \hypertarget{ref-May1972}{} 1. May, R. M. Will a large complex system be stable? \emph{Nature} \textbf{238,} 413--414 (1972). \hypertarget{ref-Allesina2012}{} 2. Allesina, S. \& Tang, S. Stability criteria for complex ecosystems. \emph{Nature} \textbf{483,} 205--208 (2012). \hypertarget{ref-Townsend2010a}{} 3. Townsend, S. E., Haydon, D. T. \& Matthews, L. On the generality of stability-complexity relationships in Lotka-Volterra ecosystems. \emph{Journal of Theoretical Biology} \textbf{267,} 243--251 (2010). \hypertarget{ref-Mougi2012}{} 4. Mougi, A. \& Kondoh, M. Diversity of interaction types and ecological community stability. \emph{Science} \textbf{337,} 349--351 (2012). \hypertarget{ref-Allesina2015}{} 5. Allesina, S. \emph{et al.} Predicting the stability of large structured food webs. \emph{Nature Communications} \textbf{6,} 7842 (2015). \hypertarget{ref-Grilli2017}{} 6. Grilli, J. \emph{et al.} Feasibility and coexistence of large ecological communities. \emph{Nature Communications} \textbf{8,} (2017). \hypertarget{ref-Gray2008}{} 7. Gray, R. T. \& Robinson, P. A. 
Stability and synchronization of random brain networks with a distribution of connection strengths. \emph{Neurocomputing} \textbf{71,} 1373--1387 (2008). \hypertarget{ref-Gray2009}{} 8. Gray, R. T. \& Robinson, P. A. Stability of random brain networks with excitatory and inhibitory connections. \emph{Neurocomputing} \textbf{72,} 1849--1858 (2009). \hypertarget{ref-Rosenfeld2009}{} 9. Rosenfeld, S. Patterns of stochastic behavior in dynamically unstable high-dimensional biochemical networks. \emph{Gene Regulation and Systems Biology} \textbf{3,} 1--10 (2009). \hypertarget{ref-MacArthur2010}{} 10. MacArthur, B. D., Sanchez-Garcia, R. J. \& Ma'ayan, A. Microdynamics and criticality of adaptive regulatory networks. \emph{Physics Review Letters} \textbf{104,} 168701 (2010). \hypertarget{ref-May2008}{} 11. May, R. M., Levin, S. A. \& Sugihara, G. Complex systems: Ecology for bankers. \emph{Nature} \textbf{451,} 893--895 (2008). \hypertarget{ref-Haldane2011}{} 12. Haldane, A. G. \& May, R. M. Systemic risk in banking ecosystems. \emph{Nature} \textbf{469,} 351--355 (2011). \hypertarget{ref-Suweis2014}{} 13. Suweis, S. \& D'Odorico, P. Early warning signs in social-ecological networks. \emph{PLoS ONE} \textbf{9,} (2014). \hypertarget{ref-Bardoscia2017}{} 14. Bardoscia, M., Battiston, S., Caccioli, F. \& Caldarelli, G. Pathways towards instability in financial networks. \emph{Nature Communications} \textbf{8,} 1--7 (2017). \hypertarget{ref-Tao2010}{} 15. Tao, T. \& Vu, V. Random matrices: Universality of ESDs and the circular law. \emph{Annals of Probability} \textbf{38,} 2023--2065 (2010). \hypertarget{ref-Watts1998}{} 16. Watts, D. J. \& Strogatz, S. H. Collective dynamics of 'small world' networks. \emph{Nature} \textbf{393,} 440--442 (1998). \hypertarget{ref-Albert2002}{} 17. Albert, R. \& Barabási, A. L. Statistical mechanics of complex networks. \emph{Reviews of Modern Physics} \textbf{74,} 47--97 (2002). \hypertarget{ref-Solow1998}{} 18. Solow, A. R. \& Beet, A. R. 
On lumping species in food webs. \emph{Ecology} \textbf{79,} 2013--2018 (1998). \hypertarget{ref-Williams2000}{} 19. Williams, R. J. \& Martinez, N. D. Simple rules yield complex food webs. \emph{Nature} \textbf{404,} 180--183 (2000). \hypertarget{ref-Patel2018}{} 20. Patel, S., Cortez, M. H. \& Schreiber, S. J. Partitioning the effects of eco-evolutionary feedbacks on community stability. \emph{American Naturalist} \textbf{191,} 1--29 (2018). \hypertarget{ref-Tang2014b}{} 21. Tang, S. \& Allesina, S. Reactivity and stability of large ecosystems. \emph{Frontiers in Ecology and Evolution} \textbf{2,} 1--8 (2014). \hypertarget{ref-Sommers1988}{} 22. Sommers, H. J., Crisanti, A., Sompolinsky, H. \& Stein, Y. Spectrum of large random asymmetric matrices. \emph{Physical Review Letters} \textbf{60,} 1895--1898 (1988). \hypertarget{ref-Ahmadian2015}{} 23. Ahmadian, Y., Fumarola, F. \& Miller, K. D. Properties of networks with partially structured and partially random connectivity. \emph{Physical Review E - Statistical, Nonlinear, and Soft Matter Physics} \textbf{91,} 012820 (2015). \hypertarget{ref-Gibbs2017}{} 24. Gibbs, T., Grilli, J., Rogers, T. \& Allesina, S. The effect of population abundances on the stability of large random ecosystems. \emph{Physical Review E - Statistical, Nonlinear, and Soft Matter Physics} \textbf{98,} 022410 (2018). \hypertarget{ref-Stone2017}{} 25. Stone, L. The feasibility and stability of large complex biological networks: a random matrix approach. \emph{Scientific Reports} \textbf{8,} 8246 (2018). \hypertarget{ref-Tang2014c}{} 26. Tang, S., Pawar, S. \& Allesina, S. Correlation between interaction strengths drives stability in large ecological networks. \emph{Ecology Letters} \textbf{17,} 1094--1100 (2014). \hypertarget{ref-Allesina2011}{} 27. Allesina, S. \& Levine, J. M. A competitive network theory of species diversity. \emph{Proceedings of the National Academy of Sciences of the United States of America} \textbf{108,} 5638--5642 (2011). 
\hypertarget{ref-Dougoud2018}{} 28. Dougoud, M., Vinckenbosch, L., Rohr, R., Bersier, L.-F. \& Mazza, C. The feasibility of equilibria in large ecosystems: a primary but neglected concept in the complexity-stability debate. \emph{PLOS Computational Biology} \textbf{14,} e1005988 (2018). \hypertarget{ref-Song2018}{} 29. Song, C. \& Saavedra, S. Will a small randomly assembled community be feasible and stable? \emph{Ecology} \textbf{99,} 743--751 (2018). \hypertarget{ref-Servan2018}{} 30. Serván, C. A., Capitán, J. A., Grilli, J., Morrison, K. E. \& Allesina, S. Coexistence of many species in random ecosystems. \emph{Nature Ecology and Evolution} \textbf{2,} 1237--1242 (2018). \hypertarget{ref-Hamblin2013}{} 31. Hamblin, S. On the practical usage of genetic algorithms in ecology and evolution. \emph{Methods in Ecology and Evolution} \textbf{4,} 184--194 (2013). \hypertarget{ref-May1973}{} 32. May, R. M. Qualitative stability in model ecosystems. \emph{Ecology} \textbf{54,} 638--641 (1973). \hypertarget{ref-Rproject}{} 33. R Core Team. \emph{R: A language and environment for statistical computing}. (R Foundation for Statistical Computing, 2018). \clearpage \section{Supplemental Information} \vspace{2mm} \hrule \vspace{2mm} \textbf{This supplemental information supports the manuscript ``Component response rate variation underlies the stability of complex systems'' with additional analyses to support its conclusions. All text, code, and data underlying this manuscript are publicly available on \href{https://github.com/bradduthie/RandomMatrixStability}{GitHub} as part of the RandomMatrixStability R package.} The \href{https://github.com/bradduthie/RandomMatrixStability}{RandomMatrixStability package} includes all functions and tools for recreating the text, this supplemental information, and running all code; additional documentation is also provided for package functions. 
The RandomMatrixStability package is available on \href{https://github.com/bradduthie/RandomMatrixStability}{GitHub}; to download it, the \href{https://cran.r-project.org/web/packages/devtools/index.html}{\texttt{devtools} library} is needed. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{install.packages}\NormalTok{(}\StringTok{"devtools"}\NormalTok{);} \KeywordTok{library}\NormalTok{(devtools);} \end{Highlighting} \end{Shaded} The code below installs the RandomMatrixStability package using devtools. \begin{Shaded} \begin{Highlighting}[] \KeywordTok{install_github}\NormalTok{(}\StringTok{"bradduthie/RandomMatrixStability"}\NormalTok{);} \end{Highlighting} \end{Shaded} \vspace{2mm} \hrule \vspace{2mm} \section{Supplemental Information table of contents}\label{supplemental-information-table-of-contents} \begin{itemize} \tightlist \item \protect\hyperlink{IncrS}{Stability across increasing \(S\)} \item \protect\hyperlink{ecological}{Stability of random ecological networks} \begin{itemize} \tightlist \item \protect\hyperlink{competition}{Competitor networks} \item \protect\hyperlink{mutualism}{Mutualist networks} \item \protect\hyperlink{pred-prey}{Predator-prey networks} \end{itemize} \item \protect\hyperlink{connectance}{Sensitivity of connectance (C) values} \begin{itemize} \tightlist \item \protect\hyperlink{connect3}{C = 0.3} \item \protect\hyperlink{connect5}{C = 0.5} \item \protect\hyperlink{connect7}{C = 0.7} \item \protect\hyperlink{connect9}{C = 0.9} \end{itemize} \item \protect\hyperlink{sigma}{Large networks of \(C = 0.05\) across \(S\) and \(\sigma\)} \begin{itemize} \tightlist \item \protect\hyperlink{sigma3}{\(\sigma\) = 0.3} \item \protect\hyperlink{sigma4}{\(\sigma\) = 0.4} \item \protect\hyperlink{sigma5}{\(\sigma\) = 0.5} \item \protect\hyperlink{sigma6}{\(\sigma\) = 0.6} \end{itemize} \item \protect\hyperlink{gam_dist}{Sensitivity of distribution of \(\gamma\)} \item \protect\hyperlink{structured}{Stability of structured networks} \item 
\protect\hyperlink{Feasibility}{Feasibility of complex systems} \item \protect\hyperlink{ga}{Stability given targeted manipulation of \(\gamma\) (genetic algorithm)} \item \protect\hyperlink{Gibbs}{Consistency with Gibbs et al. (2018)} \item \protect\hyperlink{repr}{Reproducing simulation results} \item \protect\hyperlink{ref}{Literature cited} \end{itemize} \clearpage \hypertarget{IncrS}{\section{\texorpdfstring{Stability across increasing \(S\)}{Stability across increasing S}}\label{IncrS}} Figure 4 of the main text reports the number of stable random complex systems found over 1 million iterations. The table below shows the results for all simulations of random \(\mathbf{M}\) matrices at \(\sigma_{A} = 0.4\) and \(C = 1\) given a range of \(S = \{2, 3, ..., 49, 50\}\). In this table, the \texttt{A} refers to \(\mathbf{A}\) matrices where \(\gamma = 1\), while \texttt{M} refers to \(\mathbf{M}\) matrices after \(\sigma^{2}_{\gamma}\) is added and \(\gamma \sim \mathcal{U}(0, 2)\). Each row summarises data for a given \(S\) over 1 million randomly simulated \(\mathbf{M}\). The column \texttt{A\_unstable} shows the number of \(\mathbf{A}\) matrices that are unstable, and the column \texttt{A\_stable} shows the number of \(\mathbf{A}\) matrices that are stable (these two columns sum to 1 million). Similarly, the column \texttt{M\_unstable} shows the number of \(\mathbf{M}\) matrices that are unstable and \texttt{M\_stable} shows the number that are stable. The columns \texttt{A\_stabilised} and \texttt{A\_destabilised} show how many \(\mathbf{M}\) matrices were stabilised or destabilised, respectively, by \(\sigma^{2}_{\gamma}\). 
\begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 293 & 999707 & 293 & 999707 & 0 & 0\tabularnewline 3 & 3602 & 996398 & 3609 & 996391 & 0 & 7\tabularnewline 4 & 14937 & 985063 & 15008 & 984992 & 0 & 71\tabularnewline 5 & 39289 & 960711 & 39783 & 960217 & 36 & 530\tabularnewline 6 & 78845 & 921155 & 80207 & 919793 & 389 & 1751\tabularnewline 7 & 133764 & 866236 & 136904 & 863096 & 1679 & 4819\tabularnewline 8 & 204112 & 795888 & 208241 & 791759 & 5391 & 9520\tabularnewline 9 & 288041 & 711959 & 291775 & 708225 & 12619 & 16353\tabularnewline 10 & 384024 & 615976 & 384931 & 615069 & 23153 & 24060\tabularnewline 11 & 485975 & 514025 & 481019 & 518981 & 35681 & 30725\tabularnewline 12 & 590453 & 409547 & 577439 & 422561 & 48302 & 35288\tabularnewline 13 & 689643 & 310357 & 669440 & 330560 & 57194 & 36991\tabularnewline 14 & 777496 & 222504 & 751433 & 248567 & 60959 & 34896\tabularnewline 15 & 850159 & 149841 & 821613 & 178387 & 58567 & 30021\tabularnewline 16 & 905057 & 94943 & 877481 & 122519 & 51255 & 23679\tabularnewline 17 & 943192 & 56808 & 919536 & 80464 & 40854 & 17198\tabularnewline 18 & 969018 & 30982 & 949944 & 50056 & 30102 & 11028\tabularnewline 19 & 984301 & 15699 & 970703 & 29297 & 20065 & 6467\tabularnewline 20 & 992601 & 7399 & 983507 & 16493 & 12587 & 3493\tabularnewline 21 & 996765 & 3235 & 991532 & 8468 & 7030 & 1797\tabularnewline 22 & 998693 & 1307 & 995567 & 4433 & 3884 & 758\tabularnewline 23 & 999503 & 497 & 997941 & 2059 & 1883 & 321\tabularnewline 24 & 999861 & 139 & 999059 & 941 & 899 & 97\tabularnewline 25 & 999964 & 36 & 999617 & 383 & 380 & 33\tabularnewline 26 & 999993 & 7 & 999878 & 122 & 121 & 6\tabularnewline 27 & 999995 & 5 & 999946 & 54 & 53 & 4\tabularnewline 28 & 1000000 & 0 & 999975 & 25 & 25 & 0\tabularnewline 29 & 1000000 & 0 & 999997 & 3 & 3 & 0\tabularnewline 30 & 1000000 & 0 & 999999 & 1 & 1 & 
0\tabularnewline 31 & 1000000 & 0 & 999999 & 1 & 1 & 0\tabularnewline 32 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 33 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 34 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 35 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 36 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 37 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 38 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 39 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 40 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 41 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 42 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 43 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 44 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 45 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 46 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 47 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 48 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 49 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 50 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} Overall, the ratio of stable \(\mathbf{M}\) matrices to stable \(\mathbf{A}\) matrices found is greater than 1 whenever \(S > 10\) (compare column 5 to column 3), and this ratio increases with increasing \(S\) (column 1). Hence, more randomly created complex systems (\(\mathbf{M}\)) are stable given variation in \(\gamma\) than when \(\gamma = 1\). Note that feasibility results were omitted for the table above, but are \protect\hyperlink{Feasibility}{reported below}. \hypertarget{ecological}{\section{Stability of random ecological networks}\label{ecological}} While the foundational work of May\textsuperscript{\protect\hyperlink{ref-May1972}{1}} applies broadly to complex networks, much attention has been given specifically to ecological networks of interacting species. 
In these networks, the matrix \(\mathbf{A}\) is interpreted as a community matrix and each row and column corresponds to a single species. The per capita effect that the density of any species \(i\) has on the population dynamics of species \(j\) is found in \(A_{ij}\), meaning that \(\mathbf{A}\) holds the effects of pair-wise interactions between \(S\) species\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2},\protect\hyperlink{ref-Allesina2015}{3}}. While May's original work\textsuperscript{\protect\hyperlink{ref-May1972}{1}} considered only randomly assembled communities, recent work has specifically looked at more restricted ecological communities including competitive networks (all off-diagonal elements of \(\mathbf{A}\) are negative), mutualist networks (all off-diagonal elements of \(\mathbf{A}\) are positive), and predator-prey networks (for any pair of \(i\) and \(j\), the effect of \(i\) on \(j\) is negative and \(j\) on \(i\) is positive, or vice versa)\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2},\protect\hyperlink{ref-Allesina2015}{3}}. In general, competitor and mutualist networks tend to be unstable, while predator-prey networks tend to be highly stable\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}. I investigated competitor, mutualist, and predator-prey networks following Allesina et al.\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}. To create these networks, I first generated a random matrix \(\mathbf{A}\), then changed the elements of \(\mathbf{A}\) accordingly. If \(\mathbf{A}\) was a competitive network, then the sign of any positive off-diagonal elements was reversed to be negative. If \(\mathbf{A}\) was a mutualist network, then the sign of any negative off-diagonal elements was reversed to be positive.
And if \(\mathbf{A}\) was a predator-prey network, then all \(i\) and \(j\) pairs of elements were checked; any pairs of the same sign were changed so that one was negative and the other was positive. The number of stable \(\mathbf{M = \gamma A}\) systems was calculated \protect\hyperlink{IncrS}{exactly as it was} for random matrices for values of \(S\) from 2 to 50 (100 in the case of the relatively more stable predator-prey interactions), except that only 100000 random \(\mathbf{M}\) were generated instead of 1 million. The following tables for restricted ecological communities can therefore be compared with the random \(\mathbf{M}\) \protect\hyperlink{IncrS}{results above} (but note that counts from systems with comparable probabilities of stability will be an order of magnitude lower in the tables below due to the smaller number of \(\mathbf{M}\) matrices generated). As with the \protect\hyperlink{IncrS}{results above}, in the tables below, \texttt{A} refers to matrices \(\mathbf{A}\) when \(\gamma = 1\) and \texttt{M} refers to matrices after \(\sigma^{2}_{\gamma}\) is added. The column \texttt{A\_unstable} shows the number of \(\mathbf{A}\) matrices that are unstable, and the column \texttt{A\_stable} shows the number of \(\mathbf{A}\) matrices that are stable (these two columns sum to 100000). Similarly, the column \texttt{M\_unstable} shows the number of \(\mathbf{M}\) matrices that are unstable and \texttt{M\_stable} shows the number that are stable. The columns \texttt{A\_stabilised} and \texttt{A\_destabilised} show how many \(\mathbf{A}\) matrices were stabilised or destabilised, respectively, by \(\sigma^{2}_{\gamma}\). 
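To make the construction concrete, the sign manipulations above can be sketched in a few lines of Python. This is an illustrative sketch, not the simulation code itself; the \(\mathcal{N}(0, 0.4^{2})\) off-diagonal distribution, the \(-1\) diagonal, and \(\gamma \sim \mathcal{U}(0, 2)\) are assumptions carried over from elsewhere in the analysis, and the function names are mine.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def random_A(S, sigma_A=0.4):
    """Random community matrix: N(0, sigma_A^2) interactions, -1 diagonal."""
    A = rng.normal(0.0, sigma_A, size=(S, S))
    np.fill_diagonal(A, -1.0)
    return A

def restrict(A, kind):
    """Impose competitor, mutualist, or predator-prey sign structure."""
    A = A.copy()
    S = A.shape[0]
    off = ~np.eye(S, dtype=bool)
    if kind == "competition":      # all off-diagonal effects negative
        A[off] = -np.abs(A[off])
    elif kind == "mutualism":      # all off-diagonal effects positive
        A[off] = np.abs(A[off])
    elif kind == "predator-prey":  # opposite signs within each (i, j) pair
        for i in range(S):
            for j in range(i + 1, S):
                if A[i, j] * A[j, i] > 0:  # same sign, so flip one of the pair
                    A[j, i] = -A[j, i]
    return A

def is_stable(M):
    """Stable when the leading eigenvalue's real part is negative."""
    return np.max(np.linalg.eigvals(M).real) < 0

S = 10
A = restrict(random_A(S), "predator-prey")
gamma = rng.uniform(0.0, 2.0, size=S)
M = np.diag(gamma) @ A  # M = gamma * A: each row scaled by its response rate
print("A stable:", is_stable(A), "M stable:", is_stable(M))
```

Counting how often \texttt{is\_stable(A)} and \texttt{is\_stable(M)} disagree over many replicates gives the \texttt{A\_stabilised} and \texttt{A\_destabilised} columns reported in the tables.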
\textbf{Competition} Results for competitor interaction networks are shown below \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 48 & 99952 & 48 & 99952 & 0 & 0\tabularnewline 3 & 229 & 99771 & 231 & 99769 & 0 & 2\tabularnewline 4 & 701 & 99299 & 704 & 99296 & 0 & 3\tabularnewline 5 & 1579 & 98421 & 1587 & 98413 & 0 & 8\tabularnewline 6 & 3218 & 96782 & 3253 & 96747 & 6 & 41\tabularnewline 7 & 5519 & 94481 & 5619 & 94381 & 23 & 123\tabularnewline 8 & 9062 & 90938 & 9237 & 90763 & 77 & 252\tabularnewline 9 & 13436 & 86564 & 13729 & 86271 & 230 & 523\tabularnewline 10 & 18911 & 81089 & 19303 & 80697 & 505 & 897\tabularnewline 11 & 25594 & 74406 & 25961 & 74039 & 1011 & 1378\tabularnewline 12 & 33207 & 66793 & 33382 & 66618 & 1724 & 1899\tabularnewline 13 & 41160 & 58840 & 41089 & 58911 & 2655 & 2584\tabularnewline 14 & 50575 & 49425 & 49894 & 50106 & 3777 & 3096\tabularnewline 15 & 59250 & 40750 & 57892 & 42108 & 4824 & 3466\tabularnewline 16 & 67811 & 32189 & 65740 & 34260 & 5634 & 3563\tabularnewline 17 & 75483 & 24517 & 73056 & 26944 & 5943 & 3516\tabularnewline 18 & 82551 & 17449 & 79878 & 20122 & 5780 & 3107\tabularnewline 19 & 88030 & 11970 & 85204 & 14796 & 5417 & 2591\tabularnewline 20 & 92254 & 7746 & 89766 & 10234 & 4544 & 2056\tabularnewline 21 & 95233 & 4767 & 93002 & 6998 & 3695 & 1464\tabularnewline 22 & 97317 & 2683 & 95451 & 4549 & 2803 & 937\tabularnewline 23 & 98508 & 1492 & 97122 & 2878 & 1991 & 605\tabularnewline 24 & 99240 & 760 & 98407 & 1593 & 1216 & 383\tabularnewline 25 & 99669 & 331 & 99082 & 918 & 739 & 152\tabularnewline 26 & 99871 & 129 & 99490 & 510 & 452 & 71\tabularnewline 27 & 99938 & 62 & 99732 & 268 & 240 & 34\tabularnewline 28 & 99985 & 15 & 99888 & 112 & 108 & 11\tabularnewline 29 & 99990 & 10 & 99951 & 49 & 46 & 7\tabularnewline 30 & 100000 & 0 & 99981 & 19 & 19 & 0\tabularnewline 31 & 100000 & 0 & 99993 & 7 & 
7 & 0\tabularnewline 32 & 100000 & 0 & 99996 & 4 & 4 & 0\tabularnewline 33 & 100000 & 0 & 99998 & 2 & 2 & 0\tabularnewline 34 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 50 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Mutualism} Results for mutualist interaction networks are shown below \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 56 & 99944 & 56 & 99944 & 0 & 0\tabularnewline 3 & 3301 & 96699 & 3301 & 96699 & 0 & 0\tabularnewline 4 & 34446 & 65554 & 34446 & 65554 & 0 & 0\tabularnewline 5 & 86520 & 13480 & 86520 & 13480 & 0 & 0\tabularnewline 6 & 99683 & 317 & 99683 & 317 & 0 & 0\tabularnewline 7 & 99998 & 2 & 99998 & 2 & 0 & 0\tabularnewline 8 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline 9 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline 10 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline 11 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline 12 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 50 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Predator-prey} Results for predator-prey interaction networks are shown below \begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 3 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 4 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 5 & 1 & 99999 & 1 & 99999 & 0 & 0\tabularnewline 6 & 4 & 99996 & 4 & 99996 & 0 & 0\tabularnewline 7 & 2 & 99998 & 2 & 99998 & 0 & 0\tabularnewline 8 & 5 & 99995 & 5 & 99995 & 0 & 0\tabularnewline 9 & 20 & 99980 & 21 & 99979 & 0 & 1\tabularnewline 10 & 20 & 
99980 & 22 & 99978 & 0 & 2\tabularnewline 11 & 38 & 99962 & 39 & 99961 & 0 & 1\tabularnewline 12 & 64 & 99936 & 66 & 99934 & 0 & 2\tabularnewline 13 & 87 & 99913 & 91 & 99909 & 0 & 4\tabularnewline 14 & 157 & 99843 & 159 & 99841 & 0 & 2\tabularnewline 15 & 215 & 99785 & 227 & 99773 & 0 & 12\tabularnewline 16 & 293 & 99707 & 310 & 99690 & 0 & 17\tabularnewline 17 & 383 & 99617 & 408 & 99592 & 0 & 25\tabularnewline 18 & 443 & 99557 & 473 & 99527 & 3 & 33\tabularnewline 19 & 642 & 99358 & 675 & 99325 & 4 & 37\tabularnewline 20 & 836 & 99164 & 887 & 99113 & 7 & 58\tabularnewline 21 & 1006 & 98994 & 1058 & 98942 & 10 & 62\tabularnewline 22 & 1153 & 98847 & 1228 & 98772 & 20 & 95\tabularnewline 23 & 1501 & 98499 & 1593 & 98407 & 30 & 122\tabularnewline 24 & 1841 & 98159 & 1996 & 98004 & 40 & 195\tabularnewline 25 & 2146 & 97854 & 2316 & 97684 & 58 & 228\tabularnewline 26 & 2643 & 97357 & 2809 & 97191 & 119 & 285\tabularnewline 27 & 3034 & 96966 & 3258 & 96742 & 158 & 382\tabularnewline 28 & 3690 & 96310 & 3928 & 96072 & 201 & 439\tabularnewline 29 & 4257 & 95743 & 4532 & 95468 & 290 & 565\tabularnewline 30 & 4964 & 95036 & 5221 & 94779 & 424 & 681\tabularnewline 31 & 5627 & 94373 & 5978 & 94022 & 452 & 803\tabularnewline 32 & 6543 & 93457 & 6891 & 93109 & 666 & 1014\tabularnewline 33 & 7425 & 92575 & 7777 & 92223 & 818 & 1170\tabularnewline 34 & 8540 & 91460 & 8841 & 91159 & 1071 & 1372\tabularnewline 35 & 9526 & 90474 & 9842 & 90158 & 1337 & 1653\tabularnewline 36 & 10617 & 89383 & 10891 & 89109 & 1624 & 1898\tabularnewline 37 & 12344 & 87656 & 12508 & 87492 & 2021 & 2185\tabularnewline 38 & 13675 & 86325 & 13877 & 86123 & 2442 & 2644\tabularnewline 39 & 15264 & 84736 & 15349 & 84651 & 2870 & 2955\tabularnewline 40 & 17026 & 82974 & 17053 & 82947 & 3363 & 3390\tabularnewline 41 & 18768 & 81232 & 18614 & 81386 & 3905 & 3751\tabularnewline 42 & 20791 & 79209 & 20470 & 79530 & 4579 & 4258\tabularnewline 43 & 23150 & 76850 & 22754 & 77246 & 5217 & 4821\tabularnewline 44 & 
25449 & 74551 & 24184 & 75816 & 6285 & 5020\tabularnewline 45 & 27702 & 72298 & 26464 & 73536 & 6754 & 5516\tabularnewline 46 & 30525 & 69475 & 28966 & 71034 & 7646 & 6087\tabularnewline 47 & 32832 & 67168 & 31125 & 68875 & 8487 & 6780\tabularnewline 48 & 36152 & 63848 & 33865 & 66135 & 9479 & 7192\tabularnewline 49 & 38714 & 61286 & 36242 & 63758 & 10125 & 7653\tabularnewline 50 & 41628 & 58372 & 38508 & 61492 & 11036 & 7916\tabularnewline 51 & 44483 & 55517 & 41023 & 58977 & 11704 & 8244\tabularnewline 52 & 48134 & 51866 & 44287 & 55713 & 12573 & 8726\tabularnewline 53 & 51138 & 48862 & 46721 & 53279 & 13223 & 8806\tabularnewline 54 & 54261 & 45739 & 49559 & 50441 & 13757 & 9055\tabularnewline 55 & 57647 & 42353 & 52403 & 47597 & 14324 & 9080\tabularnewline 56 & 60630 & 39370 & 55293 & 44707 & 14669 & 9332\tabularnewline 57 & 63647 & 36353 & 57787 & 42213 & 15103 & 9243\tabularnewline 58 & 66961 & 33039 & 60439 & 39561 & 15450 & 8928\tabularnewline 59 & 69968 & 30032 & 63708 & 36292 & 15246 & 8986\tabularnewline 60 & 72838 & 27162 & 66270 & 33730 & 15177 & 8609\tabularnewline 61 & 75609 & 24391 & 68873 & 31127 & 15006 & 8270\tabularnewline 62 & 77999 & 22001 & 71318 & 28682 & 14538 & 7857\tabularnewline 63 & 80616 & 19384 & 73517 & 26483 & 14510 & 7411\tabularnewline 64 & 83089 & 16911 & 76209 & 23791 & 13784 & 6904\tabularnewline 65 & 85150 & 14850 & 78086 & 21914 & 13412 & 6348\tabularnewline 66 & 86908 & 13092 & 80437 & 19563 & 12477 & 6006\tabularnewline 67 & 88671 & 11329 & 82379 & 17621 & 11718 & 5426\tabularnewline 68 & 90537 & 9463 & 84483 & 15517 & 10878 & 4824\tabularnewline 69 & 91969 & 8031 & 86233 & 13767 & 10033 & 4297\tabularnewline 70 & 93181 & 6819 & 87914 & 12086 & 9070 & 3803\tabularnewline 71 & 94330 & 5670 & 89200 & 10800 & 8401 & 3271\tabularnewline 72 & 95324 & 4676 & 90833 & 9167 & 7359 & 2868\tabularnewline 73 & 96143 & 3857 & 91805 & 8195 & 6726 & 2388\tabularnewline 74 & 96959 & 3041 & 93065 & 6935 & 5900 & 2006\tabularnewline 75 & 
97543 & 2457 & 93987 & 6013 & 5222 & 1666\tabularnewline 76 & 97969 & 2031 & 94900 & 5100 & 4481 & 1412\tabularnewline 77 & 98497 & 1503 & 95756 & 4244 & 3809 & 1068\tabularnewline 78 & 98744 & 1256 & 96442 & 3558 & 3269 & 967\tabularnewline 79 & 99045 & 955 & 96942 & 3058 & 2837 & 734\tabularnewline 80 & 99276 & 724 & 97528 & 2472 & 2329 & 581\tabularnewline 81 & 99481 & 519 & 97996 & 2004 & 1894 & 409\tabularnewline 82 & 99556 & 444 & 98321 & 1679 & 1597 & 362\tabularnewline 83 & 99691 & 309 & 98722 & 1278 & 1227 & 258\tabularnewline 84 & 99752 & 248 & 98943 & 1057 & 1015 & 206\tabularnewline 85 & 99833 & 167 & 99144 & 856 & 837 & 148\tabularnewline 86 & 99895 & 105 & 99346 & 654 & 642 & 93\tabularnewline 87 & 99925 & 75 & 99461 & 539 & 530 & 66\tabularnewline 88 & 99945 & 55 & 99566 & 434 & 428 & 49\tabularnewline 89 & 99976 & 24 & 99675 & 325 & 324 & 23\tabularnewline 90 & 99977 & 23 & 99756 & 244 & 243 & 22\tabularnewline 91 & 99982 & 18 & 99839 & 161 & 155 & 12\tabularnewline 92 & 99988 & 12 & 99865 & 135 & 135 & 12\tabularnewline 93 & 99994 & 6 & 99885 & 115 & 115 & 6\tabularnewline 94 & 99993 & 7 & 99911 & 89 & 88 & 6\tabularnewline 95 & 99998 & 2 & 99953 & 47 & 47 & 2\tabularnewline 96 & 99999 & 1 & 99965 & 35 & 35 & 1\tabularnewline 97 & 99999 & 1 & 99979 & 21 & 21 & 1\tabularnewline 98 & 100000 & 0 & 99973 & 27 & 27 & 0\tabularnewline 99 & 100000 & 0 & 99984 & 16 & 16 & 0\tabularnewline 100 & 100000 & 0 & 99989 & 11 & 11 & 0\tabularnewline \bottomrule \end{longtable} Overall, as expected\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}, predator-prey communities are relatively stable while mutualist communities are highly unstable. But interestingly, while \(\sigma^{2}_{\gamma}\) stabilises predator-prey and competitor communities, it does not stabilise mutualist communities.
This is unsurprising because purely mutualist communities are characterised by a leading eigenvalue with a large positive real part\textsuperscript{\protect\hyperlink{ref-Allesina2012}{2}}, and it is highly unlikely that \(\sigma^{2}_{\gamma}\) alone will shift the real parts of all eigenvalues to negative values. \hypertarget{connectance}{\section{Sensitivity of connectance (C) values}\label{connectance}} In the main text, for simplicity, I assumed connectance values of \(C = 1\), meaning that all off-diagonal elements of a matrix \(\mathbf{M}\) were potentially nonzero and sampled from a normal distribution \(\mathcal{N}(0, \sigma^{2}_{A})\) where \(\sigma_{A} = 0.4\). Here I present four tables showing the number of stable communities given \(C = \{0.3, 0.5, 0.7, 0.9\}\). In all cases, uniform variation in component response rate (\(\gamma \sim \mathcal{U}(0, 2)\)) led to a higher number of stable communities than when \(\gamma\) did not vary (\(\gamma = 1\)). In contrast to the main text, 100000 rather than 1 million \(\mathbf{M}\) were simulated. As with the results on \protect\hyperlink{IncrS}{stability with increasing \(S\)} shown above, in the tables below \texttt{A} refers to \(\mathbf{A}\) matrices when \(\gamma = 1\), and \texttt{M} refers to \(\mathbf{M}\) matrices after \(\sigma^{2}_{\gamma}\) is added. The column \texttt{A\_unstable} shows the number of \(\mathbf{A}\) matrices that are unstable, and the column \texttt{A\_stable} shows the number of \(\mathbf{A}\) matrices that are stable (these two columns sum to 100000). Similarly, the column \texttt{M\_unstable} shows the number of \(\mathbf{M}\) matrices that are unstable and \texttt{M\_stable} shows the number that are stable. The columns \texttt{A\_stabilised} and \texttt{A\_destabilised} show how many \(\mathbf{A}\) matrices were stabilised or destabilised, respectively, by \(\sigma^{2}_{\gamma}\).
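Connectance enters at the matrix-generation step by zeroing each off-diagonal element with probability \(1 - C\). A minimal Python sketch follows; as before, the \(-1\) diagonal is an assumption about the setup, and the helper name is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def random_A(S, C, sigma_A=0.4):
    """Community matrix: off-diagonals ~ N(0, sigma_A^2), present with prob. C."""
    A = rng.normal(0.0, sigma_A, size=(S, S))
    A *= rng.random(size=(S, S)) < C  # keep each interaction with probability C
    np.fill_diagonal(A, -1.0)         # self-regulation on the diagonal
    return A

A = random_A(30, C=0.3)
off = ~np.eye(30, dtype=bool)
print("realised connectance:", np.mean(A[off] != 0.0))
```

The realised connectance of any one matrix fluctuates around \(C\), since each interaction is retained independently.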
\textbf{Connectance \(\mathbf{C = 0.3}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 5 & 99995 & 5 & 99995 & 0 & 0\tabularnewline 3 & 6 & 99994 & 6 & 99994 & 0 & 0\tabularnewline 4 & 24 & 99976 & 24 & 99976 & 0 & 0\tabularnewline 5 & 59 & 99941 & 59 & 99941 & 0 & 0\tabularnewline 6 & 98 & 99902 & 98 & 99902 & 0 & 0\tabularnewline 7 & 160 & 99840 & 161 & 99839 & 0 & 1\tabularnewline 8 & 290 & 99710 & 293 & 99707 & 0 & 3\tabularnewline 9 & 430 & 99570 & 434 & 99566 & 0 & 4\tabularnewline 10 & 648 & 99352 & 653 & 99347 & 1 & 6\tabularnewline 11 & 946 & 99054 & 957 & 99043 & 0 & 11\tabularnewline 12 & 1392 & 98608 & 1415 & 98585 & 4 & 27\tabularnewline 13 & 2032 & 97968 & 2065 & 97935 & 5 & 38\tabularnewline 14 & 2627 & 97373 & 2688 & 97312 & 10 & 71\tabularnewline 15 & 3588 & 96412 & 3647 & 96353 & 35 & 94\tabularnewline 16 & 5019 & 94981 & 5124 & 94876 & 51 & 156\tabularnewline 17 & 6512 & 93488 & 6673 & 93327 & 79 & 240\tabularnewline 18 & 8444 & 91556 & 8600 & 91400 & 165 & 321\tabularnewline 19 & 10416 & 89584 & 10667 & 89333 & 244 & 495\tabularnewline 20 & 13254 & 86746 & 13477 & 86523 & 425 & 648\tabularnewline 21 & 16248 & 83752 & 16481 & 83519 & 642 & 875\tabularnewline 22 & 19497 & 80503 & 19719 & 80281 & 929 & 1151\tabularnewline 23 & 23654 & 76346 & 23776 & 76224 & 1368 & 1490\tabularnewline 24 & 28485 & 71515 & 28389 & 71611 & 1914 & 1818\tabularnewline 25 & 32774 & 67226 & 32483 & 67517 & 2428 & 2137\tabularnewline 26 & 38126 & 61874 & 37411 & 62589 & 3221 & 2506\tabularnewline 27 & 43435 & 56565 & 42418 & 57582 & 3828 & 2811\tabularnewline 28 & 49333 & 50667 & 47840 & 52160 & 4565 & 3072\tabularnewline 29 & 55389 & 44611 & 53381 & 46619 & 5329 & 3321\tabularnewline 30 & 60826 & 39174 & 58388 & 41612 & 5918 & 3480\tabularnewline 31 & 66820 & 33180 & 64043 & 35957 & 6345 & 3568\tabularnewline 32 & 72190 & 27810 & 69036 & 30964 & 
6685 & 3531\tabularnewline 33 & 77053 & 22947 & 73587 & 26413 & 6826 & 3360\tabularnewline 34 & 81816 & 18184 & 78157 & 21843 & 6673 & 3014\tabularnewline 35 & 85651 & 14349 & 82041 & 17959 & 6383 & 2773\tabularnewline 36 & 88985 & 11015 & 85657 & 14343 & 5721 & 2393\tabularnewline 37 & 92072 & 7928 & 88805 & 11195 & 5180 & 1913\tabularnewline 38 & 94329 & 5671 & 91444 & 8556 & 4451 & 1566\tabularnewline 39 & 95912 & 4088 & 93295 & 6705 & 3804 & 1187\tabularnewline 40 & 97232 & 2768 & 95201 & 4799 & 2967 & 936\tabularnewline 41 & 98179 & 1821 & 96506 & 3494 & 2356 & 683\tabularnewline 42 & 98826 & 1174 & 97489 & 2511 & 1786 & 449\tabularnewline 43 & 99275 & 725 & 98312 & 1688 & 1251 & 288\tabularnewline 44 & 99583 & 417 & 98872 & 1128 & 903 & 192\tabularnewline 45 & 99776 & 224 & 99339 & 661 & 576 & 139\tabularnewline 46 & 99865 & 135 & 99518 & 482 & 413 & 66\tabularnewline 47 & 99938 & 62 & 99744 & 256 & 226 & 32\tabularnewline 48 & 99956 & 44 & 99824 & 176 & 151 & 19\tabularnewline 49 & 99980 & 20 & 99914 & 86 & 85 & 19\tabularnewline 50 & 99993 & 7 & 99950 & 50 & 46 & 3\tabularnewline 51 & 99998 & 2 & 99971 & 29 & 28 & 1\tabularnewline 52 & 99998 & 2 & 99986 & 14 & 14 & 2\tabularnewline 53 & 99999 & 1 & 99992 & 8 & 7 & 0\tabularnewline 54 & 100000 & 0 & 99997 & 3 & 3 & 0\tabularnewline 55 & 100000 & 0 & 99999 & 1 & 1 & 0\tabularnewline 56 & 100000 & 0 & 99998 & 2 & 2 & 0\tabularnewline 57 & 100000 & 0 & 99999 & 1 & 1 & 0\tabularnewline 58 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 100 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Connectance \(\mathbf{C = 0.5}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 7 & 99993 & 7 & 99993 & 0 & 0\tabularnewline 3 & 32 & 99968 & 32 & 99968 & 0 & 0\tabularnewline 4 & 
122 & 99878 & 122 & 99878 & 0 & 0\tabularnewline 5 & 320 & 99680 & 321 & 99679 & 0 & 1\tabularnewline 6 & 667 & 99333 & 673 & 99327 & 0 & 6\tabularnewline 7 & 1233 & 98767 & 1252 & 98748 & 0 & 19\tabularnewline 8 & 2123 & 97877 & 2156 & 97844 & 3 & 36\tabularnewline 9 & 3415 & 96585 & 3471 & 96529 & 16 & 72\tabularnewline 10 & 5349 & 94651 & 5450 & 94550 & 30 & 131\tabularnewline 11 & 7990 & 92010 & 8185 & 91815 & 81 & 276\tabularnewline 12 & 11073 & 88927 & 11301 & 88699 & 219 & 447\tabularnewline 13 & 14971 & 85029 & 15204 & 84796 & 445 & 678\tabularnewline 14 & 19754 & 80246 & 19992 & 80008 & 764 & 1002\tabularnewline 15 & 25020 & 74980 & 25239 & 74761 & 1185 & 1404\tabularnewline 16 & 30860 & 69140 & 30938 & 69062 & 1902 & 1980\tabularnewline 17 & 37844 & 62156 & 37562 & 62438 & 2758 & 2476\tabularnewline 18 & 44909 & 55091 & 44251 & 55749 & 3595 & 2937\tabularnewline 19 & 52322 & 47678 & 51011 & 48989 & 4573 & 3262\tabularnewline 20 & 60150 & 39850 & 58295 & 41705 & 5382 & 3527\tabularnewline 21 & 67147 & 32853 & 64895 & 35105 & 5925 & 3673\tabularnewline 22 & 74177 & 25823 & 71358 & 28642 & 6310 & 3491\tabularnewline 23 & 80297 & 19703 & 77034 & 22966 & 6507 & 3244\tabularnewline 24 & 85372 & 14628 & 82039 & 17961 & 6209 & 2876\tabularnewline 25 & 89719 & 10281 & 86539 & 13461 & 5562 & 2382\tabularnewline 26 & 92947 & 7053 & 90141 & 9859 & 4707 & 1901\tabularnewline 27 & 95436 & 4564 & 92950 & 7050 & 3844 & 1358\tabularnewline 28 & 97196 & 2804 & 95171 & 4829 & 2999 & 974\tabularnewline 29 & 98300 & 1700 & 96842 & 3158 & 2115 & 657\tabularnewline 30 & 99103 & 897 & 98033 & 1967 & 1466 & 396\tabularnewline 31 & 99502 & 498 & 98665 & 1335 & 1068 & 231\tabularnewline 32 & 99745 & 255 & 99185 & 815 & 696 & 136\tabularnewline 33 & 99881 & 119 & 99572 & 428 & 375 & 66\tabularnewline 34 & 99955 & 45 & 99788 & 212 & 191 & 24\tabularnewline 35 & 99979 & 21 & 99900 & 100 & 95 & 16\tabularnewline 36 & 99995 & 5 & 99950 & 50 & 50 & 5\tabularnewline 37 & 99997 & 3 & 99970 
& 30 & 28 & 1\tabularnewline 38 & 99998 & 2 & 99986 & 14 & 13 & 1\tabularnewline 39 & 99999 & 1 & 99991 & 9 & 9 & 1\tabularnewline 40 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline 41 & 100000 & 0 & 99999 & 1 & 1 & 0\tabularnewline 42 & 100000 & 0 & 99999 & 1 & 1 & 0\tabularnewline 43 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 50 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Connectance \(\mathbf{C = 0.7}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 7 & 99993 & 7 & 99993 & 0 & 0\tabularnewline 3 & 106 & 99894 & 106 & 99894 & 0 & 0\tabularnewline 4 & 395 & 99605 & 397 & 99603 & 0 & 2\tabularnewline 5 & 1117 & 98883 & 1123 & 98877 & 0 & 6\tabularnewline 6 & 2346 & 97654 & 2367 & 97633 & 6 & 27\tabularnewline 7 & 4314 & 95686 & 4388 & 95612 & 16 & 90\tabularnewline 8 & 7327 & 92673 & 7456 & 92544 & 61 & 190\tabularnewline 9 & 11514 & 88486 & 11792 & 88208 & 150 & 428\tabularnewline 10 & 16247 & 83753 & 16584 & 83416 & 415 & 752\tabularnewline 11 & 22481 & 77519 & 22759 & 77241 & 884 & 1162\tabularnewline 12 & 29459 & 70541 & 29729 & 70271 & 1548 & 1818\tabularnewline 13 & 37631 & 62369 & 37567 & 62433 & 2419 & 2355\tabularnewline 14 & 46317 & 53683 & 45696 & 54304 & 3548 & 2927\tabularnewline 15 & 54945 & 45055 & 53695 & 46305 & 4671 & 3421\tabularnewline 16 & 63683 & 36317 & 61643 & 38357 & 5567 & 3527\tabularnewline 17 & 72004 & 27996 & 69375 & 30625 & 6124 & 3495\tabularnewline 18 & 79220 & 20780 & 76158 & 23842 & 6413 & 3351\tabularnewline 19 & 85286 & 14714 & 82283 & 17717 & 5982 & 2979\tabularnewline 20 & 90240 & 9760 & 87181 & 12819 & 5398 & 2339\tabularnewline 21 & 93676 & 6324 & 91077 & 8923 & 4468 & 1869\tabularnewline 22 & 96203 & 3797 & 94045 & 5955 & 3425 & 1267\tabularnewline 23 & 97866 & 2134 & 
96161 & 3839 & 2496 & 791\tabularnewline 24 & 98842 & 1158 & 97633 & 2367 & 1713 & 504\tabularnewline 25 & 99433 & 567 & 98630 & 1370 & 1079 & 276\tabularnewline 26 & 99760 & 240 & 99259 & 741 & 655 & 154\tabularnewline 27 & 99895 & 105 & 99576 & 424 & 377 & 58\tabularnewline 28 & 99950 & 50 & 99790 & 210 & 194 & 34\tabularnewline 29 & 99981 & 19 & 99915 & 85 & 80 & 14\tabularnewline 30 & 99994 & 6 & 99952 & 48 & 47 & 5\tabularnewline 31 & 99998 & 2 & 99972 & 28 & 28 & 2\tabularnewline 32 & 99999 & 1 & 99992 & 8 & 8 & 1\tabularnewline 33 & 100000 & 0 & 99997 & 3 & 3 & 0\tabularnewline 34 & 100000 & 0 & 99999 & 1 & 1 & 0\tabularnewline 35 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 50 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Connectance \(\mathbf{C = 0.9}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 14 & 99986 & 14 & 99986 & 0 & 0\tabularnewline 3 & 240 & 99760 & 240 & 99760 & 0 & 0\tabularnewline 4 & 1008 & 98992 & 1016 & 98984 & 0 & 8\tabularnewline 5 & 2708 & 97292 & 2729 & 97271 & 2 & 23\tabularnewline 6 & 5669 & 94331 & 5755 & 94245 & 13 & 99\tabularnewline 7 & 9848 & 90152 & 10057 & 89943 & 91 & 300\tabularnewline 8 & 15903 & 84097 & 16201 & 83799 & 336 & 634\tabularnewline 9 & 22707 & 77293 & 23110 & 76890 & 765 & 1168\tabularnewline 10 & 30796 & 69204 & 31122 & 68878 & 1526 & 1852\tabularnewline 11 & 40224 & 59776 & 40082 & 59918 & 2649 & 2507\tabularnewline 12 & 49934 & 50066 & 49288 & 50712 & 3773 & 3127\tabularnewline 13 & 60138 & 39862 & 58803 & 41197 & 4984 & 3649\tabularnewline 14 & 69100 & 30900 & 67110 & 32890 & 5755 & 3765\tabularnewline 15 & 77607 & 22393 & 74884 & 25116 & 6273 & 3550\tabularnewline 16 & 84663 & 15337 & 81780 & 18220 & 5975 & 3092\tabularnewline 17 & 90075 & 9925 & 87290 
& 12710 & 5209 & 2424\tabularnewline 18 & 93944 & 6056 & 91419 & 8581 & 4271 & 1746\tabularnewline 19 & 96650 & 3350 & 94530 & 5470 & 3287 & 1167\tabularnewline 20 & 98160 & 1840 & 96698 & 3302 & 2191 & 729\tabularnewline 21 & 99111 & 889 & 98133 & 1867 & 1389 & 411\tabularnewline 22 & 99588 & 412 & 98905 & 1095 & 903 & 220\tabularnewline 23 & 99837 & 163 & 99480 & 520 & 452 & 95\tabularnewline 24 & 99932 & 68 & 99744 & 256 & 228 & 40\tabularnewline 25 & 99976 & 24 & 99863 & 137 & 133 & 20\tabularnewline 26 & 99995 & 5 & 99950 & 50 & 49 & 4\tabularnewline 27 & 99996 & 4 & 99986 & 14 & 13 & 3\tabularnewline 28 & 100000 & 0 & 99993 & 7 & 7 & 0\tabularnewline 29 & 100000 & 0 & 99996 & 4 & 4 & 0\tabularnewline 30 & 100000 & 0 & 99998 & 2 & 2 & 0\tabularnewline 31 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 50 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \hypertarget{sigma}{\section{\texorpdfstring{Sensitivity of interaction strength (\(\sigma_{A}\)) values}{Sensitivity of interaction strength (\textbackslash{}sigma\_\{A\}) values}}\label{sigma}} Results below show stability given varying interaction strengths (\(\sigma_{A}\)) for \(C = 0.05\) (note that system size \(S\) values are larger here and increase in increments of 10 across rows). In the tables below (as \protect\hyperlink{IncrS}{above}), \texttt{A} and \texttt{M} refer to matrices for \(\gamma = 1\) and \(\sigma^{2}_{\gamma}\), respectively.
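For intuition, these parameter choices can be related to May's\textsuperscript{\protect\hyperlink{ref-May1972}{1}} criterion, under which a large random system with unit self-regulation is almost surely unstable once \(\sigma_{A}\sqrt{SC} > 1\), giving a predicted critical size of roughly \(S = 1/(\sigma_{A}^{2}C)\). The short Python sketch below computes this prediction for the four interaction strengths used here; it is a back-of-envelope check, not part of the original analysis.

```python
def critical_S(sigma_A, C):
    """System size S at which May's threshold sigma_A * sqrt(S * C) equals 1."""
    return 1.0 / (sigma_A ** 2 * C)

# Predicted critical sizes for C = 0.05 and each interaction strength used here.
for sigma_A in (0.3, 0.4, 0.5, 0.6):
    print(sigma_A, round(critical_S(sigma_A, C=0.05)))
```

For \(C = 0.05\) these predictions (roughly \(S = 222\), 125, 80, and 56) sit close to, and slightly below, the system sizes at which \texttt{A\_stable} collapses in the tables that follow.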
\textbf{Interaction strength \(\mathbf{\sigma_{A} = 0.3}\)} \begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 10 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 20 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 30 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 40 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 50 & 0 & 100000 & 0 & 100000 & 0 & 0\tabularnewline 60 & 2 & 99998 & 2 & 99998 & 0 & 0\tabularnewline 70 & 4 & 99996 & 4 & 99996 & 0 & 0\tabularnewline 80 & 6 & 99994 & 6 & 99994 & 0 & 0\tabularnewline 90 & 5 & 99995 & 5 & 99995 & 0 & 0\tabularnewline 100 & 11 & 99989 & 11 & 99989 & 0 & 0\tabularnewline 110 & 12 & 99988 & 13 & 99987 & 0 & 1\tabularnewline 120 & 23 & 99977 & 23 & 99977 & 0 & 0\tabularnewline 130 & 40 & 99960 & 40 & 99960 & 0 & 0\tabularnewline 140 & 62 & 99938 & 65 & 99935 & 0 & 3\tabularnewline 150 & 162 & 99838 & 165 & 99835 & 0 & 3\tabularnewline 160 & 325 & 99675 & 329 & 99671 & 2 & 6\tabularnewline 170 & 829 & 99171 & 851 & 99149 & 6 & 28\tabularnewline 180 & 1817 & 98183 & 1860 & 98140 & 31 & 74\tabularnewline 190 & 3927 & 96073 & 3989 & 96011 & 143 & 205\tabularnewline 200 & 8084 & 91916 & 8048 & 91952 & 557 & 521\tabularnewline 210 & 15558 & 84442 & 15147 & 84853 & 1534 & 1123\tabularnewline 220 & 26848 & 73152 & 25342 & 74658 & 3625 & 2119\tabularnewline 230 & 43386 & 56614 & 39535 & 60465 & 6992 & 3141\tabularnewline 240 & 62734 & 37266 & 56684 & 43316 & 9815 & 3765\tabularnewline 250 & 80128 & 19872 & 73080 & 26920 & 10128 & 3080\tabularnewline 260 & 92206 & 7794 & 86619 & 13381 & 7490 & 1903\tabularnewline 270 & 97946 & 2054 & 94824 & 5176 & 3797 & 675\tabularnewline 280 & 99659 & 341 & 98534 & 1466 & 1265 & 140\tabularnewline 290 & 99962 & 38 & 99696 & 304 & 281 & 15\tabularnewline 300 & 99994 & 6 & 99964 & 36 & 34 & 4\tabularnewline \bottomrule \end{longtable} \textbf{Interaction strength \(\mathbf{\sigma_{A} = 
0.4}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 10 & 3 & 99997 & 3 & 99997 & 0 & 0\tabularnewline 20 & 15 & 99985 & 15 & 99985 & 0 & 0\tabularnewline 30 & 48 & 99952 & 48 & 99952 & 0 & 0\tabularnewline 40 & 85 & 99915 & 85 & 99915 & 0 & 0\tabularnewline 50 & 163 & 99837 & 163 & 99837 & 0 & 0\tabularnewline 60 & 280 & 99720 & 282 & 99718 & 0 & 2\tabularnewline 70 & 561 & 99439 & 566 & 99434 & 3 & 8\tabularnewline 80 & 1009 & 98991 & 1029 & 98971 & 6 & 26\tabularnewline 90 & 2126 & 97874 & 2175 & 97825 & 31 & 80\tabularnewline 100 & 4580 & 95420 & 4653 & 95347 & 142 & 215\tabularnewline 110 & 9540 & 90460 & 9632 & 90368 & 465 & 557\tabularnewline 120 & 19090 & 80910 & 18668 & 81332 & 1676 & 1254\tabularnewline 130 & 35047 & 64953 & 33220 & 66780 & 4172 & 2345\tabularnewline 140 & 56411 & 43589 & 52439 & 47561 & 7297 & 3325\tabularnewline 150 & 78003 & 21997 & 72574 & 27426 & 8477 & 3048\tabularnewline 160 & 92678 & 7322 & 88438 & 11562 & 5901 & 1661\tabularnewline 170 & 98614 & 1386 & 96670 & 3330 & 2397 & 453\tabularnewline 180 & 99839 & 161 & 99418 & 582 & 499 & 78\tabularnewline 190 & 99990 & 10 & 99945 & 55 & 52 & 7\tabularnewline 200 & 100000 & 0 & 99995 & 5 & 5 & 0\tabularnewline 210 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 300 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Interaction strength \(\mathbf{\sigma_{A} = 0.5}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 10 & 36 & 99964 & 36 & 99964 & 0 & 0\tabularnewline 20 & 195 & 99805 & 195 & 99805 & 0 & 0\tabularnewline 30 & 519 & 99481 & 523 & 99477 & 0 & 4\tabularnewline 40 & 1096 & 98904 & 1101 & 98899 & 2 & 7\tabularnewline 50 & 
2375 & 97625 & 2397 & 97603 & 9 & 31\tabularnewline 60 & 4898 & 95102 & 4968 & 95032 & 83 & 153\tabularnewline 70 & 10841 & 89159 & 10916 & 89084 & 432 & 507\tabularnewline 80 & 22281 & 77719 & 21988 & 78012 & 1622 & 1329\tabularnewline 90 & 42010 & 57990 & 39998 & 60002 & 4458 & 2446\tabularnewline 100 & 67289 & 32711 & 63098 & 36902 & 7153 & 2962\tabularnewline 110 & 88137 & 11863 & 84023 & 15977 & 6108 & 1994\tabularnewline 120 & 97678 & 2322 & 95557 & 4443 & 2740 & 619\tabularnewline 130 & 99795 & 205 & 99304 & 696 & 578 & 87\tabularnewline 140 & 99989 & 11 & 99948 & 52 & 49 & 8\tabularnewline 150 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 300 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} \textbf{Interaction strength \(\mathbf{\sigma_{A} = 0.6}\)} \begin{longtable}[]{@{}lllllll@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 10 & 162 & 99838 & 162 & 99838 & 0 & 0\tabularnewline 20 & 798 & 99202 & 799 & 99201 & 0 & 1\tabularnewline 30 & 2273 & 97727 & 2289 & 97711 & 6 & 22\tabularnewline 40 & 5259 & 94741 & 5298 & 94702 & 70 & 109\tabularnewline 50 & 12084 & 87916 & 12054 & 87946 & 446 & 416\tabularnewline 60 & 26072 & 73928 & 25511 & 74489 & 1810 & 1249\tabularnewline 70 & 50121 & 49879 & 47747 & 52253 & 4748 & 2374\tabularnewline 80 & 77806 & 22194 & 73810 & 26190 & 6421 & 2425\tabularnewline 90 & 94862 & 5138 & 92069 & 7931 & 3842 & 1049\tabularnewline 100 & 99527 & 473 & 98822 & 1178 & 870 & 165\tabularnewline 110 & 99984 & 16 & 99912 & 88 & 80 & 8\tabularnewline 120 & 100000 & 0 & 99998 & 2 & 2 & 0\tabularnewline 130 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline 300 & 100000 & 0 & 100000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} 
\hypertarget{gam_dist}{\section{\texorpdfstring{Sensitivity of distribution of \(\gamma\)}{Sensitivity of distribution of \textbackslash{}gamma}}\label{gam_dist}} In the main text, I considered a uniform distribution of component response rates \(\gamma \sim \mathcal{U}(0, 2)\). The numbers of unstable and stable \(\mathbf{M}\) matrices are reported in \protect\hyperlink{IncrS}{a table above} across different values of \(S\). Here I show complementary results for three further distributions of \(\gamma\) values: exponential, beta, and gamma. The shapes of these distributions are shown in the figure below. \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} \textbf{Distributions of component response rate (\(\boldsymbol{\gamma}\)) values in complex systems.} The stabilities of simulated complex systems with these \(\gamma\) distributions are compared to those of identical systems in which \(\gamma = 1\) across different system sizes (\(S\); i.e., component numbers), given a unit \(\gamma\) standard deviation (\(\sigma_{\gamma} = 1\)) for panels b-d. Distributions are as follows: (a) uniform, (b) exponential, (c) beta (\(\alpha = 0.5\) and \(\beta = 0.5\)), and (d) gamma (\(k = 2\) and \(\theta = 2\)). Each panel shows 1 million randomly generated \(\gamma\) values. \begin{center}\includegraphics{unnamed-chunk-21-1} \end{center} \begin{center}\rule{0.5\linewidth}{\linethickness}\end{center} The stability of \(\mathbf{A}\) versus \(\mathbf{M}\) was investigated for each of the distributions of \(\gamma\) shown in panels b-d above. The table below shows the numbers of \(\mathbf{A}\) versus \(\mathbf{M}\) matrices that were stable under the exponential (exp), beta, and gamma distributions.
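The \(\gamma\) distributions in panels b-d can be standardised to the stated unit standard deviation before use. A minimal sketch follows; dividing by the sample standard deviation is my assumed rescaling, since only the target \(\sigma_{\gamma} = 1\) is stated.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 1_000_000  # gamma values per panel, as in the figure

# Panel distributions; shape parameters follow the figure caption.
draws = {
    "uniform": rng.uniform(0, 2, n),                # (a) main-text default
    "exponential": rng.exponential(1.0, n),         # (b)
    "beta": rng.beta(0.5, 0.5, n),                  # (c)
    "gamma": rng.gamma(shape=2, scale=2, size=n),   # (d)
}

# Standardise panels b-d to unit standard deviation (sigma_gamma = 1);
# dividing by the sample sd preserves positivity and distribution shape.
for name in ("exponential", "beta", "gamma"):
    draws[name] = draws[name] / draws[name].std()

for name, g in draws.items():
    print(f"{name:12s} mean = {g.mean():.3f}, sd = {g.std():.3f}")
```

Because rescaling is multiplicative, all response rates remain positive, as required for \(\gamma\).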
\begin{longtable}[]{@{}lllllll@{}} \toprule S & exp\_A & exp\_M & beta\_A & beta\_M & gamma\_A & gamma\_M\tabularnewline \midrule \endhead 2 & 99965 & 99965 & 99974 & 99974 & 99977 & 99977\tabularnewline 3 & 99636 & 99635 & 99650 & 99648 & 99628 & 99628\tabularnewline 4 & 98576 & 98564 & 98482 & 98470 & 98508 & 98492\tabularnewline 5 & 96053 & 95971 & 96156 & 96096 & 96068 & 96004\tabularnewline 6 & 92036 & 91867 & 92104 & 91927 & 92233 & 92029\tabularnewline 7 & 86667 & 86333 & 86456 & 86070 & 86604 & 86161\tabularnewline 8 & 79670 & 79153 & 79392 & 78822 & 79393 & 78771\tabularnewline 9 & 71389 & 70911 & 70998 & 70529 & 71070 & 70548\tabularnewline 10 & 61674 & 61609 & 61794 & 61586 & 61265 & 61093\tabularnewline 11 & 51150 & 51935 & 51352 & 51924 & 51313 & 51951\tabularnewline 12 & 41209 & 42925 & 40954 & 42670 & 40708 & 42183\tabularnewline 13 & 30827 & 33462 & 30969 & 33770 & 31046 & 33522\tabularnewline 14 & 22203 & 25767 & 22208 & 25629 & 22342 & 25435\tabularnewline 15 & 15003 & 18877 & 15206 & 18913 & 15025 & 18464\tabularnewline 16 & 9613 & 13372 & 9504 & 13357 & 9418 & 12737\tabularnewline 17 & 5579 & 8967 & 5570 & 8976 & 5719 & 8487\tabularnewline 18 & 3104 & 5833 & 3048 & 5853 & 3060 & 5447\tabularnewline 19 & 1516 & 3578 & 1553 & 3633 & 1600 & 3185\tabularnewline 20 & 717 & 2067 & 799 & 2179 & 769 & 1862\tabularnewline 21 & 312 & 1196 & 310 & 1200 & 331 & 1039\tabularnewline 22 & 129 & 643 & 128 & 654 & 135 & 510\tabularnewline 23 & 48 & 321 & 48 & 359 & 57 & 242\tabularnewline 24 & 11 & 161 & 19 & 159 & 20 & 120\tabularnewline 25 & 1 & 59 & 5 & 81 & 7 & 45\tabularnewline 26 & 0 & 30 & 0 & 48 & 0 & 22\tabularnewline 27 & 0 & 10 & 0 & 16 & 0 & 6\tabularnewline 28 & 1 & 3 & 2 & 2 & 0 & 3\tabularnewline 29 & 0 & 2 & 0 & 0 & 0 & 0\tabularnewline 30 & 0 & 0 & 0 & 1 & 0 & 0\tabularnewline 31 & 0 & 0 & 0 & 1 & 0 & 0\tabularnewline 32 & 0 & 0 & 0 & 0 & 0 & 0\tabularnewline \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{} & 
\ldots{}\tabularnewline 50 & 0 & 0 & 0 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} In comparison to the uniform distribution (a), proportionally fewer stable random systems are found with the exponential distribution (b), while proportionally more are found with the beta (c) and gamma (d) distributions. \hypertarget{structured}{\section{Stability of structured networks}\label{structured}} I tested the stability of one million random, small-world, scale-free, and cascade food web networks for different network parameters. Each of these networks is structured differently. In the main text, the random networks and cascade food webs that I built were saturated (\(C = 1\)), meaning that every component was connected to, and interacted with, every other component (see immediately below). \vspace{2mm} \begin{center}\includegraphics{unnamed-chunk-23-1} \end{center} \vspace{2mm} Small-world networks, in contrast, are not saturated. They are instead defined by components that interact mostly with other closely neighbouring components, but have a proportion of interactions (\(\beta\)) that occur between non-neighbours\textsuperscript{\protect\hyperlink{ref-Watts1998}{4}}. Two small-world networks are shown below. \vspace{2mm} \begin{center}\includegraphics{unnamed-chunk-24-1} \end{center} \vspace{2mm} The small-world network on the left shows a system in which \(\beta = 0.01\), while the small-world network on the right shows one in which \(\beta = 0.1\). At the extremes of \(\beta = 0\) and \(\beta = 1\), networks are regular and random, respectively. The table below shows how \(\sigma^{2}_\gamma\) affects stability in small-world networks across different values of \(S\) and \(\beta\).
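A small-world interaction structure of the kind described above can be generated by Watts-Strogatz-style rewiring. The sketch below conveys the idea; the per-side neighbourhood size \(k\) and the rewiring details are my assumptions, not necessarily the settings used for the table.

```python
import numpy as np

def small_world_adjacency(S, k, beta, seed=0):
    """Watts-Strogatz-style structure: a ring lattice in which each component
    interacts with its k nearest neighbours per side, and each link is
    rewired to a randomly chosen non-neighbour with probability beta."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((S, S), dtype=bool)
    for i in range(S):
        for offset in range(1, k + 1):
            j = (i + offset) % S
            if rng.random() < beta:  # rewire this link at random
                choices = [c for c in range(S) if c != i and not adj[i, c]]
                j = rng.choice(choices)
            adj[i, j] = adj[j, i] = True  # interactions are reciprocal
    return adj

adj = small_world_adjacency(S=24, k=2, beta=0.1)
C = adj.sum() / (24 * 23)  # realised connectance
print(f"connectance C = {C:.3f}")
```

At \(\beta = 0\) this yields a regular ring lattice, and at \(\beta = 1\) an essentially random network, matching the extremes described above.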
\begin{longtable}[]{@{}rrrrrrrrr@{}} \toprule beta & S & A\_unstable & A\_stable & M\_unstable & M\_stable & complex\_A & complex\_M & C\tabularnewline \midrule \endhead 0.00 & 24 & 17388 & 982612 & 17446 & 982554 & 0.5748066 & 0.6582632 & 0.1304348\tabularnewline 0.00 & 48 & 258024 & 741976 & 260579 & 739421 & 0.8073918 & 0.9294192 & 0.1063830\tabularnewline 0.00 & 72 & 715036 & 284964 & 722639 & 277361 & 0.9860840 & 1.1364805 & 0.0985915\tabularnewline 0.00 & 96 & 961434 & 38566 & 962788 & 37212 & 1.1369395 & 1.3110263 & 0.0947368\tabularnewline 0.00 & 120 & 999008 & 992 & 998857 & 1143 & 1.2700387 & 1.4649832 & 0.0924370\tabularnewline 0.00 & 144 & 999997 & 3 & 999994 & 6 & 1.3903192 & 1.6041216 & 0.0909091\tabularnewline 0.00 & 168 & 1000000 & 0 & 1000000 & 0 & 1.5010334 & 1.7320676 & 0.0898204\tabularnewline 0.01 & 24 & 17673 & 982327 & 17720 & 982280 & 0.5747156 & 0.6581503 & 0.1304319\tabularnewline 0.01 & 48 & 255038 & 744962 & 257647 & 742353 & 0.8073388 & 0.9292952 & 0.1063800\tabularnewline 0.01 & 72 & 708892 & 291108 & 716829 & 283171 & 0.9859457 & 1.1363940 & 0.0985884\tabularnewline 0.01 & 96 & 960635 & 39365 & 961876 & 38124 & 1.1370640 & 1.3112193 & 0.0947337\tabularnewline 0.01 & 120 & 999040 & 960 & 998794 & 1206 & 1.2698715 & 1.4648280 & 0.0924338\tabularnewline 0.01 & 144 & 999997 & 3 & 999994 & 6 & 1.3901601 & 1.6039285 & 0.0909060\tabularnewline 0.01 & 168 & 1000000 & 0 & 1000000 & 0 & 1.5009490 & 1.7319739 & 0.0898173\tabularnewline 0.10 & 24 & 20382 & 979618 & 20455 & 979545 & 0.5742520 & 0.6573563 & 0.1302974\tabularnewline 0.10 & 48 & 237747 & 762253 & 240370 & 759630 & 0.8066604 & 0.9284434 & 0.1062311\tabularnewline 0.10 & 72 & 679874 & 320126 & 685575 & 314425 & 0.9849695 & 1.1352553 & 0.0984349\tabularnewline 0.10 & 96 & 961984 & 38016 & 960128 & 39872 & 1.1358912 & 1.3097957 & 0.0945788\tabularnewline 0.10 & 120 & 999546 & 454 & 999275 & 725 & 1.2687142 & 1.4634587 & 0.0922779\tabularnewline 0.10 & 144 & 1000000 & 0 & 1000000 & 0 & 
1.3890356 & 1.6025900 & 0.0907489\tabularnewline 0.10 & 168 & 1000000 & 0 & 1000000 & 0 & 1.4994818 & 1.7302649 & 0.0896598\tabularnewline 0.25 & 24 & 23654 & 976346 & 23775 & 976225 & 0.5722185 & 0.6546853 & 0.1296712\tabularnewline 0.25 & 48 & 228318 & 771682 & 231208 & 768792 & 0.8033257 & 0.9244966 & 0.1055259\tabularnewline 0.25 & 72 & 666982 & 333018 & 669104 & 330896 & 0.9808676 & 1.1304109 & 0.0977066\tabularnewline 0.25 & 96 & 966456 & 33544 & 961545 & 38455 & 1.1307841 & 1.3039452 & 0.0938392\tabularnewline 0.25 & 120 & 999749 & 251 & 999507 & 493 & 1.2632327 & 1.4571506 & 0.0915316\tabularnewline 0.25 & 144 & 1000000 & 0 & 1000000 & 0 & 1.3827642 & 1.5953248 & 0.0899987\tabularnewline 0.25 & 168 & 1000000 & 0 & 1000000 & 0 & 1.4926700 & 1.7224506 & 0.0889064\tabularnewline 1.00 & 24 & 26331 & 973669 & 26478 & 973522 & 0.5561013 & 0.6356655 & 0.1249651\tabularnewline 1.00 & 48 & 211199 & 788801 & 214154 & 785846 & 0.7720342 & 0.8881302 & 0.0991370\tabularnewline 1.00 & 72 & 613621 & 386379 & 615771 & 384229 & 0.9394912 & 1.0825566 & 0.0908153\tabularnewline 1.00 & 96 & 943191 & 56809 & 936396 & 63604 & 1.0812364 & 1.2466510 & 0.0867047\tabularnewline 1.00 & 120 & 999157 & 843 & 998396 & 1604 & 1.2065026 & 1.3916458 & 0.0842561\tabularnewline 1.00 & 144 & 1000000 & 0 & 999997 & 3 & 1.3199179 & 1.5227509 & 0.0826325\tabularnewline 1.00 & 168 & 1000000 & 0 & 1000000 & 0 & 1.4243560 & 1.6434386 & 0.0814738\tabularnewline \bottomrule \end{longtable} In the above, the complexities of \(\mathbf{A}\) and \(\mathbf{M}\), and the mean \(C\), are also shown. At magnitudes of complexity comparable to those of random networks (\(\sigma\sqrt{SC} \gtrapprox 1.26\)), variation in \(\gamma\) typically results in a higher number of stable \(\mathbf{M}\) than stable \(\mathbf{A}\) systems. Scale-free networks are also not saturated, but are defined by an interaction frequency distribution that follows a power law.
In other words, a small number of components interact with many other components, while most components interact with only a small number of other components. Scale-free networks can be built by adding new components, one by one, to an existing system, with each newly added component interacting with a randomly selected subset of \(m\) existing components\textsuperscript{\protect\hyperlink{ref-Albert2002}{5}}. The network on the left below shows an example of a scale-free network in which \(m = 3\). The histogram on the right shows the number of other components with which each component interacts. \begin{center}\includegraphics{unnamed-chunk-26-1} \end{center} The table below shows how \(\sigma^{2}_\gamma\) affects stability across different scale-free networks with different \(S\) and \(m\) values. \begin{longtable}[]{@{}rrrrrrrrr@{}} \toprule m & S & A\_unstable & A\_stable & M\_unstable & M\_stable & complex\_A & complex\_M & C\tabularnewline \midrule \endhead 2 & 24 & 152791 & 847209 & 156034 & 843966 & 0.7891257 & 0.9034663 & 0.1648551\tabularnewline 3 & 24 & 320481 & 679519 & 326351 & 673649 & 0.9566487 & 1.0967499 & 0.2409420\tabularnewline 4 & 24 & 504433 & 495567 & 504826 & 495174 & 1.0922870 & 1.2532761 & 0.3134058\tabularnewline 5 & 24 & 670676 & 329324 & 660426 & 339574 & 1.2073054 & 1.3857169 & 0.3822464\tabularnewline 6 & 24 & 798637 & 201363 & 779345 & 220655 & 1.3067095 & 1.5004508 & 0.4474638\tabularnewline 7 & 24 & 884082 & 115918 & 862215 & 137785 & 1.3942577 & 1.6013368 & 0.5090580\tabularnewline 8 & 24 & 936190 & 63810 & 915630 & 84370 & 1.4722315 & 1.6908563 & 0.5670290\tabularnewline 9 & 24 & 964868 & 35132 & 948297 & 51703 & 1.5414455 & 1.7707292 & 0.6213768\tabularnewline 10 & 24 & 981460 & 18540 & 967911 & 32089 & 1.6030044 & 1.8417459 & 0.6721014\tabularnewline 11 & 24 & 989838 & 10162 & 980232 & 19768 & 1.6586511 & 1.9059313 & 0.7192029\tabularnewline 12 & 24 & 994393 & 5607 & 987436 & 12564 & 1.7081503 & 1.9628898 & 
0.7626812\tabularnewline 2 & 48 & 303963 & 696037 & 310053 & 689947 & 0.7946875 & 0.9132519 & 0.0828901\tabularnewline 3 & 48 & 577855 & 422145 & 579996 & 420004 & 0.9685494 & 1.1141445 & 0.1227837\tabularnewline 4 & 48 & 810001 & 189999 & 799132 & 200868 & 1.1122992 & 1.2799335 & 0.1617908\tabularnewline 5 & 48 & 938004 & 61996 & 924613 & 75387 & 1.2369960 & 1.4236817 & 0.1999113\tabularnewline 6 & 48 & 984975 & 15025 & 976433 & 23567 & 1.3478291 & 1.5514420 & 0.2371454\tabularnewline 7 & 48 & 997160 & 2840 & 994005 & 5995 & 1.4473792 & 1.6663763 & 0.2734929\tabularnewline 8 & 48 & 999584 & 416 & 998590 & 1410 & 1.5385445 & 1.7716359 & 0.3089539\tabularnewline 9 & 48 & 999955 & 45 & 999707 & 293 & 1.6227742 & 1.8687074 & 0.3435284\tabularnewline 10 & 48 & 999992 & 8 & 999939 & 61 & 1.7006157 & 1.9583879 & 0.3772163\tabularnewline 11 & 48 & 999999 & 1 & 999990 & 10 & 1.7731759 & 2.0420990 & 0.4100177\tabularnewline 12 & 48 & 1000000 & 0 & 999999 & 1 & 1.8410402 & 2.1203112 & 0.4419326\tabularnewline 2 & 72 & 427243 & 572757 & 434600 & 565400 & 0.7964226 & 0.9166566 & 0.0553599\tabularnewline 3 & 72 & 741345 & 258655 & 739020 & 260980 & 0.9723446 & 1.1195788 & 0.0823552\tabularnewline 4 & 72 & 931043 & 68957 & 921145 & 78855 & 1.1188220 & 1.2888100 & 0.1089593\tabularnewline 5 & 72 & 989644 & 10356 & 984372 & 15628 & 1.2466268 & 1.4361875 & 0.1351721\tabularnewline 6 & 72 & 999131 & 869 & 997914 & 2086 & 1.3604666 & 1.5674966 & 0.1609937\tabularnewline 7 & 72 & 999946 & 54 & 999804 & 196 & 1.4642496 & 1.6872501 & 0.1864241\tabularnewline 8 & 72 & 999999 & 1 & 999988 & 12 & 1.5596340 & 1.7974044 & 0.2114632\tabularnewline 9 & 72 & 1000000 & 0 & 999999 & 1 & 1.6482181 & 1.8994441 & 0.2361111\tabularnewline 10 & 72 & 1000000 & 0 & 1000000 & 0 & 1.7307859 & 1.9947150 & 0.2603678\tabularnewline 11 & 72 & 1000000 & 0 & 1000000 & 0 & 1.8086766 & 2.0847262 & 0.2842332\tabularnewline 12 & 72 & 1000000 & 0 & 1000000 & 0 & 1.8817533 & 2.1689764 & 0.3077074\tabularnewline 2 & 
96 & 527633 & 472367 & 535188 & 464812 & 0.7974024 & 0.9183557 & 0.0415570\tabularnewline 3 & 96 & 842274 & 157726 & 837756 & 162244 & 0.9741293 & 1.1224709 & 0.0619518\tabularnewline 4 & 96 & 975834 & 24166 & 969478 & 30522 & 1.1220115 & 1.2931371 & 0.0821272\tabularnewline 5 & 96 & 998391 & 1609 & 996991 & 3009 & 1.2511287 & 1.4422331 & 0.1020833\tabularnewline 6 & 96 & 999955 & 45 & 999838 & 162 & 1.3669903 & 1.5757699 & 0.1218202\tabularnewline 7 & 96 & 999999 & 1 & 999996 & 4 & 1.4725862 & 1.6977057 & 0.1413377\tabularnewline 8 & 96 & 1000000 & 0 & 1000000 & 0 & 1.5699145 & 1.8099762 & 0.1606360\tabularnewline 9 & 96 & 1000000 & 0 & 1000000 & 0 & 1.6606162 & 1.9146804 & 0.1797149\tabularnewline 10 & 96 & 1000000 & 0 & 1000000 & 0 & 1.7457971 & 2.0129344 & 0.1985746\tabularnewline 11 & 96 & 1000000 & 0 & 1000000 & 0 & 1.8260368 & 2.1055559 & 0.2172149\tabularnewline 12 & 96 & 1000000 & 0 & 1000000 & 0 & 1.9018608 & 2.1929362 & 0.2356360\tabularnewline 2 & 120 & 609563 & 390437 & 616036 & 383964 & 0.7979355 & 0.9194404 & 0.0332633\tabularnewline 3 & 120 & 904064 & 95936 & 899040 & 100960 & 0.9753815 & 1.1243251 & 0.0496499\tabularnewline 4 & 120 & 991710 & 8290 & 988410 & 11590 & 1.1239922 & 1.2957520 & 0.0658964\tabularnewline 5 & 120 & 999781 & 219 & 999477 & 523 & 1.2539362 & 1.4458518 & 0.0820028\tabularnewline 6 & 120 & 999999 & 1 & 999981 & 19 & 1.3707937 & 1.5806987 & 0.0979692\tabularnewline 7 & 120 & 1000000 & 0 & 999999 & 1 & 1.4775366 & 1.7038860 & 0.1137955\tabularnewline 8 & 120 & 1000000 & 0 & 1000000 & 0 & 1.5762636 & 1.8177236 & 0.1294818\tabularnewline 9 & 120 & 1000000 & 0 & 1000000 & 0 & 1.6680647 & 1.9238257 & 0.1450280\tabularnewline 10 & 120 & 1000000 & 0 & 1000000 & 0 & 1.7545110 & 2.0233838 & 0.1604342\tabularnewline 11 & 120 & 1000000 & 0 & 1000000 & 0 & 1.8363882 & 2.1178385 & 0.1757003\tabularnewline 12 & 120 & 1000000 & 0 & 1000000 & 0 & 1.9135798 & 2.2069806 & 0.1908263\tabularnewline \bottomrule \end{longtable} As in small-world 
networks, the mean \(C\) is shown, along with the mean complexities of \(\mathbf{A}\) and \(\mathbf{M}\). As in all other networks, \(\sigma^{2}_\gamma\) increases the stability of scale-free networks given sufficiently high complexity. Cascade food webs are saturated, and similar to predator-prey random networks. What distinguishes them from predator-prey networks is that cascade food webs are also defined by interactions in which components are ranked such that if the rank of \(i > j\), then \(A_{ij} < 0\) and \(A_{ji} > 0\)\textsuperscript{\protect\hyperlink{ref-Solow1998}{6},\protect\hyperlink{ref-Williams2000}{7}}. In other words, if components are interpreted as ecological species, a species can feed only on species of lower rank. The table below shows how \(\sigma^{2}_\gamma\) affects stability across system sizes in cascade food webs. \begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & complex\_A & complex\_M\tabularnewline \midrule \endhead 2 & 0 & 1000000 & 0 & 1000000 & 0.6378839 & 0.6381485\tabularnewline 3 & 1 & 999999 & 1 & 999999 & 0.7055449 & 0.7525143\tabularnewline 4 & 2 & 999998 & 2 & 999998 & 0.8060500 & 0.8826100\tabularnewline 5 & 17 & 999983 & 17 & 999983 & 0.8974749 & 0.9967594\tabularnewline 6 & 42 & 999958 & 43 & 999957 & 0.9821323 & 1.0999762\tabularnewline 7 & 124 & 999876 & 124 & 999876 & 1.0600906 & 1.1938910\tabularnewline 8 & 303 & 999697 & 309 & 999691 & 1.1329713 & 1.2807302\tabularnewline 9 & 653 & 999347 & 661 & 999339 & 1.2009135 & 1.3616372\tabularnewline 10 & 1401 & 998599 & 1413 & 998587 & 1.2661142 & 1.4387567\tabularnewline 11 & 2534 & 997466 & 2566 & 997434 & 1.3276636 & 1.5113096\tabularnewline 12 & 4514 & 995486 & 4597 & 995403 & 1.3865754 & 1.5804005\tabularnewline 13 & 7570 & 992430 & 7722 & 992278 & 1.4424479 & 1.6462780\tabularnewline 14 & 12223 & 987777 & 12502 & 987498 & 1.4970134 & 1.7102322\tabularnewline 15 & 18433 & 981567 & 18879 & 981121 & 1.5498812 &
1.7719564\tabularnewline 16 & 26973 & 973027 & 27712 & 972288 & 1.6002970 & 1.8310447\tabularnewline 17 & 38272 & 961728 & 39499 & 960501 & 1.6494195 & 1.8884211\tabularnewline 18 & 52397 & 947603 & 54099 & 945901 & 1.6975099 & 1.9443860\tabularnewline 19 & 69986 & 930014 & 72342 & 927658 & 1.7439233 & 1.9987398\tabularnewline 20 & 92851 & 907149 & 95776 & 904224 & 1.7893524 & 2.0514394\tabularnewline 21 & 117487 & 882513 & 121095 & 878905 & 1.8335974 & 2.1030121\tabularnewline 22 & 147852 & 852148 & 151989 & 848011 & 1.8761874 & 2.1527108\tabularnewline 23 & 183501 & 816499 & 187888 & 812112 & 1.9186092 & 2.2019827\tabularnewline 24 & 222592 & 777408 & 226021 & 773979 & 1.9591518 & 2.2491948\tabularnewline 25 & 267691 & 732309 & 269822 & 730178 & 1.9999089 & 2.2963949\tabularnewline 26 & 316090 & 683910 & 316371 & 683629 & 2.0396325 & 2.3427211\tabularnewline 27 & 369830 & 630170 & 366550 & 633450 & 2.0785319 & 2.3879356\tabularnewline 28 & 426407 & 573593 & 419136 & 580864 & 2.1169703 & 2.4324407\tabularnewline 29 & 485068 & 514932 & 473666 & 526334 & 2.1545265 & 2.4759539\tabularnewline 30 & 544300 & 455700 & 527568 & 472432 & 2.1912376 & 2.5187795\tabularnewline 31 & 605803 & 394197 & 584385 & 415615 & 2.2271037 & 2.5603818\tabularnewline 32 & 664689 & 335311 & 638047 & 361953 & 2.2626270 & 2.6016360\tabularnewline 33 & 718848 & 281152 & 689172 & 310828 & 2.2979241 & 2.6424881\tabularnewline 34 & 770790 & 229210 & 737639 & 262361 & 2.3327303 & 2.6828460\tabularnewline 35 & 817531 & 182469 & 783112 & 216888 & 2.3666720 & 2.7221952\tabularnewline 36 & 858750 & 141250 & 823548 & 176452 & 2.3998286 & 2.7608037\tabularnewline 37 & 893017 & 106983 & 859194 & 140806 & 2.4332806 & 2.7994470\tabularnewline 38 & 921268 & 78732 & 890177 & 109823 & 2.4658414 & 2.8372307\tabularnewline 39 & 943551 & 56449 & 915655 & 84345 & 2.4974678 & 2.8741350\tabularnewline 40 & 961088 & 38912 & 936883 & 63117 & 2.5301278 & 2.9116114\tabularnewline 41 & 973664 & 26336 & 953645 & 46355 & 
2.5616210 & 2.9481298\tabularnewline 42 & 982829 & 17171 & 967044 & 32956 & 2.5925309 & 2.9841081\tabularnewline 43 & 989464 & 10536 & 977033 & 22967 & 2.6228949 & 3.0191690\tabularnewline 44 & 993622 & 6378 & 984470 & 15530 & 2.6534626 & 3.0548439\tabularnewline 45 & 996221 & 3779 & 989678 & 10322 & 2.6832092 & 3.0890543\tabularnewline 46 & 997963 & 2037 & 993318 & 6682 & 2.7130588 & 3.1236201\tabularnewline 47 & 998818 & 1182 & 995957 & 4043 & 2.7423480 & 3.1575904\tabularnewline 48 & 999422 & 578 & 997446 & 2554 & 2.7714223 & 3.1912463\tabularnewline 49 & 999746 & 254 & 998532 & 1468 & 2.7999596 & 3.2244020\tabularnewline 50 & 999864 & 136 & 999132 & 868 & 2.8285547 & 3.2574510\tabularnewline 51 & 999934 & 66 & 999561 & 439 & 2.8566907 & 3.2900943\tabularnewline 52 & 999970 & 30 & 999761 & 239 & 2.8844703 & 3.3222721\tabularnewline 53 & 999985 & 15 & 999873 & 127 & 2.9122645 & 3.3544290\tabularnewline 54 & 999999 & 1 & 999935 & 65 & 2.9395400 & 3.3859103\tabularnewline 55 & 1000000 & 0 & 999971 & 29 & 2.9665996 & 3.4173273\tabularnewline 56 & 999999 & 1 & 999988 & 12 & 2.9936263 & 3.4486027\tabularnewline 57 & 1000000 & 0 & 999989 & 11 & 3.0199283 & 3.4789408\tabularnewline 58 & 1000000 & 0 & 999998 & 2 & 3.0460952 & 3.5094530\tabularnewline 59 & 1000000 & 0 & 999999 & 1 & 3.0728115 & 3.5401634\tabularnewline 60 & 1000000 & 0 & 1000000 & 0 & 3.0983367 & 3.5698067\tabularnewline \bottomrule \end{longtable} Cascade food webs are more likely to be stable than small-world or scale-free networks at equivalent magnitudes of complexity (note \(C = 1\) for all above rows). A higher number of stable \(\mathbf{M}\) than \(\mathbf{A}\) was found given \(S \geq 27\). \hypertarget{Feasibility}{\section{Feasibility of complex systems}\label{Feasibility}} When feasibility was evaluated with and without variation in \(\gamma\), there was no increase in feasibility for \(\mathbf{M}\) where \(\gamma\) varied as compared to where \(\gamma = 1\).
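The insensitivity of feasibility to \(\gamma\) follows directly from the model structure: for dynamics of the form \(\dot{x}_i = x_i \gamma_i (r_i + \sum_j A_{ij} x_j)\), the equilibrium solves \(r + \mathbf{A}x^{*} = 0\), from which \(\gamma\) cancels entirely. A minimal numerical sketch follows; the Lotka-Volterra form and \(r = \mathbf{1}\) are assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
S = 10

# Random interaction matrix A: -1 on the diagonal, N(0, 0.2^2) off-diagonal.
A = rng.normal(0.0, 0.2, size=(S, S))
np.fill_diagonal(A, -1.0)

r = np.ones(S)                      # intrinsic growth rates (assumed r = 1)
gamma = rng.uniform(0.0, 2.0, S)    # component response rates, U(0, 2)

# The equilibrium of dx/dt = x * gamma * (r + A x) solves r + A x* = 0,
# so gamma drops out entirely and cannot change feasibility (x* > 0).
x_star_A = np.linalg.solve(A, -r)                                # gamma = 1
x_star_M = np.linalg.solve(np.diag(gamma) @ A, -np.diag(gamma) @ r)

print(np.allclose(x_star_A, x_star_M))  # True: identical equilibria
```

Whether the equilibrium is feasible (all \(x^{*}_i > 0\)) therefore depends on \(\mathbf{A}\) and \(r\) alone.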
Results below illustrate this result, which was general to all other simulations performed. \begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_infeasible & A\_feasible & M\_infeasible & M\_feasible & A\_made\_feasible & A\_made\_infeasible\tabularnewline \midrule \endhead 2 & 749978 & 250022 & 749942 & 250058 & 35552 & 35516\tabularnewline 3 & 874519 & 125481 & 874296 & 125704 & 36803 & 36580\tabularnewline 4 & 937192 & 62808 & 937215 & 62785 & 26440 & 26463\tabularnewline 5 & 968776 & 31224 & 968639 & 31361 & 16319 & 16182\tabularnewline 6 & 984313 & 15687 & 984463 & 15537 & 9006 & 9156\tabularnewline 7 & 992149 & 7851 & 992161 & 7839 & 4991 & 5003\tabularnewline 8 & 996124 & 3876 & 996103 & 3897 & 2644 & 2623\tabularnewline 9 & 998014 & 1986 & 998027 & 1973 & 1361 & 1374\tabularnewline 10 & 999031 & 969 & 999040 & 960 & 698 & 707\tabularnewline 11 & 999546 & 454 & 999514 & 486 & 377 & 345\tabularnewline 12 & 999764 & 236 & 999792 & 208 & 160 & 188\tabularnewline 13 & 999883 & 117 & 999865 & 135 & 105 & 87\tabularnewline 14 & 999938 & 62 & 999945 & 55 & 40 & 47\tabularnewline 15 & 999971 & 29 & 999964 & 36 & 31 & 24\tabularnewline 16 & 999988 & 12 & 999991 & 9 & 8 & 11\tabularnewline 17 & 999996 & 4 & 999991 & 9 & 8 & 3\tabularnewline 18 & 999997 & 3 & 999999 & 1 & 1 & 3\tabularnewline 19 & 999998 & 2 & 999997 & 3 & 3 & 2\tabularnewline 20 & 1000000 & 0 & 999999 & 1 & 1 & 0\tabularnewline 21 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 22 & 999999 & 1 & 1000000 & 0 & 0 & 1\tabularnewline 23 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 24 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 25 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 26 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 27 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 28 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 29 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 30 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 31 & 1000000 & 0 & 1000000 & 0 & 0 & 
0\tabularnewline 32 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 33 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 34 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 35 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 36 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 37 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 38 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 39 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 40 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 41 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 42 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 43 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 44 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 45 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 46 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 47 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 48 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 49 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline 50 & 1000000 & 0 & 1000000 & 0 & 0 & 0\tabularnewline \bottomrule \end{longtable} Hence, in general, \(\sigma^{2}_{\gamma}\) does not appear to affect feasibility in pure species interaction networks\textsuperscript{\protect\hyperlink{ref-Servan2018}{8}}. \hypertarget{ga}{\section{\texorpdfstring{Stability given targeted manipulation of \(\gamma\) (genetic algorithm)}{Stability given targeted manipulation of \textbackslash{}gamma (genetic algorithm)}}\label{ga}} The figure below compares the stability of large complex systems given \(\gamma = 1\) versus targeted manipulation of \(\gamma\) elements. For each \(S\), 100000 complex systems are randomly generated. Stability of each complex system is tested given variation in \(\gamma\) using a genetic algorithm to maximise the effect of \(\gamma\) values on increasing stability, as compared to stability in an otherwise identical system in which \(\gamma\) is the same for all components. 
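The genetic algorithm is not reproduced in full here; the sketch below conveys the general idea under assumed settings (population size, truncation selection, and mutation scale are my choices, not necessarily those used). Candidate \(\gamma\) vectors, initialised from \(\mathcal{U}(0, 2)\), are evolved to minimise the real part of the leading eigenvalue of \(\mathbf{M} = \mathrm{diag}(\gamma)\mathbf{A}\).

```python
import numpy as np

def leading_real(A, gamma):
    """Real part of the leading eigenvalue of M = diag(gamma) A."""
    return np.linalg.eigvals(np.diag(gamma) @ A).real.max()

def evolve_gamma(A, pop_size=40, generations=100, mut_sd=0.1, seed=0):
    """Toy genetic algorithm: evolve gamma to minimise the leading
    eigenvalue real part of diag(gamma) @ A (elitist truncation selection)."""
    rng = np.random.default_rng(seed)
    S = A.shape[0]
    pop = rng.uniform(0.0, 2.0, size=(pop_size, S))  # uniform initialisation
    for _ in range(generations):
        fit = np.array([leading_real(A, g) for g in pop])
        keep = pop[np.argsort(fit)[: pop_size // 2]]       # keep best half
        kids = keep + rng.normal(0.0, mut_sd, keep.shape)  # mutate copies
        kids = np.clip(kids, 1e-6, 2.0)                    # stay in (0, 2]
        pop = np.vstack([keep, kids])                      # elitism
    fit = np.array([leading_real(A, g) for g in pop])
    return pop[fit.argmin()], fit.min()

# Example: a random system near the stability boundary.
rng = np.random.default_rng(seed=3)
S = 12
A = rng.normal(0.0, 0.4, size=(S, S))
np.fill_diagonal(A, -1.0)

before = leading_real(A, np.ones(S))
gamma_best, after = evolve_gamma(A)
print(f"leading eigenvalue real part: {before:.3f} -> {after:.3f}")
```

A system counts as stabilised when the search drives the leading eigenvalue real part below zero for some \(\gamma\) despite it being positive at \(\gamma = 1\).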
Blue bars show the number of stable systems in the absence of component response rate variation, while red bars show the number of stable systems that can be generated if component response rate is varied to maximise system stability. The black line shows the proportion of systems that are stable when component response rate is targeted to increase stability, but would not be stable if \(\sigma^{2}_{\gamma} = 0\). The y-axis shows the \(\ln\) number of systems that are stable across \(S = \{1, 2, ..., 39, 40\}\) for \(C = 1\), and the proportion of systems wherein a targeted search of \(\gamma\) values successfully resulted in system stability. \includegraphics{unnamed-chunk-30-1.pdf} Stability results are also shown in the table below. Results for \texttt{A} indicate systems in which \(\gamma = 1\), while \texttt{M} refers to systems in which the genetic algorithm searched for a set of \(\gamma\) values that stabilised the system. \begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 26 & 99974 & 26 & 99974 & 0 & 0\tabularnewline 3 & 358 & 99642 & 358 & 99642 & 0 & 0\tabularnewline 4 & 1505 & 98495 & 1505 & 98495 & 0 & 0\tabularnewline 5 & 3995 & 96005 & 3982 & 96018 & 13 & 0\tabularnewline 6 & 8060 & 91940 & 7956 & 92044 & 104 & 0\tabularnewline 7 & 13420 & 86580 & 12953 & 87047 & 468 & 1\tabularnewline 8 & 20518 & 79482 & 18940 & 81060 & 1578 & 0\tabularnewline 9 & 28939 & 71061 & 25148 & 74852 & 3793 & 2\tabularnewline 10 & 38241 & 61759 & 30915 & 69085 & 7327 & 1\tabularnewline 11 & 48682 & 51318 & 36398 & 63602 & 12286 & 2\tabularnewline 12 & 58752 & 41248 & 40710 & 59290 & 18043 & 1\tabularnewline 13 & 68888 & 31112 & 44600 & 55400 & 24289 & 1\tabularnewline 14 & 77651 & 22349 & 47528 & 52472 & 30124 & 1\tabularnewline 15 & 84912 & 15088 & 49971 & 50029 & 34942 & 1\tabularnewline 16 & 90451 & 9549 & 52274 & 47726 & 38178 & 1\tabularnewline 17 & 
94332 & 5668 & 54124 & 45876 & 40209 & 1\tabularnewline 18 & 96968 & 3032 & 55831 & 44169 & 41139 & 2\tabularnewline 19 & 98384 & 1616 & 58079 & 41921 & 40305 & 0\tabularnewline 20 & 99269 & 731 & 60181 & 39819 & 39088 & 0\tabularnewline 21 & 99677 & 323 & 63338 & 36662 & 36339 & 0\tabularnewline 22 & 99854 & 146 & 66350 & 33650 & 33504 & 0\tabularnewline 23 & 99947 & 53 & 70478 & 29522 & 29469 & 0\tabularnewline 24 & 99983 & 17 & 74121 & 25879 & 25862 & 0\tabularnewline 25 & 99991 & 9 & 78364 & 21636 & 21627 & 0\tabularnewline 26 & 99999 & 1 & 82635 & 17365 & 17364 & 0\tabularnewline 27 & 100000 & 0 & 86433 & 13567 & 13567 & 0\tabularnewline 28 & 100000 & 0 & 89951 & 10049 & 10049 & 0\tabularnewline 29 & 100000 & 0 & 92716 & 7284 & 7284 & 0\tabularnewline 30 & 100000 & 0 & 95171 & 4829 & 4829 & 0\tabularnewline 31 & 100000 & 0 & 96844 & 3156 & 3156 & 0\tabularnewline 32 & 100000 & 0 & 98128 & 1872 & 1872 & 0\tabularnewline 33 & 100000 & 0 & 98941 & 1059 & 1059 & 0\tabularnewline 34 & 100000 & 0 & 99358 & 642 & 642 & 0\tabularnewline 35 & 100000 & 0 & 99702 & 298 & 298 & 0\tabularnewline 36 & 100000 & 0 & 99856 & 144 & 144 & 0\tabularnewline 37 & 100000 & 0 & 99921 & 79 & 79 & 0\tabularnewline 38 & 100000 & 0 & 99970 & 30 & 30 & 0\tabularnewline 39 & 100000 & 0 & 99989 & 11 & 11 & 0\tabularnewline 40 & 100000 & 0 & 99994 & 6 & 6 & 0\tabularnewline \bottomrule \end{longtable} The distributions of nine \(\gamma\) vectors from the highest \(S\) values are shown below. This comparison shows the high number of stable \(\mathbf{M}\) that can be produced through a targeted search of \(\gamma\) values, and suggests that many otherwise unstable systems could potentially be stabilised by an informed manipulation of their component response times. Such a possibility might conceivably reduce the dimensionality of problems involving stability in social-ecological or economic systems. 
\includegraphics{unnamed-chunk-32-1.pdf} The distribution of \(\gamma\) values found by the genetic algorithm is uniform. Because a uniform distribution was also used to initialise \(\gamma\) values, this provides no evidence that any particular distribution of \(\gamma\) is especially likely to stabilise a matrix \(\mathbf{M}\). \hypertarget{Gibbs}{\section{Consistency with Gibbs et al. (2018)}\label{Gibbs}} The question that I address in the main text is distinct from that of Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}}, who focused instead on the effect of a diagonal matrix of biological species densities \(\mathbf{X}\) on a community matrix \(\mathbf{M}\) given a species interaction matrix \(\mathbf{A}\). This is modelled as below, \[\mathbf{M} = \mathbf{XA}.\] Mathematically, the above is identical to my model in the main text, where the system \(\mathbf{M}\) is defined by component interaction strengths \(\mathbf{A}\) and individual component response rates \(\boldsymbol{\gamma}\), \[\mathbf{M} = \mathbf{\gamma A}.\] I focused on the probability of observing a stable versus unstable system given variation in \(\mathbf{\gamma}\) as system complexity (\(\sigma\sqrt{SC}\)) increased. I increased system complexity by holding \(C\) and \(\sigma\) constant and incrementally increasing \(S\) to obtain numerical results. In contrast, Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}} applied analytical techniques to a different question: the effect of \(\mathbf{\gamma}\) on the stability of \(\mathbf{M}\) given \(\mathbf{A}\) as \(S \to \infty\), with \(\sigma\) scaled so that \(\sigma = 1/\sqrt{S}\). Under such scaling, Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}} showed that the effect of \(\gamma\) on stability should decrease exponentially as \(S\) increases, which I demonstrate below by running simulations in which \(\sigma = 1/\sqrt{S}\).
\begin{longtable}[]{@{}rrrrrrr@{}} \toprule S & A\_unstable & A\_stable & M\_unstable & M\_stable & A\_stabilised & A\_destabilised\tabularnewline \midrule \endhead 2 & 3111 & 96889 & 3111 & 96889 & 0 & 0\tabularnewline 3 & 5203 & 94797 & 5237 & 94763 & 1 & 35\tabularnewline 4 & 6743 & 93257 & 6818 & 93182 & 6 & 81\tabularnewline 5 & 7889 & 92111 & 8005 & 91995 & 20 & 136\tabularnewline 6 & 8834 & 91166 & 8991 & 91009 & 55 & 212\tabularnewline 7 & 9885 & 90115 & 10072 & 89928 & 81 & 268\tabularnewline 8 & 10516 & 89484 & 10764 & 89236 & 108 & 356\tabularnewline 9 & 11135 & 88865 & 11383 & 88617 & 145 & 393\tabularnewline 10 & 11819 & 88181 & 12095 & 87905 & 181 & 457\tabularnewline 11 & 12414 & 87586 & 12700 & 87300 & 213 & 499\tabularnewline 12 & 12865 & 87135 & 13136 & 86864 & 283 & 554\tabularnewline 13 & 13530 & 86470 & 13836 & 86164 & 324 & 630\tabularnewline 14 & 13745 & 86255 & 14042 & 85958 & 362 & 659\tabularnewline 15 & 14401 & 85599 & 14720 & 85280 & 387 & 706\tabularnewline 16 & 14793 & 85207 & 15123 & 84877 & 428 & 758\tabularnewline 17 & 15004 & 84996 & 15356 & 84644 & 444 & 796\tabularnewline 18 & 15361 & 84639 & 15735 & 84265 & 472 & 846\tabularnewline 19 & 16062 & 83938 & 16303 & 83697 & 592 & 833\tabularnewline 20 & 15814 & 84186 & 16184 & 83816 & 566 & 936\tabularnewline 21 & 16171 & 83829 & 16492 & 83508 & 640 & 961\tabularnewline 22 & 16671 & 83329 & 17049 & 82951 & 641 & 1019\tabularnewline 23 & 17000 & 83000 & 17291 & 82709 & 718 & 1009\tabularnewline 24 & 17411 & 82589 & 17666 & 82334 & 765 & 1020\tabularnewline 25 & 17414 & 82586 & 17742 & 82258 & 783 & 1111\tabularnewline 26 & 17697 & 82303 & 18027 & 81973 & 806 & 1136\tabularnewline 27 & 18010 & 81990 & 18316 & 81684 & 880 & 1186\tabularnewline 28 & 18584 & 81416 & 18735 & 81265 & 1008 & 1159\tabularnewline 29 & 18401 & 81599 & 18572 & 81428 & 942 & 1113\tabularnewline 30 & 18497 & 81503 & 18754 & 81246 & 952 & 1209\tabularnewline 31 & 18744 & 81256 & 18942 & 81058 & 991 & 
1189\tabularnewline 32 & 18936 & 81064 & 19194 & 80806 & 1022 & 1280\tabularnewline 33 & 19174 & 80826 & 19346 & 80654 & 1113 & 1285\tabularnewline 34 & 19477 & 80523 & 19632 & 80368 & 1120 & 1275\tabularnewline 35 & 19659 & 80341 & 19777 & 80223 & 1206 & 1324\tabularnewline 36 & 19883 & 80117 & 19929 & 80071 & 1275 & 1321\tabularnewline 37 & 20275 & 79725 & 20348 & 79652 & 1308 & 1381\tabularnewline 38 & 20067 & 79933 & 20190 & 79810 & 1275 & 1398\tabularnewline 39 & 20416 & 79584 & 20516 & 79484 & 1340 & 1440\tabularnewline 40 & 20370 & 79630 & 20489 & 79511 & 1359 & 1478\tabularnewline 41 & 20295 & 79705 & 20430 & 79570 & 1382 & 1517\tabularnewline 42 & 20767 & 79233 & 20839 & 79161 & 1418 & 1490\tabularnewline 43 & 20688 & 79312 & 20705 & 79295 & 1471 & 1488\tabularnewline 44 & 21049 & 78951 & 21028 & 78972 & 1555 & 1534\tabularnewline 45 & 21114 & 78886 & 21034 & 78966 & 1572 & 1492\tabularnewline 46 & 21163 & 78837 & 21195 & 78805 & 1463 & 1495\tabularnewline 47 & 21373 & 78627 & 21353 & 78647 & 1535 & 1515\tabularnewline 48 & 21338 & 78662 & 21285 & 78715 & 1632 & 1579\tabularnewline 49 & 21547 & 78453 & 21566 & 78434 & 1575 & 1594\tabularnewline 50 & 21738 & 78262 & 21633 & 78367 & 1636 & 1531\tabularnewline 51 & 21967 & 78033 & 21892 & 78108 & 1698 & 1623\tabularnewline \bottomrule \end{longtable} Above table results can be compared to those of the \protect\hyperlink{IncrS}{main results}. Note that 100000 (not 1 million), simulations are run to confirm consistency with Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}}. The difference between my model and Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}} is that in the latter, \(\sigma\sqrt{SC} = 1\) remains constant with increasing \(S\). In the former, \(\sigma\sqrt{SC}\) increases with \(S\), so the expected complexity of the system also increases accordingly. 
Consequently, for the scaled \(\sigma\) in the table above, systems are not more likely to be stabilised by \(\gamma\) as \(S\) increases, consistent with Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}}. Note that overall stability does decrease with increasing \(S\) due to the increased density of eigenvalues (see below). \begin{center} \includegraphics{unnamed-chunk-34-1} \end{center} \textbf{Complexity as a function of \(S\) in the main text (solid) versus in Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}} (dashed).} When the complexity is scaled to \(\sigma\sqrt{SC} = 1\), an increase in \(S\) increases the eigenvalue density within a circle with a unit radius centred at \((-1, 0)\) on the complex plane. As \(S \to \infty\), this circle becomes increasingly saturated. Gibbs et al.\textsuperscript{\protect\hyperlink{ref-Gibbs2017}{9}} showed that a diagonal matrix \(\mathbf{\gamma}\) will have an exponentially decreasing effect on stability with increasing \(S\). Increasing \(S\) is visualised below, first with a system size \(S = 100\). \includegraphics{unnamed-chunk-36-1.pdf} The left panel above shows the distribution of eigenvalues; the blue ellipse shows the unit radius within which eigenvalues are expected to be contained. The right panel shows how eigenvalue distributions change given \(\gamma \sim \mathcal{U}(0,2)\). The vertical dotted line shows the threshold of stability, \(\Re = 0\). Increasing to \(S = 200\), the scaling \(\sigma = 1 / \sqrt{S}\) maintains the expected distribution of eigenvalues but increases eigenvalue density. \includegraphics{unnamed-chunk-37-1.pdf} We can increase the system size to \(S = 500\) and see the corresponding increase in eigenvalue density. \includegraphics{unnamed-chunk-38-1.pdf} Finally, an increase in system size to \(S = 1000\) is shown below.
\includegraphics{unnamed-chunk-39-1.pdf} In contrast, in the model of the main text, the complexity of the system is not scaled to \(\sigma\sqrt{SC} = 1\). Rather, the density of eigenvalues within a circle centred at \((-1, 0)\) with a radius \(\sigma\sqrt{SC}\) is held constant, such that there are \(S / \pi(\sigma\sqrt{SC})^2\) eigenvalues per unit area of the circle. As \(S\) increases, so does the expected complexity of the system, but the density of eigenvalues remains finite, causing error around this expectation. Below is a system in which \(S = 100\), \(C = 0.0625\), and \(\sigma = 0.4\), so that \(\sigma \sqrt{SC} = 1\) (identical to the first example distribution above, in which \(S = 100\) and \(\sigma = 1/\sqrt{S}\)). \includegraphics{unnamed-chunk-41-1.pdf} Now when \(S\) is increased to \(200\) while keeping \(C = 0.0625\) and \(\sigma = 0.4\), the area of the circle within which eigenvalues are contained increases to keep the density of eigenvalues constant. \includegraphics{unnamed-chunk-42-1.pdf} Note that the expected distribution of eigenvalues increases so that the threshold \(\Re = 0\) is exceeded. Below, system size is increased to \(S = 500\). \includegraphics{unnamed-chunk-43-1.pdf} Finally, \(S = 1000\) is shown below. Again, the density of eigenvalues per unit remains constant at ca. 2, but the system has increased in complexity such that some real components of eigenvalues are almost assured to be greater than zero. \includegraphics{unnamed-chunk-44-1.pdf} \hypertarget{repr}{\section{Reproducing simulation results}\label{repr}} All results in the main text and the literature cited can be reproduced using the \href{https://github.com/bradduthie/RandomMatrixStability}{RandomMatrixStability} R package, which can be downloaded as instructed at the beginning of this Supplemental Information document.
The most relevant R functions for reproducing simulations include the following: \begin{enumerate} \def\labelenumi{\arabic{enumi}.} \tightlist \item \texttt{rand\_gen\_var}: Simulates random complex systems and cascade food webs \item \texttt{rand\_rho\_var}: Simulates random complex systems across a fixed correlation of \(\rho = \mathrm{cor}(A_{ij}, A_{ji})\) \item \texttt{rand\_gen\_swn}: Simulates randomly generated small-world networks \item \texttt{rand\_gen\_sfn}: Simulates randomly generated scale-free networks \item \texttt{Evo\_rand\_gen\_var}: Uses a genetic algorithm to find stable random complex systems \end{enumerate} For functions 1--4 above, R output will be a table of results. The headers of this table are described below to clarify what is being reported. \footnotesize \begin{longtable}[]{@{}llll@{}} \toprule Header & Description & Header\_cont. & Description\_cont.\tabularnewline \midrule \endhead S & The system size & A\_rho & Corr. between elements A{[}ij{]} and A{[}ji{]}\tabularnewline A\_unstable & No. of A that were unstable & M\_rho & Corr. between elements M{[}ij{]} and M{[}ji{]}\tabularnewline A\_stable & No. of A that were stable & rho\_diff & Diff. between A and M rho values\tabularnewline M\_unstable & No. of M that were unstable & rho\_abs & Diff. between A and M rho magnitudes\tabularnewline M\_stable & No. of M that were stable & complex\_A & Complexity of A\tabularnewline A\_stabilised & No. of A stabilised by gamma & complex\_M & Complexity of M\tabularnewline A\_destabilised & No. of A destabilised by gamma & A\_eig & Expected real part of leading A eigenvalue\tabularnewline A\_infeasible & No. of A that were infeasible & M\_eig & Expected real part of leading M eigenvalue\tabularnewline A\_feasible & No. of A that were feasible & LR\_A & Lowest obs. real part of leading A eigenvalue\tabularnewline M\_infeasible & No. of M that were infeasible & UR\_A & Highest obs.
real part of leading A eigenvalue\tabularnewline M\_feasible & No. of M that were feasible & LR\_M & Lowest obs. real part of leading M eigenvalue\tabularnewline A\_made\_feasible & No. of A made feasible by gamma & UR\_M & Highest obs. real part of leading M eigenvalue\tabularnewline A\_made\_infeasible & No. of A made infeasible by gamma & C & Obs. network connectance\tabularnewline \bottomrule \end{longtable} \normalsize Note that output from \texttt{Evo\_rand\_gen\_var} only includes the first seven rows of the table above, and \texttt{rand\_gen\_var} does not include \(C\) (which can be defined as an argument). All results presented here and in the main text are available in the \href{https://github.com/bradduthie/RandomMatrixStability/tree/master/inst/extdata}{inst/extdata} folder of the \href{https://github.com/bradduthie/RandomMatrixStability}{RandomMatrixStability} R package. \hypertarget{ref}{\section*{Literature cited}\label{ref}} \addcontentsline{toc}{section}{Literature cited} \hypertarget{refs}{} \hypertarget{ref-May1972}{} 1. May, R. M. Will a large complex system be stable? \emph{Nature} \textbf{238,} 413--414 (1972). \hypertarget{ref-Allesina2012}{} 2. Allesina, S. \& Tang, S. Stability criteria for complex ecosystems. \emph{Nature} \textbf{483,} 205--208 (2012). \hypertarget{ref-Allesina2015}{} 3. Allesina, S. \emph{et al.} Predicting the stability of large structured food webs. \emph{Nature Communications} \textbf{6,} 7842 (2015). \hypertarget{ref-Watts1998}{} 4. Watts, D. J. \& Strogatz, S. H. Collective dynamics of 'small world' networks. \emph{Nature} \textbf{393,} 440--442 (1998). \hypertarget{ref-Albert2002}{} 5. Albert, R. \& Barabási, A. L. Statistical mechanics of complex networks. \emph{Reviews of Modern Physics} \textbf{74,} 47--97 (2002). \hypertarget{ref-Solow1998}{} 6. Solow, A. R. \& Beet, A. R. On lumping species in food webs. \emph{Ecology} \textbf{79,} 2013--2018 (1998). \hypertarget{ref-Williams2000}{} 7. Williams, R. J. 
\& Martinez, N. D. Simple rules yield complex food webs. \emph{Nature} \textbf{404,} 180--183 (2000). \hypertarget{ref-Servan2018}{} 8. Serván, C. A., Capitán, J. A., Grilli, J., Morrison, K. E. \& Allesina, S. Coexistence of many species in random ecosystems. \emph{Nature Ecology and Evolution} \textbf{2,} 1237--1242 (2018). \hypertarget{ref-Gibbs2017}{} 9. Gibbs, T., Grilli, J., Rogers, T. \& Allesina, S. The effect of population abundances on the stability of large random ecosystems. \emph{Physical Review E - Statistical, Nonlinear, and Soft Matter Physics} \textbf{98,} 022410 (2018). \end{document}
\section{Introduction} In a recent publication by Fornal \textit{et al.}~\cite{Fornal}, a proposal was made to solve the persistent discrepancy between two methods of measuring the neutron life-time. Trapped neutrons in a bottle appear to have a shorter life-time than neutrons in a beam where the decay proton is detected. There is a discrepancy of around 8 seconds (3.5$\sigma$) between the two experimental set-ups. In Ref.\cite{Fornal} it was suggested that the reason for this difference could lie in a formerly unknown decay channel of the neutron to a dark fermion. This proposal came as an alternative to the previous hypothesis that the experimental disagreement could be caused by the neutron oscillating into its mirror counterpart \cite{Mirror}. Both arguments rely on the proposed existence of a decay channel to a fermion almost degenerate with the neutron. This proposal attracted the attention of several collaborations, and a number of publications followed the original release. In Ref.\cite{Tang} the authors argue, based on experimental evidence, that in a decay of the form $n \rightarrow \text{DM} + \lambda$, i.e. a dark matter particle plus another decay product $\lambda$, the extra particle could not be a photon ($\lambda \not= \gamma$). Another publication\cite{Antineutrino} pointed out that this hypothesised decay could also explain a different experimental inconsistency, the ``reactor antineutrino anomaly'', that is, the $3\sigma$ discrepancy between theory and measurement of the antineutrino flux from a reactor. Finally, Czarnecki \textit{et al.}~\cite{Czarnecki}, although they did not rule out this explanation, pointed out strong constraints related to the value of the neutron axial charge.
In this publication we argue that allowing the neutron to decay to an almost degenerate dark fermion would mean that inside a neutron star, where the neutrons occupying a Fermi sea can sustain, through degeneracy, very large pressures, a large portion of these neutrons would decay to this dark fermion. This implies a severe decrease in pressure, which means that the maximum mass of neutron stars before gravitational collapse would be drastically lower than the masses of the stars measured so far. This was argued in Refs.~\cite{Motta:2018rxp,Baym,Reddy} and will be developed in further detail in this publication. \section{Framework} Simulating the internal structure of neutron stars ultimately amounts to solving the so-called Tolman-Oppenheimer-Volkoff\cite{TOV} (TOV) equations for several different values of central energy density. The TOV equations give an internal profile for the pressure of the star through ($c=G=\hbar=1$) \begin{eqnarray}\label{tov} \frac{dP(r)}{dr}=-\frac{1}{r^2}\left ( \epsilon(r) + P(r) \right )\left ( M(r)+4\pi r^3 P(r) \right )\left ( 1-\frac{2M(r)}{r} \right )^{-1} \end{eqnarray} and the mass is given by the continuity equation \begin{equation} \frac{dM(r)}{dr}=4\pi r^2\epsilon(r). \label{massagain} \end{equation} This set of equations takes, as an input, the equation of state (EOS) of the matter of which the star is made. We will adopt, as a model for the core of neutron stars, the infinite nuclear matter EOS from the quark-meson coupling model\cite{QMC}, recently reviewed in Ref.\cite{QMC_Review}. This model is well established and has been shown to provide an adequate description of high density nuclear matter in several previous calculations\cite{QMC1,QMC2}. We compare that equation of state with a modified version of it where the neutron decays to a dark fermion.
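As a minimal sketch of how the TOV equations above are integrated in practice, the Python fragment below uses a toy polytropic EOS $P=K\epsilon^{\Gamma}$ in place of the QMC EOS; the values of $K$, $\Gamma$, the step size, and the central energy density are illustrative assumptions only (geometric units $G=c=1$, lengths in km, so $1\,\mathrm{M}_\odot \approx 1.4766$ km).

```python
import numpy as np

def tov_solve(eps_c, K=100.0, Gamma=2.0, dr=1e-3):
    """Euler integration of the TOV equation outward from the centre for a
    toy polytrope P = K * eps**Gamma (geometric units G = c = 1; lengths
    in km, eps and P in km^-2). Returns surface radius R and mass M."""
    P_c = K * eps_c**Gamma
    r, M, P = dr, 0.0, P_c
    while P > 1e-10 * P_c:
        eps = (P / K) ** (1.0 / Gamma)     # invert the EOS
        dPdr = -(eps + P) * (M + 4.0 * np.pi * r**3 * P) \
               / (r**2 * (1.0 - 2.0 * M / r))
        dMdr = 4.0 * np.pi * r**2 * eps
        P += dPdr * dr
        M += dMdr * dr
        r += dr
    return r, M   # R in km; M in km (1 solar mass ~ 1.4766 km)

R, M = tov_solve(eps_c=1.0e-3)
print(R, M / 1.4766)   # radius (km) and mass (solar masses)
```

Sweeping \texttt{eps\_c} over a range of central densities and recording $(R, M)$ for each produces the mass versus radius diagrams discussed below; a realistic calculation would replace the polytrope with the tabulated QMC EOS and use a higher-order integrator.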
Since a difference in mass of the order of a few MeV makes absolutely no difference to the mass of a neutron star, we will take this dark fermion to be fully degenerate with the neutron. Ultimately we will show that adding a vector self interaction among the dark fermions can indeed bring the mass up to more acceptable values, as was also shown in Ref.~\cite{Vector}. However, in order for that to happen, the coupling of this vector intermediate particle with the dark fermion has to be simply huge, and we will argue that recent publications~\cite{Barbecue,DAmico} rule out that explanation if the dark particle in neutron decay is the major component of the dark matter in the universe. \subsection{Dark Matter} The proposal by Fornal \textit{et al.}~\cite{Fornal} is based on the decay of the neutron into a dark matter fermion which is almost degenerate with the neutron itself, plus another lighter component to conserve energy. The first of the three proposals mentioned in their publication is $n \rightarrow \chi + \gamma$, where $\chi$ is (and hereafter refers to) the dark matter fermion. However, as argued above, this model was experimentally excluded by Tang \textit{et al.}~\cite{Tang}. The only viable mode seems to be \begin{eqnarray} n\rightarrow \chi + \phi \end{eqnarray} where $\phi$ is a much lighter dark boson. This requires that the masses of the dark particles lie in the ranges \begin{eqnarray} &937.900\text{MeV}<m_\chi<938.543\text{MeV}\\ &937.900\text{MeV}<m_\chi+m_\phi<939.565\text{MeV}. \end{eqnarray} We argue that \begin{romanlist} \item In neutron stars, the presence of this light dark boson $\phi$ is completely irrelevant, for it would escape the system very quickly. \item All of the proposed models indicate that, in neutron stars, the only change this hypothesis implies is a change in chemical composition from the equilibrium reaction $n \leftrightarrow \chi$, here imposed by the chemical equilibrium equation for the chemical potentials $\mu_n=\mu_\chi$.
\end{romanlist} \subsection{QMC} The chosen model of nuclear matter interaction is the QMC model\cite{QMC}. Based on a quark description of the baryons as quark bags interacting directly with mesons (scalar-isoscalar $\sigma$, vector-isoscalar $\omega$, vector-isovector $\rho$), we derive the energy density of the system in Hartree-Fock (HF) approximation. The Hartree, or mean field, contribution amounts to \begin{align} &\epsilon_\text{Hartree}=\frac{m_\sigma^2\sigma^2}{2} + \frac{m_\omega^2\omega^2}{2}+ \frac{m_b^2b^2}{2}& \nonumber\\ &+\frac{1}{\pi^2}\int_{0}^{k_F^n}{k^2}{\sqrt{k^2+M_N^*(\sigma)^2}dk} + \frac{1}{\pi^2}\int_{0}^{k_F^p}{k^2}{\sqrt{k^2+M_N^*(\sigma)^2}dk} \nonumber\\ &+\frac{1}{\pi^2}\int_{0}^{k_F^e}{k^2}{\sqrt{k^2+m_e^2}dk} + \frac{1}{\pi^2}\int_{0}^{k_F^\mu}{k^2}{\sqrt{k^2+m_\mu^2}dk} +\frac{1}{\pi^2}\int_{0}^{k_F^\chi}{k^2}{\sqrt{k^2+m_\chi^2}dk} \end{align} where the effective mass of the nucleon is $M_N^*(\sigma)=m_n-g_\sigma\sigma+\frac{d}{2}(g_\sigma\sigma)^2$. The parameter $d$ is what is referred to as the scalar polarizability, and it is a prominent feature of the QMC model. In our convention $\sigma$, $\omega$, and $b$ refer to the mean field values of the mesons (where $b$ is the mean field value of $\rho$).
For each particle, the Fermi momenta and chemical potentials as functions of the number densities are calculated as \begin{align} &k_\varphi^{3}={{3\pi^2n_\varphi}},\quad \varphi=\{p,n,e,\mu,\chi \} \\ &\mu_n= \frac{\partial \epsilon}{\partial n_n}, \quad \mu_p= \frac{\partial \epsilon}{\partial n_p}, \quad \mu_l= \sqrt{k_f(n_l)^2+m_l^2} \end{align} Finally, the Fock terms read \scriptsize\begin{align} &\epsilon_\text{Fock}=-G_\omega\frac{1}{(2\pi)^6} \left[ \int_0^{k_F^p}d^3k_1 \int_0^{k_F^p}d^3k_2 \frac{m_\omega^2}{(\vec k_1 - \vec k_2)^2 + m_\omega^2} + \int_0^{k_F^n}d^3k_1 \int_0^{k_F^n}d^3k_2 \frac{m_\omega^2}{(\vec k_1 - \vec k_2)^2 + m_\omega^2}\right] \nonumber\\ &-\frac{G_\rho}{4}\frac{1}{(2\pi)^6} \left[ (1)\times\int_0^{k_F^n}d^3k_1 \int_0^{k_F^n}d^3k_2 \frac{m_\rho^2}{(\vec k_1 - \vec k_2)^2 + m_\rho^2} +(1)\times\int_0^{k_F^p}d^3k_1 \int_0^{k_F^p}d^3k_2 \frac{m_\rho^2}{(\vec k_1 - \vec k_2)^2 + m_\rho^2}\right. \nonumber\\ &\left. + (2)\times\int_0^{k_F^n}d^3k_1 \int_0^{k_F^p}d^3k_2 \frac{m_\rho^2}{(\vec k_1 - \vec k_2)^2 + m_\rho^2}+ (2)\times\int_0^{k_F^p}d^3k_1 \int_0^{k_F^n}d^3k_2 \frac{m_\rho^2}{(\vec k_1 - \vec k_2)^2 + m_\rho^2} \right]&\nonumber\\ & +\frac{1}{(2\pi)^6} \int_0^{k_F^p}d^3k_1 \int_0^{k_F^p}d^3k_2 \frac{1}{(\vec k_1 - \vec k_2)^2 + \tilde m_\sigma^2}\times\frac{M_N^*(\sigma)(-g_\sigma C(\sigma))}{\sqrt{M_N^*(\sigma)^2+k_1^2}} \times\frac{M_N^*(\sigma)(-g_\sigma C(\sigma))}{\sqrt{M_N^*(\sigma)^2+k_2^2}}\nonumber\\ &+\frac{1}{(2\pi)^6} \int_0^{k_F^n}d^3k_1 \int_0^{k_F^n}d^3k_2 \frac{1}{(\vec k_1 - \vec k_2)^2 + \tilde m_\sigma^2}\times\frac{M_N^*(\sigma)(-g_\sigma C(\sigma))}{\sqrt{M_N^*(\sigma)^2+k_1^2}} \times\frac{M_N^*(\sigma)(-g_\sigma C(\sigma))}{\sqrt{M_N^*(\sigma)^2+k_2^2}}\nonumber \end{align}\normalsize where \begin{align} \tilde m_\sigma^2 =m_\sigma^2 + \frac{1}{\pi^2}\sum_{p,n}\int_0^{k_f^n}k^2dk \frac{\partial^2}{\partial \sigma^2} \sqrt{M_N^*(\sigma)^2+k^2}.
\end{align} The density-dependent meson mean field equations in the QMC model are \begin{align} &\sigma(n_n,n_p) =- \frac{1}{m_\sigma^2\pi^2}\left( \frac{\partial M_N^*}{\partial\bar\sigma}\right)\left[\sum_{p,n} \int_{0}^{k_F}k^2dk\frac{M_N^*(\sigma)}{\sqrt{k^2+M_N^*(\sigma)^2}} \right], \\ &\omega(n_n,n_p)=\frac{g_\omega}{m_\omega^2}\left(n_n+n_p\right), \\ &b(n_n,n_p)=\frac{g_\rho}{m_\rho^2}\left(\frac{n_p}{2} -\frac{n_n}{2}\right). \end{align} Finally, the pressure is calculated as $P=\sum_f\mu_fn_f-\epsilon$. In Table~\ref{tab:constants} we report the constants used to perform the calculations. They are chosen to fit the saturation density at $0.16\text{fm}^{-3}$, the binding energy of symmetric matter at saturation $-15.8\text{MeV}$, and the symmetry energy $30\text{MeV}$. \begin{table} \tbl{Masses and coupling constants.} {\begin{tabular}{@{}lcccccccc@{}} \toprule \ &$\sigma$ & $\omega$ & $\rho$ & $n$ & $p$ & $e$ & $\mu$ & $\chi$ \\ Mass &700MeV&782MeV&775MeV&939MeV&939MeV&0.5MeV&105MeV&939MeV \\ Coupling ($g^2/m^2$) &11.33fm$^2$&7.27fm$^2$&4.56fm$^2$&.&.&.&.&. \end{tabular} } \label{tab:constants} \end{table} \section{Neutron Stars} Using the model presented above, we calculate the equilibrium densities through the equations \begin{align} \text{Neutron $\beta$ decay}\quad&\mu_n=\mu_p+\mu_e\\ \text{Muon $\beta$ decay}\quad&\mu_\mu=\mu_e \\ \text{Charge neutrality}\quad&n_p = n_e+n_\mu \\ \text{Dark matter decay}\quad&\mu_n = \mu_\chi. \end{align} Solving these equations, we obtain species fractions that vastly favour the dark matter particle (Fig.~\ref{fig:fraction}).
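The qualitative origin of this result can already be seen in a non-interacting caricature of the condition $\mu_n = \mu_\chi$. The Python sketch below (free Fermi gases only, with an illustrative density; $\hbar c = 197.327$ MeV fm) checks numerically that $\mu = \partial\epsilon/\partial n = \sqrt{k_F^2+m^2}$, from which it follows that, for a dark fermion fully degenerate with the neutron, equal chemical potentials force equal densities $n_\chi = n_n$; the full QMC calculation with interactions shifts this balance further.

```python
import numpy as np

HBARC = 197.327  # MeV fm

def eps_free(n, m):
    """Energy density (1/pi^2) * int_0^kF k^2 sqrt(k^2+m^2) dk of a free
    Fermi gas at number density n (fm^-3), mass m (MeV); in MeV fm^-3."""
    kF = HBARC * (3.0 * np.pi**2 * n) ** (1.0 / 3.0)   # Fermi momentum, MeV
    k = np.linspace(0.0, kF, 4001)
    y = k**2 * np.sqrt(k**2 + m**2)
    integral = np.sum((y[1:] + y[:-1]) * np.diff(k)) / 2.0  # trapezoid rule
    return integral / (np.pi**2 * HBARC**3)

def mu_free(n, m, dn=1e-5):
    """Chemical potential mu = d(eps)/dn by central finite difference."""
    return (eps_free(n + dn, m) - eps_free(n - dn, m)) / (2.0 * dn)

m_n = 939.0   # MeV; the dark fermion is taken fully degenerate with it
n = 0.32      # fm^-3, about twice nuclear saturation density
kF = HBARC * (3.0 * np.pi**2 * n) ** (1.0 / 3.0)
print(mu_free(n, m_n), np.sqrt(kF**2 + m_n**2))   # the two should agree
# With equal masses, mu_n(n_n) = mu_chi(n_chi) is solved by n_chi = n_n:
# in this non-interacting caricature half of the neutrons convert.
```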
\begin{figure}[ht] \begin{center} \includegraphics[width=4in]{fraction} \caption{Relative species fraction with dark matter present.} \label{fig:fraction} \end{center} \end{figure} That, in turn, leads to a drastic decrease in pressure in the equation of state (Fig.\ref{fig:eos}) and, as a consequence, the Mass versus Radius diagram has a maximum significantly lower than in the case without dark matter (Fig.\ref{fig:massaraio}). \begin{figure}[ht] \begin{center} \includegraphics[width=4in]{eos} \caption{Equation of state with dark matter present.} \label{fig:eos} \end{center} \end{figure} The maximum mass in the diagram with dark matter is around $0.7$M$_\odot$. The reason is that although the central energy density of a star with dark matter is much larger than that of a star without it, it does not have enough pressure to support itself, and therefore the energy density goes down very quickly (Fig.\ref{fig:edens}). \begin{figure}[ht] \begin{center} \includegraphics[width=2.5in]{edens_radius} \caption{Energy density profile as a function of the internal radius of the star. The profile is presented for the maximum mass point of the diagram with dark matter present, an equally heavy star without dark matter and the maximum mass star without dark matter.} \label{fig:edens} \end{center} \end{figure} However, it is unreasonable to assume that even the uppermost point in the mass radius diagram of the EOS with dark matter present could ever be reached. Since the star with dark matter has to come from a real star, we take the maximum mass star without dark matter and determine to which point in the dark matter diagram the decayed star would correspond, that is, which point in the dark matter diagram has total baryon number plus total dark matter number equal to the total baryon number of a star without dark matter.
That leads to a maximum mass of $0.58$M$_\odot$ (Fig.\ref{fig:massaraio}). \begin{figure}[ht] \begin{center} \includegraphics[width=4in]{058_1} \caption{Maximum possible star as an end product of dark matter decay.} \label{fig:massaraio} \end{center} \end{figure} \section{Repulsive Vector Interaction} If the dark matter particle were self-interacting through a repulsive interaction, it could possibly build up enough pressure to sustain larger masses. This approach was used in Ref.\cite{Vector} and we here perform the same procedure within the framework of QMC. To compare with the neutron-$\omega$ physical system, we vary the coupling/mass ratio as multiples of the $n\omega$ vertex couplings, as indicated in the figures. We name this vector intermediate $V$. The species fraction changes as the $\chi V$ interaction becomes stronger, which restores the EOS to its previous, stiffer version. The greater the strength of the interaction, the less dark matter will be present in the star (Fig.\ref{fig:vectorfraction}). That allows the system to support much higher masses. \begin{figure}[ht] \begin{center} \includegraphics[width=4in]{fraction2} \caption{Species fraction considering different strengths of vector self-repulsion.} \label{fig:vectorfraction} \end{center} \end{figure} The maximum mass reaches the value of 2 solar masses, as all neutron star models must per recent experimental determinations\cite{Antoniadis,Demorest}, only when the $g_V/m_V$ for the dark matter is 10 times greater than $g_\omega/m_\omega$ (Fig.\ref{fig:vectorinteraction}). \begin{figure}[ht] \begin{center} \includegraphics[width=4in]{vectorinteraction} \caption{Adding vector self-interactions between the dark fermions through the exchange of a vector boson.} \label{fig:vectorinteraction} \end{center} \end{figure} However, one must consider that Ref.~\cite{DAmico} severely limits the cross-section of such a dark matter particle through astrophysical data recently measured~\cite{Barbecue}.
These values of the couplings (that is, $g_V/m_V$) are far too high to even enter consideration. \section{Conclusion} We have shown that the addition of this dark matter particle to the composition of neutron stars leads to a dramatic decrease in the maximum mass. The mass versus radius diagram points to $0.7$M$_\odot$ as the mass upper limit for stars with dark matter; however, further investigation suggests that, if a star with dark matter is a decay product of a normal neutron star, the real maximum mass has to be around $0.58$M$_\odot$. Since most neutron stars measured have masses around $1.5$M$_\odot$, this points to a clear inconsistency of the hypothesis with data. Moreover, a repulsive self-interaction can indeed push the mass limit to an acceptable point, but only when the ratio coupling/mass of the $\chi V$ interaction is 10 times larger than that of the $n\omega$ vertex. If this dark matter particle were to correspond with astrophysical dark matter, this result would be in clear contradiction with recent astrophysical measurements, as pointed out in Ref.~\cite{DAmico}. Even if it were unconnected with astrophysical dark matter, it would be truly remarkable to have a new kind of matter with self interactions an order of magnitude larger than the familiar strong force. We therefore state that this decay is simply in contradiction with neutron star mass data if the dark particle in neutron decay is a significant component of the dark matter in the universe. \section*{Acknowledgements} This work was supported by the University of Adelaide and by the Australian Research Council through the ARC Centre of Excellence for Particle Physics at the Terascale (CE110001104) and Discovery Project DP150103164. \bibliographystyle{ws-procs961x669}
\section{Introduction} Parabolic vector bundles over Riemann surfaces with marked points were introduced by C. Seshadri in \cite{Sesh} and, similar to the Narasimhan-Seshadri correspondence, there is an analogous correspondence between stable parabolic bundles and the unitary representations of the fundamental group of the punctured surface with fixed holonomy class around each puncture \cite{Meht}. Later on, C. Simpson in \cite{Simp} provided a non-abelian Hodge correspondence in the non-compact case. The analysis on the non-compact algebraic curve has to presume the appropriate growth of the harmonic metric at the punctures, a notion C. Simpson called tameness. In a particular case, parabolic Higgs bundles are in bijection with meromorphic flat connections, whose holonomy around each puncture defines a conjugacy class of an element in the unitary group described by the weights in the parabolic structure of the bundle. These connections correspond to representations of the fundamental group of the punctured surface in the general linear group, which send a small loop around each parabolic point to an element conjugate to a unitary element. Chern classes for parabolic bundles were constructed by I. Biswas in \cite{Biswas-Chern}; one can also define Chern characters of parabolic bundles in the rational Chow groups (see \cite{Iyer-Simpson}). In this article, we study connected components of moduli spaces of polystable parabolic $G$-Higgs bundles for a semisimple real Lie group $G$. These objects were explicitly defined in \cite{BiGaRi}, where a Hitchin-Kobayashi correspondence was also established. Moreover, P. Boalch in \cite{Boalch} provided a local Riemann-Hilbert correspondence for logarithmic connections on $G$-bundles and parabolic $G$-bundles on a curve. In the case when $G$ is a split real form of a complex simple Lie group, there exists a topologically trivial connected component in the moduli space, extending N.
Hitchin's classical result from the non-parabolic case \cite{Hit92}. To be more precise, we show: \begin{thm*}{\textnormal{\textbf{\ref{401}}}} Let $X$ be a compact Riemann surface of genus $g$ and let $D=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ be a divisor of $s$-many distinct points on $X$, such that $2g-2+s>0$, that is, such that the surface $X$ can be equipped with a metric of constant negative curvature (-4). Let $G$ be the adjoint group of the split real form of a complex simple Lie group with Cartan decomposition in the Lie algebra $\mathfrak{g}=\mathfrak{h}\oplus \mathfrak{m}$. The space of homomorphisms from the fundamental group of $X$ into $G$ with fixed conjugacy class of monodromy around the points in $D$ has a component of real dimension $2\left( g-1 \right){{\dim}_{\mathbb{R}}}G+2s\cdot \mathrm{rk}E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$. \end{thm*} An important tool towards studying the topology of moduli spaces of parabolic Higgs bundles over a Riemann surface $X$ with a divisor $D$ is provided by the correspondence of these objects to \textit{orbifold Higgs bundles} over a finite Galois covering $Y$ of $X$, ramified along $D$. I. Biswas in \cite{Biswas2} provided a correspondence between a parabolic vector bundle and an orbifold vector bundle, that is, a vector bundle on a variety equipped with a finite group action together with a lift of the action of the group to the bundle. In his work, I. Biswas explicitly constructs a class of parabolic bundles using the ``Covering Lemma'' of Y. Kawamata \cite{Kawa}; the correspondence depends on the choice of the parabolic weights, whereas the Galois covering $Y$ is constructed to have the same dimension as $X$. A similar correspondence without such restrictions was provided by I. Mundet i Riera in \cite{Rier}. In the Higgs bundle case, for a finite group $\Gamma$ acting as a group of automorphisms on a smooth projective variety $Y$, such that the quotient $X = Y/\Gamma$ is also a smooth variety, I.
Biswas, S. Majumder and M. Wong in \cite{BMW} provide a bijective correspondence between parabolic Higgs bundles on $X$ and $\Gamma$-Higgs bundles on $Y$. When the parabolic weights are rational, an equivalence between parabolic bundles and holomorphic bundles over $V$-surfaces (that is, 2-dimensional orbifolds) provides an effective method to study the moduli problem, developing a Yang-Mills-Higgs theory on Riemann $V$-surfaces and calculating the cohomology of the gauge group of a $V$-bundle. The correspondence between $V$-bundles and parabolic bundles was first studied by H. Boden in \cite{Boden} and by M. Furuta and B. Steer in \cite{FuSt} using similar methods. Subsequently, B. Nasatyr and B. Steer in \cite{NaSt} introduced Higgs $V$-bundles as a straightforward extension of the original approach of N. Hitchin to orbifold Riemann surfaces, studying solutions of the $\mathrm{U}(2)$ Yang-Mills-Higgs equations on orbifold Riemann surfaces and their reinterpretation as $\mathrm{SL}(2,\mathbb{C})$-representations of the orbifold fundamental group. Moreover, the Teichm\"uller space in the orbifold case was studied in \cite{NaSt}, as well as the topology of the moduli space of Higgs bundles in the orbifold situation for rank 2 bundles. Under the correspondence between parabolic Higgs bundles and Higgs $V$-bundles, we map a parabolic Higgs bundle over a Riemann surface $X$ with trivial filtration over each puncture ${{p}_{k}}$, for $1\le k\le s$, and weight either 0 or $\frac{1}{2}$ to a Higgs $V$-bundle over a $V$-manifold $M$ with $s$-many marked points around which the isotropy group is ${{\mathbb{Z}}_{2}}$, where $X$ is the underlying surface of $M$.
$V$-cohomology with coefficients in ${{\mathbb{Z}}_{2}}$ is now used to describe new topological invariants and thus compute the least number of connected components of moduli of maximal parabolic $G$-Higgs bundles for semisimple Lie groups $G$, when the homogeneous space ${G}/{H}$ is a Hermitian symmetric space of tube type, where $H\subset G$ is a maximal compact subgroup. Note here that maximality is provided by a general Milnor-Wood type inequality established in \cite{BiGaRi}. Calculations in orbifold cohomology provide the rank of the cohomology groups where the topological invariants of the corresponding Higgs $V$-bundles live as Stiefel-Whitney classes; we deduce the following theorems by counting all possible numbers of these invariants for parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles with maximal degree. \begin{thm*}{\textbf{\ref{709}}}, {\textbf{\ref{710}}}. Let $X$ be a smooth Riemann surface of genus $g$ and let $D$ be a reduced effective divisor of $s$ many points on $X$, such that $2g-2+s>0$. \begin{enumerate} \item The moduli space ${{\mathsf{\mathcal{M}}}_{par}^{max}}( \mathrm{Sp}\left( 2,\mathbb{R} \right))$ of maximal polystable parabolic $\mathrm{Sp}\left( 2,\mathbb{R} \right)$-Higgs bundles over the pair $(X, D)$ has at least $2^{2g+s-1}$ connected components. \item The moduli space $\mathcal{M}_{par}^{max}(\mathrm{Sp}(4,\mathbb{R}))$ of maximal polystable parabolic $\mathrm{Sp}\left( 4,\mathbb{R} \right)$-Higgs bundles over the pair $(X, D)$ has at least $(2^s+1)2^{2g+s-1}+2^s(2g-3+s)$ connected components. \item The moduli space ${{\mathsf{\mathcal{M}}}_{par}^{max}}\left( \mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ of maximal polystable parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundles over the pair $(X, D)$, for $n \ge 3$, has at least $(2^s+1)2^{2g+s-1}$ connected components. 
\end{enumerate} \end{thm*} We subsequently study the topological invariants for the moduli space when the parabolic structure $\alpha$ is fixed, but not necessarily involving all weights equal to $\frac{1}{2}$. We introduce the notation $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ to denote the moduli space of maximal polystable parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles with a given parabolic structure $\alpha$, which is fixed and is the same for all parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles in $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$; that is, they have the same filtration over each $x \in D$ with the same weight $\alpha(x)$, for every $x \in D$. Note that the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ is a subspace of $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$ considered earlier. In this case, the monodromy around the points in the divisor needs special attention. To be more precise, the $V$-fundamental group is described by \begin{align*} \pi_V^1(M)=\{a_1,b_1,...,a_g,b_g,\sigma_1,...,\sigma_s \quad | \quad \sigma_1...\sigma_s[a_1,b_1]...[a_g,b_g]=1,\sigma_i^2=1, \mathrm{ for } 1 \leq i \leq s\}, \end{align*} where $\sigma_i$ describes the monodromy around the point $x_i$. By the correspondence between line $V$-bundles and parabolic line bundles, the monodromy around $x_i$ corresponds to the weight of the corresponding parabolic line bundle over the point $x_i$. Thus fixing a parabolic structure $\alpha$ is equivalent to fixing the monodromy around $x_i$, for $1 \leq i \leq s$. However, not every parabolic structure corresponds to a well-defined element in $\mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$. Indeed, the relation $\sigma_1...\sigma_s[a_1,b_1]...[a_g,b_g]=1$ implies that the number of nontrivial $\sigma_i$ is even.
Equivalently, if the cardinality of the set $\{x \in D \mathrm{ } | \mathrm{ } \alpha(x)=\frac{1}{2}\}$ is even, then the parabolic structure corresponds to an element in $\mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$, and such a parabolic structure could be a choice for the square root of $K(D)^2$. Thus we say that the parabolic structure $\alpha$ is \emph{even} (resp. \emph{odd}) if the cardinality of the set $\{x \in D \mathrm{ } | \mathrm{ } \alpha(x)=\frac{1}{2}\}$ is even (resp. odd). Recall that the divisor $D$ consists of $s$ points. Our result, which includes this extra analysis, is the following. \begin{prop*}{\textnormal{\textbf{\ref{711}}}} Let $X$ be a smooth Riemann surface of genus $g$ and let $D$ be a reduced effective divisor of $s$ many points on $X$, such that $2g-2+s>0$. Consider the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ of maximal polystable parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles, where $\alpha$ is a given parabolic structure, fixed for all Higgs bundles in the moduli space; that is, the parabolic Higgs bundles have the same filtration over each $x \in D$ with the same weight $\alpha(x)$, for every $x \in D$. Then, \begin{enumerate} \item[$\mathrm{i}.$] If $\alpha$ is even, the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(4,\mathbb{R}))$ has at least $2^{2g+s-1}+(2g-3+s)+2^{2g}$ connected components. \item[$\mathrm{ii}.$] If $\alpha$ is odd, the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(4,\mathbb{R}))$ has at least $2^{2g+s-1}+(2g-3+s)$ connected components. \item[$\mathrm{iii}.$] If $\alpha$ is even, the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2,\mathbb{R}))$ has at least $2^{2g}$ connected components, and the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ has at least $2^{2g+s-1}+2^{2g}$ connected components.
\item[$\mathrm{iv}.$] If $\alpha$ is odd, there are no maximal polystable parabolic $\mathrm{Sp}(2,\mathbb{R})$-Higgs bundles with fixed parabolic structure $\alpha$, and the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ has at least $2^{2g+s-1}$ connected components. \end{enumerate} \end{prop*} Using the component count method above for the Lie group $G = \mathrm{Sp}\left( 4,\mathbb{R} \right)$ via the correspondence to orbifold Higgs bundles, we provide a minimum component count for moduli spaces of maximal polystable parabolic $G$-Higgs bundles, analogously to the non-parabolic case \cite{BrGPGHermitian}. Our results are summarized in Tables 1, 2, 3 appearing at the end of the main body of this article. The classical tool for providing an exact count of the number of connected components of the moduli spaces considered in Table 1 involves the analysis of a particular moment map on the moduli space, which is also a Morse-Bott function. For parabolic $G$-Higgs bundles with $G$ a real Lie group, this was pioneered in the dissertation of M. Logares \cite{Loga} in the case $G = \mathrm{U} (p, q)$, following analogous methods from non-parabolic cases. This problem for groups other than $\mathrm{U} (p, q)$ is addressed in our subsequent article \cite{KSZ2}, where we develop the relevant Morse theoretic machinery to show that for $\mathrm{Sp}(2n,\mathbb{R})$ the above numbers of components are in fact the exact ones (and the same follows for the rest of the groups appearing in Table 1). Therefore, we deduce that the topological invariants introduced in this article are fine enough to distinguish the connected components of polystable parabolic $G$-Higgs moduli spaces in the cases above.
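For a sense of scale, and purely as arithmetic on the lower bounds stated above, consider for instance a surface of genus $g=2$ with $s=1$ puncture. The bounds of Theorems \ref{709} and \ref{710} then read
\[{{2}^{2g+s-1}}={{2}^{4}}=16\]
connected components for $\mathrm{Sp}\left( 2,\mathbb{R} \right)$,
\[\left( {{2}^{s}}+1 \right){{2}^{2g+s-1}}+{{2}^{s}}\left( 2g-3+s \right)=3\cdot 16+2\cdot 2=52\]
for $\mathrm{Sp}\left( 4,\mathbb{R} \right)$, and $\left( {{2}^{s}}+1 \right){{2}^{2g+s-1}}=48$ for $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$ with $n\ge 3$.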
An alternative method for counting components of moduli spaces of (non-parabolic) $G$-Higgs bundles, especially for a split real form $G$, is by studying orbits of the monodromy group on the first $\mathrm{mod}\,2$ cohomology of the fibers of the Hitchin fibration; this was first described by L. Schaposnik in \cite{Schap} for the case $\mathrm{SL}(2,\mathbb{R})$, and the method was subsequently applied to non-split real cases as well (see \cite{BarSch1}, \cite{BarSch2}, \cite{HitSch}). These techniques in the parabolic $G$-Higgs setting are also developed in an ensuing article \cite{KSZ3}. It is interesting at this point to compare the results of Table 1 with the analogous results in the non-parabolic case from \cite{Stru}, \cite{BrGPGHermitian}, \cite{GaGMsymplectic}, \cite{Gothen} and \cite{Hit92}, thus providing further applications of our study of the topological invariants for parabolic $G$-Higgs bundles, as well as of the methods developed for finding those. In \cite{Stru}, T. Strubel, using Fenchel-Nielsen coordinates, showed that the moduli space ${{\mathsf{\mathcal{R}}}^{\max }}\left( {{\Sigma }_{g,m}},\mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ of maximal representations of the fundamental group of a topological surface ${{\Sigma }_{g,m}}$ of genus $g$ and $m\ge 1$ boundary components into $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$ has exactly ${{2}^{2g+m-1}}$ connected components for every $n\ge 1$. We explain how one can use the method involving the $V$-manifold correspondence to obtain an alternative description of T. Strubel's result. Furthermore, we exhibit maximal non-parabolic $G$-Higgs bundles as $V$-bundles equipped with a trivial action, an interpretation which leads to an explanation of the component counts established by S. Bradlow, O. Garc\'{i}a-Prada, P. Gothen and I. Mundet i Riera, which are summarized in \cite{BrGPGHermitian}, as special cases of our parabolic component count when there is only one puncture considered.
This article involves the study of topological properties of moduli of polystable $G$-Higgs bundles equipped with a parabolic structure, building on the general parabolic $G$-Higgs definitions from \cite{BiGaRi}. Section 2 contains the basic definitions for parabolic $\mathrm{GL}\left(n,\mathbb{C} \right)$-Higgs bundles, while the more general definitions for any semisimple Lie group $G$ are placed in an Appendix. In this Appendix we also work out particular examples to demonstrate the relation between the general stability condition for any group $G$ and the classical one for $\mathrm{GL}\left(n,\mathbb{C}\right)$. In Section 3 we adapt the deformation theory from the non-parabolic case to provide a calculation of the expected dimension of the moduli space of polystable parabolic $G$-Higgs bundles. Section 4 contains the construction of parabolic Teichm\"{u}ller components in full generality for any split real form $G$ of a complex simple Lie group; a proof of the parabolic Higgs analog of N. Hitchin's main theorem in \cite{Hit92} is presented here and is a generalization of the argument in \cite{Biswas3}. From Section 5 on, we deal with the cases when the group $G$ is Hermitian symmetric. After introducing the parabolic Toledo invariant for parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundles, we prove a Milnor-Wood type inequality, which allows one to introduce the notion of maximality. Section 6 is again preparatory and contains no new results; we review here the correspondence to orbifold Higgs bundles in a way that fits our needs. In Section 7 we introduce the topological invariants induced by this correspondence and calculate the total number of invariants in the case $G = \mathrm{Sp}\left( 2n,\mathbb{R} \right)$. In Section 8 we continue applying this component count method for the remaining Hermitian symmetric Lie groups of tube type.
Lastly, Section 9 contains a realization of classical results from the non-parabolic case formulated in terms of our $V$-bundle correspondence method. \begin{ntn*} Throughout the article, we will be making use of the following notation for the corresponding moduli spaces we shall be considering: \begin{itemize} \item $\mathcal{M}_{par}^\alpha(G)$: moduli space of polystable parabolic $G$-Higgs bundles with fixed parabolic structure $\alpha$; all parabolic bundles have the same filtration over each $x \in D$ with the same weight $\alpha(x)$, for every $x \in D$. \item $\mathcal{M}_{par}^{\mathbf{n}}(G)=\cup_{\alpha\in\frac{1}{n}\mathfrak{t}}\mathcal{M}_{par}^{\alpha}(G)$: the points in $\mathcal{M}_{par}^\alpha(G)$ for $\alpha\in\frac{1}{n}\mathfrak{t}$, where $\mathfrak{t}$ is the Lie algebra of a maximal torus of $H$ which is a maximal compact subgroup of $G$. \item $\mathcal{M}_{par}(G)$: the moduli space $\mathcal{M}_{par}^{\mathbf{2}}(G)$. \item $\mathcal{M}_{par}^{max, \alpha}(G)$: the points in $\mathcal{M}_{par}^\alpha(G)$ with maximal parabolic Toledo invariant. \item $\mathcal{M}_{par}^{max}(G)$: the points in $\mathcal{M}_{par}^{\mathbf{2}}(G)$ with maximal parabolic Toledo invariant. \end{itemize} \end{ntn*} \section{Definitions} In this preliminary section, we review the basic definitions for parabolic $\mathrm{GL}\left( n,\mathbb{C} \right)$-Higgs bundles; further details may be found in \cite{Biswas1}, \cite{BoYo}, or \cite{GaGoMu}. Parabolic $G$-Higgs bundles for a non-compact real reductive group $G$ were introduced in \cite{BiGaRi}, where a Hitchin-Kobayashi correspondence was also established. We have included these more general definitions in an Appendix at the end of this article, where we exhibit the cases $G = \mathrm{GL}(n,\mathbb{C})$ and $G = \mathrm{Sp}(2n,\mathbb{R})$ as well as the stability condition in detail. 
\begin{defn}\label{201} Let $X$ be a closed, connected, smooth Riemann surface of genus $g\ge 2$ and $D=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ a divisor of $s$-many distinct points on $X$; denote this pair by $\left( X,D \right)$. A \emph{parabolic vector bundle} $E$ over $\left( X,D \right)$ is a holomorphic vector bundle $E\to X$ with a \emph{parabolic structure} at each $x\in D$ (a \emph{weighted flag} on each fiber ${{E}_{x}}$): \[\begin{matrix} {{E}_{x}}={{E}_{x,1}}\supset {{E}_{x,2}}\supset \ldots \supset {{E}_{x,r\left( x \right)+1}}=\left\{ 0 \right\} \\ 0\le {{\alpha }_{1}}\left( x \right)<\ldots <{{\alpha }_{r\left( x \right)}}\left( x \right)<1. \\ \end{matrix}\] \end{defn} We usually write $\left( E, \alpha \right)$ to denote a vector bundle equipped with a parabolic structure determined by a system of weights $\alpha \left( x \right)=\left( {{\alpha }_{1}}\left( x \right),\ldots ,{{\alpha }_{n}}\left( x \right) \right)$ at each $x\in D$. Moreover, let ${{k}_{i}}\left( x \right)=\dim\left( {{{E}_{x,i}}}/{{{E}_{x,i+1}}}\; \right)$ be the \emph{multiplicity} of the weight ${{\alpha }_{i}}\left( x \right)$. We can also write the weights repeated according to their multiplicity as \[0\le {{\tilde{\alpha }}_{1}}\left( x \right)\le \ldots \le {{\tilde{\alpha }}_{n}}\left( x \right)<1,\] where now $n=\mathrm{rk}E$. A weighted flag shall be called \emph{full}, if ${{k}_{i}}\left( x \right)=1$ for every $i$ and $x\in D$. Given a pair of parabolic vector bundles, the basic constructions for a parabolic subbundle, direct sum, dual and tensor product have been described in \cite{Biswas1} and \cite{GaGoMu}; we will be making frequent use of these constructions.
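As a simple illustration of this notation, consider (hypothetically) a rank 3 parabolic bundle with flag at some $x\in D$ given by ${{E}_{x}}={{E}_{x,1}}\supset {{E}_{x,2}}\supset \left\{ 0 \right\}$ with $\dim {{E}_{x,2}}=1$, and weights ${{\alpha }_{1}}\left( x \right)=0$, ${{\alpha }_{2}}\left( x \right)=\frac{1}{2}$. Then $r\left( x \right)=2$, the multiplicities are ${{k}_{1}}\left( x \right)=2$ and ${{k}_{2}}\left( x \right)=1$, and the weights repeated according to multiplicity read ${{\tilde{\alpha }}_{1}}\left( x \right)={{\tilde{\alpha }}_{2}}\left( x \right)=0$, ${{\tilde{\alpha }}_{3}}\left( x \right)=\frac{1}{2}$; this flag is not full, since ${{k}_{1}}\left( x \right)=2$.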
\begin{defn}\label{202} A holomorphic map $f:E\to {E}'$ of parabolic vector bundles $\left( E, \alpha \right),\left( {E}', \alpha^{\prime } \right)$ is called \emph{parabolic} if ${{\alpha }_{i}}\left( x \right)>{{{\alpha }'}_{j}}\left( x \right)$ implies $f\left( {{E}_{x,i}} \right)\subset {{{E}'}_{x,j+1}}$, for every $x\in D$.\\ Furthermore, we call such a map \emph{strongly parabolic} if ${{\alpha }_{i}}\left( x \right)\ge {{{\alpha }'}_{j}}\left( x \right)$ implies $f\left( {{E}_{x,i}} \right)\subset {{{E}'}_{x,j+1}}$, for every $x\in D$.\\ \end{defn} \begin{defn}\label{203} The notions of \emph{parabolic degree} and \emph{parabolic slope} of a vector bundle equipped with a parabolic structure are defined as follows: \[\operatorname{pardeg} \left( E \right)=\deg E+\sum\limits_{x\in D}{\sum\limits_{i=1}^{r\left( x \right)}{{{k}_{i}}\left( x \right){{\alpha }_{i}}\left( x \right)}}\] \[\mathrm{par}\mu \left( E \right)=\frac{\mathrm{pardeg}\left( E \right)}{\mathrm{rk}\left( E \right)}\] \end{defn} \begin{defn}\label{204} A parabolic vector bundle will be called \emph{stable} (resp. \emph{semistable}), if for every non-trivial proper parabolic subbundle $F\le E$ one has $\mathrm{par}\mu \left( F \right)<\mathrm{par}\mu \left( E \right)$ (resp. $\le $). \end{defn} \begin{defn}\label{205} Let $K=\Omega^{1}_{X} $ be the canonical bundle over $X$ and $E$ a parabolic vector bundle. Let $D=\{x_1,\ldots,x_s\}$.
The bundle morphism $\Phi :E\to E\otimes K\left( D \right)$ will be called a \emph{parabolic Higgs field}, if it preserves the parabolic structure at each point $x\in D$: \[\Phi \left| _{x}\left( {{E}_{x,i}} \right) \right.\subset {{E}_{x,i}}\otimes K\left( D \right)\left| _{x} \right.\] In particular, we call the Higgs field $\Phi$ \emph{strongly parabolic}, if \[\Phi \left| _{x}\left( {{E}_{x,i}} \right) \right.\subset {{E}_{x,i+1}}\otimes K\left( D \right)\left| _{x} \right.,\] in other words, $\Phi$ is a meromorphic endomorphism-valued 1-form with at most simple poles along the divisor $D$, whose residue at $x\in D$ is nilpotent with respect to the filtration. Note that the divisor $D$ is always considered to be an effective divisor, and since $K\left( D \right)$ does not carry a weighted filtration, the parabolic structure on $E\otimes K\left( D \right)$ is induced only by the one on $E$. \end{defn} After these considerations we define parabolic Higgs bundles as follows. \begin{defn}\label{206} Let $K$ be the canonical bundle over $X$ and $E$ be a parabolic vector bundle over $X$. A \emph{parabolic Higgs bundle} over $\left( X,D \right)$ is given by a pair $\left( E,\Phi \right)$, where $\Phi :E\to E\otimes K\left( D \right)$ is a parabolic Higgs field. \end{defn} Analogously to the non-parabolic case, we may define stability as follows. \begin{defn}\label{207} A parabolic Higgs bundle will be called \emph{stable} (resp. \emph{semistable}), if for every $\Phi $-invariant parabolic subbundle $F\le E$ one has $\mathrm{par}\mu \left( F \right)<\mathrm{par}\mu \left( E \right)$ (resp. $\le $). Furthermore, it will be called \emph{polystable}, if it is the direct sum of stable parabolic Higgs bundles of the same parabolic slope. \end{defn} In \cite{Yoko1} and \cite{Yoko2} K.
Yokogawa has constructed the \emph{moduli space of semistable $K\left( D \right)$-pairs} ${{\mathsf{\mathcal{P}}}_{\alpha }}$, that is, pairs $\left( E,\Phi \right)$ with $\Phi$ parabolic, using geometric invariant theory and has shown that it is a normal quasi-projective variety, smooth at the stable points, of dimension \begin{align}\dim{{\mathsf{\mathcal{P}}}_{\alpha }}=\left( 2g-2+s \right){{n}^{2}}+1,\label{paradim}\end{align} for fixed $n=\mathrm{rk}E$, $d=\mathrm{pardeg} (E)$ and weight type $\alpha $. Moreover, in \cite{Konn} H. Konno constructed the \emph{moduli space of stable parabolic Higgs bundles} ${{\mathsf{\mathcal{N}}}_{\alpha }}$ as a hyperk{\"a}hler quotient. It is contained in ${{\mathsf{\mathcal{P}}}_{\alpha }}$ as a closed subvariety of dimension \begin{align}\dim{{\mathsf{\mathcal{N}}}_{\alpha }}=2\left( g-1 \right){{n}^{2}}+2+2\sum\limits_{x\in D}{{{f}_{x}}},\label{sparadim}\end{align} where ${{f}_{x}}=\frac{1}{2}\left( {{n}^{2}}-\sum\limits_{i=1}^{r\left( x \right)}{{{\left( {{k}_{i}}\left( x \right) \right)}^{2}}} \right)$ is the dimension of the associated flag variety. Parabolic $G$-Higgs bundles for a real reductive Lie group $G$ were first introduced in full generality in \cite{BiGaRi}; in the Appendix we include a brief review of these general definitions along with a detailed description of some examples. In particular, we check that the general definition of a parabolic $G$-Higgs bundle along with the (poly)stability condition in the case $G = \mathrm{GL}\left(n,\mathbb{C} \right)$ coincides with the definition included in this preliminary section. For a given parabolic structure, which we shall still denote by $\alpha$, we define $\mathcal{M}_{par}^\alpha(G)$ to be the \emph{moduli space of polystable parabolic $G$-Higgs bundles with fixed parabolic structure $\alpha$}.
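(As a consistency check of the dimension formulas (\ref{paradim}) and (\ref{sparadim}) in a simple case: for $n=2$ and full flags at each of the $s$ points of $D$, one has ${{f}_{x}}=\frac{1}{2}\left( 4-1-1 \right)=1$, so that $\dim{{\mathsf{\mathcal{N}}}_{\alpha }}=8\left( g-1 \right)+2+2s$, while $\dim{{\mathsf{\mathcal{P}}}_{\alpha }}=4\left( 2g-2+s \right)+1=8\left( g-1 \right)+4s+1$; in particular $\dim{{\mathsf{\mathcal{N}}}_{\alpha }}\le \dim{{\mathsf{\mathcal{P}}}_{\alpha }}$ for every $s\ge 1$, in accordance with ${{\mathsf{\mathcal{N}}}_{\alpha }}$ being contained in ${{\mathsf{\mathcal{P}}}_{\alpha }}$ as a closed subvariety.)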
In particular, when $\alpha\in\frac{1}{n}\mathfrak{t}$ with $\mathfrak{t}$ the Lie algebra of a maximal torus of $H$ which is a maximal compact subgroup of $G$, we denote $\mathcal{M}_{par}^{\mathbf{n}}(G)=\cup_{\alpha\in\frac{1}{n}\mathfrak{t}}\mathcal{M}_{par}^{\alpha}(G)$. On the other hand, the standard notation $\mathcal{M}_{par}(G)$ will throughout this article denote the moduli space in the special case when $n=2$, that is, of polystable parabolic $G$-Higgs bundles with fixed parabolic structure $\alpha\in\frac{1}{2}\mathfrak{t}$. \begin{rem} The moduli spaces ${{\mathsf{\mathcal{P}}}_{\alpha }}$ and ${{\mathsf{\mathcal{N}}}_{\alpha }}$ considered above may be viewed as particular subspaces of $\mathcal{M}_{par}^\alpha(\mathrm{GL}\left(n,\mathbb{C} \right))$. \end{rem} \section{Deformation theory} The deformation theory for parabolic $K(D)$-pairs was studied by K. Yokogawa in \cite{Yoko2}. We now adapt results from that article to the case of parabolic $G$-Higgs bundles for $G$ semisimple, analogously to the non-parabolic case studied in \S 3.3 of \cite{GaGoRi}.
For a semisimple Lie group $G$, let $H\subset G$ be a maximal compact subgroup and let $\mathfrak{g}=\mathfrak{h}\oplus \mathfrak{m}$ be a Cartan decomposition so that the Lie algebra structure of $\mathfrak{g}$ satisfies \[\left[ \mathfrak{h},\mathfrak{h} \right]\subset \mathfrak{h},\quad \left[ \mathfrak{h},\mathfrak{m} \right]\subset \mathfrak{m},\quad \left[ \mathfrak{m},\mathfrak{m} \right]\subset \mathfrak{h}.\] Let ${{\mathfrak{g}}^{\mathbb{C}}}={{\mathfrak{h}}^{\mathbb{C}}}\oplus {{\mathfrak{m}}^{\mathbb{C}}}$ be the complexification of the Cartan decomposition and consider the sheaves $PE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$ of \emph{parabolic sections of $E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$} and $NE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$ of \emph{strongly parabolic sections of $E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$}; for the full definitions, see the Appendix at the end of this article. \begin{defn}\label{301} Let $\left( E,\varphi \right)$ be a parabolic $G$-Higgs bundle over $\left( X,D \right)$.
The \emph{deformation complex} of the \emph{parabolic $G$-Higgs bundle} $\left( E,\varphi \right)$ is the following complex of sheaves \[{{C}_P^{\bullet }}\left(G, E,\varphi \right):PE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right)\xrightarrow{[-,\varphi]} PE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes K\left( D \right),\] where $[-,\varphi] :{{\mathfrak{h}}^{\mathbb{C}}}\to {{\mathfrak{m}}^{\mathbb{C}}}$ is the commutator map $\psi \mapsto \psi\varphi-\varphi\psi$.\\ Assuming now that $\left( E,\varphi \right)$ is strongly parabolic, the \emph{deformation complex} of the \emph{strongly parabolic $G$-Higgs bundle} $\left( E,\varphi \right)$ is the complex of sheaves \[{{C}_N^{\bullet }}\left(G, E,\varphi \right):PE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right)\xrightarrow{[-,\varphi]} NE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes K\left( D \right).\] \end{defn} The above definition makes sense since, for instance, in the parabolic case $\varphi $ is a meromorphic section of $PE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes K\left( D \right)$ and $\left[ {\mathfrak{m}_\alpha^{\mathbb{C}}},{\mathfrak{h}_\alpha^{\mathbb{C}}} \right]\subseteq {\mathfrak{m}_\alpha^{\mathbb{C}}}$ for any $\alpha\in \sqrt{-1}\bar{\mathcal{A}}$; we refer to the definitions in the Appendix for details. An analogous statement holds in the strongly parabolic Higgs bundle case. Whenever there is no ambiguity, we shall use the notation ${{C}_i^{\bullet }}\left( E,\varphi \right)$ for the deformation complex, where $i=P,N$. \begin{prop}\label{302} The space of infinitesimal deformations of a parabolic $G$-Higgs bundle $\left( E,\varphi \right)$ is naturally isomorphic to the hypercohomology group ${{\mathbb{H}}^{1}}\left( {{C}_P^{\bullet }}\left( E,\varphi \right) \right)$.
Analogously, the space of infinitesimal deformations of a strongly parabolic $G$-Higgs bundle $\left( E,\varphi \right)$ is naturally isomorphic to the hypercohomology group ${{\mathbb{H}}^{1}}\left( {{C}_N^{\bullet }}\left( E,\varphi \right) \right)$. \end{prop} \begin{proof} The proof follows exactly the same arguments as the proof of statement (3.2) from \cite{Thad} of M. Thaddeus, who described the infinitesimal deformations of parabolic Higgs bundles. \end{proof} For any parabolic $G$-Higgs bundle $\left( E,\varphi \right)$ there is a natural long exact sequence \[0\to {{\mathbb{H}}^{0}}\left( {{C}_P^{\bullet }}\left( E,\varphi \right) \right)\to {{H}^{0}}\left( PE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right) \right)\xrightarrow{[-,\varphi]}{{H}^{0}}\left( PE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes K\left( D \right) \right)\] \[\to {{\mathbb{H}}^{1}}\left( {{C}_P^{\bullet }}\left( E,\varphi \right) \right)\to {{H}^{1}}\left( PE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right) \right)\xrightarrow{[-,\varphi]}{{H}^{1}}\left( PE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes K\left( D \right) \right)\to {{\mathbb{H}}^{2}}\left( {{C}_P^{\bullet }}\left( E,\varphi \right) \right)\to 0.\] The Serre duality theorem for parabolic sheaves (Proposition 3.7 in \cite{Yoko2}) provides that there are natural isomorphisms, for $j=P,N$ and $i=0,1,2$: \[{{\mathbb{H}}^{i}}\left( {{C}_j^{\bullet }}\left( E,\varphi \right) \right)\cong {{\mathbb{H}}^{2-i}}{{\left( {{C}_j^{\bullet }}{{\left( E,\varphi \right)}^{*}}\otimes K\left( D \right) \right)}^{*}},\] where the dual of the deformation complex ${{C}_j^{\bullet }}\left( E,\varphi \right)$ is defined as \[{{C}_P^{\bullet }}{{\left( E,\varphi \right)}^{*}}:NE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes {{\left( K\left( D \right) \right)}^{-1}}\xrightarrow{[-,\varphi]}NE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right),\] while the dual of the deformation complex for strongly parabolic pairs is respectively \[{{C}_N^{\bullet
}}{{\left( E,\varphi \right)}^{*}}:PE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\otimes {{\left( K\left( D \right) \right)}^{-1}}\xrightarrow{[-,\varphi]}NE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right).\] An important special case is when $G$ is a complex group: \begin{prop}\label{303} Assume that $G$ is a complex semisimple Lie group. Then there is a natural isomorphism \[{{\mathbb{H}}^{2}}\left( {{C}_N^{\bullet }}\left( E,\varphi \right) \right)\cong {{\mathbb{H}}^{0}}{{\left( {{C}_N^{\bullet }}\left( E,\varphi \right) \right)}^{*}}.\] \end{prop} \begin{proof}When $G$ is complex, $\mathrm{ad}\left( \varphi \right):\mathfrak{g}\to \mathfrak{g}$, and the Cartan decomposition of $\mathfrak{g}$ is $\mathfrak{g}=\mathfrak{u}+i\mathfrak{u}$, where $\mathfrak{u}=\mathrm{Lie}\left( U \right)$ for $U\subset G$ a maximal compact subgroup. Thus, in this case $\varphi \in NE\left( \mathfrak{g} \right)\otimes K\left( D \right)$. Moreover, for a complex group $G$ the deformation complex is dual to itself, except for a sign in the map, which does not affect cohomology: \[{{C}^{\bullet }_N} {{\left( E,\varphi \right)}^{*}}\otimes K\left( D \right):PE\left( \mathfrak{g} \right)\xrightarrow{-\mathrm{ad}\left( \varphi \right)}NE\left( \mathfrak{g} \right)\otimes K\left( D \right).\] The result now follows from Serre duality. \end{proof} The proof of the next proposition is immediate, since $NE\left( {{\mathfrak{h}}^{\mathbb{C}}} \right)\oplus NE\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)=NE\left( {{\mathfrak{g}}^{\mathbb{C}}} \right)$, given the Cartan decomposition ${{\mathfrak{g}}^{\mathbb{C}}}={{\mathfrak{h}}^{\mathbb{C}}}\oplus {{\mathfrak{m}}^{\mathbb{C}}}$. The Corollary that follows is also immediate from Serre duality: \begin{prop}\label{304} Let $G$ be a real semisimple group and let ${{G}^{\mathbb{C}}}$ be its complexification. Let $\left( E,\varphi \right)$ be a strongly parabolic $G$-Higgs bundle.
Then there is an isomorphism of complexes \[C_{N}^{\bullet }({G}^{\mathbb{C}}, E,\varphi )\cong C_{N}^{\bullet }\left(G, E,\varphi \right)\oplus C_{N}^{\bullet }{{\left(G, E,\varphi \right)}^{*}}\otimes K\left( D \right),\] where $C_{N}^{\bullet }\left({G}^{\mathbb{C}}, E,\varphi \right)$ denotes the deformation complex of $\left( E,\varphi \right)$ viewed as a strongly parabolic ${{G}^{\mathbb{C}}}$-Higgs bundle, while $C_{N}^{\bullet }\left(G, E,\varphi \right)$ denotes the deformation complex of $\left( E,\varphi \right)$ viewed as a strongly parabolic $G$-Higgs bundle. \end{prop} \begin{cor}\label{305} With the same hypotheses as in the previous Proposition, there is an isomorphism \[{{\mathbb{H}}^{0}}\left( C_{N}^{\bullet }({G}^{\mathbb{C}}, E,\varphi ) \right)\cong {{\mathbb{H}}^{0}}\left( C_{N}^{\bullet }\left( G,E,\varphi \right) \right)\oplus {{\mathbb{H}}^{2}}{{\left( C_{N}^{\bullet }\left(G, E,\varphi \right) \right)}^{*}}.\] \end{cor} Consider now, for a semisimple Lie group $G$, a stable and simple parabolic $G$-Higgs bundle $\left( E,\varphi \right)$. As in the non-parabolic case \cite{GaGoRi}, if a (local) universal family exists, then the dimension of the component of the moduli space containing the pair $\left( E,\varphi \right)$ is equal to the dimension of the infinitesimal deformation space ${{\mathbb{H}}^{1}}\left( {{C}^{\bullet }}\left( E,\varphi \right) \right)$; this dimension is referred to as the \emph{expected dimension} of the moduli space. In this situation, ${{\mathbb{H}}^{0}}\left( C_{i}^{\bullet }\left({G}^{\mathbb{C}}, E,\varphi \right) \right)=0$ and so \[{{\mathbb{H}}^{0}}\left( C^{\bullet }_i \left(G, E,\varphi \right) \right)=0={{\mathbb{H}}^{2}}\left( C^{\bullet }_i\left(G, E,\varphi \right) \right),\] for $G$ semisimple and $i=P,N$.
The long exact sequence then provides that \[\dim{{\mathbb{H}}^{1}}\left( C^{\bullet }_i \left( E,\varphi \right) \right)=-\chi \left( C^{\bullet }_i \left( E,\varphi \right) \right),\] where for simplicity we keep the same notation $C^{\bullet }_i \left( E,\varphi \right)$ for the complex of sheaves for the group $G$ and $i=P,N$. The expected dimension of the moduli space of polystable (strongly) parabolic $G$-Higgs bundles can be calculated using the Hirzebruch-Riemann-Roch formula and is independent of the choice of $\left( E,\varphi \right)$: \begin{prop}\label{306} For a semisimple Lie group $G$, the moduli space ${{\mathsf{\mathcal{M}}}^{\alpha}_{par}}\left( G\right)$ of polystable parabolic $G$-Higgs bundles with parabolic structure $\alpha$ and $\mathbb{H}^2(C_P^\bullet(E,\varphi))=0$ for generic $(E,\varphi)$ has expected dimension \begin{align} -\chi(C_P^\bullet(E,\varphi)) &= (g-1)\dim G^{\mathbb{C}}+ s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)\nonumber\\ &+ \sum_{i} \left\{\dim\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^{-}_{\alpha_i}\right) - \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^{-}_{\alpha_i}\right) \right\},\end{align} where $g$ is the genus of the Riemann surface $X$ and $s$ is the number of points in $D$. Here $\mathfrak{h}^{-}_{\alpha_i}$, $\mathfrak{m}^{-}_{\alpha_i}$ denote the parts of $\mathfrak{h}$, $\mathfrak{m}$ that are bounded by the $P_{\alpha_i}$-action, as in Definition \ref{bs-}.
Moreover, the moduli space ${{\mathsf{\mathcal{M}}}^{\alpha}_{spar}}\left( G\right)$ of polystable strongly parabolic $G$-Higgs bundles with parabolic structure $\alpha$ has expected dimension \begin{align} -\chi(C_N^\bullet(E,\varphi))& = (g-1)\dim G^{\mathbb{C}}+ s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)\nonumber \\ &+ \sum_{i} \left\{\dim\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^{-}_{\alpha_i}\right) - \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^{<0}_{\alpha_i}\right) \right\},\end{align} where $\mathfrak{m}^{<0}_{\alpha_i}$ is defined as in Definition \ref{bsls}. \end{prop} \noindent Note that the calculation for the expected dimension is based on a Riemann-Roch theorem argument, and thus we always mean here the complex dimension. \begin{proof} Let $\left( E,\varphi \right)$ be any stable parabolic $G$-Higgs bundle. By definition we have \[\chi(C_P^\bullet(E,\varphi))=\chi\left(PE\left(\mathfrak{h}^{\mathbb{C}}\right)\right)-\chi\left(PE\left(\mathfrak{m}^{\mathbb{C}}\right)\otimes K(D)\right).\] The short exact sequence (\ref{PEses}) provides that \[\chi\left(PE\left(\mathfrak{h}^{\mathbb{C}}\right)\right)=\chi\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)\right)-\sum_{i} \chi\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^{-}_{\alpha_i}\right).\] On the other hand, we have \[\chi\left(PE\left(\mathfrak{m}^{\mathbb{C}}\right)\otimes K(D)\right)=\chi\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\otimes K(D)\right)-\sum_{i} \chi\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^-_{\alpha_i}\right).\] The Killing form now induces the isomorphisms $E\left(\mathfrak{m}^{\mathbb{C}}\right)\cong E\left(\mathfrak{m}^{\mathbb{C}}\right)^*$ and $E\left(\mathfrak{h}^{\mathbb{C}}\right)\cong E\left(\mathfrak{h}^{\mathbb{C}}\right)^*$, and hence $\deg E\left(\mathfrak{m}^{\mathbb{C}}\right)=\deg E\left(\mathfrak{h}^{\mathbb{C}}\right)=0$.
Also, since $E\left(\mathfrak{m}^{\mathbb{C}}\right)$ and $E\left(\mathfrak{h}^{\mathbb{C}}\right)$ are vector bundles, one has by Riemann-Roch that \begin{align*} \chi\left(PE\left(\mathfrak{h}^{\mathbb{C}}\right)\right)&=\deg\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)\right)+\mathrm{rk}\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)\right)(1-g)-\sum_{i} \chi\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^-_{\alpha_i}\right) \\ &=\mathrm{rk}\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)\right)(1-g)-\sum_{i} \dim\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^-_{\alpha_i}\right), \end{align*} as well as \begin{align*} &\chi\left(PE\left(\mathfrak{m}^{\mathbb{C}}\right)\otimes K(D)\right)\\&= \deg\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\otimes K(D)\right)+ \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)(1-g) -\sum_{i} \chi\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^-_{\alpha_i}\right) \\&= \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)(2g-2+s)+ \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)(1-g) -\sum_{i} \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^-_{\alpha_i}\right) \\&= \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)(g-1) + \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)\cdot s - \sum_{i} \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^-_{\alpha_i}\right). \end{align*} In conclusion we get \begin{align} -\chi(C_P^\bullet(E,\varphi)) &= (g-1)\dim G^{\mathbb{C}}+ s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right) \nonumber \\ & + \sum_{i} \left\{\dim\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^-_{\alpha_i}\right) - \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^-_{\alpha_i}\right) \right\}. 
\end{align} Similarly, for strongly parabolic Higgs bundles we only need to change from $PE\left(\mathfrak{m}^\mathbb{C}\right)$ to $NE\left(\mathfrak{m}^\mathbb{C}\right)$. Thus, we get \begin{align} -\chi(C_N^\bullet(E,\varphi)) =& (g-1)\dim G^{\mathbb{C}}+ s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right) \nonumber \\+ & \sum_{i} \left\{\dim\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^{-}_{\alpha_i}\right) - \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^{<0}_{\alpha_i}\right) \right\}. \end{align} The proof of the proposition is complete. \end{proof} In the case when $G$ is a complex Lie group, $\mathfrak{m}\cong \mathfrak{h}$, and so the last term in the summation for the parabolic deformation complex cancels out. We thus obtain the following: \begin{cor}\label{311} For a complex semisimple Lie group $G$, the moduli space of parabolic $G$-Higgs bundles is a smooth complex variety with expected dimension \[2(g-1)\dim_{\mathbb{C}} G + s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right).\] \end{cor} \noindent Notice that this calculation gives real dimension \[4(g-1)\dim_{\mathbb{C}} G + 2s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right)=2(g-1)\dim_{\mathbb{R}} G + 2s\cdot \mathrm{rk}\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)\right).\] The case of strongly parabolic Higgs bundles will depend on the choice of parabolic structure. We can deduce part of the classical result for $G=\mathrm{GL}(n,\mathbb{C})$ in the following example: \begin{exmp} Let $G=\mathrm{GL}(n,\mathbb{C})$. As in the previous calculation, we compute the Euler characteristic \[\chi(C_P^\bullet(E,\varphi)) =-n^2(2g-2+s).\] This does not directly provide the dimension of the moduli space, since when $G=\mathrm{GL}(n,\mathbb{C})$, we will have nonzero $\mathbb{H}^0(C_P^\bullet(E,\varphi))$, as $G$ is not semisimple.
Indeed, $\dim \mathbb{H}^0(C_P^\bullet(E,\varphi))=1$, because there is an automorphism given by the identity. We expect $\mathbb{H}^2(C_P^\bullet(E,\varphi))=0$ for a smooth point in the moduli space, and then the expected dimension is \[\dim \mathcal{M}_{par}^\alpha(\mathrm{GL}(n,\mathbb{C})) = n^2(2g-2+s)+1.\] This calculation coincides with formula \ref{paradim}, which is also the result of Theorem 5.2 in \cite{Yoko2}.\\ In the case of strongly parabolic Higgs bundles, for a flag with grading such that $k_j(x_i)=\dim E_{x_i,j}-\dim E_{x_i,j+1}$ as in Example \ref{Gln}, we get \[\dim\left(E\left(\mathfrak{h}^{\mathbb{C}}\right)_{x_i}/\mathfrak{h}^{-}_{\alpha_i}\right) - \dim\left(E\left(\mathfrak{m}^{\mathbb{C}}\right)_{x_i}/\mathfrak{m}^{<0}_{\alpha_i}\right) =-\sum_{j}k_j(x_i)^2.\] Therefore, the Euler characteristic for the strongly parabolic Higgs bundle deformation complex will be \[-\chi(C_N^\bullet(E,\varphi)) =n^2(2g-2)+\sum_{i} \left(n^2-\sum_{j} k_{j}(x_i)^2 \right).\] In this situation, we have Serre duality as in Corollary \ref{303}, which provides that \[\dim \mathbb{H}^0(C_N^\bullet(E,\varphi))=\dim \mathbb{H}^2(C_N^\bullet(E,\varphi))=1.\] Thus, one has \[\dim {\mathsf{\mathcal{M}}}^{\alpha}_{spar}(\mathrm{GL}(n,\mathbb{C})) =n^2(2g-2)+\sum_{i} \left(n^2-\sum_{j} k_{j}(x_i)^2 \right) + 2.\] In the case of a generic flag, which means that $k_{j}(x_i)=1$ for all possible $i,j$, the expected dimension is \[\dim {\mathsf{\mathcal{M}}}^{\alpha}_{spar}(\mathrm{GL}(n,\mathbb{C})) =n^2(2g-2)+s \cdot n(n-1)+2.\] This is in accordance with formula \ref{sparadim}, elaborated also in Proposition 2.4 in \cite{GaGoMu}. In a more general situation where we do not assume semisimplicity, we would be able to compute the dimension of $\mathbb{H}^0$ from the dimension of the center of $H^{\mathbb{C}}$ and derive a formula similar to Theorem II of \cite{Bhosle}, but we will not discuss this here.
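As a quick sanity check, setting $s=0$ in the last formula recovers the classical complex dimension of the moduli space of (non-parabolic) rank $n$ Higgs bundles for $\mathrm{GL}(n,\mathbb{C})$, since the contribution of the parabolic points vanishes: \[\dim \mathsf{\mathcal{M}}(\mathrm{GL}(n,\mathbb{C})) =n^2(2g-2)+2=2n^2(g-1)+2.\]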
\end{exmp} \begin{rem}\label{307} Notice that when the number of punctures $s$ is zero, this dimension count coincides with the dimension count in Proposition 3.19 of \cite{GaGoRi} in the non-parabolic case, since there is no contribution from parabolic points. \end{rem} \section{Parabolic Teichm\"{u}ller components} In his seminal article \cite{Hit92}, N. Hitchin demonstrated the existence of topologically trivial connected components, which he called \emph{Teichm\"{u}ller components}, in the moduli space $\mathrm{Ho}{{\mathrm{m}}^{+}}\left( {{\pi }_{1}}\left( \Sigma \right),G \right)$ of reductive fundamental group representations into the adjoint group $G$ of the split real form of a complex simple Lie group ${{G}^{\mathbb{C}}}$, for a compact oriented surface $\Sigma $ of genus $g\ge 2$. Recall that the split real forms of the classical groups are the groups $\mathrm{SL}\left( n,\mathbb{R} \right)$, $\mathrm{SO}\left( n+1,n \right)$, $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$ and $\mathrm{SO}\left( n,n \right)$. These components, which are in fact Euclidean spaces of dimension $2\left( g-1 \right){{\dim}_{\mathbb{R}}}G$, are parameterized, from the point of view of stable $G$-Higgs bundles, by fixed square roots of the canonical line bundle over the Riemann surface, for a choice of complex structure on $\Sigma $. Later on, in \cite{Biswas3} the authors extended N. Hitchin's results to a Riemann surface with $s$-many punctures and the group $G=\mathrm{SL}\left( k,\mathbb{R} \right)$.
In particular, for a compact Riemann surface $X$ of genus $g$ and a divisor of $s$-many distinct points $D=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ such that $2g-2+s>0$, they showed that Fuchsian representations of ${{\pi }_{1}}\left( X\backslash D \right)$ into $\mathrm{PSL}\left( 2,\mathbb{R} \right)$ are in one-to-one correspondence with parabolic $\mathrm{SL}\left( 2,\mathbb{R} \right)$-Higgs bundles of the form $\left( E,\theta \right)$, satisfying the following: \begin{enumerate} \item $E:={{\left( L\otimes \xi \right)}^{*}}\oplus L$, \\ where $L$ is a line bundle with ${{L}^{2}}={{K}_{X}}$ and $\xi ={{\mathsf{\mathcal{O}}}_{X}}\left( D \right)$ is the line bundle associated to the divisor $D$; the bundle $E$ is equipped with a parabolic structure given by the trivial flag ${{E}_{{{x}_{i}}}}\supset \left\{ 0 \right\}$ and weight $\frac{1}{2}$ for every $1\le i\le s$. \item $\theta :=\left( \begin{matrix} 0 & 1 \\ a & 0 \\ \end{matrix} \right)\in {{H}^{0}}\left( X,\mathrm{End}\left( E \right)\otimes K\left( D \right) \right)$,\\ for a meromorphic quadratic differential $a\in {{H}^{0}}\left( X,{{K}^{2}}\left( D \right) \right)$. \end{enumerate} Considering the $\left( k-1 \right)$-symmetric product of the parabolic vector bundle $E$, an extension of this result was also provided in \cite{Biswas3} for representations into $\mathrm{PSL}\left( k,\mathbb{R} \right)$, for $k>2$.
Fuchsian representations of ${{\pi }_{1}}\left( X\backslash D \right)$ into $\mathrm{PSL}\left( k,\mathbb{R} \right)$ correspond to parabolic $\mathrm{SL}\left( k,\mathbb{R} \right)$-Higgs bundles $\left( {{W}_{k}},\theta \left( {{a}_{2}},\ldots ,{{a}_{k}} \right) \right)$, satisfying the following: \begin{enumerate} \item ${{W}_{k}}={{S}^{k-1}}\left( E \right)\otimes {{\xi }^{m\left( k \right)}}$, where $m\left( k \right)=\left\{ \begin{matrix} \frac{k}{2}-1,\,\,\,k:\mathrm{even} \\ \frac{k-1}{2},\,\,\,k:\mathrm{odd} \\ \end{matrix} \right.$, equipped with the trivial flag ${{\left( {{W}_{k}} \right)}_{{{x}_{i}}}}\supset \left\{ 0 \right\}$ with weight $\beta =\left\{ \begin{matrix} \frac{1}{2},\,\,\,k:\mathrm{even} \\ 0,\,\,\,k:\mathrm{odd} \\ \end{matrix} \right.$, for every $1\le i\le s$. \item $\theta \left( {{a}_{2}},\ldots ,{{a}_{k}} \right)=\left( \begin{matrix} 0 & 1 & {} & 0 \\ 0 & 0 & \ddots & 0 \\ \vdots & {} & \ddots & 1 \\ {{a}_{k}} & \cdots & {{a}_{2}} & 0 \\ \end{matrix} \right)$, for meromorphic differentials ${{a}_{j}}\in {{H}^{0}}\left( X ,{{K}^{j}}\otimes {{\xi }^{j-1}} \right)$. \end{enumerate} Lastly, it was shown in \cite{Biswas3} that there exists a connected component of real dimension $2\left( g-1 \right)\left( {{k}^{2}}-1 \right)+s\left( {{k}^{2}}-k \right)$ in the moduli space of representations of ${{\pi }_{1}}\left( X\backslash D \right)$ into $\mathrm{SL}\left( k,\mathbb{R} \right)$ with fixed conjugacy class of monodromy around the punctures. In the sequel, we extend these results to a general split real group $G$. Using an irreducible representation $\phi :\mathrm{SL(2}\mathrm{,}\mathbb{R}\mathrm{)}\to G$ for a split real group $G$, which sends copies of a maximal compact subgroup of $\mathrm{SL(2}\mathrm{,}\mathbb{R}\mathrm{)}$ into copies of a maximal compact subgroup of $G$, one can establish the existence of a parabolic Teichm\"{u}ller component similarly to the classical method of N. Hitchin.
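For instance, for $k=2$ this dimension count gives \[2\left( g-1 \right)\left( {{2}^{2}}-1 \right)+s\left( {{2}^{2}}-2 \right)=6g-6+2s,\] which is precisely the real dimension of the Teichm\"{u}ller space of a genus $g$ surface with $s$-many punctures, as one expects for the component containing the Fuchsian representations.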
This was discussed in \S 8 of \cite{BiGaRi}. In particular, the representation $\phi$ considered induces a decomposition ${{\mathfrak{g}}^{\mathbb{C}}}=\underset{i=1}{\overset{l}{\mathop{\oplus }}}\,{{V}_{i}}$ into irreducible pieces. For a standard $\mathfrak{sl}_{2}$ basis $\left( H,X,Y \right)$, with $H=\sqrt{-1}{{\mathfrak{u}}_{1}}$, take ${{e}_{1}}=\phi \left( X \right)$ and ${{e}_{-1}}=\phi \left( Y \right)$. There exists a basis ${{p}_{1}},\ldots ,{{p}_{l}}$ of invariant polynomials on ${{\mathfrak{g}}^{\mathbb{C}}}$ of degrees ${{m}_{i}}+1$, where $2{{m}_{i}}+1$ is the dimension of ${{V}_{i}}$, or equivalently ${{m}_{i}}$ is the eigenvalue of $\mathrm{ad}(H)$ on a highest weight vector ${{e}_{i}}\in {{V}_{i}}$, for $1\le i\le l$, with the property that for elements of the form \[f={{e}_{-1}}+{{f}_{1}}{{e}_{1}}+\ldots +{{f}_{l}}{{e}_{l}}\] it is ${{p}_{i}}\left( f \right)={{f}_{i}}$. Analogously to the non-parabolic case of N. Hitchin \cite{Hit92}, one obtains a section $\psi$ of the map \[p:{{\mathsf{\mathcal{M}}}_{par}}\left( G \right)\to \underset{i=1}{\overset{l}{\mathop{\oplus }}}\,{{H}^{0}}\left( X,{{K}^{{{m}_{i}}+1}}\otimes {{\xi }^{{{m}_{i}}}} \right)\] consisting of a family of parabolic $G$-Higgs bundles $\left( \phi \left( E \right),\varphi \right)$, where: \begin{enumerate} \item $E={{\left( L\otimes \xi \right)}^{*}}\oplus L$, for ${{L}^{2}}={{K}_{X}}$ and $\xi ={{\mathsf{\mathcal{O}}}_{X}}\left( D \right)$, and $\phi \left( E \right)$ is equipped with the trivial flag ${{\left( \phi \left( E \right) \right)}_{{{x}_{i}}}}\supset \left\{ 0 \right\}$ with weight $\frac{1}{2}$, for every $1\le i\le s$, and \item The Higgs field is given by \[\varphi ={{e}_{-1}}+{{a}_{1}}{{e}_{1}}+\ldots +{{a}_{l}}{{e}_{l}}\] with ${{a}_{j}}\in {{H}^{0}}\left( X,{{K}^{{{m}_{j}}+1}}\otimes {{\xi }^{{{m}_{j}}}} \right)$, for $1\le j\le l$.
\end{enumerate} The Higgs field $\varphi $ is meromorphic with simple poles at the points ${{x}_{i}}$ in the divisor $D$ and the residue $\mathrm{Res}_{{{x}_{i}}}\varphi ={{e}_{-1}}$ is nilpotent with $l$-dimensional centralizer, where $l=\mathrm{rk}\,{{\mathfrak{g}}^{\mathbb{C}}}$. The section $\psi$ thus provides the existence of a \textit{parabolic Teichm\"{u}ller component}. In fact, these components are parameterized by parabolic square roots of the line bundle $K\left( D \right)$ of degree $2g-2+s$, that is, line bundles ${{L}_{0}} \to X$ with $\deg {{L}_{0}}=g-1$ equipped with a trivial flag ${{\left( {{L}_{0}} \right)}_{{{x}_{i}}}}\supset \left\{ 0 \right\}$ and parabolic weight $\frac{1}{2}$ on each fiber ${{\left( {{L}_{0}} \right)}_{{{x}_{i}}}}$, for $1\le i\le s$. Notice that, for such objects, $\operatorname{pardeg} {{L}_{0}}=g-1+\frac{s}{2}$. In \S 7.2 later on, we show that there are ${{2}^{2g+s-1}}$-many non-isomorphic such parabolic square roots of $K\left( D \right)$; thus there exist ${{2}^{2g+s-1}}$ parabolic Teichm\"{u}ller components of ${{\mathsf{\mathcal{M}}}_{par}}\left( G \right)$ for a split real group $G$. We finally apply the Riemann-Roch formula to compute the dimension of these components: \begin{thm}\label{401} Let $X$ be a compact Riemann surface of genus $g$ and let $D=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ be a divisor of $s$-many distinct points on $X$, such that $2g-2+s>0$, that is, the surface $X$ can be equipped with a metric of constant negative curvature $-4$. Let $G$ be the adjoint group of the split real form of a complex simple Lie group with Cartan decomposition of the Lie algebra $\mathfrak{g}=\mathfrak{h}\oplus \mathfrak{m}$. The space of homomorphisms from the fundamental group of $X\backslash D$ into $G$, with fixed conjugacy class of monodromy around the points in $D$, has a component of real dimension $2\left( g-1 \right){{\dim}_{\mathbb{R}}}G+2s\cdot \mathrm{rk}\,E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$.
\end{thm} \begin{proof} In \cite{Boalch}, P. Boalch established a correspondence between parabolic connections and $G$-filtered fundamental group representations, thus extending the non-abelian Hodge correspondence over non-compact curves of C. Simpson \cite{Simp} for $\mathrm{GL}(n,\mathbb{C})$. As in N. Hitchin's classical approach, this correspondence identifies the subfamily defined by the parabolic Hitchin section $\psi$ with the moduli space of completely reducible flat $G$-connections on $X\backslash D$, meromorphic at ${{x}_{i}}\in D$ and whose holonomy is conjugate in $G$ to an element $U\in H$, where $H\subset G$ is a maximal compact subgroup of $G$. \noindent From the Riemann-Roch formula, one obtains that the real dimension of the vector space $\underset{i=1}{\overset{l}{\mathop{\oplus }}}\,{{H}^{0}}\left( X,{{K}^{{{m}_{i}}+1}}\otimes {{\xi }^{{{m}_{i}}}} \right)$ is equal to \[2\left[ \sum\limits_{i=1}^{l}{\left( 2{{m}_{i}}+1 \right)\left( g-1 \right)+{{m}_{i}}s} \right]=2\left( g-1 \right){{\dim}_{\mathbb{R}}}G+2s\sum\limits_{i=1}^{l}{{{m}_{i}}}.\] For the family of parabolic Higgs bundles we have considered, the residue of the Higgs field $\mathrm{Res}_{{{x}_{i}}}\varphi ={{e}_{-1}}$ is regular and nilpotent. \noindent On the other hand, ${{\pi }_{1}}\left( X\backslash D \right)=\left\langle {{c}_{1}},{{d}_{1}},\ldots ,{{c}_{g}},{{d}_{g}},{{e}_{1}},\ldots ,{{e}_{s}}\left| \prod\limits_{j=1}^{g}{\left[ {{c}_{j}},{{d}_{j}} \right]\prod\limits_{j=1}^{s}{{{e}_{j}}=id}} \right. \right\rangle $, where ${{c}_{j}},{{d}_{j}}$ are simple loops around the handles of $X$ and ${{e}_{j}}$ are simple loops around the points ${{x}_{i}}$ in the divisor. The image of the elements ${{c}_{j}},{{d}_{j}}$ via a representation $\rho :{{\pi }_{1}}\left( X\backslash D \right)\to G$ depends on $\dim G$ different parameters.
Moreover, for each loop ${{e}_{j}}$, the relations for weights and monodromies in Table 1 of \cite{BiGaRi} provide that the image of ${{e}_{j}}$ via a representation $\rho :{{\pi }_{1}}\left( X\backslash D \right)\to G$ is a regular unipotent element $E_j$. Let $U_j$ be the conjugacy class of $E_j$ in $G$. Calculating the number of parameters for the image of $e_j$ is equivalent to calculating the number of parameters for the set $U_j$. Clearly, any element in $U_j$ can be written as $A E_j A^{-1}$, where $A\in G/I$, and where $I$ is the centralizer of the regular unipotent element ${{E}_{j}}$. This means that $\dim I=l$, where $l=\mathrm{rk}\,{{\mathfrak{g}}^{\mathbb{C}}}$, thus the total number of parameters for the image $\rho \left( \left[ {{e}_{j}} \right] \right)$ is ${{\dim}_{\mathbb{C}}}{{\mathfrak{g}}^{\mathbb{C}}}-l=\sum\limits_{i=1}^{l}{2{{m}_{i}}}$. We deduce that the real dimension of the space of fundamental group representations into $G$, with the monodromy around the points in $D$ lying in the conjugacy class of an element in $H$, is equal to $2{{\dim}_{\mathbb{R}}}G\left( g-1 \right)+2s\sum\limits_{i=1}^{l}{{{m}_{i}}}$. This coincides with the dimension count for the vector space $\underset{i=1}{\overset{l}{\mathop{\oplus }}}\,{{H}^{0}}\left( X,{{K}^{{{m}_{i}}+1}}\otimes {{\xi }^{{{m}_{i}}}} \right)$, which can be written as \[2{{\dim}_{\mathbb{R}}}G\left( g-1 \right)+2s\cdot \mathrm{rk}\,E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)\] since, for the weights in the family of Higgs bundles in the Hitchin section, it holds in particular that ${{\left( E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right) \right)}_{{{x}_{i}}}}\simeq {{\mathfrak{m}}^{\mathbb{C}}}$. \end{proof} \begin{rem}Note that the calculation here coincides with the calculation of the expected real dimension of the moduli space from Corollary \ref{311}. Moreover, in the absence of punctures, the dimension of a Teichm\"{u}ller component coincides with the one from \cite{Hit92}.
\end{rem} \section{Maximal Parabolic components} Distinguished components of the moduli space $\mathcal{M}_{par}^\alpha(G)$ also exist when the homogeneous space $G/H$ is a Hermitian symmetric space of noncompact type, where $H\subset G$ is a maximal compact subgroup. For the classical groups, this means considering the Lie groups $\mathrm{SU}(p,q)$, $\mathrm{Sp}(2n,\mathbb{R})$, $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$ and $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$. In this case, $\mathfrak{h}=\mathrm{Lie}\left( H \right)$ has a 1-dimensional center and there is a decomposition of ${{\mathfrak{m}}^{\mathbb{C}}}$ into its $\pm i$-eigenspaces ${{\mathfrak{m}}^{\mathbb{C}}}={{\mathfrak{m}}^{+}}\oplus {{\mathfrak{m}}^{-}}$. For a parabolic $G$-Higgs bundle $\left( E,\varphi \right)$ with the Higgs field $\varphi$ decomposing accordingly as $\varphi ={{\varphi }^{+}}+{{\varphi }^{-}}$, the authors in \cite{BiGaRi} define a Toledo invariant $\tau \left( E \right)$ analogously to the non-parabolic case and provide a general inequality of Milnor-Wood type. \begin{prop}[O. Biquard, O. Garc\'{i}a-Prada, I. Mundet i Riera \cite{BiGaRi}]\label{501} For a semistable parabolic $G$-Higgs bundle $\left( E,\varphi \right)$ on a Riemann surface with a divisor $\left( X,D \right)$, it holds that \[-\mathrm{rk}\left( {{\varphi }^{+}} \right)\left( 2g-2+s \right)\le \tau \left( E \right)\le \mathrm{rk}\left( {{\varphi }^{-}} \right)\left( 2g-2+s \right).\] \end{prop} In the remainder of this section, we study explicitly the case when $G=\mathrm{Sp}(2n,\mathbb{R})$, while some further details are moved to the Appendix. Then, in \S 6 and \S 7 we describe topological invariants for maximal parabolic $G=\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles. The analysis for the case $G=\mathrm{Sp}(2n,\mathbb{R})$ can then be readily adapted for the study of maximal parabolic $G$-Higgs bundles for the other Hermitian symmetric spaces $G/H$.
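Note, for instance, that for $n=1$ we have $\mathrm{Sp}\left( 2,\mathbb{R} \right)\cong \mathrm{SL}\left( 2,\mathbb{R} \right)$, and for the parabolic Higgs bundles in the Teichm\"{u}ller components of \S 4 the relevant bundle is the parabolic line bundle $V=L$ with ${{L}^{2}}={{K}_{X}}$ and weight $\frac{1}{2}$ at each ${{x}_{i}}$, so that the parabolic Toledo invariant of Definition \ref{503} below equals \[\tau =\operatorname{pardeg} L=\left( g-1 \right)+\frac{s}{2}=n\left( g-1+\frac{s}{2} \right),\] attaining the Milnor-Wood bound of Proposition \ref{504} below; in this sense, the parabolic Teichm\"{u}ller components for $\mathrm{SL}\left( 2,\mathbb{R} \right)$ are maximal.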
\subsection{Maximal Parabolic $\mathrm{Sp}\left( 2n,\mathbb{R}\right )$-Higgs bundles.} A maximal compact subgroup of $G=\mathrm{Sp}\left( 2n,\mathbb{R}\right )$ is $H=\mathrm{U}\left( n \right)$ and ${{H}^{\mathbb{C}}}=\mathrm{GL}\left( n,\mathbb{C} \right)$, thus the parabolic structure on a $\mathrm{GL}\left( n,\mathbb{C} \right)$-principal bundle is in this case defined by a weighted filtration. We first fix some notation before giving the precise definitions. Let $X$ be a compact Riemann surface of genus $g$ and let $D:=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ be a divisor of $s$-many distinct points on $X$, assuming that $2g-2+s>0$. Let $K$ denote, as usual, the canonical line bundle over $X$ of degree $2g-2$, and $\xi :={{\mathsf{\mathcal{O}}}_{{X}}}\left( D \right)$ the line bundle on $X$ given by the divisor $D$. The degree of the line bundle $ K\otimes \xi$ is $2g-2+s$, where $s$ is the number of points in the divisor. Let $V$ be a rank $n$ holomorphic bundle over $X$. Equip this with a parabolic structure given by a weighted flag on each fiber ${{V}_{x_{i}}}$: \begin{equation}\tag{2} \begin{matrix} {{V}_{x_{i}}}\supset {{V}_{x_{i},2}}\supset \ldots \supset {{V}_{x_{i},n+1}}=\left\{ 0 \right\} \\ 0\le {{\alpha }_{1}}\left( x_{i} \right)\le \ldots \le {{\alpha }_{n}}\left( {{x}_{i}} \right)<1 \\ \end{matrix} \end{equation} for each ${{x}_{i}}\in D$.
The parabolic degree of the parabolic bundle $\left( V,\alpha \right)$ is given by the rational number \[\operatorname{pardeg} V=\deg V+\sum\limits_{{{x}_{i}}\in D}{\sum\limits_{j=1}^{n}{{{\alpha }_{j}}\left( {{x}_{i}} \right)}}.\] For a parabolic principal ${{H}^{\mathbb{C}}}=\mathrm{GL}\left(n,\mathbb{C}\right)$-bundle $E$, let $E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)$ denote the (parabolic) bundle associated to $E$ via the isotropy representation and, as a bundle, \[E\left( {{\mathfrak{m}}^{\mathbb{C}}} \right)=\mathrm{Sym}^{2}\left( V \right)\oplus \mathrm{Sym}^{2}\left( {{V}^{*}} \right)\] for $V$ the rank $n$ bundle associated to $E$ via the standard representation. The definition of a parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundle according to the authors in \cite{BiGaRi} specializes to the following: \begin{defn}\label{502} Let $X$ be a compact Riemann surface of genus $g$ and let $D:=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ be a divisor of $s$-many distinct points on $X$, assuming that $2g-2+s>0$. A \emph{parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundle} is defined as a triple $\left( V,\beta ,\gamma \right)$, where \begin{itemize} \item $V$ is a rank $n$ bundle on $X$, equipped with a parabolic structure given by a weighted flag as in (2), and \item The maps $\beta :{{V}^{\vee }}\to V\otimes K\otimes \xi$ and $\gamma :V\to {{V}^{\vee }}\otimes K\otimes \xi$ are parabolic symmetric morphisms. \end{itemize} \end{defn} The parabolic structures on $V$ and ${{V}^{\vee }}$ now induce a parabolic structure on the direct sum $E=V\oplus {{V}^{\vee }}$, for which $\operatorname{pardeg} E=0$.
We define alternatively a parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundle on $\left( X,D \right)$ as a parabolic Higgs bundle $\left( E,\Phi \right)$, where $E=V\oplus {{V}^{\vee }}$ and $\Phi =\left( \begin{matrix} 0 & \beta \\ \gamma & 0 \\ \end{matrix} \right):E\to E\otimes K\left( D \right)$; the stability condition for such pairs $\left( E,\Phi \right)$ will be the one considered in Definition \ref{204}. \begin{defn}\label{503} The \emph{parabolic Toledo invariant} of a parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundle is defined as the rational number \[\tau =\operatorname{pardeg} \left( V \right).\] \end{defn} Moreover, we may obtain a \emph{Milnor-Wood type inequality} for this topological invariant: \begin{prop}\label{504} Let $\left( E,\Phi \right)$ be a semistable parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundle. Then \[\left| \tau \right|\le n\left( g-1+\frac{s}{2} \right),\] where $s$ is the number of points in the divisor $D$. \end{prop} \begin{proof} Consider the parabolic bundles $N=\ker \left( \gamma \right)$ and $I=\operatorname{Im}\left( \gamma \right)\otimes \left(K\otimes \xi\right)^{-1}\subseteq {{V}^{\vee }}$. \\ We thus get an exact sequence of parabolic bundles \[0\to N\to V\to I\otimes K\otimes \xi\to 0\] and so \begin{align*} \mathrm{par}\deg \left( V \right) & = \mathrm{par}\deg \left( N \right)+\mathrm{par}\deg \left( I\otimes K\otimes \xi \right) \\ & = \mathrm{par}\deg \left( N \right)+\mathrm{par}\deg \left( I \right)+\mathrm{rk}\left( I \right)\left( 2g-2+s \right) \tag{3}\end{align*} using the formula for the parabolic degree of a tensor product and the fact that $\mathrm{par}\deg \left(K\otimes \xi\right)=2g-2+s$.\\ Now, $I$ is a subsheaf of ${{V}^{\vee }}$ and $I\hookrightarrow {{V}^{\vee }}$ is a parabolic map. Let $\tilde{I}\subset {{V}^{\vee }}$ be its saturation, which is a subbundle of ${{V}^{\vee }}$, and endow it with the induced parabolic structure.
Then $N$ and $V\oplus \tilde{I}\subset E$ are $\Phi $-invariant parabolic subbundles of $E$. The semistability of $\left( E,\Phi \right)$ now implies $\mathrm{par}\mu \left( N \right)\le \mathrm{par}\mu \left( E \right)$ and $\mathrm{par}\mu \left( V\oplus I \right)\le \mathrm{par}\mu \left( V\oplus \tilde{I} \right)\le \mathrm{par}\mu \left( E \right)$. However, \[\mathrm{par}\mu \left( E \right)=\frac{\operatorname{pardeg} \left(E \right)}{\mathrm{rk}\left( E \right)}=0,\] thus we have \[\operatorname{pardeg} \left( N \right)\le 0\] and \[ \operatorname{pardeg} \left( V \right) +\operatorname{pardeg} \left( I \right)\le 0.\] Equation (3) provides that \[ \operatorname{pardeg} \left( V \right)\le -\operatorname{pardeg} \left( V \right)+\mathrm{rk}\left( I \right)\left( 2g-2+s \right),\] thus \[ \tau \le n\left( g-1+\frac{s}{2} \right).\] The map $\left( V,\beta ,\gamma \right)\mapsto \left( {{V}^{\vee }}\otimes \xi ,\gamma ,\beta \right)$ defines an isomorphism ${{\mathsf{\mathcal{M}}}_{-\tau }}\cong {{\mathsf{\mathcal{M}}}_{\tau }}$, providing also the lower bound $-n\left( g-1+\frac{s}{2} \right)\le \tau$, where ${{\mathsf{\mathcal{M}}}_{\tau }}$ denotes the subspace consisting of the pairs with fixed parabolic Toledo invariant $\tau$. \end{proof} \begin{defn}\label{505} The polystable parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundles with fixed parabolic structure $\alpha$ and parabolic Toledo invariant $\tau =n\left( g-1+\frac{s}{2} \right)$ will be called \emph{maximal} and we shall denote the subspace of the moduli space containing those by $\mathsf{\mathcal{M}}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$. We define $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$ to be the union of the $\mathsf{\mathcal{M}}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ over $\alpha \in \frac{1}{2} \mathfrak{t}$, i.e. $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))=\cup_{\alpha\in\frac{1}{2}\mathfrak{t}}\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$.
\end{defn} It can be shown that this subspace is non-empty (see \cite{Kydon}). \section{The correspondence to orbifold Higgs bundles} The topology of moduli spaces of semistable parabolic $G$-Higgs bundles has been studied so far in the case when $G=\mathrm{GL(}n\mathrm{,}\mathbb{C}\mathrm{)}$ in \cite{GaGoMu}, $G=\mathrm{U}\left( 2,1 \right)$ in \cite{Loga2} and $G=\mathrm{U}\left( p,q \right)$ in \cite{GaLoMu}, \cite{Loga}, which pioneered the study of irreducible components of parabolic Higgs bundles for a real Lie group. In the case $G=\mathrm{Sp}\left( 2n,\mathbb{R} \right)$, we fix the parabolic degree $d=\mathrm{pardeg}(V)$ of the bundle and the weight type $\alpha$, where we assign weight equal to either $0$ or $\frac{1}{2}$ for the trivial flag on each fiber $V_{x_{i}}$, $x_{i}\in{D}$; let ${\mathsf{\mathcal{M}}}_{par}^d \left( \mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ be the moduli space of polystable parabolic $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$-Higgs bundles of degree $d$ and of the weight type described above. We note that the parabolic structures we consider in the rest of the paper lie in $\frac{1}{2}\mathfrak{t}$; in other words, any parabolic weight can be written as a fraction with denominator $2$. The reason why we fix these particular parabolic structures for the parabolic Higgs bundles in the moduli space $\mathsf{\mathcal{M}}_{par}^d \left( \mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ will become clear in what follows. In the case when the weights in the parabolic structure of the bundle are rational numbers, we use a correspondence between parabolic Higgs bundles and orbifold Higgs bundles in order to define appropriate topological invariants and count connected components. Since the weights are chosen to be either $0$ or $\frac{1}{2}$, any weight can be written as a fraction with denominator $2$.
Thus, we may construct a special $V$-manifold from the data $\left( X,D,2 \right)$, where $m=2$ describes the cyclic group action around the points in the divisor $D$ and corresponds precisely to the denominator $2$. The topological invariants of the corresponding Higgs $V$-bundles we are interested in will be realized as characteristic classes in $V$-cohomology groups with ${{\mathbb{Z}}_{2}}$-coefficients. In this section, we review, for the most part, the correspondences from \cite{Biswas2}, \cite{FuSt} and \cite{NaSt}. In particular, we construct an orbifold Higgs field for a bundle of any rank following closely the constructions in the aforementioned references. \subsection{Orbifold Higgs bundles} Let $Y$ be a closed, connected, smooth Riemann surface and let $\mathrm{Aut}\left( Y \right)$ be the group of algebraic automorphisms of $Y$. Assume that the finite group $\Gamma$ acts faithfully on $Y$; in other words, there is an injective homomorphism $h :\Gamma \to \mathrm{Aut}\left( Y \right)$. Denote by $[Y / \Gamma]$ the orbifold and let $E$ be a vector bundle over $Y$. We say that $E$ is \emph{$\Gamma$-equivariant}, if there is a group action on $E$, $\rho: \Gamma \times E \rightarrow E$, such that $\phi \circ \rho(\gamma,z)=h(\gamma)(\phi(z))$, where $\phi : E \rightarrow Y$ is the projection. \begin{defn}\label{601} An \emph{orbifold sheaf} on $[Y / \Gamma]$ is a torsion free coherent sheaf $E$ on $Y$ together with a lift of the action of $\Gamma$ to $E$, such that the automorphism of the space of stalks for the action of any $g\in \Gamma $ is a coherent sheaf isomorphism between $E$ and $\rho {{\left( {{g}^{-1}} \right)}^{*}}E$; when $E$ is locally free, it is called an \emph{orbifold bundle}. \end{defn} \begin{defn} The \textit{degree} of an orbifold bundle $E$ on $[Y/\Gamma]$ is defined to be \[\deg_{orb}(E)=\frac{1}{|\Gamma|}\int_Y c_1(E),\] where $c_1(E)$ is the first Chern class of $E$ as a holomorphic bundle on $Y$.
\noindent The \textit{orbifold slope} will be given by the fraction \[\mu_{orb}(E)=\frac{\deg_{orb}(E)}{\mathrm{rk}(E)}.\] \end{defn} Recall that a \emph{Higgs field} $\Phi$ of a holomorphic bundle $E$ over $Y$ is a holomorphic section of $\mathrm{End}(E)\otimes K$, where $K$ is the canonical line bundle over $Y$. We define next the orbifold Higgs field: \begin{defn}\label{602} An \emph{orbifold Higgs field} $\Phi$ over the orbifold bundle $E$ is a Higgs field which is equivariant with respect to the action of $\Gamma$, i.e., $\rho(g^{-1})^* \Phi = \Phi$. \end{defn} \begin{defn}\label{603} A \emph{Higgs bundle over the orbifold} $[Y / \Gamma]$ is a pair $(E, \Phi)$, where $E$ is an orbifold bundle and $\Phi$ is an orbifold Higgs field. \end{defn} An orbifold bundle $E$ is called \emph{orbifold stable (resp. semistable)}, if for any $\Gamma$-invariant subbundle $F$ of $E$ with $0 < \mathrm{rank } F <\mathrm{rank } E$, the inequality $\mu_{orb}(F) < \mu_{orb}(E)$ (resp. $\mu_{orb}(F) \le \mu_{orb}(E)$) holds. An orbifold Higgs bundle $(E, \Phi)$ will be called \emph{orbifold stable (resp. semistable)}, if the above inequality holds for any $\Gamma$-invariant subbundle $F$ which is preserved by the orbifold Higgs field $\Phi$; further details can be found in \cite{FuSt}, \cite{NaSt} and \cite{Biswas2}. In this article, we are interested in the global quotient $[Y / \Gamma]$ where the underlying space $X$ is a compact Riemann surface. \subsection{Local Picture of an Orbifold Higgs Bundle} Let $\widetilde{M}$ be a $k$-dimensional manifold with $s$-many marked points $x_1,...,x_s$.
For each marked point, there is a linear representation $\sigma_i: \Gamma_i \rightarrow \mathrm{Aut}(\mathbb{R}^k)$ of a cyclic group ${{\Gamma }_{i}}=\left\langle {{\sigma }_{i}} \right\rangle $, $1 \leq i \leq s$, where $\Gamma_i$ acts freely on $\mathbb{R}^k \backslash \{0\}$, together with an atlas of coordinate charts \begin{align*} & \phi_i: U_i \rightarrow D^k / \sigma_i, & 1 \leq i \leq s;\\ & \phi_p: U_p \rightarrow D^k, & p \in \widetilde{M} \backslash \{x_1,...,x_s\}. \end{align*} We get the orbifold $M$ by gluing all the local coordinate charts above, while $\widetilde{M}$ is the underlying manifold of $M$. The example we are interested in is $M=[Y / \Gamma]$, where $Y$ is a closed, connected, smooth Riemann surface and $\Gamma$ is a finite group acting effectively on $Y$. In this case, the underlying space $\widetilde{M}$ is exactly the underlying space $X$ of $[Y/\Gamma]$. In \cite{FuSt}, M. Furuta and B. Steer consider this construction to define a \emph{$V$-manifold}; we review some properties of $V$-manifolds in \S 7 later on. \begin{defn}\label{604} A \emph{holomorphic orbifold bundle} $E$ of rank $n$ over $M$ is defined locally on the charts as above with a collection of isotropy representations $\tau_i: \Gamma_i \rightarrow \mathrm{Aut}(\mathbb{C}^n)$ and local trivializations $\theta_i : E|_{U_i} \rightarrow D^k \times \mathbb{C}^n / \sigma_i \times \tau_i$, for $1 \leq i \leq s$. \end{defn} Forgetting the group action, we get a well defined holomorphic vector bundle $\widetilde{E}$ over the underlying space $\widetilde{M}$. We say that a local trivialization $\Theta_i : \widetilde{E}|_{D^k} \rightarrow D^k \times \mathbb{C}^n$ is \emph{compatible} with the orbifold structure (with respect to $E$), if $\Theta_i$ is $\Gamma_i$-equivariant, where the $\Gamma_i$ action comes from the local trivialization $\theta_i$. Notice that Definition \ref{604} is the local description of Definition \ref{601}.
We now give an example of the local chart of a rank $n$ holomorphic orbifold bundle $E$ over $M=[U / \mathbb{Z}_m]$, where $m \geq 2$. A local trivialization $\Theta : \widetilde{E} \rightarrow U \times \mathbb{C}^n$ is $\mathbb{Z}_m$-equivariant with respect to the following action \begin{align*} t(z;z_1,z_2,...,z_n)=(tz;t^{k_1}z_1,t^{k_2}z_2,...,t^{k_n}z_n), \end{align*} where $k_1,...,k_n$ are integers such that $k_1 \leq k_2 \leq ... \leq k_n \leq m$. We can take local holomorphic sections $f_1,...,f_n$ of $\widetilde{E}$ such that $\{f_1(x),...,f_n(x)\}$ is a basis of $(\widetilde{E})_x$ consisting of eigenvectors. Then, we can set \begin{align*} \Theta=(t^{-k_1}(t \cdot f_1),...,t^{-k_n}(t \cdot f_n)), \end{align*} where $t \cdot f_i(x)=t^{k_i}f_i(x)$. Let now $\Phi$ be a Higgs field over $E$. In our example, $\Phi$ can be written with respect to the local chart $[U / \mathbb{Z}_m]$ as follows: \begin{align*} \Phi=(\phi_{ij})_{1 \leq i,j \leq n}, \end{align*} where \begin{align*}\tag{4} \phi_{ij}= \begin{cases} z^{k_{i}-k_{j}} \hat{\phi}_{ij}(z^{m})\frac{dz}{z} & \text{if } k_i \geq k_j\\ 0 & \text{if } k_i < k_j, \end{cases} \end{align*} and the $\hat{\phi}_{ij}$ are holomorphic functions on the local chart. We explain why $\Phi$ can in fact be written this way in the following two remarks. \begin{rem}\label{605} In general, $\Phi \in H^0(\mathrm{End}_0 (E)\otimes K)$ is $\mathbb{Z}_m$-equivariant, where $\mathrm{End}_0(E)$ denotes the traceless endomorphisms of $E$ and $\mathbb{Z}_m$ acts on $\mathrm{End}_0 (E)\otimes K$ by conjugation. Equivariance under the conjugation action forces \begin{align*} \phi_{ij}= z^{k_{i}-k_{j}} \hat{\phi}_{ij}(z^{m})\frac{dz}{z}, \qquad 1 \leq i,j \leq n. \end{align*} If $k_i < k_j$, then the exponent $k_{i}-k_{j}$ is negative, so $\phi_{ij}$ would in general be a meromorphic, not holomorphic, section.
Hence, we may define $\phi_{ij}$ as in (4). \end{rem} \begin{rem}\label{606} In the next subsection, we construct the correspondence between an orbifold Higgs bundle and a parabolic Higgs bundle. Under this correspondence, $\Phi=(\phi_{ij})$ is a Higgs field, and the fact that $\Phi$ is a ``lower triangular matrix'' means that $\Phi$ preserves the filtration (cf. Definition 2.5), hence $\Phi$ is a well-defined parabolic Higgs field. \end{rem} \subsection{Orbifold Higgs bundle vs. Parabolic Higgs bundle} We construct a parabolic Higgs bundle over the underlying surface from a given orbifold Higgs bundle and show that this construction is precisely a one-to-one correspondence. In particular, given any parabolic Higgs bundle over the underlying surface, we can recover the corresponding orbifold Higgs bundle. We discuss the local construction in detail for both the holomorphic bundle and the Higgs field; this local construction can be glued naturally. The construction follows very closely \cite{FuSt}, \cite{NaSt}, and we add one more condition on the Higgs field, namely that it is a lower triangular matrix; see also p. 624 in \cite{NaSt}. In this section, all parabolic structures are assumed to have rational weights. \subsubsection{Holomorphic bundle.} We briefly review the construction by M. Furuta and B. Steer for the holomorphic bundle (cf. \cite{FuSt}). Since we work on the local chart, let $E$ be a rank $n$ holomorphic orbifold bundle over the orbifold surface $M=[U / \mathbb{Z}_m]$, where $m \geq 2$, with local trivialization $\Theta : \widetilde{E} \rightarrow U \times \mathbb{C}^n$. The local trivialization $\Theta$ is $\mathbb{Z}_m$-equivariant with respect to the action \begin{align*} t(z;z_1,z_2,...,z_n)=(tz;t^{k_1}z_1,t^{k_2}z_2,...,t^{k_n}z_n).
\end{align*} \noindent Now we consider a bundle map $f(k_1,\ldots,k_n): U\backslash \{x\} \times \mathbb{C}^n \to U\backslash \{x\} \times \mathbb{C}^n$ defined by \[f=\left( \begin{matrix} z^{k_1} & {} & {} \\ {} & \ddots & {} \\ {} & {} & z^{k_n} \\ \end{matrix} \right).\] Let $\tilde{\Theta}=f(k_1,\ldots,k_n)^{-1}\Theta$. It is not hard to check that \begin{equation}\tag{5} \tilde{\Theta}\left( t\cdot z \right)=\left( \begin{matrix} z^{k_1-k_1} & {} & {} \\ {} & \ddots & {} \\ {} & {} & z^{k_n-k_n} \\ \end{matrix} \right)\tilde{\Theta}\left( z \right)=\tilde{\Theta}\left( z \right). \end{equation} Hence, we define $E(k_1,\ldots,k_n)$ to be the holomorphic orbifold bundle obtained by patching $E|_{U\backslash\{x\}}$ and $U\times\mathbb{C}^n$ via $\tilde{\Theta}=f(k_1,\ldots,k_n)^{-1}\Theta$. From Equation (5), we also know that the isotropy representation is trivial, thus $E(k_1,\ldots,k_n)$ is a well-defined holomorphic bundle over the underlying space $U$. To define the filtration corresponding to the orbifold bundle $(E,\Theta)$, we have to make another assumption on the numbers $k_i$: We say that the local trivialization $\Theta$ is \emph{good}, if \[k_1\le k_2\le \ldots \le k_n.\] Let $r$ be the number of distinct $k_i$ and let $\kappa_1,\ldots,\kappa_r$ be the respective multiplicities of each of those distinct numbers.
We define the \emph{parabolic structure} on $F=E(k_1,\ldots,k_n)$ at a point $p$ by the following filtration \[F_p=F_1\supset F_2\supset \ldots \supset F_{r+1}=\{0\},\] where $F_i=\overbrace{0\oplus \cdots \oplus 0}^{j_{i-1}}\oplus \overbrace{\mathbb{C}\oplus \cdots \oplus \mathbb{C}}^{n-j_{i-1}}$ carries the weight $\frac{k_{j_i}}{m}$, with $j_i=\kappa_1+\ldots +\kappa_i$. Clearly, $F$ is a parabolic vector bundle over the underlying space. \begin{thm}[Theorem 5.7 in \cite{FuSt}]\label{607} The construction from $E$ to $F=E(k_1,\ldots,k_n)$ gives a bijective correspondence between isomorphism classes of holomorphic orbifold bundles with good trivialization $(E,\Theta)$ and isomorphism classes of parabolic bundles $(F,\tilde{\Theta})$. \end{thm} \begin{rem} Recall that this correspondence is established assuming rational weights in the parabolic structure. \end{rem} \subsubsection{Higgs field.} We now describe the correspondence for the Higgs fields for $(E,\Theta)$ and $(F,\tilde{\Theta})$. M. Furuta and B. Steer have constructed this correspondence in the rank 2 case. We construct the Higgs field for any rank in a similar way. The difference is that our construction of the Higgs field preserves the filtration of a parabolic bundle (Definition 2.5). Recall that Equation (4) gives the local description of the orbifold Higgs field on $M=[U / \mathbb{Z}_m]$. Under the correspondence $E \to F$ described in \S 6.3.1, the corresponding Higgs field $\tilde{\Phi}$ over the underlying space $U$ is the conjugation of $\Phi$ by the matrix $f(k_1,\ldots,k_n)$.
Hence, we have \begin{align*} \tilde{\phi}_{ij} & = z^{k_j-k_i}\,\phi_{ij}\\ & = \begin{cases} \frac{\hat{\phi}_{ij}\left( w \right)}{m w}dw & \text{if } k_i \geq k_j\\ 0 & \text{if } k_i < k_j, \end{cases} \end{align*} where we change the coordinate by $w=z^m$ in the second equality above, so that $\frac{dz}{z}=\frac{dw}{mw}$. From this calculation, it follows that $\tilde{\Phi}=(\tilde{\phi}_{ij})$ is a section with at most a simple pole at $p$, that is, an element of $H^{0}\left( \mathrm{End}_0\left( F \right)\otimes K\left( p \right) \right)$; in other words, $\tilde{\Phi}:F\to F\otimes K\left( p \right)$. Since the trivialization $\Theta$ is good, that is, $k_1\le k_2\le \ldots \le k_n$, the orbifold Higgs field $\Phi$ is a lower-triangular matrix and the same is true for $\tilde{\Phi}$, thus $\tilde{\Phi}$ preserves the filtration. In conclusion, $\tilde{\Phi}$ is a parabolic Higgs field. It is not hard to recover the orbifold Higgs field $\Phi$ from $\tilde{\Phi}$, giving a one-to-one correspondence. In summary, we have the following theorem: \begin{thm}\label{608} The above construction gives a bijective correspondence between isomorphism classes of holomorphic orbifold Higgs bundles with good trivialization $(E,\Theta,\Phi)$ and isomorphism classes of parabolic Higgs bundles $(F,\tilde{\Theta},\tilde{\Phi})$. \end{thm} Given this theorem, the next step is to show that this correspondence holds in the semistable (resp. stable) case. The following theorem gives us a way to calculate the degree of an orbifold line bundle.
\begin{thm}[Kawasaki-Riemann-Roch \cite{Kawasaki}]\label{609} If $E$ is a holomorphic orbifold line bundle over $[Y / \Gamma]$ with isotropy $\sigma_i^{\beta_i}$ at $x_i$, $1 \leq i \leq s$, then \begin{align*}\tag{6} \dim H^0(M,E) -\dim H^1(M,E)=1-g+\deg(E)-\sum_{i=1}^s \frac{\beta_i}{\alpha_i}, \end{align*} where $\deg(E)$ is the degree of $E$ as an orbifold bundle over $[Y/\Gamma]$, which is a rational number, and $\alpha_i$ is the order of the group generated by $\sigma_i$. \end{thm} We remind the reader that $\deg(E)-\sum_{i=1}^s \frac{\beta_i}{\alpha_i}$ is an integer. Under the correspondence of Theorem \ref{608}, we have \begin{equation*} \deg(F)=\deg(E)-\sum_{i=1}^s \frac{\beta_i}{\alpha_i}, \end{equation*} and since the parabolic weight of $F$ at $x_i$ is $\frac{\beta_i}{\alpha_i}$, it follows that $\deg(E)= \operatorname{pardeg} (F)$. In conclusion, the equality of the degrees provides the following proposition: \begin{prop}[Proposition 5.9 in \cite{FuSt}]\label{610} We have a bijective correspondence between isomorphism classes of holomorphic semistable (resp. stable) orbifold Higgs bundles with good trivialization $(E,\Theta,\Phi)$ and isomorphism classes of semistable (resp. stable) parabolic Higgs bundles $(F,\tilde{\Theta},\tilde{\Phi})$. \end{prop} The above correspondence is also established assuming rational weights in the parabolic structure. \begin{rem}\label{611} For the special maximal parabolic $G$-Higgs bundles we are considering, we have seen that the defining parabolic bundle data can be reinterpreted as a direct sum of parabolic vector bundles (such as $E=V\oplus V^{\vee}$ in the $\mathrm{Sp}(2n,\mathbb{R})$ case), thus the correspondence of Proposition \ref{610} can be applied in our setting.
\end{rem} \section{Topological Invariants of Maximal Parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles} Under the correspondence described in the last section, we use $V$-cohomology to describe the topological invariants of the maximal parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles. In \cite{Scott}, it is explained how to construct the fundamental group of an orbifold, and on p. 426-427 of the same article the homology group is defined and the character is calculated. From this, we can define the orbifold cohomology $H^{1}(M)$ in our case. The $V$-manifold we discuss in this section is exactly an orbifold; the terminology $V$-manifold comes from \cite{FuSt} and \cite{NaSt}. We first review some basic properties of a $V$-manifold. \subsection{$V$-manifold} A $V$-manifold is an orbifold. We review the definition of an orbifold and of a holomorphic bundle over an orbifold from the last section. Let $\widetilde{M}$ be a $k$-dimensional manifold with $s$-many marked points $x_1,...,x_s$. For each marked point, there is a linear representation $\sigma_i: \Gamma_i \rightarrow \mathrm{Aut}(\mathbb{R}^k)$ of a cyclic group $\Gamma_i=\left\langle \sigma_i \right\rangle$, $1 \leq i \leq s$, where $\Gamma_i$ acts freely on $\mathbb{R}^k \backslash \{0\}$, together with an atlas of coordinate charts \begin{align*} & \phi_i: U_i \rightarrow D^k / \sigma_i, & 1 \leq i \leq s;\\ & \phi_x: U_x \rightarrow D^k, & x \in \widetilde{M} \backslash \{x_1,...,x_s\}. \end{align*} An orbifold $M$ is obtained by gluing all local coordinate charts above, while $\widetilde{M}$ is the underlying manifold of $M$. We call $M$ the $V$-manifold in this section.
The \emph{$V$-bundle} $E$ of rank $l$ over $M$ (or vector bundle over the $V$-manifold $M$) is defined locally on the charts as above with a collection of isotropy representations $\tau_i: \Gamma_i \rightarrow \mathrm{Aut}(\mathbb{C}^l)$ and local trivializations $\theta_i : E|_{U_i} \rightarrow D^k \times \mathbb{C}^l / \sigma_i \times \tau_i$, $1 \leq i \leq s$. We are interested in the case when the manifold $\widetilde{M}$ is $X=Y / \Gamma$, where $Y$ is a compact Riemann surface and $\Gamma$ is a finite group acting effectively on $Y$. The following theorem gives the condition under which the $V$-manifold $M$ can be written in the form $[Y / \Gamma]$. \begin{thm}[Theorem 1.2 in \cite{FuSt}]\label{701} Let $\alpha_i$ denote the order of the cyclic group generated by the linear representation $\sigma_i$ described above. Any compact oriented $V$-surface $M$ of genus $g$, with $s \geq 3$, or $s=2$ and $\alpha_1 = \alpha_2$, in the case $g=0$, has the form $Y / \Gamma$, where $Y$ is a compact Riemann surface and $\Gamma$ is a finite group acting effectively. \end{thm} \begin{thm}[Theorem 1.3 in \cite{FuSt}]\label{702} If the compact oriented $V$-manifold $M$ has the form $Y / \Gamma$, then there is a bijective correspondence between isomorphism classes of complex $V$-bundles over $M$ and equivariant isomorphism classes of complex vector bundles over $Y$ with a $\Gamma$-action. \end{thm} We now define $V$-cohomology as follows. Recall that $M$ is a union of $X \backslash \{x_1,...,x_s\}$ and $\coprod U_i$, where $U_i = D_i/ \mathbb{Z}_{\alpha_i}$, $1 \leq i \leq s$. Then $M_V$ is defined as the union of $X \backslash \{x_1,...,x_s\}$ and $\coprod E \mathbb{Z}_{\alpha_i} \times_{\mathbb{Z}_{\alpha_i}}D_i$. \begin{defn}\label{703} The $V$-cohomology $H^{*}_V(M)$ is defined as the cohomology \begin{align*} H^{*}_V(M)=H^{*}(M_V).
\end{align*} \end{defn} The following theorem is the basic tool for calculating the cohomology group $H^{*}_V(M)$: \begin{thm}[Theorem 2.2 in \cite{FuSt}]\label{704} We have the following isomorphism for the first $V$-cohomology group: \begin{align*} H^1_{V}(M,\mathbb{Z}) \cong H^1(M, \mathbb{Z}). \end{align*} \end{thm} \subsection{Line V-bundles} Let $M$ be a $V$-manifold. Line $V$-bundles are line bundles over the $V$-manifold $M$; since a $V$-manifold is also an orbifold, line $V$-bundles can equally be considered as line bundles over the orbifold. \begin{defn}\label{705} Under the tensor product, the topological isomorphism classes of line $V$-bundles form a group, which shall be denoted by $\mathrm{Pic}_V(M)$. \end{defn} The topological classification of line bundles on a $V$-Riemann surface was carried out by M. Furuta and B. Steer \cite{FuSt}. Recall that there is a canonical line bundle $L_i$ at each point $x_i$ such that $L_i^{k_i}=\mathcal{O}(x_i)$, and this bundle has isotropy $e^{2\pi \sqrt{-1} \frac{1}{k_i}}$ at the point $x_i$. Thus, if we have a line $V$-bundle $L$ with isotropy $((\beta_1,k_1),\ldots,(\beta_s,k_s))$, we define the \textit{desingularization} $|L|$ to be \[|L|=L\otimes L_1^{-\beta_1}\otimes\ldots\otimes L_s^{-\beta_s}\] and it turns out that this data completely classifies the line bundles topologically. \begin{thm}[Proposition 1.4 in \cite{FuSt}]\label{706} Topological isomorphism classes of line $V$-bundles are in bijective correspondence with the isotropy classes $\sigma_1^{\beta_1},\ldots,\sigma_s^{\beta_s}$ over the points $x_1,\dots,x_s$ together with the first Chern class of a line bundle on the underlying manifold $\widetilde{M}$.
Thus \[\mathrm{Pic}_V(M)=\mathrm{Pic}(\widetilde{M})\oplus \bigoplus_{i=1}^s \mathbb{Z}_{k_i}.\] \end{thm} The idea of the proof is the following: We recall that on an ordinary Riemann surface, two line bundles $L_1$ and $L_2$ are topologically equivalent if and only if $c_1(L_1)=c_1(L_2)$. Now we can use the desingularization to define line bundles $|L_1|$ and $|L_2|$ over $\widetilde{M}$. The line $V$-bundles are equivalent if and only if $c_1(|L_1|)=c_1(|L_2|)$ and their isotropy classes coincide under some trivialization. The class $(c_1(|L|),(\beta_1,k_1),\ldots,(\beta_s,k_s))$ is called the \textit{Seifert invariant} of this bundle. Note that this invariant depends on the local trivialization of each neighborhood. We have more invariants if we would like to classify the holomorphic bundle instead of the topological bundle. By the classical Narasimhan-Seshadri correspondence and its higher-dimensional generalization, unitary representations of the fundamental group correspond to rank $n$ polystable bundles $E$ with trivial first Chern class and trivial $c_2(E)\cdot c_1(L)^{n-1}$, where $L$ is an ample line bundle on a K\"{a}hler manifold. I. Biswas and A. Hogadi in \cite{BiHo} generalized this correspondence to a compact orbifold of any dimension and any rank: \begin{thm}[Theorem 1.2 in \cite{BiHo}]\label{707} Let $M$ be a complex projective orbifold of dimension $n$, let $E$ be a vector bundle over $M$ and let $L$ be an ample line bundle. Then $E$ is polystable with respect to $L$ if and only if it corresponds to a unitary representation of the orbifold fundamental group. \end{thm} Thus, in particular, in the case of a line bundle over an orbifold Riemann surface, the stability condition is trivial. We know that $\mathrm{Pic}^{0}_{V}(M):=\mathrm{Hom}(\pi_V^1(M),U(1))$.
We see that in the case of $s$-many marked points, since the target group is commutative, it is \[\mathrm{Pic}^0_V(M):=\mathrm{Hom}(\langle a_i,b_i, \sigma_i \mid \sigma_i^{k_i}=1,\prod_{i=1}^s \sigma_i=1\rangle,U(1))\cong(S^1)^{2g}\times\frac{\bigoplus_{i=1}^s\mathbb{Z}_{k_i}}{(1,1,\ldots,1)}.\] We deduce that in our case, that is, considering the trivial parabolic structure with weight $\frac{1}{2}$ over each point in the divisor $D$, in other words, with $\mathbb{Z}_2$-isotropy at the $s$-many marked points on the genus $g$ Riemann surface $M$, we have the identification \[\mathrm{Pic}^0_V(M)\cong(S^1)^{2g}\oplus \mathbb{Z}_2^{s-1}.\] For bundles of higher degree, we can get a degree $0$ bundle by tensoring with a degree $-d$ bundle, thus this also reduces to the degree $0$ case, as the stability condition is trivial. We finally deduce the following: \begin{prop}\label{708} Let $X$ be a Riemann surface with genus $g$. Denote by $M$ the $V$-manifold with $s$-many marked points $x_1,...,x_s$, around which the isotropy group is $\mathbb{Z}_2$, such that $X$ is the underlying surface of $M$. Let $\pi: M \rightarrow X$ be the natural map. Given any line bundle $L$ over $X$, the line bundle $\pi^* L$ has $2^{2g+s-1}$ many square roots over $M$. In particular, let $K$ be the canonical line bundle over $X$. The line $V$-bundle $\pi^* K$ over $M$ has $2^{2g+s-1}$ many square roots over $M$. \end{prop} In the rest of the paper, given a line bundle $L$ over $X$, we sometimes abuse notation and write $L$ for the corresponding line $V$-bundle $\pi^* L$ over $M$.
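To make the count in Proposition \ref{708} explicit, note that any two square roots of a fixed line $V$-bundle differ by a $2$-torsion element of $\mathrm{Pic}^0_V(M)\cong(S^1)^{2g}\oplus \mathbb{Z}_2^{s-1}$. The $2$-torsion of $(S^1)^{2g}$ is $\mathbb{Z}_2^{2g}$, so the square roots, whenever they exist, are counted by \[2^{2g}\cdot 2^{s-1}=2^{2g+s-1}.\] For instance, for $g=2$ and $s=3$, the line $V$-bundle $\pi^* K$ has $2^{2\cdot 2+3-1}=2^{6}=64$ square roots over $M$.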
\subsection{Calculations in orbifold cohomology} We consider the following special $V$-manifold \begin{align*} & M=U_1 \bigcup U_2, & U_1 = X \backslash \{x_1,...,x_s\}, && U_2 = \coprod_{i=1}^s D / \mathbb{Z}_2,\\ & M_V=V_1 \bigcup V_2, & V_1 = X \backslash \{x_1,...,x_s\}, && V_2 = \coprod_{i=1}^s D\times_{\mathbb{Z}_2} E\mathbb{Z}_2, \end{align*} where $D$ denotes a disk around each of the punctures $x_{i}$ and $X$ is a compact Riemann surface of genus $g$. We only calculate the rank of the $V$-cohomology group $H^{*}_V(M)$ with coefficients in $\mathbb{Z}_2$. By the Mayer-Vietoris sequence, we have \begin{align*} 0 \rightarrow H^0(M_V) \rightarrow H^0(V_1) \bigoplus H^{0}(V_2) \rightarrow H^0(V_1\bigcap V_2)\\ \xrightarrow[]{j_1} H^1(M_V) \rightarrow H^1(V_1) \bigoplus H^{1}(V_2) \rightarrow H^1(V_1\bigcap V_2)\\ \xrightarrow[]{j_2} H^2(M_V) \rightarrow H^2(V_1) \bigoplus H^{2}(V_2) \rightarrow H^2(V_1\bigcap V_2).\\ \end{align*} \begin{enumerate} \item[(1)] Clearly, $V_1\bigcap V_2=\coprod_{i=1}^s S^1$. We have \begin{align*} & \mathrm{rk}(H^0(V_1\bigcap V_2))=s,\\ & \mathrm{rk}(H^1(V_1\bigcap V_2))=s,\\ & \mathrm{rk}(H^2(V_1\bigcap V_2))=0.\\ \end{align*} \item[(2)] For the cohomology group of $V_1=X \backslash \{x_1,...,x_s\}$ we check that \begin{align*} \mathrm{rk}(H^0(V_1))=1,\\ \mathrm{rk}(H^1(V_1))=2g+s-1,\\ \mathrm{rk}(H^2(V_1))=0.\\ \end{align*} \item[(3)] We use the Leray spectral sequence to calculate the cohomology group of $V_2$. We have the following fibration \begin{align*} D \rightarrow D\times_{\mathbb{Z}_2} E\mathbb{Z}_2 \rightarrow B\mathbb{Z}_2, \end{align*} where $B\mathbb{Z}_2$ is the classifying space of $\mathbb{Z}_2$ and $E\mathbb{Z}_2$ is the universal bundle over $B\mathbb{Z}_2$. By the Leray spectral sequence, we have \begin{align*} H^{*}(B\mathbb{Z}_2,H^{*}(D)) \Rightarrow H^{*}(D\times_{\mathbb{Z}_2} E\mathbb{Z}_2).
\end{align*} With $\mathbb{Z}_2$-coefficients we know \begin{align*} H^{i}(D)= \begin{cases} \mathbb{Z}_2, \quad i=0\\ 0, \quad \mathrm{otherwise,} \end{cases} \end{align*} since $D$ is contractible. Hence, we have \begin{align*} H^{i}(D\times_{\mathbb{Z}_2} E\mathbb{Z}_2)=H^{i}(B\mathbb{Z}_2)=\mathbb{Z}_2, \quad \text{for all } i \geq 0, \end{align*} where $H^{i}(D\times_{\mathbb{Z}_2} E\mathbb{Z}_2)$ is the $\mathbb{Z}_2$-cohomology. Based on the calculation above, we have \begin{align*} \mathrm{rk}(H^0(V_2))=s,\\ \mathrm{rk}(H^1(V_2))=s,\\ \mathrm{rk}(H^2(V_2))=s.\\ \end{align*} \end{enumerate} From the calculation of the ranks above, the restriction map $H^0(V_1) \bigoplus H^{0}(V_2) \rightarrow H^0(V_1\bigcap V_2)$ is surjective, hence $j_1=0$ and $H^1(M_V)$ injects into $H^1(V_1) \bigoplus H^{1}(V_2)$, with image the kernel of the restriction map $H^1(V_1) \bigoplus H^{1}(V_2) \rightarrow H^1(V_1\bigcap V_2)$; the latter map is surjective, since already $H^1(V_2)$ surjects onto $H^1(V_1\bigcap V_2)$. Since $M_V$ is connected, it follows that $\mathrm{rk}(H^{0}(M_V))=1$. The Mayer-Vietoris sequence now provides that \begin{align*} & \mathrm{rk}(H^1(M_V))=2g+s-1,\\ & \mathrm{rk}(H^2(M_V))=s.\\ \end{align*} \subsection{Topological Invariants of Parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs Bundles in $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$} In this subsection, we study the topological invariants of parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles in $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$. For a parabolic Higgs bundle $(E,\Phi) \in \mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$, the denominator of its parabolic weight $\alpha(x)$ is $2$, for every $x \in D$ (with the same notation as in Definition \ref{201}). In other words, the parabolic weight $\alpha(x)$ can be either $0$ or $\frac{1}{2}$. We remind the reader once more that the denominator $2$ of the weight corresponds to the group action of $\mathbb{Z}_2$ around the punctures in $D$; if the denominator is $n$, then the group action is $\mathbb{Z}_n$. To study the topological invariants of parabolic Higgs bundles, we consider parabolic Higgs bundles as Higgs $V$-bundles under the correspondence we studied in \S 6.
Thus we take the topological invariants of the corresponding Higgs $V$-bundles to define invariants for parabolic Higgs bundles. In this subsection, we slightly abuse terminology between parabolic Higgs bundles and Higgs $V$-bundles; when we discuss topological invariants, this should always refer to the Higgs $V$-bundles. We first study the $\mathrm{Sp}(4,\mathbb{R})$-case. Recall that the definition of a parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundle over $X$ with divisor $D=\{x_1,...,x_s\}$ involves a pair $(E,\Phi)$, where $E=V\oplus V^{\vee}$ is a rank 4 parabolic vector bundle over $X$ and $\Phi =\left( \begin{matrix} 0 & \beta \\ \gamma & 0 \\ \end{matrix} \right): E \rightarrow E \otimes K(D)$ is a parabolic Higgs field. From Proposition 5.4, the maximal parabolic degree of the parabolic vector bundle $V$ is $2g-2+s$. In this maximal case, the proof of Proposition 5.4 implies that $\gamma$ is an isomorphism. Fix a square root $L_0$ of $K(D)$ (as a line $V$-bundle) and define $W:=V \otimes L^{-1}_0$. Clearly, we have \begin{align*} & c=\gamma \otimes 1_{L^{-1}_0} : W=V \otimes L^{-1}_0 \rightarrow V^{\vee} \otimes K(D) \otimes L^{-1}_0=W^{\vee},\\ & \phi=(\beta \otimes 1_{L_0}) \circ (\gamma \otimes 1_{L^{-1}_0}): W=V\otimes L^{-1}_0 \rightarrow V \otimes L^{3}_0=W\otimes K(D)^2, \end{align*} where the first map $c$ is an isomorphism. Given a parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundle with maximal degree $(E,\Phi)$, we consider the associated triple $(W,c,\phi)$ when we discuss the topological invariants. It is easy to check that the parabolic structure of $E$ uniquely determines the parabolic structure of $W$. Thus we use the same notation $\alpha$ for the parabolic structure of $W$.
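For the reader's convenience, the twisting in the two maps above can be verified directly; recall that $L_0^2=K(D)$ as line $V$-bundles, so that $W^{\vee}=V^{\vee}\otimes L_0$: \begin{align*} & V \otimes L^{-1}_0 \xrightarrow{\;\gamma \otimes 1\;} V^{\vee} \otimes K(D) \otimes L^{-1}_0 = V^{\vee} \otimes L_0 = W^{\vee},\\ & V^{\vee} \otimes L_0 \xrightarrow{\;\beta \otimes 1\;} V \otimes K(D) \otimes L_0 = V \otimes L^{3}_0, \end{align*} and indeed $W \otimes K(D)^2 = V \otimes L_0^{-1}\otimes L_0^{4} = V \otimes L^{3}_0$.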
From the correspondence between the orbifold bundle ($V$-bundle) and the parabolic bundle we studied in \S 6, the parabolic Higgs bundle $E=V\oplus V^{\vee}$ over $X$ is equivalent to a Higgs $V$-bundle over $M$, where $M$ is the $V$-manifold with $s$-many marked points $x_1,...,x_s$, around which the isotropy group is $\mathbb{Z}_2$, and $X$ is the underlying surface of $M$. Under this correspondence, $c$ induces a quadratic form on the $V$-bundle $W$. Hence, the structure group of $W$ is $\mathrm{O}(2,\mathbb{C})$. Also, note that the $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundle $E$ carries a real structure. More precisely, $E$ can be written as $E=E_{\mathbb{R}} \otimes \mathbb{C}$, where $E_{\mathbb{R}}$ is a real vector bundle. Similarly, we can write $W$ as $W_{\mathbb{R}}\otimes \mathbb{C}$. Therefore, the structure group of $W_{\mathbb{R}}$ is $\mathrm{O}(2)$. In parallel to the definition of a Stiefel-Whitney class, the corresponding class $w_1$ of $W_{\mathbb{R}}$ in $H_V^1(M, \mathbb{Z}_2)$ is a well-defined topological invariant for $W$. By Theorem 7.7 and the calculations in \S 7.3, we deduce that the number of different elements in $H_V^1(M, \mathbb{Z}_2)$ is $2^{2g+s-1}$. An alternative description of this cohomology group is given by the fundamental group: \begin{align*} H^1_V(M, \mathbb{Z}_2)=\mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2), \end{align*} where $\pi^{1}_V(M)$ is the $V$-fundamental group with presentation \begin{align*} \pi_V^1(M)=\langle a_1,b_1,...,a_g,b_g,\sigma_1,...,\sigma_s \quad | \quad \sigma_1...\sigma_s[a_1,b_1]...[a_g,b_g]=1,\sigma_i^2=1, 1 \leq i \leq s\rangle, \end{align*} where $a_1,b_1,...,a_g,b_g$ are the standard generators of the fundamental group of the underlying surface $X$ and the $\sigma_i$ are represented by small loops around the points $x_i$, $1 \leq i \leq s$. We discuss the topological invariants for maximal parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles based on the cohomology group $H^1_V(M,\mathbb{Z}_2)$.
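As a quick verification of this count: since $\mathbb{Z}_2$ is abelian, any homomorphism $\pi_V^1(M)\rightarrow \mathbb{Z}_2$ kills the commutators $[a_i,b_i]$, hence it is determined by arbitrary values on $a_1,b_1,...,a_g,b_g$ ($2^{2g}$ choices) together with values $\epsilon_i \in \mathbb{Z}_2$ on the $\sigma_i$, subject to the single relation \[\epsilon_1+\epsilon_2+\ldots+\epsilon_s=0 \quad \text{in } \mathbb{Z}_2,\] which leaves $2^{s-1}$ choices. In total, \[\left| \mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2) \right| = 2^{2g}\cdot 2^{s-1}=2^{2g+s-1},\] in agreement with the rank computed in \S 7.3.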
Let $w_1 \in H^1_V(M,\mathbb{Z}_2)$ and $w_2 \in H^2_V(M,\mathbb{Z}_2)$. We distinguish the following cases: \begin{enumerate} \item[(1)] If $w_1 \neq 0$, every pair $(w_1,w_2)$ is a topological invariant for $W_{\mathbb{R}}$. Thus the pair $(w_1,w_2)$ can be considered as the topological invariant of $W$. The number of topological invariants in this case is $2^s(2^{2g+s-1}-1)$. \item[(2)] If $w_1=0$, then the structure group can be reduced to $\mathrm{SO}(2, \mathbb{C}) \subset \mathrm{O}(2,\mathbb{C})$. From the identification $\mathrm{SO}(2,\mathbb{C}) \cong \mathbb{C}^*$, $W$ can be decomposed as the direct sum $W=L \bigoplus L^{\vee}$. Now, stability applied to the map $\phi:W \rightarrow W \otimes K(D)^2$ provides the existence of a non-trivial holomorphic map $L \rightarrow L^{\vee}\otimes K(D)^2$, therefore necessarily $\mathrm{pardeg}(L) \leq 2g-2+s$. \begin{enumerate} \item[a.] If $\mathrm{pardeg}(L)\neq 2g-2+s$, then every value of the parabolic degree gives a topological invariant and there are at least $2g-2+s$ different values. But the parabolic degree is not enough to give all possible topological invariants of $L$. Recall from the definition of the parabolic degree that $``\mathrm{pardeg}"$ can be written as the sum of the classical degree $``\deg"$ and the weight $``w"$, which is the sum of the weights over the points $x \in D$. Note that $w$ is uniquely determined by the parabolic structure of $L$. If we fix the parabolic structure of $L$, there are $2g-2+s$ many choices for the classical degree $``\deg"$. At the same time, we have $2^s$ many choices for the parabolic structures of $L$. Thus the number of topological invariants in this case is $2^s(2g-2+s)$. \item[b.] If $\mathrm{pardeg}(L)=2g-2+s$, then $L^2 \cong K(D)^2$.
This describes parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles $\left( E=V\oplus V^{\vee},\Phi \right)$ with $V=N\oplus N^{\vee}K(D)$, for a line bundle $N=K(D)^{\frac{3}{2}}$. Thus, square roots of $K(D)$ parameterize components containing such Higgs bundles, and this contributes at least $2^{2g+s-1}$ topological invariants by Proposition \ref{708}. \end{enumerate} \end{enumerate} The discussion above implies our main theorem: \begin{thm}\label{709} The moduli space $\mathcal{M}_{par}^{max}(\mathrm{Sp}(4,\mathbb{R}))$ of maximal polystable parabolic $\mathrm{Sp}\left( 4,\mathbb{R} \right)$-Higgs bundles over a compact Riemann surface $X$ of genus $g$ with a divisor $D$ of $s$-many distinct points on $X$, such that $2g-2+s>0$, has at least $(2^s+1)2^{2g+s-1}+2^s(2g-3+s)$ connected components. \end{thm} For $n\ge 3$, the structure group of the $V$-bundle $W$ above is $\mathrm{O}(n,\mathbb{C})$ and the classification of $\mathrm{O}(n,\mathbb{C})$-bundles does not provide the extra invariant $\mathrm{pardeg}(L)$ in this case. Moreover, for every $n\ge 1$ in general, there are $2^{2g+s-1}$ connected components of the moduli space $\mathcal{M}_{par}^{max}\left( \mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ parameterized by the square roots of the line bundle $K(D)$ (the parabolic Teichm{\"u}ller components).
This provides the following: \begin{thm}\label{710} The moduli space $\mathcal{M}_{par}^{max}( \mathrm{Sp}\left( 2,\mathbb{R} \right))$ of maximal polystable parabolic $\mathrm{Sp}\left( 2,\mathbb{R} \right)$-Higgs bundles over a compact Riemann surface $X$ of genus $g$ with a divisor of $s$-many distinct points on $X$, such that $2g-2+s>0$, has at least $2^{2g+s-1}$ connected components, and the moduli space $\mathcal{M}_{par}^{max}\left( \mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ for $n \ge 3$ has at least $(2^s+1)2^{2g+s-1}$ connected components. \end{thm} \subsection{Topological Invariants of Parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs Bundles in $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$} In this subsection, we study the topological invariants of parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles in $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$, where $\alpha$ is a given parabolic structure. Note that the parabolic structure was not fixed for parabolic Higgs bundles in $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$ earlier. In this subsection, all parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles in $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ are assumed to have the same fixed parabolic structure $\alpha$. More precisely, they have the same filtration over each $x \in D$ with the same weight $\alpha(x)$, for every $x \in D$. The moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ is a subspace of $\mathcal{M}_{par}^{max}(\mathrm{Sp}(2n,\mathbb{R}))$, studied in \S 7.4. We shall again deal with the case of parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles.
With the same notation as in \S 7.4, a parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundle $(E=V \oplus V^{\vee},\Phi =\left( \begin{matrix} 0 & \beta \\ \gamma & 0 \\ \end{matrix} \right))$ with maximal degree corresponds to a triple $(W,c,\phi)$, where $c$ is an isomorphism and $\phi$ is a parabolic $K(D)^2$-twisted Higgs field. In \S 7.4, we studied the topological invariants of $(W,c,\phi)$, which give the topological invariants of the parabolic Higgs bundle $(E,\Phi)$. Now we use the same approach to study the topological invariants of parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles with a given parabolic structure $\alpha$. By considering the real structure of $W=W_{\mathbb{R}}\otimes \mathbb{C}$, the first and second $V$-cohomology groups $H^1_V(M,\mathbb{Z}_2)$, $H^2_V(M,\mathbb{Z}_2)$ are considered as our topological invariants. Similar to the classical case, there is a natural map $\mathrm{Pic}_V(M) \rightarrow H^2_V(M,\mathbb{Z}_2)$. By the calculation in \S 7.3, we know that $\mathrm{rk}\, H^2_V(M,\mathbb{Z}_2)=s$. Thus the total number of elements in $H^2_V(M,\mathbb{Z}_2)$ is $2^s$. In fact, there is a one-to-one correspondence between $H^2_V(M,\mathbb{Z}_2)$ and the torsion points in $\mathrm{Pic}_V(M)$. Note that, for $M_{V}$, the exponential exact sequence \begin{align*} 0 \rightarrow \mathbb{Z} \rightarrow O_{M_V} \rightarrow O^*_{M_V} \rightarrow 0, \end{align*} provides the long exact sequence \begin{align*} H^1(M_V, O_{M_V}) \rightarrow H^1(M_V, O^*_{M_V}) \xrightarrow{\varpi} H^2_V(M, \mathbb{Z}). \end{align*} The first cohomology $H^1(M_V, O^*_{M_V})$ is exactly the $V$-Picard group $\mathrm{Pic}_V(M)$. Therefore, there is a morphism \begin{align*} \varpi:\mathrm{Pic}_V(M) \rightarrow H^2_V(M, \mathbb{Z}). \end{align*} By taking $\mathbb{Z}_2$-coefficients, it is easy to see that the morphism $\varpi$ induces the isomorphism \begin{align*} \bigoplus_{i=1}^{s} \mathbb{Z}_2 \cong H^2_V(M, \mathbb{Z}_2).
\end{align*} This gives the one-to-one correspondence between $H^2_V(M,\mathbb{Z}_2)$ and the $2$-torsion points in $\mathrm{Pic}_V(M)$. In our case, this correspondence is precisely between the second $V$-cohomology $H^2_V(M,\mathbb{Z}_2)$ and parabolic structures of $W$. Thus fixing a parabolic structure $\alpha$ is equivalent to fixing an element in the second $V$-cohomology $H^2_V(M,\mathbb{Z}_2)$. Now suppose that $w_1$ is trivial. The parabolic bundle $W$ can be written as the sum of parabolic line bundles $L \oplus L^{\vee}$ such that $0 \leq \mathrm{pardeg} (L) \leq 2g-2+s$. As we discussed in {\rm Case (2a)} in \S 7.4, if we fix a parabolic structure $\alpha$ and assume that $\mathrm{pardeg} (L) \neq 2g-2+s$, the parabolic degree is the only topological invariant. In other words, the parabolic structure of $L$ is uniquely determined by that of $W$; in this paragraph we use the notation $\alpha$ also for this parabolic structure of $L$. If $\mathrm{pardeg} (L) =2g-2+s$, the topological invariants are described by the square roots of $K(D)^2$. In other words, $L$ is a square root of $K(D)^2$; however, its parabolic structure is not arbitrary. Only a line bundle $L$ with \emph{even parabolic structure} can be a square root of $K(D)^2$. We explain this property and define \emph{even} (resp. \emph{odd}) parabolic structures below. As we discussed in \S 7.2, there are $2^{2g+s-1}$ many square roots, which correspond to $\mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$. Recall that \begin{align*} \pi_V^1(M)=\{a_1,b_1,\ldots,a_g,b_g,\sigma_1,\ldots,\sigma_s \quad | \quad \sigma_1\cdots\sigma_s[a_1,b_1]\cdots[a_g,b_g]=1,\ \sigma_i^2=1,\ 1 \leq i \leq s\}, \end{align*} where the $\sigma_i$ describe the monodromy around the points $x_i$.
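From this presentation, the count of $2^{2g+s-1}$ square roots follows by a routine verification, which we record here: since $\mathbb{Z}_2$ is abelian, any $\rho \in \mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$ kills all commutators $[a_i,b_i]$, so the long relation forces $\rho(\sigma_1)\cdots\rho(\sigma_s)=1$, while the relations $\sigma_i^2=1$ impose no condition. Hence
\[\#\,\mathrm{Hom}\left( \pi _{V}^{1}(M),{{\mathbb{Z}}_{2}} \right)=\underbrace{{{2}^{2g}}}_{\rho(a_i),\,\rho(b_i)\ \text{arbitrary}}\cdot \underbrace{{{2}^{s-1}}}_{\rho(\sigma_1),\ldots,\rho(\sigma_{s-1})\ \text{arbitrary}}={{2}^{2g+s-1}}.\]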
By the correspondence between line $V$-bundles and parabolic line bundles, the monodromy around $x_i$ corresponds to the weight of the corresponding parabolic line bundle over the point $x_i$. Thus fixing a parabolic structure $\alpha$ is equivalent to fixing the monodromy around each $x_i$, $1 \leq i \leq s$. However, not every parabolic structure corresponds to a well-defined element in $\mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$. Indeed, for $\rho \in \mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$ the relation $\sigma_1\cdots\sigma_s[a_1,b_1]\cdots[a_g,b_g]=1$ implies that the number of indices $i$ with $\rho(\sigma_i)$ nontrivial is even. Equivalently, if the cardinality of the set \begin{align*} \{x \in D \;|\; \alpha(x)=\tfrac{1}{2}\} \end{align*} is even, then the parabolic structure corresponds to an element in $\mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2)$, and such a parabolic structure can occur for a square root of $K(D)^2$. Thus we say that the parabolic structure $\alpha$ is \emph{even} (resp. \emph{odd}) if the cardinality of the set \begin{align*} \{x \in D \;|\; \alpha(x)=\tfrac{1}{2}\} \end{align*} is even (resp. odd). Here is an easy example. Let $s=1$ and $D=\{x\}$. The parabolic structure $\alpha$ of a parabolic line bundle $L$ is uniquely determined by the weight $\alpha(x)$, which is either $0$ or $\frac{1}{2}$. In this case, the parabolic structure is even if and only if the weight $\alpha(x)=0$, and odd if and only if $\alpha(x)=\frac{1}{2}$. Based on the above discussion, the topological invariants of parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles with a fixed parabolic structure $\alpha$ are given as follows. \begin{enumerate} \item[(1)] If $w_1 \neq 0$, each nonzero element $w_1 \in H_V^1(M,\mathbb{Z}_2)$ is a topological invariant. The number of topological invariants in this case is $2^{2g+s-1}-1$. \item[(2)] Suppose that $w_1=0$. \begin{enumerate} \item[a.] If $\mathrm{pardeg}(L) \neq 2g-2+s$, the topological invariants are given by the parabolic degree.
The number of topological invariants in this case is $2g-2+s$. \item[b.] If $\mathrm{pardeg}(L)=2g-2+s$, the topological invariants are given by the square roots. Note that the parabolic degree of any square root is an integer. In other words, the number of points with nontrivial monodromy is even. \begin{enumerate} \item[$\mu.$] If the parabolic structure $\alpha$ is even, the number of topological invariants is $2^{2g}$. \item[$\nu.$] If the parabolic structure $\alpha$ is odd, there are no square roots with parabolic structure $\alpha$. \end{enumerate} \end{enumerate} \end{enumerate} The discussion above implies the following proposition. \begin{prop}\label{711} Let $X$ be a compact Riemann surface of genus $g$ and let $D$ be a reduced effective divisor of $s$-many points on $X$, such that $2g-2+s>0$. Consider the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ of maximal polystable parabolic $\mathrm{Sp}(2n,\mathbb{R})$-Higgs bundles, where $\alpha$ is a given parabolic structure, fixed for all Higgs bundles in the moduli space; that is, the parabolic Higgs bundles have the same filtration over each $x \in D$ with the same weight $\alpha(x)$. Then, \begin{enumerate} \item[$\mathrm{i}.$] If $\alpha$ is even, the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(4,\mathbb{R}))$ has at least $2^{2g+s-1}+(2g-3+s)+2^{2g}$ connected components. \item[$\mathrm{ii}.$] If $\alpha$ is odd, the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(4,\mathbb{R}))$ has at least $2^{2g+s-1}+(2g-3+s)$ connected components. \item[$\mathrm{iii}.$] If $\alpha$ is even, the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2,\mathbb{R}))$ has at least $2^{2g}$ connected components, and the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ has at least $2^{2g+s-1}+2^{2g}$ connected components.
\item[$\mathrm{iv}.$] If $\alpha$ is odd, there are no maximal polystable parabolic $\mathrm{Sp}(2,\mathbb{R})$-Higgs bundles with fixed parabolic structure $\alpha$, and the moduli space $\mathcal{M}_{par}^{max,\alpha}(\mathrm{Sp}(2n,\mathbb{R}))$ has at least $2^{2g+s-1}$ many connected components. \end{enumerate} \end{prop} \section{Other Lie Groups} The topological invariants and the component count method developed for the case $G=\mathrm{Sp}(2n,\mathbb{R})$ in the previous section suggest a way of counting the minimum number of connected components of moduli of maximal polystable parabolic $G$-Higgs bundles also in the other cases in which ${G}/{H}\;$ is a Hermitian symmetric space. We directly adapt the treatment followed by the authors in \cite{BrGPGHermitian} in the non-parabolic case. We will restrict to the cases when the bounded symmetric domain corresponding to the Hermitian symmetric space ${G}/{H}\;$ is of tube type. For the classical semisimple Lie groups this means we will be interested in the groups $\mathrm{SU}(n,n)$, $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$ for $n$ even and $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$, together with the exceptional group $E_{7}^{-25}$ (cf. \cite{BrGPGHermitian} for a more detailed description). In the sequel, $\left( X,D \right)$ will always denote a compact Riemann surface $X$ of genus $g$ together with a divisor $D:=\left\{ {{x}_{1}},\ldots ,{{x}_{s}} \right\}$ of $s$-many distinct points on $X$, assuming that $2g-2+s>0$. \subsection{$G=\mathrm{SU}(n,n)$} \begin{defn}\label{801} A \emph{parabolic $\mathrm{SU}(n,n)$-Higgs bundle} over $\left( X,D \right)$ is a parabolic Higgs bundle $\left( E,\Phi \right)$, such that \begin{enumerate} \item $E=V\oplus W$, where $V$ and $W$ are parabolic vector bundles of rank $n$ with $\operatorname{pardeg} V=-\operatorname{pardeg} W$.
\item $\Phi =\left( \begin{matrix} 0 & \beta \\ \gamma & 0 \\ \end{matrix} \right):E\to E\otimes K\left( D \right)$, where $\beta :W\to V\otimes K\left( D \right)$ and $\gamma :V\to W\otimes K\left( D \right)$ are parabolic morphisms. \end{enumerate} \end{defn} A parabolic Toledo invariant for a parabolic $\mathrm{SU}(n,n)$-Higgs bundle is defined by $\tau =\operatorname{pardeg} V=-\operatorname{pardeg} W$ and, similarly to the proof of Proposition 5.4, one can establish the Milnor-Wood bound \[\left| \tau \right|\le n\left( g-1+\frac{s}{2} \right).\] Maximality of the Toledo invariant provides that $\gamma :V\to W\otimes K\left( D \right)$ is a parabolic isomorphism. This, together with the condition $\det W={{\left( \det V \right)}^{-1}}$ for the corresponding $V$-bundles $V,W$, implies that \[{{\left( \det W \right)}^{2}}\simeq {{\left( {{\left( K\left( D \right) \right)}^{\vee }} \right)}^{n}}.\] Choosing a square root ${{L}_{0}}$ of $K\left( D \right)$ and defining $\tilde{W}=W\otimes {{L}_{0}}$, we have ${{\left( \det \tilde{W} \right)}^{2}}\simeq \mathsf{\mathcal{O}}$. Therefore, the topological invariant for $\det \tilde{W}$ is defined by the choices of a square root of the trivial line $V$-bundle, which can take ${{2}^{2g+s-1}}$ different values.
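For completeness, the isomorphism ${{\left( \det \tilde{W} \right)}^{2}}\simeq \mathsf{\mathcal{O}}$ is a one-line computation, using $\det \tilde{W}=\det W\otimes L_{0}^{n}$ for the rank-$n$ bundle $W$ together with $L_{0}^{2}\simeq K\left( D \right)$:
\[{{\left( \det \tilde{W} \right)}^{2}}={{\left( \det W \right)}^{2}}\otimes L_{0}^{2n}\simeq {{\left( {{\left( K\left( D \right) \right)}^{\vee }} \right)}^{n}}\otimes {{\left( K\left( D \right) \right)}^{n}}\simeq \mathsf{\mathcal{O}}.\]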
We deduce the following: \begin{thm}\label{802} The minimal number of connected components of the moduli spaces of parabolic Higgs bundles $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{SU}(n,n)\right)$, $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{SU}(n,n)\right)$ is given as follows \begin{center} \begin{tabular}{|c|c|} \hline Moduli Space $\mathcal{M}$ & $\#{{\pi }_{0}}\left( \mathsf{\mathcal{M}}\right)$ \\ \hline\hline $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{SU}(n,n)\right)$ & \small{${{2}^{2g+s-1}}$} \\ \hline $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{SU}(n,n)\right)$, $\alpha$ is even & \small{$2^{2g}$} \\ \hline $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{SU}(n,n)\right)$, $\alpha$ is odd & \small{$-$} \\ \hline \end{tabular} \end{center} \end{thm} \begin{rem}\label{803} The preceding analysis coincides with the analysis for $\mathrm{Sp}(2,\mathbb{R})\simeq \mathrm{SU}(1,1)$. Note, however, that for $n\ne 1$ there are no Teichm{\"u}ller components, since $\mathrm{SU}(n,n)$ is not a split real form. \end{rem} \subsection{$\mathrm{S}{{\mathrm{O}}^{*}}(2n)$, for $n$ even} \begin{defn}\label{804} A \emph{parabolic $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$-Higgs bundle} over $\left( X,D \right)$ for $n=2m$ is a parabolic Higgs bundle $\left( E,\Phi \right)$, such that \begin{enumerate} \item $E=V\oplus {{V}^{\vee }}$, where $V$ is a parabolic vector bundle of rank $n$, and \item $\Phi =\left( \begin{matrix} 0 & \beta \\ \gamma & 0 \\ \end{matrix} \right):E\to E\otimes K\left( D \right)$, where $\beta :{{V}^{\vee }}\to V\otimes K\left( D \right)$ and $\gamma :V\to {{V}^{\vee }}\otimes K\left( D \right)$ are skew-symmetric parabolic morphisms. 
\end{enumerate} \end{defn} A parabolic Toledo invariant for a parabolic $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$-Higgs bundle is defined by $\tau =\operatorname{pardeg} V$, for which: \[\left| \tau \right|\le n\left( g-1+\frac{s}{2} \right).\] Again, maximality of $\tau$ implies that $\gamma$ is an isomorphism, and for a fixed square root ${{L}_{0}}$ of $K\left( D \right)$ and $\tilde{W}={{V}^{\vee }}\otimes {{L}_{0}}$, the homomorphism \[\omega :=\gamma \otimes {{I}_{L_{0}^{\vee }}}:{{\tilde{W}}^{\vee }}\to \tilde{W}\] is a skew-symmetric isomorphism defining a symplectic structure on the $V$-bundle $\tilde{W}$; in other words, $\left( \tilde{W},\omega \right)$ is an $\mathrm{Sp}(2n,\mathbb{C})$-holomorphic $V$-bundle. Thus, the moduli space of maximal parabolic $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$-Higgs bundles is homeomorphic to the moduli space of principal ${{H}^{\mathbb{C}}}$-bundles for $H\simeq \mathrm{Sp}(n)$, and $\mathrm{Sp}(n)$ is simply connected. Note that the moduli space of symplectic vector bundles is connected \cite{Rama}. Thus if we fix a parabolic structure $\alpha$ of $E$, i.e. a parabolic structure of $\tilde{W}$, the moduli space of pairs $\left( \tilde{W},\omega \right)$, where $\tilde{W}$ is of parabolic structure $\alpha$, is connected. The above discussion provides the following theorem. \begin{thm}\label{805} The moduli space $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{S}{{\mathrm{O}}^{*}}(2n)\right)$ of maximal polystable parabolic $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$-Higgs bundles has at least $2^s$ many connected components. The moduli space $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{S}{{\mathrm{O}}^{*}}(2n)\right)$ has at least one connected component.
\end{thm} \subsection{$\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$} \begin{defn}\label{806} A \emph{parabolic $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$-Higgs bundle} over $\left( X,D \right)$ is a parabolic Higgs bundle $\left( E,\Phi \right)$, such that \begin{enumerate} \item $E=V\oplus W$, where $V=L\oplus {{L}^{\vee }}$ for a parabolic line bundle $L$ and $W$ corresponds to a rank $n$ orthogonal $V$-bundle. \item $\Phi =\left( \begin{matrix} 0 & 0 & \beta \\ 0 & 0 & \gamma \\ -{{\gamma }^{t}} & -{{\beta }^{t}} & 0 \\ \end{matrix} \right):E\to E\otimes K\left( D \right)$, where $\beta :W\to L\otimes K\left( D \right)$ and $\gamma :W\to {{L}^{\vee }}\otimes K\left( D \right)$ are parabolic morphisms. \end{enumerate} \end{defn} A parabolic Toledo invariant for a parabolic $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$-Higgs bundle is defined by $\tau =\operatorname{pardeg} L$ and a Milnor-Wood bound is described by \[\left| \tau \right|\le 2g-2+s.\] Maximality of the Toledo invariant provides that $\gamma :W\to {{L}^{\vee }}\otimes K\left( D \right)$ has rank one at every point and hence is surjective. Define $F=\ker \gamma $ and consider the short exact sequence \[0\to F\to W\to {{L}^{\vee }}\otimes K\left( D \right)\to 0.\] Then the sequence splits and $F$ inherits an $\mathrm{O}\left( n-1,\mathbb{C} \right)$-structure. Consider the line bundle ${{L}_{0}}:={{L}^{\vee }}\otimes K\left( D \right)$. From the exact sequence we deduce that ${{L}_{0}}\otimes \det F\simeq \mathsf{\mathcal{O}}$; since $F$ is orthogonal, ${{\left( \det F \right)}^{2}}\simeq \mathsf{\mathcal{O}}$, hence $L_{0}^{2}\simeq \mathsf{\mathcal{O}}$. This, in turn, implies that ${{L}^{2}}\simeq {{\left( K\left( D \right) \right)}^{2}}$. From this point on, we further distinguish two cases: \paragraph{\textit{Case 1: $n\ge 4$.}} In this case, the only topological invariants we obtain are the Stiefel-Whitney classes for the $\mathrm{O}\left( n-1,\mathbb{C} \right)$-bundle.
This provides a minimum of ${{2}^{s}}\cdot {{2}^{2g+s-1}}$ connected components for $\mathsf{\mathcal{M}}_{par}^{max }\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,n) \right)$. \paragraph{\textit{Case 2: $n=3$.}} In this case, $F$ is an $\mathrm{O}\left( 2,\mathbb{C} \right)$-bundle and the treatment is similar to the $\mathrm{Sp}(4,\mathbb{R})$-case. There is a distinguished component for every value of $\left( w_1,w_2 \right)$, for $w_1 \ne 0$, where $w_1 \in H^1_V(M,\mathbb{Z}_2)$ and $w_2 \in H^2_V(M,\mathbb{Z}_2)$; this provides at least ${{2}^{s}}\left( {{2}^{2g+s-1}}-1 \right)$ connected components. For $w_1=0$, there is a decomposition $F=M\oplus {{M}^{-1}}$ for a line $V$-bundle $M$. As in the case of $\mathrm{Sp}(4,\mathbb{R})$, one can show that there is a non-trivial holomorphic map $M\to {{\left( K\left( D \right) \right)}^{2}}$, which provides that $0\le \mathrm{pardeg} (M)\le 4g-4+2s$. For each value of the degree $\mathrm{pardeg} (M)<4g-4+2s$, there is a distinguished connected component for each fixed parabolic structure $\alpha$. Note here that in contrast to the $\mathrm{Sp}(4,\mathbb{R})$-case, when $\mathrm{pardeg} (M)=4g-4+2s$, there is an isomorphism $M\simeq {{\left( K\left( D \right) \right)}^{2}}$. Thus, there are no further invariants coming from this case. 
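Collecting the two sub-cases for $n=3$, the lower bound obtained above for the number of connected components of $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,3) \right)$ can be summarized as
\[\underbrace{{{2}^{s}}\left( {{2}^{2g+s-1}}-1 \right)}_{w_1\ne 0,\ w_2\ \text{arbitrary}}+\underbrace{{{2}^{s}}\left( 4g-3+2s \right)}_{w_1=0,\ \mathrm{pardeg}(M)\in \{0,\ldots,4g-4+2s\}},\]
where the second term counts one component for each of the $2^s$ parabolic structures and each of the $4g-3+2s$ admissible values of $\mathrm{pardeg}(M)$.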
We conclude with the following: \begin{thm}\label{807} The minimal number of connected components of the moduli spaces of maximal polystable parabolic Higgs bundles $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,n)\right)$, $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,n)\right)$ is given as follows: \begin{table}[htb] \begin{tabular}{|c|c|} \hline Moduli Space $\mathcal{M}$ & $\#{{\pi }_{0}}\left( \mathsf{\mathcal{M}}\right)$ \\ \hline\hline $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,3)\right)$ & \small{${{2}^{s}}\left( {{2}^{2g+s-1}}-1 \right)+2^s(4g-3+2s)$} \\ \hline $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,3)\right)$ & \small{$2^{2g+s-1}+(4g-3+2s)$} \\ \hline $\mathsf{\mathcal{M}}_{par}^{max}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,n)\right)$, $n \geq 4$ & \small{${{2}^{2g+2s-1}}$} \\ \hline $\mathsf{\mathcal{M}}_{par}^{max,\alpha}\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,n)\right)$, $n \geq 4$ & \small{$2^{2g+s-1}$} \\ \hline \end{tabular} \end{table} \end{thm} \subsection{$\bf{E}_{7}^{-25}$} The general Milnor-Wood type inequality established in \cite{BiGaRu} provides a description of maximal $G$-Higgs bundles also in the exceptional tube case when $G=E_{7}^{-25}$. In this case, a maximal compact subgroup is $H=E_{6}^{-78}{{\times }_{{{\mathbb{Z}}_{3}}}}\mathrm{U}\left( 1 \right)$ and $\mathrm{rk}\left( {G}/{H}\; \right)=3$. For the Toledo invariant $\tau =\tau \left( E \right)$, as defined in full generality in the parabolic case in \cite{BiGaRi}, the Milnor-Wood inequality is given by \[\left| \tau \right|\le 3\left( g-1+\frac{s}{2} \right).\] In the maximal case for $\tau $, following the non-parabolic treatment of \cite{BiGaRu}, we get that a maximal parabolic $E_{7}^{-25}$-Higgs bundle corresponds to an ${H}'={{F}_{4}}\times {{\mathbb{Z}}_{2}}$-holomorphic $V$-bundle $\tilde{W}$.
The group ${{{H}'}^{\mathbb{C}}}$ is not connected and the short exact sequence \[1\to {{{H}'}_{0}}^{\mathbb{C}}\to {{{H}'}^{\mathbb{C}}}\to {{\pi }_{0}}\left( {{{{H}'}}^{\mathbb{C}}} \right)\cong {{\mathbb{Z}}_{2}}\to 1\] provides the homomorphism in the induced long exact sequence in $V$-cohomology \[H_{V}^{1}\left( M,{{{{H}'}}^{\mathbb{C}}} \right)\to H_{V}^{1}\left( M,{{\mathbb{Z}}_{2}} \right)\cong {{\left( {{\mathbb{Z}}_{2}} \right)}^{2g+s-1}}.\] The associated invariants in $H_{V}^{1}\left( M,{{\mathbb{Z}}_{2}} \right)$ provide the following: \begin{thm}\label{808} The minimal number of connected components of the moduli space $\mathsf{\mathcal{M}}_{par}^{max }\left( E_{7}^{-25}\right)$ is $2^{2g+s-1}$. \end{thm} \begin{rem} Theorem \ref{808} gives the least number of components for the corresponding moduli space for the group $E_{7}^{-25}$. We leave open the possibility of having more topological invariants coming from the second cohomology group $H^2_V(M,\mathbb{Z}_2)$. \end{rem} Summarizing the results of the theorems in the previous sections on the minimum number of connected components of the moduli spaces of polystable maximal parabolic $G$-Higgs bundles for the classical Hermitian symmetric Lie groups $G$, one has Tables 1, 2 and 3 included at the end of the main body of this article. \section{Two Special Cases} In this section, we discuss how one can obtain the classical component counts in \cite{Stru}, \cite{BrGPGHermitian}, \cite{GaGMsymplectic}, \cite{Gothen} and \cite{Hit92} as special cases of the theorems in \S 7.4 and \S 8. \subsection{Punctured Riemann Surface} In \cite{Stru}, T. Strubel defined Fenchel-Nielsen coordinates on the moduli space of maximal representations of the fundamental group of a topological surface ${{\Sigma }_{g,m}}$ of genus $g$ and $m\ge 1$ boundary components into $\mathrm{Sp}\left( 2n,\mathbb{R} \right)$.
Using these coordinates and counting parameters for gluing pairs of pants to obtain a surface with $m$-boundary components, he showed that the moduli space ${{\mathsf{\mathcal{R}}}^{\max }}\left( {{\Sigma }_{g,m}},\mathrm{Sp}\left( 2n,\mathbb{R} \right) \right)$ has exactly ${{2}^{2g+m-1}}$ connected components for every $n\ge 1$. Note that for such representations there is no assumption on the monodromy around the boundary components. From our point of view, let $X$ be a compact Riemann surface of genus $g$ and $\left\{ {{p}_{1}},\ldots ,{{p}_{s}} \right\}$ a collection of $s$-many distinct points on $X$. We may use the method from \S 7 to compute the number of topological invariants; in this case, however, we do not have to construct the $V$-manifold: let $M=U_1=X \backslash \{p_1,\ldots,p_s\}$ be the punctured Riemann surface, with no group action on a $\Gamma$-equivariant bundle, in other words, with no $V$-bundle construction. The calculations from \S 7.3 now adapt to give the following \begin{align*} \mathrm{rk}(H^0(M))=1,\\ \mathrm{rk}(H^1(M))=2g+s-1,\\ \mathrm{rk}(H^2(M))=0. \end{align*} Moreover, since $H^2(M)$ is trivial, the number of topological invariants of maximal parabolic $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles over the punctured Riemann surface $M$ is determined by the first cohomology group $H^1(M)$ alone, thus is exactly $2^{2g+s-1}$. This is Case (1) as we discussed in \S 7.4. We further notice that when the class $u \in H^1(M)$ is trivial, one can decompose $W=L \oplus L^{\vee}$ as the direct sum of line bundles, and this line bundle $L$ lives over a punctured Riemann surface, which is an affine curve; every holomorphic line bundle over an affine curve is trivial. As a result, there are no extra topological invariants coming from Cases 2(a) and 2(b); therefore the minimum number of connected components over $M$ is $2^{2g+s-1}$.
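The ranks used above follow from the standard homotopy type of a punctured surface: removing $s\ge 1$ points from a compact surface of genus $g$ leaves a space homotopy equivalent to a wedge of circles,
\[X \setminus \{p_1,\ldots,p_s\} \simeq \bigvee\nolimits_{2g+s-1} S^1 \quad \Longrightarrow \quad \mathrm{rk}(H^0(M))=1,\quad \mathrm{rk}(H^1(M))=2g+s-1,\quad H^2(M)=0.\]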
\begin{rem}\label{901} The same argument provides the number $2^{2g+s-1}$ also in the cases for $\mathrm{Sp}(2,\mathbb{R})$ and $\mathrm{Sp}(2n,\mathbb{R})$ for $n \ge 3$ from Theorem \ref{710}; this gives an alternative explanation of T. Strubel's main result from \cite{Stru}. \end{rem} \subsection{The case when $s=1$} The number of connected components of moduli of maximal $G$-Higgs bundles (non-parabolic) for the classical Hermitian symmetric spaces ${G}/{H}\;$ has been determined in \cite{BrGPGHermitian} and the references therein. For the reader's convenience the basic results from that article are included in Table 9.2.1. \begin{table}[htb] \caption*{Table 9.2.1. Number of connected components of the non-parabolic moduli space $\mathsf{\mathcal{M}}^{max }\left( G \right)$.} \begin{tabularx}{\textwidth}{XXX} Lie group $G$ & $\#{{\pi }_{0}}\left( \mathsf{\mathcal{M}}^{max }\left( G \right) \right)$ & Teichm{\"u}ller components \\ \hline\hline $\mathrm{Sp}(2,\mathbb{R})=\mathrm{SL}(2,\mathbb{R})$ & \small{${{2}^{2g}}$} & \small{${{2}^{2g}}$} \\ $\mathrm{Sp}(4,\mathbb{R})$ & \small{$3\cdot {{2}^{2g}}+4g-4$} & \small{${{2}^{2g}}$} \\ $\mathrm{Sp}(2n,\mathbb{R})$, for $n\ge 3$ & \small{$3\cdot{{2}^{2g}}$} & \small{${{2}^{2g}}$} \\ $\mathrm{SU}(n,n)$ & \small{${{2}^{2g}}$} & - (\small{${{2}^{2g}}$ if $n=1$}) \\ $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$, for $n$: even & \small{1} & - \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$ & \small{$ {{2}^{2g+1}}+8g-4$} & \small{1} \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$, for $n\ge 4$ & \small{${{2}^{2g+1}}$} & - \\ $E_{7}^{-25}$ & \small{${{2}^{2g}}$} & - \\ \hline\hline \end{tabularx} \end{table} For a line $V$-bundle $\tilde{L}$, consider a local chart ${U}/{{{\mathbb{Z}}_{2}}}\;$ around a single point $p\in D$.
The cohomology group for the $V$-manifold $M$ is described by \begin{align*} H^1_V(M, \mathbb{Z}_2) = \mathrm{Hom}(\pi_V^1(M),\mathbb{Z}_2), \end{align*} where \begin{align*} \pi_V^1(M)=\{a_1,b_1,\ldots,a_g,b_g,\sigma \quad | \quad \sigma[a_1,b_1]\ldots[a_g,b_g]=1,\ \sigma^2=1\}. \end{align*} For a well-defined morphism $\rho \in \mathrm{Hom}\left( \pi _{V}^{1}\left( M \right),{{\mathbb{Z}}_{2}} \right)$, the relation in the fundamental group provides that $\rho \left( \sigma \left[ {{a}_{1}},{{b}_{1}} \right]\ldots \left[ {{a}_{g}},{{b}_{g}} \right] \right)=1$. But $\rho \left( \left[ {{a}_{i}},{{b}_{i}} \right] \right)=1$ for $1\le i\le g$, since $\operatorname{Im}\rho $ lies in the abelian group $\mathbb{Z}_2$. Thus $\rho \left( \sigma \right)=1$, that is, in terms of the construction included in \S 7.2 the $\mathbb{Z}_2$-isotropy around the point $p$ is \emph{trivial}. Hence, $\tilde{L}\to M$ is a holomorphic line bundle over the Riemann surface (non-parabolic); denote the latter by $L\to X$, and there are $2^{2g}$ many non-isomorphic square roots of the canonical line bundle, when $s=1$. This implies the non-parabolic component count for $G=\mathrm{Sp}(2n,\mathbb{R})$ when $n\ne 2$, $\mathrm{SU}(n,n)$, $E_{7}^{-25}$, $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$ when $n$ is even, and $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$ when $n\ge 4$. \textit{The component count for $\mathrm{Sp}(4,\mathbb{R})$.} The topological invariants that distinguish the connected components of $\mathsf{\mathcal{M}}_{par}^{\max }\left( \mathrm{Sp}(4,\mathbb{R}) \right)$ are $w_1 \in H^1_V(M,\mathbb{Z}_2)$, $w_2 \in H^2_V(M,\mathbb{Z}_2)$, the parabolic structure of a parabolic line bundle $L$ (equivalently a line bundle over $M$) and its parabolic degree $\mathrm{pardeg}(L)$ such that \[0 \le \mathrm{pardeg}(L) \le 2g-2+s\] as we discussed in \S 7.4.
Their values distinguish connected components as follows: \[\underbrace{{{2}^{s}}\left( {{2}^{2g+s-1}}-1 \right)}_{{w_1}\ne 0,{w_2}}+ \underbrace{2^s(2g-2+s)}_{\mathrm{pardeg}(L)=0,1,\ldots ,2g-3+s}+\underbrace{{{2}^{2g+s-1}}}_{\mathrm{pardeg}(L)=2g-2+s}.\] Now, as we have already checked, when $s=1$, $H_{V}^{1}\left( M,{{\mathbb{Z}}_{2}} \right)\simeq \mathbb{Z}_{2}^{2g}$ and this space parameterizes non-isomorphic holomorphic line bundles $L\to M$, where $M$ is a $V$-manifold as considered in \S 7. Thus, we get accordingly the invariants: \[\underbrace{2\left( {{2}^{2g}}-1 \right)}_{{w_1}\ne 0,{w_2}}+\underbrace{2(2g-2+1)}_{\mathrm{pardeg}(L)}+\underbrace{ {{2}^{2g}} }_{\mathrm{pardeg}(L)=2g-1}.\] Now we consider the non-parabolic case of $K(D)$-twisted $\mathrm{Sp}(4,\mathbb{R})$-Higgs bundles, where $D$ contains a single point. The component count is given as follows \cite{BarSch2}: \begin{align*} 2(2^{2g}-1)+(2g-2+1)+2^{2g}. \end{align*} Compared to the parabolic case, the difference comes from the middle part $(2g-2+1)$. The reason is that we have to consider the parabolic structure of the line bundle $L$ as we discussed in \S 7.4. If we forget the parabolic structure (or consider the parabolic structure with weight $0$), we recover the classical $K(D)$-twisted count: $3\cdot 2^{2g}+2g-3$. \textit{The component count for $\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$.} Quite similarly, the topological invariants that distinguish the connected components of $\mathsf{\mathcal{M}}_{par}^{max }\left( \mathrm{S}{{\mathrm{O}}_{0}}(2,3) \right)$ are $w_1,w_2$, the parabolic structure of a line bundle $M$ and its parabolic degree $\mathrm{pardeg}(M)$. As we discussed in \S 8.3, if $w_1=0$, there is a decomposition of the rank $3$ bundle $W=M \oplus \mathcal{O} \oplus M^{-1}$, where $M$ is a parabolic line bundle with $0 \le\mathrm{pardeg}(M) \le 4g-4+2s$.
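As an arithmetic cross-check, the $s=1$ specialization above agrees with the general parabolic count evaluated at $s=1$:
\[2\left( {{2}^{2g}}-1 \right)+2(2g-2+1)+{{2}^{2g}}=3\cdot {{2}^{2g}}+4g-4={{\left[ \left( {{2}^{s}}+1 \right){{2}^{2g+s-1}}+{{2}^{s}}(2g-3+s) \right]}_{s=1}}.\]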
Connected components are distinguished as follows: \[\underbrace{{{2}^{s}}\left( {{2}^{2g+s-1}}-1 \right)}_{{w_1}\ne 0,{w_2}}+\underbrace{2^s(4g-3+2s)}_{\mathrm{pardeg}(M)}\] When $s=1$, for the parabolic line bundle $M$, we have \begin{align*} \underbrace{2 \left( {{2}^{2g}}-1 \right)}_{{w_1}\ne 0,{w_2}}+\underbrace{2(4g-3+2)}_{\mathrm{pardeg}(M)}, \end{align*} where $\mathrm{pardeg}(M)$ has $4g-3+2$ many choices and $2$ is the number of choices of the parabolic structures. Now we consider the non-parabolic case of $K(D)$-twisted $\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$-Higgs bundles, where $D$ contains a single point. The component count is similar to the $\mathrm{Sp}(4,\mathbb{R})$-case, and we have \begin{align*} 2(2^{2g}-1)+(4g-3+2). \end{align*} Compared to the parabolic case under the assumption $s=1$, the difference comes from the part $(4g-3+2)$. The reason is the same as we discussed for the case of $\mathrm{Sp}(4,\mathbb{R})$. More precisely, we have to consider the parabolic structure of the line bundle $M$. If we forget the parabolic structure, we recover the non-parabolic $K(D)$-twisted count: $2\cdot 2^{2g}+4g-3$. \begin{rem}\label{902} The description of how the component count specializes to the non-parabolic case when $s=1$ for $G=\mathrm{Sp}(4,\mathbb{R})$ and $G=\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$ points out an important difference between parabolic and non-parabolic bundles. As we have seen already, all degree zero line bundles on an orbifold surface can be naturally lifted to a compact Riemann surface. The extra $2^s$-many choices of the invariants of the line $V$-bundle $L$ for $G=\mathrm{Sp}(4,\mathbb{R})$ come from tensoring with the square roots of $\mathsf{\mathcal{O}}\left( {{p}_{i}} \right)$, where $p_{i}$ are the points in the divisor. \end{rem} \begin{rem}\label{903} When $s=1$, we note that there is an element $\sigma \in \pi_V^1(M)$.
By the discussion above, it seems that the calculation of connected components does not depend on the monodromy action, which means that the connected components should be the same for all $p \geq 2$ such that $\sigma^p=1$. We remind the reader that $p$ also corresponds to the denominator of the weight in the parabolic structure. If we change the monodromy action with $\sigma^p=1$, we use the following local charts $U_2 = \prod_{i=1}^s D / \mathbb{Z}_p$ and $V_2= \prod_{i=1}^s D \times_{\mathbb{Z}_p}E \mathbb{Z}_p$ to construct $M_V$. Here we follow the notation from \S 7.3. We use the Leray spectral sequence to calculate the cohomology of $V_2$, \begin{align*} H^{*}(B \mathbb{Z}_p, H^{*}(D)) \Rightarrow H^{*}(V_2), \end{align*} where $B \mathbb{Z}_p$ is the classifying space of $\mathbb{Z}_{p}$. The $\mathbb{Z}$-coefficient cohomology $H^i(B\mathbb{Z}_p,\mathbb{Z})$ is well-known (see for instance \cite{Hat}): \begin{equation*} H^i(B\mathbb{Z}_p,\mathbb{Z})= \begin{cases} \mathbb{Z}, & \text{for } i=0,\\ \mathbb{Z} / p\mathbb{Z}, & \text{for } 2 \mid i,\ i>0,\\ 0, & \text{otherwise}. \end{cases} \end{equation*} Since we compute $H^{*}(V_2)$ with $\mathbb{Z}_2$-coefficients, we consider the following two cases: \begin{enumerate} \item[(1)] When $p$ is odd, $H^1 (V_2)= H^2(V_2)=0$. In this case, $\mathrm{rk}(H^1(M_V))=2g+s-1$ and $\mathrm{rk}(H^2(M_V))=s$, which is the same as what we calculated in \S 7. \item[(2)] When $p$ is even, $H^1 (V_2)=0$ while $H^2(V_2) \neq 0$. Note that in this case, $\mathrm{rk}(H^1(M_V))$ and $\mathrm{rk}(H^2(M_V))$ do not coincide with our calculation in \S 7. \end{enumerate} In conclusion, when the monodromy group is the cyclic group $\mathbb{Z}_p$ with $p$ an odd integer, the number of connected components coincides with the $\mathbb{Z}_2$ case; if $p$ is an even number, it does not.
\end{rem} \newpage \begin{table}[htb] \caption{Minimum number of connected components of $\mathsf{\mathcal{M}}_{par}^{max }\left( G \right)$.} \begin{tabularx}{\textwidth}{XXX} Lie group $G$ & $\#{{\pi }_{0}}\left( \mathsf{\mathcal{M}}_{par}^{max }\left( G \right) \right)$ & Teichm{\"u}ller components \\ \hline\hline $\mathrm{Sp}(2,\mathbb{R})=\mathrm{SL}(2,\mathbb{R})$ & \small{${{2}^{2g+s-1}}$} & \small{${{2}^{2g+s-1}}$} \\ $\mathrm{Sp}(4,\mathbb{R})$ & \small{$\left( {{2}^{s}}+1 \right){{2}^{2g+s-1}}+2^s(2g-3+s)$} & \small{${{2}^{2g+s-1}}$} \\ $\mathrm{Sp}(2n,\mathbb{R})$, for $n\ge 3$ & \small{$\left( {{2}^{s}}+1 \right){{2}^{2g+s-1}}$} & \small{${{2}^{2g+s-1}}$} \\ $\mathrm{SU}(n,n)$ & \small{${{2}^{2g+s-1}}$} & - (\small{${{2}^{2g+s-1}}$ if $n=1$}) \\ $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$, for $n$: even & \small{$2^s$} & - \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$ & \small{${{2}^{s}}\left( {{2}^{2g+s-1}}-1 \right)+2^s(4g-3+2s)$} & \small{1} \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$, for $n\ge 4$ & \small{${{2}^{2g+2s-1}}$} & - \\ $E_{7}^{-25}$ & \small{${{2}^{2g+s-1}}$} & - \\ \hline\hline \end{tabularx} \end{table} \vspace{10mm} \begin{table}[htb] \caption{Minimum number of connected components of $\mathsf{\mathcal{M}}_{par}^{max,\alpha }\left( G \right)$ with even $\alpha$.} \begin{tabularx}{\textwidth}{XXX} Lie group $G$ & $\#{{\pi }_{0}}\left( \mathsf{\mathcal{M}}_{par}^{max,\alpha }\left( G \right) \right)$ & Teichm{\"u}ller components \\ \hline\hline $\mathrm{Sp}(2,\mathbb{R})=\mathrm{SL}(2,\mathbb{R})$ & \small{${{2}^{2g}}$} & \small{${{2}^{2g}}$} \\ $\mathrm{Sp}(4,\mathbb{R})$ & \small{${2}^{2g+s-1}+(2g-3+s)+2^{2g}$} & \small{${{2}^{2g}}$} \\ $\mathrm{Sp}(2n,\mathbb{R})$, for $n\ge 3$ & \small{${2}^{2g+s-1}+2^{2g}$} & \small{${{2}^{2g}}$} \\ $\mathrm{SU}(n,n)$ & \small{${{2}^{2g}}$} & - (\small{${{2}^{2g}}$ if $n=1$}) \\ $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$, for $n$: even & \small{1} & - \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$ & \small{${{2}^{2g+s-1}} 
+(4g-3+2s)$} & \small{1} \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$, for $n\ge 4$ & \small{${{2}^{2g+s-1}}$} & - \\ \hline\hline \end{tabularx} \end{table} \vspace{10mm} \begin{table}[htb] \caption{Minimum number of connected components of $\mathsf{\mathcal{M}}_{par}^{max,\alpha }\left( G \right)$ with odd $\alpha$.} \begin{tabularx}{\textwidth}{XXX} Lie group $G$ & $\#{{\pi }_{0}}\left( \mathsf{\mathcal{M}}_{par}^{max,\alpha }\left( G \right) \right)$ & Teichm{\"u}ller components \\ \hline\hline $\mathrm{Sp}(2,\mathbb{R})=\mathrm{SL}(2,\mathbb{R})$ & \small{-} & - \\ $\mathrm{Sp}(4,\mathbb{R})$ & \small{${2}^{2g+s-1}+(2g-3+s)$} & - \\ $\mathrm{Sp}(2n,\mathbb{R})$, for $n\ge 3$ & \small{${2}^{2g+s-1}$} & - \\ $\mathrm{SU}(n,n)$ & - & - \\ $\mathrm{S}{{\mathrm{O}}^{*}}(2n)$, for $n$: even & \small{1} & - \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,3)$ & \small{${{2}^{2g+s-1}} +(4g-3+2s)$} & \small{1} \\ $\mathrm{S}{{\mathrm{O}}_{0}}(2,n)$, for $n\ge 4$ & \small{${{2}^{2g+s-1}}$} & - \\ \hline\hline \end{tabularx} \end{table} \newpage
\section{Introduction} The fourth paradigm of data-intensive science, coined by Jim Gray \cite{4P}, rapidly became a major conceptual approach for multiple application domains that encompass and generate large-scale scientific drivers such as fusion reactors and light source facilities \cite{DOE1}\cite{DOE2}. Taking root in data management technologies, the paradigm emphasized and generalized a data-driven knowledge discovery direction that complemented the computational branch of scientific disciplines. The success of data-intensive projects subsequently triggered an explosion of machine learning approaches \cite{LeCun}\cite{Mnih}\cite{Hassabis} addressing a wide range of industrial and scientific applications, such as computer vision, self-driving cars, and brain modelling, to name a few. The next generation of artificial intelligence systems clearly represents a paradigm shift from data processing pipelines towards knowledge-centric applications. As shown in Fig. 1, these systems broke the boundaries of the computational and data-intensive paradigms and began to form a new ecosystem by merging and extending existing technologies. Identifying this trend as the fifth paradigm aims to infer common aspects among diverse cognitive computing applications and to steer the development of complementary solutions for addressing emerging and future challenges. The initial landscape of data-intensive technologies was modeled after Google's Big Data stack over 15 years ago. It represented a consolidated scalable platform bringing together database and computational technologies. The open-source version of this platform was further advanced with the Spark framework, resolving the immediate requirements of numerous data-intensive projects. The new model of the Spark computing platform significantly extended the scope of data-intensive applications, ranging from SQL queries to machine learning to graph processing. 
According to the data-information-knowledge-wisdom model \cite{Ackoff}, these projects eventually elevated data-information pipelines to practical applications of knowledge development. Cognitive systems of the fifth paradigm take the relay baton from data-driven processing pipelines and generalize their scope with knowledge acquisition processes carried out by rational agents through the exploration of their environments. \begin{figure}[h] \centering \includegraphics[height=5.5cm]{p5} \caption{The Fifth Paradigm. The diagram shows the conceptual structure of the new paradigm integrating resources from both its ancestors, computational and data-intensive sciences, for building cognitive computing applications.} \end{figure} In contrast with the embarrassingly parallel MapReduce pipelines, machine learning applications rely on communication among distributed workers for synchronizing their internal representations. This mismatch led to the development of new distributed processing frameworks, such as GraphLab \cite{GraphLab}, CNTK \cite{CNTK}, TensorFlow \cite{TensorFlow}, and Gorila \cite{Gorila}. As with any evolutionary spiral, the growing variety of approaches eventually raised the question of their consolidation. Similar problems are faced by researchers at large-scale scientific experimental facilities and computational projects. Prior to the Big Data era, most scientific algorithms were built within the third, computational paradigm based on HPC clusters and the Message Passing Interface (MPI) communication model. To address the immediate requirements of emerging applications, the fourth, data-intensive paradigm was developed while only minimally intersecting with the HPC ecosystem, as shown in Fig. 1. The strategic transition from data-intensive science towards the fifth paradigm of composite cognitive computing applications is a long-term journey with many unknowns. 
This paper addresses the existing mismatch between Big Data and HPC applications by presenting the Spark-MPI integrated platform aiming to bring together Big Data analytics, HPC scientific algorithms and deep learning approaches for tackling new frontiers of data-driven discovery applications. The remainder of the paper is organized as follows. Section 2 provides a brief overview of the Spark data-intensive and MPI high-performance platforms and outlines the Spark-MPI integrated approach based on the MPI Process Management Interface (PMI). Section 3 and Section 4 further elaborate the approach within the context of the hybrid MPI/GPU ptychographic image reconstruction pipelines and distributed deep learning applications. Section 5 provides insights into future directions using the PMI-Exascale library. Finally, Section 6 and Section 7 survey related work and conclude with a summary. \section{Spark-MPI Platform} \begin{figure*} \centering \includegraphics[width=16cm,height=8cm]{spark-mpi-platform} \caption{Conceptual architecture overview of the Spark-MPI platform} \end{figure*} The integration of data-intensive and compute-intensive ecosystems was addressed by several projects. For example, Geoffrey Fox and colleagues provided a comprehensive overview of the Big Data and HPC domains. Their application analysis \cite{Fox} was based on several surveys, such as the NIST Big Data Public Working Group and NRC reports, including multiple application areas: energy, astronomy and physics, climate, and others. Overall, from a conceptual perspective, the Spark-MPI platform can be considered as the Spark-based version of the Exascale Fast Forward (EFF) I/O Stack three-tier architecture \cite{EFF}. 
The Spark-MPI platform furthermore focuses on the development of a common integrated approach addressing a wide spectrum of applications, including large-scale computational studies, data-information-knowledge discovery pipelines for experimental facilities, and reinforcement learning systems. Fig. 2 shows a general overview of the Spark-MPI integrated environment. It is based on the Spark Resilient Distributed Dataset (RDD) middleware \cite{RDD}, which decouples various data sources from high-level processing algorithms. RDDs are distributed fault-tolerant collections of in-memory objects that can be processed in parallel using a rich set of operations: transformations and actions. The top layer is represented by an open collection of high-level components addressing different types of data models and processing algorithms, including machine learning and graph processing. The interfaces between RDDs and distributed data sources are provided by Connectors that are already implemented for major databases and file systems. In addition, Spark is designed to cover a wide range of workloads that previously required separate distributed systems, encompassing batch applications, iterative algorithms, interactive queries, and streaming. This combination forms a powerful processing ecosystem for building data analysis pipelines and supporting multiple higher-level components specialized for various workloads. The Spark Streaming module \cite{SparkStreaming} further extends the RDD pluggable mechanism with the Receiver framework, enabling ingestion of real-time data into RDDs from streaming sources such as Kafka and ZeroMQ. Adherence to the RDD model automatically provides Spark streaming applications with the same functional interface and strong fault-tolerance guarantees, including exactly-once semantics. 
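As a toy illustration of this programming model, the following plain-Python sketch mimics partitioned transformations and a reduce action; it is our own stand-in for PySpark, where the same pipeline would read $sc.parallelize(...).map(...).reduce(...)$:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def parallelize(data, num_partitions):
    """Round-robin split of a collection into partitions,
    in the spirit of SparkContext.parallelize."""
    return [data[i::num_partitions] for i in range(num_partitions)]

def map_partitions(partitions, fn):
    """Apply fn to every element of each partition in parallel
    (an embarrassingly parallel 'map' transformation)."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda part: [fn(x) for x in part], partitions))

def collect_reduce(partitions, op, zero):
    """Reduce within each partition, then combine the partial
    results at the driver (a 'reduce' action)."""
    partials = [reduce(op, part, zero) for part in partitions]
    return reduce(op, partials, zero)

rdd = parallelize(list(range(10)), num_partitions=4)
squared = map_partitions(rdd, lambda x: x * x)
total = collect_reduce(squared, lambda a, b: a + b, 0)  # sum of squares of 0..9
```

Note that nothing in this model requires workers to talk to each other: all cross-partition combination happens at the driver, which is exactly the limitation the MPI extension below the driver-worker scheme addresses.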
The combination of a data-intensive processing framework with a consolidated collection of diverse data analysis algorithms offered by Spark represents a strong asset for its application in large-scale scientific projects across different phases of the data-information-knowledge discovery path. In contrast with existing data management and analytics systems, the Spark in-situ approach does not require the transformation of data into different formats and provides a generic interface between heterogeneous algorithms and heterogeneous data sources. The current version of the Spark programming model, however, is limited by the embarrassingly parallel paradigm, and the Spark-MPI approach serves to extend the Spark ecosystem with MPI-based high-performance computational applications. MPI is an abbreviation for the Message Passing Interface standard that is developed and maintained by the MPI Forum \cite{MPI}. The process of creating the MPI standard began in April 1992, and MPI is now used in most HPC applications. The popularity of the MPI standard was determined by an optimal combination of concepts and methods balancing two conflicting requirements: the scope of parallel applications and portability across different underlying communication protocols. The MPI standard interface extends the Spark embarrassingly parallel model with a rich collection of communication methods encompassing Remote Memory Access (RMA), pairwise point-to-point operations (e.g., send and receive), master-worker (e.g., scatter and gather) and peer-to-peer (e.g., allreduce) collective methods. In addition, the Barrier method within the collective category provides a synchronization mechanism for supporting the Bulk Synchronous Parallel paradigm. To address scalability and performance, MPI introduced the concept of Communicators, which define the scope of communication operations. 
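To make the allreduce collective concrete, here is a plain-Python simulation of the ring-allreduce pattern (the same pattern Horovod later uses for gradient averaging); no MPI runtime is required, and with mpi4py the equivalent would be a single $comm.Allreduce$ call. Function and variable names are our own:

```python
def ring_allreduce(rank_vectors):
    """Simulate the ring-allreduce collective: each of n ranks owns a vector
    (one chunk per rank); afterwards every rank holds the element-wise sum.

    Phase 1 (reduce-scatter): in n-1 steps, each rank forwards one chunk to
    its ring neighbour, which accumulates it; phase 2 (allgather): the fully
    reduced chunks circulate until every rank holds the complete result."""
    n = len(rank_vectors)
    assert all(len(v) == n for v in rank_vectors), "one chunk per rank"
    chunks = [list(v) for v in rank_vectors]
    # Reduce-scatter: after n-1 steps, rank r holds reduced chunk (r+1) % n.
    for step in range(n - 1):
        sends = [((r + 1) % n, (r - step) % n, chunks[r][(r - step) % n])
                 for r in range(n)]          # snapshot of simultaneous sends
        for dest, c, value in sends:
            chunks[dest][c] += value
    # Allgather: overwrite neighbours' chunks with the reduced values.
    for step in range(n - 1):
        sends = [((r + 1) % n, (r + 1 - step) % n, chunks[r][(r + 1 - step) % n])
                 for r in range(n)]
        for dest, c, value in sends:
            chunks[dest][c] = value
    return chunks

ranks = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
result = ring_allreduce(ranks)   # every rank now holds [111, 222, 333]
```

Each rank sends and receives only to its ring neighbours, so per-step traffic is constant in the number of ranks, which is why this pattern scales well for large dense gradients.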
Communicators, in particular, significantly facilitated the development and integration of parallel libraries through inter- and intra-communicator operations. To support the MPI parallel model across different operating and hardware systems, the MPI frameworks are based on a portable access layer. One of its initial specifications, the Abstract Device Interface (ADI \cite{ADI}), was developed within the MPICH project. Later, the MVAPICH project further extended the ADI implementations to support InfiniBand interconnects and GPUDirect RDMA \cite{GPUDirectRDMA}. The OpenMPI team introduced a different solution, the Modular Component Architecture (MCA) \cite{MCA}, derived as a generalization of four projects \cite{OMPI} bringing together over 40 frameworks. MCA utilizes components (a.k.a. plugins) to provide alternative implementations of key functional blocks such as message transport, mapping algorithms, and collective operations. As a result, the OpenMPI Byte Transfer Layer (BTL) represents an open collection of network-specific components supporting shared memory, TCP/IP, OpenFabrics verbs, and CUDA IPC, to name a few. Running parallel programs on HPC clusters requires interactions with external process and resource managers, such as SLURM and Torque, to enable the MPI processes to discover each other's communication endpoints. Within the MPI ecosystem, this topic is typically addressed by the Process Manager Interface (PMI \cite{PMI}). While the implementation of the PMI specification was never standardized, libraries nearly always consist of two parts: client and server. The client code is linked with the MPI program and provides messaging support to the server; it has no a priori knowledge of the overall application and must rely on the server to provide any required information. The PMI server is instantiated on each node that supports an MPI process and has both the ability to communicate with its peers (usually over an out-of-band Ethernet connection) and full knowledge of how to contact those peers (e.g., the socket upon which each peer is listening). The server is typically either embedded in the local daemon of the system's resource manager, or executed as a standalone daemon started by a corresponding launcher such as $mpiexec$. The Spark-MPI approach extends the scope of the PMI mechanism for integrating the Spark and MPI frameworks. Specifically, it complements the conventional Spark driver-worker model with the PMI server-worker interface for establishing MPI inter-worker communications as outlined in Fig. 
3 and Fig. 4. \begin{figure}[h] \centering \includegraphics[height=3.8cm]{spark-mpi-approach} \caption{Spark-MPI approach} \end{figure} \begin{figure}[h] \centering \includegraphics[height=3.8cm]{spark-mpi-sequence} \caption{Sequence diagram of the Spark-MPI approach} \end{figure} The first version of the Spark-MPI approach was validated with Hydra, the primary internal process manager used by two MPI projects (MPICH and MVAPICH). The Hydra Process Manager (PM) is started by the MPI launcher $mpirun$ on the launch node, which subsequently spawns a tree-based collection of interconnected proxies on the allocated nodes. Each proxy locally spawns one or more application processes and then acts as the PMI server for those processes. During initialization, each process ``publishes'' its connection information to the proxy, which then performs a global collective operation to share the information across all proxies for eventual distribution to the application. Within the Spark-MPI integrated platform, MPI application processes are started by the Spark scheduler (see Fig. 4). The Hydra local proxies were therefore modified to suppress their launching functionality. Recently, the Spark-MPI approach was integrated with the Open MPI framework. The OpenMPI Modular Component Architecture further streamlined its implementation as the $sparkmpi$ plugin of the OpenRTE Daemon's Local Launch Subsystem (ODLS). The following sections will demonstrate this approach within the context of the hybrid MPI/GPU ptychographic image reconstruction pipelines and deep learning applications. \section{Ptychographic Image Reconstruction Pipelines} Ptychography is one of the essential image reconstruction techniques used in light source facilities. It was originally proposed for electron microscopy \cite{Microscopy} and later applied to X-ray imaging \cite{Xray1} \cite{Xray2}. 
The method consists of measuring multiple diffraction patterns by scanning a finite illumination (also called the probe) on an extended specimen (the object). The redundant information encoded in overlapping illuminated regions is then used for reconstructing the sample transmission function. Specifically, under the Born and paraxial approximations, the measured diffraction pattern for the $j$-th scan position can be expressed as: \begin{equation} \label{eq1} \mathrm{I}_{j}(\mathbf{q}) = \mid \mathbf{F}\psi_{j} \mid^{2} \end{equation} where $\mathbf{F}$ denotes Fourier transformation, $\mathbf{q}$ is a reciprocal space coordinate, and $\psi_{j}$ represents the wave at the exit of the object O illuminated by the probe P: \begin{equation} \label{eq2} \psi_{j} = \mathrm{P}(\mathbf{r}-\mathbf{r}_{j})\mathrm{O}(\mathbf{r}) \end{equation} Then, the object and probe functions can be computed from the minimization of the distance $\|\Psi - \Psi^{0} \|^{2}$ as \cite{ProbeRetrieval}: \begin{equation} \label{eq3} \epsilon = \|\Psi - \Psi^{0} \|^{2} = \Sigma_{j}\Sigma_{r} \mid \psi_{j}(\mathbf{r}) - \mathrm{P}^{0}(\mathbf{r}-\mathbf{r}_{j})\mathrm{O}^{0}(\mathbf{r})\mid^{2} \end{equation} \begin{equation} \label{eq4} \frac{\partial \epsilon}{\partial \mathrm{P}^{0}} = 0: \mathrm{P}^{0}(\mathbf{r}) = \frac{\Sigma_{j}\psi_{j}(\mathbf{r}+\mathbf{r}_{j})\mathrm{O}^{*}(\mathbf{r}+\mathbf{r}_{j})}{\Sigma_{j} \mid \mathrm{O}(\mathbf{r}+\mathbf{r}_{j})\mid ^{2}} \end{equation} \begin{equation} \label{eq5} \frac{\partial \epsilon}{\partial \mathrm{O}^{0}} = 0: \mathrm{O}^{0}(\mathbf{r}) = \frac{\Sigma_{j}\psi_{j}(\mathbf{r})\mathrm{P}^{*}(\mathbf{r}-\mathbf{r}_{j})}{\Sigma_{j} \mid \mathrm{P}(\mathbf{r}-\mathbf{r}_{j})\mid ^{2}} \end{equation} These minimization conditions need to be augmented with the modulus constraint \eqref{eq1} and included in the iteration loop. 
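A minimal 1-D numpy rendering of the forward model \eqref{eq1}-\eqref{eq2} may help fix notation; the toy object, probe, and scan positions below are of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy 1-D versions of Eqs. (1)-(2): a pure-phase object O, a finite
# (top-hat) probe P, the exit wave psi_j = P(r - r_j) O(r), and the
# measured far-field intensity I_j = |F psi_j|^2 at scan position r_j.
O = np.exp(1j * 0.3 * rng.standard_normal(n))   # phase object, |O| = 1
P = np.zeros(n)
P[:16] = 1.0                                    # finite illumination (probe)

def exit_wave(r_j):
    """psi_j = P(r - r_j) O(r): shift the probe to the scan position."""
    return np.roll(P, r_j) * O

def intensity(r_j):
    """I_j(q) = |F psi_j|^2, the measured diffraction pattern."""
    return np.abs(np.fft.fft(exit_wave(r_j), norm="ortho")) ** 2

# Scan step (8) smaller than the probe width (16): overlapping illuminated
# regions supply the redundancy that ptychographic reconstruction exploits.
patterns = [intensity(r_j) for r_j in range(0, n, 8)]
```

Only the intensities are measured; the Fourier phases are lost, which is what makes the iterative retrieval loop below necessary.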
A comprehensive overview of different iterative algorithms is provided by Klaus Giewekemeyer \cite{DifferenceMap1}. At this time, the difference map \cite{DifferenceMap2} is considered one of the most generic and efficient approaches to this type of imaging problem. It finds a solution in the intersection of two constraint sets using the difference of the corresponding projection operators, $\pi_{1}$ and $\pi_{2}$, composed with associated maps, $\mathrm{f}_{1}$ and $\mathrm{f}_{2}$: \begin{align} \label{eq6} \psi^{n+1}&=\psi^{n}+\beta \Delta(\psi^{n}) \nonumber \\ \Delta &= \pi_{1}\circ \mathrm{f}_{2} - \pi_{2}\circ \mathrm{f}_{1} \\ \mathrm{f}_{i}(\psi) &= (1+\gamma_{i})\pi_{i}(\psi) - \gamma_{i}\psi \nonumber \end{align} where $\gamma_{1,2}$ are relaxation parameters. In the context of ptychographic applications, these projection operators are associated with the modulus \eqref{eq1} and overlap \eqref{eq2} constraints. By selecting different values of the relaxation parameters, the difference map \eqref{eq6} can be specialized to different variants of phase retrieval methods and hybrid projection-reflection (HPR) algorithms. Further developing HPR, Russell Luke \cite{RAAR} introduced the relaxed averaged alternating reflections (RAAR) approach: \begin{equation} \label{eq7} \psi^{n+1} = [2\beta \pi_{0} \pi_{a} + (1-2\beta)\pi_{a} + \beta(1-\pi_{0})]\psi^{n} \end{equation} The RAAR algorithm was implemented in the SHARP program [31] at the Berkeley-based Center for Advanced Mathematics for Energy Research Applications (CAMERA). SHARP is a high-performance distributed ptychographic solver using GPU kernels and the MPI protocol. Since all equations except \eqref{eq4} and \eqref{eq5} are intrinsically independent between frames, the ptychographic application is naturally parallelized by dividing the set of data frames among multiple GPUs. 
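In code, the projection operators entering \eqref{eq6} and \eqref{eq7} are short. The 1-D numpy sketch below shows the modulus projection and one RAAR step; for simplicity a plain support constraint stands in for the ptychographic overlap constraint, and all names are of our own choosing:

```python
import numpy as np

def pi_modulus(psi, amp):
    """Projection onto the modulus constraint (Eq. 1): keep the Fourier
    phase of psi but impose the measured Fourier amplitude amp."""
    F = np.fft.fft(psi, norm="ortho")
    return np.fft.ifft(amp * np.exp(1j * np.angle(F)), norm="ortho")

def pi_support(psi, support):
    """Projection onto a support constraint (a simple stand-in for the
    overlap constraint of Eq. 2)."""
    return psi * support

def raar_step(psi, amp, support, beta=0.75):
    """One RAAR iteration (Eq. 7), with pi_0 = support and pi_a = modulus:
    psi <- [2b pi_0 pi_a + (1 - 2b) pi_a + b (I - pi_0)] psi."""
    pa = pi_modulus(psi, amp)
    return (2 * beta * pi_support(pa, support)
            + (1 - 2 * beta) * pa
            + beta * (psi - pi_support(psi, support)))

# Tiny demo: simulated "measured" amplitudes and a random starting guess.
rng = np.random.default_rng(1)
support = np.zeros(32)
support[:8] = 1.0
truth = support * rng.standard_normal(32)
amp = np.abs(np.fft.fft(truth, norm="ortho"))
psi = np.fft.ifft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, 32)), norm="ortho")
for _ in range(200):
    psi = raar_step(psi, amp, support)
```

Note that, as with all non-convex phase retrieval iterations, convergence to the true object is not guaranteed from an arbitrary start; the sketch only illustrates the operator structure.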
To update the probe and the object, the partial summations of \eqref{eq4} and \eqref{eq5} are combined across distributed nodes with the MPI Allreduce method as shown in Fig. 5. \begin{figure}[h] \centering \includegraphics[height=4.0cm]{sharp} \caption{MPI communication model of the SHARP solver} \end{figure} The SHARP multi-GPU approach significantly boosted the performance of ptychographic applications at the NSLS-II light source facility and immediately highlighted the path towards near-real-time processing pipelines. Table I compares the performance results for processing 512 frames on different numbers of GPUs. \begin{table}[htbp] \caption{Benchmark Results of the SHARP-NSLS2 Application} \begin{center} \begin{tabular}{|c|>{\centering}p{1.4cm}|>{\centering}p{1.4cm}|>{\centering}p{1.4cm}|} \hline \multirow{2}{*}{\textbf{Application}} & \multicolumn{3}{c|}{\textbf{Time (s) vs Number of GPUs (TESLA K80)}} \tabularnewline \cline{2-4} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{4} \tabularnewline \hline SHARP-NSLS2 & 22.7 & 13.6 & 8.6 \tabularnewline \hline \end{tabular} \end{center} \end{table} In the experimental settings, the time interval between frames is approximately 50 ms, i.e., about 25 seconds for 512 frames. According to Table I, the Spark-MPI application therefore demonstrated the feasibility of the near-real-time scenario. This direction is especially important from the perspective of a new category of emerging four-dimensional tomographic applications that combine series of ptychographic projections generated at different angles of object rotation. In these experiments, each ptychographic projection is reconstructed from tens of thousands of detector frames, and the MPI multi-GPU version becomes critical for addressing the GPU memory challenges. 
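As a quick check of the near-real-time claim, the Table I timings translate into the following speedups and parallel efficiencies, using the approximately 50 ms inter-frame interval quoted above as the acquisition-time budget:

```python
# Timings from Table I: seconds to process 512 frames (SHARP-NSLS2, Tesla K80).
times = {1: 22.7, 2: 13.6, 4: 8.6}
acquisition = 512 * 0.050   # ~25.6 s of acquisition at ~50 ms per frame

for n, tn in sorted(times.items()):
    speedup = times[1] / tn
    print(f"{n} GPU(s): {tn:5.1f} s, speedup {speedup:.2f}, "
          f"efficiency {speedup / n:.0%}")

# Reconstructions that finish within the acquisition window can keep up
# with the beamline, which is what makes the near-real-time scenario work.
feasible = {n: tn < acquisition for n, tn in times.items()}
```

The falling efficiency (roughly 83% on two GPUs, 66% on four) reflects the growing share of the Allreduce communication in \eqref{eq4}-\eqref{eq5} relative to per-frame computation.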
The Spark-MPI integrated platform immediately provided a connection between MPI applications and different types of distributed data sources, including major databases and file systems. Furthermore, the Spark Streaming module reused and extended the RDD-based batch processing framework with a new programming abstraction called a discretized stream, a sequence of RDDs processed by micro-batch jobs. These batches are created at regular time intervals. As with batch applications, streams can be ingested from multiple data sources such as Kafka, Flume, Kinesis, and TCP sockets. To evaluate the Spark-MPI approach, the SHARP ptychographic pipeline was tested with the Kafka streaming platform, an Apache open source project originally developed at LinkedIn \cite{Kafka}. The corresponding simulation-based scenario is described with a conceptual diagram (see Fig. 6). \begin{figure}[h] \centering \includegraphics[height=4.0cm]{kafka-spark-mpi} \caption{Streaming demo with the Spark-MPI approach} \end{figure} According to this scenario, the input stream represents a sequence of micro-batches. The Spark driver waits for a topic-init record and processes each micro-batch with the $run\_batch$ method. At the beginning of this method, the Kafka data are ingested into the Spark platform as Kafka RDDs. To achieve a higher level of parallelism, records of micro-batches are divided into topics that are consumed by Kafka Receivers on distributed Spark workers. Each Kafka Receiver creates a topic-specific RDD, and the Spark driver logically combines them together with a union operation. As a result, it prepares a distributed RDD to be processed with the MPI application. The acceleration of image processing algorithms with the next generation of GPU devices further strengthened this direction by creating the necessary conditions for augmenting ptychographic pipelines with optimization procedures. 
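The micro-batch scenario can be miniaturized in pure Python as follows; the record format and topic names are our own invention, standing in for the Kafka topics of Fig. 6:

```python
def discretize(records, interval):
    """Group (timestamp, topic, payload) records into micro-batches of a
    fixed duration - a toy version of Spark Streaming's discretized stream,
    where each micro-batch maps per-topic records to partitions."""
    batches = {}
    for ts, topic, payload in records:
        batch = batches.setdefault(int(ts // interval), {})
        batch.setdefault(topic, []).append(payload)
    return [batches[k] for k in sorted(batches)]

def union(batch):
    """Combine per-topic partitions into one logical collection, as the
    driver does with a union of topic-specific RDDs."""
    return [p for topic in sorted(batch) for p in batch[topic]]

# Detector frames arriving on two illustrative topics.
records = [(0.1, "frames-a", "f0"), (0.3, "frames-b", "f1"),
           (1.2, "frames-a", "f2"), (1.7, "frames-b", "f3")]
batches = [union(b) for b in discretize(records, interval=1.0)]
```

Each unioned batch is then what would be handed to the MPI reconstruction application as a single distributed RDD.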
Modern ptychographic approaches depend on many parameters, and their choice is important for achieving the most accurate reconstruction results. For example, Fig. 7 and Fig. 8 demonstrate reconstructed object phases for different choices of constraints. \begin{figure}[h] \centering \includegraphics[height=4.0cm]{sharp-nsls2-obj-phases-bad} \caption{Object phases reconstructed from 40,000 frames (object constraints: amp\_max = 1.0, amp\_min = 0.0, phase\_max = $\pi/2$, phase\_min = - $\pi/2$)} \end{figure} \begin{figure}[h] \centering \includegraphics[height=4.0cm]{sharp-nsls2-obj-phases-good} \caption{Object phases reconstructed from 40,000 frames (object constraints: amp\_max = 1.0, amp\_min = 0.95, phase\_max = 0.01, phase\_min = - 0.1)} \end{figure} Finding optimal parameters can be automated with conventional optimization approaches. In addition, the pipelines can be further advanced with modern machine learning techniques for image analysis and for steering reconstruction algorithms. \section{Deep Learning Applications} The Spark-MPI platform was designed for building a new generation of composite data-information-knowledge discovery pipelines for experimental facilities. Deep learning applications advanced the scope and requirements of large-scale scientific projects to the next level. From the perspective of the fifth paradigm, Spark-MPI can be considered a generic front-end of composite agent models for interacting with heterogeneous environments. Historically, distributed deep learning frameworks were developed within the fourth paradigm of data-intensive processing platforms. On the other hand, compute-intensive tasks had already been successfully addressed with the HPC stack of hardware and MPI applications from the third, computational paradigm. 
The parallel acceleration of deep learning algorithms was then pursued by several MPI-based projects, such as CNTK \cite{CNTK}, TensorFlow-MaTex \cite{MaTex}, FireCaffe \cite{FireCaffe}, and S-Caffe \cite{SCaffe}. CNTK and TensorFlow are deep learning toolkits developed by Microsoft and Google, respectively. For distributed training, CNTK relies on the MPI communication platform and can be directly deployed on HPC clusters. In contrast, the original implementation of the TensorFlow distributed version is based on Google's gRPC interface developed for cloud computing systems using Ethernet. To leverage HPC low-latency interconnects, the TensorFlow-MaTEx project added two new TensorFlow operators, Global\_Broadcast and MPI\_Allreduce, and correspondingly modified the TensorFlow runtime. The FireCaffe and S-Caffe distributed approaches were developed around single-node Caffe deep learning solvers according to the data-parallel architecture. In addition, they further accelerated the data-parallel communication schema by replacing a parameter server with the allreduce communication pattern based on a reduction tree. Recently, the TensorFlow project was extended with a hybrid communication interface based on the combination of the gRPC and MPI protocols. In contrast with other deep learning applications, the TensorFlow framework provides a pluggable mechanism for registering different communication interfaces that can be interchanged with other, more advanced or application-specific versions. Within the beamline composite pipeline platform, the Spark-MPI approach was evaluated with Horovod \cite{Horovod}, an MPI training framework for TensorFlow. The Horovod team adopted Baidu's approach \cite{Baidu} based on the ring-allreduce algorithm \cite{RingAllReduce} and further developed its implementation with NVIDIA's NCCL library for collective communication. 
As a result, the ring-allreduce approach replaced the parameter servers of the TensorFlow distributed version with an efficient mechanism for averaging gradients among the TensorFlow workers. Integration with the Horovod distributed framework consists of two primary steps, as illustrated by the $horovod\_train$ method in Fig. 9. First, Horovod and MPI are initialized with $hvd.init$. Then, the TensorFlow worker's optimizer is wrapped by $hvd.DistributedOptimizer$, Horovod's ring-allreduce distributed adapter. \begin{figure}[h] \centering \includegraphics[height=4.5cm]{horovod} \caption{Horovod-TensorFlow example} \end{figure} Spark-MPI pipelines allow the same method to be executed on Spark workers within Map operations, as shown in Fig. 10. To establish MPI communication among the Spark workers, the Map operation needs only to define PMI-related environment variables (such as PMIX\_RANK and a port number) for connecting the Horovod MPI application with the PMI server. \begin{figure}[h] \centering \includegraphics[height=4.5cm]{spark-horovod} \caption{Spark-MPI-Horovod example} \end{figure} Implementing deep learning applications on the MPI parallel framework immediately extended the scope of the Spark-MPI ecosystem with composite pipelines, as shown in Fig. 11. \begin{figure}[h] \centering \includegraphics[height=3.5cm]{dlpipe} \caption{End-to-end machine learning pipeline including reconstruction, image analysis, and feedback loop} \end{figure} For light source facilities, the development of composite pipelines involves two major topics: the application of deep learning approaches for analyzing reconstructed images and the development of machine learning feedback systems for steering reconstruction algorithms. According to the survey by Geert Litjens and colleagues \cite{MedicalImageAnalysisSurvey}, deep learning techniques pervade every aspect of medical image analysis: detection, segmentation, quantification, registration, and image enhancement. 
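Returning to the Spark-MPI glue described above, the map-side task can be sketched as follows. This is a hedged sketch: PMIX\_RANK is the variable named in the text, while the port-variable name and the $mpi\_task$ helper are illustrative, and the actual Horovod training call is elided:

```python
import os

def mpi_task(rank, pmi_port=38888):
    """Body of a Spark map operation in the Spark-MPI model. It exports the
    PMI-related environment so that the MPI library (e.g., inside a Horovod
    training function) can rendezvous with the local PMI server, then runs
    the MPI code. PMI_PORT is an illustrative variable name."""
    os.environ["PMIX_RANK"] = str(rank)
    os.environ["PMI_PORT"] = str(pmi_port)
    # horovod_train(...) would be invoked here; we only report the wiring.
    return rank, os.environ["PMIX_RANK"]

# With PySpark this would be submitted as, e.g.:
#   sc.parallelize(range(4), 4).map(lambda r: mpi_task(r)).collect()
results = [mpi_task(r) for r in range(4)]
```

The point of the design is that Spark remains the process launcher and scheduler, while the exported environment lets the MPI runtime inside each task discover its rank and its local PMI server.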
The feedback system can be viewed from the perspective of a rational agent that interacts with a reconstruction pipeline representing its environment. Depending on the application, an agent can be built with different learning techniques. One of the most important breakthroughs is associated with the introduction of the deep Q-network (DQN) model for reinforcement learning \cite{Mnih}. The DQN-based approach demonstrated state-of-the-art results in various applications \cite{DRLSurvey}, ranging from playing video games to robotics. As shown in Fig. 11, Spark-MPI provides a generic front-end for distributed deep reinforcement learning platforms on the HPC cluster. \section{Related Work} The deployment of the Spark platform on HPC clusters and its comparison with MPI approaches have been addressed by several projects. The Ohio State University team \cite{SparkRDMA} proposed an RDMA-based design for the data shuffle of Spark over InfiniBand. Alex Gittens and colleagues \cite{Gittens} demonstrated the performance gap between a close-to-metal parallelized C version and the Spark-based implementation of matrix factorization. To resolve this gap, they introduced the Alchemist system for socket-based interfacing between Spark and existing MPI libraries. Michael Anderson and colleagues \cite{Anderson} proposed an alternative approach based on the Linux shared memory file system. A third solution, suggested by Cyprien Noel, Jun Shi, and Andy Feng from the Yahoo Big ML team, extended the Spark embarrassingly parallel model with an RDMA inter-worker communication interface. Later, this approach was reused by the Sharp-Spark project \cite{SharpSpark} within the context of ptychographic reconstruction applications. The Sharp-Spark approach followed the Yahoo Big ML peer-to-peer model and augmented it with an RDMA address exchange server that significantly facilitated the initialization phase responsible for establishing Spark inter-worker connections. 
As a result, the RDMA address exchange server captured the PMI functionality of the MPI implementations and provided a natural transition to the PMI-based Spark-MPI approach. The similarity between the Spark driver-worker computational model and the data-parallel approach of deep learning solvers triggered the development of a new category of applications such as SparkNet \cite{SparkNet}, CaffeOnSpark \cite{CaffeOnSpark}, TensorFlowOnSpark \cite{TensorFlowOnSpark}, and BigDL \cite{BigDL}. SparkNet directly relied on the Spark driver-executor scheme consisting of a single driver and multiple executors, each running the Caffe or TensorFlow deep learning solver on its own subset of data. In this approach, a driver communicates with executors for aggregating gradients of model parameters and broadcasting averaged weights back for subsequent iterations. According to the SparkNet-based benchmark, however, the driver-executor scheme introduced a substantial communication overhead that was minimized by subdividing the optimization loop into chunks of iterations. Addressing the same problem, the CaffeOnSpark team proposed extending the Spark model with an inter-worker interface providing an MPI Allreduce style method over Ethernet or InfiniBand. Later, the same team began the TensorFlowOnSpark project based on their RDMA extension to the TensorFlow distributed platform. In comparison with these projects, Spark-MPI aims to derive an application-neutral mechanism based on the MPI Process Management Interface for the effortless integration of Big Data and HPC ecosystems. \section{Path Towards Exascale Applications} The validation of the Spark-MPI conceptual solution established a basis for advancing this approach towards the production programming model based on the PMI-Exascale (PMIx) framework. Furthermore, this direction aligns with proposed changes to the MPI standard \cite{MPIWG} being supported by the PMIx community. 
PMIx \cite{PMIx} was created in response to the ever-increasing scale of supercomputing clusters, and the emergence of new programming models such as Spark that rely on dynamically steered workflows. The PMIx community has therefore focused on extending the earlier PMI work, adding flexibility to existing APIs (e.g., to support asynchronous operations) as well as new APIs that broaden the range of interactions with the resident resource manager. The initial version of the PMIx standard focused on resolving the scaling challenges faced by bulk-synchronous programming models operating in exascale systems \cite{PMIx2,PMIx3}. However, version 2 of the standard directly addressed the needs of dynamic, asynchronous programming models by providing APIs for changing resource allocations (both adding and returning resources, including the ability to ``lend'' resources back to the resource manager for limited periods of time); controlling application execution (e.g., ordering termination and/or migration of processes, and coordinating requests for application preemption); notification of events such as connection requests and process failures; and connections to servers from ``unknown'' processes not started by the server. The Spark-MPI programming model utilizes the last feature as a mechanism by which the processes started by the Spark scheduler can connect to a local PMIx server. The PMIx library includes methods for automatically authenticating connections to the server based on a plugin architecture, thus allowing for ready addition of new methods as required. Servers store their rendezvous information in files located under system-defined locations for easy discovery, and the client library executes a search algorithm to automatically find and connect to a server during initialization. Once connected to the server, the PMI-aware processes can utilize PMIx to asynchronously request connections to one or more processes. 
The connect and disconnect APIs in version 2 of the PMIx Standard retain support for bulk-synchronous programming models such as today's MPI while providing the extensions needed for asynchronous models. Both require that the operation be executed as a collective, with all specified processes participating in the operation prior to it being declared complete - i.e., all processes specified in a call to {PMIx\_Connect} must call that API in order to complete the operation. In addition, the standard requires that the host resource manager (RM) treat the specified processes as a new ``group'' when considering notifications, termination, and other operations, and that no request to ``disconnect'' from a connected group be allowed to complete until all collectives involving that group have also completed. Finally, the PMIx community recognized that programming libraries have continued to evolve towards more of an asynchronous model where processes regularly aggregate into groups that subsequently dissolve after completing some set of operations. These new approaches would benefit from an ability to notify other processes of a desire to aggregate, and to allow the aggregation process itself to take place asynchronously. Accordingly, calls by PMI-aware processes to {PMIx\_Connect} are first checked by the PMIx server to see if the other specified participants are available and have also called {PMIx\_Connect} - if so, then the connection request will result in each involved process receiving full information about the other participating processes (location, endpoint information, etc.) plus a callback containing the namespace assigned to the connected group. The latter can be considered the equivalent of a communicator and used for constructing that object. If one or more of the indicated processes has not executed its call to {PMIx\_Connect}, then the server will issue an event notification requesting that the process do so. 
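The collective completion rule for {PMIx\_Connect} can be modeled in a few lines (a toy bookkeeping class mirroring the semantics described above, not the PMIx server implementation):

```python
# Toy model of a connect operation's collective semantics: the request
# completes only after every listed participant has called in; until
# then the "server" can report who still needs a connect-request event.

class ConnectOp:
    def __init__(self, participants):
        self.participants = set(participants)
        self.called = set()

    def connect(self, proc):
        """Register a participant's call; return True once complete."""
        assert proc in self.participants, "unknown participant"
        self.called.add(proc)
        return self.called == self.participants

    def pending(self):
        """Processes the server would notify with a connect-request event."""
        return self.participants - self.called
```

In the real protocol, completion additionally delivers each participant's location and endpoint information plus the namespace assigned to the new group; the sketch captures only the "all must call before any completes" rule.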
Application processes can register callback functions to be executed upon receipt of a corresponding event, and events are cached so they can be delivered upon process startup if the event is generated before that occurs. Once receiving a connection request event, the process is given sufficient information in the notification to allow it to join the requesting group, thereby completing the collective operation. Applications can provide an optional timeout attribute to the call to {PMIx\_Connect} so the operation will terminate if all identified participants fail to respond within the given time limit. Note that any single process can be simultaneously engaged in multiple connect operations. For scalability, PMIx does not use a collective to assign a global identifier to the connect operation, instead utilizing the provided array of process IDs as a default method to identify a specific {PMIx\_Connect} operation. Applications can extend the ability to execute multiple parallel operations by providing their own string identifier for each collective as an attribute to the {PMIx\_Connect} API. Note that all participants in a given collective are required to call {PMIx\_Connect} with the same attribute value. In cases where the involved hosts are controlled by different RMs, the namespace identifier provided by the host RM for use in PMIx is no longer guaranteed to be unique, thereby leading to potential confusion in the process identifiers. Accordingly, PMIx defines a method for resolving any potential namespace overlap by modifying the namespace value for a given process identifier to include a ``cluster identifier'' - a string name for the cluster that is provided by the host RM, or by the application itself in the case of non-managed hosts. The accumulated features of the PMIx distributed framework are identified as a new Exascale cluster service that supplements the conventional resource management and scheduling platform for gluing together HPC and Big Data applications. 
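The cluster-identifier qualification of process identifiers can be sketched as follows (the separator and tuple layout are invented; the point is only that identical namespaces issued by different resource managers become distinct):

```python
# Hypothetical sketch of namespace disambiguation across clusters: a
# process identifier (namespace, rank) is qualified with a cluster
# identifier so that the same namespace from two different RMs no
# longer collides. The ":" separator is an assumption for illustration.

def qualify(cluster, namespace, rank):
    """Return a process identifier unique across cooperating clusters."""
    return ("%s:%s" % (cluster, namespace), rank)
```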
On HPC clusters, support for PMIx is currently integrated with the Open MPI run-time environment and Simple Linux Utility for Resource Management (SLURM \cite{SLURM}). Therefore, the deployment of the Spark-MPI platform on HPC clusters will be streamlined by adding SLURM into the list of Spark schedulers (see Fig. 2): Standalone, YARN \cite{YARN}, Apache Mesos \cite{Mesos} and Kubernetes \cite{Kubernetes}. As illustrated by the Spark-MPI ptychographic and deep learning examples, this deployment approach is consistent with the Spark computational model. Furthermore, the asynchronous models supported by the PMIx framework highlight the next direction for deploying reinforcement learning architectures \cite{AsynDRL} on HPC clusters. \begin{comment} Libraries such as Open MPI are working to implement the desired behavior based on these capabilities. Future development of the Spark-MPI programming model will therefore utilize the PMIx capabilities to address both areas of concern. \end{comment} \section{Conclusions} The paper addresses the existing mismatch between Big Data and HPC applications by presenting the Spark-MPI integrated platform for bringing together Big Data analytics, HPC scientific algorithms and deep learning approaches for tackling new frontiers of data-driven discovery applications. The approach was validated with three MPI projects (MPICH, MVAPICH and Open MPI) and established a basis for advancing the Spark-MPI interface towards the Exascale platform using the PMI-Exascale (PMIx) framework. Furthermore, this direction aligns with a paradigm shift from data-intensive processing pipelines towards the fifth paradigm of knowledge-centric cognitive applications. Within the context of new applications, Spark-MPI aims to provide a generic front-end for distributed deep reinforcement learning platforms on HPC clusters. As a result, the Spark-MPI platform represents a triple point solution located at the intersection of three paradigms.
\section{Introduction} It has long been clear that there is something peculiar about long wavelength gravitons on cosmological backgrounds \cite{Lifshitz:1945du}. Unlike photons, which are precluded by conformal invariance from locally perceiving the expansion of the Universe, inflationary expansion leads to the production of gravitons \cite{Grishchuk:1974ny,Ford:1977dj}. This process is the source of the tensor power spectrum predicted by primordial inflation \cite{Starobinsky:1979ty}. Long wavelength gravitons also make a peculiar contribution to the retarded propagator, which DeWitt and Brehme famously denoted as the ``tail term'' \cite{DeWitt:1960fc}. Unlike the usual delta function {\it on} the past light-cone, the tail contribution is nonzero {\it inside} the past light-cone \cite{Chu:2011ip}. This fact has great relevance to computations of gravitational radiation reaction in binary mergers \cite{Tanaka:1996ht,Mino:1996nk,Quinn:1996am}. It is also responsible for the curious infrared ``running'' of the Newtonian potential induced by the one loop gravitational vacuum polarization of conformal matter on de Sitter background \cite{Wang:2015eaa,Frob:2016fcr}, \begin{equation} \Psi = -\frac{GM}{a r} \Biggl\{ 1 + \frac{4 G}{15 \pi a^2 r^2} + \frac{2 G H^2}{5 \pi} \, \ln(a H r) + O(G^2)\Biggr\} \; . \label{confNewt} \end{equation} Here $H$ is the Hubble constant, $a = e^{H t}$ is the de Sitter scale factor and $r$ is the co-moving position. The fractional correction of $\frac{4 G}{15 \pi a^2 r^2}$ is just the de Sitter descendant of the flat space effect which has long been known \cite{Radkowski:1970,Capper:1974ed}. The new term proportional to $G H^2$ is specific to nonzero Hubble constant and causes perturbation theory to break down, both for large $r$ and at late times. 
Even though conformal matter induces almost the same vacuum polarization, in de Sitter conformal coordinates, as in flat space, the gravitational {\it response} to that source is very different on account of the strong de Sitter tail term. Analytic continuation carries the tail term of the retarded propagator into the tail part of the Feynman propagator which can mediate quantum graviton effects to other particles \cite{Tsamis:1992xa,Woodard:2004ut}. An important example is the one graviton contribution to the electromagnetic vacuum polarization \cite{Leonard:2013xsa}. This induces an infrared running of the Coulomb potential similar to (\ref{confNewt}) \cite{Glavan:2013jca}, \begin{equation} \Phi = \frac{Q}{4\pi r} \Biggl\{ 1 + \frac{2 G}{3 \pi a^2 r^2} + \frac{2 G H^2}{\pi} \, \ln(a H r) + O(G^2)\Biggr\} \; . \label{gravCoul} \end{equation} As with the Newtonian potential (\ref{confNewt}), the fractional correction $\frac{2 G}{3 \pi a^2 r^2}$ is just the de Sitter analogue of what happens in flat space \cite{Leonard:2012fs}, while the new term proportional to $G H^2$ causes perturbation theory to break down at large $r$ and at late times. The gravitational vacuum polarization on de Sitter also causes a secular enhancement of the electric field of a plane wave photon \cite{Wang:2014tza}, \begin{equation} F^{\rm 1~loop}_{0i} \longrightarrow \frac{2 G H^2}{\pi} \, \ln(a) \times F^{\rm tree}_{0i} \; . \label{gravphot} \end{equation} Like (\ref{gravCoul}), this result signals a late time breakdown of perturbation theory. A common feature in all three results (\ref{confNewt}), (\ref{gravCoul}) and (\ref{gravphot}) is the breakdown of perturbation theory when $\ln(a) \sim \frac1{G H^2}$. Uncovering what happens after this time requires going beyond perturbation theory. 
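Because $a = e^{Ht}$ on de Sitter background, the breakdown condition $\ln(a) \sim \frac{1}{G H^2}$ can be restated as an estimate of the physical time at which the secular corrections in (\ref{confNewt}), (\ref{gravCoul}) and (\ref{gravphot}) become of order one:

```latex
G H^2 \ln(a) = G H^3 t \sim 1
\qquad \Longrightarrow \qquad
t_* \sim \frac{1}{G H^3} \; .
```

For $G H^2 \ll 1$ this corresponds to an enormous number of e-foldings, but it is finite, which is why elucidating the late time regime requires a nonperturbative treatment.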
For the very similar infrared logarithms of scalar potential models, Starobinsky has developed a stochastic formalism \cite{Starobinsky:1986fx} which exactly reproduces the leading infrared logarithms at each loop order \cite{Woodard:2005cw,Tsamis:2005hd}, and can be summed to elucidate the nonperturbative regime \cite{Starobinsky:1994bd}. The same technique can be applied to a Yukawa-coupled scalar \cite{Miao:2006pn}, and to scalar quantum electrodynamics \cite{Prokopec:2007ak}. However, it has not yet been generalized to quantum gravity. The obstacle to applying Starobinsky's formalism has been the derivative interactions of quantum gravity. These frustrate the proof \cite{Woodard:2005cw, Tsamis:2005hd} that works for scalar potential models. Derivative interactions also mean that the lowest order renormalization counterterms contribute at leading logarithm order, which means that dimensional regularization must be retained until a fully renormalized result is obtained \cite{Miao:2008sp}. The problem remains, despite notable progress in understanding the simpler derivative interactions of nonlinear sigma models \cite{Kitamoto:2010et, Kitamoto:2011yx}. A notable advance was the discovery \cite{Miao:2008sp} that only the tail part of the graviton propagator is responsible for the secular enhancement of massless fermions on de Sitter background \cite{Miao:2005am,Miao:2006gj}. The purpose of this paper is to see if the tail term alone also explains the secular enhancement of dynamical photons (\ref{gravphot}) and the logarithmic running of the Coulomb potential (\ref{gravCoul}). In section 2 we review the relevant Feynman rules and identify precisely those parts of the vacuum polarization which are responsible for the two effects. Section 3 computes the tail contribution to the vacuum polarization. Our results are discussed in section 4. \section{Notation} The purpose of this section is to review notation. 
We begin with the Feynman rules which were used to compute the vacuum polarization \cite{Leonard:2013xsa}. This is where we define the ``tail'' part of the graviton propagator which plays a central role in this study. We also describe how the tensor structure of the vacuum polarization is represented using two structure functions, and we give the order $G H^2$ contributions to these structure functions which are responsible for the enhancement of dynamical photons (\ref{gravphot}) and the logarithmic running of the Coulomb potential (\ref{gravCoul}). \subsection{Feynman Rules} The Lagrangian relevant to our study is, \begin{equation} \mathcal{L} = \frac{[R \!-\! (D\!-\!2)(D\!-\!1) H^2] \sqrt{-g}}{16 \pi G} -\frac14 F_{\mu\nu} F_{\rho\sigma} g^{\mu\rho} g^{\nu\sigma} \sqrt{-g} + \Delta \mathcal{L} + \mathcal{L}_{GF} \; . \label{Lag} \end{equation} Here $D$ is the spacetime dimension, $H$ is the de Sitter Hubble constant and $G$ is Newton's constant. The two counterterms we require are, \begin{equation} \Delta \mathcal{L} = \overline{C} H^2 F_{\mu\nu} F_{\rho\sigma} g^{\mu\rho} g^{\nu\sigma} \sqrt{-g} + \Delta C H^2 F_{ij} F_{k\ell} g^{ik} g^{j\ell} \sqrt{-g} \; . \label{DLag} \end{equation} The noninvariant term (Roman indices are purely spatial) proportional to $\Delta C$ is required because of de Sitter breaking in the graviton sector \cite{Leonard:2013xsa,Glavan:2015ura}. Our electromagnetic and gravitational gauge fixing terms are \cite{Tsamis:1992xa,Woodard:2004ut}, \begin{equation} \mathcal{L}_{GF} = -\frac12 a^{D-4} \Bigl[ \eta^{\mu\nu} A_{\mu , \nu} \!-\! (D \!-\!4) H a A_0\Bigr]^2 -\frac12 a^{D-2} \eta^{\mu\nu} F_{\mu} F_{\nu} \; , \label{LGF} \end{equation} where $a \equiv -\frac1{H \eta}$ is the de Sitter scale factor (at conformal time $\eta$) and the gravitational term is, \begin{equation} F_{\mu} \equiv \eta^{\rho\sigma} \Bigl[h_{\mu\rho ,\sigma} \!-\! \frac12 h_{\rho\sigma , \mu} \!+\! (D \!-\! 2) H a h_{\mu\rho} \delta^0_{~\sigma} \Bigr] \; . 
\label{Fmu} \end{equation} Here and henceforth $h_{\mu\nu}$ is the conformally transformed graviton field whose indices are raised and lowered with the (spacelike) Minkowski metric, \begin{equation} g_{\mu\nu} \equiv a^2 \widetilde{g}_{\mu\nu} \equiv a^2 \Bigl[ \eta_{\mu\nu} + \kappa h_{\mu\nu} \Bigr] \qquad , \qquad \kappa^2 \equiv 16 \pi G \; . \label{graviton} \end{equation} Our gauge breaks de Sitter invariance but it does provide the simplest possible expressions for the photon and graviton propagators. They each take the form of a sum of constant tensor factors times scalar propagators, \begin{eqnarray} i\Bigl[\mbox{}_{\mu} \Delta_{\rho}\Bigr](x;x') & = & \overline{\eta}_{\mu\rho} \!\times\! a a' i\Delta_B(x;x') - \delta^0_{~\mu} \delta^0_{~\rho} \!\times\! a a' i \Delta_C(x;x') \; , \qquad \label{photprop} \\ i\Bigl[\mbox{}_{\mu\nu} \Delta_{\rho\sigma}\Bigr](x;x') & = & \sum_{I=A,B,C} \Bigl[\mbox{}_{\mu\nu} T^I_{\rho\sigma}\Bigr] \!\times\! i\Delta_I(x;x') \; , \label{gravprop} \end{eqnarray} where $\overline{\eta}_{\mu\nu} \equiv \eta_{\mu\nu} + \delta^0_{~\mu} \delta^0_{~\nu}$ is the spatial part of the Minkowski metric. The gravitational tensor factors are, \begin{eqnarray} \Bigl[\mbox{}_{\mu\nu} T^A_{\rho\sigma}\Bigr] & = & 2 \overline{\eta}_{\mu (\rho} \overline{\eta}_{\sigma) \nu} - \frac{2}{D \!-\! 3} \, \overline{\eta}_{\mu\nu} \overline{\eta}_{\rho\sigma} \; , \label{TA} \\ \Bigl[\mbox{}_{\mu\nu} T^B_{\rho\sigma}\Bigr] & = & -4 \delta^0_{~(\mu} \overline{\eta}_{\nu) (\rho} \delta^0_{~\sigma)} \; , \label{TB} \\ \Bigl[\mbox{}_{\mu\nu} T^C_{\rho\sigma}\Bigr] & = & \frac{2}{(D\!-\!2) (D \!-\! 3)} \Bigl[ (D \!-\! 3) \delta^0_{~\mu} \delta^0_{~\nu} \!+\! \overline{\eta}_{\mu\nu} \Bigr] \Bigl[ (D \!-\! 3) \delta^0_{~\rho} \delta^0_{~\sigma} \!+\! \overline{\eta}_{\rho\sigma} \Bigr] \; . \label{TC} \qquad \end{eqnarray} Here and henceforth parenthesized indices are symmetrized. 
It is useful to expand the three scalar propagators in progressively less and less singular terms, \begin{equation} i\Delta_I(x;x') = \frac{i\Delta(x;x')}{(a a')^{\frac{D}2-1}} + i\delta \Delta_I(x;x') + i\Delta_{\Sigma I}(x;x') \quad , \quad I = A, B, C \; . \label{scalarprops} \end{equation} Here the massless scalar propagator in flat space is \begin{equation} i\Delta(x;x') = \frac{ \Gamma(\frac{D}2 \!-\! 1)}{4 \pi^{\frac{D}2} \Delta x^{D-2}} \quad , \quad \Delta x^2(x;x') \equiv \Bigl\Vert \vec{x} \!-\! \vec{x}' \Bigr\Vert^2 - \Bigl(\vert \eta \!-\! \eta'\vert \!-\! i \epsilon\Bigr)^2 \; . \label{simpleprop} \end{equation} Note that $i\Delta(x;x')$ has the leading, $1/\Delta x^{D-2}$ singularity. The three $1/\Delta x^{D-4}$ terms are, \begin{eqnarray} \lefteqn{(a a')^{\frac{D}2 - 2} i\delta \Delta_A(x;x') = \frac{H^2}{4 \pi^{\frac{D}2}} \Biggl\{ \frac{ \Gamma(\frac{D}2 \!+\! 1)}{2 (D \!-\! 4)} \frac1{\Delta x^{D-4}} } \nonumber \\ & & \hspace{0cm} - \frac{\pi \cot(\frac{\pi D}{2}) \Gamma(D \!-\! 1)}{4 \Gamma(\frac{D}2)} \Bigl( \frac{a a' H^2}{4}\Bigr)^{\frac{D}2-2} + \frac{\Gamma(D \!-\! 1)}{\Gamma(\frac{D}2)} \Bigl( \frac{a a' H^2}{4} \Bigr)^{\frac{D}2-2} \ln(a a')\Biggr\} , \label{tail} \qquad \\ \lefteqn{(a a')^{\frac{D}2 - 2} i\delta \Delta_B(x;x') = \frac{H^2}{4 \pi^{\frac{D}2}} \Biggl\{\frac{\Gamma(\frac{D}2)}{\Delta x^{D-4}} - \frac{\Gamma(D \!-\! 2)}{\Gamma(\frac{D}2)} \Bigl( \frac{a a' H^2}{4}\Bigr)^{\frac{D}2-2} \Biggr\} , \label{Btail} } \\ \lefteqn{(a a')^{\frac{D}2 - 2} i\delta \Delta_C(x;x') = \frac{H^2}{4 \pi^{\frac{D}2}} \Biggl\{\frac{(\frac{D}2 \!-\! 3) \Gamma(\frac{D}2 \!-\! 1)}{\Delta x^{D-4}} + \frac{\Gamma(D \!-\! 3)}{\Gamma(\frac{D}2)} \Bigl( \frac{a a' H^2}{4}\Bigr)^{\frac{D}2-2} \Biggr\} . \label{Ctail} } \end{eqnarray} The $i\delta \Delta_I(x;x')$ determine the coincidence limits in dimensional regularization, but only $i\delta \Delta_A(x;x')$ produces a nonzero tail term when $D=4$. 
The three $i\Delta_{\Sigma I}(x;x')$ terms are each infinite series of less singular powers, which vanish for $D=4$. They play no role in our analysis, but their expansions are given in Appendix A for completeness. We can now identify the ``tail'' part of the graviton propagator, \begin{equation} i\Bigl[\mbox{}_{\mu\nu} \Delta^{\rm tail}_{\rho\sigma}\Bigr](x;x') \equiv \Bigl[\mbox{}_{\mu\nu} T^A_{\rho\sigma}\Bigr] \times i\delta \Delta_A(x;x') \; . \label{gravtail} \end{equation} The purpose of this paper is to check whether or not replacing the full graviton propagator by (\ref{gravtail}) gives those parts of the vacuum polarization which are responsible for the secular enhancement of dynamical photons (\ref{gravphot}) and the logarithmic running of the Coulomb potential (\ref{gravCoul}). \subsection{Representing Vacuum Polarization} The one graviton loop contribution to the vacuum polarization can be expressed in terms of expectation values of variations of the action, \begin{eqnarray} \lefteqn{i\Bigl[\mbox{}^{\mu} \Pi^{\nu}\Bigr](x;x') = \Biggl\langle \Omega \Biggl\vert \Biggl[\frac{i \delta S}{\delta A_{\mu}(x)} \Biggr]_{h A} \times \Biggl[ \frac{i \delta S}{\delta A_{\nu}(x')} \Biggr]_{h A} \Biggr\vert \Omega \Biggr\rangle } \nonumber \\ & & \hspace{6cm} + \Biggl\langle \Omega \Biggl\vert \Biggl[ \frac{i \delta^2 S}{\delta A_{\mu}(x) \delta A_{\nu}(x')} \Biggr]_{h h} \Biggr\vert \Omega \Biggr\rangle \; . \label{vacpolop} \qquad \end{eqnarray} The subscripts $h A$ and $h h$ indicate that the operator in square brackets is to be expanded to that order in the weak fields $h_{\mu\nu}$ and $A_{\mu}$. Expression (\ref{vacpolop}) is ideal for our study because each of these two expectation values is {\it separately} transverse, and for {\it any} graviton field. 
The tensor structure of the de Sitter background vacuum polarization can be represented using two structure functions \cite{Prokopec:2002uw,Leonard:2012si,Leonard:2012ex}, \begin{equation} i\Bigl[\mbox{}^{\mu} \Pi^{\nu}\Bigr](x;x') = \Bigl(\eta^{\mu\nu} \eta^{\rho\sigma} \!-\! \eta^{\mu\sigma} \eta^{\nu\rho}\Bigr) \partial_{\rho} \partial'_{\sigma} F(x;x') + \Bigl(\overline{\eta}^{\mu\nu} \overline{\eta}^{\rho\sigma} \!-\! \overline{\eta}^{\mu\sigma} \overline{\eta}^{\nu\rho}\Bigr) \partial_{\rho} \partial'_{\sigma} G(x;x') . \label{vacpolform} \end{equation} Each of the two terms on the right hand side of (\ref{vacpolform}) is transverse so we can work out contributions to $F(x;x')$ and $G(x;x')$ separately, from each of the two expectation values in (\ref{vacpolop}), and from any part of the graviton propagator such as (\ref{gravtail}). Given a transverse contribution to $i[\mbox{}^{\mu} \Pi^{\nu}](x;x')$, the corresponding contributions to the structure functions can be inferred from selected components \cite{Leonard:2012ex}, \begin{eqnarray} i\Bigl[\mbox{}^{0} \Pi^{0}\Bigr](x;x') & = & -\vec{\nabla} \!\cdot\! \vec{\nabla}' F(x;x') \; , \label{Fpick} \\ \eta_{\mu\nu} \!\times\! i\Bigl[\mbox{}^{\mu} \Pi^{\nu}\Bigr](x;x') & = & (D \!-\! 1) \partial \!\cdot\! \partial' F(x;x') + (D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' G(x;x') \; . \label{Gpick} \end{eqnarray} The same considerations imply that the two relevant counterterms (\ref{DLag}) make the following contributions \cite{Leonard:2013xsa}, \begin{equation} \Delta F(x;x') = 4 \overline{C} H^2 a^{D-4} i\delta^D(x \!-\! x') \;\; , \;\; \Delta G(x;x') = 4 \Delta C H^2 a^{D-4} i\delta^D(x \!-\! x') \; . \label{DFG} \end{equation} The full one loop vacuum polarization \cite{Leonard:2013xsa} contains some parts which are de Sitter-ized versions of the flat space result \cite{Leonard:2012fs}. 
However, the secular enhancement of dynamical photons (\ref{gravphot}) and the logarithmic running of the Coulomb potential (\ref{gravCoul}) originate in the intrinsically de Sitter portions of the structure functions, \begin{eqnarray} F_{\rm dS}(x;x') & \!\!\! = \!\!\! & \frac{\kappa^2 H^2}{(2 \pi)^4} \Biggl\{ 2 \pi^2 \ln(a) i \delta^4(x \!-\! x') + \frac14 \partial^2 \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2)}{\Delta x^2} \Biggr] \nonumber \\ & & \hspace{5.7cm} + \partial_0^2\Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2) \!+\! 2}{\Delta x^2} \Biggr] \Biggr\} , \qquad \label{FdS} \\ G_{\rm dS}(x;x') & \!\!\! = \!\!\! & \frac{\kappa^2 H^2}{(2 \pi)^4} \Biggl\{ -\frac83 \pi^2 \ln(a) i \delta^4(x \!-\! x') - \frac13 \partial^2 \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2)}{\Delta x^2} \Biggr] \Biggr\} . \label{GdS} \end{eqnarray} The enhancement of dynamical photons actually derives entirely from just the $\ln(a)$ part of $F_{\rm dS}(x;x')$ \cite{Wang:2014tza}. In contrast, all terms on the first lines of (\ref{FdS}-\ref{GdS}) contribute to the logarithmic running of the Coulomb potential \cite{Glavan:2013jca}. The terms on the second line of expression (\ref{FdS}) do not contribute to either the enhancement of photons or the running of the Coulomb potential. \section{Vacuum Polarization from the Tail} This section presents the key computation of the tail contribution to the two structure functions of the vacuum polarization. Because each of the terms in the operator expression (\ref{vacpolop}) is separately transverse, as is the contribution from the counter-action, we derive separate results for each of the three diagrams in Figure~\ref{vacpolgraphs}. Because the counterterms contribute at leading logarithm order it is necessary to retain dimensional regularization until the end. (The same thing was found in deriving the tail contribution to the fermion wave function \cite{Miao:2008sp}.) 
However, extensive simplifications result from anticipating terms which must vanish in the renormalized, unregulated limit. We begin with the simple 4-point contribution, then proceed to the more complicated contribution from two 3-point vertices, and finally add the appropriate counterterms. \begin{figure}[ht] \center \includegraphics[]{diagrams.eps} \caption{\label{fig:photon} Feynman diagrams relevant to the one loop vacuum polarization from gravitons. Wavy lines are photons, curly lines are gravitons and the cross represents counterterms.} \label{vacpolgraphs} \end{figure} \subsection{The 4-point contribution} The primitive 4-point contribution is the middle diagram of Fig.~\ref{vacpolgraphs} and has the operator representation, \begin{equation} i\Bigl[\mbox{}^{\mu} \Pi_{\rm 4pt}^{\nu}\Bigr](x;x') = \partial_{\rho} \partial'_{\sigma} \Bigl\langle \Omega \Bigl\vert a^{D-4} \sqrt{-\widetilde{g}} \, \Bigl( \widetilde{g}^{\mu\sigma} \widetilde{g}^{\nu\rho} \!-\! \widetilde{g}^{\mu\nu} \widetilde{g}^{\rho\sigma} \Bigr) i \delta^D(x \!-\! x') \Bigr\vert \Omega \Bigr\rangle_{hh} \; . \label{4ptop} \end{equation} This expression is exact. Because the tail contribution comes from the purely spatial components of the graviton field we can use relation (\ref{Fpick}) to write a simple relation for the tail part of the structure function $F(x;x')$, \begin{equation} -\vec{\nabla} \!\cdot\! \vec{\nabla}' F_{\rm 4t}(x;x') = \partial_i \partial'_j \Bigl\langle \Omega \Bigl\vert a^{D-4} \sqrt{-\widetilde{g}} \, \widetilde{g}^{ij} i \delta^D(x \!-\! x') \Bigr\vert \Omega \Bigr\rangle_{\rm tail} \; . \qquad \label{F4top1} \end{equation} Isotropy implies, \begin{eqnarray} F_{\rm 4t}(x;x') & = & -\frac1{D\!-\!1} \Bigl\langle \Omega \Bigl\vert a^{D-4} \sqrt{-\widetilde{g}} \, \widetilde{g}^{kk} i \delta^D(x \!-\! x') \Bigr\vert \Omega \Bigr\rangle_{\rm tail} \; , \qquad \label{F4top2} \\ & = & \frac14 D (D \!-\! 5) \kappa^2 a^{D-4} i\delta \Delta_A(x;x) i \delta^D(x \!-\! x') \; . 
\qquad \label{F4t} \end{eqnarray} Expression (\ref{F4t}) agrees with the result (66) reported in \cite{Leonard:2013xsa}. Relation (\ref{Gpick}) determines the structure function $G(x;x')$, \begin{eqnarray} \lefteqn{ (D\!-\!1) \partial \!\cdot\! \partial' F_{\rm 4t} \!+\! (D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' G_{\rm 4t} = \partial_0 \partial'_0 \Bigl\langle \Omega \Bigl\vert a^{D-4} \sqrt{-\widetilde{g}} \, \widetilde{g}^{kk} i \delta^D(x \!-\! x') \Bigr\vert \Omega \Bigr\rangle_{\rm tail} } \nonumber \\ & & \hspace{1.3cm} + \partial_i \partial'_j \Bigl\langle \Omega \Bigl\vert a^{D-4} \sqrt{-\widetilde{g}} \, \Bigl(\widetilde{g}^{ik} \widetilde{g}^{jk} \!+\! \widetilde{g}^{ij} (1 \!-\! \widetilde{g}^{kk}) \Bigr) i \delta^D(x \!-\! x') \Bigr\vert \Omega \Bigr\rangle_{\rm tail} \; . \qquad \label{G4t1} \end{eqnarray} Using relation (\ref{F4top2}) and exploiting isotropy implies, \begin{eqnarray} G_{\rm 4t} & \!\!\!=\!\!\! & \Bigl\langle \Omega \Bigl\vert \frac{a^{D-4} \sqrt{-\widetilde{g}}}{(D\!-\!1) (D\!-\!2)} \Bigl(\widetilde{g}^{k\ell} \widetilde{g}^{k\ell} \!+\! \widetilde{g}^{kk} [(D \!-\! 2) \!-\! \widetilde{g}^{\ell\ell}] \Bigr) i \delta^D(x \!-\! x') \Bigr\vert \Omega \Bigr\rangle_{tail} \label{G4top} \; , \qquad \\ & \!\!\!=\!\!\! & -\Bigl[D - \Bigl(\frac{D \!-\! 1}{D \!-\! 3}\Bigr)\Bigr] \kappa^2 a^{D-4} i\delta \Delta_A(x;x) i \delta^D(x \!-\! x') \; . \label{G4t} \qquad \end{eqnarray} Expression (\ref{G4t}) agrees with the result (67) reported in \cite{Leonard:2013xsa}. \subsection{The 3-point contribution} The primitive 3-point contribution is the left hand diagram of Fig.~\ref{vacpolgraphs}. 
From the first term of the operator expression (\ref{vacpolop}) we can infer a simpler operator expression for it, \begin{eqnarray} \lefteqn{i\Bigl[\mbox{}^{\mu} \Pi_{\rm 3pt}^{\nu}\Bigr](x;x') = -\partial_{\rho} \partial'_{\sigma} \Biggl\{ \Bigl\langle \Omega \Bigl\vert \Bigl[ \sqrt{-\widetilde{g}} \, \widetilde{g}^{\rho [\alpha} \widetilde{g}^{\beta ] \mu} \Bigr]_{h(x)} \!\times\! \Bigl[ \sqrt{-\widetilde{g}} \, \widetilde{g}^{\sigma [\gamma} \widetilde{g}^{\delta ] \nu} \Bigr]_{h(x')} \Bigr\vert \Omega \Bigr\rangle } \nonumber \\ & & \hspace{6.5cm} \times 4 (a a')^{D-4} \partial_{\alpha} \partial'_{\gamma} i\Bigl[\mbox{}_{\beta} \Delta_{\delta}\Bigr](x;x') \Biggr\} \; , \label{3top1} \qquad \end{eqnarray} where square bracketed indices are anti-symmetrized. If we specialize to just the tail contribution then the expectation value on the first line of (\ref{3top1}) goes like $1/\Delta x^{D-4}$. Hence the entire curly-bracketed term is at most logarithmically divergent, and that only when both of the derivatives on the second line of (\ref{3top1}) act on the most singular part of the photon propagator (\ref{photprop}). Because the less singular parts vanish for $D = 4$ we can make the simplification, \begin{equation} 4 ( a a')^{D-4} \partial_{\alpha} \partial'_{\gamma} i\Bigl[\mbox{}_{\beta} \Delta_{\delta}\Bigr](x;x') \longrightarrow 4 (a a')^{\frac{D}2 - 2} \eta_{\beta\delta} \partial_{\alpha} \partial'_{\gamma} i\Delta(x;x') \; . \label{photsimp} \end{equation} Substituting (\ref{photsimp}) in (\ref{3top1}), and exploiting relation (\ref{Fpick}), gives an operator expression for the tail contribution to the $F(x;x')$ structure function, \begin{eqnarray} \lefteqn{-\vec{\nabla} \!\cdot\! \vec{\nabla}' F_{3t}(x;x') = -\partial_i \partial'_j \Biggl\{ \Bigl\langle \Omega \Bigl\vert \Bigl[ \sqrt{-\widetilde{g}} \, \widetilde{g}^{ik} \Bigr]_{h(x)} \!\times\! 
\Bigl[ \sqrt{-\widetilde{g}} \, \widetilde{g}^{j\ell} \Bigr]_{h(x')} \Bigr\vert \Omega \Bigr\rangle_{\rm tail} } \nonumber \\ & & \hspace{5.2cm} \times (a a')^{\frac{D}2-2} \Bigl[ \delta_{k\ell} \partial_0 \partial'_0 \!-\! \partial_k \partial'_{\ell}\Bigr] i\Delta(x;x') \Biggr\} \; , \label{F3top1} \qquad \\ & & \hspace{0cm} = -\kappa^2 \partial_i \partial'_j \Biggl\{ \Bigl\langle \Omega \Bigl\vert \frac14 h^2 \delta^{ik} \delta^{j\ell} \!-\! \frac12 h^{ik} h \delta^{j\ell} \!-\! \frac12 h \delta^{ik} h^{j\ell} \!+\! h^{ik} h^{j\ell} \Bigr\vert \Omega \Bigr\rangle_{\rm tail} \nonumber \\ & & \hspace{5.2cm} \times (a a')^{\frac{D}2-2} \Bigl[ \delta_{k\ell} \partial_0 \partial'_0 \!-\! \partial_k \partial'_{\ell}\Bigr] i\Delta(x;x') \Biggr\} \; . \label{F3top2} \end{eqnarray} Substituting the tail part of the propagator (\ref{gravtail}) and performing the simple contractions implies, \begin{equation} F_{3t}(x;x') = \kappa^2 i\delta \Delta_A(x;x') (a a')^{\frac{D}2-2} \Bigl[ (D \!-\! 1) \partial_0 \partial'_0 \!-\! \vec{\nabla} \!\cdot\! \vec{\nabla}' \Bigr] i\Delta(x;x') \; . \label{F3prop} \end{equation} The final step is extracting the derivatives from inside the square brackets of (\ref{F3prop}), which is done generically in Appendix B. From relation (\ref{keyID}) we infer, \begin{eqnarray} \lefteqn{F_{3t}(x;x') = -\frac{\kappa^2 H^2 \partial \!\cdot\! \partial'}{64 \pi^4} \Biggl[\frac{\ln(\frac14 H^2 \Delta x^2) \!-\! 4}{\Delta x^2}\Biggr] -\frac{\kappa^2 H^2 \partial_0 \partial'_0}{16 \pi^4} \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2) \!+\! 2}{\Delta x^2}\Biggr] } \nonumber \\ & & \hspace{5cm} -\frac{\kappa^2 H^{D-2} (D \!-\! 1) \Gamma(\frac{D}2 \!+\! 1) \, i\delta^D(x \!-\! x')}{ (4 \pi)^{\frac{D}2} (D \!-\!3) (D \!-\! 4)} \; . \qquad \label{F3final} \end{eqnarray} Both the divergence and the $\ln(\frac14 H^2 \Delta x^2)$ terms agree with the results reported in equations (129) and (130) of \cite{Leonard:2013xsa}.
Relations (\ref{Fpick}-\ref{Gpick}) provide an operator expression for the $G(x;x')$ structure function, \begin{equation} (D \!-\! 1) \partial \!\cdot\! \partial' F(x;x') + (D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' G(x;x') = \vec{\nabla} \!\cdot\! \vec{\nabla}' F(x;x') + i \Bigl[\mbox{}^{k} \Pi^{k}\Bigr](x;x') \; . \label{Gop} \end{equation} Specializing (\ref{Gop}) to the 3-point tail contribution gives, \begin{eqnarray} \lefteqn{ (D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' G_{\rm 3t}(x;x') = -(D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' F_{\rm 3t}(x;x') + (D \!-\!1) \partial_0 \partial'_0 F_{\rm 3t}(x;x') } \nonumber \\ & & \hspace{-.5cm} -\partial_{\rho} \partial'_{\sigma} \Biggl\{ \Bigl\langle \Omega \Bigl\vert \Bigl[ \sqrt{-\widetilde{g}} \Bigl( \widetilde{g}^{\rho \alpha} \widetilde{g}^{\beta\mu} \!-\! \widetilde{g}^{\rho\beta} \widetilde{g}^{\alpha\mu} \Bigr)\Bigr]_{h(x)} \!\times\! \Bigl[ \sqrt{-\widetilde{g}} \Bigl( \widetilde{g}^{\sigma\gamma} \widetilde{g}^{\delta\nu} \!-\! \widetilde{g}^{\sigma\delta} \widetilde{g}^{\gamma\nu}\Bigr) \Bigr]_{h(x')} \Bigr\vert \Omega \Bigr\rangle_{\rm tail} \nonumber \\ & & \hspace{6.5cm} \times (a a')^{D-4} \eta_{\beta\delta} \partial_{\alpha} \partial'_{\gamma} i\Delta(x;x') \Biggr\} \; . \label{3tGop1} \qquad \end{eqnarray} The $\rho = \sigma = 0$ component of the contraction in (\ref{3tGop1}) cancels the factor of $(D-1) \partial_0 \partial'_0 F_{3t}(x;x')$. Expanding out the remaining terms gives, \begin{eqnarray} \lefteqn{ (D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' G_{\rm 3t}(x;x') = -(D \!-\! 2) \vec{\nabla} \!\cdot\! \vec{\nabla}' F_{\rm 3t}(x;x') } \nonumber \\ & & \hspace{-.5cm} + \partial_{0} \partial'_{i} \Biggl\{ \Bigl\langle \Omega \Bigl\vert \sqrt{-\widetilde{g}} \, \widetilde{g}^{k\ell} \!\times\! \sqrt{-\widetilde{g}} \Bigl( \widetilde{g}^{ij} \widetilde{g}^{k\ell} \!-\! 
\widetilde{g}^{i\ell} \widetilde{g}^{jk} \Bigr) \Bigr\vert \Omega \Bigr\rangle_{\rm tail} (a a')^{\frac{D}2-2} \partial_0 \partial'_j i\Delta(x;x') \Biggr\} \nonumber \\ & & \hspace{-.5cm} + \partial_{i} \partial'_{0} \Biggl\{ \Bigl\langle \Omega \Bigl\vert \sqrt{-\widetilde{g}} \Bigl( \widetilde{g}^{ij} \widetilde{g}^{k\ell} \!-\! \widetilde{g}^{i\ell} \widetilde{g}^{jk} \Bigr) \!\times\! \sqrt{-\widetilde{g}} \, \widetilde{g}^{k\ell} \Bigr\vert \Omega \Bigr\rangle_{\rm tail} (a a')^{\frac{D}2-2} \partial_j \partial'_0 i\Delta(x;x') \Biggr\} \nonumber \\ & & \hspace{-.5cm} - \partial_{i} \partial'_{j} \Biggl\{ \Bigl\langle \Omega \Bigl\vert \sqrt{-\widetilde{g}} \Bigl( \widetilde{g}^{im} \widetilde{g}^{k\ell} \!-\! \widetilde{g}^{i\ell} \widetilde{g}^{mk} \Bigr) \!\times\! \sqrt{-\widetilde{g}} \Bigl(\widetilde{g}^{jn} \widetilde{g}^{k\ell} \!-\! \widetilde{g}^{j\ell} \widetilde{g}^{kn}\Bigr) \Bigr\vert \Omega \Bigr\rangle_{\rm tail} \nonumber \\ & & \hspace{7cm} \times (a a')^{\frac{D}2-2} \partial_m \partial'_n i\Delta(x;x') \Biggr\} , \label{3tGop2} \qquad \\ & & \hspace{-.7cm} = -\kappa^2 \vec{\nabla} \!\cdot\! \vec{\nabla}' \Biggl\{ \! (a a')^{\frac{D}2-2} i\delta \Delta_A(x;x') \! \Biggl[ 2 \Bigl( \frac{D^2 \!-\! 5D \!+\! 5}{D \!-\! 3}\Bigr) \vec{\nabla} \!\cdot\! \vec{\nabla}' \!+\! (D \!-\! 2) (D \!-\! 1) \partial_0 \partial'_0 \Biggr] \nonumber \\ & & \hspace{0cm} \times i\Delta(x;x') \Biggr\} + (D\!-\!2)^2 \kappa^2 \partial_0 \partial'_i \Biggl\{ (a a')^{\frac{D}2-2} i\delta \Delta_A(x;x') \partial_0 \partial'_i i\Delta(x;x') \Biggr\} \nonumber \\ & & \hspace{1cm} + (D\!-\!2)^2 \kappa^2 \partial_i \partial'_0 \Biggl\{ (a a')^{\frac{D}2-2} i\delta \Delta_A(x;x') \partial_i \partial'_0 i\Delta(x;x') \Biggr\} \nonumber \\ & & \hspace{1.5cm} - (D\!-\!4) (D \!-\! 
1) \kappa^2 \partial_i \partial'_j \Biggl\{ (a a')^{\frac{D}2-2} i\delta \Delta_A(x;x') \partial_i \partial'_j i\Delta(x;x') \Biggr\} , \label{3tG1} \qquad \end{eqnarray} where some of the terms from the first line of (\ref{3tG1}) derive from the operator expressions on the last line of (\ref{3tGop2}) and spatial translation invariance has been exploited. It remains to extract the inner derivatives using relation (\ref{keyID}) and solve for $G_{\rm 3t}(x;x')$, \begin{equation} G_{3t}(x;x') = \frac{\kappa^2 H^2 \partial \!\cdot\! \partial'}{32 \pi^4} \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2) \!+\! 2}{\Delta x^2} \Biggr] \; . \label{G3final} \end{equation} This result agrees with the $\ln(\frac14 H^2 \Delta x^2)$ term reported in equation (132) of \cite{Leonard:2013xsa}. However, it has neither the ultraviolet divergence reported in equation (131) of that paper, nor the associated factor of $\ln(\mu^2 \Delta x^2)$ reported in equation (132). These terms come from the non-tail part of the graviton propagator. \subsection{Tail Renormalization} The right hand diagram of Fig.~\ref{vacpolgraphs} stands for renormalization counterterms. Their contributions to the two structure functions were given in equation (\ref{DFG}). We must bear in mind that the coefficients $\overline{C}$ and $\Delta C$ are not those appropriate to the full vacuum polarization \cite{Leonard:2013xsa} but rather just the parts needed to cancel the divergences in our tail results (\ref{F4t}) and (\ref{F3final}) for $F(x;x')$ and (\ref{G4t}) and (\ref{G3final}) for $G(x;x')$. Based on expressions (\ref{F4t}) and (\ref{F3final}) the best choice for the $\overline{C}$ counterterm is, \begin{equation} \overline{C} = \frac{\kappa^2 H^{D-4}}{(4\pi)^{\frac{D}2}} \Biggl\{ \frac{D (D \!-\! 5) \Gamma(D \!-\! 1) \pi \cot(\frac{\pi D}{2})}{16 \Gamma(\frac{D}2)} + \frac{(D \!-\! 1) \Gamma(\frac{D}2 \!+\! 1)}{4 (D \!-\!3) (D \!-\! 4)} + 1 \Biggr\} \; .
\label{Cbar} \end{equation} After combining with the primitive results (\ref{F4t}) and (\ref{F3final}) and taking the unregulated limit we obtain, \begin{eqnarray} \lefteqn{F_{\rm tail}(x;x') = \frac{\kappa^2 H^2}{(2 \pi)^4} \Biggl\{2 \pi^2 \ln(a) i\delta^4(x \!-\! x') + \frac14 \partial^2 \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2)}{\Delta x^2} \Biggr] } \nonumber \\ & & \hspace{7.5cm} + \partial_0^2 \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2) \!+\! 2}{\Delta x^2} \Biggr] \Biggr\} . \label{Ftail} \qquad \end{eqnarray} Expression (\ref{Ftail}) agrees exactly with the intrinsically de Sitter part of the full $F(x;x')$ structure function (\ref{FdS}), including even the parts on the second line which play no role in either the secular enhancement of dynamical photons \cite{Wang:2014tza} or the logarithmic running of the Coulomb potential \cite{Glavan:2013jca}. Based on expressions (\ref{G4t}) and (\ref{G3final}) the best choice for the noncovariant $\Delta C$ counterterm is, \begin{equation} \Delta C = \frac{\kappa^2 H^{D-4}}{(4\pi)^{\frac{D}2}} \Biggl\{- \frac{(D^2 \!-\! 4D \!+\! 1) \Gamma(D \!-\! 1) \pi \cot(\frac{\pi D}{2})}{4 (D \!-\! 3) \Gamma(\frac{D}2)} + 1 \Biggr\} \; . \label{DeltaC} \end{equation} The unregulated limit of the renormalized tail contribution to $G(x;x')$ is, \begin{equation} G_{\rm tail}(x;x') = \frac{\kappa^2 H^2}{(2 \pi)^4} \Biggl\{-4 \pi^2 \ln(a) i\delta^4(x \!-\! x') - \frac12 \partial^2 \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2)}{\Delta x^2} \Biggr] \Biggr\} . \label{Gtail} \end{equation} Expression (\ref{Gtail}) does not agree with (\ref{GdS}) because the primitive 3-point tail contribution (\ref{G3final}) lacks both the divergence and the associated $\mu$-dependent logarithm of the full 3-point result \cite{Leonard:2013xsa}.
\section{Discussion} Our aim has been to see how much of the intrinsically de Sitter part (\ref{FdS}-\ref{GdS}) of the vacuum polarization arises from replacing the full graviton propagator (\ref{gravprop}) with just its tail part (\ref{gravtail}). Our result is that the tail reproduces all of (\ref{FdS}) but not all of (\ref{GdS}). This means that the graviton tail is responsible for the secular enhancement of dynamical photons (\ref{gravphot}), but not for all of the logarithmic running of the Coulomb potential (\ref{gravCoul}). The remaining parts of (\ref{GdS}) come from using the most singular part of the graviton propagator in the 3-point contribution. Although these terms have no factor of $H^2$, they do contain $\frac1{a a'} = H^2 \eta \eta'$. When the inner derivatives are passed through this term, they can act on the $\eta \eta'$ and leave the required factor of $H^2$. Our result means that the tail term is {\it not} responsible for all the interesting secular effects mediated by the one loop vacuum polarization. This may not be the setback it would seem for the crucial task of extending Starobinsky's stochastic technique \cite{Starobinsky:1986fx,Starobinsky:1994bd} to quantum gravity. The large logarithms of interest derive from three sources: \begin{enumerate} \item{Explicit factors of $\ln(a a')$ and $\ln(H^2 \Delta x^2)$ in the tail part of the graviton propagator (\ref{gravtail});} \item{Factors of $(a a')^{\frac{D}2-2}/(D-4)$ and $(\Delta x)^{D-4}/(D-4)$ which arise either in primitive ultraviolet divergences or in the counterterms which remove them; and} \item{The integration of interaction vertices which one must do in higher loop diagrams.} \end{enumerate} The one loop tail contributions (\ref{Ftail}) and (\ref{Gtail}) that we have computed come from the first two sources.
The reason (\ref{Gtail}) does not give all the interesting parts (\ref{GdS}) of the $G(x;x')$ structure function is that we have missed some ultraviolet divergences from the most singular part of the propagator. {\it These sorts of terms are easy to recover using renormalization group techniques.} The ``hard'' contributions --- the ones for which one loop divergences do not predict higher loop results --- are those from the other two sources. So perhaps the key to dealing with the large logarithms is to combine Starobinsky's technique with the renormalization group. \vskip 1cm \centerline{\bf Acknowledgements} We are grateful for conversations and correspondence with Y. Z. Chu, N. C. Tsamis and C. L. Wang. This work was partially supported by Taiwan MOST grants 103-2112-M-006-001-MY3 and 106-2112-M-006-008-; by the D-ITP consortium, a program of the Netherlands Organization for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW); by NSF grant PHY-1506513; and by the Institute for Fundamental Theory at the University of Florida. \section{Appendix A: $i\Delta_{\Sigma I}(x;x')$ Expansions} The infinite series expansions for the scalar propagators (\ref{scalarprops}) are: \begin{eqnarray} \lefteqn{ i\Delta_{\Sigma A}(x;x') = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \sum_{n=1}^{\infty} \Bigl( \frac{ a a' H^2 \Delta x^2}{4}\Bigr)^n } \nonumber \\ & & \hspace{2cm} \times \Biggl\{ \frac{\Gamma(n \!+\! D \!-\! 1)}{n \, \Gamma(n \!+\! \frac{D}2)} - \frac{ \Gamma(n \!+\! \frac{D}2 \!+\! 1)}{(n \!-\! \frac{D}2 \!+\! 2) (n \!+\! 1)!} \Bigl( \frac{4}{a a' H^2 \Delta x^2} \Bigr)^{\frac{D}2-2} \Biggr\} , \qquad \label{DSigmaA} \\ \lefteqn{ i\Delta_{\Sigma B}(x;x') = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \sum_{n=1}^{\infty} \Bigl( \frac{ a a' H^2 \Delta x^2}{4}\Bigr)^n } \nonumber \\ & & \hspace{3.5cm} \times \Biggl\{ \frac{\Gamma(n \!+\! D \!-\! 2)}{\Gamma(n \!+\! \frac{D}2)} - \frac{ \Gamma(n \!+\! \frac{D}2)}{ (n \!+\!
1)!} \Bigl( \frac{4}{a a' H^2 \Delta x^2} \Bigr)^{\frac{D}2-2} \Biggr\} , \qquad \label{DSigmaB} \\ \lefteqn{ i\Delta_{\Sigma C}(x;x') = \frac{H^{D-2}}{(4\pi)^{\frac{D}2}} \sum_{n=1}^{\infty} \Bigl( \frac{ a a' H^2 \Delta x^2}{4}\Bigr)^n } \nonumber \\ & & \hspace{0cm} \times \Biggl\{ \frac{(n \!+\! 1) \Gamma(n \!+\! D \!-\! 3)}{\Gamma(n \!+\! \frac{D}2)} - \frac{ (n \!-\! \frac{D}2 \!+\! 3) \Gamma(n \!+\! \frac{D}2 \!-\! 1)}{ (n \!+\! 1)!} \Bigl( \frac{4}{a a' H^2 \Delta x^2} \Bigr)^{\frac{D}2-2} \Biggr\} . \qquad \label{DSigmaC} \end{eqnarray} \section{Appendix B: Extracting Derivatives} Evaluating the 3-point contributions requires that we pass derivatives of the photon propagator to the left of $(a a')^{\frac{D}2-2} i\delta \Delta_A(x;x')$ in expressions of the form, \begin{equation} (a a')^{\frac{D}2 -2} i\delta \Delta_A(x;x') \partial_{\mu} \partial'_{\nu} i\Delta(x;x') \; . \end{equation} The propagator $i\Delta(x;x')$ goes like $1/\Delta x^{D-2}$. From equation (\ref{tail}) we see that $(a a')^{\frac{D}2-2} i\delta \Delta_A(x;x')$ contains three distinct sorts of coordinate dependence. The result of passing derivatives through each of these terms is, \begin{eqnarray} \frac1{\Delta x^{D-4}} \, \partial_{\mu} \partial'_{\nu} \frac1{\Delta x^{D-2}} & = & \frac{[D \partial_{\mu} \partial'_{\nu} \!-\! \eta_{\mu\nu} \partial \!\cdot\! \partial' ]}{4 (D \!-\! 3)} \, \frac1{\Delta x^{2D-6}} \; , \label{relation1} \\ (a a')^{\frac{D}2-2} \, \partial_{\mu} \partial'_{\nu} \frac1{\Delta x^{D-2}} & = & \Bigl[ \partial_{\mu} \!-\! \Bigl(\frac{D}2 \!-\! 2\Bigr) H a \delta^0_{~\mu}\Bigr] \nonumber \\ & & \hspace{1cm} \times \Bigl[ \partial'_{\nu} \!-\! \Bigl(\frac{D}2 \!-\!
2\Bigr) H a' \delta^0_{~\nu}\Bigr] \Bigl[ \frac{(a a')^{\frac{D}2-2}}{\Delta x^{D-2}} \Bigr] \; , \label{relation2} \qquad \\ \ln(a a') \, \partial_{\mu} \partial'_{\nu} \frac1{\Delta x^{D-2}} & = & \partial_{\mu} \partial'_{\nu} \Bigl[ \frac{\ln(a a')}{\Delta x^{D-2}} \Bigr] - \partial_{\mu} \Bigl[ \frac{H a' \delta^0_{~\nu}}{\Delta x^{D-2}} \Bigr] - \partial'_{\nu} \Bigl[ \frac{H a \delta^0_{~\mu}}{\Delta x^{D-2}} \Bigr] \; . \label{relation3} \qquad \end{eqnarray} The first terms on the right hand side of relations (\ref{relation1}-\ref{relation3}) give the derivatives acting on the product $(a a')^{\frac{D}2-2} i\delta \Delta_A(x;x') i\Delta(x;x')$. That product is integrable for $D=4$ so we can take its unregulated limit. The secondary terms of relations (\ref{relation2}-\ref{relation3}) cancel in $D=4$ dimensions, so it only remains to consider the second term on the right of relation (\ref{relation1}), \begin{eqnarray} \lefteqn{\partial \!\cdot\! \partial' \frac1{\Delta x^{2D-6}} = \partial \!\cdot\! \partial' \Biggl[ \frac1{\Delta x^{2D-6}} \!-\! \frac{\mu^{D-4}}{\Delta x^{D-2}}\Biggr] - \frac{4 \pi^{\frac{D}2} \mu^{D-4} i\delta^D(x \!-\! x')}{\Gamma(\frac{D}2 \!-\! 1)} \; , } \\ & & \hspace{-.5cm} = -\Bigl( \frac{D \!-\! 4}{2}\Bigr) \partial \!\cdot\! \partial' \Biggl[ \frac{\ln(\mu^2 \Delta x^2)}{\Delta x^{2}} \Biggr] + O\Bigl( (D \!-\! 4)^2\Bigr) - \frac{4 \pi^{\frac{D}2} \mu^{D-4} i\delta^D(x \!-\! x')}{\Gamma(\frac{D}2 \!-\! 1)} \; . \qquad \end{eqnarray} Setting $\mu = \frac12 H$ and putting everything together gives, \begin{eqnarray} \lefteqn{ (a a')^{\frac{D}2 -2} i\delta \Delta_A(x;x') \partial_{\mu} \partial'_{\nu} i\Delta(x;x') = -\frac{H^2 \partial_{\mu} \partial'_{\nu}}{32 \pi^4} \Biggl[ \frac{ \ln(\frac14 H^2 \Delta x^2) \!+\! 2}{\Delta x^2}\Biggr] } \nonumber \\ & & \hspace{-.5cm} + \frac{H^2 \eta_{\mu\nu} \partial \!\cdot\! \partial'}{128 \pi^4} \Biggl[ \frac{\ln(\frac14 H^2 \Delta x^2)}{\Delta x^2} \Biggr] \!+\! 
\frac{H^{D-2} \eta_{\mu\nu}}{(4\pi)^{\frac{D}2}} \frac{ \Gamma(\frac{D}2 \!+\! 1) \, i\delta^D(x \!-\! x')}{2 (D \!-\! 3) (D \!-\! 4)} \!+\! O(D \!-\! 4) . \label{keyID} \qquad \end{eqnarray}
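As an independent cross-check (ours, not part of the original derivation), the purely algebraic content of relation (\ref{relation1}) can be verified symbolically away from coincidence, where the local $\delta$-function terms do not contribute. The sketch below fixes $D = 6$ and uses a Euclidean metric, which is immaterial for this index algebra; only a representative set of component pairs is tested.

```python
# Symbolic spot-check of the derivative-extraction identity (relation1):
#   (1/Δx^{D-4}) ∂_μ ∂'_ν (1/Δx^{D-2})
#       = [D ∂_μ ∂'_ν - η_{μν} ∂·∂'] / (4(D-3)) * (1/Δx^{2D-6}),
# valid away from coincidence (δ-function pieces are not probed here).
import sympy as sp

D = 6                                      # fixed dimension for the check
x = sp.symbols('x0:6', real=True)
y = sp.symbols('y0:6', real=True)          # plays the role of x'
dx2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))    # Δx² (Euclidean)

f = dx2 ** sp.Rational(-(D - 2), 2)        # 1/Δx^{D-2}
g = dx2 ** sp.Rational(-(2 * D - 6), 2)    # 1/Δx^{2D-6}
box = sum(sp.diff(g, x[m], y[m]) for m in range(D))  # ∂·∂' acting on g

checks = []
for mu, nu in [(0, 0), (0, 1), (1, 2)]:    # diagonal and off-diagonal samples
    lhs = dx2 ** sp.Rational(-(D - 4), 2) * sp.diff(f, x[mu], y[nu])
    rhs = (D * sp.diff(g, x[mu], y[nu])
           - (1 if mu == nu else 0) * box) / (4 * (D - 3))
    checks.append(sp.simplify(lhs - rhs) == 0)

all_match = all(checks)
print("relation (relation1) holds componentwise at D = 6:", all_match)
```

The same loop run over every index pair, or at other fixed values of $D$, gives the identical conclusion; the trace term is exercised whenever $\mu = \nu$.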
\section{Introduction} Calder\'on-Zygmund theory was first developed for the Poisson equation in \cite{CZ}, which related the integrability of the gradient of the solution of the Poisson equation to the associated data. This represented the starting point of obtaining a priori estimates in Sobolev spaces for elliptic and parabolic equations. Since we are interested in Calder\'on-Zygmund theory for parabolic equations in this paper, we shall discuss the history of the problem only for parabolic equations and refer the reader to \cite{adimurthi2017sharp} and references therein for the elliptic counterpart. \emph{All the estimates mentioned in this introduction are quantitative in nature, but to avoid being too technical, we only recall the qualitative nature of the bounds. This is sufficient to highlight the nature of the results that we will prove in this paper. } Calder\'on-Zygmund theory for quasilinear parabolic equations started with \cite{AM2}, where the authors considered the following problem: \[ \begin{array}{rcll} u_t - \dv (a(x,t)|\nabla u|^{p-2} \nabla u) &=& - \dv (|{\bf f}|^{p-2} {\bf f}) & \quad \text{in} \ \Om \times (-T,T), \end{array} \] with $a(x,t) \in \text{VMO}$ and $p > \frac{2n}{n+2}$, proving \[ |{\bf f}| \in L^{q}_{loc}\lbr \Om \times (-T,T)\rbr \Longrightarrow |\nabla u| \in L^q_{loc}\lbr \Om \times (-T,T)\rbr \qquad \text{for all} \ q >p. \] After this pioneering work, there have been numerous publications which extended these estimates to other quasilinear parabolic equations with constant $p$-growth. In \cite{bogelein2014global}, the authors improved the estimate in \cite{AM2} to obtain global a priori estimates (with nonhomogeneous boundary data) and proved \[ |{\bf f}| \in L^{q}\lbr \Omega \times (-T+\delta, T)\rbr \Longrightarrow |\nabla u| \in L^q\lbr \Omega \times (-T+\delta, T)\rbr \qquad \text{for all} \ q >p \ \text{and some}\ \delta \in (0,2T).
\] This was subsequently extended in \cite{BOS1} to prove global a priori estimates for more general nonlinear structures satisfying a small BMO condition and for Reifenberg-flat domains (see Section \ref{two} for the precise definitions). In this paper, we are interested in obtaining Calder\'on-Zygmund type bounds for the problem \begin{equation} \label{basic_pde} \left\{ \begin{array}{rcll} u_t - \dv \mathcal{A}(x,t,\nabla u) &=& -\dv (|{\bf f}|^{p(x,t)-2} {\bf f}) & \quad \text{in} \ \Om \times (-T,T),\\ u &=& 0 & \quad \text{on} \ \partial \Omega \times (-T,T). \end{array}\right. \end{equation} Here, the quasilinear operator $\mathcal{A}(x,t,\nabla u)$ is modeled after the well known $p(x,t)$-Laplacian operator having the form $ |\nabla u|^{p(x,t) - 2} \nabla u$ with $p(\cdot) > \frac{2n}{n+2}$. For more on the importance of variable exponent problems, see \cite{AS,CLR,HU,RR,Ru,VVZ} and the references therein. In a recent paper \cite{baroni2014calderon}, the authors were able to show \[ |{\bf f}|^{p(\cdot)} \in L^{q}_{loc}\lbr \Omega \times (-T, T)\rbr \Longrightarrow |\nabla u|^{p(\cdot)} \in L^q_{loc}\lbr \Omega \times (-T, T)\rbr \quad \text{for all} \ 1<q<\infty. \] This was subsequently improved to a global estimate in \cite{byun2016nonlinear}, where the authors proved \begin{equation} \label{byunok} |{\bf f}|^{p(\cdot)} \in L^{q(\cdot)}\lbr \Omega \times (-T, T)\rbr \Longrightarrow |\nabla u|^{p(\cdot)} \in L^{q(\cdot)}\lbr \Omega \times (-T, T)\rbr \quad \text{for all} \ 1<q^- \leq q(\cdot) \leq q^+ <\infty. \end{equation} In particular, they could not take $q^- = 1$. On the other hand, from the definition of weak solution, it is easy to see that the following energy-type estimate holds: \begin{equation} \label{energy} |{\bf f}|^{p(\cdot)} \in L^{1}\lbr \Omega \times (-T, T)\rbr \Longrightarrow |\nabla u|^{p(\cdot)} \in L^{1}\lbr \Omega \times (-T, T)\rbr.
\end{equation} Comparing \eqref{byunok} and \eqref{energy}, it seems reasonable to expect that \eqref{byunok} should hold with $1 \leq q^-\leq q(\cdot) \leq q^+ < \infty$, i.e., \emph{it should be possible to take $q^- =1$}. In this paper, we prove that we can indeed take $q^- = 1$ in \eqref{byunok}. In order to do this, we will obtain improved estimates below the natural exponent $p(\cdot)$ using the method of parabolic Lipschitz truncation developed in the seminal paper \cite{KL}, as well as the unified intrinsic scaling of \cite{adimurthi2018sharp}. In order to prove our results, we need to impose some restrictions on the variable exponent $p(x,t)$, on the nonlinear structure $\mathcal{A}(x,t,\nabla u)$ as well as on the boundary of the domain $\partial \Omega$. These restrictions will be described in detail in Section \ref{two}. The plan of the paper is as follows: In Section \ref{two}, we collect all assumptions that will be needed on the structure of the nonlinearity $\mathcal{A}$, on the domain $\Omega$ and on the variable exponent $p(\cdot)$. In Section \ref{Weak solution}, we define the notion of weak solutions and collect some of their well known properties. In Section \ref{three}, we state the main results of this paper. In Section \ref{four}, we collect all the preliminary results and well known lemmas that will be needed in subsequent parts of the paper. In Section \ref{four-two}, we describe the approximations that will be made along the way. In Section \ref{five} and Section \ref{six}, we prove crucial difference estimates below the natural exponent for energy solutions. In Section \ref{eight}, we demonstrate some important covering arguments. In Section \ref{nine}, the proof of the main theorems will be provided. 
Finally, in Appendix \ref{lipschitz_truncation} and Appendix \ref{lipschitz_truncation_B}, we will describe the construction of test functions having Lipschitz regularity which will be needed to prove the estimates in Section \ref{five} and Section \ref{six}, respectively. \section{Regularity assumptions and notation} \label{two} In this section, we shall collect all the structure assumptions as well as recall several useful lemmas that are already available in the existing literature. \subsection{Metrics needed} Let us first collect a few metrics on $\RR^{n+1}$ that will be used throughout the paper. \begin{definition} \label{parabolic_metric} We define the parabolic metric $d_p$ on $\RR^{n+1}$ as follows: Let $z_1 = (x_1,t_1)$ and $z_2 = (x_2,t_2)$ be any two points on $\RR^{n+1}$, then \begin{equation*} \label{par_met} d_p(z_1,z_2) := \max \mgh{|x_1-x_2|, \sqrt{|t_1-t_2|}}. \end{equation*} \end{definition} Since we will use intrinsically scaled cylinders where the scaling depends on the center of the cylinder, we will also need to consider the following localized parabolic metric: \begin{definition} \label{loc_parabolic_metric} Given a function $1 < p(\cdot) < \infty$, some fixed point $z= (x,t) \in \RR^{n+1}$ and any $\tau > 0$, $d > 0$, we define the localized parabolic metric $d_z^{\tau,d}$ as follows: Let $z_1 = (x_1,t_1)$ and $z_2 = (x_2,t_2)$ be any two points on $\RR^{n+1}$, then \begin{equation*} \label{loc_par_met} d_z^{\tau,d}(z_1,z_2) := \max \mgh{\nscalex{\tau}{z}|x_1-x_2|, \sqrt{\nscalet{\tau}{z}|t_1-t_2|}}.
\end{equation*} \end{definition} \subsection{Structure of the variable exponent} \label{exponent_structure} \begin{definition} \label{definition_p_log} We say that a bounded measurable function $p(\cdot) : \RR^{n+1} \rightarrow \RR$ belongs to the $\log$-H\"older class $\log^{\pm}$, if the following conditions are satisfied: \begin{itemize} \item There exist constants $p^-$ and $p^+$ such that $1< p^- \leq p(z) \leq p^+ < \infty$ for every $z \in \RR^{n+1}$. \item $ |p(z_1) - p(z_2)| \leq \frac{L}{- \log |z_1-z_2|}$ holds for every $ z_1,z_2 \in \RR^{n+1}$ with $ d_p(z_1,z_2) \leq \frac12 $ and for some $L>0$. \end{itemize} \end{definition} \begin{remark}\label{remark_def_p_log} We remark that $p(\cdot)$ is log-H\"{o}lder continuous in $\RR^{n+1}$ if and only if there is a nondecreasing continuous function ${\omega_{\pp}} : [0,\infty) \rightarrow [0,\infty)$ such that \begin{itemize} \item $\lim_{r\rightarrow 0} \omega_{\pp}(r) = 0$ and $|p(z_1)- p(z_2)| \leq \omega_{\pp}(d_p(z_1,z_2))$ for every $z_1,z_2 \in \RR^{n+1}$. \item $\omega_{\pp}(r) \log \lbr \frac{1}{r} \rbr \leq {L}$ holds for all $ 0< r \leq \frac12.$ \end{itemize} The function $\omega_{\pp}$ is called the modulus of continuity of the variable exponent $p(\cdot)$. \end{remark} \subsection{Structure of the domain} The domain that we consider may be nonsmooth but should satisfy some regularity condition. This condition essentially requires that, at each boundary point and at every scale, the boundary of the domain lies between two hyperplanes separated by a distance proportional to the scale.
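As a concrete illustration (this numerical aside is ours; the exponent $p(x) = 2 + 1/\log(e + 1/|x|)$ is merely a convenient example), one can check the log-H\"older condition of Definition \ref{definition_p_log} by sampling: the quantity $|p(x_1) - p(x_2)| \log(1/|x_1 - x_2|)$ should remain bounded, here by $L = 1$, whenever $|x_1 - x_2| \leq \frac12$.

```python
# Numerical sanity check (not from the paper): the one-dimensional exponent
#   p(x) = 2 + 1/log(e + 1/|x|),  p(0) = 2,
# satisfies the log-Hölder condition with constant L = 1, since
# 1/log(e + 1/r) <= 1/log(1/r) and the increment function is subadditive.
import math
import random

def p(x):
    r = abs(x)
    return 2.0 if r == 0.0 else 2.0 + 1.0 / math.log(math.e + 1.0 / r)

random.seed(0)
max_ratio = 0.0
for _ in range(100000):
    x1 = random.uniform(0.0, 1.0)
    x2 = x1 + random.uniform(-0.5, 0.5)
    d = abs(x1 - x2)
    if d == 0.0 or d > 0.5:
        continue
    ratio = abs(p(x1) - p(x2)) * math.log(1.0 / d)
    max_ratio = max(max_ratio, ratio)

print("sup sampled |p(x1)-p(x2)| log(1/|x1-x2|) =", round(max_ratio, 4))
```

The sampled supremum stays strictly below $1$, consistent with the second bullet of Remark \ref{remark_def_p_log} with $L = 1$.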
\begin{definition} \label{reif_flat} Given any $\gamma \in (0,1)$ and any ${\bf S}_0 >0$, we say that $\Omega$ is a $(\gamma,{\bf S}_0)$-Reifenberg flat domain if for every $x_0 \in \partial \Omega$ and every $r \in (0,{\bf S}_0]$, there exists a system of coordinates $\{y_1,y_2,\ldots,y_n\}$ (possibly depending on $x_0$ and $r$) such that in this coordinate system, $x_0 =0$ and \[ B_r(0) \cap \{y_n > \gamma r\} \subset B_r(0) \cap \Omega \subset B_r(0) \cap \{y_n > -\gamma r\}. \] \end{definition} The class of Reifenberg flat domains is standard in obtaining Calder\'on-Zygmund type estimates; for the elliptic case, see \cite{AP2,BO,BO1,BW-CPAM} and references therein, whereas for the parabolic case, see \cite{Bui1,MR3461425,BOS1,MR2836359} and the references therein. \begin{definition} \label{measure_def} A bounded domain $\Omega$ is said to satisfy a uniform measure density condition with a constant $m_e >0$ if for every $x \in \overline{\Omega}$ and every $r >0$, there holds \[ |\Omega^c \cap B_r(x)| \geq m_e |B_r(x)|. \] \end{definition} From the definition of $(\gamma,{\bf S}_0)$-Reifenberg flat domains, it is easy to see that the following property holds: \begin{lemma} \label{measure_density} Let $\gamma \in (0,1/8)$ and ${\bf S}_0>0$ be given and suppose that $\Omega$ is a $(\gamma,{\bf S}_0)$-Reifenberg flat domain. Then the following measure density conditions hold: \begin{equation}\label{measure_one}\begin{array}{c} \sup_{y \in \Omega} \sup_{r \leq {\bf S}_0} \frac{|B_r(y)|}{|B_r(y) \cap \Omega|} \leq \lbr \frac{2}{1-\gamma} \rbr^n \leq \lbr \frac{16}{7} \rbr^n, \\ \inf_{y \in \partial\Omega} \inf_{r \leq {\bf S}_0} \frac{|\Omega^c \cap B_r(y)|}{|B_r(y)|} \geq \lbr \frac{1-\gamma}{2} \rbr^n \geq \lbr \frac7{16} \rbr^n.
\end{array}\end{equation} \end{lemma} \subsection{Structure of the nonlinearity \texorpdfstring{$\mathcal{A}$}{A}.} \label{nonlinear_structure} We first assume that $\mathcal{A}(\cdot,\cdot,\cdot)$ is a Carath\'eodory function in the following sense: \begin{gather} (x,t) \mapsto \mathcal{A}(x,t,\zeta) \ \text{is measurable for every } \ \zeta \in \RR^n \nonumber, \\ \zeta \mapsto\mathcal{A}(x,t,\zeta) \ \text{is continuous for almost every } \ (x,t) \in \RR^n \times \RR \nonumber. \end{gather} Let $\mu \in [0,1]$ be given; then there exist two positive constants $\Lambda_0,\Lambda_1$ such that the following holds for almost every $(x,t) \in \RR^n \times \RR$ and every $\zeta, \eta \in \RR^n$, \begin{gather} (\mu^2 + |\zeta|^2)^{\frac12} |D_{\zeta} \mathcal{A}(x,t,\zeta)| + |\mathcal{A}(x,t,\zeta)| \leq \Lambda_1 (\mu^2 + |\zeta|^2)^{\frac{p(x,t) -1}{2}} \label{abounded}, \\ (\mu^2 + |\zeta|^2 )^{\frac{p(x,t)-2}{2}} |\eta|^2 \Lambda_0 \leq \iprod{D_{\zeta}\mathcal{A}(x,t,\zeta)\eta}{\eta} \label{aellipticity}. \end{gather} We point out that from \eqref{aellipticity}, one can derive the following monotonicity bound: \begin{equation} \label{monotonicity} \iprod{\mathcal{A}(x,t,\zeta) - \mathcal{A}(x,t,\eta)}{\zeta - \eta} \geq \tilde{\Lambda}_0 (\mu^2 + |\zeta|^2 + |\eta|^2)^{\frac{p(x,t)-2}{2}} |\zeta- \eta|^2, \end{equation} where $\tilde{\Lambda}_0 = \tilde{\Lambda}_0(\Lambda_0,n,p^+,p^-)>0$. By inserting $\eta=0$ into \eqref{monotonicity}, we also obtain the following coercivity bound: \begin{equation*}\label{coercivity} \tilde{\Lambda}_2 |\zeta|^{p(x,t)} \leq \iprod{\mathcal{A}(x,t,\zeta)}{\zeta} + \tilde{\Lambda}_1, \end{equation*} where $\tilde{\Lambda}_1 = \tilde{\Lambda}_1 (\Lambda_1,\Lambda_0,p^+,p^-,n)>0$ and $\tilde{\Lambda}_2 = \tilde{\Lambda}_2(\Lambda_1,\Lambda_0,p^+,p^-,n)>0$. \subsection{Smallness assumption} \label{smallness_assumption} In order to prove the main results, we need to assume a smallness condition satisfied by $(p(\cdot),\mathcal{A},\Omega)$.
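For orientation (this numerical aside is ours, and the model operator $\mathcal{A}(\zeta) = (\mu^2 + |\zeta|^2)^{\frac{p-2}{2}} \zeta$ with constant $p$ is an assumption made only for the test), the monotonicity bound \eqref{monotonicity} can be spot-checked by random sampling: the ratio of $\iprod{\mathcal{A}(\zeta) - \mathcal{A}(\eta)}{\zeta - \eta}$ to $(\mu^2 + |\zeta|^2 + |\eta|^2)^{\frac{p-2}{2}} |\zeta - \eta|^2$ should be strictly positive. The sketch only confirms positivity; it does not estimate $\tilde{\Lambda}_0$.

```python
# Monte Carlo spot-check of the monotonicity bound for the model operator
#   A(ζ) = (μ² + |ζ|²)^{(p-2)/2} ζ   (constant p, a hypothetical test case).
import random

def A(zeta, p, mu):
    s = mu * mu + sum(c * c for c in zeta)
    w = s ** ((p - 2.0) / 2.0)
    return [w * c for c in zeta]

random.seed(1)
min_ratio = float('inf')
for _ in range(20000):
    p = random.uniform(1.5, 4.0)               # sampled growth exponents
    mu = random.choice([0.0, 0.5, 1.0])
    z = [random.uniform(-1.0, 1.0) for _ in range(3)]
    e = [random.uniform(-1.0, 1.0) for _ in range(3)]
    diff = [a - b for a, b in zip(z, e)]
    d2 = sum(c * c for c in diff)
    if d2 < 1e-6:                              # avoid near-coincident pairs
        continue
    lhs = sum((a - b) * c for a, b, c in zip(A(z, p, mu), A(e, p, mu), diff))
    base = (mu * mu + sum(c * c for c in z)
            + sum(c * c for c in e)) ** ((p - 2.0) / 2.0)
    min_ratio = min(min_ratio, lhs / (base * d2))

print("smallest sampled monotonicity ratio:", round(min_ratio, 4))
```

Every sampled ratio is strictly positive, as strict monotonicity of the $p$-Laplacian-type operator for $p > 1$ demands.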
\begin{definition}\label{further_assumptions} Let $\gamma\in (0,1/8)$ and ${\bf S}_0>0$ be given; we then say $(p(\cdot),\mathcal{A},\Omega)$ is $(\gamma,{\bf S}_0)$-vanishing if the following three structure conditions are satisfied: \begin{description}[leftmargin=*] \item[(i) Assumption on $p(\cdot)$:] The variable exponent $p(\cdot)$, with modulus of continuity $\omega_{\pp}$ as defined in Definition \ref{definition_p_log} and with $p^- > \frac{2n}{n+2}$, is further assumed to satisfy the smallness condition: \begin{equation} \label{small_px} \sup_{0<r\leq {\bf S}_0} {\omega_{\pp}}(r) \log \lbr \frac{1}{r} \rbr \leq \gamma. \end{equation} \item[(ii) Assumption on $\mathcal{A}$:] For a bounded open set $U \subset \RR^n$, let us denote \begin{equation*} \label{a_difference} \Theta(\mathcal{A},U)(x,t) := \sup_{\zeta \in \RR^n} \lbr[|] \frac{\mathcal{A}(x,t,\zeta)}{(\mu^2 + |\zeta|^2)^{\frac{p(x,t)-1}{2}}} - \avg{\frac{\mathcal{A}(\cdot,t,\zeta)}{(\mu^2 + |\zeta|^2)^{\frac{p(\cdot,t)-1}{2}}}}{U} \rbr[|], \end{equation*} where we have used the notation $\avg{f}{U} := \fint_U f(y)\ dy$. Note that if $\mu=0$, then $\zeta \in \RR^n\setminus \{0\}$. We say that the nonlinearity $\mathcal{A}$ has small BMO with constant $\gamma$ if there holds \begin{equation} \label{small_aa} \sup_{\substack{t_1,t_2 \in \RR,\\ t_1 < t_2}} \sup_{0<r\leq {\bf S}_0} \sup_{y \in \RR^n} \fint_{t_1}^{t_2}\fint_{B_r(y)} \Theta(\mathcal{A},B_r(y))(x,t) \ dx \ dt \leq \gamma. \end{equation} \item[(iii) Assumption on $\partial \Omega$:] The domain $\Omega$ is $(\gamma,{\bf S}_0)$-Reifenberg flat in the sense of Definition \ref{reif_flat}.
\end{description} \end{definition} \subsection{Notation} We shall use the following notations throughout the paper: \begin{itemize} \item We will use $z,\mathfrak{z},\tilde{z},\ldots$ to denote points in $\RR^{n+1}$, symbols $x,\mathfrak{x},\tilde{x},y,\tilde{y},\ldots$ to denote space variables in $\RR^n$ and symbols $t,\mathfrak{t},s,\mathfrak{s},\ldots$ to denote time variables. We will also specifically match symbols, i.e., $z = (x,t)$ or $\mathfrak{z} = (\mathfrak{x},\mathfrak{t})$ and so on. \item In all subsequent sections, the subscript $[\cdot]_h$ will always denote the usual Steklov average. \item In what follows, the function $\omega_{\pp}$ denotes the modulus of continuity of $p(\cdot)$ and $\omega_{\qq}$ denotes the modulus of continuity of $q(\cdot)$. \item We shall write $p(\cdot)$ as well as $p(\cdot,\cdot)$ depending on necessity and we will switch between the two notations without notice throughout the paper. \item For the variable exponent $p(\cdot)$, we shall use $p^{\pm}_{\log}$ to collectively denote the constants $p^+$, $p^-$ and those that are part of the $\log$-H\"older continuity structure of $p(\cdot)$. Analogously, for variable exponents $q(\cdot)$, $r(\cdot)$ and $s(\cdot)$, we shall use $q^{\pm}_{\log}$, $r^{\pm}_{\log}$ and $s^{\pm}_{\log}$ to denote corresponding constants. \item Capital letters with subscripts as in radii ${\bf R}_0,{\bf R}_1\ldots$, or bounding values ${\bf M},{\bf M}_0,{\bf M}_1,\ldots$ will be fixed in subsequent sections once they are chosen. \item We shall use $\apprle$, $\apprge$ and $\approx$ to suppress writing the constants that could possibly change from line to line as long as they depend only on the structure constants of the form $n,p^{\pm}_{\log},q^{\pm}_{\log},\Lambda_0,\Lambda_1,{\bf S}_0$ and related quantities. \item We shall sometimes use $\sim$ to denote variables (without subscripts) that occur only within the proof of the concerned result, for example $\tilde{r}, \tilde{m}, \cdots$.
\item Given a variable exponent $p(\cdot)$, we shall use the following notation: \begin{equation*}\label{notation_p_inf} p_E^- : = \inf_{x \in E} p(x) \qquad \text{and} \qquad p_E^+ := \sup_{x \in E} p(x). \end{equation*} We will drop the set $E$ and denote $p^+:= \sup_{z\in \RR^{n+1}} p(z)$ and $p^-:=\inf_{z\in \RR^{n+1}} p(z)$. \item We will denote $\Omega_T:= \Omega \times (-T,T)$ which is the region on which \eqref{basic_pde} is considered. We will also use the notation $\partial_p$ to denote the parabolic boundary, i.e., \[\partial_p Q_{\rho,s}(x,t):= B_{\rho}(x) \times \{t-s^2\} \bigcup \partial B_{\rho}(x) \times [t-s^2,t+s^2).\] \end{itemize} \subsection{Unified intrinsic cylinders}\label{Intrinsic cylinders} We will describe the intrinsically scaled cylinders that will be used in this paper. Let $\Omega$ be a bounded domain in $\RR^n$, and let $\rho >0$, $s>0$, $\lambda >0$ and $\mathfrak{z} = (\mathfrak{x},\mathfrak{t}) \in \RR^{n+1}$ be given. Furthermore, let $d$ be a fixed exponent satisfying \begin{equation} \label{def_d} \min\left\{ \frac{2}{p^+},1 \right\} > d > \frac{2n}{(n+2)p^-}. \end{equation} We define the following cylinders that will be used throughout the paper: \begin{gather*} Q_{\rho,s}(\mathfrak{z}) := B_{\rho}(\mathfrak{x}) \times (\mathfrak{t} - s^2, \mathfrak{t} + s^2),\\ Q^{\lambda}_{\rho,s}(\mathfrak{z}) := B_{\scalex{\lambda}{\mathfrak{z}}\rho}(\mathfrak{x}) \times (\mathfrak{t} - \scalet{\lambda}{\mathfrak{z}}s^2, \mathfrak{t} + \scalet{\lambda}{\mathfrak{z}}s^2) := B_{\rho}^{\lambda}(\mathfrak{x}) \times I_{s}^{\lambda}(\mathfrak{t}).
\end{gather*} We will also use the following shorthand notation: \begin{equation*} \begin{array}{ll} \Omega_{\rho}(\mathfrak{x}) := B_{\rho}(\mathfrak{x}) \cap \Omega, & K_{\rho,s}(\mathfrak{z}) := Q_{\rho,s}(\mathfrak{z}) \cap \Omega_T, \\ I_{\rho}(\mathfrak{t}) := (\mathfrak{t} - \rho^2, \mathfrak{t} + \rho^2), & I_{\rho}^{\lambda}(\mathfrak{t}) := (\mathfrak{t} - \scalet{\lambda}{\mathfrak{z}}\rho^2, \mathfrak{t} + \scalet{\lambda}{\mathfrak{z}}\rho^2),\\ \Omega^{\lambda}_{\rho}(\mathfrak{x}) := B_{\scalex{\lambda}{\mathfrak{z}}\rho}(\mathfrak{x}) \cap \Omega,& K^{\lambda}_{\rho,s}(\mathfrak{z}) := Q^{\lambda}_{\rho,s}(\mathfrak{z}) \cap \Omega_T, \\ \partial_w \Omega_{\rho}(\mathfrak{x}):= B_{\rho}(\mathfrak{x}) \cap \partial \Omega, & \partial_w K_{\rho,s}(\mathfrak{z}) := K_{\rho,s}(\mathfrak{z}) \cap \left\{ \partial \Omega \times (-T,T) \right\},\\ \partial_w \Omega^{\lambda}_{\rho}(\mathfrak{x}):= B_{\scalex{\lambda}{\mathfrak{z}}\rho}(\mathfrak{x}) \cap \partial \Omega, & \partial_w K^{\lambda}_{\rho,s}(\mathfrak{z}) := K^{\lambda}_{\rho,s}(\mathfrak{z}) \cap \left\{ \partial \Omega \times (-T,T) \right\},\\ \partial_p Q_{\rho,s}(\mathfrak{z}):= B_{\rho}(\mathfrak{x}) \times \{\mathfrak{t}-s^2\} \bigcup \partial B_{\rho}(\mathfrak{x}) \times I_s(\mathfrak{t}), & \partial_p Q^{\lambda}_{\rho,s}(\mathfrak{z}):= B^{\lambda}_{\rho}(\mathfrak{x}) \times \{\mathfrak{t}-\scalet{\lambda}{\mathfrak{z}}s^2\} \bigcup \partial B^{\lambda}_{\rho}(\mathfrak{x}) \times I^{\lambda}_s(\mathfrak{t}),\\ Q_{\rho}(\mathfrak{z}) := Q_{\rho,\rho}(\mathfrak{z}), \quad K_{\rho}(\mathfrak{z}) := K_{\rho,\rho}(\mathfrak{z}), & Q_{\rho}^{\lambda}(\mathfrak{z}) := Q_{\rho,\rho}^{\lambda}(\mathfrak{z}), \quad K_{\rho}^{\lambda}(\mathfrak{z}) := K_{\rho,\rho}^{\lambda}(\mathfrak{z}).
\end{array} \end{equation*} We will also have to deal with half spaces, and use the following notation in that regard: \begin{equation*} \begin{array}{c} B_{\rho}^+(\mathfrak{x}) := B_{\rho}(\mathfrak{x}) \cap \{ \mathfrak{x}_n > 0\}, \qquad B_{\rho}^{\lambda,+}(\mathfrak{x}) := B_{\rho}^{\lambda}(\mathfrak{x}) \cap \{ \mathfrak{x}_n > 0\}, \\ Q_{\rho}^{\lambda,+}(\mathfrak{z}) := B_{\scalex{\lambda}{\mathfrak{z}}\rho}^+(\mathfrak{x}) \times \lbr \mathfrak{t} - \scalet{\lambda}{\mathfrak{z}}\rho^2,\mathfrak{t} + \scalet{\lambda}{\mathfrak{z}}\rho^2\rbr,\\ T_{\rho}^{\lambda}(\mathfrak{z}) := B_{\scalex{\lambda}{\mathfrak{z}}\rho}(\mathfrak{x}) \cap \{\mathfrak{x}_n >0\} \times \lbr \mathfrak{t} - \scalet{\lambda}{\mathfrak{z}}\rho^2,\mathfrak{t} + \scalet{\lambda}{\mathfrak{z}}\rho^2\rbr. \end{array} \end{equation*} An important point to note is that the cylinders considered above are intrinsically scaled in space and time simultaneously. This enables us to handle the singular case ($p(\cdot) <2$) and the degenerate case ($p(\cdot) >2$) in a unified way. \subsection{Restriction on radii} \label{sub_radii} In this subsection, let us collect all the restrictions we will make on some universal constants. First, let us describe all the restrictions on the radius $\rho_0$: \begin{description} \descitem{(R1)}{R1} Let $\rho_0 \leq \frac14$ be such that $|Q_{\rho_0}| = (\rho_0)^{n+2} |B_1|\leq 1$. \descitem{(R2)}{R2} Let $\rho_0$ be such that $\frac{\omega_{\pp}(8\rho_0)}{p^-} < \min\{\tilde{\beta}_1,\tilde{\beta}_2\}$, where $\tilde{\beta}_1$ is from Theorem \ref{high_weak} and $\tilde{\beta}_2$ is from Theorem \ref{high_very_weak} applied with ${\bf M}_{\vec{f}}={\bf M}_0$. Here ${\bf M}_0$ is given in \eqref{def_M_0}. \descitem{(R3)}{R3} Let $\rho_0 \leq \min \{ \tilde{\rho}_1, \tilde{\rho}_2\}$, where $\tilde{\rho}_1$ is from Theorem \ref{high_weak} and $\tilde{\rho}_2$ is from Theorem \ref{high_very_weak} applied with ${\bf M}_{\vec{f}}={\bf M}_0$.
\descitem{(R4)}{R4} Let $1024\rho_0 \leq \min\left\{ \frac{1}{{\bf M}_0},\frac{1}{{\bf M}_u},\frac{1}{{\bf M}_w}\right\}$, where ${\bf M}_0$, ${\bf M}_u$, and ${\bf M}_w$ are from \eqref{def_M_0}, \eqref{size_u_f}, and \eqref{size_w}, respectively. \descitem{(R5)}{R5} Let $\rho_0$ satisfy $\frac{\omega_{\pp}(12\rho_0)}{{p^{-}} -1} \leq \beta_0 \overset{\text{Section \ref{def_be_0}}}{\le}\min\{\tilde{\beta}_1,\tilde{\beta}_2\}$, where $\tilde{\beta}_1$ is from Theorem \ref{high_weak} and $\tilde{\beta}_2$ is from Theorem \ref{high_very_weak} applied with ${\bf M}_{\vec{f}}={\bf M}_0$. \descitem{(R6)}{R6} With ${\bf M}_p = \max\{ {\bf M}_0, {\bf M}_u, {\bf M}_w\}$, we will apply Lemma \ref{scaled_poincare} and Theorem \ref{measure_density_poincare}, which will impose the restriction $\rho_0 \leq {\bf R}_p$. \descitem{(R7)}{R7} Let $\rho_0 \le \frac{{\bf S}_0}{\Gamma^2}$, where $\Gamma$ is given in \eqref{cv0-1} and ${\bf S}_0$ is from Definition \ref{further_assumptions}. \descitem{(R8)}{R8} Let $\omega_{\pp}(2\rho_0) \leq \min \lbr[\{] \frac{p^-\sigma}{2}, \frac{\Lambda_0}{2\Lambda_1}, \frac{(p^--1)\sigma}{4}, \frac14, d_0 p^-, d_0 p^- (p^--1) \rbr[\}]$, where $\sigma$ is given in Remark \ref{high_int_remark} and $d_0$ is defined in \eqref{def_d_0}. \descitem{(R9)}{R9} Let $\omega_{\qq}(2\rho_0) \leq \min \left\{\frac{q^-\sigma}{4}, \frac14 \right\}$, where $\sigma$ is given in Remark \ref{high_int_remark} and $q(\cdot)$ is the exponent appearing in Theorem \ref{main_theorem1}. \end{description} \begin{remark}\label{remark_radius}Note that all the restrictions on $\rho_0$ are such that $\rho_0 = \rho_0(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log},{\bf M}_0)\in (0,1/4)$, and henceforth we will always take the radius $\rho$ to satisfy $128\rho \leq \rho_0$.
\end{remark} \subsection{Fixing a few other exponents} \label{def_be_0} We will first collect all the restrictions on the higher integrability exponent: \begin{description} \descitem{(B1)}{B1} Let $0<\beta_0 \leq \min\mgh{ \frac{1}{p^+}, \tilde{\beta}_1, \tilde{\beta}_2, \tilde{\beta}_3, \tilde{\beta}_4 }$, where $\tilde{\beta}_1$, $\tilde{\beta}_2$, $\tilde{\beta}_3$, and $\tilde{\beta}_4$ are given in Theorem \ref{high_weak}, Theorem \ref{high_very_weak}, Theorem \ref{first_diff_thm}, and Theorem \ref{second_diff_thm}, respectively. \descitem{(B2)}{B2} Once $\beta_0$ is fixed, let $\sigma_0$ be a number chosen such that $0<\sigma_0 \le \min\mgh{\frac{\beta_0}{3(1-\beta_0)}, \frac{q^- - 1}{3}, 1}$ holds. \descitem{(B3)}{B3} Let $\vartheta_0 = \max\{ \tilde{\vartheta}_1, \tilde{\vartheta}_2\}$, where $\tilde{\vartheta}_1$ and $\tilde{\vartheta}_2$ are from Theorem \ref{high_weak} and Theorem \ref{high_very_weak} with ${\bf M}_{\vec{f}} = {\bf M}_0$. Here ${\bf M}_0$ is given in \eqref{def_M_0}. \end{description} \begin{remark} \label{high_int_remark}Henceforth, we will assume $0<\sigma \leq \sigma_0$ and $0<\beta \leq \beta_0$.\end{remark} \section{Weak solution}\label{Weak solution} \subsection{Sobolev spaces with variable exponents} Let $\tilde{\Omega}$ be a bounded domain in $\RR^{N}$ for some $N\geq 1$, and let $s(\cdot)$ be an admissible variable exponent as in Section \ref{exponent_structure}. Given a positive integer $m$, the \emph{variable exponent Lebesgue space} $L^{s(\cdot)}(\tilde{\Omega},\RR^m)$ consists of all measurable functions ${\bf f}: \tilde{\Omega} \to \RR^m$ satisfying \[ \int_{\tilde{\Omega}} |{\bf f}(z)|^{s(z)} \ dz < \infty, \] endowed with the Luxemburg norm \[ \|{\bf f}\|_{L^{s(\cdot)}(\tilde{\Omega},\RR^m)} := \inf \left\{ \lambda >0 : \int_{\tilde{\Omega}} \left|\frac{{\bf f}(z)}{\lambda} \right|^{s(z)} \ dz \leq 1 \right\}. 
\] Analogously, we can define the \emph{variable exponent Sobolev space} as \[ W^{1,s(\cdot)}(\tilde{\Omega},\RR^m) := \{ {\bf f} \in L^{s(\cdot)}(\tilde{\Omega},\RR^m) : \nabla{\bf f} \in L^{s(\cdot)}(\tilde{\Omega},\RR^{mN}) \}, \] equipped with the norm \begin{equation}\label{norm_up} \| {\bf f} \|_{W^{1,s(\cdot)}(\tilde{\Omega},\RR^m)} := \| {\bf f} \|_{L^{s(\cdot)}(\tilde{\Omega},\RR^m)} + \| \nabla {\bf f} \|_{L^{s(\cdot)}(\tilde{\Omega},\RR^{mN})}. \end{equation} We denote by $W_0^{1,s(\cdot)}(\tilde{\Omega},\RR^m)$ the closure of $C_c^\infty(\tilde{\Omega},\RR^m)$ under the norm from \eqref{norm_up}. All the function spaces mentioned above are separable Banach spaces. For $m=1$, we write $L^{s(\cdot)}(\tilde{\Omega})$ and $W^{1,s(\cdot)}(\tilde{\Omega})$ for simplicity. We will also use the following modular function: \[ \varrho_{L^{s(\cdot)}(\tilde{\Omega})}(f) := \int_{\tilde{\Omega}} |f(z)|^{s(z)} \ dz.\] We mention the following useful relation between the modular and the norm in variable exponent spaces (see \cite[Lemma 3.2.5]{diening} for details): \begin{lemma} \label{integral_norm} For any $f \in L^{s(\cdot)}(\tilde{\Omega})$, the following holds: \begin{equation*} \label{norm_integral} \min \left\{ \varrho_{L^{s(\cdot)}(\tilde{\Omega})} (f)^{\frac{1}{s^-}}, \varrho_{{L^{s(\cdot)}(\tilde{\Omega})}} (f)^{\frac{1}{s^+}} \right\} \leq \| f\|_{L^{s(\cdot)}(\tilde{\Omega})} \leq \max \left\{ \varrho_{{L^{s(\cdot)}(\tilde{\Omega})}} (f)^{\frac{1}{s^-}}, \varrho_{{L^{s(\cdot)}(\tilde{\Omega})}} (f)^{\frac{1}{s^+}} \right\}. \end{equation*} \end{lemma} Let us now define some function spaces involving time.
Let $\Omega \subset \RR^n$ be a bounded domain. The space $L^{s(\cdot)}\lbr-T,T; W^{1,s(\cdot)}(\Omega)\rbr$ is defined as \[ L^{s(\cdot)}\lbr-T,T; W^{1,s(\cdot)}(\Omega)\rbr := \left\{ f \in L^{s(\cdot)}(\Omega_T) : \nabla_{\text{space}} f \in L^{s(\cdot)}(\Omega_T,\RR^{n}) \right\}, \] equipped with the norm \[ \| f \|_{L^{s(\cdot)}\lbr-T,T; W^{1,s(\cdot)}(\Omega)\rbr} := \| f \|_{L^{s(\cdot)}(\Omega_T)} + \| \nabla f \|_{L^{s(\cdot)}(\Omega_T,\RR^{n})}. \] We shall define $L^{s(\cdot)}\lbr-T,T; W_0^{1,s(\cdot)}(\Omega)\rbr := L^{s(\cdot)}\lbr-T,T; W^{1,s(\cdot)}(\Omega)\rbr \cap L^1\lbr-T,T; W_0^{1,1}(\Omega)\rbr$, and we denote by $L^{s(\cdot)}\lbr-T,T; W^{1,s(\cdot)}(\Omega)\rbr'$ the dual space of $L^{s(\cdot)}\lbr-T,T; W_0^{1,s(\cdot)}(\Omega)\rbr$. We remark that if $s(\cdot)$ is a constant function, then all the function spaces considered above become the well-known classical parabolic Sobolev spaces. \subsection{Definition of weak solution} There is a well-known difficulty in defining the notion of solution for \eqref{basic_pde} due to the \emph{lack of a time derivative of $u$}. To overcome this, one can use either Steklov averages or convolution in time. In this paper, we shall use the former approach (see also \cite[Chapter 2]{DiB1} for further details). Let us first define the Steklov average as follows: for any $h \in (0, 2T)$, we define \begin{equation}\label{stek1} [u]_{h}(\cdot,t) := \left\{ \begin{array}{ll} \Xint\tilthlongdash_t^{t+h} u(\cdot, \tau) \ d\tau \quad & t\in (-T,T-h), \\ 0 & \text{else}. \end{array}\right. \end{equation} Let us now define the notion of solution that will be considered in this paper.
\begin{definition} \label{weak_solution} Let $h \in (0,2T)$ be given. We then say that $u \in L^2\lbr-T,T; L^2(\Omega)\rbr \cap L^{p(\cdot)}\lbr-T,T; W_0^{1,p(\cdot)}(\Omega)\rbr$ is a weak solution of \eqref{basic_pde} if for any $\phi \in W_0^{1,{p(\cdot)}}(\Omega)$, the following holds: \begin{equation*} \label{def_weak_solution} \int_{\Omega \times \{t\}} \frac{d [u]_{h}}{dt} \phi + \iprod{[\mathcal{A}(x,t,\nabla u)]_{h}}{\nabla \phi} \ dx = 0 \quad \text{for almost every} \ -T < t < T-h. \end{equation*} \end{definition} \subsection{Existence and uniqueness of weak solution} \label{existence} We begin with the following well-known existence and uniqueness result: \begin{proposition}[\cite{erhardt2013existence,DNR12}] \label{ext_sol} Let $\tilde{\Omega}$ be any bounded domain satisfying a uniform measure density condition (see Definition \ref{measure_def}). Suppose that $\vec{f} \in L^{p(\cdot)}(\tilde{\Omega}_T)$, $f \in L^{p(\cdot)}\lbr-T,T; W^{1,p(\cdot)}(\tilde{\Omega})\rbr$ with $\ddt{f} \in L^{p(\cdot)}\lbr-T,T; W^{1,p(\cdot)}(\tilde{\Omega})\rbr'$ and $f_0 \in L^{2}(\tilde{\Omega})$ are given. Then there is a unique weak solution $\phi \in C^0\lbr-T,T;L^2(\tilde{\Omega})\rbr\cap L^{p(\cdot)}\lbr-T,T; W^{1,p(\cdot)}(\tilde{\Omega})\rbr$ solving \begin{equation*} \label{ext_u} \left\{ \begin{array}{rcll} \phi_t - \dv \mathcal{D}(z,\nabla \phi) &=& -\dv |\vec{f}|^{p(\cdot)-2}\vec{f} & \quad \text{in} \ \tilde{\Omega}_T, \\ \phi &=& f & \quad \text{on} \ \partial \tilde{\Omega} \times (-T,T),\\ \phi(\cdot,-T) & = & f_0 & \quad \text{on} \ \tilde{\Omega}, \end{array}\right. \end{equation*} where $\mathcal{D}$ is any operator satisfying all the assumptions in Section \ref{nonlinear_structure}.
Moreover, if $f =0$, then we have the following energy estimate: \begin{equation*} \label{energy_phi} \sup_{-T \leq t \leq T} \|\phi(\cdot,t)\|_{L^2(\tilde{\Omega})}^2 + \iint_{\tilde{\Omega}_T} |\nabla \phi|^{p(\cdot)} \ dz \apprle_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} \lbr \iint_{\tilde{\Omega}_T}\left[|\vec{f}|^{p(\cdot)} + 1\right] \ dz + \|f_0\|_{L^2(\tilde{\Omega})}^2\rbr. \end{equation*} \end{proposition} Returning to our problem \eqref{basic_pde}, Proposition \ref{ext_sol} yields the following existence and uniqueness result: \begin{corollary} There exists a unique weak solution $u \in C^0\lbr-T,T;L^2(\Omega)\rbr\cap L^{p(\cdot)}\lbr-T,T; W^{1,p(\cdot)}(\Omega)\rbr$ solving \eqref{basic_pde} with the estimate \begin{equation} \label{energy_u} \sup_{-T \leq t \leq T} \|u(\cdot,t)\|_{L^2(\Omega)}^2 + \iint_{\Omega_T} |\nabla u|^{p(\cdot)} \ dz \leq C_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} \iint_{\Omega_T}\left[|{\bf f}|^{p(\cdot)} + 1\right] \ dz. \end{equation} \end{corollary} \section{Main results} \label{three} We now state the main results of this paper. Let us first set \begin{gather}\label{vt_def} \vartheta(z) := \frac{1}{-\frac{n}{p(z)} + \frac{nd}{2} + d} \quad \text{and} \quad \vartheta^+ := \sup_{z\in \Omega_T} \vartheta(z), \end{gather} where the constant $d$ is given in \eqref{def_d}. The first theorem concerns a local estimate on small parabolic cylinders. \begin{theorem} \label{main_theorem1} Assume that $u$ is the weak solution of the problem \eqref{basic_pde} under the structure conditions \eqref{abounded} and \eqref{aellipticity}. Let $0< {\bf S}_0 < 1$, and $q(\cdot)$ be log-H\"{o}lder continuous satisfying $1 < q^- \le q(\cdot) \le q^+ < \infty$.
There exist constants ${\gamma_0} \in (0,1/8)$ and ${\beta_0} \in (0,1/4)$, both depending only on $\Lambda_0$, $\Lambda_1$, $p^{\pm}_{\log}$, $q^{\pm}_{\log}$, $n$, such that if $(p(\cdot),\mathcal{A},\Omega)$ is $(\gamma,{\bf S}_0)$-vanishing for some $\gamma \in (0,{\gamma_0})$, then there exists a constant $C_0 = {C_0}_{(\Lambda_0,\Lambda_1,p^{\pm}_{\log}, q^{\pm}_{\log}, n, {\bf S}_0)}>0$ such that for any $\mathfrak{z} \in \Omega_T$, $\beta \in (0,{\beta_0})$ and $\rho \in (0,1/(C_0 {\bf M})]$, we have \begin{equation*}\label{main-r1} \begin{aligned} \Yint\tiltlongdash_{K_{\rho}(\mathfrak{z})} |\nabla u|^{p(z)(1-\beta)q(z)} \ dz &\le C \left\{\Yint\tiltlongdash_{K_{4\rho}(\mathfrak{z})} |\nabla u|^{p(z)(1-\beta)} \ dz + \lbr \Yint\tiltlongdash_{K_{4\rho}(\mathfrak{z})} |{\bf f}|^{p(z)(1-\beta)q(z)} \ dz \rbr^{\frac{1}{q(\mathfrak{z})}} +1 \right\}^{1+ \vartheta(\mathfrak{z})(q(\mathfrak{z}) -1)} \end{aligned} \end{equation*} for some constant $C=C_{(\Lambda_0,\Lambda_1,p^{\pm}_{\log}, q^{\pm}_{\log}, n)}>0$. Here ${\bf M}$ and $\vartheta(\mathfrak{z})$ are given in \eqref{def_MM} and \eqref{vt_def}, respectively. \end{theorem} In the above theorem, it is important to note that the exponent satisfies $q^->1$, whereas the estimate features $p(z)(1-\beta)q(z)$ as the integrability exponent. In particular, the factor $(1-\beta)$ in the exponent provides a sufficient gap to prove the endpoint version of the result, as highlighted in the introduction. To do this, we use a standard covering argument followed by a uniformization of the exponents, which enables us to remove the factor $(1-\beta)$. Thus our main theorem now takes the following form: \begin{theorem} \label{main_theorem2} Let $M^+ > 1$ and let $r(\cdot)$ be log-H\"{o}lder continuous satisfying $1 \le r^- \leq r(\cdot) \leq r^+ < M^+ < \infty$.
Then under the assumptions in Theorem \ref{main_theorem1}, there is a constant ${\gamma_0} \in (0,1/4)$ depending only on $\Lambda_0$, $\Lambda_1$, $p^{\pm}_{\log}$, $r^{\pm}_{\log}$, $M^+$, $n$, such that if $(p(\cdot),\mathcal{A},\Omega)$ is $(\gamma,{\bf S}_0)$-vanishing for some $\gamma \in (0,{\gamma_0})$, then there exists a constant $C = C_{(\Lambda_0, \Lambda_1, p^{\pm}_{\log}, r^{\pm}_{\log}, M^+, n, \Omega_T, {\bf S}_0)}>0$ such that the following global bound holds: \begin{equation*}\label{main-r2} \iint_{\Omega_T} |\nabla u|^{p(z)r(z)} \ dz \le C \left\{ \lbr \iint_{\Omega_T} |{\bf f}|^{p(z)r(z)} \ dz \rbr^{\gh{1+\vartheta^+ \gh{M^+ -1}}(n+3)M^+ - (n+2)} +1\right\}, \end{equation*} where the constant $\vartheta^+$ is given in \eqref{vt_def}. \end{theorem} \begin{remark} If $p(\cdot) \equiv p$, then we can take $d = \min \mgh{\frac2p,1}$ in \eqref{def_d} (see \cite{adimurthi2018sharp} for more details). Substituting this into \eqref{vt_def} yields \begin{equation*} \vartheta(z) \equiv \vartheta = \left\{ \begin{array}{lrl} \frac{p}{2} &\text{if} &p \ge 2,\\ \frac{2p}{p(n+2)-2n} &\text{if} &\frac{2n}{n+2} < p < 2. \end{array} \right. \end{equation*} Indeed, for $p \geq 2$ we have $d = \frac{2}{p}$, so that $-\frac{n}{p} + \frac{nd}{2} + d = \frac{2}{p}$, whereas for $\frac{2n}{n+2} < p < 2$ we have $d = 1$, so that $-\frac{n}{p} + \frac{n}{2} + 1 = \frac{p(n+2)-2n}{2p}$. This is the standard scaling deficit coefficient introduced in \cite{AM}. \end{remark} \section{Some useful inequalities} \label{four} In this section, we shall collect, and in some cases prove, well-known estimates that will be used in subsequent sections. We first recall an integral version of Poincar\'e's inequality, which was proved in \cite[Lemma 4.12]{adimurthi2017sharp}: \begin{lemma} \label{scaled_poincare} Let $s(\cdot) \in \log^{\pm}$ and let ${\bf M}_{p} \ge 1$ be given. Define ${\bf R}_p := \min \left\{\frac{1}{2{\bf M}_p}, \frac1{\omega_n^{1/n}}, \frac12 \right\}$.
Then for any $\phi \in W^{1,s(\cdot)}(B_{4r})$ with $4r < {\bf R}_p$ satisfying \begin{equation*}\label{assumption_on_phi_no_weight} \int_{B_{4r}} |\nabla \phi (x)|^{s(x)} \ dx + 1 \leq {\bf M}_p, \end{equation*} the following estimate holds: \[ \int_{B_r} \lbr \frac{|\phi - \avg{\phi}{B_r}|}{\diam(B_r)}\rbr^{s(x)} \ dx \apprle_{(n,s^{\pm}_{\log})} \int_{B_{r}} |\nabla \phi(x)|^{s(x)} \ dx + |B_{r}|, \] where we have used the notation $\avg{\phi}{B_r} := \fint_{B_r} \phi(y)\ dy$. Since $\diam(B_r) = 2r \leq {\bf R}_p <1$, we also obtain \[ \int_{B_r} {|\phi - \avg{\phi}{B_r}|}^{s(x)} \ dx \apprle_{(n,s^{\pm}_{\log})} \int_{B_{r}} |\nabla \phi(x)|^{s(x)} \ dx + |B_{r}|. \] \end{lemma} Another Poincar\'e inequality that will be needed is one where the function has a reasonably large zero set: \begin{theorem} \label{measure_density_poincare} Let $s(\cdot) \in \log^{\pm}$ and let ${\bf M}_p\geq 1$ and $\varepsilon \in (0,1)$ be given. Define ${\bf R}_p := \min \left\{\frac{1}{2{\bf M}_p}, \frac1{\omega_n^{1/n}}, \frac12 \right\}$. For any $\phi \in W^{1,s(\cdot)}(B_{2r})$ with $2r < {\bf R}_p$ satisfying \begin{gather*} |N(\phi)| := |\{ x \in B_r : \phi(x) =0\}|> \varepsilon |B_r|\txt{and} \int_{B_{2r}} |\nabla \phi (x)|^{s(x)} \ dx + 1 \leq {\bf M}_p, \end{gather*} the following estimate holds: \[ \int_{B_r} \lbr \frac{|\phi|}{\diam(B_r)}\rbr^{s(x)} \ dx \apprle_{(s^{\pm}_{\log},n,\varepsilon)} \int_{B_{r}} |\nabla \phi(x)|^{s(x)} \ dx + |B_{r}|. \] \end{theorem} We note that Theorem \ref{measure_density_poincare} is slightly different from the one proved in \cite[Theorem 4.13]{adimurthi2017sharp}. In order to obtain this improvement, where the ball $B_r$ is the same on both sides of the inequality, we can repeat the arguments in the proof of \cite[Theorem 4.13]{adimurthi2017sharp} and combine them with the technical lemma from \cite[Lemma 3.4]{han2011elliptic}.
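For orientation, we note (this specialization is standard and is not needed elsewhere in the paper) that when the exponent is constant, i.e., $s(\cdot) \equiv s \in (1,\infty)$, the first estimate of Lemma \ref{scaled_poincare} follows from the classical Poincar\'e inequality
\[ \int_{B_r} \lbr \frac{|\phi - \avg{\phi}{B_r}|}{\diam(B_r)}\rbr^{s} \ dx \apprle_{(n,s)} \int_{B_{r}} |\nabla \phi(x)|^{s} \ dx, \]
which in this case holds without the additive error term $|B_{r}|$ and without the a priori bound involving ${\bf M}_p$; both of these features are artifacts of the variable exponent setting.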
The next lemma that we need is an estimate in the $L\log L$-space, which can be found in \cite{AM} and the references therein: \begin{lemma} \label{llogl} Let $\beta >0$ and let $s>1$. Then for any $f \in L^s(\tilde{\Omega})$, we have \[ \fint_{\tilde{\Omega}} |f| \lbr[[] \log \lbr e + \frac{|f|}{\avg{|f|}{\tilde{\Omega}}}\rbr \rbr[]]^{\beta}\ dx \apprle_{(n,s,\beta)} \lbr \fint_{\tilde{\Omega}} |f|^s \ dx \rbr^{\frac{1}{s}}, \] where we have used the notation ${\avg{|f|}{\tilde{\Omega}}} := \fint_{\tilde{\Omega}} |f(y)|\ dy$. \end{lemma} We record a useful property as follows: \begin{lemma}\label{useful int} Let $\tilde{\Omega}$ be an open set in $\RR^N$ and let $q > s \ge 0$. For $g \in L^1(\tilde{\Omega})$, we have \begin{equation}\label{useful int1} \int_{\tilde{\Omega}} |g|_k^{q-s} |g|\ dx = (q-s) \int_0^k \alpha^{q-s-1} \int_{\{x\in \tilde{\Omega} : |g(x)| > \alpha\}} |g(x)| \ dx \ d\alpha, \end{equation} where the truncation function $|g|_k := \min\mgh{|g|, k}$ for some constant $k>0$. If $g \in L^{q-s+1}(\tilde{\Omega})$, then \eqref{useful int1} also holds for $k=\infty$. \end{lemma} \begin{proof} Writing $|g|_k^{q-s} = (q-s) \int_0^{|g|_k} \alpha^{q-s-1} \ d\alpha$ and noting that $\{x\in \tilde{\Omega} : |g(x)|_k > \alpha\} = \{x\in \tilde{\Omega} : |g(x)| > \alpha\}$ for every $\alpha \in (0,k)$, the identity \eqref{useful int1} follows from Fubini's theorem. The case $k=\infty$ then follows by the monotone convergence theorem. \end{proof} We also use the following technical lemma which was proved in \cite[Lemma 4.3]{han2011elliptic}: \begin{lemma}\label{useful tech} Let $f$ be a bounded nonnegative function on $[\tau_0, \tau_1]$ with $\tau_0 \ge 0$. Suppose that for all $\tau_0 \le s_1 < s_2 \le \tau_1$, we have \begin{equation*} f(s_1) \le \theta f(s_2) + \frac{P_1}{(s_2-s_1)^k} + P_2, \end{equation*} for some $k,P_1,P_2 \ge 0$ and $\theta \in [0,1)$. Then for any $\tau_0 \le s_1 < s_2 \le \tau_1$, there holds $$ f(s_1) \apprle_{(k,\theta)} \mgh{\frac{P_1}{(s_2-s_1)^k} + P_2}.
$$ \end{lemma} \subsection{Maximal Function} For any $f \in L^1(\RR^{n+1})$, let us now define the strong maximal function in $\RR^{n+1}$ as follows: \begin{equation} \label{par_max} \mathcal{M}(|f|)(x,t) := \sup_{\tilde{Q} \ni(x,t)} \Yint\tiltlongdash_{\tilde{Q}} |f(y,s)| \ dy \ ds, \end{equation} where the supremum is taken over all parabolic cylinders $\tilde{Q} = \tilde{Q}_{a,b}$ with $a,b \in \RR^+$ such that $(x,t)\in \tilde{Q}$. An application of the Hardy-Littlewood maximal theorem in the $x$- and $t$-directions shows that its conclusions still hold for this type of maximal function (see \cite[Lemma 7.9]{Gary} for details): \begin{lemma} \label{max_bound} If $f \in L^1(\RR^{n+1})$, then for any $\alpha >0 $, there holds \[ |\{ z \in \RR^{n+1} : \mathcal{M}(|f|)(z) > \alpha\}| \leq \frac{5^{n+2}}{\alpha} \|f\|_{L^1(\RR^{n+1})}, \] and if $f \in L^{\vartheta}(\RR^{n+1})$ for some $1 < \vartheta \leq \infty$, then there holds \[ \| \mathcal{M}(|f|) \|_{L^{\vartheta}(\RR^{n+1})} \leq C_{(n,\vartheta)} \| f \|_{L^{\vartheta}(\RR^{n+1})}. \] \end{lemma} \section{Approximations} \label{four-two} In this section, we describe gradient higher integrability type results and the approximations that will be made. \subsection{Gradient higher integrability estimates} In this subsection, let us collect a few important higher integrability results that will be used throughout the paper. In order to state the general theorems, let $\phi \in L^{p(\cdot)}\lbr -T,T;W_0^{1,p(\cdot)}(\Omega)\rbr$ be a weak solution of \begin{equation} \label{pde_high} \left\{ \begin{array}{rcll} \phi_t - \dv \mathcal{A}(x,t,\nabla \phi) &=& -\dv (|\vec{f}|^{p(x,t)-2} \vec{f}) & \quad \text{in} \ \Om \times (-T,T),\\ \phi &=& 0 & \quad \text{on} \ \partial \Omega \times (-T,T), \end{array}\right. \end{equation} where the nonlinearity is assumed to satisfy \eqref{abounded} and \eqref{monotonicity}.
Here the domain $\Omega$ is assumed to satisfy a uniform measure density condition with constant $m_e$ as defined in Definition \ref{measure_def}. Let us define \begin{equation} \label{deb_bm_f} {\bf M}_{\vec{f}}:= \iint_{\Omega_T} \lbr[[]|\vec{f}|^{p(z)}+1\rbr[]]\ dz + 1, \end{equation} which combined with \eqref{energy_u} shows \[ {\bf M}_{\phi}:= \iint_{\Omega_T} \lbr[[]|\nabla \phi|^{p(z)}+1\rbr[]] \ dz + 1 \leq C_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)}{\bf M}_{\vec{f}}. \] The first result we recall is the higher integrability above the natural exponent. In the interior case, this was proved in \cite{AZ05,bogelein2011higher}, whereas in the boundary case, using the measure density condition satisfied by $\Omega$, the result was proved in \cite[Lemma 3.5]{byun2016nonlinear}. Using the unified intrinsic scaling approach, we can obtain the following modified higher integrability above the natural exponent: \begin{theorem} \label{high_weak} Let $\tilde{\sigma}>0$ be given. Then there exists $\tilde{\beta}_1 = \tilde{\beta}_1(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log},\Omega) \in (0,\tilde{\sigma}]$ such that if $\vec{f} \in L^{p(\cdot)(1+\tilde{\sigma})}(\Omega_T)$ and $\phi\in L^{p(\cdot)}\lbr-T,T;W_0^{1,p(\cdot)}(\Omega)\rbr$ is a weak solution to \eqref{pde_high}, then $|\nabla \phi| \in L^{p(\cdot)(1+\beta)}(\Omega_T)$ for all $\beta \in (0,\tilde{\beta}_1]$.
Moreover, with ${\bf M}_{\vec{f}}$ defined as in \eqref{deb_bm_f}, there exists a radius $\tilde{\rho}_1 = \tilde{\rho}_1(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1,{\bf M}_{\vec{f}})$ such that for any $2\rho \in (0,\tilde{\rho}_1]$ and any $\mathfrak{z} \in \overline{\Omega} \times (-T,T)$, there holds \[ \Yint\tiltlongdash_{K_{\rho}(\mathfrak{z})} |\nabla \phi|^{p(\cdot)(1+\beta)} \ dz \apprle_{(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log},\Omega)} \lbr \Yint\tiltlongdash_{K_{2\rho}(\mathfrak{z})}\lbr |\nabla \phi|+ |\vec{f}|\rbr^{p(\cdot)} \ dz \rbr^{1+\beta \tilde{\vartheta}_1} + \Yint\tiltlongdash_{K_{2\rho}(\mathfrak{z})}\lbr |\vec{f}|+1\rbr^{p(\cdot)(1+\beta)} \ dz, \] where the constant $\tilde{\vartheta}_1 = \tilde{\vartheta}_1(p(\mathfrak{z}),n)\geq 1$. \end{theorem} We will also need an improved higher integrability result below the natural exponent. The following theorem was proved for a weaker class of solutions called \emph{very weak solutions}, but it also holds true for the \emph{weak solutions} considered in this paper. In the singular case, i.e., when $\frac{2n}{n+2}<p^+\leq 2$, the interior regularity was proved in \cite{li2017very}, and in the degenerate case, i.e., when $p^- \geq 2$, it was proved in \cite{bogelein2014very}. Subsequently, using the \emph{unified intrinsic scaling}, this restriction was removed, and the full result up to the boundary with $\frac{2n}{n+2} < p^- \leq p(\cdot)\leq p^+ < \infty$ was proved in \cite{adimurthi2018sharp} for domains satisfying a uniform measure density condition as in Definition \ref{measure_def}. \begin{theorem}[\cite{adimurthi2018sharp}] \label{high_very_weak} Let $\tilde{\sigma}>0$ be given, and suppose that $\vec{f} \in L^{p(\cdot)(1+\tilde{\sigma})}(\Omega_T)$ and that $\phi\in L^{p(\cdot)}\lbr -T,T;W_0^{1,p(\cdot)}(\Omega)\rbr$ is a weak solution to \eqref{pde_high}.
With ${\bf M}_{\vec{f}}$ defined as in \eqref{deb_bm_f}, there exist a radius $\tilde{\rho}_2 = \tilde{\rho}_2(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1,{\bf M}_{\vec{f}})$ and an exponent $\tilde{\beta}_2 = \tilde{\beta}_2(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log}) \in (0,\tilde{\sigma}]$ with $\tilde{\beta}_2 \leq \frac14$ such that for any $2\rho \in (0,\tilde{\rho}_2]$, $\beta \in (0,\tilde{\beta}_2]$ and any $\mathfrak{z} \in \overline{\Omega} \times (-T,T)$, there holds \[ \Yint\tiltlongdash_{K_{\rho}(\mathfrak{z})} |\nabla \phi|^{p(\cdot)} \ dz \apprle_{(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log})} \lbr \Yint\tiltlongdash_{K_{2\rho}(\mathfrak{z})}\lbr |\nabla \phi|+ |\vec{f}|\rbr^{p(\cdot)(1-\beta)} \ dz \rbr^{1+\beta \tilde{\vartheta}_2} + \Yint\tiltlongdash_{K_{2\rho}(\mathfrak{z})}\lbr |\vec{f}|+1\rbr^{p(\cdot)} \ dz, \] where the constant $\tilde{\vartheta}_2 = \tilde{\vartheta}_2(n,p(\mathfrak{z}))\ge1$. \end{theorem} \begin{remark} For weak solutions, from the papers \cite{bogelein2011higher} and \cite{byun2016nonlinear}, the exponent $\tilde{\vartheta}_1$ in Theorem \ref{high_weak} was explicitly given by \begin{equation*} \tilde{\vartheta}_1 := \left\{ \begin{array}{ll} \frac{p(\mathfrak{z})}{2} & \txt{if} p(\mathfrak{z}) \geq 2,\\ \frac{2p(\mathfrak{z})}{p(\mathfrak{z})(n+2) -2n} & \txt{if} \frac{2n}{n+2} < p(\mathfrak{z}) < 2. \end{array}\right. \end{equation*} On the other hand, using the unified intrinsic scaling approach and recalculating the estimates from \cite{bogelein2011higher}, we can obtain the unified exponent $\tilde{\vartheta}_1 = \frac{1}{-\frac{n}{p(\mathfrak{z})}+\frac{d(n+2)}{2}}$ (recall the exponent $d$ from \eqref{def_d}), which holds in the full range $\frac{2n}{n+2} < p(\mathfrak{z}) < \infty$. For very weak solutions, the exponent obtained in \cite{adimurthi2018sharp} was $\tilde{\vartheta}_2 := \frac{1}{-\frac{n}{p(\mathfrak{z})} + \frac{(n+2)d}{2} - \beta}$ for any $\frac{2n}{n+2} < p(\mathfrak{z}) < \infty$.
Note that since $\beta \leq \frac14$, one can uniformize this exponent so that $\tilde{\vartheta}_2 = \tilde{\vartheta}_2(n,p(\mathfrak{z}))$, i.e., it does not depend on $\beta$. For the purposes of this paper, the explicitly computed exponents $\tilde{\vartheta}_1$ and $\tilde{\vartheta}_2$ will not be needed except for the following two properties: firstly, we observe that $\tilde{\vartheta}_1, \tilde{\vartheta}_2 \geq 1$ and secondly, $\tilde{\vartheta}_1$ and $\tilde{\vartheta}_2$ can be made to depend only on $n$ and $p(\mathfrak{z})$. \end{remark} Before we end this subsection, let us prove the following important corollary: \begin{corollary} \label{normalized_higher_integrability} Let $\mathfrak{z} \in \Omega_T$ be any fixed point, and let $\alpha \geq 1$ be given. Suppose that $\phi$ and $\vec{f}$ solve \begin{equation} \label{pde_local_norm} \left\{ \begin{array}{rcll} \phi_t - \dv \mathcal{A}(x,t,\nabla \phi) &=& -\dv (|\vec{f}|^{p(x,t)-2} \vec{f}) & \quad \text{in} \ K_{3r}^{\alpha}(\mathfrak{z}),\\ \phi &=& 0 & \quad \text{on} \ \partial_w K_{3r}^{\alpha}(\mathfrak{z}). \end{array}\right. \end{equation} Let $\beta \leq \min\{ \tilde{\beta}_1, \tilde{\beta}_2\}$, where $\tilde{\beta}_1$ is from Theorem \ref{high_weak} and $\tilde{\beta}_2$ is from Theorem \ref{high_very_weak}. Assume that the following are satisfied for some constants $\tilde{{\bf M}}\geq 1$, $c_{\ast}$, $c_p$ and $\Gamma$: \begin{equation}\label{bm_1} \iint_{K_{3r}^{\alpha}(\mathfrak{z})} |\nabla \phi|^{p(\cdot)(1-\beta)} + |\vec{f}|^{p(\cdot)(1-\beta)} + 1 \ dz \leq \tilde{{\bf M}}, \end{equation} \begin{equation}\label{hyp_one} \Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} |\nabla \phi|^{p(\cdot)(1-\beta)} \ dz + \lbr \Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} |\vec{f}|^{p(\cdot)(1-\beta)\kappa} \ dz \rbr^{\frac{1}{\kappa}} \leq c_{\ast} \alpha^{1-\beta} \quad \text{for some}\ \ \kappa \geq \frac{1+\beta}{1-\beta}.
\end{equation} Let $3r\leq \min\{\tilde{\rho}_1, \tilde{\rho}_2\}$, where $\tilde{\rho}_1$ is from Theorem \ref{high_weak} and $\tilde{\rho}_2$ is from Theorem \ref{high_very_weak}. Furthermore, with the strictly positive constant (see \eqref{def_d}) defined by \begin{equation} \label{def_d_0} d_0 := \frac{d(n+2)}{2} - \frac{n}{p^-} >0, \end{equation} assume that the following hold: \begin{equation} \label{one_5.18} p^+_{K_{3r}^{\alpha}(\mathfrak{z})} - p^-_{K_{3r}^{\alpha}(\mathfrak{z})} \leq \omega_{\pp}(32r) \leq \min\left\{d_0p^-, d_0 p^- (p^--1)\right\} \txt{and} \alpha^{p^+_{K_{3r}^{\alpha}(\mathfrak{z})} - p^-_{K_{3r}^{\alpha}(\mathfrak{z})}} \leq c_p. \end{equation} Then for any $\sigma \in (0,\beta]$, the following estimate holds: \begin{equation}\label{one_bnd} \Yint\tiltlongdash_{K_{r}^{\alpha}(\mathfrak{z})} |\nabla \phi|^{p(\cdot)(1+\sigma)} \ dz \apprle_{(c_{\ast},c_p,p^-,p(\mathfrak{z}))} \alpha^{1+\sigma}. \end{equation} \end{corollary} \begin{proof} From \eqref{pde_local_norm}, we see that under the change of variables $x := \scalex{\alpha}{\mathfrak{z}} y$ and $t := \scalet{\alpha}{\mathfrak{z}} \tau$, with \begin{gather*} \phi_1(y,\tau) := \frac{\phi(y,\tau)}{\alpha^{\frac{d}{2}}}, \qquad \vec{f}_1 := \alpha^{\frac{1-p(\mathfrak{z})}{p(\mathfrak{z})(p(y,\tau) -1)}} \vec{f}(y,\tau) \txt{and} \bar{\bf a}(y,\tau,\zeta) := \alpha^{\frac{1-p(\mathfrak{z})}{p(\mathfrak{z})}} \mathcal{A}(y,\tau,\alpha^{\frac{1}{p(\mathfrak{z})}} \zeta), \end{gather*} the following equation is satisfied: \begin{equation*} \left\{ \begin{array}{rcll} \frac{d \phi_1(y,\tau)}{d\tau} - \dv_y \bar{\bf a}(y,\tau,\nabla_y \phi_1(y,\tau)) &=& -\dv_y (|\vec{f}_1(y,\tau)|^{p(y,\tau)-2} \vec{f}_1(y,\tau)) & \quad \text{in} \ K_{3r}(\mathfrak{z}),\\ \phi_1 &=& 0 & \quad \text{on} \ \partial_w K_{3r}(\mathfrak{z}). \end{array}\right.
\end{equation*} From the assumptions \eqref{def_d_0}, \eqref{one_5.18} and \eqref{def_d}, it is easy to see that the following bounds hold: \begin{gather} -\frac{p(\cdot)(1-\beta)}{p(\mathfrak{z})} + \frac{n}{p(\mathfrak{z})} - \frac{d(n+2)}{2} + 1 \leq \frac{p(\mathfrak{z})-p(\cdot)}{p(\mathfrak{z})} + \frac{n}{p^-} - \frac{d(n+2)}{2} \leq \frac{\omega_{\pp}(32r)}{p^-} - d_0 \overset{\eqref{one_5.18}}{\leq} 0, \label{one5.19}\\ \frac{(1-p(\mathfrak{z}))p(\cdot)}{p(\mathfrak{z})(p(\cdot) -1)}+ \frac{n}{p(\mathfrak{z})} - \frac{d(n+2)}{2} + 1 \leq \frac{p(\cdot) - p(\mathfrak{z})}{p(\mathfrak{z}) (p(\cdot) -1)} -d_0 \leq \frac{\omega_{\pp}(32r)}{p^-(p^--1)} - d_0 \overset{\eqref{one_5.18}}{\leq} 0.\label{one5.21} \end{gather} From a simple change of variables and using the fact that $\alpha \geq 1$, we see that \begin{equation} \label{one5.17} \begin{array}{rcl} \iint_{K_{3r}(\mathfrak{z})} |\nabla_y \phi_1(y,\tau)|^{p(y,\tau)(1-\beta)} \ dy\ d\tau & = & \iint_{K_{3r}^{\alpha}(\mathfrak{z})} \alpha^{-\frac{p(x,t)(1-\beta)}{p(\mathfrak{z})}+ \frac{n}{p(\mathfrak{z})} -\frac{d(n+2)}{2} +1} \abs{\nabla_x \phi(x,t)}^{p(x,t)(1-\beta)} \ dx\ dt \\ & \overset{\eqref{one5.19}}{\leq} & \iint_{K_{3r}^{\alpha}(\mathfrak{z})} \abs{\nabla_x \phi(x,t)}^{p(x,t)(1-\beta)} \ dx\ dt. \end{array} \end{equation} Analogously, we get \begin{equation} \label{5.20} \begin{array}{rcl} \iint_{K_{3r}(\mathfrak{z})} |\vec{f}_1(y,\tau)|^{p(y,\tau)(1-\beta)} \ dy\ d\tau & = & \iint_{K_{3r}^{\alpha}(\mathfrak{z})} \alpha^{\frac{(1-p(\mathfrak{z}))p(x,t)}{p(\mathfrak{z})(p(x,t) -1)}+ \frac{n}{p(\mathfrak{z})} - \frac{d(n+2)}{2} + 1}\abs{ \vec{f}(x,t)}^{p(x,t)(1-\beta)} \ dx\ dt \\ & \overset{\eqref{one5.21}}{\leq} & \iint_{K_{3r}^{\alpha}(\mathfrak{z})} \abs{\vec{f}(x,t)}^{p(x,t)(1-\beta)} \ dx\ dt. 
\end{array} \end{equation} Thus combining \eqref{one5.17} and \eqref{5.20} and using the hypothesis \eqref{bm_1}, we get \begin{equation*} \label{one5.26} \iint_{K_{3r}(\mathfrak{z})}\lbr[[]|\nabla \phi_1(y,\tau)|^{p(y,\tau)(1-\beta)} + |\vec{f}_1(y,\tau)|^{p(y,\tau)(1-\beta)} + 1 \rbr[]]\ dy \ d\tau \leq \tilde{{\bf M}}. \end{equation*} For the sake of simplicity, let us denote $p(y,\tau) = \tilde{p}(z)$ and $p(x,t) = p(z)$. We will now proceed with proving \eqref{one_bnd} as follows: \begin{equation} \label{one5.27} \begin{array}{rcl} \Yint\tiltlongdash_{K_{r}^{\alpha}(\mathfrak{z})} |\nabla \phi|^{p(z)(1+\sigma)} \ dz & = & \Yint\tiltlongdash_{K_{r}(\mathfrak{z})} \alpha^{\frac{\tilde{p}(z)(1+\sigma)}{p(\mathfrak{z})} - \frac{n}{p(\mathfrak{z})} + \frac{d(n+2)}{2} -1}|\nabla \phi_1|^{\tilde{p}(z)(1+\sigma)} \ dz\\ & \leq & \Yint\tiltlongdash_{K_{r}(\mathfrak{z})} \alpha^{\frac{(\tilde{p}(z)-p(\mathfrak{z}))(1+\sigma)}{p(\mathfrak{z})} - \frac{n}{p^+} + \frac{d(n+2)}{2} +\sigma}|\nabla \phi_1|^{\tilde{p}(z)(1+\sigma)} \ dz\\ & \overset{\redlabel{a5.25}{a}}{\apprle} & c_p^{\frac{2}{p^-}}\Yint\tiltlongdash_{K_{r}(\mathfrak{z})} \alpha^{- \frac{n}{p^+} + \frac{d(n+2)}{2} +\sigma}|\nabla \phi_1|^{\tilde{p}(z)(1+\sigma)} \ dz\\ & \overset{\redlabel{b5.25}{b}}{\apprle} & c_p^{\frac{2}{p^-}}\Yint\tiltlongdash_{K_{r}(\mathfrak{z})} \alpha^{1 +\sigma}|\nabla \phi_1|^{\tilde{p}(z)(1+\sigma)} \ dz. \end{array} \end{equation} To obtain \redref{a5.25}{a}, we made use of \eqref{one_5.18} and the fact that $1+\sigma \leq 2$, and to obtain \redref{b5.25}{b}, we made use of the bound $- \frac{n}{p^+} + \frac{d(n+2)}{2} \leq 1$, which follows from \eqref{def_d}. We can now apply Theorem \ref{high_weak} to obtain the higher integrability from $\tilde{p}(z)$ to $\tilde{p}(z)(1+\sigma)$ and apply Theorem \ref{high_very_weak} to obtain the higher integrability from $\tilde{p}(z)(1-\beta)$ to $\tilde{p}(z)$.
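For the reader's convenience, let us trace the constant absorbed into \redref{a5.25}{a}: since $\alpha \geq 1$, $p(\mathfrak{z}) \geq p^-$, $1+\sigma \leq 2$ and $\tilde{p}(z) - p(\mathfrak{z}) \leq p^+_{K_{3r}^{\alpha}(\mathfrak{z})} - p^-_{K_{3r}^{\alpha}(\mathfrak{z})}$ on the domain of integration, we have the elementary bound
\[
\alpha^{\frac{(\tilde{p}(z)-p(\mathfrak{z}))(1+\sigma)}{p(\mathfrak{z})}} \leq \alpha^{\frac{2\lbr p^+_{K_{3r}^{\alpha}(\mathfrak{z})} - p^-_{K_{3r}^{\alpha}(\mathfrak{z})}\rbr}{p^-}} = \lbr \alpha^{p^+_{K_{3r}^{\alpha}(\mathfrak{z})} - p^-_{K_{3r}^{\alpha}(\mathfrak{z})}} \rbr^{\frac{2}{p^-}} \overset{\eqref{one_5.18}}{\leq} c_p^{\frac{2}{p^-}}.
\]
An analogous computation with the factor $1-\beta \leq 1$ in place of $1+\sigma \leq 2$ produces the constant $c_p^{\frac{1}{p^-}}$ appearing below.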
Thus the expression on the right of \eqref{one5.27} can be estimated as \begin{equation} \label{one5.30} \begin{array}{rcl} \Yint\tiltlongdash_{K_{r}(\mathfrak{z})} |\nabla \phi_1|^{\tilde{p}(z)(1+\sigma)} \ dz & \apprle & \bgh{ \lbr \Yint\tiltlongdash_{K_{3r}(\mathfrak{z})} (|\nabla \phi_1|+ |\vec{f}_1|)^{\tilde{p}(z)(1-\beta)}\ dz\rbr^{1+ \beta \tilde{\vartheta}_2} + \Yint\tiltlongdash_{K_{3r}(\mathfrak{z})} |\vec{f}_1|^{\tilde{p}(z)} \ dz }^{1+ \sigma \tilde{\vartheta}_1} \\ && \qquad + \Yint\tiltlongdash_{K_{3r}(\mathfrak{z})} |\vec{f}_1|^{\tilde{p}(z) (1+\sigma)} \ dz + 1. \end{array} \end{equation} In order to prove \eqref{one_bnd}, it is sufficient to bound \eqref{one5.30} by a constant from which the result will follow by using \eqref{one5.27}. In order to do this, we scale back to get \begin{equation}\label{on5.26} \begin{array}{rcl} \Yint\tiltlongdash_{K_{3r}(\mathfrak{z})} |\nabla \phi_1|^{\tilde{p}(z)(1-\beta)} \ dz& = & \frac{|K_{3r}^{\alpha}(\mathfrak{z})|}{|K_{3r}(\mathfrak{z})|}\Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} \alpha^{-\frac{\tilde{p}(z)(1-\beta)}{p(\mathfrak{z})}+\frac{n}{p(\mathfrak{z})}- \frac{d(n+2)}{2} +1}|\nabla \phi|^{p(z)(1-\beta)}\ dz\\ & \overset{\redlabel{a5.26}{c}}{\leq} & \alpha^{(1-\beta) \lbr\frac{p(\mathfrak{z})-{p}^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p(\mathfrak{z})}\rbr} \alpha^{-(1-\beta)}\Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} |\nabla \phi|^{p(z)(1-\beta)}\ dz\\ & \overset{\redlabel{b5.26}{d}}{\apprle} & c_{\ast} c_p^{\frac{1}{p^-}}. \end{array} \end{equation} To obtain \redref{a5.26}{c}, we used the fact that $\frac{|K_{3r}^{\alpha}(\mathfrak{z})|}{|K_{3r}(\mathfrak{z})|} = \scalexn{\alpha}{\mathfrak{z}} \scalet{\alpha}{\mathfrak{z}}$ and to obtain \redref{b5.26}{d}, we made use of \eqref{hyp_one} and \eqref{one_5.18} along with the trivial bound $1-\beta \leq 1$. 
To estimate the terms containing $\vec{f}_1$ in \eqref{one5.30}, let us denote $\varpi$ to be either $(1-\beta)$, $1$ or $(1+\sigma)$ and estimate $\Yint\tiltlongdash_{K_{3r}(\mathfrak{z})} |\vec{f}_1|^{\tilde{p}(z)\varpi}\ dz$ as follows: \begin{equation}\label{on5.27} \begin{array}{rcl} \Yint\tiltlongdash_{K_{3r}(\mathfrak{z})} |\vec{f}_1|^{\tilde{p}(z)\varpi} \ dz & \overset{\redlabel{a5}{e}}{=} & \frac{|K_{3r}^{\alpha}(\mathfrak{z})|}{|K_{3r}(\mathfrak{z})|}\Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} \alpha^{\frac{(1-p(\mathfrak{z}))p(x,t)\varpi}{p(\mathfrak{z})(p(x,t) -1)}+\frac{n}{p(\mathfrak{z})}- \frac{d(n+2)}{2} +1}|\vec{f}|^{p(z)\varpi}\ dz\\ & \overset{\redlabel{b5}{f}}{\leq} & \alpha^{-\frac{\lbr {p}^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}\rbr'\varpi}{p(\mathfrak{z})'}} \Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} |\vec{f}|^{p(z)\varpi}\ dz\\ & \overset{\redlabel{c5}{g}}{\leq} & \alpha^{\varpi \lbr \frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-p(\mathfrak{z})}{p(\mathfrak{z}) \lbr {p}^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1\rbr}\rbr}\alpha^{-\varpi} \lbr \Yint\tiltlongdash_{K_{3r}^{\alpha}(\mathfrak{z})} |\vec{f}|^{p(z)(1-\beta)\kappa}\ dz\rbr^{\frac{\varpi}{(1-\beta)\kappa}}\\ & \overset{\redlabel{d5}{h}}{\apprle} & c_{\ast}c_p^{\frac{2}{(p^-)^2}}. \end{array} \end{equation} To obtain \redref{a5}{e}, we performed the usual change of variables; to obtain \redref{b5}{f}, we used the fact that $\frac{|K_{3r}^{\alpha}(\mathfrak{z})|}{|K_{3r}(\mathfrak{z})|} = \scalexn{\alpha}{\mathfrak{z}} \scalet{\alpha}{\mathfrak{z}}$; to obtain \redref{c5}{g}, we used the fact that $\kappa (1-\beta) \geq \varpi$ from \eqref{hyp_one}; and finally, to obtain \redref{d5}{h}, we made use of \eqref{hyp_one} and \eqref{one_5.18} along with the bound $\varpi \leq 2$.
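The exponent algebra behind \redref{b5}{f} and \redref{c5}{g} reduces to the following elementary conjugate-exponent comparison, which we record for the reader's convenience: writing $q' := \frac{q}{q-1}$ and noting that $q \mapsto q'$ is decreasing, for any $q \leq p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}$ there holds
\[
\frac{(1-p(\mathfrak{z}))q}{p(\mathfrak{z})(q-1)} = -\frac{q'}{p(\mathfrak{z})'} \leq -\frac{\lbr p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}\rbr'}{p(\mathfrak{z})'} = \frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-p(\mathfrak{z})}{p(\mathfrak{z}) \lbr p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1\rbr} - 1.
\]
Applied with $q = p(z)$ and multiplied by $\varpi$, this is the comparison of the exponents of $\alpha$ used in \eqref{on5.27}.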
Thus combining \eqref{on5.26} and \eqref{on5.27} into \eqref{one5.30} and finally substituting the resulting expression into \eqref{one5.27}, we see that there holds \[ \Yint\tiltlongdash_{K_{r}^{\alpha}(\mathfrak{z})} |\nabla \phi|^{p(\cdot)(1+\sigma)} \ dz \leq C_{(c_{\ast},c_p,p^-,p(\mathfrak{z}))} \alpha^{1+\sigma}, \] which completes the proof. \end{proof} \subsection{Approximations} In this subsection, let $\alpha \geq 1$ be a given constant, let $\rho$ be as in Remark \ref{remark_radius}, and let $\mathfrak{z} = (\mathfrak{x},\mathfrak{t}) \in \Omega_T$ be any fixed point. Also note that the existence of all the solutions considered below follows from Proposition \ref{ext_sol}. First, let us consider the unique weak solution $w \in C^0\lbr I_{4\rho}^{\alpha}(\mathfrak{t});L^2(\Omega_{4\rho}^{\alpha}(\mathfrak{x}))\rbr \cap L^{p(\cdot)}\lbr I_{4\rho}^{\alpha}(\mathfrak{t});W^{1,p(\cdot)}(\Omega_{4\rho}^{\alpha}(\mathfrak{x}))\rbr$ solving \begin{equation} \label{wapprox_int} \left\{ \begin{array}{rcll} w_t - \dv \mathcal{A}(x,t,\nabla w) &=& 0 & \quad \text{in} \ K_{4\rho}^{\alpha}(\mathfrak{z}),\\ w &=&u & \quad \text{on} \ \partial_p K_{4\rho}^{\alpha}(\mathfrak{z}). \end{array}\right. \end{equation} This is possible since \eqref{basic_pde} shows $u \in L^{p(\cdot)}\lbr I_{4\rho}^{\alpha}(\mathfrak{t});W^{1,p(\cdot)}(\Omega_{4\rho}^{\alpha}(\mathfrak{x}))\rbr$ and $\ddt{u} \in L^{p(\cdot)}\lbr I_{4\rho}^{\alpha}(\mathfrak{t});W^{1,p(\cdot)}(\Omega_{4\rho}^{\alpha}(\mathfrak{x}))\rbr'$.
We can now compare the solutions of \eqref{basic_pde} and \eqref{wapprox_int} to get the following lemma: \begin{lemma} \label{energy_diff_est} For any $\rho >0$ and any weak solution $w$ to \eqref{wapprox_int}, the following estimates hold: \begin{gather} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla u|^{p(z)} \ dz \apprle_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla u|^{p(z)} + |{\bf f}|^{p(z)} + 1 \ dz, \label{diff_energy_w}\\ \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(z)} \ dz \apprle_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla u|^{p(z)} + |{\bf f}|^{p(z)} + 1 \ dz. \label{energy_w} \end{gather} \end{lemma} The proof of Lemma \ref{energy_diff_est} follows by taking $u-w$ as a test function in \eqref{basic_pde} and \eqref{wapprox_int} (see for example \cite[(4.11)]{byun2016nonlinear} for the proof of \eqref{diff_energy_w}). A simple application of the triangle inequality to \eqref{diff_energy_w} implies \eqref{energy_w}. \begin{lemma} \label{sobolev_reg_w} Let $2\rho\leq \rho_0$ with $\rho_0$ as in Remark \ref{remark_radius}, then any weak solution $w\in L^{p(\cdot)}\lbr I_{4\rho}^{\alpha}(\mathfrak{t});W^{1,p(\cdot)}(\Omega_{4\rho}^{\alpha}(\mathfrak{x}))\rbr$ of \eqref{wapprox_int} has the improved regularity $\nabla w \in L^{p(\mathfrak{z})} \gh{K_{3\rho}^{\alpha}(\mathfrak{z})}$. \end{lemma} \begin{proof} Since $\rho$ satisfies Remark \ref{remark_radius}, we can apply Theorem \ref{high_weak} to \eqref{wapprox_int}, which implies $\nabla w \in L^{p(\cdot)(1+\beta)}\gh{K_{3\rho}^{\alpha}(\mathfrak{z})}$ for any $\beta \in (0,\beta_0]$ with $\beta_0$ as in Remark \ref{high_int_remark}.
As a consequence, we have the following sequence of estimates \begin{equation*} \begin{array}{rcl} \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(\mathfrak{z})} \ dz & =& \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(\mathfrak{z})\frac{p(\cdot)(1+\beta_0)}{p(\cdot)(1+\beta_0)}} \ dz \\ & \apprle& \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} \lbr |\nabla w|+1\rbr^{p(\cdot)(1+\beta_0)\frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}(1+\beta_0)}} \ dz \\ & \overset{\redlabel{a1}{a}}{\apprle}& \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} \lbr |\nabla w|+1\rbr^{p(\cdot)(1+\beta_0)} \ dz\\ & \overset{\redlabel{b1}{b}}{\apprle} & \lbr \Yint\tiltlongdash_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr |\nabla w|+1\rbr^{p(\cdot)} \ dz\rbr^{1+\beta_0 \vartheta_0}. \end{array} \end{equation*} To obtain \redref{a1}{a}, we made use of \descref{R2}{R2}, which implies ${\frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}(1+\beta_0)}}\leq 1$, and to obtain \redref{b1}{b}, we made use of Theorem \ref{high_weak} along with \descref{B3}{B3}. \end{proof} We will also need the following regularity with respect to the time derivative of the weak solution $w$ to \eqref{wapprox_int}, which will enable us to use $w$ as boundary data so that Proposition \ref{ext_sol} can be applied. \begin{lemma} \label{time_reg_w} We have $\ddt{w} \in L^{p(\mathfrak{z})}\lbr I_{3\rho}^{\alpha}(\mathfrak{t});W^{1,p(\mathfrak{z})}(\Omega_{3\rho}^{\alpha}(\mathfrak{x}))\rbr'$. \end{lemma} \begin{proof} In order to prove the lemma, from \eqref{wapprox_int}, we see that it is sufficient to show $\mathcal{A}(x,t,\nabla w) \in L^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}}(K_{3\rho}^{\alpha}(\mathfrak{z}))$.
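Indeed, the sufficiency of this claim is an elementary consequence of H\"older's inequality with the constant conjugate exponents $p(\mathfrak{z})$ and $\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}$: for any $\varphi \in L^{p(\mathfrak{z})}\lbr I_{3\rho}^{\alpha}(\mathfrak{t});W^{1,p(\mathfrak{z})}_0(\Omega_{3\rho}^{\alpha}(\mathfrak{x}))\rbr$, we have
\[
\left| \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} \iprod{\mathcal{A}(x,t,\nabla w)}{\nabla \varphi} \ dz \right| \leq \lbr \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\mathcal{A}(x,t,\nabla w)|^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}} \ dz \rbr^{\frac{p(\mathfrak{z})-1}{p(\mathfrak{z})}} \lbr \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla \varphi|^{p(\mathfrak{z})} \ dz \rbr^{\frac{1}{p(\mathfrak{z})}},
\]
so that $\ddt{w} = \dv \mathcal{A}(x,t,\nabla w)$ acts as a bounded linear functional on $L^{p(\mathfrak{z})}\lbr I_{3\rho}^{\alpha}(\mathfrak{t});W^{1,p(\mathfrak{z})}_0(\Omega_{3\rho}^{\alpha}(\mathfrak{x}))\rbr$.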
We show this as follows: \begin{equation} \label{5.15} \begin{array}{rcl} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\mathcal{A}(x,t,\nabla w)|^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}} \ dz & \overset{\eqref{abounded}}{\apprle}& \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} (|\nabla w|+1)^{(p(\cdot)-1)\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}} \ dz\\ & \overset{\redlabel{a2}{a}}{\leq} & \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} (|\nabla w|+1)^{p(\cdot)\lbr 1+ \frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1}\rbr} \ dz. \end{array} \end{equation} To obtain \redref{a2}{a}, we used the following sequence of estimates on $K_{3\rho}^{\alpha}(\mathfrak{z})$: \begin{equation*} (p(\cdot) -1) \frac{p(\mathfrak{z})}{p(\mathfrak{z}) -1} \leq (p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})} -1) \frac{p(\mathfrak{z})}{p(\mathfrak{z}) -1} \leq (p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})} -1) \frac{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})} -1} \leq p(\cdot) \lbr 1 + \frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1}\rbr. \end{equation*} Using Remark \ref{remark_def_p_log} with the observation $\alpha \geq 1$ which implies $K_{3\rho}^{\alpha}(\mathfrak{z}) \subset K_{3\rho}(\mathfrak{z})$, we see that \begin{equation} \label{5.16} \frac{p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1} \leq\frac{\omega_{\pp}(6\rho)}{p^-_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1}\leq\frac{\omega_{\pp}(6\rho)}{p^- -1} \overset{\text{\descref{R5}{R5}}}{\leq} \beta_0. 
\end{equation} Substituting \eqref{5.16} into \eqref{5.15} and making use of Theorem \ref{high_weak} (from which $\tilde{\beta}_1$ is obtained), we get \[ \begin{array}{rcl} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\mathcal{A}(x,t,\nabla w)|^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}} \ dz & \apprle& \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} (|\nabla w|+1)^{p(\cdot)\lbr 1+\beta_0 \rbr} \ dz\\ & \overset{\redlabel{a3}{b}}{\apprle} & |K_{3\rho}^{\alpha}(\mathfrak{z})| \lbr \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} (1 + |\nabla w|)^{p(\cdot)(1-\beta_0)} \ dz \rbr^{(1+\beta_0\vartheta_0)^2}\\ & \overset{\redlabel{b2}{c}}{\apprle} & |K_{3\rho}^{\alpha}(\mathfrak{z})| \lbr \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} (1 + |\nabla w|)^{p(\cdot)(1-\beta_0)} \ dz \rbr^{1+\beta_0c_0}. \end{array} \] To obtain \redref{a3}{b}, we have used Theorem \ref{high_weak} and Theorem \ref{high_very_weak} along with \descref{B3}{B3}, and to obtain \redref{b2}{c}, we have used the fact that $\beta_0 <1$ and $\vartheta_0 \geq 1$. This completes the proof of the lemma. \end{proof} Let us now construct an averaged operator which will be needed in the sequel. For any $\alpha\ge 1$ and any $4\rho \leq \rho_0$, let us define the following vector valued function $\mathcal{B}: K_{3\rho}^{\alpha}(\mathfrak{z}) \times \RR^n \rightarrow \RR^n$ by \begin{equation} \label{def_bb} \mathcal{B}(z,\zeta):= \mathcal{A}(z,\zeta) \lbr \mu^2 + |\zeta|^2 \rbr^{\frac{p(\mathfrak{z})-p(z)}{2}}. \end{equation} From direct computations (see \cite[(4.18)]{byun2016nonlinear}), we see that the following bounds are satisfied: \begin{equation}\label{bbounded}\begin{array}{c} (\mu^2 + |\zeta|^2)^{\frac12} |D_{\zeta} \mathcal{B}(z,\zeta)| + |\mathcal{B}(z,\zeta)| \leq 3\Lambda_1 (\mu^2 + |\zeta|^2)^{\frac{p(\mathfrak{z}) -1}{2}}, \\ (\mu^2 + |\zeta|^2 )^{\frac{p(\mathfrak{z})-2}{2}} |\eta|^2 \frac{\Lambda_0}{2} \leq \iprod{D_{\zeta}\mathcal{B}(z,\zeta)\eta}{\eta}.
\end{array}\end{equation} In particular, the operator $\mathcal{B}(\cdot,\zeta)$ which is defined on $K_{3\rho}^{\alpha}(\mathfrak{z})$ is a constant exponent operator. \begin{description}[leftmargin=*] \item[Interior case:] Subsequently, in this case, i.e., when $K_{3\rho}^{\alpha}(\mathfrak{z}) = Q_{3\rho}^{\alpha}(\mathfrak{z})\subset \Omega_T$, we define another averaged operator $\overline{\mathcal{B}}: \RR^n \times (\mathfrak{t} - \scalet{\alpha}{\mathfrak{z}} 9\rho^2, \mathfrak{t} + \scalet{\alpha}{\mathfrak{z}} 9\rho^2) \to \RR^n$ by \begin{equation*} \label{avg_bbb} \overline{\mathcal{B}}(t,\zeta):= \fint_{B_{\scalex{\alpha}{\mathfrak{z}}3\rho}(\mathfrak{x})} \mathcal{B}(y,\mathfrak{t},\zeta) \ dy. \end{equation*} From \eqref{small_aa}, we see that \[ \Yint\tiltlongdash_{K^{\alpha}_{3\rho}(\mathfrak{z})} \sup_{\zeta \in \RR^n} \frac{\abs{\overline{\mathcal{B}}(t,\zeta)- \mathcal{B}(z,\zeta)}}{\lbr\mu^2 + |\zeta|^2 \rbr^{\frac{p(\mathfrak{z})-1}{2}}} \ dz \leq \Yint\tiltlongdash_{Q^{\alpha}_{3\rho}(\mathfrak{z})} \Theta(\mathcal{A}, B_{4\rho}^{\alpha}(\mathfrak{x}))(z) \ dz \leq \gamma. \] In the above estimate, we have used the fact $\alpha\geq 1$ which implies $\scalex{\alpha}{\mathfrak{z}} \leq 1$. 
\item[Boundary case:] In this case, i.e., when $K_{3\rho}^{\alpha}(\mathfrak{z}) = \lbr B_{3\rho}^{\alpha}(\mathfrak{x}) \cap \Omega \rbr \times I_{3\rho}^{\alpha}(\mathfrak{t})$, we make use of the $(\gamma, {\bf S}_0)$-Reifenberg flat condition in the form \[ B^{\alpha,+}_{3\rho}(\mathfrak{x}) \subset \Omega_{3\rho}^{\alpha}(\mathfrak{x}) \subset B_{3\rho}^{\alpha}(\mathfrak{x}) \cap \{x_n > -3 \scalex{\alpha}{\mathfrak{z}} \gamma \rho\}, \] and define another averaged operator $\overline{\mathcal{B}}: (\mathfrak{t} - \scalet{\alpha}{\mathfrak{z}} 9\rho^2, \mathfrak{t} + \scalet{\alpha}{\mathfrak{z}} 9\rho^2) \times \RR^n \to \RR^n$ by \begin{equation*} \label{avg_bbbb} \overline{\mathcal{B}}(t,\zeta):= \fint_{B_{\scalex{\alpha}{\mathfrak{z}}3\rho}^+(\mathfrak{x})} \mathcal{B}(y,t,\zeta) \ dy. \end{equation*} From \eqref{small_aa}, we see that \[ \Yint\tiltlongdash_{Q^{\alpha,+}_{3\rho}(\mathfrak{z})} \sup_{\zeta \in \RR^n} \frac{\abs{\overline{\mathcal{B}}(t,\zeta)- \mathcal{B}(z,\zeta)}}{\lbr\mu^2 + |\zeta|^2 \rbr^{\frac{p(\mathfrak{z})-1}{2}}} \ dz = \Yint\tiltlongdash_{Q^{\alpha,+}_{3\rho}(\mathfrak{z})} \Theta(\mathcal{A}, B_{3\rho}^{\alpha,+}(\mathfrak{x}))(z) \ dz \leq 4 \Yint\tiltlongdash_{Q^{\alpha}_{3\rho}(\mathfrak{z})} \Theta(\mathcal{A}, B_{3\rho}^{\alpha}(\mathfrak{x}))(z) \ dz\leq 4 \gamma. \] In the above estimate, we have used the fact $\alpha\geq 1$, which implies $\scalex{\alpha}{\mathfrak{z}} \leq 1$. \end{description} From Lemma \ref{sobolev_reg_w} and Lemma \ref{time_reg_w}, we can now define the following approximation: \begin{equation} \label{vapprox_bnd} \left\{ \begin{array}{rcll} v_t - \dv \overline{\mathcal{B}}(t,\nabla v) &=& 0 & \quad \text{in} \ K_{3\rho}^{\alpha}(\mathfrak{z}),\\ v &=& w & \quad \text{on} \ \partial_p K_{3\rho}^{\alpha}(\mathfrak{z}), \end{array}\right.
\end{equation} which admits a unique weak solution $v \in C^0\lbr I_{3\rho}^{\alpha}(\mathfrak{t});L^2(\Omega_{3\rho}^{\alpha}(\mathfrak{x}))\rbr \cap L^{p(\mathfrak{z})}\lbr I_{3\rho}^{\alpha}(\mathfrak{t});W^{1,p(\mathfrak{z})}(\Omega_{3\rho}^{\alpha}(\mathfrak{x}))\rbr$ since Proposition \ref{ext_sol} is applicable. In the interior case, it is well known that the weak solution $v$ satisfies local Lipschitz estimates (see \cite{DiB1} for details). On the other hand, in the boundary case, we need to make one further approximation, in which we consider a weak solution $\overline{V} \in C^0\lbr I_{2\rho}^{\alpha}(\mathfrak{t});L^2(\Omega_{2\rho}^{\alpha,+}(\mathfrak{x}))\rbr \cap L^{p(\mathfrak{z})}\lbr I_{2\rho}^{\alpha}(\mathfrak{t});W^{1,p(\mathfrak{z})}(\Omega_{2\rho}^{\alpha,+}(\mathfrak{x}))\rbr$ solving \begin{equation} \label{Vapprox_bnd} \left\{ \begin{array}{rcll} \overline{V}_t - \dv \overline{\mathcal{B}}(t,\nabla \overline{V}) &=& 0 & \quad \text{in} \ Q_{2\rho}^{\alpha,+}(\mathfrak{z}),\\ \overline{V} &=& 0 & \quad \text{on} \ \partial_w Q_{2\rho}^{\alpha,+}(\mathfrak{z}). \end{array}\right. \end{equation} \begin{lemma} \label{existence_ov} For any $\varepsilon \in (0,1)$, there exists $\gamma = \gamma(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log},\varepsilon)>0$ such that if $v$ is the weak solution of \eqref{vapprox_bnd}, then there is a weak solution $\overline{V} \in C^0\lbr I_{2\rho}^{\alpha}(\mathfrak{t});L^2(\Omega_{2\rho}^{\alpha,+}(\mathfrak{x}))\rbr \cap L^{p(\mathfrak{z})}\lbr I_{2\rho}^{\alpha}(\mathfrak{t});W^{1,p(\mathfrak{z})}(\Omega_{2\rho}^{\alpha,+}(\mathfrak{x}))\rbr$ solving \eqref{Vapprox_bnd} such that \begin{equation}\label{ov est1} \Yint\tiltlongdash_{Q^{\alpha,+}_{2\rho}(\mathfrak{z})} |\nabla v - \nabla \overline{V}|^{p(\mathfrak{z})} \ dz \le \varepsilon^{p(\mathfrak{z})} \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla v|^{p(\mathfrak{z})} \ dz.
\end{equation} Furthermore, we have \begin{gather}\label{ov est2} \sup_{Q^{\alpha,+}_{\rho}(\mathfrak{z})} |\nabla \overline{V}| \apprle_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} \lbr \Yint\tiltlongdash_{Q^{\alpha,+}_{2\rho}(\mathfrak{z})} |\nabla \overline{V}|^{p(\mathfrak{z})} \ dz + 1\rbr^{\frac{1}{p(\mathfrak{z})}}. \end{gather} \end{lemma} \begin{proof} We will prove the lemma by scaling. Define the rescaled functions \begin{gather*} V_{\alpha,\rho} (y,s) := \frac{1}{\alpha^{\frac{d}{2}}\rho} \overline{V}\lbr \scalex{\alpha}{\mathfrak{z}} \rho y, \scalet{\alpha}{\mathfrak{z}} \rho^2 s \rbr \txt{and} {\overline{\mathbf{b}}}_{\alpha,\rho}(t,\zeta):= \alpha^{\frac{1-p(\mathfrak{z})}{p(\mathfrak{z})}} \overline{\mathcal{B}}\lbr \scalet{\alpha}{\mathfrak{z}}\rho^2 t, \alpha^{\frac{1}{p(\mathfrak{z})}}\zeta\rbr, \end{gather*} under the change of variables $x = \scalex{\alpha}{\mathfrak{z}} \rho y$ and $t = \scalet{\alpha}{\mathfrak{z}}\rho^2 s$. We then see that $(x,t) \in Q_{2\rho}^{\alpha,+}(\mathfrak{z})$ implies $(y,s) \in Q_{2}^{+}(\mathfrak{z})$. From the fact that $\overline{V}$ solves \eqref{Vapprox_bnd}, we have \begin{equation*} \begin{array}{lll} 0 & = \ddt{\overline{V}}(x,t) - \dv_x \overline{\mathcal{B}}(t,\nabla_x \overline{V}(x,t)) \\ & = \frac{1}{\alpha^{-1+\frac{d}{2}}\rho} \lbr \dds{V_{\alpha,\rho}}(y,s) - \dv_y {\overline{\mathbf{b}}}_{\alpha,\rho}\lbr s, \nabla_y V_{\alpha,\rho}(y,s)\rbr \rbr \txt{for} (y,s) \in Q_{2}^{+}(\mathfrak{z}). \end{array} \end{equation*} In particular, we see that $V_{\alpha,\rho}(y,s)$ is a weak solution of \[ \left\{ \begin{array}{ll} \dds{V_{\alpha,\rho}}(y,s) - \dv_y {\overline{\mathbf{b}}}_{\alpha,\rho}\lbr s, \nabla_y V_{\alpha,\rho}(y,s)\rbr = 0 & \txt{in} Q_{2}^{+}(\mathfrak{z}), \\ V_{\alpha,\rho} = 0 & \txt{on} Q_2(\mathfrak{z}) \cap \{y_n = 0\}. \end{array}\right.
\] From \cite[Theorem 1.6]{lieberman1993boundary}, we obtain the estimate \[ \sup_{Q^+_{1}(\mathfrak{z})} |\nabla V_{\alpha,\rho}| \leq C_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} \lbr \Yint\tiltlongdash_{Q^{+}_{2}(\mathfrak{z})} |\nabla V_{\alpha,\rho}|^{p(\mathfrak{z})} \ dz + 1\rbr^{\frac{1}{p(\mathfrak{z})}}, \] which implies the estimate \eqref{ov est2}. Moreover, a similar argument to that of \cite[Lemma 3.8]{BOS1} yields the estimate \eqref{ov est1}. \end{proof} \subsection{Fixing the size of solutions} \label{size_fix} Let us define \begin{equation} \label{def_M_0} {\bf M}_0 := \iint_{\Omega_T} \lbr[[]|{\bf f}|^{p(z)} + 1\rbr[]] \ dz + 1. \end{equation} From \eqref{energy_u}, we see that \begin{equation} \label{size_u_f} {\bf M}_u \leq C_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} {\bf M}_0 \txt{where we have set} {\bf M}_u := \iint_{\Omega_T} \lbr[[]|\nabla u|^{p(z)}+1\rbr[]] \ dz + 1. \end{equation} From \eqref{energy_w} (which holds for any $\rho>0$), we see that there holds \begin{equation} \label{size_w} {\bf M}_w \leq C_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)} {\bf M}_0 \txt{where we have set} {\bf M}_w := \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr[[]|\nabla w|^{p(z)}+1\rbr[]] \ dz + 1. \end{equation} \section{First difference estimate below the natural exponent} \label{five} In this section, we will prove a difference estimate between the weak solution of \eqref{basic_pde} and the weak solution of \eqref{wapprox_int}. To do this, we will use the method of Lipschitz truncation developed in \cite{KL}, which is modified for use in the current setting in Appendix \ref{lipschitz_truncation}.
\begin{theorem}\label{first_diff_thm} Let $\alpha \geq 1$ be fixed, then there exists $\tilde{\rho}_3 = \tilde{\rho}_3(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1,{\bf M}_0)$ such that for any $128 \rho \leq \tilde{\rho}_3$ and for any $\varepsilon \in (0,1]$, there exists $\tilde{\beta}_3 = \tilde{\beta}_3(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log})$ such that for any $\beta \in (0,\tilde{\beta}_3]$, there holds the estimate \begin{equation*} \label{diff_est_one} \Yint\tiltlongdash_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \ dz \leq \varepsilon \Yint\tiltlongdash_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla u|^{p(\cdot)(1-\beta)}\ dz + C_{(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log})} \Yint\tiltlongdash_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr[[]|{\bf f}|^{p(\cdot)(1-\beta)} + 1 \rbr[]]\ dz. \end{equation*} Here $u$ is the weak solution of \eqref{basic_pde} and $w$ is the weak solution to \eqref{wapprox_int}. \end{theorem} \begin{proof} Let us denote \begin{equation*} \label{def_s_a} s:= \scalet{\alpha}{\mathfrak{z}} (4\rho)^2, \end{equation*} and we consider the following cut-off function {$\zeta_{\ve} \in C^{\infty} (\RR)$} such that $0 \leq \zeta_{\ve}(t) \leq 1$ and \begin{equation*} \label{def_zv} \zeta_{\ve}(t) = \left\{ \begin{array}{ll} 1 & \text{for} \ t \in (\mathfrak{t}-s+\varepsilon,\mathfrak{t}+s-\varepsilon),\\ 0 & \text{for} \ t \in (-\infty,\mathfrak{t}-s)\cup (\mathfrak{t}+s,\infty). \end{array}\right. \end{equation*} It is easy to see that \begin{equation*} \label{bound_zv} \begin{array}{c} \zeta_{\ve}'(t) = 0 \ \txt{for} \ t \in (-\infty,\mathfrak{t}-s) \cup (\mathfrak{t}-s+\varepsilon,\mathfrak{t}+s-\varepsilon)\cup (\mathfrak{t}+s,\infty), \\ |\zeta_{\ve}'(t)| \leq \frac{c}{\varepsilon}\ \txt{for} \ t \in (\mathfrak{t}-s,\mathfrak{t}-s+\varepsilon) \cup (\mathfrak{t}+s-\varepsilon,\mathfrak{t}+s). 
\end{array} \end{equation*} Without loss of generality, we shall always take $2h \leq \varepsilon$ since we will take limits in the following order $\lim_{\varepsilon \rightarrow 0} \lim_{h \rightarrow 0}$. We shall use $\lsbt{v}{\lambda,h}(z) \zeta_{\ve}(t)$ as a test function in \eqref{basic_pde} and \eqref{wapprox_int} where $\lsbt{v}{\lambda,h}$ is as constructed in Appendix \ref{lipschitz_truncation} (more specifically in \eqref{lipschitz_function}). Thus we get \begin{equation*} \begin{array}{l} L_1 + L_2 :=\iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \ddt{[u-w]_h} \lsbt{v}{\lambda,h} \zeta_{\ve} \ dx \ dt + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \iprod{[A(x,t,\nabla u) - A(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \\ \hspace*{6cm} = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt=: L_3. \end{array} \end{equation*} \begin{description} \item[Estimate for $L_1$:] Setting $\lsbo{E}{\lambda}^{\tau} = \{(x,t) \in \lsbo{E}{\lambda}: t=\tau\}$ where $\lsbo{E}{\lambda}$ is as defined in \eqref{elambda}, we get \begin{equation*} \label{6.38} \begin{array}{ll} L_1 & = \int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}\setminus \lsbo{E}{\lambda}^{\tau}} \dds{\lsbt{v}{\lambda,h}} (\lsbt{v}{\lambda,h}-v_h) \zeta_{\ve}(\tau)\ dy \ d\tau + \int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \frac{d{\lbr \frac12\lbr[[](v_h)^2 - (\lsbt{v}{\lambda,h} - v_h)^2\rbr[]]\zeta_{\ve}(\tau) \rbr }}{d\tau} \ dy \ d\tau \\ & \qquad - \frac12\int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \dds{\zeta_{\ve}} \lbr v_h^2 - (\lsbt{v}{\lambda,h} - v_h)^2 \rbr \ dy \ d\tau\\ & := J_2 + J_1(\mathfrak{t}+s) - J_1(\mathfrak{t}-s) - J_3, \end{array} \end{equation*} where we have set \begin{equation*} \label{def_i_1}J_1(\tau) := \frac12 \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} ( (v_h)^2 -
(\lsbt{v}{\lambda,h} - v_h)^2 ) (y,\tau) \zeta_{\ve}(\tau) \ dy. \end{equation*} Note that $J_1(\mathfrak{t}-s) = J_1(\mathfrak{t}+s) =0$ since $\zeta_{\ve}(\mathfrak{t}-s) =\zeta_{\ve}(\mathfrak{t}+s) = 0$. Applying the bound from Lemma \ref{lemma6.8-2}, we have \begin{equation*} \label{6.39} \begin{array}{ll} |J_2| & \apprle \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \left| \dds{\lsbt{v}{\lambda,h}} (\lsbt{v}{\lambda,h}-v_h)\right| \ dy \ d\tau \apprle\lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}| . \end{array} \end{equation*} \item[Estimate for $L_2$:] We split $L_2$ and make use of the fact that $\lsbt{v}{\lambda,h}(z) = v_h(z)$ for all $z\in \lsbo{E}{\lambda}\cap {K_{4\rho}^{\alpha}(\mathfrak{z})}.$ \begin{equation*} \begin{array}{ll} L_2 & = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[A(x,t,\nabla u) - A(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve}\ dz \\ & \qquad + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[A(x,t,\nabla u) - A(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve}\ dz\\ & = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[A(x,t,\nabla u) - A(x,t,\nabla w)]_h}{\nabla [u-w]_h} \zeta_{\ve}\ dz \\ & \qquad + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[A(x,t,\nabla u) - A(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve}\ dz\\ & =: L_2^1 + L_2^2. \end{array} \end{equation*} \begin{description} \item[Estimate for $L_2^1$:] Using \eqref{monotonicity}, we get \begin{equation*} \begin{array}{ll} L_2^1 & = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[A(x,t,\nabla u) - A(x,t,\nabla w)]_h}{\nabla [u-w]_h} \zeta_{\ve}\ dz \\ & \apprge \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} |\nabla [u-w]_h|^2 \lbr \mu^2 + |\nabla [u]_h|^2 + |\nabla [w]_h|^2 \rbr^{\frac{p(\cdot)-2}{2}} \zeta_{\ve} \ dz. 
\end{array} \end{equation*} \item[Estimate for $L_2^2$:] Using the bound from Lemma \ref{lemma6.7-1} along with \eqref{abounded}, we get \begin{equation} \label{7.10} \begin{array}{ll} L_2^2 & \apprle \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \left|[A(x,t,\nabla u) - A(x,t,\nabla w)]_h\right| |\nabla \lsbt{v}{\lambda,h}| \ dz\\ & \apprle \sum_{i\in\NN} \lambda^{\frac{1}{p(z_i)}} \iint_{2Q_i} \lbr[[]\lbr \mu^2 + |\nabla u|^2 + |\nabla w|^2 \rbr^{\frac{p(\cdot)-1}{2}}\rbr[]]_h \ dz \\ & \apprle \sum_{i \in \NN} \lambda^{\frac{1}{p(z_i)}} \lambda^{\frac{p^+_{2Q_i}-1}{p^-_{2Q_i}}}|\hat{Q}_i|\\ & \apprle \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation} In the last inequality, we made use of $\lambda^{\frac{1}{p(z_i)} + \frac{p^+_{2Q_i}}{p^-_{2Q_i}}-\frac{1}{p^-_{2Q_i}}-1} \leq C(p^{\pm}_{\log},n)$. \end{description} \item[Estimate for $L_3$:] Analogously to the estimate for $L_2$, we split $L_3$ as follows: \begin{equation*} \begin{array}{ll} L_3 & = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve}\ dz + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dz\\ & = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla [u-w]_h} \zeta_{\ve}\ dz + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve}\ dz\\ & =: L_3^1 + L_3^2.
\end{array} \end{equation*} \begin{description} \item[Estimate for $L_3^1$:] Using the fact that $\lsbt{v}{\lambda,h}(z) = v_h(z)$ for all $z\in \lsbo{E}{\lambda}\cap {K_{4\rho}^{\alpha}(\mathfrak{z})}$, we get \begin{equation*} \begin{array}{ll} L_3^1 & = \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla [u-w]_h} \zeta_{\ve} \ dz \\ & \leq \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} [|{\bf f}|^{p(\cdot)-1}]_h |\nabla [u-w]_h| \ dz. \end{array} \end{equation*} \item[Estimate for $L_3^2$:] Similar to the bound in \eqref{7.10}, we get \begin{equation*} \begin{array}{ll} L_3^2 & \apprle \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation*} \end{description} \end{description} Combining all the above estimates, we get \begin{equation*} \label{combined_1} \begin{array}{l} - \int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \dds{\zeta_{\ve}} \lbr v_h^2 - (\lsbt{v}{\lambda,h} - v_h)^2 \rbr \ dy \ d\tau + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} |\nabla [u-w]_h|^2 \lbr\mu^2 + |\nabla [u]_h|^2 +|\nabla [w]_h|^2 \rbr^{\frac{p(\cdot)-2}{2}}\zeta_{\ve} \ dz \\ \hspace*{6cm} \apprle \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} [|{\bf f}|^{p(\cdot)-1}]_h{|\nabla [u-w]_h|} \ dz + \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}| . \end{array} \end{equation*} In order to estimate $- \int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \dds{\zeta_{\ve}} \lbr v_h^2 - (\lsbt{v}{\lambda,h} - v_h)^2 \rbr \ dy \ d\tau$, we observe that on $\lsbo{E}{\lambda}$, there holds $\lsbt{v}{\lambda} = v$.
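Before estimating this term, let us record how the display above was assembled; this is routine bookkeeping. Since $J_1(\mathfrak{t}-s) = J_1(\mathfrak{t}+s) = 0$, the identity $L_1 + L_2 = L_3$ reads $J_2 - J_3 + L_2^1 + L_2^2 = L_3^1 + L_3^2$, so that
\begin{equation*}
- J_3 + L_2^1 = L_3^1 + L_3^2 - J_2 - L_2^2 \leq L_3^1 + |L_3^2| + |J_2| + |L_2^2| \apprle L_3^1 + \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|,
\end{equation*}
and the display follows upon inserting the lower bound for $L_2^1$ obtained via \eqref{monotonicity} and the upper bound for $L_3^1$ obtained above.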
Taking limits, first $h\searrow 0$ followed by $\varepsilon \searrow 0$, we get \begin{equation*} \begin{array}{ll} - \int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \dds{\zeta_{\ve}} \lbr v_h^2 - (\lsbt{v}{\lambda,h} - v_h)^2 \rbr \ dy \ d\tau \xrightarrow{\lim_{\varepsilon \searrow 0}\lim_{h \searrow 0}} & \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} (v^2 - (\lsbt{v}{\lambda} - v)^2 )(x,\mathfrak{t}+s) \ dx \\ & - \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} (v^2 - (\lsbt{v}{\lambda} - v)^2 )(x,\mathfrak{t}-s) \ dx. \end{array} \end{equation*} For the second term, we observe that on $\lsbo{E}{\lambda}$ we have $\lsbt{v}{\lambda} = v$, which vanishes at time $\mathfrak{t}-s$ by the initial boundary condition, while on $\lsbo{E}{\lambda}^c$ we have $\lsbt{v}{\lambda}(\cdot,\mathfrak{t}-s) = v(\cdot,\mathfrak{t}-s) = 0$ by construction. Thus the second term vanishes and we get \begin{equation*} - \int_{\mathfrak{t}-s}^{\mathfrak{t}+s} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \dds{\zeta_{\ve}} \lbr v_h^2 - (\lsbt{v}{\lambda,h} - v_h)^2 \rbr \ dy \ d\tau \xrightarrow{\lim_{\varepsilon \searrow 0} \lim_{h \searrow 0}} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} (v^2 - (\lsbt{v}{\lambda} - v)^2 )(x,\mathfrak{t}+s) \ dx. \end{equation*} In fact, if we consider a cut-off function $\zeta_{\ve}^{t_0} (\tau)$ for some $t_0 \in (\mathfrak{t}-s,\mathfrak{t}+s)$, where \begin{equation*} \zeta_{\ve}^{t_0}(\tau) = \left\{ \begin{array}{ll} 1 & \text{for} \ \tau \in (-t_0+\varepsilon,t_0-\varepsilon),\\ 0 & \text{for} \ \tau \in (-\infty,-t_0)\cup (t_0,\infty), \end{array}\right.
\end{equation*} we would have obtained the following estimate after taking limits: \begin{equation*} \label{combined_1_new_11} \begin{array}{l} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} (v^2 - (\lsbt{v}{\lambda} - v)^2 )(x,t_0) \ dx + \int_{-t_0}^{t_0} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \cap \lsbo{E}{\lambda}} |\nabla (u-w)|^2 \lbr\mu^2 + |\nabla u|^2 +|\nabla w|^2 \rbr^{\frac{p(\cdot)-2}{2}} \ dx \ dt \\ \hfill \apprle \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} |{\bf f}|^{p(\cdot)-1}|\nabla (u-w)| \ dz + \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}| . \end{array} \end{equation*} In particular, we get for any $t_0 \in (\mathfrak{t}-s,\mathfrak{t}+s)$ \begin{equation} \label{combined_1_new_1} \begin{array}{l} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} (v^2 - (\lsbt{v}{\lambda} - v)^2 )(x,t_0) \ dx + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} |\nabla (u-w)|^2 \lbr\mu^2+ |\nabla u|^2 +|\nabla w|^2 \rbr^{\frac{p(\cdot)-2}{2}} \ dx \ dt \\ \hfill \apprle \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} |{\bf f}|^{p(\cdot)-1}|\nabla (u-w)| \ dz + \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}| . \end{array} \end{equation} Using Lemma \ref{cruc_3}, for any $t \in (\mathfrak{t}-s,\mathfrak{t}+s)$, there holds \begin{equation*} \label{4.13} \begin{array}{ll} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \lbr v^2 - (\lsbt{v}{\lambda} - v)^2 \rbr (y,t) \ dy & \apprge - \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation*} Furthermore, using the above estimate in \eqref{combined_1_new_1} gives \begin{equation} \label{fully_combined} \begin{array}{ll} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} |\nabla (u-w)|^2 \lbr \mu^2 + |\nabla u|^2 +|\nabla w|^2 \rbr^{\frac{p(\cdot)-2}{2}} \ dx \ dt &\apprle \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} |{\bf f}|^{p(\cdot)-1}|\nabla (u-w)| \ dz +\lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|.
\end{array} \end{equation} Let us now multiply \eqref{fully_combined} by $\lambda^{-1-\beta}$ and integrate over $(1,\infty)$ to get \begin{equation*} \label{K_expression} K_1 \apprle K_2 + K_3, \end{equation*} where we have set \begin{equation*} \begin{array}{@{}r@{}c@{}l@{}} K_1 \ &:=& \ \int_{1}^{\infty} \lambda^{-1-\beta} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} |\nabla (u-w)|^2 \lbr\mu^2+ |\nabla u|^2 +|\nabla w|^2 \rbr^{\frac{p(\cdot)-2}{2}}\ dz \ d\lambda,\\ K_2 \ &:=& \ \int_{1}^{\infty} \lambda^{-1-\beta} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \cap \lsbo{E}{\lambda}} |{\bf f}|^{p(\cdot)-1}|\nabla (u-w)| \ dz \ d\lambda, \\ K_3\ &:=& \ \int_{1}^{\infty} \lambda^{-1-\beta} \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}| \ d\lambda. \\%\label{def_K_3}\\ \end{array} \end{equation*} Let us define $\tilde{g} = \max\{ g, 1\}$, where $g$ is from \eqref{def_g_A}; we then estimate each of the above terms as follows: \begin{description}[leftmargin=*] \item[Estimate for $K_1$:] Applying Fubini's theorem, we get \begin{equation*} \label{3.13} K_1 \apprge \frac{1}{\beta} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \tilde{g}(z)^{-\beta}|\nabla (u-w)|^2 \lbr \mu^2 + |\nabla u|^2 +|\nabla w|^2 \rbr^{\frac{p(\cdot)-2}{2}}\ dz.
\end{equation*} Let us define \[ K^+(\mathfrak{z}) := \{ z \in K_{4\rho}^{\alpha}(\mathfrak{z}): p(z) \geq 2\} \txt{and} K^-(\mathfrak{z}) := \{ z \in K_{4\rho}^{\alpha}(\mathfrak{z}): p(z) \leq 2\}, \] and consider the following two subcases: \begin{description}[leftmargin=*] \item[Subcase $K^-(\mathfrak{z})$:] We have the following simple decomposition: \begin{equation} \label{p-great-twoo} \begin{array}{rcl} |\nabla u - \nabla w|^{p(z)(1-\beta)} &=& \left[ (\mu^2+ |\nabla u|^2 + |\nabla w|^2)^{\frac{p(z)-2}{2}} |\nabla u - \nabla w |^2 \tilde{g}^{-\beta} \right]^{\frac{p(z)(1-\beta)}{2}} \times \\ && \times \left( \mu^2 + |\nabla u|^2 + |\nabla w|^2 \right)^{\frac{p(z)(1-\beta)(2-p(z))}{4}} \times \tilde{g}^{\frac{p(z)(1-\beta)}{2}\beta}. \end{array} \end{equation} Integrating \eqref{p-great-twoo} over $K^-(\mathfrak{z})$ and making use of Young's inequality with exponents $\frac{2}{p(z)(1-\beta)}, \frac{2}{2-p(z)} $ and $\frac{2}{p(z)\beta}$, we get \begin{equation} \label{p-less-twoo} \begin{array}{ll} \iint_{K^-(\mathfrak{z})} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \, dz & \apprle \epsilon_1 \iint_{K^-(\mathfrak{z})} (\mu^2 +|\nabla u|^2 + |\nabla w|^2)^{\frac{p(\cdot)(1-\beta)}{2}} \, dz + \epsilon_2 \iint_{K^-(\mathfrak{z})} \tilde{g}(z)^{1-\beta} \, dz \\ &\qquad + C_{(\epsilon_1,\epsilon_2)} \iint_{K^-(\mathfrak{z})} ( \mu^2 + |\nabla u|^2 + |\nabla w|^2)^{\frac{p(\cdot)-2}{2}} |\nabla u - \nabla w |^2 \tilde{g}(z)^{-\beta} \, dz.
\end{array} \end{equation} From the strong maximal function bound of Lemma \ref{max_bound}, we see that \begin{equation} \label{7.27} \begin{array}{rcl} \iint_{K^-(\mathfrak{z})} \tilde{g}(z)^{1-\beta} \, dz & \apprle & \iint_{\RR^{n+1}} \tilde{g}^{1-\beta} \ dz + |K^-(\mathfrak{z})|\\ & \apprle & \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr \frac{|u-w|}{\rho} + |\nabla u| + |\nabla w| + |{\bf f}| + \mu+ 1\rbr^{{p(\cdot)(1-\beta)}} \ dz + |K^-(\mathfrak{z})|\\ & \apprle & \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr |\nabla u| + |\nabla w| + |{\bf f}| +\mu+ 1\rbr^{p(\cdot)(1-\beta)} \ dz. \end{array} \end{equation} Combining \eqref{p-less-twoo} and \eqref{7.27}, we get \begin{equation*} \label{p-less-twoo-young} \begin{array}{ll} \iint_{K^-(\mathfrak{z})} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \, dz & \apprle (\epsilon_1 + \epsilon_2) C_{(p^{\pm}_{\log},n,\Lambda_0,\Lambda_1)} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} |\nabla u|^{p(\cdot)(1-\beta)} + |\nabla w-\nabla u|^{p(\cdot)(1-\beta)}\, dz \\ &\qquad + C_{(\epsilon_1,\epsilon_2)} \iint_{K^-(\mathfrak{z})} ( \mu^2 + |\nabla u|^2 + |\nabla w|^2)^{\frac{p(\cdot)-2}{2}} |\nabla u - \nabla w |^2 \tilde{g}(z)^{-\beta} \, dz \\ & \qquad + \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}}\lbr[[]|{\bf f}|^{p(\cdot)(1-\beta)} + 1\rbr[]]\, dz.\\ \end{array} \end{equation*} \item[Subcase $K^+(\mathfrak{z})$:] In this case, we proceed as follows: \begin{equation*} \label{p-geq-two} \begin{array}{rcl} \iint_{K^+(\mathfrak{z})} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \ dz & \leq &C_{(\epsilon_3)}\iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \tilde{g}(z)^{-\beta} |\nabla u - \nabla w|^{p(\cdot)} \ dz + \epsilon_3 \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \tilde{g}(z)^{1-\beta} \ dz \\ & \overset{\eqref{7.27}}{\apprle}& C_{(\epsilon_3)} \iint_{K^+(\mathfrak{z})} \tilde{g}(z)^{-\beta} ( \mu^2 + |\nabla u|^2 + |\nabla w|^2)^{\frac{p(\cdot)-2}{2}} |\nabla u - \nabla w |^2 \, dz\\ && \qquad + \epsilon_3
\iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} |\nabla u-\nabla w|^{p(\cdot)(1-\beta)} + |\nabla u|^{p(\cdot)(1-\beta)} + |{\bf f}|^{p(\cdot)(1-\beta)}+ 1\, dz. \end{array} \end{equation*} \end{description} \item[Estimate for $K_2$:] Again by Fubini's theorem, we get \begin{equation*} \label{3.18} K_2 = \frac{1}{\beta} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \tilde{g}(z)^{-\beta} \iprod{|{\bf f}|^{p(\cdot)-2} {\bf f}}{\nabla u - \nabla w} \ dz. \end{equation*} From the definition of $g(z)$, we see that for $z \in {K_{4\rho}^{\alpha}(\mathfrak{z})}$, we have $\tilde{g}(z) \geq |\nabla u - \nabla w|(z)$ which implies $\tilde{g}(z)^{-\beta} \leq |\nabla u - \nabla w|^{-\beta}(z)$. We can now apply Young's inequality with exponents $\frac{p(\cdot)(1-\beta)}{p(\cdot)-1}$ and $\frac{p(\cdot)(1-\beta)}{1-p(\cdot)\beta}$ to get: \begin{equation*} \label{3.19} \begin{array}{ll} K_2 & \apprle \frac{C_{(\epsilon_4)}}{\beta} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} |{\bf f}|^{p(\cdot)(1-\beta)} \ dz + \frac{\epsilon_4}{\beta} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \ dz. \end{array} \end{equation*} \item[Estimate for $K_3$:] Applying the layer cake representation followed by Lemma \ref{max_bound}, we get \begin{equation*} \label{3.22} \begin{array}{rcl} K_3 & = &\frac{1}{1-\beta} \iint_{\RR^{n+1}} \tilde{g}(z)^{1-\beta} \ dz \\ & \overset{\eqref{7.27}}{\apprle} & \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} \lbr |\nabla u - \nabla w| + |\nabla u| + |{\bf f}|+\mu + 1\rbr^{p(\cdot)(1-\beta)} \ dz. 
\end{array} \end{equation*} \end{description} Combining everything, we get the following estimate: \begin{equation*} \begin{array}{rcl} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})}} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \ dz & \apprle & (\epsilon_1+\epsilon_2+\epsilon_3+C_{(\epsilon_1,\epsilon_2,\epsilon_3)}\beta) \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla u|^{p(\cdot)(1-\beta)} \ dz \\ & & + (\epsilon_1+\epsilon_2+\epsilon_3+\epsilon_4+ C_{(\epsilon_1,\epsilon_2,\epsilon_3)}\beta) \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla u - \nabla w|^{p(\cdot)(1-\beta)} \ dz \\ && + C_{(\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4,\beta)} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr[[]|{\bf f}|^{p(\cdot)(1-\beta)} + 1\rbr[]] \ dz. \end{array} \end{equation*} Choosing $\epsilon_1,\epsilon_2,\epsilon_3$ and $\epsilon_4$ sufficiently small, followed by choosing $\beta \in (0,\tilde{\beta}_3]$, completes the proof of the estimate. \end{proof} \section{Second difference estimate below the natural exponent} \label{six} In this section, we will prove a difference estimate between the weak solution of \eqref{wapprox_int} and the weak solution of \eqref{vapprox_bnd}. To do this, we will use the method of Lipschitz truncation from Appendix \ref{lipschitz_truncation_B}. For this section, let us denote \begin{equation*} \label{def_s_b} s:= \scalet{\alpha}{\mathfrak{z}} (3\rho)^2. \end{equation*} \begin{theorem} \label{second_diff_thm} Let $(p(\cdot),\mathcal{A},\Omega)$ be $(\gamma,{\bf S}_0)$-vanishing. Suppose that $w$ and $v$ are weak solutions of \eqref{wapprox_int} and \eqref{vapprox_bnd}, respectively, and let $\alpha \geq 1$ be given such that the following assumptions hold: \begin{equation} \label{hypothesis} \Yint\tiltlongdash_{K_{4\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(\mathfrak{z})(1-\beta)} \ dz \le c_{\ast} \alpha^{1-\beta} \txt{and} \alpha^{p^+_{K_{4\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{4\rho}^{\alpha}(\mathfrak{z})}}\leq c_p.
\end{equation} Further assume that \begin{equation}\label{more_hyp} \alpha^{-\frac{n}{p(\mathfrak{z})} + \frac{nd}{2} + d} \leq \Gamma^2 (4\rho)^{-(n+2)} \txt{and} p^+_{K_{4\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{4\rho}^{\alpha}(\mathfrak{z})} \leq \omega_{\pp}(4\rho\Gamma),\end{equation} for some $\Gamma, c_p, c_{\ast} >1$ to be selected as fixed constants in Section \ref{eight}. Then there exists $\tilde{\rho}_4 = \tilde{\rho}_4(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1,{\bf M}_0)$ such that for any $128 \rho \leq \tilde{\rho}_4$ and for any $\varepsilon \in (0,1]$, there exist $\tilde{\beta}_4 = \tilde{\beta}_4(\varepsilon, n,\Lambda_0,\Lambda_1,p^{\pm}_{\log})$ and $\tilde{\gamma}_0 = \tilde{\gamma}_0(\varepsilon, n,\Lambda_0,\Lambda_1,p^{\pm}_{\log})$ such that for any $\beta \in (0,\tilde{\beta}_4]$ and any $\gamma \in (0,\tilde{\gamma}_0)$, the following estimate holds: \begin{equation} \label{diff_est_two} \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \leq \varepsilon \alpha \txt{and} \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \apprle \alpha. \end{equation} \end{theorem} \begin{proof} The first estimate in \eqref{diff_est_two}, together with \eqref{hypothesis}, directly implies the second estimate in \eqref{diff_est_two} after making use of the triangle inequality. Thus we only prove the first estimate in \eqref{diff_est_two}. Consider the following cut-off function $\zeta_{\ve} \in C^{\infty}(\RR)$ such that $0 \leq \zeta_{\ve}(t) \leq 1$ and \begin{equation*} \label{def_zv_B} \zeta_{\ve}(t) = \left\{ \begin{array}{ll} 1 & \text{for} \ t \in (\mathfrak{t}-s+\varepsilon,\mathfrak{t}+s-\varepsilon),\\ 0 & \text{for} \ t \in (-\infty,\mathfrak{t}-s)\cup (\mathfrak{t}+s,\infty). \end{array}\right.
\end{equation*} It is easy to see that \begin{equation*} \label{bound_zv_B} \begin{array}{c} \zeta_{\ve}'(t) = 0 \ \txt{for} \ t \in (-\infty,\mathfrak{t}-s) \cup (\mathfrak{t}-s+\varepsilon,\mathfrak{t}+s-\varepsilon)\cup (\mathfrak{t}+s,\infty), \\ |\zeta_{\ve}'(t)| \leq \frac{c}{\varepsilon}\ \txt{for} \ t \in (\mathfrak{t}-s,\mathfrak{t}-s+\varepsilon) \cup (\mathfrak{t}+s-\varepsilon,\mathfrak{t}+s). \end{array} \end{equation*} Without loss of generality, we shall always take $2h \leq \varepsilon$ since we will take limits in the following order $\lim_{\varepsilon \rightarrow 0} \lim_{h \rightarrow 0}$. We shall use $\lsbt{v}{\lambda,h}(z) \zeta_{\ve}(t)$ as a test function where $\lsbt{v}{\lambda,h}$ is as constructed in Appendix \ref{lipschitz_truncation_B} (more specifically in \eqref{lipschitz_function_B}). This is valid since $\lsbt{v}{\lambda,h} \in C^{0,1}(K_{3\rho}^{\alpha}(\mathfrak{z}))$. Using this, we get \begin{equation*} \begin{array}{l} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}} \ddt{[w-v]_h} \lsbt{v}{\lambda,h} \zeta_{\ve} \ dx \ dt + \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}} \iprod{[\overline{\mathcal{B}}(t,\nabla v) - \overline{\mathcal{B}}(t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \\ \hspace*{6cm} = \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}} \iprod{[\overline{\mathcal{B}}(t,\nabla w) - \mathcal{B}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt\\ \hspace*{6.5cm} + \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}} \iprod{[\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt. 
\end{array} \end{equation*} Proceeding as in Theorem \ref{first_diff_thm}, after taking limits, we get for any $t_0 \in (\mathfrak{t}-s,\mathfrak{t}+s)$, the estimate \begin{equation} \label{B_7.8} \begin{array}{l} \int_{\Omega_{3\rho}^{\alpha}(\mathfrak{x})} (v^2 - (\lsbt{v}{\lambda} - v)^2)(x,t_0) \ dx +\iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{\overline{\mathcal{B}}(t,\nabla v) - \overline{\mathcal{B}}(t,\nabla w)}{\nabla (v-w)} \zeta_{\ve} \ dx \ dt \\ \hspace*{3cm}= -\iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla v) - \overline{\mathcal{B}}(t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \\ \hspace*{4cm}+ \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla w) - \mathcal{B}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt\\ \hspace*{4cm}+\iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \\ \hspace*{4cm}+ \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla w) - \mathcal{B}(x,t,\nabla w)]_h}{\nabla (v-w)} \zeta_{\ve} \ dx \ dt\\ \hspace*{4cm}+\iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)]_h}{\nabla (v-w)} \zeta_{\ve} \ dx \ dt. 
\end{array} \end{equation} Let us multiply \eqref{B_7.8} by $\lambda^{-1-\beta}$ and integrate over $[1,\infty)$ to get \[ K_1 + K_2 \leq K_3 + K_4 + K_5 + K_6 + K_7, \] where \begin{equation*}\label{define_K}\begin{array}{rcl} K_1 &:=& \int_1^{\infty} \lambda^{-1-\beta} \int_{\Omega_{3\rho}^{\alpha}(\mathfrak{x})} (v^2 - (\lsbt{v}{\lambda} - v)^2)(x,t_0) \ dx \ d\lambda,\\ K_2 &:=& \int_1^{\infty} \lambda^{-1-\beta} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{\overline{\mathcal{B}}(t,\nabla v) - \overline{\mathcal{B}}(t,\nabla w)}{\nabla (v-w)} \zeta_{\ve} \ dx \ dt \ d\lambda, \\ K_3 &:=& -\int_1^{\infty} \lambda^{-1-\beta} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla v) - \overline{\mathcal{B}}(t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \ d\lambda,\\ K_4 &:=& \int_1^{\infty} \lambda^{-1-\beta} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla w) - \mathcal{B}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \ d\lambda, \\ K_5 &:=& \int_1^{\infty} \lambda^{-1-\beta} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \ d\lambda, \\ K_6 &:=& \int_1^{\infty} \lambda^{-1-\beta} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla w) - \mathcal{B}(x,t,\nabla w)]_h}{\nabla (v-w)} \zeta_{\ve} \ dx \ dt \ d\lambda, \\ K_7 &:=& \int_1^{\infty} \lambda^{-1-\beta} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\cap \lsbo{E}{\lambda}} \iprod{[\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)]_h}{\nabla (v-w)} \zeta_{\ve} \ dx \ dt \ d\lambda.
\end{array}\end{equation*} Let us set $\tilde{g}(z) := \max\{1,g(z)\}$ where $g(z)$ is defined in \eqref{def_g_B} and estimate each of the terms as follows: \begin{description} \item[Estimate for $K_1$:] Using Lemma \ref{cruc_3_B}, we see that \[ \int_{\Omega_{3\rho}^{\alpha}(\mathfrak{x})} (v^2 - (\lsbt{v}{\lambda} - v)^2)(x,t_0) \ dx \geq - \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \] Using this along with Fubini's theorem, we see that \begin{equation*} \label{estimate_K1} \begin{array}{rcl} K_1 & \apprge & - \int_1^{\infty} \lambda^{-1-\beta} \lambda |\{ z \in \RR^{n+1}: \tilde{g}(z) \geq \lambda\}| \ d\lambda \\ & = &-\frac{1}{1-\beta}\iint_{\RR^{n+1}} \tilde{g}(z)^{1-\beta} \ dz \\ & \apprge & - \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})}\lbr |\nabla w - \nabla v| + |\nabla w| + 1\rbr^{p(\mathfrak{z})(1-\beta)} \ dz. \end{array} \end{equation*} \item[Estimate for $K_2$:] Similar to the estimates in Theorem \ref{first_diff_thm}, we see that \begin{equation*} \label{estimate_K2} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \apprle C_{(\varepsilon_1)} \beta K_2 + \varepsilon_1 \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(\mathfrak{z})(1-\beta)} + 1\ dz. \end{equation*} \item[Estimate for $K_3$:] Using the bound from Lemma \ref{lemma6.7-1_B}, we get \begin{equation*} \begin{array}{rcl} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\overline{\mathcal{B}}(t,\nabla v) - \overline{\mathcal{B}}(t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt & \apprle & \sum_{i \in \NN} \lambda^{\frac{1}{p(\mathfrak{z})}} \iint_{2Q_i} \lbr \mu^2 + |\nabla w|^2 + |\nabla v|^2 \rbr^{\frac{p(\mathfrak{z})-1}{2}} \ dz \\ & \apprle & \sum_{i \in \NN} \lambda^{\frac{1}{p(\mathfrak{z})}} \lambda^{\frac{p(\mathfrak{z}) -1}{p(\mathfrak{z})}} |16Q_i| \apprle \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|.
\end{array} \end{equation*} Using the above bound in $K_3$ followed by applying Fubini's theorem, we get \begin{equation} \label{estimate_K3} \begin{array}{rcl} K_3 & \apprle & \int_1^{\infty} \lambda^{-1-\beta} \lambda |\{ z \in \RR^{n+1}: \tilde{g}(z) \geq \lambda\}| \ d\lambda =\frac{1}{1-\beta}\iint_{\RR^{n+1}} \tilde{g}(z)^{1-\beta} \ dz \\ & \apprle & \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})}\lbr |\nabla w - \nabla v| + |\nabla w| + 1\rbr^{p(\mathfrak{z})(1-\beta)} \ dz. \end{array} \end{equation} \item[Estimate for $K_4$:] Similar to the estimate for $K_3$, we get \begin{equation*} \label{estimate_K4} K_4 \apprle \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})}\lbr |\nabla w - \nabla v| + |\nabla w| + 1\rbr^{p(\mathfrak{z})(1-\beta)} \ dz. \end{equation*} \item[Estimate for $K_5$:] In this case, we proceed as follows: \begin{equation*} \begin{array}{rcl} &&\hspace*{-4cm}\iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})}\setminus \lsbo{E}{\lambda}} \iprod{[\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)]_h}{\nabla \lsbt{v}{\lambda,h}} \zeta_{\ve} \ dx \ dt \\ & \apprle & \sum_{i\in\NN} \lambda^{\frac{1}{p(\mathfrak{z})}} \iint_{2Q_i} \abs{\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)} \ dx \ dt \\ & \overset{\eqref{aa_bb}}{\apprle} & \sum_{i\in\NN}|2Q_i|\lambda^{\frac{1}{p(\mathfrak{z})}} \Yint\tiltlongdash_{2Q_i} \lbr |\nabla w| + |\nabla v| + 1\rbr^{p(\mathfrak{z}) -1} \ dx \ dt \\ & \overset{\eqref{elambda_B}}{\apprle} & \sum_{i\in\NN}|2Q_i|\lambda^{\frac{1}{p(\mathfrak{z})}} \lambda^{\frac{p(\mathfrak{z}) -1}{p(\mathfrak{z})}} \apprle \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation*} This is bounded exactly as in \eqref{estimate_K3} to get \begin{equation*} \label{estimate_K5} K_5 \apprle \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})}\lbr |\nabla w - \nabla v| + |\nabla w| + 1\rbr^{p(\mathfrak{z})(1-\beta)} \ dz.
\end{equation*} \item[Estimate for $K_6$:] Applying Fubini's theorem, we see that \begin{equation*} \begin{array}{rcl} K_6 &=& \frac{1}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\overline{\mathcal{B}}(t,\nabla w) - \mathcal{B}(x,t,\nabla w)| |\nabla (v-w)|\tilde{g}^{-\beta}(z) \ dz \\ & \apprle & \frac{1}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} \Theta(\mathcal{A}, B_{4\rho}^{\alpha}(\mathfrak{x}))(1+|\nabla w|)^{p(\mathfrak{z})-1} |\nabla v-\nabla w|\tilde{g}^{-\beta}(z) \ dz \\ & \apprle & \frac{1}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} \Theta(\mathcal{A}, B_{4\rho}^{\alpha}(\mathfrak{x}))(1+|\nabla w|)^{p(\mathfrak{z})-1} |\nabla v-\nabla w|^{1-\beta} \ dz \\ & \apprle &\frac{\varepsilon_2}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz + \frac{C_{(\varepsilon_2)}}{\beta} \underbrace{\iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} \Theta(\mathcal{A}, B_{4\rho}^{\alpha}(\mathfrak{x}))^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}}(1+|\nabla w|)^{p(\mathfrak{z})} \ dz}_{\tilde{K}_6}.\\ \end{array} \end{equation*} We shall estimate $\tilde{K}_6$ as follows, where $\sigma$ is to be chosen appropriately later on: \begin{equation*} \begin{array}{rcl} \frac{\tilde{K}_6}{|K_{3\rho}^{\alpha}(\mathfrak{z})|} & \apprle & \lbr \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} \Theta(\mathcal{A}, B_{4\rho}^{\alpha}(\mathfrak{x}))^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}\frac{4+\sigma}{\sigma}}\ dz \rbr^{\frac{\sigma}{4+\sigma}} \lbr \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})}(1+|\nabla w|)^{p(\mathfrak{z})\frac{4+\sigma}{4}} \ dz\rbr^{\frac{4}{4+\sigma}}.
\end{array} \end{equation*} If we restrict $p^+_{K_{4\rho}(\mathfrak{z})} - p^-_{K_{4\rho}(\mathfrak{z})} \leq \frac{(p^--1)\sigma}{4}$, we see that the following two bounds hold: \begin{equation} \label{bounds_pp_mfz} \begin{array}{c} p(\mathfrak{z}) \leq \frac{p(\mathfrak{z})}{p(\mathfrak{z}) - 1} (p^+_{K_{4\rho}(\mathfrak{z})} -1) \leq \frac{p^-_{K_{4\rho}(\mathfrak{z})}(p^+_{K_{4\rho}(\mathfrak{z})}-1)}{p^-_{K_{4\rho}(\mathfrak{z})}-1} \leq p(\cdot) \lbr 1 + \frac{p^+_{K_{4\rho}(\mathfrak{z})} -p^-_{K_{4\rho}(\mathfrak{z})}}{p^--1} \rbr \leq p(\cdot) \lbr 1 + \frac{\sigma}{4} \rbr,\\ p(\mathfrak{z})\lbr 1 + \frac{\sigma}{4} \rbr\leq \frac{p(\mathfrak{z})}{p(\mathfrak{z}) - 1} (p^+_{K_{4\rho}(\mathfrak{z})} -1) \lbr 1 + \frac{\sigma}{4} \rbr \leq p(\cdot) \lbr 1 + \frac{p^+_{K_{4\rho}(\mathfrak{z})} -p^-_{K_{4\rho}(\mathfrak{z})}}{p^--1} \rbr \lbr 1 + \frac{\sigma}{4} \rbr \leq p(\cdot) \lbr 1 + {\sigma} \rbr. \end{array} \end{equation} Let us set $a = \frac{p^+_{K_{4\rho}(\mathfrak{z})} -p^-_{K_{4\rho}(\mathfrak{z})}}{p^--1} \lbr 1 + \frac{\sigma}{4} \rbr + \frac{\sigma}{4} \leq \sigma$, then we get from Corollary \ref{normalized_higher_integrability} that \begin{equation} \label{8.25} \begin{array}{rcl} \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})}(1+|\nabla w|)^{p(\mathfrak{z})\frac{4+\sigma}{4}} \ dz & \apprle& \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})}(1+|\nabla w|)^{p(\cdot) ( 1+a)} \ dz\\ & \overset{\text{Corollary \ref{normalized_higher_integrability}}}{\apprle} & \alpha^{1+a}\\ & = & \alpha^{1+\frac{\sigma}{4}}\alpha^{\frac{p^+_{K_{4\rho}(\mathfrak{z})} -p^-_{K_{4\rho}(\mathfrak{z})}}{p^--1} \lbr 1 + \frac{\sigma}{4} \rbr}\\ & \overset{\eqref{hypothesis}}{\apprle} & c_p^{\frac{1}{p^--1} \lbr 1 + \frac{\sigma}{4} \rbr} \alpha^{1+\frac{\sigma}{4}}. 
\end{array} \end{equation} From \eqref{abounded} and \eqref{small_aa}, we see that \begin{equation*} \lbr \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} \Theta(\mathcal{A}, B_{4\rho}^{\alpha}(\mathfrak{x}))^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}\frac{4+\sigma}{\sigma}}\ dz \rbr^{\frac{\sigma}{4+\sigma}} \apprle \gamma^{\frac{\sigma}{4+\sigma}}. \end{equation*} Combining everything, we see that \begin{equation*} K_6 \apprle \frac{\varepsilon_2}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz + \frac{C_{(\varepsilon_2)}}{\beta} \gamma^{\frac{\sigma}{4+\sigma}} |K_{3\rho}^{\alpha}(\mathfrak{z})| c_p^{\frac{1}{p^--1}}\alpha. \end{equation*} \item[Estimate for $K_7$:] Applying Fubini's theorem, we get \begin{equation*} \begin{array}{rcl} K_7 &=& \frac{1}{\beta}\iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)| |\nabla (v-w)|\tilde{g}^{-\beta}(z) \ dz \\ &\apprle & \frac{1}{\beta}\iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\mathcal{B}(x,t,\nabla w) - \mathcal{A}(x,t,\nabla w)| |\nabla (v-w)|^{1-\beta} \ dz \\ &\overset{\eqref{def_bb},\eqref{abounded}}{\apprle} & \frac{1}{\beta}\iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} (\mu^2 + |\nabla w|^2)^{\frac{p(z)-1}{2}} \left|1 - (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}} \right| |\nabla (v-w)|^{1-\beta} \ dz. 
\\ \end{array} \end{equation*} Applying Young's inequality, we get, for $E:=\{z \in K_{3\rho}^{\alpha}(\mathfrak{z}): \mu^2 + |\nabla w(z)|^2>0\}$, the bound \begin{equation} \label{w_v_11} \begin{array}{rcl} \frac{K_7}{|K_{3\rho}^{\alpha}(\mathfrak{z})|} & \apprle & \frac{1}{\beta} \varepsilon_3 \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \\ & & \qquad + \frac{C_{(\varepsilon_3)}}{\beta |K_{3\rho}^{\alpha}(\mathfrak{z})|} \underbrace{\iint_E \bgh{(\mu^2 + |\nabla w|^2)^{\frac{p(z)-1}{2}} \left|1 - (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}} \right|}^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}} \ dz}_{J}. \end{array} \end{equation} We shall now proceed with estimating the second term in \eqref{w_v_11} as follows: For each $z \in E$, in view of the mean value theorem applied to the map $[0,1] \ni \mathfrak{a} \mapsto (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}\mathfrak{a}}$, there exists $\mathfrak{a}_z \in [0,1]$ such that we get \begin{equation} \label{w_v_13} (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}} -1 = \frac{p(\mathfrak{z})-p(z)}{2} (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}\mathfrak{a}_z} \log (\mu^2 + |\nabla w|^2). \end{equation} This implies \begin{equation*} \label{w_v_13.1} \begin{array}{l} (\mu^2 + |\nabla w|^2)^{\frac{p(z)-1}{2}} \left|1 - (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}} \right| \apprle \omega_{\pp}(4\rho\Gamma) (\mu^2 + |\nabla w|^2)^{\frac{(p(\mathfrak{z})-p(z))\mathfrak{a}_z + p(z) -1}{2}} \left| \log (\mu^2 + |\nabla w|^2) \right|. \end{array} \end{equation*} Let us now define the sets \begin{equation} \label{split_sets} E^1 : = \{ z \in K_{3\rho}^{\alpha}(\mathfrak{z}) : |\nabla w(z)| \leq 1\} \quad \text{and} \quad E^2 : = \{ z \in K_{3\rho}^{\alpha}(\mathfrak{z}) : |\nabla w(z)| > 1\}.
\end{equation} Recall that $\mu \leq 1$; hence, using the inequality $t^{\beta} |\log t| \leq \max \left\{ \frac{1}{e\beta}, 2^{\beta}\log 2\right\}$, which holds for all $t \in (0,2]$ and any $\beta >0$, we get for $z \in E^1$ \begin{equation} \label{w_v_14} (\mu^2 + |\nabla w|^2)^{\frac{p(z)-1}{2}} \left|1 - (\mu^2 + |\nabla w|^2)^{\frac{p(\mathfrak{z})-p(z)}{2}} \right| \apprle \omega_{\pp}(4\rho\Gamma) \max \left\{ \frac{2}{e(p^--1)}, 2^{\frac{p^+-1}{2}}\log 2 \right\}. \end{equation} Here we have used that, with $\beta(z) := \frac{\mathfrak{a}_z (p(\mathfrak{z})-p(z)) + p(z) -1}{2}$, there holds \begin{equation*} \label{bound_beta} \frac{p^--1}{2} \leq \beta(z) \leq \frac{p(\mathfrak{z})-1}{2} \leq \frac{p^+-1}{2}. \end{equation*} Hence, using \eqref{split_sets} and substituting \eqref{w_v_14} into \eqref{w_v_13}, we get \begin{equation} \label{w_v_15} \begin{array}{ll} |\mathcal{B}(z,\nabla w) - \mathcal{A}(z,\nabla w)|& \apprle \lsb{\chi}{E^1}\omega_{\pp}(4\rho\Gamma) \max \left\{ \frac{2}{e(p^--1)}, 2^{\frac{p^+-1}{2}}\log 2 \right\} \\ & \qquad \qquad + \lsb{\chi}{E^2} {\omega_{\pp}(4\rho\Gamma)} |\nabla w|^{(p(\mathfrak{z})-p(z))\mathfrak{a}_z + p(z) -1} \log(e + |\nabla w|). \end{array} \end{equation} Combining \eqref{w_v_15} and \eqref{w_v_11}, we get \begin{equation*} \label{w_v_16} J \apprle \omega_{\pp}(4\rho\Gamma)^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}}|K_{3\rho}^{\alpha}(\mathfrak{z})| \lbr 1+ J_1\rbr, \end{equation*} where $J_1:= \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{\mathfrak{b}} [\log(e+|\nabla w|^{\mathfrak{b}})]^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}} \ dz$ with $\mathfrak{b}:={\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}(p^+_{K_{3\rho}^{\alpha}(\mathfrak{z})}-1)}$.
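For completeness, we sketch the elementary calculus computation behind the bound on $t^{\beta}|\log t|$ invoked above; this is a standard fact, recorded only for the reader's convenience.

```latex
\begin{align*}
\sup_{t \in (0,1]} t^{\beta}|\log t| &= \frac{1}{e\beta},
&
\sup_{t \in [1,2]} t^{\beta}|\log t| &= 2^{\beta}\log 2.
\end{align*}
% Indeed, f(t) := t^{\beta}(-\log t) satisfies f'(t) = t^{\beta-1}(-\beta\log t - 1),
% which vanishes only at t = e^{-1/\beta}, where f attains the value e^{-1}/\beta;
% on [1,2] the map t \mapsto t^{\beta}\log t is increasing, so its maximum is 2^{\beta}\log 2.
```

Since $\beta(z)$ ranges over $\left[\frac{p^--1}{2},\frac{p^+-1}{2}\right]$ and $\beta \mapsto \frac{1}{e\beta}$ is decreasing while $\beta \mapsto 2^{\beta}\log 2$ is increasing, the two extreme choices of $\beta$ yield a bound independent of $z$.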
Using the inequality $\log(e + ab) \leq \log(e+a) + \log(e+b)$ for $a,b>0$ along with the simple bound $\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1} \leq \frac{p^-}{p^--1}$, we get \begin{equation*} \label{w_v_17} \begin{array}{ll} J_1 & \apprle \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{\mathfrak{b}} \lbr[[]\log \lbr e + \frac{|\nabla w|^{\mathfrak{b}}}{\avg{|\nabla w|^{\mathfrak{b}}}{K_{3\rho}^{\alpha}(\mathfrak{z})}} \rbr \rbr[]]^{\frac{p^-}{p^--1}}\ dz \\ & \qquad + \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{\mathfrak{b}} \lbr[[]\log \lbr e + {\avg{|\nabla w|^{\mathfrak{b}}}{K_{3\rho}^{\alpha}(\mathfrak{z})}} \rbr \rbr[]]^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}}\ dz \\ & =: J_2 + J_3. \end{array} \end{equation*} \begin{description} \item[Estimate for $J_2$:] We now apply Lemma \ref{llogl} with $f = |\nabla w|^{\mathfrak{b}}$, $\beta = \frac{p^-}{p^--1}$ and $s=1+\frac{\sigma}{4}$ to get \begin{equation} \label{w_v_18} \begin{array}{rcl} J_2 &\apprle& \lbr \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{\mathfrak{b} \lbr 1 + \frac{\sigma}{4} \rbr} \ dz \rbr^{\frac{4}{4+\sigma}} \leq \lbr \Yint\tiltlongdash_{K_{4\rho}^{\alpha}(\mathfrak{z})} (1+|\nabla w|)^{p(\cdot)(1+a)} \ dz \rbr^{\frac{4}{4+\sigma}} \\ & \overset{\eqref{8.25}}{\apprle} & c_p^{\frac{1}{p^--1}}\alpha, \end{array} \end{equation} where $a = \frac{p^+_{K_{4\rho}(\mathfrak{z})} -p^-_{K_{4\rho}(\mathfrak{z})}}{p^--1} \lbr 1 + \frac{\sigma}{4} \rbr + \frac{\sigma}{4}$ satisfying $a \leq \sigma$. 
\item[Estimate for $J_3$:] From \eqref{bounds_pp_mfz} and \eqref{8.25}, we see that \begin{equation} \label{w_v_19} \begin{array}{rcl} \log \lbr e + \avg{|\nabla w|^{\mathfrak{b}}}{K_{3\rho}^{\alpha}(\mathfrak{z})}\rbr & \leq &\log \lbr e + c_p^{\frac{1}{p^--1}}\alpha\rbr = \log(e + c_1 \alpha) \apprle C_{(c_1)}( \log \alpha + 1) \\ & \apprle &C_{(c_1)} \lbr \log \alpha^{-\frac{n}{p(\mathfrak{z})} + \frac{nd}{2} + d} + 1\rbr\\ & \overset{\eqref{more_hyp}}{ \leq} &C_{(c_1)} \left\{ \log \lbr \Gamma^2 (4\rho)^{-(n+2)} \rbr +1 \right\}. \end{array} \end{equation} Here we have denoted $c_1 = c_p^{\frac{1}{p^--1}}$, where $c_p$ is from \eqref{8.25}. Substituting \eqref{w_v_19} into $J_3$ and making use of the bound from \eqref{w_v_18}, we get \begin{equation*} \label{w_v_21} J_3 \apprle C_{(c_1)} \left\{ \log \lbr \Gamma^2 (4\rho)^{-(n+2)} \rbr +1 \right\}^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}}\alpha. \end{equation*} \end{description} Then we have \begin{equation*} \label{estimate_K7_pre} \begin{array}{ll} |K_7| & \apprle \frac{\varepsilon_3}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \\ & \qquad \qquad + \frac{C_{(\varepsilon_3)}}{\beta}\omega_{\pp}(4\rho\Gamma)^{\frac{p^-}{p^--1}} |K_{3\rho}^{\alpha}(\mathfrak{z})| C_{(c_1)} \left\{ \log \lbr \Gamma^2 (4\rho)^{-(n+2)} \rbr +1 \right\}^{\frac{p(\mathfrak{z})}{p(\mathfrak{z})-1}}\alpha.
\end{array} \end{equation*} The restriction $\rho \leq \frac{1}{4e \Gamma^{n+5}}$ implies \begin{equation*}\label{two_7.34} \begin{array}{rcl} \omega_{\pp}(4\rho\Gamma) \left\{ \log \lbr \Gamma^2 (4\rho)^{-(n+2)} \rbr + 1 \right\}& =& \omega_{\pp}(4\rho\Gamma) \log\lbr 4\rho e \Gamma^2 (4\rho)^{-(n+3)}\rbr\\ & \leq & \omega_{\pp}(4\rho\Gamma) \log\lbr \Gamma^{-(n+3)} (4\rho)^{-(n+3)}\rbr\\ & \leq & (n+3) \omega_{\pp}(4\rho\Gamma) \log\lbr \frac{1}{4\rho\Gamma}\rbr\\ & \apprle & \gamma. \end{array} \end{equation*} Using this, we get \begin{equation*} \label{estimate_K7} |K_7| \apprle \frac{\varepsilon_3}{\beta} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz + \frac{C_{(\varepsilon_3)}}{\beta}\gamma^{\frac{p^-}{p^--1}} |K_{3\rho}^{\alpha}(\mathfrak{z})| C_{(c_1)}\alpha. \end{equation*} \end{description} Combining all the estimates, we get \begin{equation*} \begin{array}{rcl} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz & \apprle & (\varepsilon_1 + \varepsilon_2 + \varepsilon_4 + \beta) \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \\ & & + 3\beta \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(\mathfrak{z})(1-\beta)} \ dz \\ && + {C_{(\varepsilon_3)}}\gamma^{\frac{p^-}{p^--1}} |K_{3\rho}^{\alpha}(\mathfrak{z})| C_{(c_1)}\alpha.
\end{array} \end{equation*} Now, choosing $\varepsilon_1,\varepsilon_2,\varepsilon_4$ and $\beta$ small, we get, for any $\varepsilon >0$, the estimate \begin{equation*} \begin{array}{rcl} \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz & \le & (\varepsilon_1 + \varepsilon_2 + \varepsilon_4 + \beta) \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w|^{p(\mathfrak{z})(1-\beta)} \ dz + \gamma^{\frac{p^-}{p^--1}} C_{(c_1)}\alpha\\ & \overset{\eqref{hypothesis}}{\le} & \varepsilon \alpha^{1-\beta} + \gamma^{\frac{p^-}{p^--1}} \lbr 1+c_p^{\frac{1}{p^--1}}\rbr\alpha. \end{array} \end{equation*} Note that $\alpha \geq 1$, which implies $\alpha^{1-\beta} \leq \alpha$. Finally, given any $\varepsilon \in (0,1)$, we choose $\gamma$ sufficiently small such that \[ \Yint\tiltlongdash_{K_{3\rho}^{\alpha}(\mathfrak{z})} |\nabla w - \nabla v|^{p(\mathfrak{z})(1-\beta)} \ dz \leq \varepsilon \alpha, \] which completes the proof. \end{proof} \section{Covering arguments} \label{eight} Let $\beta \in (0,\beta_0)$, where $\beta_0$ is from Section \ref{def_be_0}, and let ${\bf S}_0 >0$ be given. Assume that $(p(\cdot),\mathcal{A},\Omega)$ is $(\gamma,{\bf S}_0)$-vanishing in the sense of Definition \ref{further_assumptions}. Let $q(\cdot)$ be log-H\"older continuous in the sense of Definition \ref{definition_p_log}.
We fix any $\rho \le \frac{\rho_0}{4}$, where $\rho_0$ is given in Remark \ref{remark_radius}, and fix any $\mathfrak{z} = (\mathfrak{x},\mathfrak{t}) \in \Omega_T$ with $\gh{\mathfrak{t}-(4\rho)^2, \mathfrak{t}+(4\rho)^2} \subset (-T,T)$. We observe from Theorem \ref{high_weak} and Theorem \ref{high_very_weak} that \begin{equation}\label{hi1} \mint{K_\rho(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)(1+\sigma)}}{dz} \lesssim \gh{\mint{K_{2\rho}(\mathfrak{z})}{\gh{|\nabla u|+|{\bf f}|}^{p(z)(1-\beta)}}{dz}}^{1+\sigma\tilde{\theta}} + \mint{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)(1+\sigma)}}{dz} + 1, \end{equation} where $\beta$ and $\sigma$ are given in Remark \ref{high_int_remark} and $\tilde{\theta} = \tilde{\theta}(n,p(\mathfrak{z})) > 0$ is a constant. It follows from Section \ref{sub_radii} that for each $z \in K_{4\rho}(\mathfrak{z})$, \begin{gather} \label{ca1} \frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-} \le p(z)(1-\beta)\gh{1+\frac{\omega_{\qq}(8\rho)}{q^-}} \le p(z)(1-\beta)(1+\sigma),\\ \label{ca2} \frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-} \le p(z)(1-\beta)(1+3\sigma) \le \min\mgh{p(z), p(z)(1-\beta)q^-}. \end{gather} We first verify some parabolic localization properties of our unified intrinsic cylinders. \begin{lemma}\label{cv lemma} Let $c_a > 1$ and let ${\bf M}_0$ be given in \eqref{def_M_0}.
Then there is a constant $c_1 = c_1(n,\Lambda_0, \Lambda_1, p^{\pm}_{\log}, q^{\pm}_{\log}) \ge 1$ with the following property: for any $\alpha \ge 1$, any $\tilde{\mathfrak{z}} \in K_{2\rho}(\mathfrak{z})$, and any $\tilde{\rho}>0$ satisfying \begin{equation}\label{cv0-1} \tilde{\rho} \le \Gamma^{-2}{\bf S}_0, \quad \text{where}\ \ \Gamma := 2c_1 c_a {\bf M}_0 \gamma^{-1} \ge 2, \end{equation} and $K_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}}) \subset K_{2\rho}(\mathfrak{z})$, if \begin{equation}\label{cv0-2} \alpha \le c_a \mgh{ \mint{K_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} }, \end{equation} then we have \begin{gather} \label{cv0-3} \alpha^{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d} \le \Gamma^2 \tilde{\rho}^{-(n+2)}, \qquad p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^- \le \omega_{\pp}(\Gamma \tilde{\rho}), \qquad \alpha^{p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^-} \le e^\frac{2n+5}{-\frac{n}{p^-} + \frac{nd}{2} + d} =: c_p,\\ \label{cv0-4} q_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - q_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^- \le \omega_{\qq}(\Gamma \tilde{\rho}), \qquad \alpha^{q_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - q_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^-} \le e^\frac{(2n+5) L}{-\frac{n}{p^-} + \frac{nd}{2} + d} =: c_q. \end{gather} \end{lemma} \begin{proof} Fix $K_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}}) \subset K_{2\rho}(\mathfrak{z})$.
We compute \begin{equation}\label{cv0-5} \begin{array}{rcl} & &\hspace*{-3cm} \mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}}\\ & \overset{\eqref{ca1},\eqref{ca2}}{\le} & \frac{1}{\gamma} \mgh{ \mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)\gh{1+\frac{\omega_{\qq}(8\rho)}{q^-}}}}{dz} + \mint{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)}}{dz} + 1 }\\ &\overset{\eqref{hi1}}{\apprle} &\frac{1}{\gamma} \mgh{ \gh{\mint{K_{4\rho}(\mathfrak{z})}{\gh{|\nabla u|+|{\bf f}|}^{p(z)(1-\beta)}}{dz}}^{1+\frac{\omega_{\qq}(8\rho)}{q^-}\tilde{\theta}} + \mint{K_{4\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)}}{dz} + 1}\\ & \overset{\eqref{def_M_0}}{\apprle} & \frac{{\bf M}_0}{\gamma|K_{4\rho}(\mathfrak{z})|} \mgh{\gh{\frac{{\bf M}_0}{|K_{4\rho}(\mathfrak{z})|}}^{\frac{\omega_{\qq}(8\rho)}{q^-}\tilde{\theta}} + 1} \overset{\text{Section \ref{sub_radii}}}{\apprle} \frac{{\bf M}_0}{\gamma|K_{4\rho}(\mathfrak{z})|}. \end{array}\end{equation} Then we see \begin{equation*} \begin{aligned} \alpha^{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d} &\overset{\eqref{cv0-2}}{\le} \frac{c_a \alpha^{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} -1 + d}}{|K_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})|} \mgh{ \integral{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\integral{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} }\\ &\overset{\eqref{cv0-5}}{\le} \frac{c_1 c_a {\bf M}_0}{\gamma \tilde{\rho}^{n+2}} \overset{\eqref{cv0-1}}{\le} \Gamma \tilde{\rho}^{-(n+2)}, \end{aligned} \end{equation*} for some $c_1 = c_1(n,\Lambda_0, \Lambda_1, p^{\pm}_{\log}, q^{\pm}_{\log}) \ge 1$. 
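The cancellation against $|K_{\tilde{\rho}}^{\alpha}(\tilde{\mathfrak{z}})|$ in the display above is driven by the scaling of the intrinsic cylinders; assuming the normalization of Section \ref{Intrinsic cylinders}, their measure satisfies

```latex
\begin{equation*}
|K_{\tilde{\rho}}^{\alpha}(\tilde{\mathfrak{z}})|
\approx \alpha^{n \lbr \frac{d}{2} - \frac{1}{p(\tilde{\mathfrak{z}})} \rbr} \, \alpha^{d-1} \, \tilde{\rho}^{n+2}
= \alpha^{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d - 1} \, \tilde{\rho}^{n+2},
\end{equation*}
% so that the prefactor \alpha^{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} - 1 + d}
% appearing above is comparable to |K_{\tilde{\rho}}^{\alpha}(\tilde{\mathfrak{z}})| / \tilde{\rho}^{n+2}.
```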
On the other hand, it follows from Remark \ref{remark_def_p_log} that $$ p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^- \le \omega_{\pp} \gh{\max\mgh{\alpha^{-\frac{1}{p(\tilde{\mathfrak{z}})} + \frac{d}{2}}, \alpha^{\frac{-1+d}{2}} } 2\tilde{\rho}} \le \omega_{\pp}(2 \tilde{\rho}) \le \omega_{\pp}(\Gamma \tilde{\rho}), $$ which implies $$ \Gamma^{p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^-} \le \Gamma^{\omega_{\pp}(\Gamma \tilde{\rho})} \overset{\eqref{cv0-1}}{\le} \gh{\frac{\Gamma}{{\bf S}_0}}^{\omega_{\pp}\gh{\frac{{\bf S}_0}{\Gamma}}} \overset{\eqref{small_px}}{\le} e^\gamma \le e. $$ Then we discover $$ \alpha^{p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^+ - p_{Q_{\tilde{\rho}}^\alpha(\tilde{\mathfrak{z}})}^-} \le \gh{\Gamma \tilde{\rho}^{-(n+2)}}^{\frac{\omega_{\pp}(\Gamma \tilde{\rho})}{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d}} \le \Gamma^{\frac{(n+3)\omega_{\pp}(\Gamma \tilde{\rho})}{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d}} \gh{\frac{1}{\Gamma \tilde{\rho}}}^{\frac{(n+2)\omega_{\pp}(\Gamma \tilde{\rho})}{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d}} \le \gh{e^{2n+5}}^{\frac{1}{-\frac{n}{p^-} + \frac{nd}{2} + d}}. $$ Similarly, we can also obtain the inequalities \eqref{cv0-4}. 
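The last chain of inequalities may be unpacked as follows; we sketch the intermediate identity, abbreviating $\theta := -\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d$ and taking for granted the smallness $\omega_{\pp}(\Gamma\tilde{\rho}) \log\frac{1}{\Gamma\tilde{\rho}} \le 1$, which holds for sufficiently small radii by the log-H\"older continuity of $p(\cdot)$:

```latex
\begin{equation*}
\gh{\Gamma \tilde{\rho}^{-(n+2)}}^{\frac{\omega_{\pp}(\Gamma \tilde{\rho})}{\theta}}
= \Gamma^{\frac{(n+3)\omega_{\pp}(\Gamma \tilde{\rho})}{\theta}}
\gh{\frac{1}{\Gamma \tilde{\rho}}}^{\frac{(n+2)\omega_{\pp}(\Gamma \tilde{\rho})}{\theta}}
\le e^{\frac{n+3}{\theta}} \, e^{\frac{n+2}{\theta}}
= e^{\frac{2n+5}{\theta}}
\le e^{\frac{2n+5}{-\frac{n}{p^-} + \frac{nd}{2} + d}} = c_p,
\end{equation*}
% using \Gamma\tilde{\rho}^{-(n+2)} = \Gamma^{n+3}(\Gamma\tilde{\rho})^{-(n+2)},
% the bound \Gamma^{\omega_{\pp}(\Gamma\tilde{\rho})} \le e obtained above, and
% \theta \ge -\frac{n}{p^-} + \frac{nd}{2} + d > 0.
```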
\end{proof} We now consider a Vitali type covering lemma for intrinsic parabolic cylinders, as follows: \begin{lemma}\label{cv argument} Let $\alpha, c_p, c_q >1$ and let $\mathcal{F} := \mgh{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}_{j \in \mathcal{J}} \subset Q_{2r}(\mathfrak{z})$ be any collection of intrinsic parabolic cylinders, where $\alpha_j := \alpha^{\frac{q_{Q_{4r}(\mathfrak{z})}^-}{q(\mathfrak{z}_j)}}$ and $\rho_j >0$, satisfying \begin{equation}\label{cv0-6} \alpha_j^{p_{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}^+ - p_{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}^-} \le c_p \quad \text{and} \quad \alpha_j^{q_{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}^+ - q_{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}^-} \le c_q \quad \text{for every} \ \ j \in \mathcal{J}. \end{equation} Then there exists a countable subcollection $\mathcal{G} = \mgh{Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}_{i \in \mathcal{I}}, \mathcal{I} \subset \mathcal{J}$, of mutually disjoint cylinders such that \begin{equation*} \bigcup_{j \in \mathcal{J}} Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \subset \bigcup_{i \in \mathcal{I}} Q_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i), \end{equation*} for some constant $\chi = \chi_{(n,c_p,c_q,p^{\pm}_{\log},q^{\pm}_{\log})} \ge 1$. \end{lemma} \begin{proof} The proof is similar to that of the standard Vitali covering lemma, except that it must be carried out in the setting of the unified intrinsic cylinders. See \cite[Lemma 5.3]{BO} and \cite[Lemma 7.1]{bogelein2014very} for other intrinsic cylinder cases. For completeness, we give the proof. Write $D := \sup_{j\in\mathcal{J}} \rho_j$. Set $$ \mathcal{F}_k := \mgh{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \in \mathcal{F} : \frac{D}{2^k} < \rho_j \le \frac{D}{2^{k-1}}} \quad (k=1,2,\cdots). $$ We define $\mathcal{G}_k \subset \mathcal{F}_k$ as follows: \begin{itemize} \item Let $\mathcal{G}_1$ be any maximal disjoint collection of intrinsic cylinders in $\mathcal{F}_1$.
\item Assuming that $\mathcal{G}_1, \cdots, \mathcal{G}_{k-1}$ have been selected, we choose $\mathcal{G}_k$ to be any maximal disjoint subcollection of $$ \mgh{Q \in \mathcal{F}_k : Q \cap Q' = \emptyset \ \ \text{for all} \ \ Q' \in \bigcup_{l=1}^{k-1} \mathcal{G}_l}. $$ \item Finally, we define $$ \mathcal{G} :=\bigcup_{k=1}^\infty \mathcal{G}_k. $$ \end{itemize} Clearly $\mathcal{G}$ is a countable collection of disjoint intrinsic cylinders and $\mathcal{G} \subset \mathcal{F}$. Now it suffices to show that for each intrinsic cylinder $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \in \mathcal{F}$, there exists an intrinsic cylinder $Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i) \in \mathcal{G}$ such that $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \cap Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i) \neq \emptyset$ and $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \subset Q_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)$. Fix $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \in \mathcal{F}$. Then there is an index $k$ such that $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \in \mathcal{F}_k$. By the maximality of $\mathcal{G}_k$, there exists an intrinsic cylinder $Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i) \in \bigcup_{l=1}^{k} \mathcal{G}_l$ with $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \cap Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i) \neq \emptyset$. Since $\rho_i > \frac{D}{2^k}$ and $\rho_j \le \frac{D}{2^{k-1}}$, we know $\rho_j < 2 \rho_i$. Choose $\mathfrak{z}_0 \in Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \cap Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)$.
We compute \begin{equation*} \begin{array}{rcl} \alpha_j^{-1+d} &=& \alpha_i^{-1+d} \alpha^{q_{Q_{4r}(\mathfrak{z})}^- \frac{(1-d)(q(\mathfrak{z}_j)-q(\mathfrak{z}_0))+(1-d)(q(\mathfrak{z}_0)-q(\mathfrak{z}_i))}{q(\mathfrak{z}_i)q(\mathfrak{z}_j)}}\\ &\le & \alpha_i^{-1+d} \alpha_j^{\frac{(1-d)\gh{q_{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}^+ - q_{Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j)}^-}}{q(\mathfrak{z}_i)}} \alpha_i^{\frac{(1-d)\gh{q_{Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}^+ - q_{Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}^-}}{q(\mathfrak{z}_j)}}\\ &\overset{\eqref{cv0-6}}{\le} &c_q^{\frac{2(1-d)}{q^-}} \alpha_i^{-1+d}, \end{array} \end{equation*} where $d$ is given in \eqref{def_d}. Similarly, it follows from \eqref{cv0-6} that $$ \alpha_j^{-\frac{1}{p(\mathfrak{z}_j)}+\frac{d}{2}} \apprle_{(c_p,c_q,p^{\pm}_{\log},q^{\pm}_{\log})} \alpha_i^{-\frac{1}{p(\mathfrak{z}_i)}+\frac{d}{2}}. $$ Thus, from the definition of intrinsic cylinders in Section \ref{Intrinsic cylinders}, there exists a constant $\chi = \chi_{(n,c_p,c_q,p^{\pm}_{\log},q^{\pm}_{\log})} \ge 1$ such that $Q_{\rho_j}^{\alpha_j}(\mathfrak{z}_j) \subset Q_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)$, which completes the proof. \end{proof} \subsection{Stopping-time argument} We employ in this subsection a {\em stopping-time argument} from \cite{adimurthi2018weight} to derive a covering of the upper-level set of $|\nabla u|^\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}$ with respect to some intrinsic parameter $\alpha$.
Let us define $\tilde{\alpha}$ by \begin{equation}\label{cv2-1} \tilde{\alpha}^\frac{1}{\vartheta_{K_{4\rho}(\mathfrak{z})}^+} := \mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \mgh{\gh{\mint{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} + 1}, \end{equation} where the constants $\beta$ and $\sigma$ are given in Remark \ref{high_int_remark} and \begin{equation}\label{vt_max_def} \vartheta_{K_{4\rho}(\mathfrak{z})}^+ := \sup_{z \in K_{4\rho}(\mathfrak{z})} \vartheta(z) \overset{\eqref{vt_def}}{=} \frac{1}{-\frac{n}{p_{K_{4\rho}(\mathfrak{z})}^-}+\frac{nd}{2}+d}. \end{equation} For $\alpha \ge 1$ and $s \ge 1$, let $E(s,\alpha)$ denote the upper-level set of $|\nabla u(\cdot)|^\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}$, defined by \begin{equation}\label{cv2-2} E(s,\alpha) := \mgh{z \in K_{s\rho}(\mathfrak{z}) : |\nabla u(z)|^\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-} > \alpha}. \end{equation} Fix any $1 \le s_1 < s_2 \le 2$ and any $\alpha \ge 1$ satisfying \begin{equation}\label{cv2-3} \alpha > A \tilde{\alpha}, \quad \text{where} \ \ A := \mgh{\gh{\frac{16}{7}}^n \gh{\frac{120\chi}{s_2-s_1}}^{n+2}}^{\vartheta_{K_{4\rho}(\mathfrak{z})}^+}. \end{equation} Here $\chi$ is given in Lemma \ref{cv argument}. Fix any \begin{equation}\label{cv2-4} \tilde{\rho} \in \left( \frac{(s_2-s_1)\rho}{60\chi}, (s_2-s_1)\rho \right]. 
\end{equation} We check that for all $\tilde{\mathfrak{z}} \in K_{s_1\rho}(\mathfrak{z})$, \begin{equation*} \begin{array}{rcl} &&\hspace*{-3cm}\mint{K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}}\\ &\le & \frac{|K_{2\rho}(\mathfrak{z})|}{|K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})|} \mgh{ \mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} }\\ &\overset{\eqref{cv2-1}}{\le}& \frac{|K_{2\rho}(\mathfrak{z})|}{\alpha_{\tilde{\mathfrak{z}}}^{-\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} -1 +d} |K_{\tilde{\rho}}(\tilde{\mathfrak{z}})|} \tilde{\alpha}^{\frac{1}{\vartheta_{K_{4\rho}(\mathfrak{z})}^+}} \overset{\eqref{measure_one}}{\le} \gh{\frac{16}{7}}^n \gh{\frac{2\rho}{\tilde{\rho}}}^{n+2} \alpha_{\tilde{\mathfrak{z}}}^{\frac{n}{p(\tilde{\mathfrak{z}})} - \frac{nd}{2} +1 -d} \tilde{\alpha}^{\frac{1}{\vartheta_{K_{4\rho}(\mathfrak{z})}^+}}\\ &\overset{\eqref{cv2-4}}{\le}& \gh{\frac{16}{7}}^n \gh{\frac{120\chi}{s_2-s_1}}^{n+2} \alpha_{\tilde{\mathfrak{z}}}^{\frac{n}{p(\tilde{\mathfrak{z}})} - \frac{nd}{2} +1 -d} \tilde{\alpha}^{\frac{1}{\vartheta_{K_{4\rho}(\mathfrak{z})}^+}}\\ &\overset{\eqref{cv2-3}}{<} &\alpha_{\tilde{\mathfrak{z}}}^{\frac{n}{p(\tilde{\mathfrak{z}})} - \frac{nd}{2} +1 -d} \alpha^{\frac{1}{\vartheta_{K_{4\rho}(\mathfrak{z})}^+}} \le \alpha, \end{array} \end{equation*} where $\alpha_{\tilde{\mathfrak{z}}} := \alpha^{\frac{q_{K_{4\rho}(\mathfrak{z})}^-}{q(\tilde{\mathfrak{z}})}}$.
The last inequality has used the fact that $\frac{1}{\vartheta_{K_{4\rho}(\mathfrak{z})}^+} \le -\frac{n}{p(\tilde{\mathfrak{z}})} + \frac{nd}{2} + d$ and $1 \le \alpha_{\tilde{\mathfrak{z}}} \le \alpha$. On the other hand, in view of the Lebesgue differentiation theorem, for every Lebesgue point $\tilde{\mathfrak{z}}$ of $|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}$ in $E(s_1,\alpha)$, we have \begin{equation*}\label{cv2-12} \lim_{\tilde{\rho}\to 0} \mgh{ \mint{K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} } > \alpha. \end{equation*} Then for almost every such point, there exists $\rho_{\tilde{\mathfrak{z}}} \in \left(0, \frac{(s_2-s_1)\rho}{60\chi} \right]$ such that \begin{gather*}\label{cv2-13} \mint{K_{\rho_{\tilde{\mathfrak{z}}}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\rho_{\tilde{\mathfrak{z}}}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} = \alpha,\\ \mint{K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\tilde{\rho}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} < \alpha \quad \forall \tilde{\rho} \in \left(\rho_{\tilde{\mathfrak{z}}}, \frac{(s_2-s_1)\rho}{60\chi} \right]. 
\end{gather*} Applying Lemma \ref{cv lemma} and Lemma \ref{cv argument} to the collection of intrinsic cylinders $\mgh{Q_{\rho_{\tilde{\mathfrak{z}}}}^{\alpha_{\tilde{\mathfrak{z}}}}(\tilde{\mathfrak{z}})}$ with $\rho_{\tilde{\mathfrak{z}}}$ replacing $\tilde{\rho}$ and $\alpha_{\tilde{\mathfrak{z}}}$ replacing $\alpha$, there exist $\mgh{\mathfrak{z}_i}_{i=1}^\infty \subset E(s_1,\alpha)$ and $\rho_i \in \left(0, \frac{(s_2-s_1)\rho}{60\chi} \right]$, where $\alpha_i := \alpha^{\frac{q_{K_{4\rho}(\mathfrak{z})}^-}{q(\mathfrak{z}_i)}}$ for $i=1,2,\cdots,$ such that $\mgh{Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}_{i=1}^\infty$ is mutually disjoint, \begin{equation}\label{cv2-6} E(s_1,\alpha) \setminus N \subset \bigcup_{i=1}^{\infty} K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset K_{s_2 \rho}(\mathfrak{z}), \end{equation} for some Lebesgue measure zero set $N$, and for each $i$ we have \begin{equation}\label{cv2-7} \mint{K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} = \alpha, \end{equation} and \begin{equation}\label{cv2-8} \mint{K_{\tilde{\rho}}^{\alpha_i}(\mathfrak{z}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{\tilde{\rho}}^{\alpha_i}(\mathfrak{z}_i)}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} < \alpha, \end{equation} for any $\tilde{\rho} \in \left(\rho_i, (s_2-s_1)\rho \right]$. Note that since $\min\mgh{1,\alpha^{\frac{1}{p(\mathfrak{z})}-\frac{d}{2}},\alpha^{\frac{1-d}{2}}} = 1$, we have $\bigcup_{i=1}^{\infty} K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset K_{s_2 \rho}(\mathfrak{z})$.
\subsection{Power decay estimates on unified intrinsic cylinders} Here we derive the power decay estimate \eqref{cv4-9} on the upper-level set of $|\nabla u|^\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}$, where $\beta$ is given in Remark \ref{high_int_remark}. For any $1 \le s_1 < s_2 \le 2$ and any $\alpha \ge 1$ satisfying \eqref{cv2-3}, we consider $Q_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)$, $i=1,2,\cdots,$ selected in the previous subsection, with \begin{equation}\label{cv3-1} \alpha_i := \alpha^{\frac{q_{K_{4\rho}(\mathfrak{z})}^-}{q(\mathfrak{z}_i)}} \quad \text{and} \quad 60\chi \rho_i \le (s_2 - s_1)\rho \le \rho, \end{equation} where $\chi$ is given in Lemma \ref{cv argument}. We split into two cases: $Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset \Omega_T$ and $Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \not\subset \Omega_T$. We only consider the boundary case $Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \not\subset \Omega_T$; the interior case $Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset \Omega_T$ can be treated in a similar way. Since $Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \not\subset \Omega_T$, there exists a boundary point $(\tilde{\mathfrak{x}}_i, \mathfrak{t}_i) \in \gh{\partial\Omega \times (-T,T)} \cap Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)$. Since $(p(\cdot),\mathcal{A},\Omega)$ is $(\gamma,{\bf S}_0)$-vanishing, there exists a new coordinate system, obtained through a rotation and a translation and still denoted by $\{x_1, \cdots, x_n, t\}$, whose origin is $(\tilde{\mathfrak{x}}_i, \mathfrak{t}_i) + 56\chi \gamma \rho_i e_n$, where $e_n := (0, \cdots, 0,1)$, such that \begin{equation*} B_{\varrho}^+(0) \ \subset \ \Omega_{\varrho}(0) \ \subset \ B_{\varrho}(0) \cap \{x : x_n > - 112\chi \gamma \varrho\} \quad \text{for any} \ \ 0 < \varrho < 48\chi \rho_i. \end{equation*} Set $\tilde{\mathfrak{z}}_i := (0,\mathfrak{t}_i)$.
Since $|\mathfrak{x}_i| \le |\mathfrak{x}_i - \tilde{\mathfrak{x}}_i| + |\tilde{\mathfrak{x}}_i| \le (4+56\gamma)\chi \rho_i \le 11\chi \rho_i$, we have from \eqref{cv3-1} and \eqref{cv2-6} that \begin{equation*}\label{cv3-2.5} K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i) \subset K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i) \subset K_{60\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset K_{s_2 \rho}(\mathfrak{z}) \subset K_{4\rho}(\mathfrak{z}), \end{equation*} and thus \begin{equation*}\label{cv3-3} p_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - p_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^- \le p_{K_{4\rho}(\mathfrak{z})}^+ - p_{K_{4\rho}(\mathfrak{z})}^- \le \omega_{\pp}(2\rho_0) \quad \text{and} \quad q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^- \le \omega_{\qq}(2\rho_0). \end{equation*} We employ \eqref{cv2-7}, taking $c_a =2(48)^{n+2}$, to derive \begin{equation*}\label{cv3-4} \alpha_i \le \alpha < c_a\mgh{ \mint{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \frac{1}{\gamma} \gh{\mint{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} }, \end{equation*} where $\beta$ and $\sigma$ are given in Remark \ref{high_int_remark}.
Now applying Lemma \ref{cv lemma} with $\alpha=\alpha_i$, $\tilde{\rho}=12\chi \rho_i$ and $\tilde{\mathfrak{z}}=\tilde{\mathfrak{z}}_i$, we obtain \begin{equation*}\label{cv3-5} \begin{array}{c} \alpha_i^{-\frac{n}{p(\tilde{\mathfrak{z}}_i)} + \frac{nd}{2} + d} \le \Gamma^2 (12\chi \rho_i)^{-(n+2)}, \qquad p_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - p_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^- \le \omega_{\pp}(12\Gamma \chi \rho_i),\\ \alpha_i^{p_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - p_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^-} \le c_p, \end{array} \end{equation*} and \begin{gather} \label{cv3-6} q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^- \le \omega_{\qq}(12\Gamma \chi \rho_i), \quad \alpha_i^{q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^-} \le c_q, \end{gather} where $c_p$ and $c_q$ are given in \eqref{cv0-3} and \eqref{cv0-4}, respectively. 
We can now directly compute to get \begin{equation*}\label{cv3-7} \begin{array}{rcl} \mint{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla u|^{p(z)(1-\beta)}}{dz} &\le & \gh{\mint{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + 1}^{\frac{q_{K_{4\rho}(\mathfrak{z})}^-}{q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^-}}\\ & \overset{\eqref{cv2-8}}{\apprle}& \alpha^{\frac{q_{K_{4\rho}(\mathfrak{z})}^-}{q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^-}} = \alpha^{\frac{q_{K_{4\rho}(\mathfrak{z})}^- \gh{q(\mathfrak{z}_i) - q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^-}}{q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^- q(\mathfrak{z}_i)}} \alpha^{\frac{q_{K_{4\rho}(\mathfrak{z})}^-}{q(\mathfrak{z}_i)}}\\ & \overset{\eqref{cv3-1}}{\le}& \alpha_i^{\frac{q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^+ - q_{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}^-}{q^-}} \alpha_i \overset{\eqref{cv3-6}}{\apprle} \alpha_i. \end{array} \end{equation*} Proceeding similarly, we also get \begin{equation*}\label{cv3-7.1} \begin{aligned} \mint{K_{48\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|{\bf f}|^{p(z)(1-\beta)}}{dz} \apprle \gamma^{\frac{q^-}{q^+}}\alpha_i. 
\end{aligned} \end{equation*} Therefore, applying Theorem \ref{first_diff_thm}, Theorem \ref{second_diff_thm}, and Lemma \ref{existence_ov}, we have the following lemma: \begin{lemma}\label{cv3_lemma1} For any $\varepsilon \in (0,1)$, there exists $\gamma = \gamma(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log},q^{\pm}_{\log},\varepsilon) > 0$ such that, provided $Q_{4\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \not\subset \Omega_T$, we have \begin{gather*} \label{cv3-8} \mint{K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla u-\nabla w|^{p(z)(1-\beta)}}{dz} \le \varepsilon \alpha_i, \qquad \mint{K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla w-\nabla\bar{V}|^{p(\mathfrak{z})(1-\beta)}}{dz} \le \varepsilon \alpha_i,\\ \label{cv3-9} \mint{K_{24\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla w|^{p(z)(1-\beta)}}{dz} \le \varepsilon \alpha_i, \qquad \text{and} \quad \Norm{\nabla\bar{V}}_{L^\infty(K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i),\RR^n)}^{p(\mathfrak{z})(1-\beta)} \apprle \alpha_i. \end{gather*} \end{lemma} Arguing as in \cite[Corollary 5.6]{BO}, we can also obtain from Lemma \ref{cv3_lemma1} the following estimates: \begin{lemma}\label{cv3_lemma2} Under the same assumptions as in Lemma \ref{cv3_lemma1}, we have \begin{equation}\label{cv3-10} \mint{K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla u-\nabla \bar{V}|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} \le \varepsilon \alpha \quad \text{and} \quad \Norm{|\nabla\bar{V}|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}_{L^\infty(K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i),\RR^n)} \le \alpha c_2 \end{equation} for some constant $c_2 = c_2(n,\Lambda_0,\Lambda_1,p^{\pm}_{\log},q^{\pm}_{\log}) \ge 1$.
\end{lemma} We now estimate the integral of $|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}$ over the upper-level set $E(s_1,B\alpha)$, where \begin{equation}\label{cv4-1} B := 2^{\frac{p^+ (1-\beta) q^+}{q^-}} c_2 \ge 1, \end{equation} and $c_2$ is given in Lemma \ref{cv3_lemma2}. Recalling \eqref{cv2-2}, it follows from \eqref{cv2-6} that \begin{equation*}\label{cv4-2} E(s_1, B\alpha) \setminus N \subset E(s_1, \alpha) \setminus N \subset \bigcup_{i=1}^{\infty} K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i) \subset K_{s_2 \rho}(\mathfrak{z}), \end{equation*} and \begin{equation*}\label{cv4-3} \integral{E(s_1, B\alpha)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} \le \sum_{i=1}^{\infty} \integral{E(s_1, B\alpha) \cap K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}. \end{equation*} We observe that for any $z \in E(s_1, B\alpha) \cap K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)$, \begin{equation*}\label{cv4-4} \begin{array}{rcl} |\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}} &\overset{\eqref{cv3-10}}{\le} &2^{\frac{p^+ (1-\beta) q^+}{q^-}-1} \gh{|\nabla u - \nabla\bar{V}|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}} + c_2 \alpha}\\ &\overset{\eqref{cv2-2},\eqref{cv4-1}}{\le}& 2^{\frac{p^+ (1-\beta) q^+}{q^-}-1} |\nabla u - \nabla\bar{V}|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}} + \frac12 |\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}.
\end{array} \end{equation*} This implies \begin{equation*}\label{cv4-5} \begin{array}{rcl} \integral{E(s_1, B\alpha) \cap K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} &\le& 2^{\frac{p^+ (1-\beta) q^+}{q^-}} \integral{E(s_1, B\alpha) \cap K_{12\chi \rho_i}^{\alpha_i}(\tilde{\mathfrak{z}}_i)}{|\nabla u- \nabla\bar{V}|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}\\ &\overset{\eqref{measure_one}, \eqref{cv3-10}}{\apprle} & \varepsilon \alpha |K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)|, \end{array} \end{equation*} that is, \begin{equation}\label{cv4-6} \integral{E(s_1, B\alpha)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} \apprle \varepsilon \alpha \sum_{i=1}^{\infty} |K_{\chi \rho_i}^{\alpha_i}(\mathfrak{z}_i)|. \end{equation} On the other hand, we know from \eqref{cv2-7} that either \begin{equation*}\label{cv4-7} \frac{\alpha}{2} \le \mint{K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} \txt{or} \frac{\alpha}{2} \le \frac{1}{\gamma} \gh{\mint{K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}}, \end{equation*} and then we calculate \begin{equation}\label{cv4-8} \begin{array}{rcl} |K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)| &\le & \frac{4}{\alpha} \iint_{\mgh{z \in K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i) : |\nabla u(z)|^\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-} > \frac{\alpha}{4}}}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}\\ && \quad + \gh{\frac{4}{\gamma\alpha}}^{1+\sigma} \iint_{\mgh{z \in K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i) : |{\bf f}|^\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-} > \frac{\gamma\alpha}{4}}}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}.
\end{array} \end{equation} Plugging \eqref{cv4-8} into \eqref{cv4-6} and using the fact that the family $\mgh{K_{\rho_i}^{\alpha_i}(\mathfrak{z}_i)}_{i=1}^\infty \subset K_{s_2 \rho}(\mathfrak{z})$ is pairwise disjoint, we conclude \begin{equation}\label{cv4-9} \begin{aligned} \integral{E(s_1, B\alpha)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} &\apprle \varepsilon \integral{E(s_2,\frac{\alpha}{4})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}\\ &\quad + \frac{\varepsilon}{\gamma^{1+\sigma} \alpha^{\sigma}} \integral{\mgh{z \in K_{s_2 \rho}(\mathfrak{z}) : |{\bf f}|^\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-} > \frac{\gamma\alpha}{4}}}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}. \end{aligned} \end{equation} \section{Proof of the main results} \label{nine} \subsection{Proof of Theorem \ref{main_theorem1}}\label{proof of local estimate} Fix any $\mathfrak{z} \in \Omega_T$, $\beta \in (0,{\beta_0})$, and $\rho \in (0,\rho_0]$, where $\beta_0$ and $\rho_0$ are given in Section \ref{def_be_0} and Remark \ref{remark_radius}, respectively. Define the constant ${\bf M}$ by \begin{equation}\label{def_MM} {\bf M} := \iint_{\Omega_T} \lbr[[]|{\bf f}|^{p(z)\max\mgh{(1-\beta)q^-,1}} + 1\rbr[]] \ dz + 1. \end{equation} Clearly, we have ${\bf M} \apprge {\bf M}_0 \ge 1$, where ${\bf M}_0$ is given in \eqref{def_M_0}. Putting $\rho_0 =\frac{1}{C_0 {\bf M}}$ for some constant $C_0 = {C_0}_{(\Lambda_0,\Lambda_1,p^{\pm}_{\log}, q^{\pm}_{\log}, n, {\bf S}_0)}>0$, we can apply all results in Section \ref{eight}. For $k >0$, we define the truncation of $|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}$ as \begin{equation*} \gh{|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}_k (z) := \min \mgh{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}, k}. \end{equation*} Let $1 \le s_1 < s_2 \le 2$. 
Lemma \ref{useful int} implies that for sufficiently large $k > 1$, \begin{equation}\label{ma1-1} \begin{aligned} &\hspace*{-2cm}\integral{K_{s_1 \rho}(\mathfrak{z})}{\gh{|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}_k^{q_{K_{4\rho}(\mathfrak{z})}^- -1} |\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}\\ &\quad = \gh{q_{K_{4\rho}(\mathfrak{z})}^- -1} \int_0^k \alpha^{q_{K_{4\rho}(\mathfrak{z})}^- -2} \integral{E(s_1,\alpha)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz d\alpha}\\ &\quad = \gh{q_{K_{4\rho}(\mathfrak{z})}^- -1} B^{q_{K_{4\rho}(\mathfrak{z})}^- -1} \int_0^{\frac{k}{B}} \alpha^{q_{K_{4\rho}(\mathfrak{z})}^- -2} \integral{E(s_1,B\alpha)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz d\alpha}\\ &\quad \le \gh{q_{K_{4\rho}(\mathfrak{z})}^- -1} B^{q_{K_{4\rho}(\mathfrak{z})}^- -1} \int_0^{A\tilde{\alpha}} \alpha^{q_{K_{4\rho}(\mathfrak{z})}^- -2} \ d\alpha \integral{K_{s_1 \rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}\\ &\qquad \ + \gh{q_{K_{4\rho}(\mathfrak{z})}^- -1} B^{q_{K_{4\rho}(\mathfrak{z})}^- -1} \int_{A\tilde{\alpha}}^{\frac{k}{B}} \alpha^{q_{K_{4\rho}(\mathfrak{z})}^- -2} \integral{E(s_1,B\alpha)}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz d\alpha}\\ &\quad =: I_1 + I_2, \end{aligned} \end{equation} where $\tilde{\alpha}$, $A$, $B$, and $E(s_1,\alpha)$ are given in \eqref{cv2-1}, \eqref{cv2-3}, \eqref{cv4-1}, and \eqref{cv2-2}, respectively.
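For clarity, we note that the first equality in \eqref{ma1-1} is an instance of the standard layer cake (Fubini) identity: for a nonnegative measurable function $g$ on $K_{s_1 \rho}(\mathfrak{z})$, its truncation $g_k := \min\mgh{g,k}$, and an exponent $\kappa > 1$, one has
\begin{equation*}
\integral{K_{s_1 \rho}(\mathfrak{z})}{g_k^{\kappa-1} \, g}{dz} = (\kappa-1) \int_0^k \alpha^{\kappa-2} \integral{\mgh{z \in K_{s_1 \rho}(\mathfrak{z}) : g(z) > \alpha}}{g}{dz} \, d\alpha,
\end{equation*}
which follows by writing $g_k^{\kappa-1} = (\kappa-1) \int_0^k \alpha^{\kappa-2} \lsb{\chi}{\mgh{g > \alpha}} \ d\alpha$ (note that $\mgh{g_k > \alpha} = \mgh{g > \alpha}$ for $\alpha < k$) and applying Fubini's theorem; here it is applied with $g = |\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}$ and $\kappa = q_{K_{4\rho}(\mathfrak{z})}^-$, so that the level sets are precisely the sets $E(s_1,\alpha)$.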
For $I_1$, we compute directly that \begin{equation}\label{ma1-2} I_1 \le \gh{A B \tilde{\alpha}}^{q_{K_{4\rho}(\mathfrak{z})}^- -1} \integral{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} \apprle \frac{ \tilde{\alpha}^{q_{K_{4\rho}(\mathfrak{z})}^- -1}}{(s_2-s_1)^{(n+2)(q^+ -1)\vartheta_{K_{4\rho}(\mathfrak{z})}^+}} \integral{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}. \end{equation} For $I_2$, it follows from \eqref{cv4-9} and Lemma \ref{useful int} that \begin{equation}\label{ma1-3} \begin{aligned} I_2 \apprle \varepsilon \integral{K_{s_2 \rho}(\mathfrak{z})}{\gh{|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}_k^{q_{K_{4\rho}(\mathfrak{z})}^- -1} |\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \varepsilon \gamma^{-q_{K_{4\rho}(\mathfrak{z})}^-} \integral{K_{s_2 \rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q(z)}}{dz}. \end{aligned} \end{equation} Here we choose $\varepsilon$ small enough which also determines $\gamma_0$. Plugging \eqref{ma1-2} and \eqref{ma1-3} into \eqref{ma1-1} and applying Lemma \ref{useful tech}, we deduce \begin{equation*}\label{ma1-4} \begin{aligned} &\integral{K_{\rho}(\mathfrak{z})}{\gh{|\nabla u|^{\frac{p(\cdot)(1-\beta)q(\cdot)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}_k^{q_{K_{4\rho}(\mathfrak{z})}^- -1} |\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}\\ &\qquad \apprle \tilde{\alpha}^{q_{K_{4\rho}(\mathfrak{z})}^- -1} \integral{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \integral{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q(z)}}{dz}. 
\end{aligned} \end{equation*} As $k \to \infty$, we have \begin{equation}\label{ma1-5} \integral{K_{\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)q(z)}}{dz} \apprle \tilde{\alpha}^{q_{K_{4\rho}(\mathfrak{z})}^- -1} \integral{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \integral{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q(z)}}{dz}. \end{equation} On the other hand, we note that \begin{equation}\label{ma1-6} \begin{array}{rcl} \gh{\mint{K_{2\rho}(\mathfrak{z})}{\bgh{|\nabla u|^{p(z)(1-\beta)}+|{\bf f}|^{p(z)(1-\beta)q^-}}}{dz}}^{\omega_{\qq}(8\rho)} &\overset{\eqref{def_MM},\eqref{size_u_f}}{\apprle} &\gh{\frac{{\bf M}}{|K_{2\rho}(\mathfrak{z})|}}^{\omega_{\qq}(8\rho)}\\ &\apprle &\gh{\frac{1}{8\rho}}^{(n+3)\omega_{\qq}(8\rho)} \overset{\text{Definition \ref{definition_p_log}}}{\apprle} 1, \end{array} \end{equation} and similarly \begin{equation}\label{ma1-6.5} \gh{\mint{K_{2\rho}(\mathfrak{z})}{\bgh{|\nabla u|^{p(z)(1-\beta)}+|{\bf f}|^{p(z)(1-\beta)q^-}}}{dz}}^{\omega_{\pp}(8\rho)} \apprle 1. \end{equation} Recalling \eqref{vt_def} and \eqref{vt_max_def}, it follows \begin{equation}\label{ma1-7} \vartheta_{K_{4\rho}(\mathfrak{z})}^+ - \vartheta(\mathfrak{z}) \apprle \omega_{\pp}(8\rho). 
\end{equation} Then we see \begin{equation}\label{ma1-8} \begin{array}{rcl} &&\hspace*{-2cm}\mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} \overset{\eqref{ca1}}{\apprle} \mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta) \gh{1+\frac{\omega_{\qq}(8\rho)}{q^-}}}}{dz} + 1\\ & \overset{\eqref{hi1}}{\apprle}& \gh{\mint{K_{4\rho}(\mathfrak{z})}{\gh{|\nabla u|+|{\bf f}|}^{p(z)(1-\beta)}}{dz}}^{1+\frac{\omega_{\qq}(8\rho)}{q^-}\tilde{\theta}} + \mint{K_{4\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)\gh{1+\frac{\omega_{\qq}(8\rho)}{q^-}}}}{dz} + 1\\ & \overset{\eqref{ma1-6},\eqref{ca2}}{\apprle} & \mint{K_{4\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)}}{dz} + \gh{\mint{K_{4\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q^-}}{dz}}^{\frac{1}{q^-}} + 1\\ & \overset{\eqref{ma1-6}}{\apprle} & \mint{K_{4\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)}}{dz} + \gh{\mint{K_{4\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q(z)}}{dz}}^{\frac{1}{q(\mathfrak{z})}} + 1, \end{array} \end{equation} and \begin{equation}\label{ma1-9} \begin{aligned} \tilde{\alpha}^{q_{K_{4\rho}(\mathfrak{z})}^- -1} &\overset{\eqref{cv2-1}}{\apprle} \mgh{\mint{K_{2\rho}(\mathfrak{z})}{|\nabla u|^{\frac{p(z)(1-\beta)q(z)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz} + \gh{\mint{K_{2\rho}(\mathfrak{z})}{|{\bf f}|^{\frac{p(z)(1-\beta)q(z)(1+\sigma)}{q_{K_{4\rho}(\mathfrak{z})}^-}}}{dz}}^{\frac{1}{1+\sigma}} + 1}^{\vartheta_{K_{4\rho}(\mathfrak{z})}^+ \gh{q_{K_{4\rho}(\mathfrak{z})}^- -1}}\\ & \overset{\eqref{ca2}, \eqref{ma1-6}-\eqref{ma1-8}}{\apprle} \mgh{\mint{K_{4\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)}}{dz} + \gh{\mint{K_{4\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q(z)}}{dz}}^{\frac{1}{q(\mathfrak{z})}} + 1}^{\vartheta(\mathfrak{z}) \gh{q(\mathfrak{z}) -1}}. 
\end{aligned} \end{equation} We finally obtain from \eqref{ma1-5}, \eqref{ma1-8}, and \eqref{ma1-9} that \begin{equation}\label{ma1-10} \mint{K_{\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)q(z)}}{dz} \apprle \mgh{ \mint{K_{4\rho}(\mathfrak{z})}{|\nabla u|^{p(z)(1-\beta)}}{dz} + \gh{\mint{K_{4\rho}(\mathfrak{z})}{|{\bf f}|^{p(z)(1-\beta)q(z)}}{dz}}^{\frac{1}{q(\mathfrak{z})}} + 1}^{1+\vartheta(\mathfrak{z}) \gh{q(\mathfrak{z}) -1}}, \end{equation} which completes the proof. \subsection{Proof of Theorem \ref{main_theorem2}} We extend the local estimate \eqref{ma1-10} up to the boundary. We first choose $\rho = \frac{1}{C_0 {\bf M}}$, where $C_0$ and ${\bf M}$ are given in Section \ref{proof of local estimate}. From the standard covering argument, we can find finitely many disjoint parabolic cylinders $\mgh{Q_{\frac{\rho}{3}}(\mathfrak{z}_k)}_{k=1}^m$, $\mathfrak{z}_k \in \Omega_T$, such that $\bar{\Omega}_T \subset \bigcup_{k=1}^m Q_{\rho}(\mathfrak{z}_k)$. Note that for an integrable function $f$, we have $\sum_{k=1}^m \integral{K_{4\rho}(\mathfrak{z}_k)}{f}{dz} \apprle_{(n)} \integral{\Omega_T}{f}{dz}.$ Then it follows from \eqref{ma1-10} that \begin{equation}\label{ge-1} \begin{aligned} &\integral{\Omega_T}{|\nabla u|^{p(z)(1-\beta)q(z)}}{dz} \le \sum_{k=1}^m \integral{K_{\rho}(\mathfrak{z}_k)}{|\nabla u|^{p(z)(1-\beta)q(z)}}{dz}\\ &\quad \lesssim \rho^{n+2} \mgh{\rho^{-(n+2)q^+} \gh{\integral{\Omega_T}{\bgh{|\nabla u|^{p(z)(1-\beta)}+1}}{dz}}^{q^+} + \rho^{-(n+2)}\integral{\Omega_T}{\bgh{|{\bf f}|^{p(z)(1-\beta)q(z)}+1}}{dz}}^{1+\vartheta^+ \gh{q^+ -1}}, \end{aligned} \end{equation} where $\vartheta^+ :=\sup_{z \in\Omega_T} \vartheta(z)$. Let $M^+$ and $M^-$ be any two constants such that additionally we have $1 < M^- \le q^- \leq q(\cdot) \le q^+\leq M^+<\infty$. In the proof of Theorem \ref{main_theorem1} in Section \ref{proof of local estimate}, we see that ${\beta_0}$ can be chosen to depend on $M^+$ instead of $q^{\pm}$. 
This, in particular, implies that we can choose $\beta_0$ independent of $M^-$. Let us now define $r(z) := \frac{p(z)(1-\beta)}{p(z)} q(z)$ for $\beta \in (0,\beta_0)$ (it is important to note that we cannot take $\beta=0$); then we trivially have $$r^- \ge \gh{\min_{z\in\Omega_T}\frac{p(z)(1-\beta)}{p(z)}} M^- \quad \text{and} \quad r^+ \le \gh{\max_{z\in\Omega_T}\frac{p(z)(1-\beta)}{p(z)}} M^+.$$ Note that $r(\cdot)$ is clearly log-H\"{o}lder continuous with the log-H\"older constants equivalent to the ones satisfied by $q(\cdot)$. Since all the estimates above are independent of $M^-$ and $\beta_0$ is independent of $M^-$, we can choose $M^-$ small such that $\gh{\min_{z\in\Omega_T}\frac{p(z)(1-\beta)}{p(z)}} M^- \leq 1$. This in particular allows $r^-=1$. For this choice of the exponent $r(\cdot)$, we conclude from \eqref{ge-1}, \eqref{def_MM}, and the definition of $\rho$ that \begin{equation*}\label{ge-3} \begin{aligned} \integral{\Omega_T}{|\nabla u|^{p(z)r(z)}}{dz} &\le C \mgh{\gh{\integral{\Omega_T}{|{\bf f}|^{p(z)r(z)}}{dz}}^{\gh{1+\vartheta^+ \gh{q^+ -1}}(n+3)q^+ - (n+2)} + 1}\\ &\le C \mgh{\gh{\integral{\Omega_T}{|{\bf f}|^{p(z)r(z)}}{dz}}^{\gh{1+\vartheta^+ \gh{M^+ -1}}(n+3)M^+ - (n+2)} +1}, \end{aligned} \end{equation*} for some constant $C=C_{(\Lambda_0,\Lambda_1,p^{\pm}_{\log}, r^{\pm}_{\log}, M^+, n, \Omega_T,{\bf S}_0)}>0$, which completes the proof. \begin{appendices} \addtocontents{toc}{\def\protect\cftchappresnum{}} \section{The method of Lipschitz truncation - first difference estimate} \label{lipschitz_truncation} In this appendix, following the techniques developed in \cite{adimurthi2018sharp}, which were originally pioneered in \cite{KL}, we will develop a modified version of Lipschitz truncation suited to our needs. Recall that $u$ is a weak solution of \eqref{basic_pde} and $w$ is a weak solution of \eqref{wapprox_int}.
For this section, we only need the following restriction on the size of the region $K_{4\rho}^{\alpha}(\mathfrak{z})$: we take $\tilde{\rho}_3$ small enough that \descref{R6}{R6} and \descref{R4}{R4} are applicable. To simplify the notation, we will define \begin{equation} \label{def_s} s:= \scalet{\alpha}{\mathfrak{z}} (4\rho)^2. \end{equation} Let us now collect some well-known results that will be needed in the course of the proof. The first lemma is a time localised version of the parabolic Poincar\'e inequality (see \cite[Lemma 4.2]{adimurthi2018weight} for the proof): \begin{lemma} \label{lemma_crucial_1} Let $f \in L^{\vartheta} (-T,T; W^{1,\vartheta}(\Omega))$ with $\vartheta \in (1,\infty)$ and suppose that $\mathcal{B}_{r} \Subset \Omega$ is a compactly contained ball of radius $r>0$. Let $I \subset (-T,T)$ be a time interval and $\rho(x,t) \in L^1(\mathcal{B}_r \times I)$ be any positive function such that $$\|\rho\|_{L^{\infty}(\mathcal{B}_r\times I)} \apprle_{(n)} \frac{|\mathcal{B}_r\times I|}{\|\rho\|_{L^1(\mathcal{B}_r\times I)}} $$ and $\mu(x) \in C_c^{\infty}(\mathcal{B}_r)$ be such that $\int_{\mathcal{B}_r} \mu(x) \ dx = 1$ with $|\mu| \leq \frac{C_{(n)}}{r^n}$ and $|\nabla \mu| \leq \frac{C_{(n)}}{r^{n+1}}$, then there holds: \begin{equation*} \begin{array}{ll} \Yint\tiltlongdash_{\mathcal{B}_r \times I} \left|\frac{f(z)\lsb{\chi}{J} - \avgs{f\lsb{\chi}{J}}{\rho}}{r}\right|^{\vartheta} \ dz & \apprle_{(n,s,C^{\mu})} \Yint\tiltlongdash_{\mathcal{B}_r \times I} |\nabla f|^{\vartheta}\lsb{\chi}{J} \ dz + \sup_{t_1,t_2 \in I} \left| \frac{\avgs{f\lsb{\chi}{J}}{\mu}(t_2) - \avgs{f\lsb{\chi}{J}}{\mu}(t_1)}{r} \right|^{\vartheta} \end{array} \end{equation*} where $\avgs{f\lsb{\chi}{J}}{\rho}:= \int_{\mathcal{B}_r\times I} f(z)\lsb{\chi}{J} \frac{\rho(z)}{\|\rho\|_{L^1(\mathcal{B}_r\times I)}} \ dz\ $, $\avgs{f\lsb{\chi}{J}}{\mu}(t_i) := \int_{\mathcal{B}_r} f(x,t_i) \mu(x) \lsb{\chi}{J} \ dx$ and $J \Subset (-\infty,\infty)$
is some fixed time-interval. \end{lemma} \begin{lemma} \label{lemma_crucial_2} Let $h \in (0,2s)$, let $\phi(x) \in C_c^{\infty}({\Omega_{4\rho}^{\alpha}(\mathfrak{x})})$ and $\varphi(t) \in C^{\infty}(\mathfrak{t}-s,\infty)$ with $\varphi(\mathfrak{t}-s) = 0$ be non-negative functions, and let $[u]_h,[w]_h$ be the Steklov averages as defined in \eqref{stek1}. Then the following estimate holds for any time interval $(t_1,t_2) \subset [\mathfrak{t}-s,\mathfrak{t}+s]$: \begin{equation*} \label{lemma_crucial_2_est} \begin{array}{rcl} |\avgs{{[u-w]_h \varphi}}{\phi} (t_2) - \avgs{{[u-w]_h\varphi}}{\phi}(t_1)| & \leq & \|\nabla \phi\|_{L^{\infty}{({\Omega_{4\rho}^{\alpha}(\mathfrak{z})})}} \|\varphi\|_{L^{\infty}(t_1,t_2)} \iint_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \times (t_1,t_2)} \abs{\mathcal{A}(z,\nabla w) - \mathcal{A}(z,\nabla u)} \ dz \\ & &\qquad +\|\nabla \phi\|_{L^{\infty}{({\Omega_{4\rho}^{\alpha}(\mathfrak{z})})}} \|\varphi\|_{L^{\infty}(t_1,t_2)} \iint_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \times (t_1,t_2)} [|{\bf f}|^{p(\cdot)-1}]_h \ dz \\ & &\qquad + \|\phi\|_{L^{\infty}{({\Omega_{4\rho}^{\alpha}(\mathfrak{x})})}} \|\varphi'\|_{L^{\infty}(t_1,t_2)} \iint_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \times (t_1,t_2)} |[u-w]_h| \ dz. \end{array} \end{equation*} \end{lemma} \subsection{Construction of test function} Let us denote the following functions: \begin{gather*} v(z) := u(z) - w(z) \txt{and} v_h(z) := [u-w]_h(z), \label{def_vh} \end{gather*} where $[u-w]_h(z)$ denotes the usual Steklov average. It is easy to see that $v_h \xrightarrow{h \searrow 0} v$. We also note that $v(z) = 0$ for $z \in \partial_p K_{4\rho}^{\alpha}(\mathfrak{z})$.
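Here and in what follows, $[f]_h$ denotes the Steklov average of $f$ (see \eqref{stek1} for the precise definition used in this paper). In its standard form,
\begin{equation*}
[f]_h(x,t) := \frac{1}{h} \int_t^{t+h} f(x,\tau) \ d\tau,
\end{equation*}
it is differentiable in time with $\partial_t [f]_h(x,t) = \frac{f(x,t+h)-f(x,t)}{h}$, and it satisfies $[f]_h \to f$ in $L^{\vartheta}_{\operatorname{loc}}$ as $h \searrow 0$ whenever $f \in L^{\vartheta}_{\operatorname{loc}}$ with $1 \le \vartheta < \infty$; this is the convergence $v_h \xrightarrow{h \searrow 0} v$ noted above.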
For some fixed $\mathfrak{q}$ such that $1 <\mathfrak{q}< \frac{p^-}{p^+-1}$, with $\mathcal{M}$ as given in \eqref{par_max}, let us now define \begin{equation} \label{def_g_A} \begin{array}{ll} g(z) & := \mathcal{M}\lbr \lbr[[]\frac{|v|}{\scalex{\alpha}{\mathfrak{z}}\rho} + |\nabla u| + |\nabla w| + |{\bf f}| + 1\rbr[]]^{\frac{p(z)}{\mathfrak{q}}} \lsb{\chi}{K_{4\rho}^{\alpha}(\mathfrak{z})}\rbr^{\mathfrak{q}(1-\beta)}. \end{array} \end{equation} For a fixed $\lambda \geq 1$, let us define the \emph{good set} by \begin{equation} \label{elambda} \lsbo{E}{\lambda} := \{ z \in \RR^{n+1} : g(z) \leq \lambda^{1-\beta}\}. \end{equation} For the rest of this section, we will make repeated use of the following bound: \begin{lemma} \label{bound_rho} With $\rho \leq \tilde{\rho}_3$, there holds \[ \rho^{\pm \abs{p^+_{K_{4\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{4\rho}^{\alpha}(\mathfrak{z})}}} \leq C_{(p^{\pm}_{\log},n)}. \] \end{lemma} \begin{proof} Since $p(\cdot) \in p^{\pm}_{\log}$, we have from Remark \ref{remark_def_p_log}, \begin{equation*} p^+_{K_{4\rho}^{\alpha}(\mathfrak{z})} - p^-_{K_{4\rho}^{\alpha}(\mathfrak{z})} \leq \omega_{\pp} \lbr \max\left\{ 8\scalex{\alpha}{\mathfrak{z}}\rho, \sqrt{\scalet{\alpha}{\mathfrak{z}}32\rho^2} \right\} \rbr \leq \omega_{\pp}(32\rho). \end{equation*} Since $\rho \leq 1$, we only need to bound $\rho^{-(p^+_{K_{4\rho}^{\alpha}(\mathfrak{z})}-p^-_{K_{4\rho}^{\alpha}(\mathfrak{z})})}$, which we do as follows: \begin{equation*} \rho^{p^-_{K_{4\rho}^{\alpha}(\mathfrak{z})}-p^+_{K_{4\rho}^{\alpha}(\mathfrak{z})}} \leq \rho^{-32\omega_{\pp}(\rho) } = e^{32 \omega_{\pp}(\rho) \log \frac{1}{\rho}}\leq C_{(p^{\pm}_{\log},n)}. \end{equation*} This completes the proof of the lemma. \end{proof} Following the ideas from \cite[Lemma 5.10]{adimurthi2018sharp}, we can obtain a Vitali-type covering lemma.
\begin{lemma} \label{lemma_vitali} Let $\lambda \geq 1$ and let $\lsbo{E}{\lambda}$ be given by \eqref{elambda}. For every $z \in K_{4\rho}^{\alpha}(\mathfrak{z})\setminus\lsbo{E}{\lambda}$, consider the parabolic cylinders of the form \begin{equation*} \label{q_rho_z} Q_{\rho_z}^{\lambda}(z) := B_{\scalex{\lambda}{z}\rho_z}(x) \times (t - \scalet{\lambda}{z}\rho_z^2,t + \scalet{\lambda}{z}\rho_z^2) \end{equation*} where $\rho_z := d^{\lambda}_z(z,\lsbo{E}{\lambda}) := \inf_{\tilde{z} \in \lsbo{E}{\lambda}} d^{\lambda}_z(z,\tilde{z}).$ Let $\mathfrak{k} \in (0,1]$ be a given constant and consider the open covering of $K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$ given by \begin{equation*} \label{covering_F} \mathcal{F} := \left\{ Q_{\mathfrak{k} \rho_z}^{\lambda}(z)\right\}_{z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}}. \end{equation*} Then there exists a universal constant $\mathfrak{X} = \mathfrak{X}(p^{\pm}_{\log},n)\geq 9$ and a countable disjoint subcollection $ \mathcal{G} := \{Q_{\rho_i}^{\lambda}(z_i)\}_{i \in \NN}\subset \mathcal{F}$ such that there holds \begin{equation*} \bigcup_{\mathcal{F}}Q_{\mathfrak{k}\rho_z}^{\lambda}(z) \subset \bigcup_{\mathcal{G}} Q_{\mathfrak{X} \rho_{z_i}}^{\lambda} (z_i). \end{equation*} \end{lemma} We now have the following Whitney type covering whose proof is very similar to \cite[Lemma 5.11]{adimurthi2018sharp}.
\begin{lemma} \label{whitney_covering} There exists a universal constant $\delta \in (0,1/4)$ such that, for the covering $\mathcal{F}$ of $K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$ given by the cylinders $\mathcal{F} := \left\{Q_{\frac{\delta}{\mathfrak{X}}\rho_z}^{\lambda}(z)\right\}_{z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}},$ where $\mathfrak{X}$ is the constant from Lemma \ref{lemma_vitali}, there exists a countable subcollection $\mathcal{G} = \left\{ Q_{\delta \rho_{z_i}}^{\lambda}(z_i)\right\}_{i \in \NN} = \{ Q_{r_i}^{\lambda}(z_i)\}_{i \in \NN} $ subordinate to the covering $\mathcal{F}$ such that the following holds: \begin{description} \descitem{(W1)}{W1} $K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda} \subset \bigcup_{i \in \NN}Q_i $. \descitem{(W2)}{W2} Each point $z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$ belongs to at most $C_{(n,p^{\pm}_{\log})}$ cylinders of the form $2Q_i$. \descitem{(W3)}{W3} There exists a constant $C=C_{(n,p^{\pm}_{\log})}$ such that for any two cylinders $Q_i$ and $Q_j$ with $2Q_i \cap 2Q_j \neq \emptyset$, there holds \begin{equation*} |B_i| \leq C |B_j| \leq C |B_i| \txt{and} |I_i| \leq C |I_j| \leq C |I_i|. \end{equation*} In particular, there holds $|Q_i| \approx_{(p^{\pm}_{\log},n)} |Q_j|$. \descitem{(W4)}{W4} There exists a constant $\hat{c} = \hat{c}_{(n,p^{\pm}_{\log})}\geq 9$ such that for all $i \in \NN$, there holds: \begin{equation*} \hat{c} Q_i \subset \RR^{n+1} \setminus \lsbo{E}{\lambda} \txt{and} 8\hat{c} Q_i \cap \lsbo{E}{\lambda} \neq \emptyset. \end{equation*} \descitem{(W5)}{W5} For the constant $\hat{c}$ from above, there holds $2Q_i \cap 2Q_j \neq \emptyset$ implies $2Q_i \subset \hat{c}Q_j$.
\end{description} \end{lemma} Once we have obtained the Whitney type covering lemma, we can now obtain the following standard partition of unity lemma: \begin{lemma} \label{partition_unity} Subordinate to the covering $\mathcal{G}$ obtained in Lemma \ref{whitney_covering}, we obtain a partition of unity $\{ \psi_i\}_{i=1}^{\infty}$ on $\RR^{n+1} \setminus \lsbo{E}{\lambda}$ that satisfies the following properties: \begin{itemize} \item $\sum_{i=1}^{\infty} \psi_i(z) = 1$ for all $z \in {K_{4\rho}^{\alpha}(\mathfrak{z})} \setminus\lsbo{E}{\lambda}$. \item $\psi_i \in C_c^{\infty}(2Q_i)$. \item $\|\psi_i\|_{\infty} + \scalex{\lambda}{z_i} r_i \| \nabla \psi_i\|_{\infty} + \scalet{\lambda}{z_i} r_i^2 \| \partial_t \psi_i\|_{\infty} \leq C_{(p^{\pm}_{\log},n)}$ where we have used the notation $r_i := \delta \rho_{z_i}$ which is the parabolic radius of $Q_i$ with respect to the metric $d_{z_i}^{\lambda}$ (see Lemma \ref{whitney_covering} for the notation). \item $\psi_i \geq C_{(p^{\pm}_{\log},n)}$ on $Q_i$. \end{itemize} \end{lemma} Before we end this subsection, let us recall the following useful bound that will be used throughout this section. For a proof, see \cite[Lemma 5.10, (5.23)]{adimurthi2018sharp}. \begin{equation} \label{2.2.28-1} \lambda^{p^+_{2Q_i} - p^-_{2Q_i}} \leq C_{(p^{\pm}_{\log},n)}. \end{equation} \subsection{Construction of Lipschitz truncation function} Let us first clarify some of the notation that will subsequently be used in the rest of this section: for $\hat{c}$ from \descref{W4}{W4}, we denote \begin{equation*} \label{def_qihat} \hat{Q}_i := \hat{c}Q_i = Q_{\hat{r}_i}^{\lambda}(z_i), \txt{where} \hat{r}_i := \hat{c} r_i. \end{equation*} We shall also use the notation \begin{equation*} \label{mcii} \mathcal{I}(i) := \{ j \in \NN : \spt(\psi_i) \cap \spt(\psi_j) \neq \emptyset\} \txt{and} \mathcal{I}_z := \{ j \in \NN: z \in \spt(\psi_j) \}.
\end{equation*} We are now ready to construct the Lipschitz truncation function: \begin{equation} \label{lipschitz_function} \lsbt{v}{\lambda,h}(z) := v_h(z) - \sum_{i} \psi_i(z) \lbr v_h(z) - v_h^i\rbr, \end{equation} where we have defined \begin{equation} \label{def_tuh} v_h^i := \left\{ \begin{array}{ll} \Yint\tiltlongdash_{2Q_i} v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz & \text{if} \ \ 2Q_i \subset \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\infty), \\ 0 & \text{else}. \end{array}\right. \end{equation} From the construction in \eqref{lipschitz_function} and \eqref{def_tuh}, we see that \begin{equation*} \spt(\lsbt{v}{\lambda,h}) \subset \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\infty). \end{equation*} We see that $\lsbt{v}{\lambda,h}$ has the right support for the test function and hence the rest of this section will be devoted to proving the Lipschitz regularity of $\lsbt{v}{\lambda,h}$ on $K_{4\rho}^{\alpha}(\mathfrak{z})$ as well as some useful estimates. \subsection{Some estimates on the test function} In this subsection, we will collect some useful estimates on the test function. The proofs of these estimates follow similarly to those in \cite{adimurthi2018sharp} and hence we will only provide an outline of the proofs. \begin{lemma} \label{lemma3.6_pre} Let $z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$; then from \descref{W1}{W1}, we have that $z \in 2Q_i$ for some $i \in \mathcal{I}_{z}$.
For any $1 \leq \theta \leq \frac{p^-}{\mathfrak{q}}$, there holds \begin{gather} |v_h^i|^{\theta} \leq \Yint\tiltlongdash_{2Q_i} |v_h(\tilde{z})|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \apprle_{(p^{\pm}_{\log},n)} (\scalex{\alpha}{\mathfrak{z}}\rho)^{\theta} \lambda^{\frac{\theta}{p(z_i)}}, \label{lemma3.6_pre_one}\\ \Yint\tiltlongdash_{2Q_i}|\nabla v_h(\tilde{z})|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \apprle_{(p^{\pm}_{\log},n)} \lambda^{\frac{\theta}{p(z_i)}} \label{lemma3.6_pre_two}. \end{gather} \end{lemma} \begin{proof} \begin{description}[leftmargin=*] \item[Proof of \eqref{lemma3.6_pre_one}:] We prove this estimate as follows: \begin{equation*} |v_h^i|^{\theta} \apprle (\scalex{\alpha}{\mathfrak{z}}\rho)^{\theta} \lbr \Yint\tiltlongdash_{8\hat{c}Q_i} \left[ 1 + \abs{\frac{v(z)}{\scalex{\alpha}{\mathfrak{z}}\rho}} \right]^{\frac{p(\cdot)}{\mathfrak{q}}}\ d\tilde{z}\rbr^{\frac{\theta\mathfrak{\mathfrak{q}}}{p^-_{2Q_i}}} \overset{\eqref{elambda}}{\apprle} (\scalex{\alpha}{\mathfrak{z}}\rho)^{\theta} \lambda^{\frac{\theta}{p^-_{2Q_i}}} \overset{\eqref{2.2.28-1}}{\apprle} (\scalex{\alpha}{\mathfrak{z}}\rho)^{\theta} \lambda^{\frac{\theta}{p(z_i)}}. \end{equation*} \item[Proof of \eqref{lemma3.6_pre_two}:] From \eqref{elambda}, we see that \begin{equation*}\label{lemma3.6_pre_1} \Yint\tiltlongdash_{2Q_i} |\nabla v_h|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \apprle \lbr \Yint\tiltlongdash_{8\hat{c}Q_i} \left[|\nabla v| +1\right]^{\frac{p(\cdot)}{\mathfrak{q}}}\ d\tilde{z} \rbr^{\frac{\theta\mathfrak{\mathfrak{q}}}{p^-_{2Q_i}}} \overset{\eqref{elambda}}{\apprle} \lambda^{\frac{\theta}{p^-_{2Q_i}}} \overset{\eqref{2.2.28-1}}{\apprle} \lambda^{\frac{\theta}{p(z_i)}}. 
\end{equation*} \end{description} \end{proof} \begin{corollary} \label{lemma3.6} For any $z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$, we have $z \in 2Q_i$ for some $i \in \mathcal{I}_{z}$, then there holds \begin{equation*} |v_h(z)| \apprle_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)}(\scalex{\alpha}{\mathfrak{z}}\rho) \lambda^{\frac{1}{p(z_i)}}, \end{equation*} where $z_i$ is the centre of $Q_i$. \end{corollary} \begin{lemma} \label{improved_est} Let $2Q_i$ be a parabolic Whitney type cylinder, then for any $1 \leq \theta \leq \frac{p^-}{\mathfrak{q}}$, there holds \begin{equation*} \Yint\tiltlongdash_{2Q_i} |v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ dz \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \min\left\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{z_i}r_i\right\}^{\theta} \lambda^{\frac{\theta}{p(z_i)}}. \end{equation*} \end{lemma} \begin{proof} Let us consider the following two cases: \begin{description}[leftmargin=*] \item[Case $\scalex{\alpha}{\mathfrak{z}}\rho \leq \scalex{\lambda}{z_i}r_i$:] In this case, we can use the triangle inequality along with \eqref{lemma3.6_pre_one} to get \begin{equation} \label{6.18} \Yint\tiltlongdash_{2Q_i} |v_h(\tilde{z})\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ d\tilde{z} \apprle 2 \Yint\tiltlongdash_{2Q_i} |v_h(\tilde{z})|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \overset{\eqref{lemma3.6_pre_one}}{\apprle} (\scalex{\alpha}{\mathfrak{z}}\rho)^{\theta} \lambda^{\frac{\theta}{p(z_i)}}.
\end{equation} \item[Case $\scalex{\alpha}{\mathfrak{z}}\rho \geq \scalex{\lambda}{z_i}r_i$:] Applying Lemma \ref{lemma_crucial_2} with $\mu \in C_c^{\infty}(2B_i)$ such that $|\mu(x)| \apprle \frac{1}{\lbr \scalex{\lambda}{z_i} r_i\rbr^n}$ and $|\nabla \mu(x)| \apprle \frac{1}{\lbr \scalex{\lambda}{z_i} r_i\rbr^{n+1}}$, we get \begin{equation} \label{6.19} \begin{aligned} \Yint\tiltlongdash_{2Q_i} |v_h(\tilde{z})\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ d\tilde{z} &\leq \lbr \scalex{\lambda}{z_i} r_i\rbr^{\theta} \Yint\tiltlongdash_{2Q_i} |\nabla v_h|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z}\\ &\qquad + \sup_{t_1,t_2 \in 2I_i\cap [\mathfrak{t}-s,\mathfrak{t}+s]} |\avgs{v_h}{\mu}(t_2) - \avgs{v_h}{\mu}(t_1)|^{\theta}.\\ \end{aligned} \end{equation} The first term on the right of \eqref{6.19} can be estimated using \eqref{lemma3.6_pre_two} to get \begin{equation} \label{est_J_1} \lbr \scalex{\lambda}{z_i} r_i\rbr^{\theta} \Yint\tiltlongdash_{2Q_i} |\nabla v_h|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \apprle \lbr \scalex{\lambda}{z_i} r_i\rbr^{\theta} \lambda^{\frac{\theta}{p(z_i)}}.
\end{equation} To estimate the second term on the right of \eqref{6.19}, we make use of Lemma \ref{lemma_crucial_2} with $\phi(x) = \mu(x)$ and $\varphi(t) \equiv 1$ to get \begin{equation} \label{est_J_2_one} \begin{array}{rcl} |\avgs{v_h}{\mu}(t_2) - \avgs{v_h}{\mu}(t_1)| & \apprle & \frac{|2Q_i|}{\lbr \scalex{\lambda}{z_i} r_i\rbr^{n+1}} \Yint\tiltlongdash_{2Q_i} |\mathcal{A}(\tilde{z},\nabla u) - \mathcal{A}(\tilde{z},\nabla w)| + |{\bf f}|^{p(\tilde{z})-1} \ d\tilde{z} \\ & \apprle & \frac{\scalet{\lambda}{z_i}r_i^2}{\scalex{\lambda}{z_i} r_i} \lbr \Yint\tiltlongdash_{8\hat{c}Q_i} (1 + |\nabla u|+|\nabla w| + |{\bf f}|)^{\frac{p(\tilde{z})}{\mathfrak{q}}} \ d\tilde{z} \rbr^{\frac{\mathfrak{q}(p^+_{2Q_i} -1)}{p^-_{2Q_i}}}\\ & \overset{\eqref{elambda}}{\apprle} & \lambda^{-1 + \frac{1}{p(z_i)} + \frac{d}{2}} r_i \lambda^{\frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}}. \end{array} \end{equation} Now making use of \eqref{2.2.28-1} along with the fact that $\lambda \geq 1$ and $p^-_{2Q_i} \leq p(z_i)$, we get \begin{equation} \label{6.22} \lambda^{-1+ \frac{1}{p(z_i)} + \frac{p^+_{2Q_i}}{p^-_{2Q_i}} - \frac{1}{p^-_{2Q_i}}} = \lambda^{\frac{p^+_{2Q_i}-p^-_{2Q_i}}{p^-_{2Q_i}}} \lambda^{\frac{p^-_{2Q_i}-p(z_i)}{p(z_i)p^-_{2Q_i}}} \leq \lambda^{\frac{p^+_{2Q_i}-p^-_{2Q_i}}{p^-_{2Q_i}}} \overset{\eqref{2.2.28-1}}{\apprle} C_{(p^{\pm}_{\log},n)}. \end{equation} Substituting \eqref{6.22} into \eqref{est_J_2_one}, we get \begin{equation} \label{est_J_2} |\avgs{v_h}{\mu}(t_2) - \avgs{v_h}{\mu}(t_1)| \apprle \lbr \scalex{\lambda}{z_i} r_i \rbr \lambda^{\frac{1}{p(z_i)}}. \end{equation} Thus combining \eqref{est_J_1} and \eqref{est_J_2} into \eqref{6.19}, we get \[ \Yint\tiltlongdash_{2Q_i} |v_h(\tilde{z})\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ d\tilde{z} \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lbr \scalex{\lambda}{z_i} r_i\rbr^{\theta} \lambda^{\frac{\theta}{p(z_i)}}, \] which proves the lemma.
\end{description} \end{proof} \begin{corollary} \label{corollary3.7} For any $i \in \NN$ and any $j \in \mathcal{I}_i$, there holds \[ |v_h^i - v_h^j| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \min\left\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{z_i}r_i\right\} \lambda^{\frac{1}{p(z_i)}}. \] \end{corollary} \subsection{Bounds on \texorpdfstring{$\lsbt{v}{\lambda,h}$ and $\nabla \lsbt{v}{\lambda,h}$}{the Lipschitz approximation and its gradient}} \begin{lemma} \label{lemma6.7-1} Let $Q_i$ be a Whitney-type parabolic cylinder. Then for any $z \in 2Q_i$, we have the following bound: \begin{equation} \label{lemma6.7-1_est} \lbr\frac{1}{\scalex{\alpha}{\mathfrak{z}}\rho} |\lsbt{v}{\lambda,h}(z)| + |\nabla \lsbt{v}{\lambda,h}(z)|\rbr \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lambda^{\frac{1}{p(z_i)}}. \end{equation} \end{lemma} \begin{corollary} \label{corollary6.7-2} Let $z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$, so that $z \in 2Q_i$ for some $i \in \NN$. Then for any $\delta \in (0,1]$, there hold the estimates \begin{gather} \frac{1}{\scalex{\lambda}{z_i} r_i } |\lsbt{v}{\lambda,h}(z)| \apprle_{{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}}\frac{\lambda^{\frac{1}{p(z_i)}}}{\delta} + \frac{\delta}{\lbr \scalex{\lambda}{z_i}r_i\rbr^2 \lambda^{\frac{1}{p(z_i)}}} |v_h^i|^2, \label{bound+6.31}\\ |\nabla \lsbt{v}{\lambda,h}(z)| \apprle_{{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}}\frac{\lambda^{\frac{1}{p(z_i)}}}{\delta}. \label{bound+6.31_two} \end{gather} \end{corollary} \begin{lemma} \label{lemma6.7-3} Let $z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$, so that $z \in 2Q_i$ for some $i \in \NN$.
Then for any $\delta \in (0,1]$, there hold the estimates \begin{gather} |\lsbt{v}{\lambda,h}(z)| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \frac{\scalex{\lambda}{z_i} r_i\lambda^{\frac{1}{p(z_i)}}}{\delta} + \frac{\delta}{\scalex{\lambda}{z_i} r_i\lambda^{\frac{1}{p(z_i)}}} \Yint\tiltlongdash_{\hat{Q}_i} |v_h(\tilde{z})|^2 \ d\tilde{z}, \label{bound_6.7-3-1} \\ |\nabla \lsbt{v}{\lambda,h}(z)| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lambda^{\frac{1}{p(z_i)}} + \frac{\delta}{\lbr \scalex{\lambda}{z_i} r_i\rbr^2\lambda^{\frac{1}{p(z_i)}}} \Yint\tiltlongdash_{\hat{Q}_i} |v_h(\tilde{z})|^2 \ d\tilde{z}. \nonumber \end{gather} \end{lemma} \subsection{Estimates on the time derivative of \texorpdfstring{$\lsbt{v}{\lambda,h}$}{the Lipschitz approximation}} \begin{lemma} \label{time_vlh} Let $z \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$, so that $z \in 2Q_i$ for some $i \in \NN$. We then have the following estimates for the time derivative of $\lsbt{v}{\lambda,h}$: \begin{equation} \label{bound_time_vlh_one} |\partial_t \lsbt{v}{\lambda,h}(z)| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \frac{1}{ \scalet{\lambda}{z_i} r_i^2} \Yint\tiltlongdash_{Q_i} |v_h(\tilde{z})| \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z}. \end{equation} We also have the improved estimate \begin{equation} \label{bound_time_vlh_two} |\partial_t \lsbt{v}{\lambda,h}(z)| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \frac{1}{\scalet{\lambda}{z_i} r_i^2} \lambda^{\frac{1}{p(z_i)}} \min \left\{\scalex{\lambda}{z_i}r_i, \scalex{\alpha}{\mathfrak{z}}\rho \right\}.
\end{equation} \end{lemma} \begin{proof} Let us prove each of the assertions as follows: \begin{description}[leftmargin=*] \item[Estimate \eqref{bound_time_vlh_one}:] In this case, we proceed as follows: \begin{equation*} \begin{array}{rcl} |\partial_t \lsbt{v}{\lambda,h}(z)| & \leq & \sum_{j \in I_i} |v_h^j| |\partial_t\psi_j(z)| \overset{\eqref{lemma3.6_pre_one}}{\apprle} \frac{1}{\scalet{\lambda}{z_i} r_i^2} \Yint\tiltlongdash_{Q_i} |v_h(\tilde{z})| \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z}. \end{array} \end{equation*} \item[Estimate \eqref{bound_time_vlh_two}:] From the fact that $\sum_{j \in I_i} \psi_j(z) = 1$, we see that $\sum_{j \in I_i} \partial_t \psi_j(z) = 0$, which along with Lemma \ref{partition_unity} gives the following sequence of estimates \begin{equation*} \begin{array}{rcl} |\partial_t\lsbt{v}{\lambda,h}(z)| & = & \abs{\sum_{j \in I_i} \lbr v_h^j - v_h^i \rbr \partial_t \psi_j(z) } \\ & \overset{\text{Corollary \ref{corollary3.7}}}{\apprle} & \frac{1}{\scalet{\lambda}{z_i} r_i^2} \min\left\{\scalex{\alpha}{\mathfrak{z}} \rho, \scalex{\lambda}{z_i}r_i\right\} \lambda^{\frac{1}{p(z_i)}}. \end{array} \end{equation*} \end{description} \end{proof} \subsection{Some important estimates for the test function} \begin{lemma} \label{lemma6.8} Let $Q_i$ be a Whitney-type parabolic cylinder for some $i \in \NN$. Then for any $\vartheta \in [1,2]$, there holds \begin{equation*} \label{lemma6.8-one} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \setminus \lsbo{E}{\lambda}} |\lsbt{v}{\lambda,h}(z)|^{\vartheta} \ dz \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \iint_{{K_{4\rho}^{\alpha}(\mathfrak{z})} \setminus \lsbo{E}{\lambda}} |v_h(z)|^{\vartheta} \ dz.
\end{equation*} \end{lemma} \begin{lemma} \label{lemma6.8-1} Let $Q_i$ be a Whitney-type parabolic cylinder for some $i \in \NN$, then there holds \begin{equation*} \label{lemma6.8-two} \Yint\tiltlongdash_{Q_i} |\lsbt{v}{\lambda,h}(z) - v_h(z)| \ dz \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \min\left\{ \scalex{\lambda}{z_i} r_i, \scalex{\alpha}{\mathfrak{z}}\rho \right\} \lambda^{\frac{1}{p(z_i)}}. \end{equation*} \end{lemma} \begin{lemma} \label{lemma6.8-2} Let $Q_i$ be a Whitney-type parabolic cylinder for some $i \in \NN$, then for any $\vartheta \in [1,2]$, there holds \begin{equation*} \label{lemma6.8-three} \begin{array}{rl} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda} } |\partial_t \lsbt{v}{\lambda,h}(z) \lbr \lsbt{v}{\lambda,h}(z) - v_h(z)\rbr|^{\vartheta} \ dz & \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lambda^{\vartheta} |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation*} \end{lemma} \begin{proof} From \descref{W2}{W2}, we see that $K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda} \subset \bigcup 2Q_i$, thus for a given $i \in \NN$, let us define the following \begin{equation*} \label{lm6.8-3-1} \begin{array}{rcl} J_i:=\iint_{2Q_i } \abs{\partial_t \lsbt{v}{\lambda,h}(z) \lbr \lsbt{v}{\lambda,h}(z) - v_h(z)\rbr}^{\vartheta} \lsb{\chi}{K_{4\rho}^{\alpha}(\mathfrak{z})}\ dz.
\end{array} \end{equation*} Making use of \eqref{bound_time_vlh_two}, we get \begin{equation*} \begin{array}{rcl} J_i & \apprle & \lbr \frac{\lambda^{\frac{1}{p(z_i)}}}{\scalet{\lambda}{z_i} r_i^2} \min\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{z_i} r_i\} \rbr^{\vartheta} \iint_{2Q_i} \abs{\lsbt{v}{\lambda,h}(z)\lsb{\chi}{K_{4\rho}^{\alpha}(\mathfrak{z})} - v_h(z) \lsb{\chi}{K_{4\rho}^{\alpha}(\mathfrak{z})}}^{\vartheta} \ dz \\ & \apprle & \lbr \frac{\lambda^{\frac{1}{p(z_i)}}}{\scalet{\lambda}{z_i} r_i^2} \min\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{z_i} r_i\} \rbr^{\vartheta} \sum_{j \in I_i}\iint_{2Q_i} \abs{ v_h(z)\lsb{\chi}{K_{4\rho}^{\alpha}(\mathfrak{z})} - v_h^j }^{\vartheta} \ dz \\ & \overset{\text{Lemma \ref{improved_est}}}{\apprle} & \lbr \frac{\lambda^{\frac{1}{p(z_i)}}}{\scalet{\lambda}{z_i} r_i^2} \min\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{z_i} r_i\} \scalex{\lambda}{z_i} r_i \lambda^{\frac{1}{p(z_i)}}\rbr^{\vartheta} |\hat{Q}_i| = |\hat{Q}_i| \lambda^{\vartheta}. \end{array} \end{equation*} Summing over all $i \in \NN$, we get the desired inequality. \end{proof} \subsection{Lipschitz continuity estimates} We will now show that the function $\lsbt{v}{\lambda}$ constructed in \eqref{lipschitz_function} is Lipschitz continuous on $B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s)$ where $s$ is as defined in \eqref{def_s}. To do this, we shall use the integral characterization of Lipschitz continuous functions obtained in \cite[Theorem 3.1]{Prato} which says the following: \begin{lemma}[Lipschitz characterization] \label{deprato} Let $\tilde{z} \in B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s)$ and $r >0$ be given. Define the parabolic cylinder $Q_r(\tilde{z}) := B_r(\tilde{x}) \times (\tilde{t} - r^2, \tilde{t}+r^2)$, i.e., $Q_r(\tilde{z}) := \{ z \in \RR^{n+1}: d_p(z,\tilde{z}) \leq r\}$ where $d_p$ is as defined in Definition \ref{parabolic_metric}. 
Furthermore suppose that the following expression is bounded independently of $\tilde{z} \in B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s)$ and $r>0$: \[ I_r(\tilde{z}) := \frac{1}{\abs{B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s) \cap Q_r(\tilde{z})}} \iint_{B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s) \cap Q_r(\tilde{z})} \abs{\frac{\lsbt{v}{\lambda,h}(z) - \avgs{\lsbt{v}{\lambda,h}}{B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s)\cap Q_r(\tilde{z})}}{r}} \ dz < \infty, \] then $\lsbt{v}{\lambda} \in C^{0,1}(B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s))$. \end{lemma} \begin{remark} From \eqref{def_d} and the fact that $\alpha \geq 1$, for any $\tilde{z}_1,\tilde{z}_2 \in \RR^{n+1}$ and any $\tilde{z} \in \RR^{n+1}$, we get \begin{equation} \label{equiv_dist} \begin{array}{rcl} d_p(\tilde{z}_1,\tilde{z}_2) & \overset{\text{Definition} \ \ref{parabolic_metric}}{:=} & \max\lbr[\{]|x_1-x_2|, \sqrt{|t_1-t_2|} \rbr[\}] \\ & \leq& \max\left\{\nscalex{\alpha}{z}|x_1-x_2|, \sqrt{\nscalet{\alpha}{z} |t_1-t_2|} \right\}\overset{\text{Definition} \ \ref{loc_parabolic_metric}}{ =:} d_{\tilde{z}}(\tilde{z}_1,\tilde{z}_2)\\ & \leq & \alpha^{{\frac{1}{p^-}-\frac{d}{2}}}\alpha^{\frac32-\frac{d}{2}}\max\left\{|x_1-x_2|, \sqrt{|t_1-t_2|} \right\} \leq C_{(\alpha,p^-,d)} d_p(\tilde{z}_1,\tilde{z}_2). \end{array} \end{equation} This shows that for any $\tilde{z} \in \RR^{n+1}$, we have $d_p \approx_{(\alpha,p^-,d)} d_{\tilde{z}}$. \end{remark} In this subsection, we want to apply Lemma \ref{deprato}; hence we only need to ensure that the constants involved are independent of $r>0$ and $\tilde{z}$.
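We also record, for the reader's convenience, an elementary consequence of \eqref{equiv_dist}: if a function $f$ satisfies $|f(\tilde{z}_1) - f(\tilde{z}_2)| \leq K d_{\tilde{z}}(\tilde{z}_1,\tilde{z}_2)$ for all $\tilde{z}_1, \tilde{z}_2$, then
\[
|f(\tilde{z}_1) - f(\tilde{z}_2)| \leq K d_{\tilde{z}}(\tilde{z}_1,\tilde{z}_2) \leq K\, C_{(\alpha,p^-,d)}\, d_p(\tilde{z}_1,\tilde{z}_2),
\]
while the inequality $d_p \leq d_{\tilde{z}}$ gives the converse implication with the same constant $K$. Thus Lipschitz continuity with respect to $d_p$ and with respect to the intrinsic metrics $d_{\tilde{z}}$ coincide up to a constant depending only on $\alpha$, $p^-$ and $d$, and it suffices to work with the standard parabolic metric $d_p$.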
\emph{Only for this subsection, we will use the notation $o(1)$ to denote a constant which can depend on $\alpha,\alpha_0,p^{\pm}_{\log},\Lambda_0,\Lambda_1,n,\|u_h\|_{L^1}, \|u\|_{L^1}$ but {\bf NOT} on $r>0$ and the point $\tilde{z}$.} \begin{lemma} \label{lipschitz_continuity} Let $\alpha \geq 1$, then for any $\tilde{z} \in K_{4\rho}^{\alpha}(\mathfrak{z})$ and $r>0$, there exists a constant $C>0$ independent of $\tilde{z}$ and $r$ such that \begin{equation*}\label{da_pr} I_r(\tilde{z}) := \frac{1}{\abs{K_{4\rho}^{\alpha}(\mathfrak{z}) \cap Q_r(\tilde{z})}} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z}) \cap Q_r(\tilde{z})} \abs{\frac{\lsbt{v}{\lambda,h}(z) - \avgs{\lsbt{v}{\lambda,h}}{K_{4\rho}^{\alpha}(\mathfrak{z}) \cap Q_r(\tilde{z})}}{r}} \ dz \leq C < \infty. \end{equation*} In particular, this implies that for any $\tilde{z}_1, \tilde{z}_2 \in B_{4\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\mathfrak{t}+s)$, there exists a constant $K>0$ such that \[ |\lsbt{v}{\lambda,h}(\tilde{z}_1) - \lsbt{v}{\lambda,h}(\tilde{z}_2)| \leq K d_p(\tilde{z}_1,\tilde{z}_2). \] \end{lemma} \begin{proof} Let $r>0$ and $\tilde{z} \in K_{4\rho}^{\alpha}(\mathfrak{z})$, and denote the cylinder $Q_r(\tilde{z}) = Q$. We will now proceed as follows: \begin{description}[leftmargin=*] \item[Case $2Q \subset \lsbo{E}{\lambda}^c$:] From \eqref{lipschitz_function}, it is easy to see that $\lsbt{v}{\lambda,h} \in C^{\infty}(\lsbo{E}{\lambda}^c)$. Thus, we can apply the mean value theorem to get \begin{equation}\label{A.47}\begin{array}{rcl} I_r(\tilde{z}) &\apprle& \frac{1}{r} \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \abs{\lsbt{v}{\lambda,h}(z_1) - \lsbt{v}{\lambda,h}(z_2)} \ dz_1 \ dz_2 \\ &\apprle &\sup_{z\in Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \lbr |\nabla \lsbt{v}{\lambda,h}(z)| + r |\partial_t \lsbt{v}{\lambda,h}(z)| \rbr.
\end{array} \end{equation} Since $2Q \subset \lsbo{E}{\lambda}^c$, we can use \eqref{bound+6.31_two} with $\delta =1$ and \eqref{bound_time_vlh_two} to bound \eqref{A.47} as follows: \begin{equation}\label{A.48} I_r(\tilde{z}) \apprle \sup_{z \in Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \lbr \lambda^{\frac{1}{p(z_i)}} + r\, \frac{\lambda^{\frac{1}{p(z_i)}} \min\left\{\scalex{\lambda}{z_i}r_i, \scalex{\alpha}{\mathfrak{z}}\rho\right\}}{\scalet{\lambda}{z_i} r_i^2} \rbr. \end{equation} Here we recall that, since $2Q \subset \lsbo{E}{\lambda}^c$, each such $z$ satisfies $z \in 2Q_i$ for some $i \in \NN$, where $r_i$ is the radius of the cylinder $Q_i$. Let $z_i$ be the centre of $Q_i$; then we have \begin{equation}\label{A.49} r \leq d_p(z,\lsbo{E}{\lambda}) \leq d_p(z,z_i) + d_p(z_i,\lsbo{E}{\lambda}) \leq r_i + d_{z_i}(z_i,\lsbo{E}{\lambda}) \overset{\eqref{equiv_dist}}{\leq} r_i + \hat{c} r_i = (1+\hat{c}) r_i. \end{equation} Substituting \eqref{A.49} into \eqref{A.48}, we get \[ I_r(\tilde{z}) \apprle \lambda^{\frac{1}{p^-}} + (1+ \hat{c}) \lambda^{1- \frac{d}{2}} = o(1). \] \item[Case $2Q \nsubseteq \lsbo{E}{\lambda}^c$:] In this case, we split the proof into three subcases as follows: \begin{description} \item[Subcase $2Q \subset \RR^n \times {(-\infty,\mathfrak{t}+s]}$ or $2Q \subset \RR^n \times {[\mathfrak{t}-s,\infty)}$:] In this situation, it is easy to see that the following holds: \begin{equation} \label{A.70} \abs{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr} \apprge |Q|.
\end{equation} We apply the triangle inequality and estimate $I_r(\tilde{z})$ by \begin{equation*} \label{3.81} \begin{array}{rcl} I_r(\tilde{z}) & \leq & 2J_1 + J_2, \end{array} \end{equation*} where we have set \begin{equation}\label{def_J_1_2}\begin{array}{c} J_1:= \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \left| \frac{\lsbt{v}{\lambda,h}(z) - v_h(z)}{r}\right| \ dz, \\ J_2 := \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \left| \frac{v_h(z) - \avgs{v_h}{Q\cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr}}{r}\right|\ dz. \end{array}\end{equation} We now estimate each of the terms of \eqref{def_J_1_2} as follows: \begin{description} \item[Estimate for $J_1$:] From \eqref{lipschitz_function}, we get \begin{equation} \label{3.82} \begin{array}{ll} J_1 &\apprle \sum_{i\in \NN} \frac{1}{|Q\cap \lbr\RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr|} \iint_{Q \cap\lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr\cap 2Q_i} \left| \frac{v_h(z) - v_h^i}{r}\right| \ dz \\ & = \sum_{i\in \NN} \frac{1}{|Q\cap\lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr|} \iint_{Q \cap\lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr\cap 2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r}\right| \ dz. \end{array} \end{equation} Let us fix an $i \in \NN$ and take two points $z_1 \in Q \cap 2Q_i$ and $z_2 \in \lsbo{E}{\lambda} \cap 2Q$. Making use of \descref{W5}{W5} along with the trivial bound $d_p (z_1, z_2) \leq 4r$ and $d_p (z_i, z_1) \leq 2r_i$, we get \begin{equation} \label{3.83.1} \hat{c}r_i =d_p(z_i,\lsbo{E}{\lambda}) \leq d_p (z_i, z_1) + d_p(z_1, z_2) \leq 2r_i + 4r \ \Longrightarrow \ r_i \apprle_{(\hat{c})} r, \end{equation} where $z_i$ denotes the centre of $Q_i$ as in \descref{W1}{W1} and $\hat{c}$ is from \descref{W4}{W4}.
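For clarity, we record how \eqref{3.83.1} enters the summation below: since $r_i \apprle_{(\hat{c})} r$, we have $\frac{1}{r} \apprle_{(\hat{c})} \frac{1}{r_i}$ for every index $i$ with $Q \cap 2Q_i \neq \emptyset$, and hence
\[
\Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r}\right| \ dz \apprle_{(\hat{c})} \Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r_i}\right| \ dz,
\]
which puts each term in the form required to apply Lemma \ref{improved_est}.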
Note that \eqref{A.70} holds and thus summing over all $i \in \NN$ such that $Q \cap\lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr\cap 2Q_i \neq \emptyset$ in \eqref{3.82} and making use of \eqref{3.83.1}, we get \begin{equation*} \label{3.84.1} \begin{array}{rcl} J_1 &\apprle& \sum_{\substack{i\in\NN \\ Q \cap\lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr\cap 2Q_i \neq \emptyset }} \frac{|2Q_i|}{|Q\cap\lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr|} \Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r}\right| \ dz \\ &\overset{\eqref{A.70},\eqref{3.83.1}}{\apprle} & \sum_{i\in \NN} \Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r_i}\right| \ dz. \end{array} \end{equation*} Using Lemma \ref{improved_est}, we get \begin{equation*} \label{bound_J_1} J_1 \apprle o(1). \end{equation*} \item[Estimate for $J_2$:] To estimate this term, we proceed as follows: Note that $Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr$ is another cylinder. If $Q \subset B_{4\rho}^{\alpha}(\mathfrak{x}) \times \RR$, then choose a cut-off function $\mu \in C_c^{\infty}(B)$ with $|\nabla \mu| \leq \frac{C_{(n)}}{r^{n+1}}$ to get \begin{equation*} \begin{array}{ll} J_2 & = \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - \avgs{v_h\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}}{Q\cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr}}{r}\right|\ dz \\ & \apprle \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} |\nabla v_h| \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} + \sup_{t_1,t_2 \in [\mathfrak{t}-s,\mathfrak{t}+s] \cap Q} \left| \frac{\avgs{v_h\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}}{\mu}(t_2) - \avgs{v_h\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}}{\mu}(t_1)}{r} \right|.
\end{array} \end{equation*} Recall that we are in the case $2Q \cap \lsbo{E}{\lambda} \neq \emptyset$ and $2Q \cap \lsbo{E}{\lambda}^c \neq \emptyset$. Further applying Lemma \ref{lemma_crucial_2} and proceeding similarly to \eqref{est_J_2_one}, we see that \begin{equation*} \label{J_2_bound_first_case} J_2 \apprle o(1). \end{equation*} On the other hand, if $Q \nsubseteq B_{4\rho}^{\alpha}(\mathfrak{x}) \times \RR$, then we can apply Poincar\'e's inequality directly to get \begin{equation*} \begin{array}{ll} J_2 & \apprle \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}}{r}\right|\ dz \apprle \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times {[\mathfrak{t}-s,\mathfrak{t}+s]}\rbr} \left| \nabla v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}\right|\ dz. \end{array} \end{equation*} Using \eqref{A.70} again, we thus get \begin{equation} \label{bound_J_2_second_case} J_2 \apprle o(1). \end{equation} \end{description} \item[Subcase $2Q \cap \lbr \RR^n \times (-\infty, \mathfrak{t}-s) \rbr \neq \emptyset$ and $2Q \cap \lbr \RR^n \times (\mathfrak{t}+s,\infty) \rbr \neq \emptyset$ AND $r^2 \leq s$:] In this case, we see that $$|Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr| \approx |B| \cdot s.$$ We apply the triangle inequality and estimate $I_r(\tilde{z})$ by \begin{equation*} \label{3.81_n} \begin{array}{ll} I_r(\tilde{z}) & \leq 2J_1 + J_2, \end{array} \end{equation*} where we have set \begin{equation*}\label{def_J_1_2_n}\begin{array}{c} J_1:= \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr} \left| \frac{\lsbt{v}{\lambda,h}(z) - v_h(z)}{r}\right| \ dz, \\ J_2 := \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr} \left| \frac{v_h(z) - \avgs{v_h}{Q\cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr}}{r}\right|\ dz.
\end{array}\end{equation*} Proceeding as before, we get \begin{equation*} \label{3.82.n} \begin{array}{rcl} J_1 & \apprle & \sum_{i \in \NN} \frac{|2Q_i|}{|Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr|} \Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r}\right| \ dz \\ & \overset{\eqref{3.83.1}}{\apprle}& \sum_{i\in \NN} \frac{r_i^{n+2} \scalet{\lambda}{z_i}\scalexn{\lambda}{z_i}}{r^{n} s} \Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r_i}\right| \ dz\\ & \overset{\eqref{3.83.1}}{\apprle}& \sum_{i\in \NN} \frac{r^{n+2} \scalet{\lambda}{z_i}\scalexn{\lambda}{z_i} }{r^{n} s} \Yint\tiltlongdash_{2Q_i} \left| \frac{v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - v_h^i}{r_i}\right| \ dz\\ & \overset{\text{Lemma \ref{improved_est}}}{\apprle} & o(1). \end{array} \end{equation*} To obtain the last inequality, we made use of the bound $r^2 \leq s$. The estimate for $J_2$ is obtained exactly as in \eqref{bound_J_2_second_case}, giving \begin{equation*} \label{bound_J_2_third_case} J_2 \apprle o(1). \end{equation*} \item[Subcase $2Q \cap \lbr \RR^n \times (-\infty, \mathfrak{t}-s) \rbr \neq \emptyset$ and $2Q \cap \lbr \RR^n \times (\mathfrak{t}+s,\infty) \rbr \neq \emptyset$ AND $r^2 \geq s$:] In this case, we proceed as follows.
Using the triangle inequality and the bound $|Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr| \approx |B| \cdot s$, where $s$ is from \eqref{def_s}, we get \begin{equation*} \label{3.91} \begin{array}{l} \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr} \left| \frac{\lsbt{v}{\lambda,h}(z) - \avgs{\lsbt{v}{\lambda,h}}{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr}}{r} \right|\ dz \\ \hspace*{4cm} \apprle \frac{1}{|Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr|} \iint_{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr \cap \lsbo{E}{\lambda}} |\lsbt{v}{\lambda,h}(z)| \ dz \\ \hspace*{4cm} \qquad + \frac{1}{|Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr|} \iint_{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr \setminus \lsbo{E}{\lambda}} |\lsbt{v}{\lambda,h}(z)| \ dz. \end{array} \end{equation*} By construction of $\lsbt{v}{\lambda,h}$ in \eqref{lipschitz_function}, we have $\lsbt{v}{\lambda,h} = v_h$ on $\lsbo{E}{\lambda}$. On $\lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr \setminus \lsbo{E}{\lambda}$, we can apply Corollary \ref{lemma3.6} to obtain the following bound: \begin{equation*} \begin{array}{ll} \Yint\tiltlongdash_{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr} \left| \frac{\lsbt{v}{\lambda,h}(z) - \avgsnoleft{Q \cap \lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr}{\lsbt{v}{\lambda,h}}}{r} \right|\ dz & \apprle \frac{1}{r^n s} \iint_{\lbr \RR^n \times [\mathfrak{t}-s,\mathfrak{t}+s]\rbr} |v_h(z)| \ dz + o(1) \apprle o(1). \end{array} \end{equation*} \end{description} \end{description} This completes the proof of the Lipschitz continuity. \end{proof} \subsection{Crucial estimates for the test function} In this subsection, we shall prove three crucial estimates that will be needed. \begin{lemma} \label{cruc_1} Let $\lambda \geq 1$, then for any $i \in \NN$, $\delta \in (0,1]$ and a.e.
$t \in (\mathfrak{t}-s,\mathfrak{t}+s)$, there exists a constant $C = C_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}$ such that there holds \begin{equation} \label{cruc_est_1} \abs{\int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \lbr v(x,t) - v^i \rbr \lsbt{v}{\lambda,h}(x,t) \psi_i(x,t)\ dx} \leq C \lbr \frac{\lambda}{\delta} |Q_i| + \delta |\hat{B}_i|\Yint\tiltlongdash_{\hat{Q}_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}\ dz\rbr. \end{equation} \end{lemma} \begin{proof} Let us fix any $t \in (\mathfrak{t}-s,\mathfrak{t}+s]$, $i \in \NN$ and take $\psi_i(y,\tau) \lsbt{v}{\lambda,h}(y,\tau)$ as a test function in \eqref{basic_pde} and \eqref{wapprox_int}. Further integrating the resulting expression over $ \left(t_i - \scalet{\lambda}{z_i}4r_i^2 , t\right)$ along with making use of the fact that $\psi_i(y,t_i - \scalet{\lambda}{z_i} 4r_i^2) = 0$, we get for any $a\in \RR$, the equality \begin{equation} \label{3.323} \begin{array}{ll} \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \lbr[(] (v_h - a) \psi_i \lsbt{v}{\lambda,h} \rbr (y,t) \ dy & = \int_{\max\{t_i - 4\scalet{\lambda}{z_i} r_i^2,\mathfrak{t}-s\}}^t \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \partial_t \left( (v_h - a) \psi_i \lsbt{v}{\lambda,h} \right) (y,\tau) \ dy \ d\tau \\ & = \int_{\max\{t_i - 4\scalet{\lambda}{z_i} r_i^2,\mathfrak{t}-s\}}^t \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \partial_t \left( [u-w]_h\psi_i \lsbt{v}{\lambda,h} - a \psi_i \lsbt{v}{\lambda,h} \right) (y,\tau) \ dy \ d\tau \\ & = \int_{\max\{t_i - 4\scalet{\lambda}{z_i} r_i^2,\mathfrak{t}-s\}}^t \int_{\Omega} \iprod{[\mathcal{A}(y,\tau,\nabla w)]_h - [\mathcal{A}(y,\tau,\nabla u)]_h}{\nabla ( \psi_i \lsbt{v}{\lambda,h})} \ dy \ d\tau \\ & \qquad +\ \int_{\max\{t_i - 4\scalet{\lambda}{z_i} r_i^2,\mathfrak{t}-s\}}^t \int_{\Omega} \iprod{[|{\bf f}|^{p(\cdot)-2} {\bf f}]_h}{\nabla\lbr \psi_i \lsbt{v}{\lambda,h} \rbr} (y,\tau) \ dy \ d\tau \\ & \qquad -\ \int_{\max\{t_i - 4\scalet{\lambda}{z_i} r_i^2,\mathfrak{t}-s\}}^t \int_{\Omega} a \partial_t \lbr \psi_i \lsbt{v}{\lambda,h}\rbr \ dy \ d\tau.
\end{array} \end{equation} We can estimate $|\nabla ( \psi_i \lsbt{v}{\lambda,h})|$ using the chain rule and Lemma \ref{partition_unity} to get \begin{equation} \label{3.326} \begin{array}{ll} |\nabla (\psi_i \lsbt{v}{\lambda,h})| & \apprle \frac{1}{\scalex{\lambda}{z_i}r_i} |\lsbt{v}{\lambda,h}| + |\nabla \lsbt{v}{\lambda,h}|. \end{array} \end{equation} Similarly, we can estimate $\left|\partial_t\lbr \psi_i \lsbt{v}{\lambda,h} \rbr\right|$ using the chain rule to get \begin{align*} \left| \partial_t \lbr \psi_i \lsbt{v}{\lambda,h}\rbr\right| & \apprle \frac{1}{\scalet{\lambda}{z_i} r_i^2} |\lsbt{v}{\lambda,h}| + |\partial_t \lsbt{v}{\lambda,h}|.\label{3.328} \end{align*} Let us now prove each of the assertions of the lemma. \begin{description}[leftmargin=*] \item[Proof of \eqref{cruc_est_1}:] Let us take $a=v_h^i$ in \eqref{3.323}, followed by letting $h \searrow 0$ and making use of \eqref{3.326}, \eqref{abounded} and \eqref{bbounded}, to get \begin{equation*} \label{first_1} \begin{array}{ll} \left| \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \lbr[(] (v- v^i) \psi_i \lsbt{v}{\lambda} \rbr (y,t) \ dy \right| & \apprle J_1 + J_2 + J_3, \end{array} \end{equation*} where we have set \begin{align} J_1& := \frac{1}{\scalex{\lambda}{z_i}r_i} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr |\nabla u|+|\nabla w|+|{\bf f}|+1\rbr^{p(z)-1} |\lsbt{v}{\lambda}| \lsb{\chi}{2Q_i\cap K_{4\rho}^{\alpha}(\mathfrak{z})} \ dz, \nonumber \\ J_2& := \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} \lbr |\nabla u|+|\nabla w|+|{\bf f}|+1\rbr^{p(z)-1} |\nabla \lsbt{v}{\lambda}| \lsb{\chi}{2Q_i\cap K_{4\rho}^{\alpha}(\mathfrak{z})} \ dz, \nonumber \\ J_3&:= \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |v-v^i| | \partial_t (\psi_i \lsbt{v}{\lambda})| \lsb{\chi}{2Q_i\cap K_{4\rho}^{\alpha}(\mathfrak{z})} \ dz.
\nonumber \end{align} Let us now estimate each of the terms as follows: \begin{description}[leftmargin=*] \item[Bound for $J_1$:] We split the estimate into two cases, the first being when $\scalex{\alpha}{\mathfrak{z}}\rho \leq \scalex{\lambda}{z_i} r_i$. In this case, we make use of \eqref{lemma6.7-1_est} along with \eqref{elambda} to get \[ \begin{array}{rcl} J_1 &\apprle& \frac{\scalex{\alpha}{\mathfrak{z}}\rho \lambda^{\frac{1}{p(z_i)}}}{\scalex{\lambda}{z_i} r_i} |Q_i| \Yint\tiltlongdash_{2Q_i} \lbr |\nabla u|+|\nabla w|+|{\bf f}|+1\rbr^{p(z)-1} \ dz \\ & \apprle & \frac{\scalex{\alpha}{\mathfrak{z}}\rho \lambda^{\frac{1}{p(z_i)}}}{\scalex{\lambda}{z_i} r_i} |Q_i| \lbr \Yint\tiltlongdash_{2Q_i} \lbr |\nabla u|+|\nabla w|+|{\bf f}|+1\rbr^{\frac{p(\cdot)}{\mathfrak{q}}} \ dz\rbr^{\frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}}\\ & \apprle & \frac{\scalex{\alpha}{\mathfrak{z}}\rho \lambda^{\frac{1}{p(z_i)}}}{\scalex{\alpha}{\mathfrak{z}}\rho} |Q_i| \lambda^{\frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}}\\ & \apprle & |Q_i| \lambda \leq \frac{\lambda}{\delta} |2Q_i|. \end{array} \] To obtain the last inequality, we have used $\lambda^{\frac{1}{p(z_i)} + \frac{p^+_{2Q_i}}{p^-_{2Q_i}}-\frac{1}{p^-_{2Q_i}}-1} \leq C_{(p^{\pm}_{\log},n)}$.
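For the reader's convenience, we note that this exponent bound follows exactly as in \eqref{6.22}: since $p^-_{2Q_i} \leq p(z_i)$ and $\lambda \geq 1$, we have
\[
\lambda^{\frac{1}{p(z_i)} + \frac{p^+_{2Q_i}}{p^-_{2Q_i}}-\frac{1}{p^-_{2Q_i}}-1} = \lambda^{\frac{1}{p(z_i)} - \frac{1}{p^-_{2Q_i}}}\, \lambda^{\frac{p^+_{2Q_i}-p^-_{2Q_i}}{p^-_{2Q_i}}} \leq \lambda^{\frac{p^+_{2Q_i}-p^-_{2Q_i}}{p^-_{2Q_i}}} \overset{\eqref{2.2.28-1}}{\apprle} C_{(p^{\pm}_{\log},n)}.
\]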
In the case $ \scalex{\alpha}{\mathfrak{z}} \rho \geq \scalex{\lambda}{z_i} r_i$, we get for any $\delta \in(0,1]$ using \eqref{bound_6.7-3-1} \[ \begin{array}{rcl} J_1 & \apprle & \lbr \frac{\lambda^{\frac{1}{p(z_i)}}}{\delta} + \frac{\delta}{\lbr \scalex{\lambda}{z_i} r_i \rbr^2 \lambda^{\frac{1}{p(z_i)}} } \Yint\tiltlongdash_{\hat{Q}_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz \rbr |Q_i| \Yint\tiltlongdash_{2Q_i} \lbr |\nabla u|+|\nabla w|+|{\bf f}|+1\rbr^{p(z)-1} \ dz \\ & \apprle & |Q_i| \lbr \frac{\lambda^{\frac{1}{p(z_i)}}}{\delta} + \frac{\delta}{\lbr \scalex{\lambda}{z_i} r_i \rbr^2 \lambda^{\frac{1}{p(z_i)}} } \Yint\tiltlongdash_{\hat{Q}_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz \rbr \lambda^{\frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}}\\ & \apprle & |Q_i| \frac{\lambda \lambda^{\frac{1}{p(z_i)} -1 + \frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}}}{\delta} + \delta |B_i| \frac{\scalet{\lambda}{z_i} r_i^2\lambda^{\frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}} }{\lbr \scalex{\lambda}{z_i} r_i \rbr^2 \lambda^{\frac{1}{p(z_i)}}}\Yint\tiltlongdash_{\hat{Q}_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz\\ & \apprle & \frac{\lambda}{\delta} |Q_i| + \delta |\hat{B}_i| \Yint\tiltlongdash_{\hat{Q}_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz. \end{array} \] To obtain the last inequality, we again made use of $\lambda^{\frac{1}{p(z_i)} + \frac{p^+_{2Q_i}}{p^-_{2Q_i}}-\frac{1}{p^-_{2Q_i}}-1} \leq C_{(p^{\pm}_{\log},n)}$. Thus in either case, we obtain \begin{equation*} \label{bound_I_1} J_1 \apprle \frac{\lambda}{\delta} |\hat{Q}_i| + \delta |\hat{B}_i| \Yint\tiltlongdash_{\hat{Q}_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz.
\end{equation*} \item[Bound for $J_2$:] In this case, we can directly use \eqref{bound+6.31_two} to get, for any $\delta \in (0,1]$, the bound \begin{equation*} \label{bound_I_22} \begin{array}{rcl} J_2 & \apprle & \frac{\lambda^{\frac{1}{p(z_i)}}}{\delta} |Q_i| \Yint\tiltlongdash_{2Q_i}\lbr |\nabla u|+|\nabla w|+|{\bf f}|+1\rbr^{p(z)-1} \ dz \\ & \apprle & |Q_i| \frac{\lambda^{\frac{1}{p(z_i)}}}{\delta}\lambda^{\frac{p^+_{2Q_i} -1}{p^-_{2Q_i}}} \apprle |Q_i| \frac{\lambda}{\delta}. \end{array} \end{equation*} To obtain the last inequality, we again made use of $\lambda^{\frac{1}{p(z_i)} + \frac{p^+_{2Q_i}}{p^-_{2Q_i}}-\frac{1}{p^-_{2Q_i}}-1} \leq C_{(p^{\pm}_{\log},n)}$. \item[Bound for $J_3$:] Recall that $\hat{r}_i = \hat{c} r_i$ where $\hat{c}$ is from \descref{W4}{W4}. In this case, we make use of \eqref{bound+6.31} and \eqref{bound_time_vlh_two} to get \begin{equation} \label{6.75} \begin{array}{rcl} \left| \partial_t \lbr \psi_i \lsbt{v}{\lambda}\rbr\right| & \apprle & \frac{1}{\scalet{\lambda}{z_i} r_i^2} \lbr \frac{\scalex{\lambda}{z_i}r_i \lambda^{\frac{1}{p(z_i)}}}{\delta} + \frac{\delta}{\scalex{\lambda}{z_i}r_i \lambda^{\frac{1}{p(z_i)}}}\Yint\tiltlongdash_{\hat{Q}_i} |v|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz \rbr \\ & & \quad + \frac{\scalex{\lambda}{z_i} r_i}{\scalet{\lambda}{z_i} r_i^2} \lambda^{\frac{1}{p(z_i)}}.\\ \end{array} \end{equation} Now making use of Lemma \ref{improved_est}, we see that \begin{equation} \label{6.76} \begin{array}{rcl} \iint_{K_{4\rho}^{\alpha}(\mathfrak{z})} |v-v^i| \lsb{\chi}{2Q_i\cap K_{4\rho}^{\alpha}(\mathfrak{z})} \ dz & \apprle & |Q_i| \Yint\tiltlongdash_{\hat{Q}_i} \abs{v\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} - \avgs{v\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}}{\hat{Q}_i}} \ dz\\ & \apprle & |Q_i| \lbr \scalex{\lambda}{z_i} r_i \rbr \lambda^{\frac{1}{p(z_i)}}.
\end{array} \end{equation} Combining \eqref{6.75} and \eqref{6.76}, we get \begin{equation*} \begin{array}{rcl} J_3 &\apprle & \frac{\lambda}{\delta} |Q_i| + \frac{\delta |Q_i|}{\scalet{\lambda}{z_i}r_i^2} \Yint\tiltlongdash_{\hat{Q}_i} |v|^2 \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz \\ & \apprle & \frac{\lambda}{\delta} |Q_i| + \delta |\hat{B}_i| \Yint\tiltlongdash_{\hat{Q}_i} |v|^2 \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz. \end{array} \end{equation*} \end{description} \end{description} This completes the proof of the lemma. \end{proof} \begin{lemma} \label{cruc_3} Let $\lambda \geq 1$; then for a.e. $t \in [\mathfrak{t}-s,\mathfrak{t}+s]$, there exists a constant $C = C_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}$ such that there holds \begin{equation} \label{cruc_est_3} \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})\setminus \lsbo{E}{\lambda}(t)} \lbr |v|^2 - |v- \lsbt{v}{\lambda}|^2 \rbr \ dx \geq -C \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{equation} \end{lemma} \begin{proof} Let us fix any $t\in [\mathfrak{t}-s,\mathfrak{t}+s]$ and any point $x \in \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \setminus \lsbo{E}{\lambda}(t)$. Now define \begin{equation*} {\Upsilon} := \left\{ i \in \Theta: \spt(\psi_i) \cap \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times \{t\} \neq \emptyset, \ |v| + |\lsbt{v}{\lambda}| \neq 0 \ \text{on}\ \spt(\psi_i)\cap (\Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times \{t\}) \right\}. \end{equation*} If $i \notin {\Upsilon}$, then $v= \lsbt{v}{\lambda} = 0$ on $\spt(\psi_i) \cap \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times \{t\}$, which implies \[ \int_{\spt(\psi_i) \cap {\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \times\{t\}} |v|^2 - |v - \lsbt{v}{\lambda}|^2 \ dx = 0. \] Hence we only need to consider $i \in {\Upsilon}$.
Noting that $\sum_{i \in {\Upsilon}} \psi_i(\cdot,t) \equiv 1$ on $\RR^{n} \setminus \lsbo{E}{\lambda}(t)$, we can rewrite the left-hand side of \eqref{cruc_est_3} as \begin{equation} \label{3324} \begin{array}{rcl} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \setminus \lsbo{E}{\lambda}(t)} (|v|^2 - |v- \lsbt{v}{\lambda}|^2)(x,t) \ dx &=& \sum_{i \in {\Upsilon}} \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}\psi_i \lbr |v|^2 - |v- \lsbt{v}{\lambda}|^2 \rbr \ dx = J_1 - J_2,\\ \end{array} \end{equation} where we have set \begin{gather*} J_1:= \sum_{i \in {\Upsilon}} \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}\psi_i \lbr |v^i|^2 + 2 \lsbt{v}{\lambda} (v- v^i) \rbr \ dx, \qquad J_2:= \sum_{i \in {\Upsilon}} \int_{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}\psi_i |\lsbt{v}{\lambda} - v^i|^2 \ dx. \end{gather*} We shall now estimate each of the terms as follows: \begin{description}[leftmargin=*] \item[Estimate of $J_1$:] Using \eqref{cruc_est_1}, we get \begin{equation} \label{I_1_1} J_1 \apprge \sum_{i \in {\Upsilon}} \int_{{\Omega_{4\rho}^{\alpha}(\mathfrak{x})}} \omega_i(z) |v^i|^2 \ dz - \delta \sum_{i \in {\Upsilon}} |\hat{B}_i| |v^i|^2 - \sum_{i \in {\Upsilon}} \frac{\lambda}{\delta} |\hat{Q}_i|. \end{equation} From \eqref{def_tuh}, we have $v^i = 0$ whenever $\spt(\psi_i) \nsubseteq \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times [-s,\infty)$. Hence we only have to sum over all those $i \in {\Upsilon}$ for which $\spt(\psi_i) \subset \Omega_{4\rho}^{\alpha}(\mathfrak{x}) \times [-s,\infty)$. In this case, we make use of a suitable choice for $\delta \in (0,1]$, and use \descref{W4}{W4} to estimate \eqref{I_1_1} from below. We get \begin{equation} \label{bound_I1} J_1 \apprge -\lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|.
\end{equation} \item[Estimate of $J_2$:] For any $x \in K_{4\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}(t)$, we have from Lemma \ref{partition_unity} that $\sum_{j} \psi_j(x,t) = 1$, which gives \begin{equation} \label{I_2_1} \begin{array}{ll} \psi_i(z) |\lsbt{v}{\lambda}(z) - v^i|^2 & \apprle \sum_{j \in I_i} |\psi_j(z)|^2 \lbr v^j - v^i\rbr^2 \apprle \min\{ \rho, \scalex{\lambda}{z_i} r_i\}^2 \lambda^{\frac{2}{p(z_i)}}. \end{array} \end{equation} To obtain the last inequality above, we made use of Corollary \ref{corollary3.7} along with \descref{W3}{W3}. Substituting \eqref{I_2_1} into the expression for $J_2$, we get \begin{equation} \label{bound_I_2} \begin{array}{rcl} J_2 &\apprle& \sum_{i \in {\Upsilon}} |{\Omega_{4\rho}^{\alpha}(\mathfrak{x})} \cap 2B_i| {\lbr \scalex{\lambda}{z_i} r_i\rbr^2} \lambda^{\frac{2}{p(z_i)}} \\ & \apprle& \sum_{i \in {\Upsilon}} \frac{|Q_i|}{\scalet{\lambda}{z_i} r_i^2} {\lbr \scalex{\lambda}{z_i} r_i\rbr^2} \lambda^{\frac{2}{p(z_i)}} \\ &\apprle & \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation} \end{description} Substituting \eqref{bound_I1} and \eqref{bound_I_2} into \eqref{3324}, the proof of the lemma follows. \end{proof} \section{The method of Lipschitz truncation - second difference estimate} \label{lipschitz_truncation_B} In Appendix \ref{lipschitz_truncation}, we constructed a suitable test function which was used to obtain a difference estimate between the weak solutions of \eqref{basic_pde} and \eqref{wapprox_int}. In this appendix, we will obtain an analogous Lipschitz truncation that will be used as a test function to obtain a difference estimate between the weak solutions of \eqref{wapprox_int} and \eqref{vapprox_bnd}. Most of the estimates follow exactly as in Appendix \ref{lipschitz_truncation} and hence we will only highlight the modifications needed.
Let us first note that the Lipschitz truncation is now constructed over the constant exponent $p(\mathfrak{z})$, which simplifies many of the estimates from Appendix \ref{lipschitz_truncation}. Let us denote \begin{equation*} \label{def_s_B} s:= \scalet{\alpha}{\mathfrak{z}} (3\rho)^2. \end{equation*} First, let us state the modified version of Lemma \ref{lemma_crucial_2}: \begin{lemma} \label{lemma_crucial_2_B} For any $h \in (0,2s)$, let $\phi(x) \in C_c^{\infty}({\Omega_{3\rho}^{\alpha}(\mathfrak{x})})$ and $\varphi(t) \in C^{\infty}(\mathfrak{t}-s,\infty)$ with $\varphi(\mathfrak{t}-s) = 0$ be non-negative functions, and let $[w]_h,[v]_h$ be the Steklov averages as defined in \eqref{stek1}. Then the following estimate holds for any time interval $(t_1,t_2) \subset [\mathfrak{t}-s,\mathfrak{t}+s]$: \begin{equation} \label{lemma_crucial_2_est_B} \begin{array}{rcl} |\avgs{{[w-v]_h \varphi}}{\phi} (t_2) - \avgs{{[w-v]_h\varphi}}{\phi}(t_1)| & \leq & \|\nabla \phi\|_{L^{\infty}{({\Omega_{3\rho}^{\alpha}(\mathfrak{x})})}} \|\varphi\|_{L^{\infty}(t_1,t_2)} \iint_{{\Omega_{3\rho}^{\alpha}(\mathfrak{x})} \times (t_1,t_2)} \abs{\overline{\mathcal{B}}(t,\nabla v) - \mathcal{A}(z,\nabla w)} \ dz \\ & &\qquad + \|\phi\|_{L^{\infty}{({\Omega_{3\rho}^{\alpha}(\mathfrak{x})})}} \|\varphi'\|_{L^{\infty}(t_1,t_2)} \iint_{{\Omega_{3\rho}^{\alpha}(\mathfrak{x})} \times (t_1,t_2)} |[w-v]_h| \ dz. \end{array} \end{equation} \end{lemma} \subsection{Construction of test function} With a slight abuse of notation, let us now redefine \begin{gather*} \label{def_vh_B} v(z) := w(z) - v(z) \txt{and} v_h(z) := [w-v]_h(z), \end{gather*} where $[w-v]_h(z)$ denotes the usual Steklov average. It is easy to see that $v_h \xrightarrow{h \searrow 0} v$. We also note that $v(z) = 0$ for $z \in \partial_p K_{3\rho}^{\alpha}(\mathfrak{z})$.
For some fixed $\mathfrak{q}$ such that $1 <\mathfrak{q}< \frac{p^-}{p^+-1}$, with $\mathcal{M}$ as defined in \eqref{par_max}, let us now define \begin{equation} \label{def_g_B} \begin{array}{ll} g(z) & := \mathcal{M}\lbr \lbr[[]\frac{|v|}{\scalex{\alpha}{\mathfrak{z}}\rho} + |\nabla w| + |\nabla v| + 1\rbr[]]^{\frac{p(\mathfrak{z})}{\mathfrak{q}}} \lsb{\chi}{K_{3\rho}^{\alpha}(\mathfrak{z})}\rbr^{\mathfrak{q}(1-\beta)}. \end{array} \end{equation} For a fixed $\lambda \geq 1$, let us define the \emph{good set} by \begin{equation} \label{elambda_B} \lsbo{E}{\lambda} := \{ z \in \RR^{n+1} : g(z) \leq \lambda^{1-\beta}\}. \end{equation} Since we are dealing with the constant exponent $p(\mathfrak{z})$, we have the following Whitney-type covering lemma (see \cite[Chapter 3]{bogelein2013regularity} or \cite[Lemma 3.1]{diening2010existence} for the proof): \begin{lemma} \label{whitney_covering_B} There exists a Whitney covering $\{Q_j(z_j)\}$ of $\lsbo{E}{\lambda}^c$ in the following sense: \begin{description} \descitem{(W6)}{W6} $Q_j(z_j) = B_j(x_j) \times I_j(t_j)$ where $B_j(x_j) = B_{\scalex{\lambda}{\mathfrak{z}}r_j}(x_j)$ and $I_j(t_j) = (t_j - \scalet{\lambda}{\mathfrak{z}} r_j^2, t_j + \scalet{\lambda}{\mathfrak{z}} r_j^2)$. \descitem{(W7)}{W7} $\bigcup_j Q_j(z_j) = \lsbo{E}{\lambda}^c$. \descitem{(W8)}{W8} for all $j \in \NN$, we have $8Q_j \subset \lsbo{E}{\lambda}^c$ and $16Q_j \cap \lsbo{E}{\lambda} \neq \emptyset$. \descitem{(W9)}{W9} if $Q_j \cap Q_k \neq \emptyset$, then $\frac1{c} r_k \leq r_j \leq c r_k$. \descitem{(W10)}{W10} $\sum_j \lsb{\chi}{8Q_j}(z) \leq c(n)$ for all $z \in \lsbo{E}{\lambda}^c$. \end{description} Subordinate to this Whitney covering, we have an associated partition of unity denoted by $\{ \psi_j\} \in C_c^{\infty}(\RR^{n+1})$ such that the following holds: \begin{description} \descitem{(W11)}{W11} $\lsb{\chi}{Q_j} \leq \psi_j \leq \lsb{\chi}{2Q_j}$.
\descitem{(W12)}{W12} $\|\psi_j\|_{\infty} + \scalex{\lambda}{\mathfrak{z}}r_j \| \nabla \psi_j\|_{\infty} + \scalet{\lambda}{\mathfrak{z}}r_j^2 \| \partial_t \psi_j\|_{\infty} \leq C$. \end{description} For a fixed $k \in \NN$, let us define \begin{equation*}\label{Ak}A_k := \left\{ j \in \NN: \frac34Q_k \cap \frac34Q_j \neq \emptyset\right\},\end{equation*} then we have \begin{description} \descitem{(W13)}{W13} Let $i \in \NN$ be given, then $\sum_{j \in A_i} \psi_j(z) = 1$ for all $z \in 2Q_i$. \descitem{(W14)}{W14} Let $i \in \NN$ be given and let $j \in A_i$, then $\max \{ |Q_j|, |Q_i|\} \leq C_{(n)} |Q_j \cap Q_i|.$ \descitem{(W15)}{W15} Let $i \in \NN$ be given and let $j \in A_i$, then $ \max \{ |Q_j|, |Q_i|\} \leq \left|2Q_j \cap 2Q_i\right|.$ \descitem{(W16)}{W16} For any $i \in \NN$, we have $\# A_i \leq c(n)$. \descitem{(W17)}{W17} Let $i \in \NN$ be given, then for any $j \in A_i$, we have $2Q_j \subset 8Q_i$. \end{description} \end{lemma} \subsection{Construction of Lipschitz truncation function} We shall also use the notation \begin{equation*} \label{mcii_B} \mathcal{I}(i) := \{ j \in \NN : \spt(\psi_i) \cap \spt(\psi_j) \neq \emptyset\} \txt{and} \mathcal{I}_z := \{ j \in \NN: z \in \spt(\psi_j) \}. \end{equation*} We are now ready to construct the Lipschitz truncation function: \begin{equation} \label{lipschitz_function_B} \lsbt{v}{\lambda,h}(z) := v_h(z) - \sum_{i} \psi_i(z) \lbr v_h(z) - v_h^i\rbr, \end{equation} where we have defined \begin{equation*} \label{def_tuh_B} v_h^i := \left\{ \begin{array}{ll} \Yint\tiltlongdash_{2Q_i} v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz & \text{if} \ \ 2Q_i \subset \Omega_{3\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\infty), \\ 0 & \text{else}. \end{array}\right. \end{equation*} From the construction in \eqref{lipschitz_function_B} and \eqref{def_tuh_B}, we see that \begin{equation*} \spt(\lsbt{v}{\lambda,h}) \subset \Omega_{3\rho}^{\alpha}(\mathfrak{x}) \times (\mathfrak{t}-s,\infty).
\end{equation*} We see that $\lsbt{v}{\lambda,h}$ has the right support for the test function and hence the rest of this section will be devoted to proving the Lipschitz regularity of $\lsbt{v}{\lambda,h}$ on $K_{3\rho}^{\alpha}(\mathfrak{x})$ as well as some useful estimates. \subsection{Some estimates on the test function} In this subsection, we will collect some useful estimates on the test function. The proofs of these estimates are very similar to the corresponding ones from Appendix \ref{lipschitz_truncation} (in fact simpler because we are dealing with the constant exponent $p(\mathfrak{z})$) and will be omitted. Let us first derive a useful estimate: \begin{equation} \label{aa_bb} \begin{array}{rcl} |\overline{\mathcal{B}}(t,\nabla v) - \mathcal{A}(z,\nabla w)| & \leq & |\overline{\mathcal{B}}(t,\nabla v) - \mathcal{B}(z,\nabla w)| + |\mathcal{B}(z,\nabla w) - \mathcal{A}(z,\nabla w)|\\ & \overset{\eqref{bbounded}}{\apprle} & \lbr \mu^2 + |\nabla v|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + \lbr \mu^2 + |\nabla w|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + |\mathcal{B}(z,\nabla w) - \mathcal{A}(z,\nabla w)|\\ & \overset{\eqref{def_bb}}{\apprle} & \lbr \mu^2 + |\nabla v|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + \lbr \mu^2 + |\nabla w|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + |\mathcal{A}(z,\nabla w)| \lbr \mu^2 + |\nabla w|^2 \rbr^{\frac{p(\mathfrak{z}) -p(z)}{2}}\\ & \overset{\eqref{abounded}}{\apprle} & \lbr \mu^2 + |\nabla v|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + \lbr \mu^2 + |\nabla w|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + \lbr \mu^2 + |\nabla w|^2 \rbr^{\frac{p(\mathfrak{z}) -p(z)}{2}+ \frac{p(z) -1}{2}}\\ & \apprle & \lbr \mu^2 + |\nabla v|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}} + \lbr \mu^2 + |\nabla w|^2 \rbr^{\frac{p(\mathfrak{z}) -1}{2}}. \end{array} \end{equation} The primary use of \eqref{aa_bb} will be to estimate the first term on the right hand side of \eqref{lemma_crucial_2_est_B}.
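We shall also record the following standard measure bound for the complement of the good set; the computation below is only a sketch, assuming that the maximal operator $\mathcal{M}$ defined in \eqref{par_max} satisfies the usual parabolic weak $(1,1)$ estimate. From \eqref{def_g_B} and \eqref{elambda_B}, we see that $z \in \RR^{n+1} \setminus \lsbo{E}{\lambda}$ if and only if $\mathcal{M}\lbr \lbr[[]\frac{|v|}{\scalex{\alpha}{\mathfrak{z}}\rho} + |\nabla w| + |\nabla v| + 1\rbr[]]^{\frac{p(\mathfrak{z})}{\mathfrak{q}}} \lsb{\chi}{K_{3\rho}^{\alpha}(\mathfrak{z})}\rbr(z) > \lambda^{\frac{1}{\mathfrak{q}}}$, so that
\begin{equation*}
|\RR^{n+1} \setminus \lsbo{E}{\lambda}| \apprle \frac{1}{\lambda^{\frac{1}{\mathfrak{q}}}} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z})} \lbr \frac{|v|}{\scalex{\alpha}{\mathfrak{z}}\rho} + |\nabla w| + |\nabla v| + 1\rbr^{\frac{p(\mathfrak{z})}{\mathfrak{q}}} \ dz.
\end{equation*}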
\begin{lemma} \label{lemma3.6_pre_B} Let $z \in K_{3\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$; then from \descref{W7}{W7}, we have that $z \in 2Q_i$ for some $i \in \mathcal{I}_{z}$. For any $1 \leq \theta \leq \frac{p^-}{\mathfrak{q}}$, there holds \begin{equation} \label{lemma3.6_pre_two_B}\begin{array}{l} |v_h^i|^{\theta} \leq \Yint\tiltlongdash_{2Q_i} |v_h(z)|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz \apprle_{(p^{\pm}_{\log},n)} (\scalex{\alpha}{\mathfrak{z}}\rho)^{\theta} \lambda^{\frac{\theta}{p(\mathfrak{z})}}, \\ \Yint\tiltlongdash_{2Q_i}|\nabla v_h(z)|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz \apprle_{(p^{\pm}_{\log},n)} \lambda^{\frac{\theta}{p(\mathfrak{z})}}. \end{array}\end{equation} \end{lemma} \begin{corollary} \label{lemma3.6_B} For any $z \in K_{3\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$, we have $z \in 2Q_i$ for some $i \in \mathcal{I}_{z}$, then there holds \begin{equation*} |v_h(z)| \apprle_{(n,p^{\pm}_{\log},\Lambda_0,\Lambda_1)}(\scalex{\alpha}{\mathfrak{z}}\rho) \lambda^{\frac{1}{p(\mathfrak{z})}}. \end{equation*} \end{corollary} \begin{lemma} \label{improved_est_B} Let $2Q_i$ be a parabolic Whitney type cylinder, then for any $1 \leq \theta \leq \frac{p^-}{\mathfrak{q}}$, there holds \begin{equation*} \Yint\tiltlongdash_{2Q_i} |v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ dz \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \min\left\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{\mathfrak{z}}r_i\right\}^{\theta} \lambda^{\frac{\theta}{p(\mathfrak{z})}}. \end{equation*} \end{lemma} \begin{proof} Let us consider the following two cases: \begin{description}[leftmargin=*] \item[Case $\scalex{\alpha}{\mathfrak{z}}\rho \leq \scalex{\lambda}{\mathfrak{z}}r_i$:] This is very similar to \eqref{6.18}.
\item[Case $\scalex{\alpha}{\mathfrak{z}}\rho \geq \scalex{\lambda}{\mathfrak{z}}r_i$:] Applying Lemma \ref{lemma_crucial_2} with $\mu \in C_c^{\infty}(2B_i)$ such that $|\mu(x)| \apprle \frac{1}{\lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^n}$ and $|\nabla \mu(x)| \apprle \frac{1}{\lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^{n+1}}$, we get \begin{equation} \label{6.19_B} \begin{aligned} \Yint\tiltlongdash_{2Q_i} |v_h(z)\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ dz &\leq \lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^{\theta} \Yint\tiltlongdash_{2Q_i} |\nabla v_h|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \\ &\qquad + \sup_{t_1,t_2 \in 2I_i\cap [\mathfrak{t}-s,\mathfrak{t}+s]} |\avgs{v_h}{\mu}(t_2) - \avgs{v_h}{\mu}(t_1)|^{\theta}.\\ \end{aligned} \end{equation} The first term on the right of \eqref{6.19_B} can be estimated using \eqref{lemma3.6_pre_two_B} to get \begin{equation*} \label{est_J_1_B} \lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^{\theta} \Yint\tiltlongdash_{2Q_i} |\nabla v_h|^{\theta}\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ d\tilde{z} \apprle \lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^{\theta} \lambda^{\frac{\theta}{p(\mathfrak{z})}}. 
\end{equation*} To estimate the second term on the right of \eqref{6.19_B}, we make use of Lemma \ref{lemma_crucial_2_B} with $\phi(x) = \mu(x)$ and $\varphi(t) \equiv 1$ to get \begin{equation*} \label{est_J_2_one_B} \begin{array}{rcl} |\avgs{v_h}{\mu}(t_2) - \avgs{v_h}{\mu}(t_1)| & \apprle & \frac{|2Q_i|}{\lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^{n+1}} \Yint\tiltlongdash_{2Q_i} |\overline{\mathcal{B}}(t,\nabla v) - \mathcal{A}(z,\nabla w)| \ dz \\ & \overset{\eqref{aa_bb}}{\apprle} & \frac{\scalet{\lambda}{\mathfrak{z}}r_i^2}{\scalex{\lambda}{\mathfrak{z}} r_i} \lbr \Yint\tiltlongdash_{16Q_i} (1 + |\nabla w|+|\nabla v|)^{\frac{p(\mathfrak{z})}{\mathfrak{q}}} \ d\tilde{z} \rbr^{\frac{\mathfrak{q}(p(\mathfrak{z}) -1)}{p(\mathfrak{z})}}\\ & \overset{\eqref{elambda_B}}{\apprle} & \lambda^{-1 + \frac{1}{p(\mathfrak{z})} + \frac{d}{2}} r_i \lambda^{\frac{p(\mathfrak{z}) -1}{p(\mathfrak{z})}}= \lbr \scalex{\lambda}{\mathfrak{z}} r_i \rbr \lambda^{\frac{1}{p(\mathfrak{z})}}. \end{array} \end{equation*} Thus combining \eqref{est_J_1_B} and \eqref{est_J_2_one_B} into \eqref{6.19_B}, we get \[ \Yint\tiltlongdash_{2Q_i} |v_h(\tilde{z})\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}- v_h^i|^{\theta} \ d\tilde{z} \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^{\theta} \lambda^{\frac{\theta}{p(\mathfrak{z})}}. \] \end{description} This proves the lemma. \end{proof} \begin{corollary} \label{corollary3.7_B} For any $i \in \NN$ and any $j \in \mathcal{I}_i$, there holds \[ |v_h^i - v_h^j| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \min\left\{ \scalex{\alpha}{\mathfrak{z}}\rho, \scalex{\lambda}{\mathfrak{z}}r_i\right\} \lambda^{\frac{1}{p(\mathfrak{z})}}. \] \end{corollary} \subsection{Bounds on \texorpdfstring{$\lsbt{v}{\lambda,h}$ and $\nabla \lsbt{v}{\lambda,h}$}{the truncation and its gradient}} \begin{lemma} \label{lemma6.7-1_B} Let $Q_i$ be a parabolic Whitney type cylinder.
Then for any $z \in 2Q_i$, we have the following bound: \begin{equation*} \label{lemma6.7-1_est_B} \lbr\frac{1}{\scalex{\alpha}{\mathfrak{z}}\rho} |\lsbt{v}{\lambda,h}(z)| + |\nabla \lsbt{v}{\lambda,h}(z)|\rbr \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lambda^{\frac{1}{p(\mathfrak{z})}}. \end{equation*} \end{lemma} \begin{corollary} \label{corollary6.7-2_B} Let $z \in K_{3\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$; then $z \in 2Q_i$ for some $i \in \NN$, and for any $\delta \in (0,1]$, there hold the estimates \begin{gather*} \frac{1}{\scalex{\lambda}{\mathfrak{z}} r_i } |\lsbt{v}{\lambda,h}(z)| \apprle_{{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}}\frac{\lambda^{\frac{1}{p(\mathfrak{z})}}}{\delta} + \frac{\delta}{\lbr \scalex{\lambda}{\mathfrak{z}}r_i\rbr^2 \lambda^{\frac{1}{p(\mathfrak{z})}}} |v_h^i|^2, \label{bound+6.31_B}\\ |\nabla \lsbt{v}{\lambda,h}(z)| \apprle_{{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}}\frac{\lambda^{\frac{1}{p(\mathfrak{z})}}}{\delta}. \label{bound+6.31_two_B} \end{gather*} \end{corollary} \begin{lemma} \label{lemma6.7-3_B} Let $z \in K_{3\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$; then $z \in 2Q_i$ for some $i \in \NN$, and for any $\delta \in (0,1]$, there hold the estimates \begin{gather*} |\lsbt{v}{\lambda,h}(z)| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \frac{\scalex{\lambda}{\mathfrak{z}} r_i\lambda^{\frac{1}{p(\mathfrak{z})}}}{\delta} + \frac{\delta}{\scalex{\lambda}{\mathfrak{z}} r_i\lambda^{\frac{1}{p(\mathfrak{z})}}} \Yint\tiltlongdash_{\hat{Q}_i} |v_h(\tilde{z})|^2 \ d\tilde{z}, \label{bound_6.7-3-1_B} \\ |\nabla \lsbt{v}{\lambda,h}(z)| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lambda^{\frac{1}{p(\mathfrak{z})}} + \frac{\delta}{\lbr \scalex{\lambda}{\mathfrak{z}} r_i\rbr^2\lambda^{\frac{1}{p(\mathfrak{z})}}} \Yint\tiltlongdash_{\hat{Q}_i} |v_h(\tilde{z})|^2 \ d\tilde{z}.
\label{bound_6.7-3-2_B} \end{gather*} \end{lemma} \subsection{Estimates on the time derivative of \texorpdfstring{$\lsbt{v}{\lambda,h}$}{the truncation}} \begin{lemma} \label{time_vlh_B} Let $z \in K_{3\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda}$; then $z \in 2Q_i$ for some $i \in \NN$. We then have the following estimates for the time derivative of $\lsbt{v}{\lambda,h}$: \begin{equation*} \label{bound_time_vlh_one_B} |\partial_t \lsbt{v}{\lambda,h}(\tilde{z})| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \frac{1}{ \scalet{\lambda}{\mathfrak{z}} r_i^2} \Yint\tiltlongdash_{\tilde{Q}_i} |v_h(z)| \lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]} \ dz. \end{equation*} We also have the improved estimate \begin{equation*} \label{bound_time_vlh_two_B} |\partial_t \lsbt{v}{\lambda,h}(\tilde{z})| \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \frac{1}{\scalet{\lambda}{\mathfrak{z}} r_i^2} \lambda^{\frac{1}{p(\mathfrak{z})}} \min \left\{\scalex{\lambda}{\mathfrak{z}}r_i, \scalex{\alpha}{\mathfrak{z}}\rho \right\}. \end{equation*} \end{lemma} \subsection{Some important estimates for the test function} \begin{lemma} \label{lemma6.8_B} Let $Q_i$ be a Whitney-type parabolic cylinder for some $ i \in \NN$. Then for any $\vartheta \in [1,2]$, there holds \begin{equation*} \label{lemma6.8-one_B} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})} \setminus \lsbo{E}{\lambda}} |\lsbt{v}{\lambda,h}(z)|^{\vartheta} \ dz \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \iint_{{K_{3\rho}^{\alpha}(\mathfrak{z})} \setminus \lsbo{E}{\lambda}} |v_h(z)|^{\vartheta} \ dz. \end{equation*} \end{lemma} \begin{lemma} \label{lemma6.8-1_B} Let $Q_i$ be a Whitney-type parabolic cylinder for some $i \in \NN$, then there holds \begin{equation*} \label{lemma6.8-two_B} \Yint\tiltlongdash_{2Q_i} |\lsbt{v}{\lambda,h}(z) - v_h(z)| \ dz \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \min\left\{ \scalex{\lambda}{\mathfrak{z}} r_i, \scalex{\alpha}{\mathfrak{z}}\rho \right\} \lambda^{\frac{1}{p(\mathfrak{z})}}.
\end{equation*} \end{lemma} \begin{lemma} \label{lemma6.8-2_B} Let $Q_i$ be a Whitney-type parabolic cylinder for some $i \in \NN$, then there holds \begin{equation*} \label{lemma6.8-three_B} \begin{array}{rl} \iint_{K_{3\rho}^{\alpha}(\mathfrak{z}) \setminus \lsbo{E}{\lambda} } |\partial_t \lsbt{v}{\lambda,h}(z) \lbr \lsbt{v}{\lambda,h}(z) - v_h(z)\rbr|^{\vartheta} \ dz & \apprle_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)} \lambda^{\vartheta} |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{array} \end{equation*} \end{lemma} \subsection{Lipschitz continuity} \begin{lemma} \label{lipschitz_continuity_B} Let $\lambda \geq 1$; then for any $\tilde{z} \in \Omega_{3\rho}^{\alpha}(\mathfrak{x}) \times [\mathfrak{t} -s,\mathfrak{t}+s]$ and $r>0$, there exists a constant $C>0$ independent of $\tilde{z}$ and $r$ such that \begin{equation*}\label{da_pr_B} I_r(\tilde{z}) := \Yint\tiltlongdash_{Q_{r}(\tilde{z})} \abs{\frac{\lsbt{v}{\lambda}(z) - \avgs{\lsbt{v}{\lambda}}{Q_{r}(\tilde{z})}}{r}} \ dz \leq C < \infty. \end{equation*} In particular, this implies that, for any $z_1, z_2 \in \Omega_{3\rho}^{\alpha}(\mathfrak{x}) \times [\mathfrak{t} -s,\mathfrak{t}+s]$, there exists a constant $K>0$ such that \[ |\lsbt{v}{\lambda}(z_1) - \lsbt{v}{\lambda}(z_2)| \leq K d_p(z_1,z_2). \] \end{lemma} \subsection{Crucial estimates for the test function} In this subsection, we shall prove two crucial estimates that will be needed. Note that by the time these estimates are applied, we would have taken $h \searrow 0$ in the Steklov average. \begin{lemma} \label{cruc_1_B} Let $\lambda \geq 1$, then for any $i \in \NN$, $\delta \in (0,1]$ and a.e.
$t \in (\mathfrak{t}-s,\mathfrak{t}+s)$, there exists a constant $C = C_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}$ such that there holds \begin{equation*} \label{cruc_est_1_B} \abs{\int_{\Omega_{3\rho}^{\alpha}(\mathfrak{x})} \lbr v(x,t) - v^i \rbr \lsbt{v}{\lambda}(x,t) \psi_i(x,t)\ dx} \leq C \lbr \frac{\lambda}{\delta} |Q_i| + \delta |B_i|\Yint\tiltlongdash_{2Q_i} |v(z)|^2\lsb{\chi}{[\mathfrak{t}-s,\mathfrak{t}+s]}\ dz\rbr. \end{equation*} \end{lemma} \begin{lemma} \label{cruc_3_B} Let $\lambda \geq 1$; then for a.e. $t \in [\mathfrak{t}-s,\mathfrak{t}+s]$, there exists a constant $C = C_{(p^{\pm}_{\log},\Lambda_0,\Lambda_1,n)}$ such that there holds \begin{equation*} \label{cruc_est_3_B} \int_{\Omega_{3\rho}^{\alpha}(\mathfrak{x})\setminus \lsbo{E}{\lambda}(t)} \lbr |v|^2 - |v- \lsbt{v}{\lambda}|^2 \rbr \ dx \geq -C \lambda |\RR^{n+1} \setminus \lsbo{E}{\lambda}|. \end{equation*} \end{lemma} \end{appendices} \section*{References}
\section{Introduction} \label{sec:Introduction} \IEEEPARstart{C}{ooperative} spectrum sensing is a key component of cognitive radio networks ($\ensuremath {\mathit{CRN}}{\xspace}$s) essential for enabling dynamic and opportunistic spectrum access~\cite{akyildiz2011,guizani2015large,khalfi2015distributed}. It consists of having secondary users (\ensuremath {\mathit{SU}}{\xspace} s) sense the licensed channels on a regular basis and collaboratively decide whether a channel is available prior to using it so as to avoid harming primary users (\ensuremath {\mathit{PU}}{\xspace} s). One of the most popular spectrum sensing techniques is energy detection, thanks to its simplicity and ease of implementation; it detects the presence of a \ensuremath {\mathit{PU}}{\xspace}'s signal by measuring the energy strength of the sensed signal, commonly known as the received signal strength (\ensuremath {\mathit{RSS}}{\xspace})~\cite{fatemieh2011using}. {\let\thefootnote\relax\footnote{{\\Digital Object Identifier 10.1109/TIFS.2016.2622000}}} Broadly speaking, cooperative spectrum sensing techniques can be classified into two categories: centralized and distributed~\cite{akyildiz2011}. In centralized techniques, a central entity called fusion center (\ensuremath {\mathit{FC}}{\xspace}) orchestrates the sensing operations as follows. It selects one channel for sensing and, through a control channel, requests that each \ensuremath {\mathit{SU}}{\xspace}~perform local sensing on that channel and send its sensing report (e.g., the observed \ensuremath {\mathit{RSS}}{\xspace}~value) back to it. It then combines the received sensing reports, makes a decision about the channel availability, and diffuses the decision back to the \ensuremath {\mathit{SU}}{\xspace} s. In distributed sensing techniques, \ensuremath {\mathit{SU}}{\xspace} s do not rely on an \ensuremath {\mathit{FC}}{\xspace}~for making channel availability decisions.
They instead exchange sensing information among one another to come to a unified decision~\cite{akyildiz2011}. Despite its usefulness and effectiveness in promoting dynamic spectrum access, cooperative spectrum sensing suffers from serious security and privacy threats. One major threat to \ensuremath {\mathit{SU}}{\xspace} s, which we tackle in this work, is the loss of location privacy: it can easily be compromised due to the wireless nature of the signals communicated by \ensuremath {\mathit{SU}}{\xspace} s during the cooperative sensing process. In fact, it has been shown that \ensuremath {\mathit{RSS}}{\xspace}~values of $\ensuremath {\mathit{SU}}{\xspace}$s are highly correlated with their physical locations~\cite{li2012location}, thus making it easy to compromise the location privacy of \ensuremath {\mathit{SU}}{\xspace} s when they send out their sensing reports. The fine-grained location, when combined with other publicly available information, could easily be exploited to infer private information about users~\cite{wicker2012loss}. Examples of such private information are shopping patterns, user preferences, and user beliefs, just to name a few~\cite{wicker2012loss}. With such privacy threats and concerns, $\ensuremath {\mathit{SU}}{\xspace}$s may refuse to participate in the cooperative sensing tasks. It is therefore imperative that cooperative sensing schemes be enabled with privacy-preserving capabilities that protect the location privacy of \ensuremath {\mathit{SU}}{\xspace} s, thereby encouraging them to participate in such a key $\ensuremath {\mathit{CRN}}{\xspace}$ function, namely spectrum sensing. In this paper, we propose two efficient privacy-preserving schemes with several variants for cooperative spectrum sensing. These schemes exploit various cryptographic mechanisms to preserve the location privacy of \ensuremath {\mathit{SU}}{\xspace} s while performing the cooperative sensing task reliably and efficiently.
In addition, we study the cost-performance tradeoffs of the proposed schemes, and show that higher privacy and better performance can be achieved, but at the cost of deploying an additional architectural entity in the system. We show that our proposed schemes are secure and more efficient than their existing counterparts, and are robust against sporadic topological changes and network dynamism. \subsection{Related Work} \label{sec:related} Security and privacy in $\ensuremath {\mathit{CRN}}{\xspace}$s have gained some attention recently. Adem et al.~\cite{adem2016mitigating} addressed jamming attacks in \ensuremath {\mathit{CRN}}{\xspace} s. Yan et al.~\cite{6195839} discussed security issues in fully distributed cooperative sensing. Qin et al.~\cite{qin2014preserving} proposed a privacy-preserving protocol for \ensuremath {\mathit{CRN}}{\xspace}~transactions using a commitment scheme and zero-knowledge proof. Wang et al.~\cite{wang2015privacy} proposed a privacy preserving framework for collaborative spectrum sensing in the context of multiple service providers. Location privacy, though well studied in the context of location-based services (LBS)~\cite{6567111,6567112}, has received little attention in the context of \ensuremath {\mathit{CRN}}{\xspace} s~\cite{gao2013location,liu2013location,li2012location}. Some works focused on location privacy but not in the context of cooperative spectrum sensing (e.g., database-driven spectrum sensing~\cite{grissa2015cuckoo,gao2013location} and dynamic spectrum auction~\cite{liu2013location}) and are not discussed further, as they fall outside this paper's scope. In the context of cooperative spectrum sensing, Li et al.~\cite{li2012location} showed that \ensuremath {\mathit{SU}}{\xspace} s' locations can easily be inferred from their \ensuremath {\mathit{RSS}}{\xspace}~reports, and called this the SRLP (single report location privacy) attack.
They also identified the DLP (differential location privacy) attack, where a malicious entity can estimate the \ensuremath {\mathit{RSS}}{\xspace}~(and hence the location) of a leaving/joining user from the variations in the final aggregated \ensuremath {\mathit{RSS}}{\xspace}~measurements before and after the user's joining/leaving of the network. They finally proposed \ensuremath {\mathit{PPSS}}{\xspace}~to address these two attacks. Despite its merits, \ensuremath {\mathit{PPSS}}{\xspace}~has several limitations: (i) It needs to collect all the sensing reports in order to decode the aggregated result. This is not fault tolerant, since some reports may be missing due, for example, to the unreliable nature of wireless channels. (ii) It cannot handle dynamism if multiple users join or leave the network simultaneously. (iii) The pairwise secret sharing requirement incurs extra communication overhead and delay. (iv) The underlying encryption scheme requires solving the {\em Discrete Logarithm Problem}, which is possible only for very small plaintext spaces and can be extremely costly. Chen et al.~\cite{chen2014pdaft} proposed {\em PDAFT}{\xspace}, a fault-tolerant and privacy-preserving data aggregation scheme for smart grid communications. {\em PDAFT}{\xspace}, though proposed in the context of smart grids, is suitable for cooperative sensing schemes. However, unlike \ensuremath {\mathit{PPSS}}{\xspace}, {\em PDAFT}{\xspace}~relies on an additional semi-trusted entity, called gateway, and like other aggregation-based methods, is prone to the DLP attack. In our previous work~\cite{grissa2015location}, we proposed an efficient scheme, called \ensuremath {\mathit{LPOS}}{\xspace}, to overcome the limitations of existing approaches. \ensuremath {\mathit{LPOS}}{\xspace}~combines order-preserving encryption and Yao's millionaires' protocol to provide strong location privacy to the users while enabling efficient sensing performance. 
\subsection{Our Contribution} \label{subsec:OurContribution} In this paper, we propose two location privacy-preserving schemes for cooperative spectrum sensing that achieve: \begin{itemize} \item Location privacy of secondary users while performing the cooperative spectrum sensing effectively and reliably. \item Fault tolerance and robustness against network dynamism (e.g., multiple \ensuremath {\mathit{SU}}{\xspace} s join/leave the network) and failures (e.g., missed sensing reports). \item Reliability and resiliency against malicious users via an efficient reputation mechanism. \item Accurate spectrum availability decisions via half-voting rules while incurring minimum communication and computation overhead. \end{itemize} Compared to our preliminary works~\cite{grissa2016an} and~\cite{grissa2015location}, this paper provides a more efficient version of \ensuremath {\mathit{LPOS}}{\xspace}~\cite{grissa2015location}, referred to here as \ROLPOS, that is also robust against malicious users and adapted to a stronger threat model for \ensuremath {\mathit{FC}}{\xspace}. In addition, this paper provides another variant of \LPGW~\cite{grissa2016an} that reduces the cryptographic end-to-end delay. Finally, this paper provides an improved security analysis and a more comprehensive performance analysis. We present two variants to give system designers more options to decide which topology and which approach best suit their specific requirements. There are tradeoffs between the two options. While \ROLPOS~provides location privacy guarantees without introducing an extra architectural entity, it incurs relatively high computational overhead due to its use of Yao's millionaires' protocol. 
On the other hand, \LPGW~provides stronger privacy guarantees (as the private inputs are shared among 3 non-colluding entities) and reduces the computational overhead substantially when compared to \ROLPOS, but at the cost of introducing an extra architectural entity. The remainder of this paper is organized as follows. Section~\ref{sec:system} provides our system and security threat models. Section~\ref{sec:Preliminaries} presents our preliminary concepts and definitions. Sections~\ref{sec:replpos} and~\ref{sec:lpgw} provide an extensive explanation of the proposed schemes. Section~\ref{sec:SecAnalysis} gives the security analysis of these schemes. Section~\ref{sec:PerformanceAnalysis} presents their performance analysis and a comparison with existing approaches. Finally, Section~\ref{sec:Conclusion} concludes this work. \section{System and Security Threat Models}\renewcommand{\figurename}{Fig.} \label{sec:system} \subsection{System Model} We consider a cooperative spectrum sensing architecture that consists of an \ensuremath {\mathit{FC}}{\xspace}~and a set of \ensuremath {\mathit{SU}}{\xspace} s. Each \ensuremath {\mathit{SU}}{\xspace}~is assumed to be capable of measuring \ensuremath {\mathit{RSS}}{\xspace}~on any channel by means of an energy detection method~\cite{fatemieh2011using}. In this cooperative sensing architecture, the \ensuremath {\mathit{FC}}{\xspace}~combines the sensing observations collected from the \ensuremath {\mathit{SU}}{\xspace} s, decides on the spectrum availability, and broadcasts the decision back to the \ensuremath {\mathit{SU}}{\xspace} s through a control channel. This could typically be done via either {\em hard} or {\em soft} decision rules. 
The most common soft decision rule is aggregation, where \ensuremath {\mathit{FC}}{\xspace}~collects the \ensuremath {\mathit{RSS}}{\xspace}~values from the \ensuremath {\mathit{SU}}{\xspace} s and compares their average to a predefined threshold, \ensuremath {\mathit{\tau}}{\xspace}, to decide on the channel availability. In hard decision rules, e.g., voting, \ensuremath {\mathit{FC}}{\xspace}~combines votes instead of \ensuremath {\mathit{RSS}}{\xspace}~values. Here, each \ensuremath {\mathit{SU}}{\xspace}~compares its \ensuremath {\mathit{RSS}}{\xspace}~value with \ensuremath {\mathit{\tau}}{\xspace}, makes a local decision (available or not), and then sends to the \ensuremath {\mathit{FC}}{\xspace}~its one-bit local decision/vote instead of sending its \ensuremath {\mathit{RSS}}{\xspace}~value. \ensuremath {\mathit{FC}}{\xspace}~then applies a voting rule on the collected votes to make a channel availability decision. However, for security reasons to be discussed shortly, it may not be desirable to share \ensuremath {\mathit{\tau}}{\xspace}~with \ensuremath {\mathit{SU}}{\xspace} s. In this case, \ensuremath {\mathit{FC}}{\xspace}~can instead collect the \ensuremath {\mathit{RSS}}{\xspace}~values from the \ensuremath {\mathit{SU}}{\xspace} s, make a vote for each \ensuremath {\mathit{SU}}{\xspace}~separately, and then combine all votes to decide about the availability of the channel. In this work, we opted for the voting-based decision rule over the aggregation-based rule, with \ensuremath {\mathit{\tau}}{\xspace}~not shared with the \ensuremath {\mathit{SU}}{\xspace} s. We chose voting over aggregation for two reasons: One, aggregation methods are more prone to sensing errors; for example, receiving some erroneous measurements that are far off from the average of the \ensuremath {\mathit{RSS}}{\xspace}~values can skew the computed \ensuremath {\mathit{RSS}}{\xspace}~average, thus leading to a wrong decision. 
Two, voting does not expose users to the DLP attack~\cite{li2012location} (identified earlier in Section~\ref{sec:related}). We chose not to share \ensuremath {\mathit{\tau}}{\xspace}~with the \ensuremath {\mathit{SU}}{\xspace} s because withholding it limits the action scope of malicious users that may want to report falsified \ensuremath {\mathit{RSS}}{\xspace}~values for malicious and/or selfish purposes. In this paper, in addition to the 2-party (i.e., \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{SU}}{\xspace} s) cooperative sensing architecture described above, we investigate a 3-party cooperative sensing architecture, where a third entity, called the gateway (\ensuremath {\mathit{GW}}{\xspace}), is incorporated along with the \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{SU}}{\xspace} s to cooperate with them in performing the sensing task. As we show later, this gateway allows us to achieve higher privacy and lower computational overhead, but at the cost of deploying an extra entity. \subsection{Security Threat Models and Objectives} We make the following security assumptions: \begin{assumption} \label{asm:asm1} (i) \ensuremath {\mathit{FC}}{\xspace}~may modify the value of \ensuremath {\mathit{\tau}}{\xspace}~in different sensing periods to extract information about the \ensuremath {\mathit{RSS}}{\xspace}~values of \ensuremath {\mathit{SU}}{\xspace} s; (ii) \ensuremath {\mathit{GW}}{\xspace}~executes the protocol honestly but shows interest in learning information about the other parties; (iii) \ensuremath {\mathit{FC}}{\xspace}~does not collude with \ensuremath {\mathit{SU}}{\xspace} s; and (iv) \ensuremath {\mathit{GW}}{\xspace}~does not collude with \ensuremath {\mathit{SU}}{\xspace} s or \ensuremath {\mathit{FC}}{\xspace}. 
\end{assumption} We aim to achieve the following security objectives: \begin{objective} \label{obj:SecurityObj-1} (i) Keep the \ensuremath {\mathit{RSS}}{\xspace}~value of each \ensuremath {\mathit{SU}}{\xspace}~confidential; and (ii) Keep \ensuremath {\mathit{\tau}}{\xspace}~confidential. This should hold during all sensing periods and for any network membership change. \end{objective} \section{Preliminaries} \label{sec:Preliminaries} \subsection{Half-Voting Availability Decision Rule} \label{subsec:half-voting} Our proposed schemes use the {\em half-voting decision rule}, shown to be optimal in~\cite{zhang2008cooperative}; for completeness, we highlight its main idea here. Details can be found in~\cite{zhang2008cooperative}. Let $\ensuremath {\mathit{h}}{\xspace}_0$ and $\ensuremath {\mathit{h}}{\xspace}_1$ be the spectrum sensing hypotheses that \ensuremath {\mathit{PU}}{\xspace}~is absent and present, respectively. Let $P_f$, $P_d$ and $P_m$ denote the probabilities of false alarm, detection, and missed detection, respectively, of one \ensuremath {\mathit{SU}}{\xspace}; i.e., $P_f = Pr(\ensuremath {\mathit{RSS}}{\xspace}>\ensuremath {\mathit{\tau}}{\xspace}\mid \ensuremath {\mathit{h}}{\xspace}_0)$, $P_d = Pr(\ensuremath {\mathit{RSS}}{\xspace}>\ensuremath {\mathit{\tau}}{\xspace}\mid \ensuremath {\mathit{h}}{\xspace}_1)$, and $P_m = 1 - P_{d}$. 
\ensuremath {\mathit{FC}}{\xspace}~collects the 1-bit decision $\ensuremath {\mathit{D}}{\xspace}_i$ from each $\ensuremath {\mathit{SU}}{\xspace}_i$ and fuses them together according to the following fusion rule~\cite{zhang2008cooperative}: \begin{equation} \label{FCHyp} \ensuremath {\mathit{dec}}{\xspace} = \begin{cases} \ensuremath \mathcal{H}{\xspace}_1, & \displaystyle\sum \limits _{i=1}^n \ensuremath {\mathit{D}}{\xspace}_i\geq \ensuremath {\mathit{\lambda}}{\xspace} \\ \ensuremath \mathcal{H}{\xspace}_0, & \displaystyle\sum \limits _{i=1}^n \ensuremath {\mathit{D}}{\xspace}_i <\ensuremath {\mathit{\lambda}}{\xspace}\end{cases} \end{equation} \ensuremath {\mathit{FC}}{\xspace}~infers that \ensuremath {\mathit{PU}}{\xspace}~is present, i.e. $\ensuremath \mathcal{H}{\xspace}_1$, when at least \ensuremath {\mathit{\lambda}}{\xspace}~\ensuremath {\mathit{SU}}{\xspace} s are inferring $\ensuremath {\mathit{h}}{\xspace}_1$. Otherwise, \ensuremath {\mathit{FC}}{\xspace}~decides that \ensuremath {\mathit{PU}}{\xspace}~is absent, i.e. $\ensuremath \mathcal{H}{\xspace}_0$. Note that the {\em OR} fusion rule, in which \ensuremath {\mathit{FC}}{\xspace}~decides $\ensuremath \mathcal{H}{\xspace}_1$ if at least one of the decisions from the \ensuremath {\mathit{SU}}{\xspace} s is $\ensuremath {\mathit{h}}{\xspace}_1$, corresponds to the case where $\ensuremath {\mathit{\lambda}}{\xspace}=1$. The {\em AND} fusion rule, in which \ensuremath {\mathit{FC}}{\xspace}~decides $\ensuremath \mathcal{H}{\xspace}_1$ if and only if all decisions from the \ensuremath {\mathit{SU}}{\xspace} s are $\ensuremath {\mathit{h}}{\xspace}_1$, corresponds to the case where $\ensuremath {\mathit{\lambda}}{\xspace}=\nbr$. 
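As a plaintext illustration (not part of the protocol specification), the fusion rule above, together with the optimal threshold $\ensuremath {\mathit{\lambda}}{\xspace}_{opt}$ from~\cite{zhang2008cooperative}, can be sketched as follows; the function names are ours, chosen for exposition:

```python
import math

def fuse(votes, lam):
    """Half-voting fusion: decide H1 (PU present) iff at least `lam` SUs report h1."""
    return 1 if sum(votes) >= lam else 0

def optimal_lambda(n, p_f, p_m):
    """Voting threshold minimizing Q_f + Q_m (half-voting rule of Zhang et al.)."""
    alpha = math.log(p_f / (1 - p_m)) / math.log(p_m / (1 - p_f))
    return min(n, math.ceil(n / (1 + alpha)))

votes = [1, 1, 0, 1, 0]
print(fuse(votes, 1))   # OR rule  (lambda = 1)  -> 1
print(fuse(votes, 5))   # AND rule (lambda = n)  -> 0
```

Setting $\ensuremath {\mathit{\lambda}}{\xspace}$ to $1$ or to the number of voters recovers the {\em OR} and {\em AND} rules as special cases.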
The cooperative spectrum sensing false alarm probability, $Q_f$, and missed detection probability, $Q_m$, are: $Q_f = Pr(\ensuremath \mathcal{H}{\xspace}_1\mid\ensuremath {\mathit{h}}{\xspace}_0)$ and $Q_m = Pr(\ensuremath \mathcal{H}{\xspace}_0\mid\ensuremath {\mathit{h}}{\xspace}_1)$. Letting $\nbr$ be the number of \ensuremath {\mathit{SU}}{\xspace} s, the optimal value of \ensuremath {\mathit{\lambda}}{\xspace}~that minimizes $Q_f+Q_m$ is $\ensuremath {\mathit{\lambda}}{\xspace}_{opt} = \min(\nbr, \lceil{\nbr}/{(1+\alhv)}\rceil)$, where $\alhv = \ln (\frac{P_f}{1-P_m})/\ln (\frac{P_m}{1-P_f})$ and $\lceil\cdot\rceil$ denotes the ceiling function. The value of $\ensuremath {\mathit{\lambda}}{\xspace}_{opt}$ comes from the half-voting rule presented in~\cite{zhang2008cooperative}. We use it since it was proven in~\cite{zhang2008cooperative} to provide the best sensing performance in voting-based cooperative sensing. For simplicity, $\ensuremath {\mathit{\lambda}}{\xspace}_{opt}$ is denoted as $\ensuremath {\mathit{\lambda}}{\xspace}$ throughout this paper. \subsection{Reputation Mechanism} To make the voting rule more reliable, we incorporate a reputation mechanism that allows \ensuremath {\mathit{FC}}{\xspace}~to progressively eliminate faulty and malicious \ensuremath {\mathit{SU}}{\xspace} s. It does so by maintaining and updating a reputation score for each \ensuremath {\mathit{SU}}{\xspace}~that reflects its level of reliability. Our proposed schemes incorporate the {\em Beta Reputation} mechanism~\cite{arshad2011robust}. For completeness, we highlight its key features next; more details can be found in~\cite{arshad2011robust}, on which all computations in this subsection are based. At the end of each sensing period $t$, \ensuremath {\mathit{FC}}{\xspace}~obtains a decision vector, $\mbox{\boldmath$ \bin$}(t)=[\bin_1(t),\bin_2(t),\ldots,\bin_\nbr(t)]^T$ with $\bin_i(t) \in \{0,1\}$, where $\bin_i(t)=0$ (resp. 
$\bin_i(t)=1$) means that the spectrum is reported to be free (resp. busy) by user $\ensuremath {\mathit{SU}}{\xspace}_i$. \ensuremath {\mathit{FC}}{\xspace}~then makes a global decision using the fusion rule $f$ as follows: \begin{equation} \label{fDef} \ensuremath {\mathit{dec}}{\xspace}(t)=f(\boldsymbol{w}(t),\mbox{\boldmath$ \bin$}(t)) = \begin{cases} 1 & \mbox{if } \sum\limits_{i=1}^{\nbr}w_i(t) \bin_i(t) \geq\ensuremath {\mathit{\lambda}}{\xspace} \\ 0 & \mbox{otherwise }\end{cases} \end{equation} where $\boldsymbol{w}(t) = [w_1(t),w_2(t)\ldots,w_\nbr(t)]^T$ is the weight vector calculated by \ensuremath {\mathit{FC}}{\xspace}~based on the credibility score of each user, as will be shown shortly, and $\ensuremath {\mathit{\lambda}}{\xspace}$ is the voting threshold determined by the Half-voting rule~\cite{zhang2008cooperative}, as presented in Section~\ref{subsec:half-voting}. For each $\ensuremath {\mathit{SU}}{\xspace}_i$, \ensuremath {\mathit{FC}}{\xspace}~maintains positive and negative rating coefficients, $\posr_i(t)$ and $\eta_i(t)$, that are updated every sensing period $t$ as: $\posr_i(t) = \posr_i(t - 1) +\nu_1(t)$ and $\eta_i(t) = \eta_i(t - 1) +\nu_2(t)$, where $\nu_1(t)$ and $\nu_2(t)$ are calculated as \vspace{-20pt} \begin{multicols}{2} \begin{equation*} \nu_1(t) = \begin{cases} 1 & \bin_i(t) = \ensuremath {\mathit{dec}}{\xspace}(t)\\ 0 & \mbox{otherwise }\end{cases} \end{equation*}\break \begin{equation*} \nu_2(t) = \begin{cases} 1 & \bin_i(t) \neq \ensuremath {\mathit{dec}}{\xspace}(t)\\ 0 & \mbox{otherwise }\end{cases} \end{equation*} \end{multicols} Here, $\posr_i(t)$ (resp. $\eta_i(t)$) reflects the number of times $\ensuremath {\mathit{SU}}{\xspace}_i$'s observation, $\bin_i(t)$, agrees (resp. disagrees) with the $\ensuremath {\mathit{FC}}{\xspace}$'s global decision, $\ensuremath {\mathit{dec}}{\xspace}$(t). 
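As a plaintext illustration, this per-period rating update, together with the credibility and weight computations of~\cite{arshad2011robust}, can be sketched as follows (function names are ours, for exposition only):

```python
def update_ratings(rho, eta, vote, dec):
    """One sensing period's beta-reputation update for a single SU:
    rho counts agreements with FC's global decision, eta disagreements."""
    return (rho + 1, eta) if vote == dec else (rho, eta + 1)

def credibility(rho, eta):
    """Credibility score: phi = (rho + 1) / (rho + eta + 2)."""
    return (rho + 1) / (rho + eta + 2)

def weights(creds):
    """Normalize credibility scores into contribution weights w_i."""
    total = sum(creds)
    return [c / total for c in creds]
```

An \ensuremath {\mathit{SU}}{\xspace}~that keeps disagreeing with the global decision sees its credibility, and hence its weight in the fusion rule, shrink toward zero.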
\ensuremath {\mathit{FC}}{\xspace}~then computes $\ensuremath {\mathit{SU}}{\xspace}_i$'s credibility score, $\varphi_i(t)$, and contribution weight, $w_i(t)$, at sensing period $t$ as suggested in~\cite{arshad2011robust}: \vspace{-30pt} \begin{multicols}{2} \begin{equation} \label{cred} \varphi_i (t) \!=\! \dfrac{\posr_i(t) + 1}{\posr_i(t) \! + \! \eta_i(t)\! +\! 2} \end{equation}\break \begin{equation} \label{weight} w_i (t)\!= \! {\varphi_i(t)}/{\sum\limits_{j=1}^{\nbr}\!\varphi_j(t)} \end{equation} \end{multicols} \subsection{Cryptographic Building Blocks} \label{subsec:crypto} Our schemes use a few known cryptographic building blocks, which we define next to ease the presentation of our schemes in the following sections. \begin{definition}~\label{def:OPE} \noindent \textbf{Order Preserving Encryption~\mbox{\boldmath$(\ensuremath {\mathit{OPE}}{\xspace})$}:} is a deterministic symmetric encryption scheme whose encryption preserves the numerical ordering of the plaintexts, i.e., for any two messages $m_1$ and $m_2$ such that $m_1 \leq m_2$, we have $c_1\ensuremath {\leftarrow}{\xspace}\OEnc{K}{m_1} \leq c_2\ensuremath {\leftarrow}{\xspace}\OEnc{K}{m_2}$~\cite{boldyreva2009order}, where $c\ensuremath {\leftarrow}{\xspace}\OEnc{K}{m}$ denotes the order-preserving encryption of a message $m\in\{0,1\}^{d}$ under key $K$, and $d$ is the block size of \ensuremath {\mathit{OPE}}{\xspace}. \end{definition} \begin{definition} \label{def:YM} \noindent \textbf{\em Yao's Millionaires'~\mbox{\boldmath$(\ensuremath {\mathit{YM}}{\xspace})$ Protocol~\cite{yao1982protocols}:}} is a secure comparison protocol that enables two parties to execute ``the greater-than'' function, $GT(x, y) = [x > y]$, without disclosing any other information apart from the outcome. 
\end{definition} \begin{definition} \label{def:GroupKey} \textbf{\em Tree-based Group Elliptic Curve Diffie-Hellman $\boldsymbol{(\ensuremath {\mathit{TGECDH}}{\xspace})}$~\cite{wang2006performance}:} is a dynamic and contributory group key establishment protocol that permits multiple users to collaboratively establish and update a group key $K$. \end{definition} \begin{definition} \label{def:KeyIndependence} \textbf{Group key independence:} given a subset of previous keys, an attacker cannot know any other group key. \end{definition} \begin{definition} \label{def:ECDLP} \textbf{\em Elliptic Curve Discrete Logarithm Problem $\boldsymbol{(\ensuremath {\mathit{ECDLP}}{\xspace}):}$} given an elliptic curve $\mathscr{E}$ over $GF(q)$ and points $(P,Z) \in \mathscr{E}$, find an integer $x$, if any exists, s.t. $Z = xP$. \end{definition} \begin{definition} \label{def:DigSign} \textbf{\em Digital Signature:} A digital signature scheme \ensuremath {\mathit{SGN}}{\xspace}~is used to validate the authenticity and integrity of a message $m$. It contains three components defined as follows: $\bullet$ Key generation algorithm ($\mathsf{Kg}$): returns a private/public key pair given a security parameter $1^{\ensuremath {\mathit{\kappa}}{\xspace}}$, $(\ensuremath {\mathit{SK_{DS}}}{\xspace},\ensuremath {\mathit{PK_{DS}}}{\xspace})\leftarrow \mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Kg}(1^{\ensuremath {\mathit{\kappa}}{\xspace}})$. $\bullet$ Signing algorithm ($\mathsf{Sign}$): takes as input a message $m$ and the secret key $\ensuremath {\mathit{SK_{DS}}}{\xspace}$ and returns a signature $\ensuremath {\mathit{\sigma}}{\xspace}$, $\ensuremath {\mathit{\sigma}}{\xspace} \leftarrow \mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Sign}(\ensuremath {\mathit{SK_{DS}}}{\xspace},m)$. 
$\bullet$ Verification algorithm ($\mathsf{Ver}$): takes as input the public key $\ensuremath {\mathit{PK_{DS}}}{\xspace}$, $m$ and $\ensuremath {\mathit{\sigma}}{\xspace}$. It returns $1$ if valid and $0$ if invalid, $\{0,1\}\leftarrow \mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Ver}(\ensuremath {\mathit{PK_{DS}}}{\xspace},m,\ensuremath {\mathit{\sigma}}{\xspace})$. \end{definition} Note that communications are made over a secure (authenticated) channel maintained with a symmetric key (e.g., via SSL/TLS) to ensure confidentiality and authentication. For brevity, we write only the encryptions, not the authentication tags (e.g., Message Authentication Codes~\cite{JonathanKatzModernCrytoBook}), in the rest of the paper. In the following sections, we present the two schemes that we propose in this paper. For convenience, Table~\ref{t:notations} summarizes the notation used throughout the remainder of this paper. \begin{table}[h!] 
\caption{Notations} \centering \scriptsize \resizebox{0.5\textwidth}{!}{ \label{t:notations} \begin{tabular}{cl} \hline \noalign{\medskip } $\ensuremath {\mathit{SU}}{\xspace}$ & Secondary user \\ $\ensuremath {\mathit{FC}}{\xspace}$ & Fusion center\\ $\ensuremath {\mathit{GW}}{\xspace}$ & Gateway\\ $\ensuremath {\mathit{RSS}}{\xspace}$ & Received signal strength\\ $\gam$ & $=|\ensuremath {\mathit{RSS}}{\xspace}|=|\ensuremath {\mathit{\tau}}{\xspace}|$ \\ $\nbr$ & Average number of \ensuremath {\mathit{SU}}{\xspace} s per sensing period \\ $\mathcal{G}$ & Set of all \ensuremath {\mathit{SU}}{\xspace} s in the system\\ $\ensuremath {\mathit{\lambda}}{\xspace}$ & Optimal voting threshold \\ $\ensuremath {\mathit{\tau}}{\xspace}$ & Energy sensing threshold \\ $q$ & Large prime number for \ensuremath {\mathit{ECElG}}{\xspace}\\ $\mathscr{E}$ & Elliptic curve over a finite field $GF(q)$ \\ $b_i$ & Outcome of \ensuremath {\mathit{YM.ECElG}}{\xspace}~between $\ensuremath {\mathit{\tau}}{\xspace}$~and $\ensuremath {\mathit{RSS}}{\xspace}_i$ \\ $\ensuremath {\mathit{dec}}{\xspace}$ & Final decision made by \ensuremath {\mathit{FC}}{\xspace} \\ $K$ & Group key established by \ensuremath {\mathit{SU}}{\xspace} s\\ $\ensuremath {\mathit{\sigma}}{\xspace}$ & Digital signature\\ $\boldsymbol{w}$ & Vector of weights assigned to \ensuremath {\mathit{SU}}{\xspace} s \\ $T$ & Table of \ensuremath {\mathit{ECElG}}{\xspace}~ciphertexts exchanged in \ensuremath {\mathit{YM.ECElG}}{\xspace} \\ $\ensuremath {\mathit{PK_{DS}}}{\xspace}$ & Public key used for the digital signature\\ $\ensuremath {\mathit{SK_{DS}}}{\xspace}$ & Secret key used for the digital signature\\ $k_{\ensuremath {\mathit{FC}}{\xspace},i}$ & Secret key established between $\ensuremath {\mathit{FC}}{\xspace}~\&~\ensuremath {\mathit{SU}}{\xspace}_i$ \\ $k_{\ensuremath {\mathit{GW}}{\xspace},i}$ & Secret key established between $\ensuremath {\mathit{GW}}{\xspace}~\&~\ensuremath {\mathit{SU}}{\xspace}_i$ \\ 
$k_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}$ & Secret key established between $\ensuremath {\mathit{FC}}{\xspace}~\&~\ensuremath {\mathit{GW}}{\xspace}$ \\ $(E,D)$ & \ensuremath {\mathit{ECElG}}{\xspace}~encryption-decryption for \ensuremath {\mathit{YM.ECElG}}{\xspace} \\ $(\mathcal{E},\mathcal{D})$ & IND-CPA secure block cipher encryption-decryption\\ $\ensuremath{\mathit{OPE}.\mathcal{E}}$ & \ensuremath {\mathit{OPE}}{\xspace}~encryption \\ $c_i$ & $= \OEnc{K}{\ensuremath {\mathit{RSS}}{\xspace}_i}$ \\ $\ensuremath {\mathit{\theta}}{\xspace}_{i}$ & $=\Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}}}$ \\ $\ensuremath {\mathit{\varsigma}}{\xspace}_{i}$ & $=\Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}}{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_i}}$ \\ $\ensuremath {\mathit{\zeta}}{\xspace}$ &$ =\Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\{\bin_i\}_{i=1}^{n}}$ \\ $\ensuremath {\mathit{chn}}{\xspace}_i$ & Secure authenticated channel between \ensuremath {\mathit{FC}}{\xspace}~and $\ensuremath {\mathit{SU}}{\xspace}_i$\\ $\ensuremath{\mathcal{L}_1} $ & History list including all values learned by $\{\ensuremath {\mathit{SU}}{\xspace}_i\}_{i=1}^{n}$\\ $\ensuremath{\mathcal{L}_2}$ & History list including all values learned by \ensuremath {\mathit{FC}}{\xspace}\\ $\ensuremath{\mathcal{L}_3}$ & History list including all values learned by \ensuremath {\mathit{GW}}{\xspace}\\ $\ensuremath {\mathit{\beta}}{\xspace}(t)$ & Average number of \ensuremath {\mathit{SU}}{\xspace} s joining the \ensuremath {\mathit{CRN}}{\xspace}~at $t$ \\ $\ensuremath {\mathit{\mu}}{\xspace}$ & Average of the 
membership change process\\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} } \end{table} \section{\ROLPOS} \label{sec:replpos} We now present our first proposed scheme, a voting-based approach designed for the 2-party cooperative spectrum sensing architecture consisting of one \ensuremath {\mathit{FC}}{\xspace}~and a set of \ensuremath {\mathit{SU}}{\xspace} s. Throughout, we refer to this scheme as \ROLPOS~(location privacy for 2-party spectrum sensing architecture). \ROLPOS~achieves the aforementioned security objectives via an innovative integration of the \ensuremath {\mathit{OPE}}{\xspace}, \ensuremath {\mathit{TGECDH}}{\xspace}~and \ensuremath {\mathit{YM}}{\xspace}~protocols. Voting-based spectrum sensing offers several advantages over its aggregation-based counterparts, as discussed in Section~\ref{sec:Preliminaries}, but requires comparing \ensuremath {\mathit{FC}}{\xspace}'s threshold $\ensuremath {\mathit{\tau}}{\xspace}$ with the \ensuremath {\mathit{SU}}{\xspace} s' \ensuremath {\mathit{RSS}}{\xspace} s, thereby forcing at least one of the parties to expose its information to the other. One solution is to run a secure comparison protocol, such as \ensuremath {\mathit{YM}}{\xspace}, between \ensuremath {\mathit{FC}}{\xspace}~and each \ensuremath {\mathit{SU}}{\xspace}, which permits \ensuremath {\mathit{FC}}{\xspace}~to learn the total number of \ensuremath {\mathit{SU}}{\xspace} s above/below $\ensuremath {\mathit{\tau}}{\xspace}$ but nothing else. However, secure comparison protocols involve several costly public-key crypto operations (e.g., modular exponentiation), so the $\mathcal{O}(\nbr)$ invocations of such a protocol required per sensing period would incur prohibitive computational and communication overhead. 
$\bullet$ {\em Intuition}: The key observation that led us to overcome this challenge is the following: If we enable \ensuremath {\mathit{FC}}{\xspace}~to learn the relative order of \ensuremath {\mathit{RSS}}{\xspace}~values but nothing else, then the number of \ensuremath {\mathit{YM}}{\xspace}~invocations can be reduced drastically. That is, {\em the knowledge of the relative order permits \ensuremath {\mathit{FC}}{\xspace}~to execute the \ensuremath {\mathit{YM}}{\xspace}~protocol with worst-case $\mathcal{O}(\log \nbr)$ invocations by utilizing a binary-search type approach}, as opposed to running \ensuremath {\mathit{YM}}{\xspace}~with each user at a total $\mathcal{O}(\nbr)$ overhead. This is where \ensuremath {\mathit{OPE}}{\xspace}~comes into play. {\em The crux of our idea is to make users \ensuremath {\mathit{OPE}}{\xspace}-encrypt their \ensuremath {\mathit{RSS}}{\xspace}~values under a group key $K$, which is derived via \ensuremath {\mathit{TGECDH}}{\xspace}~at the beginning of the protocol}. This allows \ensuremath {\mathit{FC}}{\xspace}~to learn the relative order of the encrypted \ensuremath {\mathit{RSS}}{\xspace}~values but nothing else (and users do not learn each others' \ensuremath {\mathit{RSS}}{\xspace}~values, as they are sent to \ensuremath {\mathit{FC}}{\xspace}~over pairwise secure channels). \ensuremath {\mathit{FC}}{\xspace}~then uses this knowledge to run the \ensuremath {\mathit{YM}}{\xspace}~protocol with a {\em binary-search} strategy, which enables it to identify the total number of users above/below $\ensuremath {\mathit{\tau}}{\xspace}$, which it then compares to $\ensuremath {\mathit{\lambda}}{\xspace}$. As stated in Security Assumption~\ref{asm:asm1}, \ensuremath {\mathit{FC}}{\xspace}~may try to maliciously modify the value of \ensuremath {\mathit{\tau}}{\xspace}, which would make it easier for it to infer the \ensuremath {\mathit{RSS}}{\xspace}~values of \ensuremath {\mathit{SU}}{\xspace} s, and thus their locations. 
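Setting the $\ensuremath {\mathit{\tau}}{\xspace}$-manipulation issue aside for a moment, the binary-search idea can be sketched in plaintext as follows; here \texttt{ym\_compare} is a stand-in for one secure comparison run (our choice of names, for illustration only):

```python
def count_above_threshold(sorted_cts, ym_compare):
    """Count SUs whose RSS exceeds tau using O(log n) secure comparisons.

    `sorted_cts` are the OPE ciphertexts sorted by FC; order preservation
    means the underlying RSS values are in the same order. `ym_compare(ct)`
    stands in for one YM run: True iff tau >= the RSS underlying `ct`.
    """
    lo, hi = 0, len(sorted_cts)
    while lo < hi:
        mid = (lo + hi) // 2
        if ym_compare(sorted_cts[mid]):   # tau >= RSS at position mid
            lo = mid + 1
        else:
            hi = mid
    return len(sorted_cts) - lo           # number of SUs strictly above tau

# Toy run: identity "encryption" and a plaintext comparison oracle.
tau = 25
rss_sorted = [10, 20, 30, 40, 50]
print(count_above_threshold(rss_sorted, lambda ct: tau >= ct))  # -> 3
```

\ensuremath {\mathit{FC}}{\xspace}~would then compare the resulting count to $\ensuremath {\mathit{\lambda}}{\xspace}$. We now return to the threat of a maliciously modified \ensuremath {\mathit{\tau}}{\xspace}.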
We rely on digital signatures to overcome this limitation. A digital signature is used by \ensuremath {\mathit{SU}}{\xspace} s to verify the integrity of the information sent by \ensuremath {\mathit{FC}}{\xspace}~during the execution of the \ensuremath {\mathit{YM}}{\xspace}~protocol and signed by the service operator, as we explain in more detail next. This strategy makes \ROLPOS~achieve \ensuremath {\mathit{SU}}{\xspace} s' location privacy with efficient spectrum sensing, fault tolerance and robustness to network dynamism simultaneously. Before we describe our protocol in more detail, we first highlight how we improve the \ensuremath {\mathit{YM}}{\xspace}~protocol proposed in~\cite{lin2005efficient}. \subsection{Our Improved \ensuremath {\mathit{YM.ECElG}}{\xspace}~Scheme} \label{sub:yme} To achieve high efficiency, we improve the \ensuremath {\mathit{YM}}{\xspace}~protocol in~\cite{lin2005efficient}, in which only the initiator of the protocol learns the outcome, and call this improved scheme {\em\ensuremath {\mathit{YM.ECElG}}{\xspace}}. \ensuremath {\mathit{YM.ECElG}}{\xspace}, described next, is used by our proposed \ROLPOS~to perform secure comparisons. Our secure comparison scheme improves the \ensuremath {\mathit{YM}}{\xspace}~protocol proposed in~\cite{lin2005efficient} in two aspects: (i) We adapt it to work with additive homomorphic encryption (specifically \ensuremath {\mathit{ECElG}}{\xspace}) to enable compact comparison operations in the Elliptic Curve (EC) domain. (ii) The final stage of \ensuremath {\mathit{YM.ECElG}}{\xspace}~requires solving the \ensuremath {\mathit{ECDLP}}{\xspace}~(Definition \ref{def:ECDLP}), which is only possible with small plaintext domains; this is the case for our 8-bit encoded \ensuremath {\mathit{RSS}}{\xspace}~values, as required by the {\em IEEE 802.22} standard~\cite{IEEEStd80222a}. However, despite the small plaintext domain, solving the \ensuremath {\mathit{ECDLP}}{\xspace}~by brute force is still costly. 
We improve this step by adapting the {\em Pollard-Lambda}{\xspace}~method~\cite{blake1999elliptic} to solve the \ensuremath {\mathit{ECDLP}}{\xspace}~for the reverse map, which offers decryption efficiency and compactness. The {\em Pollard-Lambda}{\xspace}~method is designed to solve the \ensuremath {\mathit{ECDLP}}{\xspace}~for points that are known to lie in a small interval, which is the case for \ensuremath {\mathit{RSS}}{\xspace}~values~\cite{blake1999elliptic}. Below, we outline our optimized \ensuremath {\mathit{YM.ECElG}}{\xspace}. $\bullet$ {\em Notation}: Let $\gam=|\ensuremath {\mathit{RSS}}{\xspace}|=|\ensuremath {\mathit{\tau}}{\xspace}|$ denote the bit size of the \ensuremath {\mathit{RSS}}{\xspace}~value of an \ensuremath {\mathit{SU}}{\xspace}~and of \ensuremath {\mathit{FC}}{\xspace}'s threshold \ensuremath {\mathit{\tau}}{\xspace}, the two values to be privately compared. Also, let $\nbr$ denote the average number of \ensuremath {\mathit{SU}}{\xspace} s per sensing period, $q$ a large prime number, $\mathscr{E}$ an elliptic curve over a finite field $GF(q)$, and $Z$ a point on the curve with prime order $m$. $(\ensuremath {\mathit{sk}}{\xspace},\ensuremath {\mathit{pk}}{\xspace})$ is a private/public key pair of Elliptic Curve ElGamal (\ensuremath {\mathit{ECElG}}{\xspace}) encryption~\cite{koblitz1987elliptic}, generated under $(\mathscr{E},q,Z,m)$. Let $\ensuremath {\mathit{\pi}}{\xspace}=(\gam,\mathscr{E},q,Z,m,\langle\ensuremath {\mathit{sk}}{\xspace},\ensuremath {\mathit{pk}}{\xspace}\rangle)$ denote the \ensuremath {\mathit{YM.ECElG}}{\xspace}~parameters generated by \ensuremath {\mathit{FC}}{\xspace}, the initiator of the protocol. 
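To illustrate why a small plaintext domain makes the final decryption step tractable, the following toy sketch performs a bounded-interval discrete-log search over integers modulo a prime under addition; it is a naive stand-in for {\em Pollard-Lambda}{\xspace}, not the method itself, and all names are ours:

```python
def small_interval_dlog(Z, P, a, b, add, identity):
    """Find x in [a, b] with Z = x*P, or None if no such x exists.

    `add` is the group operation and `identity` its neutral element
    (point addition and the point at infinity in the actual EC setting).
    Feasible only because the interval [a, b] is tiny, e.g. 8-bit RSS values.
    """
    acc = identity
    for _ in range(a):
        acc = add(acc, P)          # acc = a*P
    for x in range(a, b + 1):
        if acc == Z:
            return x
        acc = add(acc, P)
    return None

# Toy group: integers modulo the prime q under addition; base P = 7, x = 200.
q, P = 1019, 7
Z = (200 * P) % q
print(small_interval_dlog(Z, P, 0, 255, lambda u, v: (u + v) % q, 0))  # -> 200
```

Pollard-Lambda achieves the same result in roughly the square root of the interval width group operations, which is why it is preferred over this linear scan.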
\ensuremath {\mathit{YM.ECElG}}{\xspace}~returns $b\ensuremath {\leftarrow}{\xspace}\ensuremath {\mathit{YM.ECElG}}{\xspace}(\ensuremath {\mathit{\tau}}{\xspace},\ensuremath {\mathit{RSS}}{\xspace},\ensuremath {\mathit{\pi}}{\xspace})$, where $b=0$ if $\ensuremath {\mathit{\tau}}{\xspace}<\ensuremath {\mathit{RSS}}{\xspace}$ and $b=1$ otherwise. Only \ensuremath {\mathit{FC}}{\xspace}~learns $b$, and $(\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{SU}}{\xspace})$ learn nothing else. For simplicity, during the description of \ensuremath {\mathit{YM.ECElG}}{\xspace}~we denote \ensuremath {\mathit{\tau}}{\xspace}~as $x$ and \ensuremath {\mathit{RSS}}{\xspace}~as $y$. \ensuremath {\mathit{YM.ECElG}}{\xspace}, as in \ensuremath {\mathit{YM}}{\xspace}, is based on the fact that {\em $x$ is greater than $y$ $\mathit{iff}$ $S^1_x$ and $S^0_y$ have a common element}, where $S^1_x$ and $S^0_y$ are the 1-encoding of $x$ and the 0-encoding of $y$, respectively. The 0-encoding of a binary string $s = s_\gam s_{\gam-1}\ldots s_1 \in \{0,1\}^{\gam}$ is given by $S^0_s = \{s_\gam s_{\gam-1}\ldots s_{i+1}1|s_i=0, 1\leq i\leq \gam\}$ and the 1-encoding of $s$ is given by $S^1_s = \{s_\gam s_{\gam-1}\ldots s_i|s_i=1, 1\leq i\leq \gam\}$. For example, if we have a string $s=101101$, then $S^0_s = \{11,10111\}$ and $S^1_s = \{1,101,1011,101101\}$. If we want to compare two values $x=46=101110$ and $y=45=101101$, we first construct $S^1_x = \{1,101,1011,10111\}$ and $S^0_y = \{11,10111\}$. Since $S^1_x \cap S^0_y \neq \emptyset$, we conclude that $x>y$. \ensuremath {\mathit{FC}}{\xspace}, with private input $x = x_\gam x_{\gam-1} \ldots x_1$, generates \ensuremath {\mathit{\pi}}{\xspace}~for encryption and decryption $(E,D)$, then prepares a $2\times \gam$-table $T[i,j]$, $i \in \{0,1\}, 1\leq j\leq \gam$, such that $T[x_i,i]=E(0)$ ($E(1)$ in the original multiplicative \ensuremath {\mathit{YM}}{\xspace}~scheme) and $T[\bar{x_i},i]=E(r_i)$ for a random $r_i$ in the subgroup $G_q$, and finally sends $T$ to \ensuremath {\mathit{SU}}{\xspace}.
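As a plain (non-private) illustration, the two encodings and the intersection test can be sketched in a few lines of Python; bit strings are written MSB-first, matching $s_\gam s_{\gam-1}\ldots s_1$:

```python
def zero_encoding(s: str) -> set:
    """S0: for each 0-bit of s, the prefix above it followed by a 1."""
    return {s[:i] + "1" for i, bit in enumerate(s) if bit == "0"}

def one_encoding(s: str) -> set:
    """S1: every prefix of s that ends in a 1-bit."""
    return {s[:i + 1] for i, bit in enumerate(s) if bit == "1"}

def greater_than(x: int, y: int, bits: int = 8) -> bool:
    """x > y iff the 1-encoding of x and the 0-encoding of y intersect."""
    sx, sy = format(x, f"0{bits}b"), format(y, f"0{bits}b")
    return bool(one_encoding(sx) & zero_encoding(sy))

# The worked example from the text: 46 = 101110 vs 45 = 101101.
assert greater_than(46, 45, bits=6) and not greater_than(45, 46, bits=6)
```

In the actual protocol these sets are never exchanged in the clear: \ensuremath {\mathit{FC}}{\xspace}'s table $T$ encodes its input under \ensuremath {\mathit{ECElG}}{\xspace}, and the intersection test is evaluated homomorphically.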
\ensuremath {\mathit{SU}}{\xspace}, with private input $y = y_\gam y_{\gam-1} \ldots y_1$, computes $c_t$ for each $t=t_\gam t_{\gam-1} \ldots t_i \in S^0_y$ as follows \begin{equation} \label{eq:ym} c_t=T[t_\gam,\gam]\oplus T[t_{\gam-1},\gam-1]\oplus\ldots \oplus T[t_i,i] \end{equation} where $\oplus$ denotes Elliptic Curve point addition ($\oplus$ replaces $\times$ in the original \ensuremath {\mathit{YM}}{\xspace}~scheme). \ensuremath {\mathit{SU}}{\xspace}~then prepares $l=\gam-|S^0_y|$ random encryptions $z_j = (a_j,\bin_j)\in G^2_q, 1\leq j\leq l$, and permutes the $c_t$'s and $z_j$'s to obtain $\hat{c}_1,\cdots,\hat{c}_\gam$, which are sent back to \ensuremath {\mathit{FC}}{\xspace}. \ensuremath {\mathit{FC}}{\xspace}~then decrypts $D(\hat{c}_i) = m_i$, $1\leq i\leq \gam$, via the {\em Pollard-Lambda}{\xspace}~algorithm \cite{blake1999elliptic} and decides $x>y$ $\mathit{iff}$ some $m_i=0$ ($m_i=1$ in the original \ensuremath {\mathit{YM}}{\xspace}). The different steps of this protocol are summarized in Figure~\ref{fig:yme}. \begin{figure}[h!] \center \includegraphics[width=0.45\textwidth]{YM-ECElG.pdf} \caption{\ensuremath {\mathit{YM.ECElG}}{\xspace}~protocol} \label{fig:yme} \end{figure} \subsection{\ROLPOS~Description} Next, we describe our proposed scheme \ROLPOS, whose main steps are outlined in Algorithm~\ref{alg1}. \begin{algorithm}[h!] \caption{\ROLPOS~Algorithm}\label{alg1} \begin{algorithmic}[1] \Statex \textbf{Initialization}: Executed only once. \State Service operator sets \ensuremath {\mathit{\tau}}{\xspace}. \State \ensuremath {\mathit{FC}}{\xspace}~generates \ensuremath {\mathit{\pi}}{\xspace}, sets \ensuremath {\mathit{\lambda}}{\xspace}~and $\boldsymbol{w} \gets \boldsymbol{1}$. \State \ensuremath {\mathit{FC}}{\xspace}~pre-computes $T$ using \ensuremath {\mathit{pk}}{\xspace}. \State Service operator computes $\ensuremath {\mathit{\sigma}}{\xspace} \leftarrow \mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Sign}(\ensuremath {\mathit{SK_{DS}}}{\xspace},T)$.
\State Service operator shares \ensuremath {\mathit{PK_{DS}}}{\xspace}~with \ensuremath {\mathit{SU}}{\xspace} s. \State $\mathcal{G}=\{\ensuremath {\mathit{SU}}{\xspace}_i\}_{i=1}^{\nbr}$ establish $K$ via \ensuremath {\mathit{TGECDH}}{\xspace}~protocol. \State \ensuremath {\mathit{FC}}{\xspace}~establishes $\ensuremath {\mathit{chn}}{\xspace}_i$~with each $\ensuremath {\mathit{SU}}{\xspace}_i$ for $i = 1,\ldots,\nbr$. \hspace{20pt}\algrule \Statex \textbf{Private Sensing}: Executed every sensing period $t_w$ \State $\ensuremath {\mathit{SU}}{\xspace}_i$ computes $c_i \ensuremath {\leftarrow}{\xspace} \OEnc{K}{\ensuremath {\mathit{RSS}}{\xspace}_i}$ for $i=1,\ldots,\nbr$. \State $\ensuremath {\mathit{SU}}{\xspace}_i$ sends $c_i$ to \ensuremath {\mathit{FC}}{\xspace}~over $\ensuremath {\mathit{chn}}{\xspace}_i$ for $i=1,\ldots,\nbr$. \State \ensuremath {\mathit{FC}}{\xspace}~sorts encrypted RSS values as $c_{min} \leq \ldots \leq c_{max}$.\label{alg1:line:pt} \State \ensuremath {\mathit{FC}}{\xspace}~runs $b_{id_{max}}\ensuremath {\leftarrow}{\xspace}\ensuremath {\mathit{YM.ECElG}}{\xspace}$ $(\ensuremath {\mathit{RSS}}{\xspace}_{max},\ensuremath {\mathit{\tau}}{\xspace},\ensuremath {\mathit{\pi}}{\xspace})$ with $\ensuremath {\mathit{SU}}{\xspace}_{id_{max}}$ having $c_{max}$.\label{alg1:umax} \State $\ensuremath {\mathit{SU}}{\xspace}_{id_{max}}$ verifies $T$ using \ensuremath {\mathit{\sigma}}{\xspace}. \label{alg1:verif1} \If {$\mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Ver}(\ensuremath {\mathit{PK_{DS}}}{\xspace},T,\ensuremath {\mathit{\sigma}}{\xspace})=0$} \State $\ensuremath {\mathit{SU}}{\xspace}_{id_{max}}$ leaves the sensing \State Go to Step~\ref{alg1:umin}. 
\EndIf \If {$b_{id_{max}}=0$} $\ensuremath {\mathit{dec}}{\xspace}$ $ \gets $ Channel free, $\{b_i\}_{i=1}^\nbr \ensuremath {\leftarrow}{\xspace} \boldsymbol{0}$.\label{alg1:decVec1} \Else \: \ensuremath {\mathit{FC}}{\xspace}~runs $b_{id_{min}}\ensuremath {\leftarrow}{\xspace}\ensuremath {\mathit{YM.ECElG}}{\xspace}(\ensuremath {\mathit{RSS}}{\xspace}_{min},\ensuremath {\mathit{\tau}}{\xspace},\ensuremath {\mathit{\pi}}{\xspace})$ with $\ensuremath {\mathit{SU}}{\xspace}_{id_{min}}$ having $c_{min}$. \label{alg1:umin} \State $\ensuremath {\mathit{SU}}{\xspace}_{id_{min}}$ verifies $T$ using \ensuremath {\mathit{\sigma}}{\xspace}. \label{alg1:verif2} \If {$\mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Ver}(\ensuremath {\mathit{PK_{DS}}}{\xspace},T,\ensuremath {\mathit{\sigma}}{\xspace})=0$} \State $\ensuremath {\mathit{SU}}{\xspace}_{id_{min}}$ leaves the sensing \State Go to Step~\ref{alg1:all}. \EndIf \If {$b_{id_{min}}=1$} $\ensuremath {\mathit{dec}}{\xspace}$ $ \gets $ Channel busy, $\{b_i\}_{i=1}^\nbr \ensuremath {\leftarrow}{\xspace} \boldsymbol{1}$.\label{alg1:decVec2}\Else \Repeat \State \ensuremath {\mathit{FC}}{\xspace}~computes $I \gets BinarySearch(\mathcal{G})$ \State \ensuremath {\mathit{FC}}{\xspace}~runs $b_I\ensuremath {\leftarrow}{\xspace}\ensuremath {\mathit{YM.ECElG}}{\xspace}$ $(\ensuremath {\mathit{RSS}}{\xspace}_I,\ensuremath {\mathit{\tau}}{\xspace},\ensuremath {\mathit{\pi}}{\xspace})$ with $\ensuremath {\mathit{SU}}{\xspace}_I$ having $c_{I}$.\label{alg1:ui} \State $\ensuremath {\mathit{SU}}{\xspace}_I$ verifies $T$ using \ensuremath {\mathit{\sigma}}{\xspace}.\label{alg1:all} \If {$\mathsf{\ensuremath {\mathit{SGN}}{\xspace}.Ver}(\ensuremath {\mathit{PK_{DS}}}{\xspace},T,\ensuremath {\mathit{\sigma}}{\xspace})=0$} \State $\ensuremath {\mathit{SU}}{\xspace}_I$ leaves the sensing \EndIf \Until {$\ensuremath {\mathit{RSS}}{\xspace}_{I-1} \leq \ensuremath {\mathit{\tau}}{\xspace} \leq \ensuremath {\mathit{RSS}}{\xspace}_I$} \State \ensuremath 
{\mathit{FC}}{\xspace}~assigns $b_i\ensuremath {\leftarrow}{\xspace} 0$\:for\:$i=1,\ldots,I-1$\:and\: $b_j\ensuremath {\leftarrow}{\xspace} 1$ for $j=I,\ldots,\nbr$ \label{alg1:decVec3} \State \ensuremath {\mathit{FC}}{\xspace}~computes $\votes\ensuremath {\leftarrow}{\xspace} \sum\limits_{i=1}^{\nbr}w_i\times b_i$ \label{alg1:rep1} \If {$\votes \geq \ensuremath {\mathit{\lambda}}{\xspace}$} $\ensuremath {\mathit{dec}}{\xspace}$ $ \gets $ Channel busy \Else \: $\ensuremath {\mathit{dec}}{\xspace}$ $ \gets $ Channel free \EndIf \EndIf \EndIf \State \ensuremath {\mathit{FC}}{\xspace}~updates $\{\varphi_i\}_{i=1}^\nbr$ and $\{w_i\}_{i=1}^\nbr$ as in Eqs.~\eqref{cred} \& \eqref{weight}\label{alg1:rep3} \Return $\ensuremath {\mathit{dec}}{\xspace}$ \hspace{20pt}\algrule \Statex \textbf{Update after $\mathcal{G}$ Membership Changes or Breakdown}: \If {\ensuremath {\mathit{SU}}{\xspace}(s) join/leave $\mathcal{G}$ or break down in $t_w$} \State The new group $\mathcal{G}'$ forms a new $K'$ using \ensuremath {\mathit{TGECDH}}{\xspace}. \State \ensuremath {\mathit{FC}}{\xspace}~updates \ensuremath {\mathit{\lambda}}{\xspace}~and \ensuremath {\mathit{\pi}}{\xspace}~as \ensuremath {\mathit{\lambda}}{\xspace}'~and \ensuremath {\mathit{\pi}}{\xspace}',~respectively, if required. \State Execute the private sensing with $(K',\ensuremath {\mathit{\lambda}}{\xspace}',\ensuremath {\mathit{\pi}}{\xspace}')$. \EndIf \end{algorithmic} \end{algorithm} $\bullet$~{\em Initialization:} The service operator sets up the value of the energy threshold \ensuremath {\mathit{\tau}}{\xspace}. \ensuremath {\mathit{FC}}{\xspace}~sets up the \ensuremath {\mathit{ECElG}}{\xspace}~crypto parameters, the voting threshold and the users' reputation weights. Initially, all users are considered credible, so the weight vector $\boldsymbol{w}$ is set to all ones.
\ensuremath {\mathit{FC}}{\xspace}~then constructs the table $T$ used in the \ensuremath {\mathit{YM}}{\xspace}~protocol as described in Section~\ref{sub:yme}, with \ensuremath {\mathit{\tau}}{\xspace}~as input, using \ensuremath {\mathit{FC}}{\xspace}'s \ensuremath {\mathit{ECElG}}{\xspace}~public key \ensuremath {\mathit{pk}}{\xspace}. Notice that since the same \ensuremath {\mathit{\tau}}{\xspace}~is always used during different sensing periods, the table $T$ can be precomputed during the {\em Initialization} phase, which considerably reduces the protocol's computational overhead. Then the service operator that manages the network signs $T$ using a digital signature scheme with secret key \ensuremath {\mathit{SK_{DS}}}{\xspace}. This digital signature is used to ensure that \ensuremath {\mathit{FC}}{\xspace}~does not maliciously modify the value of \ensuremath {\mathit{\tau}}{\xspace}~to learn the \ensuremath {\mathit{RSS}}{\xspace}~values of users and thus infer their locations. The service operator then shares the public key \ensuremath {\mathit{PK_{DS}}}{\xspace}~with \ensuremath {\mathit{SU}}{\xspace} s, who use it to verify the integrity of $T$ and thus of \ensuremath {\mathit{\tau}}{\xspace}. \ensuremath {\mathit{SU}}{\xspace} s establish a group key $K$ via \ensuremath {\mathit{TGECDH}}{\xspace}, with which they \ensuremath {\mathit{OPE}}{\xspace}~encrypt their \ensuremath {\mathit{RSS}}{\xspace}~values during the private sensing. \ensuremath {\mathit{FC}}{\xspace}~also establishes a secure channel $\ensuremath {\mathit{chn}}{\xspace}_i$ with each user $\ensuremath {\mathit{SU}}{\xspace}_i$. $\bullet$~{\em Private Sensing:} Each $\ensuremath {\mathit{SU}}{\xspace}_i$ \ensuremath {\mathit{OPE}}{\xspace}~encrypts its $\ensuremath {\mathit{RSS}}{\xspace}_i$ with the group key $K$ and sends the ciphertext $c_i$ to \ensuremath {\mathit{FC}}{\xspace}~over $\ensuremath {\mathit{chn}}{\xspace}_i$.
\ensuremath {\mathit{FC}}{\xspace}~then sorts the ciphertexts as $c_{min} \leq \ldots \leq c_{max}$ (as all $\ensuremath {\mathit{RSS}}{\xspace}_i$s are \ensuremath {\mathit{OPE}}{\xspace}~encrypted under the same $K$) without learning the corresponding \ensuremath {\mathit{RSS}}{\xspace}~values, while the secure channel $\ensuremath {\mathit{chn}}{\xspace}_i$ protects the communication of $\ensuremath {\mathit{SU}}{\xspace}_i$ from other users as well as from outside attackers. \ensuremath {\mathit{FC}}{\xspace}~then initiates \ensuremath {\mathit{YM.ECElG}}{\xspace}~first with the $\ensuremath {\mathit{SU}}{\xspace}_{id_{max}}$~that has the highest \ensuremath {\mathit{RSS}}{\xspace}~value $\ensuremath {\mathit{RSS}}{\xspace}_{max}$. If it is smaller than the energy sensing threshold $\ensuremath {\mathit{\tau}}{\xspace}$, then the channel is free. Otherwise, \ensuremath {\mathit{FC}}{\xspace}~initiates \ensuremath {\mathit{YM.ECElG}}{\xspace}~with the user that has $\ensuremath {\mathit{RSS}}{\xspace}_{min}$. If it is greater than $\ensuremath {\mathit{\tau}}{\xspace}$, then the channel is busy. Otherwise, to make the final decision based on the optimal sensing threshold $\ensuremath {\mathit{\lambda}}{\xspace}$, \ensuremath {\mathit{FC}}{\xspace}~runs \ensuremath {\mathit{YM.ECElG}}{\xspace}~according to a binary-search strategy, which guarantees a decision within at most $\mathcal{O}(\log(\nbr))$ invocations. Note that before participating in \ensuremath {\mathit{YM.ECElG}}{\xspace}, each \ensuremath {\mathit{SU}}{\xspace}~first verifies the integrity of $T$ using the digital signature \ensuremath {\mathit{\sigma}}{\xspace}~that was provided by the service operator, as indicated in Steps~\ref{alg1:verif1}, \ref{alg1:verif2} $\&$ \ref{alg1:all}. A \ensuremath {\mathit{SU}}{\xspace}~that detects a change in the value of $T$ refuses to participate in the sensing to prevent \ensuremath {\mathit{FC}}{\xspace}~from learning any sensitive information regarding its location.
In that case, the system stops and the malicious intent of \ensuremath {\mathit{FC}}{\xspace}~is detected. In Steps~\ref{alg1:decVec1}, \ref{alg1:decVec2} $\&$ \ref{alg1:decVec3} of Algorithm~\ref{alg1}, \ensuremath {\mathit{FC}}{\xspace}~constructs the vector of local decisions of \ensuremath {\mathit{SU}}{\xspace} s after running the private comparisons between \ensuremath {\mathit{\tau}}{\xspace}~and the \ensuremath {\mathit{RSS}}{\xspace}~values. Based on the decision vector $\boldsymbol{\bin}$ and the weights vector $\boldsymbol{w}$ that was computed previously, \ensuremath {\mathit{FC}}{\xspace}~computes $\votes$ in Step \ref{alg1:rep1} using Equation~\ref{fDef}, and then makes the final decision \ensuremath {\mathit{dec}}{\xspace}~using the voting threshold $\ensuremath {\mathit{\lambda}}{\xspace}$. \ensuremath {\mathit{FC}}{\xspace}~then computes the credibility scores and the weights that will be given to all users in the next sensing period. If $\ensuremath {\mathit{SU}}{\xspace}_i$ has a decision $b_i \neq \ensuremath {\mathit{dec}}{\xspace}$, its assigned weight decreases; but if a \ensuremath {\mathit{SU}}{\xspace}~makes the same decision as \ensuremath {\mathit{FC}}{\xspace}, it is assigned the highest weight. The main steps of the private sensing phase are summarized in Figure~\ref{Fig:LP-2PSS}. \begin{figure}[h!] \center \includegraphics[width=0.46\textwidth,height = 9.2cm]{LP-2PSS.pdf} \caption{\ROLPOS's Private Sensing phase} \label{Fig:LP-2PSS} \end{figure} $\bullet$~{\em Update after $\mathcal{G}$ Membership Changes or Breakdown:} At the beginning of $t_w$, if the membership status of $\mathcal{G}$ changes, a new group key is formed via \ensuremath {\mathit{TGECDH}}{\xspace}, and then \ensuremath {\mathit{FC}}{\xspace}~updates \ensuremath {\mathit{\lambda}}{\xspace}. If some \ensuremath {\mathit{SU}}{\xspace} s break down and fail to sense or send their measurements, \ensuremath {\mathit{\lambda}}{\xspace}~must also be updated.
In the new sensing period, Algorithm \ref{alg1} is executed with the new parameters and group key. \subsection*{Choice of digital signature} Choosing the right digital signature scheme depends on the network and user constraints. In the following, we briefly discuss some of the schemes that could be applied in \ROLPOS. One scheme that could be used is {\em RSA} \cite{Rivest1978MOD}, which is one of the first and most popular digital signature schemes. {\em RSA} has a very large signature but offers fast signature verification. However, newer schemes outperform it in terms of signature and key size and/or computational efficiency. Another option is {\em ECDSA} \cite{johnson2001elliptic}, an elliptic curve analogue of the {\em DSA} \cite{fips1994186} digital signature scheme. It provides more compact signatures than its counterparts thanks to the use of Elliptic Curve crypto. It has a moderate speed, though, in terms of signing and verification compared to {\em RSA}. It is more suitable for situations where the communication overhead is the main concern. One-time signatures, e.g. \ensuremath {\mathit{HORS}}{\xspace}~\cite{reyzin2002better} and its variants \cite{neumann2004horse,pieprzyk2004multiple}, are digital signatures that are based on one-way functions without a trapdoor, which makes them much faster than commonly used digital signatures like {\em RSA}. The main drawbacks of this kind of digital signature are the large signature size and the ``one-timed-ness'', which requires a new call to the key generation algorithm for each use. In our context, the latter is not a concern: we sign $T$ only once, so the keys never need to be regenerated. One-time signatures may therefore be the best option when computation speed at the \ensuremath {\mathit{SU}}{\xspace} s is the main concern. The {\em NTRU} signature \cite{hoffstein2003ntrusign} could also be applied here. It provides a tradeoff between signature size and computational efficiency.
Indeed, it has a moderate signature size, larger than that of {\em ECDSA}, but it is faster than both {\em ECDSA} and {\em RSA} in key generation, signing and verification. \section{\LPGW} \label{sec:lpgw} We now present an alternative scheme that we call \LPGW~(location privacy for 3-party spectrum sensing architecture), which offers higher privacy and significantly better performance than \ROLPOS, but at the cost of deploying an additional entity in the network, referred to as Gateway (\ensuremath {\mathit{GW}}{\xspace}) (thus ``3P'' refers to the 3 parties: \ensuremath {\mathit{SU}}{\xspace} s, \ensuremath {\mathit{FC}}{\xspace}, and \ensuremath {\mathit{GW}}{\xspace}). \ensuremath {\mathit{GW}}{\xspace}~enables higher privacy by preventing \ensuremath {\mathit{FC}}{\xspace}~from even learning the order of the encrypted \ensuremath {\mathit{RSS}}{\xspace}~values of \ensuremath {\mathit{SU}}{\xspace} s (as in \ROLPOS). \ensuremath {\mathit{GW}}{\xspace}~itself learns nothing but the outcome of the secure comparison between \ensuremath {\mathit{RSS}}{\xspace}~values and \ensuremath {\mathit{\tau}}{\xspace}, as in \ensuremath {\mathit{YM}}{\xspace}~but using only \ensuremath {\mathit{OPE}}{\xspace}. Thus, no entity learns any information on \ensuremath {\mathit{RSS}}{\xspace}~or~\ensuremath {\mathit{\tau}}{\xspace}~beyond a pairwise secure comparison, which is the minimum information required for a voting-based decision. $\bullet$ {\em Intuition}: The main idea behind \LPGW~is simple yet powerful: we enable \ensuremath {\mathit{GW}}{\xspace}~to privately compare \nbr~distinct \ensuremath {\mathit{OPE}}{\xspace}~encryptions of \ensuremath {\mathit{\tau}}{\xspace}~and the \ensuremath {\mathit{RSS}}{\xspace}~values, which were computed under \nbr~pairwise keys established between \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{SU}}{\xspace} s.
These \ensuremath {\mathit{OPE}}{\xspace}~encrypted pairs permit \ensuremath {\mathit{GW}}{\xspace}~to learn the comparison outcomes without deducing any other information. \ensuremath {\mathit{GW}}{\xspace}~then sends these comparison results to \ensuremath {\mathit{FC}}{\xspace}~to make the final decision. \ensuremath {\mathit{FC}}{\xspace}~learns no information on \ensuremath {\mathit{RSS}}{\xspace}~values and \ensuremath {\mathit{SU}}{\xspace} s cannot obtain the value of \ensuremath {\mathit{\tau}}{\xspace}, which complies with our Security Objectives \ref{obj:SecurityObj-1}. Note that \LPGW~relies {\em only on symmetric cryptography} to guarantee the location privacy of \ensuremath {\mathit{SU}}{\xspace} s. Hence, it is the {\em most computationally efficient and compact} scheme among all alternatives but with an additional entity in the system. \LPGW~is described in Algorithm~\ref{alg2} and outlined below. \begin{algorithm}[h!] \caption{\LPGW~Algorithm}\label{alg2} \begin{algorithmic}[1] \Statex \textbf{Initialization}: Executed only once. \State Service operator sets \ensuremath {\mathit{\tau}}{\xspace}. \State \ensuremath {\mathit{FC}}{\xspace}~sets \ensuremath {\mathit{\lambda}}{\xspace}~and $\boldsymbol{w} \gets \boldsymbol{1}$. \State \ensuremath {\mathit{FC}}{\xspace}~establishes $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}$ with $\ensuremath {\mathit{SU}}{\xspace}_i$, $i=1,\ldots ,\nbr$. \State \ensuremath {\mathit{GW}}{\xspace}~establishes $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}$ with $\ensuremath {\mathit{SU}}{\xspace}_i$, $i=1,\ldots ,\nbr$. \State \ensuremath {\mathit{FC}}{\xspace}~establishes $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}$ with \ensuremath {\mathit{GW}}{\xspace}. 
\State \ensuremath {\mathit{FC}}{\xspace}~computes $\ensuremath {\mathit{\theta}}{\xspace}_{i} \gets \Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}}}$, $i=1,\ldots ,\nbr$ and sends $\{\ensuremath {\mathit{\theta}}{\xspace}_i\}_{i=1}^{n}$ to \ensuremath {\mathit{GW}}{\xspace}. \label{alg1:enc1} \hspace{20pt}\algrule \Statex \textbf{Private Sensing}: Executed every sensing period $t_w$ \State $\ensuremath {\mathit{SU}}{\xspace}_i$ computes $\ensuremath {\mathit{\varsigma}}{\xspace}_{i} \gets \Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}}{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_i}}$, $i = 1,\ldots,\nbr$ and sends $\{\ensuremath {\mathit{\varsigma}}{\xspace}_i\}_{i=1}^{n}$ to \ensuremath {\mathit{GW}}{\xspace}.\label{alg1:enc2} \State \ensuremath {\mathit{GW}}{\xspace}~obtains $\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}} \gets \Dec{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\ensuremath {\mathit{\theta}}{\xspace}_{i}}$ and $\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_i} \gets \Dec{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}}{\ensuremath {\mathit{\varsigma}}{\xspace}_{i}}$, $i = 1,\ldots,\nbr$. 
\For{\texttt{$i = 1,\ldots,\nbr$}} \If {$\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_i} < \OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}}$} $\bin_i \gets 0$\label{alg1:comp} \Else \:$\bin_i \gets 1$ \EndIf \EndFor \State \ensuremath {\mathit{GW}}{\xspace}~computes $\ensuremath {\mathit{\zeta}}{\xspace} \gets \Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\{\bin_i\}_{i=1}^{n}}$ and sends $\ensuremath {\mathit{\zeta}}{\xspace}$ to \ensuremath {\mathit{FC}}{\xspace}. \label{alg1:enc} \State \ensuremath {\mathit{FC}}{\xspace}~decrypts $\ensuremath {\mathit{\zeta}}{\xspace}$ and computes $\votes\ensuremath {\leftarrow}{\xspace} \sum\limits_{i=1}^{\nbr}w_i\times b_i$ \label{alg1:sum} \If {$\votes \geq \ensuremath {\mathit{\lambda}}{\xspace}$} $\ensuremath {\mathit{dec}}{\xspace}$ $ \gets $ Channel busy \Else \: $\ensuremath {\mathit{dec}}{\xspace}$ $ \gets $ Channel free \EndIf \State \ensuremath {\mathit{FC}}{\xspace}~updates $\{\varphi_i\}_{i=1}^\nbr$ and $\{w_i\}_{i=1}^\nbr$ as in Eqs.~\eqref{cred} \& \eqref{weight}\label{upWeight} \Return $\ensuremath {\mathit{dec}}{\xspace}$ \hspace{20pt}\algrule \Statex \textbf{Update after $\mathcal{G}$ Membership Changes or Breakdown}: \If{$\ensuremath {\mathit{SU}}{\xspace}_j$ joins \ensuremath {\mathit{CRN}}{\xspace}} \State $\ensuremath {\mathit{SU}}{\xspace}_j$ establishes $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},j}$ with \ensuremath {\mathit{FC}}{\xspace}~and $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},j}$ with \ensuremath {\mathit{GW}}{\xspace}. \EndIf \If{\ensuremath {\mathit{SU}}{\xspace} s join/leave/breakdown} \State \ensuremath {\mathit{FC}}{\xspace}~updates \ensuremath {\mathit{\lambda}}{\xspace}~as \ensuremath {\mathit{\lambda}}{\xspace}'. 
\State Execute the private sensing with \ensuremath {\mathit{\lambda}}{\xspace}'. \EndIf \end{algorithmic} \end{algorithm} $\bullet$~{\em Initialization:} The service operator and \ensuremath {\mathit{FC}}{\xspace}~set up the spectrum sensing and crypto parameters. Let $(\ensuremath{\mathcal{E}}{\xspace},\ensuremath{\mathcal{D}}{\xspace})$ be the encryption/decryption operations of an IND-CPA secure~\cite{JonathanKatzModernCrytoBook} block cipher (e.g. \aes). \ensuremath {\mathit{FC}}{\xspace}~establishes a secret key with each \ensuremath {\mathit{SU}}{\xspace}~and with \ensuremath {\mathit{GW}}{\xspace}. \ensuremath {\mathit{GW}}{\xspace}~establishes a secret key with each \ensuremath {\mathit{SU}}{\xspace}. \ensuremath {\mathit{FC}}{\xspace}~encrypts \ensuremath {\mathit{\tau}}{\xspace}~with \ensuremath {\mathit{OPE}}{\xspace}~using $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}$, $i=1 \ldots \nbr$. \ensuremath {\mathit{FC}}{\xspace}~then encrypts the \ensuremath {\mathit{OPE}}{\xspace}~ciphertexts with \ensuremath{\mathcal{E}}{\xspace}~using $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}$ and sends these $\ensuremath {\mathit{\theta}}{\xspace}_{i}$s to \ensuremath {\mathit{GW}}{\xspace}, $i=1 \ldots \nbr$. Since these encryptions are done offline at the beginning of the protocol, they do not impact the online private sensing phase. \ensuremath {\mathit{FC}}{\xspace}~may also pre-compute a few extra encrypted values for the case of new users joining the sensing. $\bullet$~{\em Private Sensing:} Each $\ensuremath {\mathit{SU}}{\xspace}_i$~encrypts $\ensuremath {\mathit{RSS}}{\xspace}_i$ with \ensuremath {\mathit{OPE}}{\xspace}~using $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}$, the same key that \ensuremath {\mathit{FC}}{\xspace}~used to \ensuremath {\mathit{OPE}}{\xspace}~encrypt the \ensuremath {\mathit{\tau}}{\xspace}~value.
$\ensuremath {\mathit{SU}}{\xspace}_i$ then encrypts this ciphertext with \ensuremath{\mathcal{E}}{\xspace}~using key $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}$, and sends the final ciphertext $\ensuremath {\mathit{\varsigma}}{\xspace}_{i}$ to \ensuremath {\mathit{GW}}{\xspace}. \ensuremath {\mathit{GW}}{\xspace}~decrypts the $2\nbr$ ciphertexts $\ensuremath {\mathit{\theta}}{\xspace}_{i}$s and $\ensuremath {\mathit{\varsigma}}{\xspace}_{i}$s with \ensuremath{\mathcal{D}}{\xspace}~using $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}$ and $\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}$, which yields the \ensuremath {\mathit{OPE}}{\xspace}~encrypted values. \ensuremath {\mathit{GW}}{\xspace}~then compares each \ensuremath {\mathit{OPE}}{\xspace}~encryption of \ensuremath {\mathit{RSS}}{\xspace}~with its corresponding \ensuremath {\mathit{OPE}}{\xspace}~encryption of \ensuremath {\mathit{\tau}}{\xspace}. Since both were encrypted with the same key, \ensuremath {\mathit{GW}}{\xspace}~can compare them and conclude which one is greater, as in Step~\ref{alg1:comp}. \ensuremath {\mathit{GW}}{\xspace}~stores the outcome of each comparison in a binary vector $\mbox{\boldmath$ \bin$}$, encrypts it, and sends it to \ensuremath {\mathit{FC}}{\xspace}. Finally, \ensuremath {\mathit{FC}}{\xspace}~compares the sum of votes \votes~to the optimal voting threshold \ensuremath {\mathit{\lambda}}{\xspace}~to make the final decision about spectrum availability and updates the reputation scores of the users. $\bullet$~{\em Update after $\mathcal{G}$ Membership Changes or Breakdown:} Each new user joining the sensing simply establishes pairwise secret keys with \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{GW}}{\xspace}. This has no impact on existing users.
If some users leave the network, \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{GW}}{\xspace}~remove their secret keys, which also has no impact on existing users. In both cases, and also in the case of a breakdown or failure, \ensuremath {\mathit{\lambda}}{\xspace}~must be updated accordingly. \begin{figure}[h!] \center \includegraphics[width=0.45\textwidth]{LP-3PSS.pdf} \caption{\LPGW~protocol, $\ensuremath {\mathit{\theta}}{\xspace}_{i} \gets \Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}}}$, $\ensuremath {\mathit{\varsigma}}{\xspace}_{i} \gets \Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{GW}}{\xspace},i}}{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_i}}$ and $\ensuremath {\mathit{\zeta}}{\xspace} \gets \Enc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},\ensuremath {\mathit{GW}}{\xspace}}}{\{\bin_i\}_{i=1}^{n}}$} \label{yme} \end{figure} \begin{myremark} A malicious \ensuremath {\mathit{FC}}{\xspace}~in \LPGW, following Security Assumption~\ref{asm:asm1}, may want to maliciously modify the value of \ensuremath {\mathit{\tau}}{\xspace}. However, since \ensuremath {\mathit{GW}}{\xspace}~is the one that performs the comparisons between the \ensuremath {\mathit{RSS}}{\xspace}~values and \ensuremath {\mathit{\tau}}{\xspace}, maliciously changing \ensuremath {\mathit{\tau}}{\xspace}~has almost no benefit to \ensuremath {\mathit{FC}}{\xspace}, as it does not have access to individual comparison outcomes.
This makes \LPGW~robust against such a malicious \ensuremath {\mathit{FC}}{\xspace}. \end{myremark} It is worth reiterating that \ensuremath {\mathit{GW}}{\xspace}~only needs to perform simple comparison operations between the \ensuremath {\mathit{RSS}}{\xspace}~values of the \ensuremath {\mathit{SU}}{\xspace} s and the energy sensing threshold \ensuremath {\mathit{\tau}}{\xspace}~of \ensuremath {\mathit{FC}}{\xspace}, as explained earlier. Thus, such an entity does not interfere with the spectrum sensing process in the \ensuremath {\mathit{CRN}}{\xspace}. Moreover, it does not need to be provided with large computational resources, as these comparisons are very simple and fast to perform. It could be a standalone entity, one of the \ensuremath {\mathit{SU}}{\xspace} s dedicated to performing the tasks of \ensuremath {\mathit{GW}}{\xspace}, or even secure hardware deployed inside \ensuremath {\mathit{FC}}{\xspace}~itself, as we discuss next. This gives multiple options to system designers. If FCC regulations allow introducing an additional entity to the \ensuremath {\mathit{CRN}}{\xspace}, then \ensuremath {\mathit{GW}}{\xspace}~could be deployed without any concern. If not, system designers could consider introducing secure hardware within \ensuremath {\mathit{FC}}{\xspace}~or dedicating one of the \ensuremath {\mathit{SU}}{\xspace} s to perform the tasks of \ensuremath {\mathit{GW}}{\xspace}. \subsection*{\LPGW~with Secure Hardware} \LPGW~could also be implemented in a slightly different way by relying on a {\em secure hardware} module deployed within \ensuremath {\mathit{FC}}{\xspace}~itself instead of using a dedicated gateway. All the computation that is performed by \ensuremath {\mathit{GW}}{\xspace}~could be delegated to this hardware.
This {\em secure hardware}, which is referred to as a {\em secure co-processor} (\ensuremath {\mathit{SCPU}}{\xspace}) or as a {\em trusted platform module} ({\em TPM}) in the literature, is physically shielded from penetration, and the I/O interface to the module is the only way to access its internal state \cite{yee1995secure}. An \ensuremath {\mathit{SCPU}}{\xspace}~that meets the FIPS 140-2 level 4 \cite{fips2001140} physical security requirements guarantees that \ensuremath {\mathit{FC}}{\xspace}~cannot tamper with its computation. Any attempt by \ensuremath {\mathit{FC}}{\xspace}~to tamper with this \ensuremath {\mathit{SCPU}}{\xspace}~that somehow penetrates the shield leads to the automatic erasure of sensitive memory areas containing critical secrets. The {\em SCPU} may provide several benefits to the network. First, there is no longer a need to add a new standalone entity, managed by a third party, to the network, as was the case with \ensuremath {\mathit{GW}}{\xspace}. Also, despite its high cost, having an \ensuremath {\mathit{SCPU}}{\xspace}~deployed within \ensuremath {\mathit{FC}}{\xspace}~itself may reduce the communication latency incurred by having a gateway that needs to communicate with \ensuremath {\mathit{FC}}{\xspace}~and with every user in the network. In terms of performance, it was proven in \cite{bajaj2014trusteddb} that, at a large scale, computation inside an \ensuremath {\mathit{SCPU}}{\xspace}~is orders of magnitude cheaper than equivalent cryptography performed on unsecured server hardware, despite the overall greater acquisition cost of secure hardware. All of this makes an {\em SCPU} a good alternative to a dedicated gateway in the network, thanks to the performance and security guarantees that it provides.
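Whether performed by a dedicated gateway or relayed to an {\em SCPU}, the computation in question is just the pairwise comparison of \ensuremath {\mathit{OPE}}{\xspace}~ciphertexts. The following is a minimal Python sketch of that step, for illustration only: a toy order-preserving mapping stands in for a real \ensuremath {\mathit{OPE}}{\xspace}~scheme, the outer symmetric encryption layers of \LPGW~are omitted, and all threshold and \ensuremath {\mathit{RSS}}{\xspace}~values are hypothetical.

```python
import random

def toy_ope(seed, domain_max=255, ciph_max=2**20):
    """Toy order-preserving encryption, for illustration only: a random
    strictly increasing table from [0, domain_max] into [0, ciph_max]."""
    rng = random.Random(seed)
    table = sorted(rng.sample(range(ciph_max + 1), domain_max + 1))
    return lambda m: table[m]

tau = 130                       # FC's energy sensing threshold (hypothetical)
rss = [120, 135, 128, 200]      # hypothetical RSS readings of four SUs

bits = []
for i, value in enumerate(rss):
    enc = toy_ope(seed=i)       # per-SU key k_{FC,i}, modeled by the seed
    theta_i = enc(tau)          # OPE ciphertext of tau under SU i's key
    varsigma_i = enc(value)     # OPE ciphertext of RSS_i under the same key
    bits.append(int(varsigma_i >= theta_i))  # the comparator's pairwise test

# Order preservation makes ciphertext comparisons match plaintext ones.
assert bits == [int(v >= tau) for v in rss]
print(bits)  # → [0, 1, 0, 1]
```

The comparator never sees a plaintext \ensuremath {\mathit{RSS}}{\xspace}~value; it only outputs the bit vector that is then re-encrypted for \ensuremath {\mathit{FC}}{\xspace}.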
\section{Security Analysis} \label{sec:SecAnalysis} We first describe the underlying security primitives on which our schemes rely, and then precisely quantify the information leakage of our schemes, proving that they achieve Security Objective~\ref{obj:SecurityObj-1}. At the end of this section, we discuss the security of the modified versions of our schemes. \begin{fact}\label{fact:IdealSecOPE} An \ensuremath {\mathit{OPE}}{\xspace}~is \emph{indistinguishable under ordered chosen-plaintext attack (IND-OCPA)} \cite{boldyreva2009order} if it has no leakage except the order of ciphertexts (e.g., \cite{popa2013ideal,kerschbaum2014optimal}). \end{fact} \begin{fact}\label{fact:YME} \ensuremath {\mathit{YM.ECElG}}{\xspace}~is secure by Definition \ref{def:YM} if the \ensuremath {\mathit{ECElG}}{\xspace}~cryptosystem~\cite{koblitz1987elliptic}, whose security relies on the \ensuremath {\mathit{ECDLP}}{\xspace}~(Definition \ref{def:ECDLP}), is secure. \end{fact} \begin{fact}\label{fact:SecyreGHD} \ensuremath {\mathit{TGECDH}}{\xspace}~is secure with key independence by Definition \ref{def:KeyIndependence} if \ensuremath {\mathit{ECDLP}}{\xspace}~is intractable by Definition \ref{def:ECDLP}. \end{fact} Let \ensuremath{\mathcal{E}}{\xspace}~and \ensuremath{\mathit{OPE}.\mathcal{E}}~be {\em IND-CPA secure}~\cite{JonathanKatzModernCrytoBook} and {\em IND-OCPA secure} symmetric ciphers, respectively. $(\{\ensuremath {\mathit{RSS}}{\xspace}_{i}^{j}\}_{i=1,j=1}^{n,\ell},\tau)$ denote the \ensuremath {\mathit{RSS}}{\xspace}~values of each $\ensuremath {\mathit{SU}}{\xspace}_i$ and the threshold \ensuremath {\mathit{\tau}}{\xspace}~of \ensuremath {\mathit{FC}}{\xspace}~for sensing periods $j=1,\ldots,\ell$ in a group $\mathcal{G}$.
$(\ensuremath{\mathcal{L}_1},\ensuremath{\mathcal{L}_2},\ensuremath{\mathcal{L}_3})$ are history lists, which include all values learned by entities $\ensuremath {\mathit{SU}}{\xspace}_i$, \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{GW}}{\xspace}, respectively, during the execution of the protocol for all sensing periods and membership statuses of $\mathcal{G}$. Vector \ensuremath{\vec{V}}~is a list of IND-CPA secure values transmitted over secure (authenticated) channels. \ensuremath{\vec{V}}~may be publicly observed by all entities, including an external attacker \ensuremath{\mathcal{A}}. Hence, \ensuremath{\vec{V}}~is a part of all lists $(\ensuremath{\mathcal{L}_1},\ensuremath{\mathcal{L}_2},\ensuremath{\mathcal{L}_3})$. Values (jointly) generated by an entity, such as cryptographic keys, or variables stored only by the entity itself (e.g., \ensuremath {\mathit{\lambda}}{\xspace}, \ensuremath {\mathit{\pi}}{\xspace}) are not included in history lists for the sake of brevity. Moreover, information exchanged during the execution of the \ensuremath {\mathit{YM.ECElG}}{\xspace}~protocol is not included in history lists, since it does not leak any information by Fact \ref{fact:YME}. \begin{mytheorem} \label{the:Security:2P} Under Security Assumption \ref{asm:asm1}, \ROLPOS~leaks no information on $(\{\ensuremath {\mathit{RSS}}{\xspace}_{i}^{j}\}_{i=1,j=1}^{n,\ell},\tau)$ beyond IND-CPA secure $\{\ensuremath{\vec{V}}^{j}\}_{j=1}^{\ell}$, the IND-OCPA secure order of the tuple $(\{\ensuremath{\vec{Z}}^{j}=\OEnc{K^{j}}{\ensuremath {\mathit{RSS}}{\xspace}_{1}^{j}},\ldots,\OEnc{K^{j}}{\ensuremath {\mathit{RSS}}{\xspace}_{n}^{j}}\}_{j=1}^{\ell},\ensuremath {\mathit{\tau}}{\xspace})$ and $\{b_{i}^{j}\}_{i=1,j=1}^{n,\ell}$ to \ensuremath {\mathit{FC}}{\xspace}. \end{mytheorem} \noindent {\em Proof:} $\ensuremath{\vec{V}}^{j}=\{\ensuremath {\mathit{chn}}{\xspace}_{i}^{j}\}_{i=1,j=1}^{n,\ell}$ at Step 6 of Algorithm \ref{alg1}.
History lists are as follows for each sensing period $j=1,\ldots,\ell$: \begin{eqnarray*} \label{eq:HistorList3P} \ensuremath{\mathcal{L}_1}=\ensuremath{\vec{V}}^{j},~~~\ensuremath{\mathcal{L}_2} = (\{b_{i}^{j}\}_{i=1}^{n},\ensuremath{\vec{V}}^{j},\ensuremath{\vec{Z}}^{j}), \end{eqnarray*} where $\{b_{i}^{j}\}_{i=1}^{n}$ are the outcomes of the \ensuremath {\mathit{YM.ECElG}}{\xspace}~protocol (Steps~\ref{alg1:umax}, \ref{alg1:umin} \& \ref{alg1:ui} of Algorithm \ref{alg1}). By Fact \ref{fact:YME}, the \ensuremath {\mathit{YM.ECElG}}{\xspace}~protocol leaks no information beyond $\{b_{i}^{j}\}_{i=1}^{n}$ to \ensuremath {\mathit{FC}}{\xspace}~and no information to anyone else. Variables in $(\ensuremath{\mathcal{L}_1},\ensuremath{\mathcal{L}_2})$ are IND-CPA and IND-OCPA secure, and therefore leak no information beyond the order of tuples in $\ensuremath{\vec{Z}}^{j}$ to \ensuremath {\mathit{FC}}{\xspace}~by Fact~\ref{fact:IdealSecOPE}. Any membership status update on $\mathcal{G}$ requires an execution of the \ensuremath {\mathit{TGECDH}}{\xspace}~protocol, which generates a new group key $\ensuremath{\bar{K}}^{j}$. By Fact \ref{fact:SecyreGHD}, \ensuremath {\mathit{TGECDH}}{\xspace}~guarantees the key independence property (Definition \ref{def:KeyIndependence}), and therefore $\ensuremath{\bar{K}}^{j}$ is only available to the members of the new group and is independent from previous keys. Hence, history lists $(\ensuremath{\mathcal{L}_1},\ensuremath{\mathcal{L}_2})$ are computed identically as described above for the new membership status of $\mathcal{G}$ but with $\ensuremath{\bar{K}}^{j}$, and are IND-CPA secure and IND-OCPA secure. Using a digital signature gives \ensuremath {\mathit{SU}}{\xspace} s the ability to learn the intentions of \ensuremath {\mathit{FC}}{\xspace}~and to detect whether it is trying to locate them.
Since no \ensuremath {\mathit{SU}}{\xspace}~wants its location to be revealed, \ensuremath {\mathit{SU}}{\xspace} s will simply refuse to participate in the sensing once they detect, by verifying the signed messages, malicious activity of \ensuremath {\mathit{FC}}{\xspace}. The only way that \ensuremath {\mathit{FC}}{\xspace}~can learn the location of an \ensuremath {\mathit{SU}}{\xspace}~in this case is when this \ensuremath {\mathit{SU}}{\xspace}~continues to participate in the sensing even after detecting the malicious intent of \ensuremath {\mathit{FC}}{\xspace}. \hfill$\square$ \begin{mytheorem} \label{the:Security:3P} Under Security Assumption \ref{asm:asm1}, \LPGW~leaks no information on $(\{\ensuremath {\mathit{RSS}}{\xspace}_{i}^{j}\}_{i=1,j=1}^{n,\ell},\tau)$ beyond IND-CPA secure $\{\ensuremath{\vec{V}}^{j}\}_{j=1}^{\ell}$, the IND-OCPA secure pairwise order $\{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_{i}^{j}},$ $\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}}\}_{i=1,j=1}^{n,\ell}$ to \ensuremath {\mathit{GW}}{\xspace}~and $\{b_{i}^{j}\}_{i=1,j=1}^{n,\ell}$ to \ensuremath {\mathit{FC}}{\xspace}. \end{mytheorem} \noindent {\em Proof:} $\ensuremath{\vec{V}}^{j}=\{\ensuremath {\mathit{\theta}}{\xspace}_{i}^{j},\ensuremath {\mathit{\varsigma}}{\xspace}_{i}^{j},\ensuremath {\mathit{\zeta}}{\xspace}^{j}\}_{i=1,j=1}^{n,\ell}$, where $\{\ensuremath {\mathit{\theta}}{\xspace}_{i}^{j}\}_{i=1,j=1}^{n,\ell}$ and $\{\ensuremath {\mathit{\varsigma}}{\xspace}_{i}^{j},\ensuremath {\mathit{\zeta}}{\xspace}^{j}\}_{i=1,j=1}^{n,\ell}$ are generated at the initialization and private sensing phases in Algorithm~\ref{alg2}, respectively.
History lists are as follows for each sensing period $j=1,\ldots,\ell$: \begin{eqnarray*} \label{eq:HistorList3P} \ensuremath{\mathcal{L}_1} & = & \ensuremath{\vec{V}}^{j},~~~\ensuremath{\mathcal{L}_2} = (\{b_{i}^{j}\}_{i=1,j=1}^{n,\ell},\ensuremath{\vec{V}}^{j}), \\ \ensuremath{\mathcal{L}_3} & = & (\{\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{RSS}}{\xspace}_{i}^{j}},\OEnc{\ensuremath {\mathit{k}}{\xspace}_{\ensuremath {\mathit{FC}}{\xspace},i}}{\ensuremath {\mathit{\tau}}{\xspace}}\}_{i=1,j=1}^{n,\ell},\ensuremath{\vec{V}}^{j},\\& &\{b_{i}^{j}\}_{i=1,j=1}^{n,\ell}) \end{eqnarray*} Variables in $(\ensuremath{\mathcal{L}_1},\ensuremath{\mathcal{L}_2},\ensuremath{\mathcal{L}_3})$ are IND-CPA secure and IND-OCPA secure, and therefore leak no information beyond the pairwise order of ciphertexts to \ensuremath {\mathit{GW}}{\xspace}~by Fact \ref{fact:IdealSecOPE}. Any membership status update on $\mathcal{G}$ requires an authenticated channel establishment or removal for joining or leaving members, whose private keys are independent of each other. Hence, history lists $(\ensuremath{\mathcal{L}_1},\ensuremath{\mathcal{L}_2},\ensuremath{\mathcal{L}_3})$ are computed identically as described above for the new membership status of $\mathcal{G}$, and are IND-CPA secure and IND-OCPA secure.\hfill$\square$ \begin{mycorollary} \label{Cor:SecurityObjectives} Theorem \ref{the:Security:2P} and Theorem \ref{the:Security:3P} guarantee that, in our schemes, \ensuremath {\mathit{RSS}}{\xspace}~values and \ensuremath {\mathit{\tau}}{\xspace}~are IND-OCPA secure for all sensing periods and membership changes. Hence, our schemes achieve Security Objective~\ref{obj:SecurityObj-1}. \end{mycorollary} \vspace{-5mm} \subsection{Discussion about {\em SCPU}-based \LPGW's security} The security of {\em SCPU}-based \LPGW~reduces to that of the {\em SCPU} that is used.
Since no direct communication exists between \ensuremath {\mathit{FC}}{\xspace}~and \ensuremath {\mathit{SU}}{\xspace} s, the only way for \ensuremath {\mathit{FC}}{\xspace}~to learn the \ensuremath {\mathit{RSS}}{\xspace}~values of \ensuremath {\mathit{SU}}{\xspace} s is by compromising the {\em SCPU}. A successful attempt by \ensuremath {\mathit{FC}}{\xspace}~to break into this secure hardware would give it the secret keys that were used to \ensuremath {\mathit{OPE}}{\xspace}~encrypt the \ensuremath {\mathit{SU}}{\xspace} s' \ensuremath {\mathit{RSS}}{\xspace}~values, allowing it to decrypt those values and learn the \ensuremath {\mathit{SU}}{\xspace} s' locations. However, as mentioned earlier, an \ensuremath {\mathit{SCPU}}{\xspace}~that complies with the physical security requirements of FIPS 140-2 level 4 \cite{fips2001140} should guarantee that such a breach does not happen. Thus, \ensuremath {\mathit{FC}}{\xspace}~should not be able to retrieve the data in the {\em SCPU} even though the latter is deployed inside the malicious \ensuremath {\mathit{FC}}{\xspace}~itself. \subsection{Discussion about collusion between different entities} We also investigate how our schemes perform under collusion. We discuss the different collusion scenarios for each proposed scheme separately. For \ROLPOS, if multiple \ensuremath {\mathit{SU}}{\xspace} s collude to learn another \ensuremath {\mathit{SU}}{\xspace}'s location information, their collusion can only allow them to learn the IND-CPA secure values \ensuremath{\vec{V}}, which contain the \ensuremath {\mathit{OPE}}{\xspace}~encrypted \ensuremath {\mathit{RSS}}{\xspace} s transmitted over the authenticated secure channel between the target \ensuremath {\mathit{SU}}{\xspace}~and \ensuremath {\mathit{FC}}{\xspace}.
This means that collusion among \ensuremath {\mathit{SU}}{\xspace} s does not allow them to learn the \ensuremath {\mathit{RSS}}{\xspace}~measurements of other \ensuremath {\mathit{SU}}{\xspace} s and, thus, not their locations either. The second scenario is when \ensuremath {\mathit{FC}}{\xspace}~colludes with some \ensuremath {\mathit{SU}}{\xspace} s to localize other \ensuremath {\mathit{SU}}{\xspace} s. In this case, \ensuremath {\mathit{FC}}{\xspace}~will have access to the group key $K$ used by \ensuremath {\mathit{SU}}{\xspace} s to encrypt their \ensuremath {\mathit{RSS}}{\xspace}~measurements. Only in this case would \ensuremath {\mathit{FC}}{\xspace}~be able to learn \ensuremath {\mathit{SU}}{\xspace} s' locations. Therefore, \ROLPOS~is robust against collusion among compromised \ensuremath {\mathit{SU}}{\xspace} s, but assumes that \ensuremath {\mathit{FC}}{\xspace}~cannot collude with \ensuremath {\mathit{SU}}{\xspace} s. Similar reasoning applies to \LPGW. Collusion among \ensuremath {\mathit{SU}}{\xspace} s does not allow them to infer other \ensuremath {\mathit{SU}}{\xspace} s' locations. And if \ensuremath {\mathit{SU}}{\xspace} s collude with \ensuremath {\mathit{GW}}{\xspace}, they can only learn the \ensuremath {\mathit{OPE}}{\xspace}~encrypted \ensuremath {\mathit{RSS}}{\xspace}~measurements of the \ensuremath {\mathit{SU}}{\xspace} s but nothing more, as each \ensuremath {\mathit{SU}}{\xspace}~\ensuremath {\mathit{OPE}}{\xspace}~encrypts its \ensuremath {\mathit{RSS}}{\xspace}~measurement with its own secret key. Also, collusion between \ensuremath {\mathit{FC}}{\xspace}~and some \ensuremath {\mathit{SU}}{\xspace} s cannot reveal the \ensuremath {\mathit{RSS}}{\xspace}~measurements of other \ensuremath {\mathit{SU}}{\xspace} s, as the latter send their \ensuremath {\mathit{OPE}}{\xspace}~encrypted \ensuremath {\mathit{RSS}}{\xspace} s through their authenticated channels established individually with the \ensuremath {\mathit{GW}}{\xspace}.
This prevents colluding \ensuremath {\mathit{SU}}{\xspace} s and \ensuremath {\mathit{FC}}{\xspace}~from accessing the private information of other \ensuremath {\mathit{SU}}{\xspace} s and subsequently localizing them. Thus, for \LPGW, only collusion between \ensuremath {\mathit{GW}}{\xspace}~and \ensuremath {\mathit{FC}}{\xspace}~could reveal the \ensuremath {\mathit{RSS}}{\xspace}~measurements of all \ensuremath {\mathit{SU}}{\xspace} s, as \ensuremath {\mathit{FC}}{\xspace}~has the secret keys that were used by \ensuremath {\mathit{SU}}{\xspace} s to \ensuremath {\mathit{OPE}}{\xspace}~encrypt their \ensuremath {\mathit{RSS}}{\xspace}~values before sending them to \ensuremath {\mathit{GW}}{\xspace}. However, this specific collusion scenario could be dealt with, for example, by deploying secure hardware within \ensuremath {\mathit{FC}}{\xspace}~to play the role of \ensuremath {\mathit{GW}}{\xspace}. The inherent nature of such hardware prevents \ensuremath {\mathit{FC}}{\xspace}~from accessing it and colluding with it. We explained this in Section~\ref{sec:lpgw}. This proves that \LPGW~is not only robust against collusion among \ensuremath {\mathit{SU}}{\xspace} s themselves, but also against collusion between \ensuremath {\mathit{FC}}{\xspace}~and compromised \ensuremath {\mathit{SU}}{\xspace} s. \begin{table*}[ht!]
\scriptsize \centering \caption{Computational overhead comparison} \label{tab:Table2} \resizebox{\textwidth}{!}{% \renewcommand{\arraystretch}{1.25}{ \begin{tabular}{||c||c|c|c|c||} \hline \multicolumn{1}{||c||}{\multirow{2}{*}{\textbf{\em Scheme}}} & \multicolumn{4}{|c||}{\textbf{Computation}} \\ \cline{2-5} \multicolumn{1}{||c||}{} & \textbf{ {\em FC}} & \multicolumn{2}{c|}{\textbf{ {\em SU}}} & \textbf{ {\em GW}}\\ \hline \hline \multicolumn{1}{||c||}{\textbf{\ROLPOS}} & $\gam/2 \cdot(2+log\:\nbr)\cdot (PMulQ + PAddQ + \sqrt{\delta} \cdot \pol)$ & \multicolumn{2}{c|}{$(4\gam-6)\cdot PAddQ+\ensuremath {\mathit{OPE}}{\xspace} +\ensuremath {\mathit{\mu}}{\xspace}\cdot(2\:log\:\nbr +2)\cdot PMulQ$ } & - \\ \hline \multicolumn{1}{||c||}{\textbf{\ensuremath {\mathit{LPOS}}{\xspace}}} & $1/2 \cdot(2+log\:\nbr)\cdot\gam \cdot|p|\cdot Mulp$ & \multicolumn{2}{c|}{$(2\gam\cdot |p|+2\gam)\cdot Mulp+\ensuremath {\mathit{OPE}}{\xspace} +2\ensuremath {\mathit{\mu}}{\xspace}\cdot log\: \nbr \cdot PMulQ$} & - \\ \hline \multicolumn{1}{||c||}{\ensuremath {\mathit{ECEG}}{\xspace}} &$PMulQ + PAddQ + \sqrt{\nbr\cdot \delta} \cdot \pol$ & $\qquad 2PMulQ +PAddQ \qquad$ & $(\nbr-2)\cdot PAddQ$ & - \\ \hline \multicolumn{1}{||c||}{\ensuremath {\mathit{PPSS}}{\xspace}} & $H + (\nbr+2) \cdot Mulp + (2^{\gam-1}\cdot\nbr + 2) \cdot Expp$ & \multicolumn{2}{c|}{$H + 2Expp + Mulp$} & -\\ \hline \hline \hline \multicolumn{1}{||c||}{\textbf{\LPGW}} & $\ensuremath{\mathcal{D}}{\xspace} + \ensuremath {\mathit{\beta}}{\xspace}(t)\cdot(\ensuremath{\mathcal{E}}{\xspace}+\ensuremath {\mathit{OPE}}{\xspace}_E)$ & \multicolumn{2}{c|}{$\ensuremath {\mathit{OPE}}{\xspace}_E + \ensuremath{\mathcal{E}}{\xspace}$ }& $\nbr \cdot\ensuremath{\mathcal{D}}{\xspace} + \ensuremath{\mathcal{E}}{\xspace}$ \\ \hline \multicolumn{1}{||c||}{{\em PDAFT}{\xspace}} & $2Exp\ensuremath {\mathit{N}}{\xspace}^2 + Inv\ensuremath {\mathit{N}}{\xspace}^2 + \ensuremath {\mathit{y}}{\xspace} \cdot Mul\ensuremath 
{\mathit{N}}{\xspace}^2$ &\multicolumn{2}{c|}{$ 2Exp\ensuremath {\mathit{N}}{\xspace}^2 + Mul\ensuremath {\mathit{N}}{\xspace}^2$ } & $\nbr\cdot Mul\ensuremath {\mathit{N}}{\xspace}^2$\\ \hline \end{tabular}}} \begin{flushleft} \scriptsize {\doublespacing \textbf{(i) Variables:} $\ensuremath {\mathit{\kappa}}{\xspace}$: security parameter, $\ensuremath {\mathit{N}}{\xspace}$: modulus in Paillier, $p$: modulus of El Gamal, $H$: cryptographic hash operation, $K$: secret group key of \ensuremath {\mathit{OPE}}{\xspace}. $Expu$ and $Mulu$ denote a modular exponentiation and a modular multiplication over modulus $u$, respectively, where $u \in \{\ensuremath {\mathit{N}}{\xspace}, \ensuremath {\mathit{N}}{\xspace}^2, p\}$. $Inv\ensuremath {\mathit{N}}{\xspace}^2$: modular inversion over $\ensuremath {\mathit{N}}{\xspace}^2$, $PMulQ$: point multiplication of order $Q$, $PAddQ$: point addition of order $Q$. $\ensuremath {\mathit{y}}{\xspace}$: number of servers needed for decryption in {\em PDAFT}{\xspace}. \textbf{(ii) Parameter sizes:} For a security parameter $\kappa = 80$, the parameter sizes suggested by {\em NIST 2012} are: $|\ensuremath {\mathit{N}}{\xspace}| = 1024$, $|p| = 1024$, $|Q|=192$ as indicated in \cite{keylength}. \textbf{(iii) YM.ECElGamal:} The communication cost for one comparison is $4\gam \cdot |Q|$. The total computational cost of the scheme for one comparison is $\gam\cdot (PMulQ+5PAddQ+\sqrt{\delta} \cdot \pol)-6PAddQ$. \textbf{(iv) ECEG:} The decryption of the aggregated message in \ensuremath {\mathit{ECEG}}{\xspace}~is done by solving the constrained ECDLP problem on a small plaintext space, similarly to \cite{li2012location}, via Pollard's Lambda algorithm, which requires $O(\sqrt{n \cdot\delta})\cdot \pol$ computation and $O(log(n\delta))$ storage \cite{menezes2010handbook}, where $\delta = b-a$ if $RSS \in [a,b]$ and \pol \;is the number of point operations in Pollard's Lambda algorithm, which varies depending on the implementation used. For \ensuremath {\mathit{SU}}{\xspace}'s overhead, the left column shows the cost for a normal \ensuremath {\mathit{SU}}{\xspace}~in \ensuremath {\mathit{ECEG}}{\xspace}~and the right column shows the cost of the \ensuremath {\mathit{SU}}{\xspace}~that plays the role of a gateway in \ensuremath {\mathit{ECEG}}{\xspace}. \textbf{(v) TGECDH}: It permits the alteration of group membership (i.e., join/leave) with, on average, $\mathcal{O}(log(n))$ communication and computation (i.e., ECC scalar multiplications) \cite{steiner1996diffie}. \textbf{(vi) OPE}: We rely on the \ensuremath {\mathit{OPE}}{\xspace}~scheme proposed by Boldyreva~\cite{boldyreva2009order} for our evaluation because of its popularity and public implementation, but our schemes can use {\em {any secure}} \ensuremath {\mathit{OPE}}{\xspace}~scheme (e.g.,~\cite{boldyreva2009order,popa2013ideal,kerschbaum2014optimal}) as a building block. \textbf{(vii) \ensuremath{\mathcal{E}}{\xspace}}: We rely on \aes~\cite{daemen1999aes}\footnote{AES is a symmetric block cipher standardized by the U.S. government and widely considered secure.} as our (\ensuremath{\mathcal{E}}{\xspace},\ensuremath{\mathcal{D}}{\xspace}) for our cost analysis.
} \end{flushleft} \end{table*} \section{Performance Evaluation} \label{sec:PerformanceAnalysis} We now evaluate our proposed schemes, \ROLPOS~and \LPGW, by comparing \ROLPOS~to its predecessor \ensuremath {\mathit{LPOS}}{\xspace}~\cite{grissa2015location} and to \ensuremath {\mathit{ECEG}}{\xspace}~and \ensuremath {\mathit{PPSS}}{\xspace}, as these schemes are all designed for the sensing architecture without a gateway, and by comparing \LPGW~to {\em PDAFT}{\xspace}, as both are designed for the sensing architecture with a gateway. \subsection{Existing Approaches: \ensuremath {\mathit{PPSS}}{\xspace}, \ensuremath {\mathit{ECEG}}{\xspace}, and {\em PDAFT}{\xspace}} \ensuremath {\mathit{PPSS}}{\xspace}~\cite{li2012location} uses secret sharing and the Privacy Preserving Aggregation (PPA) process proposed in~\cite{shi2011privacy} to hide the content of specific sensing reports, and uses dummy report injections to cope with the DLP attack. In~\ensuremath {\mathit{ECEG}}{\xspace},~\ensuremath {\mathit{SU}}{\xspace} s encrypt their \ensuremath {\mathit{RSS}}{\xspace} s with \ensuremath {\mathit{FC}}{\xspace}'s \ensuremath {\mathit{ECElG}}{\xspace}~public key. One of the nodes aggregates these ciphertexts, including its own, and then sends the aggregated result to \ensuremath {\mathit{FC}}{\xspace}. The \ensuremath {\mathit{FC}}{\xspace}~then decrypts the aggregated result with its \ensuremath {\mathit{ECElG}}{\xspace}~private key and makes the final decision. {\em PDAFT}{\xspace}~\cite{chen2014pdaft} combines the Paillier cryptosystem~\cite{paillier1999public} with Shamir's secret sharing~\cite{shamir1979share}, where a set of smart meters sense the consumption of different households, encrypt their reports using Paillier, then send them to a gateway. The gateway multiplies these reports and forwards the result to the control center, which selects a number of servers (among all servers) to cooperate in order to decrypt the aggregated result.
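Both \ensuremath {\mathit{ECEG}}{\xspace}~and {\em PDAFT}{\xspace}~follow the same additively homomorphic pattern: reports are encrypted, aggregated under encryption, and only the aggregate is decrypted. A minimal Python sketch of this pattern, for illustration only, uses exponential ElGamal over $\mathbb{Z}_p^*$ with toy, insecure parameters in place of the elliptic-curve and Paillier variants; the exhaustive discrete-log search over the small plaintext space stands in for the Pollard Lambda step noted under Table~\ref{tab:Table2}.

```python
import random

# Toy, insecure parameters for illustration only.
p = 1000003                      # small prime modulus
g = 2                            # public base
x = random.randrange(2, p - 1)   # decryptor's (FC's) private key
h = pow(g, x, p)                 # corresponding public key

def enc(m):
    """Exponential ElGamal: encrypt m as (g^r, g^m * h^r) mod p."""
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p

rss = [120, 135, 128, 200]       # hypothetical 8-bit RSS readings
c1, c2 = 1, 1
for a, b in (enc(v) for v in rss):       # aggregation: component-wise product
    c1, c2 = (c1 * a) % p, (c2 * b) % p

# Remove the randomness: g^(sum of RSS) = c2 * c1^(-x) mod p, then search
# the small plaintext space for the exponent (Pollard Lambda plays this
# role in the real scheme).
g_sum = (c2 * pow(c1, p - 1 - x, p)) % p
total = next(s for s in range(len(rss) * 256) if pow(g, s, p) == g_sum)
assert total == sum(rss)         # only the aggregate is recovered
print(total)  # → 583
```

Note that the decryptor recovers only the sum of the reports, never an individual \ensuremath {\mathit{RSS}}{\xspace}~value; this is exactly the property these aggregation-based schemes rely on.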
{\em PDAFT}{\xspace}~requires a dedicated gateway, just like \LPGW, to collect the encrypted data, and a minimum number of working servers in the control center to decrypt the aggregated result. \subsection{Performance Analysis and Comparison} We focus on communication and computational overheads. We consider the overhead incurred during the sensing operations but not that related to system initialization (e.g., key establishment), where most of the computation and communication is done offline. We model the membership change events in the network as a random process \ensuremath {\mathit{R}}{\xspace}~that takes values in $\{0,1\}$ and whose mean is $\ensuremath {\mathit{\mu}}{\xspace}$. $\ensuremath {\mathit{R}}{\xspace}=0$ means that no change occurred in the network and $\ensuremath {\mathit{R}}{\xspace}=1$ means that some \ensuremath {\mathit{SU}}{\xspace} s left/joined the sensing task. Let \ensuremath {\mathit{\beta}}{\xspace}(t)~be a function that models the average number of \ensuremath {\mathit{SU}}{\xspace} s that join the sensing at the current sensing period $t$. We note that our performance analysis is not based on simulation but rather on measuring the computational and communication overhead of the cryptographic operations that we deploy, such as the \ensuremath {\mathit{YM.ECElG}}{\xspace}~protocol and \ensuremath {\mathit{OPE}}{\xspace}. This gives an idea of how our schemes perform compared to existing approaches in terms of incurred overhead. The execution times of the different primitives and protocols were measured on a laptop running Ubuntu 14.10 with 8GB of RAM and a Core M 1.3 GHz Intel processor, with the cryptographic libraries MIRACL~\cite{miracl}, Crypto++~\cite{crypto++} and {\em Louismullie}'s Ruby implementation of \ensuremath {\mathit{OPE}}{\xspace}~\cite{opeRuby}.
C++ implementations that we developed for the optimized \ensuremath {\mathit{ECElG}}{\xspace}~and the \ensuremath {\mathit{YM.ECElG}}{\xspace}~schemes will be provided for public use. \noindent \textbf{Computational Overhead}: Table \ref{tab:Table2} provides an analytical computational overhead comparison, including the details of variables, parameters and the overhead of building blocks. In \ROLPOS, \ensuremath {\mathit{FC}}{\xspace}~requires only a logarithmic number of \ensuremath {\mathit{YM.ECElG}}{\xspace}~executions. An \ensuremath {\mathit{SU}}{\xspace}~requires a small constant number of {\em Point additions} $PAddQ$, one \ensuremath {\mathit{OPE}}{\xspace}~encryption and a group key update, which is necessary only \ensuremath {\mathit{\mu}}{\xspace}~percent of the time, when there is a change in the network (with only a logarithmic overhead in the number of \ensuremath {\mathit{SU}}{\xspace} s). The signature verification operation, which new \ensuremath {\mathit{SU}}{\xspace} s have to perform upon joining the sensing, is extremely fast in most digital signature schemes compared to the overall system computational overhead that we study in this section. This makes the delay introduced by the digital signature negligible compared to the overall computational overhead incurred by \ROLPOS, regardless of the digital signature scheme used. Thus, we do not consider this delay in our evaluation. This makes \ROLPOS~much more efficient than \ensuremath {\mathit{ECEG}}{\xspace}~and \ensuremath {\mathit{PPSS}}{\xspace}, especially for a relatively large number of \ensuremath {\mathit{SU}}{\xspace} s. In \LPGW, \ensuremath {\mathit{FC}}{\xspace}~requires only a small constant number of $(\ensuremath{\mathcal{D}}{\xspace},\ensuremath{\mathcal{E}}{\xspace},\ensuremath {\mathit{OPE}}{\xspace})$ operations.
An \ensuremath {\mathit{SU}}{\xspace}~requires one \ensuremath {\mathit{OPE}}{\xspace}~and one \ensuremath{\mathcal{E}}{\xspace}~encryption of its \ensuremath {\mathit{RSS}}{\xspace}. Finally, \ensuremath {\mathit{GW}}{\xspace}~requires one \ensuremath{\mathcal{D}}{\xspace}~operation per \ensuremath {\mathit{SU}}{\xspace}~and one \ensuremath{\mathcal{E}}{\xspace}~of vector $\mbox{\boldmath$\bin$}$. All computations in \LPGW~rely on symmetric cryptography only, which makes it {\em the most computationally efficient scheme among all alternatives}, as discussed below. For illustration purposes, we plot in Figure~\ref{fig:perfComp} the system end-to-end computational overhead of the different schemes. Figure~\ref{fig:comp_overhead_wo_gw} shows that \ROLPOS~incurs an overhead that is comparable to that incurred by \ensuremath {\mathit{ECEG}}{\xspace}, but much lower than that incurred by \ensuremath {\mathit{PPSS}}{\xspace}. Figure~\ref{fig:comp_overhead_wo_gw} also shows that \ROLPOS~performs slightly better than its predecessor \ensuremath {\mathit{LPOS}}{\xspace}. Figure~\ref{fig:comp_overhead_w_gw} shows that \LPGW~is several orders of magnitude faster than {\em PDAFT}{\xspace}~for any number of \ensuremath {\mathit{SU}}{\xspace} s. \begin{figure}[h!] \centering \subfigure[Schemes w/o gateway]{\includegraphics[height=4.1cm,width=4.35cm]{compOverheadWOGW.pdf}\label{fig:comp_overhead_wo_gw}} \subfigure[Schemes w/ gateway]{\includegraphics[height=4.1cm,width=4.35cm]{compOverheadGW.pdf}\label{fig:comp_overhead_w_gw}} \caption{Computation Overhead, $\ensuremath {\mathit{\beta}}{\xspace} = 5$, $\ensuremath {\mathit{\mu}}{\xspace}=20\%$ \& $\ensuremath {\mathit{\kappa}}{\xspace} =80$} \label{fig:perfComp} \end{figure} \begin{figure*}[h!]
\centering \subfigure[FC: w/o gateway]{\includegraphics[scale=0.27]{CompBarFCWOGW.pdf}\label{fig:CompBarFCWOGW}} \subfigure[FC: w/ gateway]{\includegraphics[scale=0.27]{CompBarFCGW.pdf}\label{fig:CompBarFCWG}} \subfigure[SU: w/o gateway]{\includegraphics[scale=0.27]{CompBarSUWOGW.pdf}\label{fig:CompBarSUWOGW}} \subfigure[SU: w/ gateway]{\includegraphics[scale=0.27]{CompBarSUGW.pdf}\label{fig:CompBarSUGW}} \subfigure[GW]{\includegraphics[scale=0.27]{CompBarGW.pdf}\label{fig:CompBarGW}} \caption{Computational Overhead Variation with Respect to Security Parameter $\kappa$ for $\nbr=1200$, $\ensuremath {\mathit{\beta}}{\xspace} = 5$ \& $\ensuremath {\mathit{\mu}}{\xspace}=20\%$} \label{fig:perfCompBar} \vspace{-0.1in} \end{figure*} Notice that the key generation and signing operations are performed only once at the beginning of the protocol, as \ensuremath {\mathit{\tau}}{\xspace}, and thus $T$, should be static over time unless a dramatic change in the system environment occurs, which leads to the re-execution of \ROLPOS's initialization phase. That is why these operations are not counted toward the operational overhead of our scheme. We also study the impact of the security parameter, $\kappa$, which controls the encryption key length, by varying it in accordance with {\em NIST}'s recommendations~\cite{keylength}. Note that this assesses the suitability of a scheme for long-term deployment in a stable networking infrastructure. Figure~\ref{fig:perfCompBar}, evaluating the schemes under three values of $\kappa$, shows that our schemes are the least impacted by increasing security parameters. It also shows that \LPGW~is significantly more efficient than {\em PDAFT}{\xspace}~in terms of computation overhead for all entities. Note that our schemes achieve a delay well below the $2$-second computation delay required by the {\em IEEE 802.22 standard} for TV white space management~\cite{IEEEStd80222a}.
This standard requires that the system handle dynamism in the network and that \ensuremath {\mathit{RSS}}{\xspace}~values lie within the interval $[-104,23.5]$~dB and be encoded on 8 bits. Figures~\ref{fig:CompBarFCWOGW} \& \ref{fig:CompBarSUWOGW} show the gain in computational performance of \ROLPOS~over \ensuremath {\mathit{LPOS}}{\xspace}, particularly at high security levels and on the \ensuremath {\mathit{SU}}{\xspace} s' side. \noindent \textbf{Communication Overhead}: Table~\ref{tab:Table3} provides the analytical communication overhead comparison. \ROLPOS~requires $\log(\nbr)$ message exchanges for the \ensuremath {\mathit{YM}}{\xspace}~protocol, \nbr~\ensuremath {\mathit{OPE}}{\xspace}~ciphertexts and $\log(\nbr)$ messages for the group key update (only needed \ensuremath {\mathit{\mu}}{\xspace}~percent of the time, when there is a membership change). If some \ensuremath {\mathit{SU}}{\xspace} s join the \ensuremath {\mathit{CRN}}{\xspace}, \ROLPOS~requires sharing the digital signature $\ensuremath {\mathit{\sigma}}{\xspace}$ of message $T$ and the public key $\ensuremath {\mathit{PK_{DS}}}{\xspace}$ used to construct this signature with the \ensuremath {\mathit{\beta}}{\xspace}~new \ensuremath {\mathit{SU}}{\xspace} s. \LPGW~requires $(\nbr+1)$ \ensuremath{\mathcal{E}}{\xspace}~ciphertexts and a single \ensuremath {\mathit{\zeta}}{\xspace}, which are significantly smaller than the values transmitted by {\em PDAFT}{\xspace}. \begin{table*}[h!]
\scriptsize \centering \caption{Communication overhead comparison} \label{tab:Table3} \renewcommand{\arraystretch}{1.25}{ \begin{tabular}{||c||c||} \hline \multicolumn{1}{||c||}{\textbf{\em Scheme}} & \multicolumn{1}{|c||}{\textbf{Communication}} \\ \hline \hline \multicolumn{1}{||c||}{\textbf{\ROLPOS}} & $ 2\gam \cdot |Q| \cdot (2+log\:\nbr)+ \nbr \cdot \epsilon_{\ensuremath {\mathit{OPE}}{\xspace}}+\ensuremath {\mathit{\mu}}{\xspace}\cdot|Q| \cdot log\:\nbr \;+ \;\ensuremath {\mathit{\beta}}{\xspace}\cdot(\vert \ensuremath {\mathit{\sigma}}{\xspace} \vert + \vert \ensuremath {\mathit{PK_{DS}}}{\xspace} \vert)_{DS}$ \\ \hline \hline \multicolumn{1}{||c||}{\textbf{\ensuremath {\mathit{LPOS}}{\xspace}}} & $ 2\gam \cdot |p| \cdot (2+log\:\nbr)+ \nbr \cdot \epsilon_{\ensuremath {\mathit{OPE}}{\xspace}}+\ensuremath {\mathit{\mu}}{\xspace}\cdot|Q| \cdot log\: \nbr$ \\ \hline \multicolumn{1}{||c||}{\textbf{\ensuremath {\mathit{ECEG}}{\xspace}}} & $4|Q| \cdot (4\nbr+\ensuremath {\mathit{\beta}}{\xspace})$ \\ \hline \multicolumn{1}{||c||}{\ensuremath {\mathit{PPSS}}{\xspace}} & $ |p|\cdot \nbr + \ensuremath {\mathit{\beta}}{\xspace}\cdot\ensuremath {\mathit{\mu}}{\xspace}\cdot |p| \cdot \nbr$ \\ \hline \hline \multicolumn{1}{||c||}{\textbf{\LPGW}} & $ (\nbr + 1 )\cdot \blck$ \\ \hline \multicolumn{1}{||c||}{{\em PDAFT}{\xspace}} & $|\ensuremath {\mathit{N}}{\xspace}|\cdot (2(\nbr+1)+\ensuremath {\mathit{\beta}}{\xspace})$ \\ \hline \end{tabular}} \begin{flushleft} \scriptsize{ $\epsilon_{\ensuremath {\mathit{OPE}}{\xspace}} = 128\:bits$: maximum ciphertext size obtained under \ensuremath {\mathit{OPE}}{\xspace}~encryption, \blck: size of ciphertext under \ensuremath{\mathcal{E}}{\xspace}. $\vert \ensuremath {\mathit{\sigma}}{\xspace} \vert$ and $\vert \ensuremath {\mathit{PK_{DS}}}{\xspace} \vert$ are respectively the size of the digital signature and the public key of the digital signature scheme $DS$. } \end{flushleft} \vspace{-3mm} \end{table*} \begin{figure}[h!] 
\centering \subfigure[Schemes w/o gateway]{\includegraphics[height=4.1cm,width=4.35cm]{commOverheadWOGW.pdf}\label{fig:comm_overhead_wo_gw}} \subfigure[Schemes w/ gateway]{\includegraphics[height=4.1cm,width=4.35cm]{commOverheadGW.pdf}\label{fig:comm_overhead_w_gw}} \caption{Communication Overhead, $\ensuremath {\mathit{\beta}}{\xspace} = 5$, $\ensuremath {\mathit{\mu}}{\xspace}=20\%$ \& $\kappa =80$} \label{fig:perfComm} \end{figure} We further compare our schemes with their counterparts in terms of communication overhead in Figure~\ref{fig:perfComm}. Figure~\ref{fig:comm_overhead_wo_gw} illustrates the communication overhead induced by \ROLPOS~using different digital signature schemes (\ensuremath {\mathit{HORS}}{\xspace}, {\em ECDSA} and {\em NTRU}) compared to the original scheme, \ensuremath {\mathit{LPOS}}{\xspace}, and also to the existing approaches \ensuremath {\mathit{PPSS}}{\xspace}~and \ensuremath {\mathit{ECEG}}{\xspace}. This figure shows that \ROLPOS~is more efficient than \ensuremath {\mathit{PPSS}}{\xspace}~and \ensuremath {\mathit{ECEG}}{\xspace}~thanks to its use of elliptic curve cryptography with smaller key sizes. Using {\em ECDSA} or {\em NTRU} is, as expected, the best option in terms of communication overhead. Even a digital signature scheme with a large signature size like \ensuremath {\mathit{HORS}}{\xspace}~does not prevent \ROLPOS-\ensuremath {\mathit{HORS}}{\xspace}~from performing better than existing approaches, especially for a large number of \ensuremath {\mathit{SU}}{\xspace} s. Figure~\ref{fig:comm_overhead_w_gw} shows that \LPGW~has the smallest communication overhead when compared with {\em PDAFT}{\xspace}, since it relies on symmetric cryptography only.
\ensuremath {\mathit{PPSS}}{\xspace}~and {\em PDAFT}{\xspace}~have a very high communication overhead due to their use of expensive public key encryptions (e.g., Paillier~\cite{paillier1999public}). We also study and show in Figure~\ref{fig:perfCommBar} the impact of the security parameter, $\kappa$, on the communication overhead. Note that the performance gap between our schemes and their counterparts grows drastically when $\kappa$ is increased, showing the suitability of our schemes for long-term deployment. Our schemes possess this desirable feature thanks to their innovative use of compact cryptographic primitives. Figures~\ref{fig:comm_overhead_wo_gw} \& \ref{fig:commBarWOGW} again show how efficient \ROLPOS~is compared to the original \ensuremath {\mathit{LPOS}}{\xspace}~in terms of communication overhead. \begin{figure}[h!] \centering \subfigure[Schemes w/o gateway]{\includegraphics[scale=0.27,height=3.3cm]{CommBarWOGW.pdf}\label{fig:commBarWOGW}} \subfigure[Schemes w/ gateway]{\includegraphics[scale=0.27,height=3.3cm]{CommBarGW.pdf}\label{fig:commBarGW}} \caption{Communication Overhead with varying $\kappa=(80,112,$ $128)$ for $\nbr=1200$, $\ensuremath {\mathit{\beta}}{\xspace} = 5$ \& $\ensuremath {\mathit{\mu}}{\xspace}=20\%$.} \label{fig:perfCommBar} \vspace{-5mm} \end{figure} Overall, our performance analysis indicates that \LPGW~is more efficient than \ROLPOS, and significantly more efficient than all other counterpart schemes in terms of computation and communication overhead, even for increased values of the security parameters, though at the cost of an additional entity. Moreover, Figures~\ref{fig:perfCompBar} \& \ref{fig:perfCommBar} show that our schemes are impacted much less by increased security parameters when compared to existing alternatives, and are therefore ideal for long-term deployment.
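To make the symbolic rows of Table~\ref{tab:Table3} concrete, the following sketch evaluates the \ROLPOS~and \LPGW~formulas numerically. All sizes here (field size $|Q|$, signature and public-key lengths, the $\gam$ round factor, base-2 logarithm) are hypothetical placeholders for illustration, not the exact benchmark values behind our figures:

```python
import math

EPS_OPE = 128  # maximum OPE ciphertext size in bits, as stated under Table 3

def rolpos_bits(n, q_bits, gam, mu, beta, sig_bits, pk_bits):
    """ROLPOS row of Table 3: YM-protocol exchanges, n OPE ciphertexts,
    group-key-update messages (a mu fraction of the time), plus the
    signature material shared with the beta newly joined SUs."""
    return (2 * gam * q_bits * (2 + math.log2(n))
            + n * EPS_OPE
            + mu * q_bits * math.log2(n)
            + beta * (sig_bits + pk_bits))

def lpgw_bits(n, block_bits):
    """LPGW row of Table 3: (n + 1) symmetric ciphertexts of block_bits each."""
    return (n + 1) * block_bits
```

For $\nbr=1200$ the \LPGW~total is dominated by the single batch of symmetric ciphertexts, which is why it scales so mildly with the security parameter.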
Note that our performance analysis does not cover the {\em SCPU}-based version of \LPGW, as this hardware is prohibitively expensive. \section{Conclusion} \label{sec:Conclusion} We developed two efficient schemes for cooperative spectrum sensing that protect the location privacy of \ensuremath {\mathit{SU}}{\xspace} s with a low cryptographic overhead while guaranteeing efficient spectrum sensing. Our schemes are secure and robust against \ensuremath {\mathit{SU}}{\xspace} s' dynamism, failures, and maliciousness. Our performance analysis indicates that our schemes outperform existing alternatives in various metrics. \ifCLASSOPTIONcaptionsoff \newpage \fi \small{ \bibliographystyle{IEEEtran}
\renewcommand{\theequation}{\arabic{equation}} \setlength{\textwidth}{17 cm} \setlength{\textheight}{23 cm} \setlength{\oddsidemargin}{0.3cm} \setlength{\evensidemargin}{0.3cm} \setlength{\hoffset}{-1cm } \setlength{\voffset}{-1cm} \setcounter{secnumdepth}{3} \setcounter{tocdepth}{3} \pagestyle{headings} \theoremstyle{plain} \newtheorem{thm}{Theorem} \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{defn}{Definition} \newtheorem{ass}{Assumption} \newtheorem{asss}{Assumptions} \newtheorem{oss}{Remark} \setcounter{ass}{1} \newcommand{\FF}{\phantom{\frac{|}{|}}} \newcommand{\ff}{\phantom{\tfrac{|}{|}}} \title{A Feynman-Kac type formula for a fixed delay CIR model} \author{Federico Flore\thanks{Dipartimento di Economia Aziendale - Universit\`{a} di Roma Tre, Via Silvio D'Amico, 77, 00145 Roma, Italy.} \and Giovanna Nappo\thanks{Dipartimento di Matematica - Universit\`{a} di Roma ``Sapienza", Piazzale A. Moro 5, 00185 Roma, Italy.}} \date{} \begin{document} \maketitle \begin{abstract} \noindent Stochastic delay differential equations (SDDEs) have been used for financial modeling. In this article, we study an SDDE obtained from the equation of a CIR process by adding a fixed delay term to the drift; in particular, we prove that there exists a unique strong solution (positive and integrable), which we call the fixed delay CIR process. Moreover, for the fixed delay CIR process, we derive a Feynman-Kac type formula, leading to a generalized exponential-affine formula, which is used to determine a bond pricing formula when the interest rate follows the delay equation. It turns out that, for each maturity time $T$, the instantaneous forward rate is an affine function (with time-dependent coefficients) of the rate process and of an auxiliary process (also depending on $T$).
The coefficients satisfy a system of deterministic differential equations. \end{abstract} \section{Introduction}\label{sec:introduction} In a seminal paper (\cite{CIR:SIR}) Cox, Ingersoll and Ross proposed a model for interest rates that has also found considerable use as a model for volatility and other financial quantities. The model is named the Cox-Ingersoll-Ross (CIR) model or mean-reverting square root process (and is also known as the Bessel-square process or Feller process), and is expressed as the solution of the following stochastic differential equation \begin{equation}\label{eq:CIR}\begin{cases} dr(t)=a_r(\gamma_r-r(t))dt+\sigma_r\sqrt{r(t)}dW_r(t),\\ r(t_0)=r_0, \end{cases}\end{equation} where $W_r(t)$ is a standard Brownian motion and $a_r$, $\gamma_r$ and $\sigma_r$ are positive constants. Three appealing properties explain why this model is so widely used. First, Eq.~\eqref{eq:CIR} has a unique nonnegative solution for any positive initial value with probability one, which is very important since this equation is often used to model the interest rate (or volatility). Second, it is mean-reverting and the expectation of $r(t)$ converges to $\gamma_r$, the so-called long-term value, at speed $a_r$ (see, e.g., Higham and Mao \cite{HighamMao} and the references therein). Third, since its incremental variance is proportional to the current value, one can compute the term structure explicitly.\\ In order to better capture the properties of empirical data, many extensions of the CIR model have been proposed; e.g., Chan, Karolyi, Longstaff and Sanders \cite{JOFI:JOFI4011} generalize the CIR model as \[dr(t)=a_r(\gamma_r-r(t))dt+\sigma_rr^\theta(t)dW_r(t),\] where $\theta \geq \frac{1}{2}$.
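The mean-reversion property can be checked numerically. The sketch below (with hypothetical parameters $a_r=2$, $\gamma_r=0.05$, $\sigma_r=0.1$, chosen only for illustration) simulates Eq.~\eqref{eq:CIR} with an Euler-Maruyama scheme and estimates $\mathbb{E}[r(T)]$, which should be close to $\gamma_r+(r_0-\gamma_r)\mathit{e}^{-a_rT}$:

```python
import math
import random

def simulate_cir_mean(a, gamma, sigma, r0, T, dt=0.01, n_paths=2000, seed=1):
    """Monte Carlo estimate of E[r(T)] for the CIR model
    dr = a*(gamma - r) dt + sigma*sqrt(r) dW, via Euler-Maruyama.
    The square root is applied to max(r, 0) so the scheme stays defined
    even if a step overshoots below zero (full truncation)."""
    rng = random.Random(seed)
    steps = int(round(T / dt))
    sqdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        r = r0
        for _ in range(steps):
            r += a * (gamma - r) * dt + sigma * math.sqrt(max(r, 0.0)) * rng.gauss(0.0, sqdt)
        total += r
    return total / n_paths
```

With $a_r=2$ the relaxation time is $1/a_r=0.5$, so by $T=5$ the estimated mean is essentially at the long-term value $\gamma_r$.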
As explained in Hull and White \cite{Hull01101990}, another generalization of the CIR model can be obtained so that it is consistent with both the current term structure of interest rates and either the current volatilities of all spot interest rates or the current volatilities of all forward interest rates. In their paper, the authors consider the following version of the CIR model with time-dependent parameters; i.e.,~they consider the model \begin{equation}\label{eq:CIR_HullWhite}\begin{cases} dr(t)=\left[\varphi(t)+a_r(t)\left(\gamma_r-r(t)\right)\right]dt+\sigma_r(t)r^\theta(t)dW_r(t),\\ r(t_0)=r_0. \end{cases}\end{equation} From the financial point of view, interest has focused on models where the underlying asset's dynamics is given by a stochastic delay differential equation (SDDE). In this regard, we can cite Arriojas, Hu, Mohammed and Pap \cite{ArrHuMohaPap}: the authors consider a market where the evolution of the stock price $S(t)$ is described by the equation \begin{equation}\label{eq:AHMP_equation}\begin{cases} dS(t)&=f(t,S_t)dt+ g(S(t-b))S(t)dW(t),\quad t \in [0,T]\\ S(t) &= \varphi(t), \qquad t \in [-\tau, 0] \end{cases}\end{equation} where the drift coefficient $f:[0,T]\times \mathcal{C}([-\tau,0];\mathbb{R}) \rightarrow \mathbb{R}$ is a given continuous functional, $S_t\in\mathcal{C}([-\tau,0];\mathbb{R})$ stands for the segment process $S_t(u):=S(t+u)$, $u\in[-\tau,0]$, the diffusion coefficient $g$ is a continuous function, the parameters $\tau$ and $b$ are positive constants, with $\tau \geq b$, and the process $\varphi(t)$ is $\mathcal{F}_0$-measurable with respect to the Borel $\sigma$-algebra of $\mathcal{C}([-\tau,0];\mathbb{R})$. Under suitable hypotheses on $f$ and $g$, the authors prove that Eq.~\eqref{eq:AHMP_equation} has a pathwise unique solution.
Furthermore, when the drift coefficient $f(t,\eta)$ is equal to $\mu\eta(-a)\eta(0)$, i.e., $f(t,S_t)=\mu S(t-a)S(t)$, where $a$ is a positive constant, and $\tau=\max\{a,b\}$, the authors develop an explicit formula for pricing European options. Moreover, the authors give an alternative model for the stock price dynamics with variable delay and, also in this case, are able to develop a formula for the option price.\\ Although no financial application is presented there, we can also cite Wu, Mao and Chen \cite{WuMaoChen}, where the authors generalize the Euler-Maruyama (EM) scheme to the following model with a delay term in the diffusion coefficient \begin{equation}\label{eq:WMC}\begin{cases} dS(t) &= \lambda(\mu-S(t))dt+ \sigma S(t - \tau )^\gamma\sqrt{S(t)}dW(t),\quad t\in[0,T]\\ S(\theta) &= \xi(\theta), \qquad \theta \in [-\tau , 0], \end{cases}\end{equation} where $\xi \in \mathcal{C}([- \tau , 0];\mathbb{R}^+)$, and prove the strong convergence of the EM approximate solutions to the (unique and nonnegative) solution of~\eqref{eq:WMC}.\\ In this paper, we assume that the spot rate satisfies the following SDDE with a fixed delay in the drift coefficient \begin{equation}\label{eq:CIRdelay}\begin{cases} dr(t)=[a_r(\gamma_r(t)-r(t))+b_rr(t-\tau)]dt+\sigma_r\sqrt{r(t)}dW_r(t),\\ r(t)=r_0(t) \qquad t_0-\tau\leq t \leq t_0,\end{cases}\end{equation} with $a_r$, $\sigma_r$ positive constants, $\gamma_r(t)$ a positive function, bounded on bounded time intervals, and $b_r\geq 0$, so that our model~\eqref{eq:CIRdelay} is a generalization of the classical CIR model~\eqref{eq:CIR}. \\ In the first part of this paper, we focus on the existence and uniqueness of the solution of Eq.~\eqref{eq:CIRdelay}. (We will refer to the unique solution as the fixed delay CIR process.)
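A path of the fixed delay CIR process \eqref{eq:CIRdelay} can be approximated in the spirit of the EM scheme discussed above, keeping a history buffer for the delayed drift term. The following is an illustrative sketch only: the full-truncation $\sqrt{\max(r,0)}$ and all parameter values are our own choices, not taken from the paper.

```python
import math
import random

def simulate_delay_cir(a, gamma, b, sigma, tau, r0_func, t0, T, dt=0.001, seed=0):
    """One Euler-Maruyama path of the fixed delay CIR SDDE
        dr(t) = [a*(gamma - r(t)) + b*r(t - tau)] dt + sigma*sqrt(r(t)) dW(t),
    with initial segment r(t) = r0_func(t) on [t0 - tau, t0].
    A history buffer supplies the delayed drift term; sqrt(max(r, 0))
    keeps the diffusion coefficient defined (full truncation).
    Returns the list of (t, r(t)) pairs on [t0, T]."""
    rng = random.Random(seed)
    lag = int(round(tau / dt))
    hist = [r0_func(t0 - tau + i * dt) for i in range(lag + 1)]  # r on [t0-tau, t0]
    path = [(t0, hist[-1])]
    for i in range(int(round((T - t0) / dt))):
        r, r_lag = hist[-1], hist[-1 - lag]
        drift = a * (gamma - r) + b * r_lag
        r_new = r + drift * dt + sigma * math.sqrt(max(r, 0.0)) * rng.gauss(0.0, math.sqrt(dt))
        hist.append(r_new)
        path.append((t0 + (i + 1) * dt, r_new))
    return path
```

Setting $b_r=0$ and $\sigma_r=0$ degenerates the SDDE to the deterministic relaxation $r'=a_r(\gamma_r-r)$, which gives a convenient correctness check for the history-buffer bookkeeping.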
In the remaining part of the paper, we derive a Feynman-Kac type formula in order to determine a formula for the unitary zero-coupon bond (uZCB) price and a formula for the instantaneous forward rate.\\ The paper is organized as follows. Section \ref{sec:Existence} is devoted to state the properties of our fixed delay CIR process; in particular, under the Assumptions~\ref{asss:hp_r}, Eq.~\eqref{eq:CIRdelay} has a unique and nonnegative solution. Moreover, if the Feller condition \eqref{eq:feller's_condition_delay} holds, the solution is positive (see Theorem~\ref{thm:ExistenceUniqueness}). Furthermore, under Assumption~\ref{ass:integrabilityUnidimensional}, i.e., if the initial segment $r_0(t) $ satisfies an integrability condition, uniformly on the interval $[t_0-\tau,t_0]$ (see \eqref{eq:SCintegrability}), then the solution is integrable, together with its supremum on any bounded interval (see Proposition~\ref{prop:integrabilityCIR} and Remark~\ref{oss:SCintegrability}). The aim of Section~\ref{sec:TSBV} is to determine a pricing formula for uZCB. Under Assumption~\ref{ass:MarketPriceOfRisk-OneDim}, there exists a risk-neutral probability measure and hence, the financial market is arbitrage-free (see Theorem~\ref{thm:existenceQ} and Remark~\ref{oss:MeasureQ}). Another important result of this section is an extension of the well-known Feynman-Kac formula (see Theorem~\ref{thm:term_structure_one-dimensional_CIR_delay}) that is used to determine a pricing formula for uZCB. In Section~\ref{sec:FR}, we recall the definition of instantaneous forward rate and prove that if the spot rate is a fixed delay CIR process, then the instantaneous forward rate is a linear function of the spot rate and of another suitable process (see Theorem~\ref{thm:term_structure_derivate_delay-one-dim}); this result extends the usual formula of the CIR instantaneous forward rate. Appendix~\ref{app:ProofsCIR} is devoted to the proofs of some technical results. 
\section{Properties of the Fixed Delay CIR Process}\label{sec:Existence} Throughout this paper, unless otherwise specified, we use the following notations. Let $(\Omega,\mathcal{F}, \mathbb{P})$ be a complete probability space with a right continuous filtration $\{\mathcal{F}_t\}_{t\geq t_0}$ and let $\mathcal{F}_{t_0}$ contain all $\mathbb{P}$-null sets. Let $W_r(t)$ be a scalar Brownian motion defined on this probability space. \\ Our aim is to use Eq.~\eqref{eq:CIRdelay} as a model for interest rates, volatility and other financial quantities; therefore, besides existence and uniqueness of the solution, it is crucial that the solution be positive. Actually, we examine the equation \begin{equation}\label{eq:CIRdelay2}\begin{cases} dr(t)=[a_r(\gamma_r(t)-r(t))+b_r r(t-\tau)]dt+\sigma_r\sqrt{|r(t)|}dW_r(t)\\ r(t)=r_0(t) \qquad t_0-\tau\leq t \leq t_0.\end{cases} \end{equation} Throughout this paper we make the following assumptions. \begin{asss}\label{asss:hp_r}${}$ \begin{description} \item[(i)]The process $W_r(t)$, $t\geq t_0$, is a Brownian motion with respect to the filtration $\mathcal{F}_t$, so that $\mathcal{F}_{t_0}$ is independent of the natural filtration $\mathcal{F}^{W_r}_t$; \item[(ii)]the parameters $a_r$ and $\sigma_r$ are positive constants, and $b_r$ is a nonnegative constant;\\ \item[(iii)]the segment $r_0(\cdot)$ is a positive continuous random function on $[t_0-\tau, t_0]$ such that \begin{equation} \label{eq:MeasurableCond-CIR} \int_{t_0-\tau}^{t}r_0(u) du < +\infty,\;\;\mathbb{P}\text{-a.s.};\end{equation} moreover, we require that $r_0(t)$ is $\mathcal{F}_{t_0}$-measurable for $t_0-\tau\leq t\leq t_0$ and therefore $\sigma\{\left(r_0(u)\,; u\in[t_0-\tau,t_0]\right)\}$ is independent of $\mathcal{F}^{W_r}_t$, $t\geq t_0$; \item[(iv)] the deterministic function $\gamma_r(t)$ is measurable, positive, and bounded on every bounded interval.
\end{description} \end{asss} \begin{thm}\label{thm:ExistenceUniqueness}${}$\\ Under the Assumptions~\ref{asss:hp_r}, the Eq.~\eqref{eq:CIRdelay2} admits a unique solution and the solution is nonnegative. \\ Assume that $0\leq \underline{b}_r \leq b_r$, and that the initial segment $\underline{r}_0(t)$ is such that $\underline{r}_0(t)\leq r_0(t)$, for $t_0-\tau\leq t\leq t_0$. Let $r^{(b_r)}(t)$ be the solution of~\eqref{eq:CIRdelay2}, and $r^{(\underline{b}_r)}(t)$ be the solution of~\eqref{eq:CIRdelay2}, with $\underline{b}_r$ and the initial segment $\underline{r}_0(t)$ in place of $b_r$ and $r_0(t)$, respectively. Then \begin{equation}\label{comparison} r^{(\underline{b}_r)}(t) \leq r^{(b_r)}(t), \quad \text{for all $t\geq t_0$}. \end{equation} If moreover the following inequality holds \begin{equation}\label{eq:feller's_condition_delay} \sigma_r^2\leq2a_r\gamma_r(t) \quad\text{for all $t\geq t_0$}, \end{equation} then the process $r^{(b_r)}(t)$ is positive. \end{thm} \begin{proof}${}$\\ Existence, uniqueness and the comparison results can be achieved by induction on the intervals $[t_0+k \tau, t_0+(k+1)\tau]$, $k \geq 0$. In the first interval $[t_0, t_0+\tau]$ Eq.~\eqref{eq:CIRdelay} is a particular case of the equations \begin{equation}\label{eq:DDequation}dX(t)=(2\beta X(t)+\delta(t))dt+g(X(t))dW(t)\quad\text{for all $t\in[0,+\infty)$}, \end{equation} studied in Deelstra and Delbaen~\cite{deelstra_delbaen}, with $2\beta=-a_r$, $\delta(t)=b_r\,r_0(t-\tau)+ a_r \gamma_r(t)$ and $g(x)=\sigma_r \sqrt{|x|}$. As observed in the latter paper, `` \emph{Eq.~\eqref{eq:CIRdelay2} is a Dol\'{e}ans, Dade and Protter's equation, and it is shown by Jacod~\cite{Jacod} that there exists a unique strong solution. 
Extending comparison results as in Karatzas and Shreve~\cite{karatzas_shreve} (p.\@ 293) or Revuz-Yor~\cite{RevuzYor} (p.\@ 394), it is easy to check that the solution remains nonnegative a.s.\@ (see, e.g., Deelstra~\cite{DeelstraPhD}),}'' and that inequality~\eqref{comparison} holds. Finally, taking $\underline{b}_r=0$ (so that the initial segment $\underline{r}_0(t)$ is irrelevant), under the Feller condition~\eqref{eq:feller's_condition_delay} the process $r^{(\underline{b}_r)}(t)$ is the classical CIR model and is positive a.s. Then the comparison~\eqref{comparison} immediately implies that also $r^{(b_r)}(t)$ remains positive a.s. \end{proof} \begin{oss}\label{oss:UiL}${}$\\ The previous result guarantees existence and strong uniqueness for Eq.~\eqref{eq:CIRdelay}, for every initial segment with continuous paths. Then the Yamada-Watanabe theorem (see, e.g., Cherny~\cite{Cherny}) implies weak uniqueness, i.e., uniqueness in distribution of the solutions. \end{oss} The next result deals with the integrability of a fixed delay CIR process, i.e., the unique solution of Eq.~\eqref{eq:CIRdelay}. \begin{prop}\label{prop:integrabilityCIR}${}$\\ Suppose that on a probability space $(\Omega,(\mathcal{F}_t)_{t\geq t_0},\mathbb{P})$, the process $r(t)$ is a fixed delay CIR process, defined by Eq.~\eqref{eq:CIRdelay}.
If \begin{equation}\label{eq:IntCondr1} \int_{t_0-\tau}^{t_0}\mathbb{E}\left[r_0(u)\right]du<+\infty, \end{equation} and \begin{equation}\label{eq:IntCondr2} \mathbb{E}\left[r_0(t_0)\right]<+\infty,\end{equation} then, \begin{enumerate} \item for all $t\geq t_0$ \begin{equation}\label{eq:integr-uniform} \mathbb{E}\left[\sup_{t_0\leq u \leq t}r(u)\right] < \infty, \end{equation} \item the following formula holds, for all $t\geq t_1\geq t_0$ \begin{equation}\label{eq:ExpValDelay-CIR} \mathbb{E}\left[r(t)\right]=\mathit{e}^{-a_r(t-t_1)}\mathbb{E}\left[r(t_1)\right] +\int_{t_1}^t\mathit{e}^{-a_r(t-u)}\left(a_r\gamma_r(u)+b_r\mathbb{E}\left[r(u-\tau)\right] \right)\,du.\end{equation} \end{enumerate} \end{prop} \begin{proof}${}$\\ Similarly to the previous Theorem~\ref{thm:ExistenceUniqueness}, the proof can be achieved by induction on the intervals $[t_0+k \tau, t_0+(k+1)\tau]$ by proving that \begin{equation}\label{eq:integr-uniform-k} \mathbb{E}\left[\sup_{t_0+k\tau\leq u \leq t_0+ (k+1)\tau}r(u)\right] < \infty. \end{equation} For $k=0$ the thesis follows by Lemma~1 in Deelstra and Delbaen~\cite{DDLongTerm}: hypotheses~\eqref{eq:IntCondr1} and~\eqref{eq:IntCondr2}, imply the integrability condition~\eqref{eq:integr-uniform-k} (with $k=0$). It is important to stress that this implication does not appear in the statement of Lemma~1, but it is one of the steps in the proof of the above mentioned result (see p.\@ 166 in~\cite{DDLongTerm}). The induction step from $k$ to $k+1$ follows by observing that condition~\eqref{eq:integr-uniform-k} is stronger than necessary. \end{proof} \begin{oss}${}$\\\label{oss:SCintegrability} The following integrability condition (uniform on the interval $[t_0-\tau,t_0]$) \begin{equation}\label{eq:SCintegrability} \sup_{t\in[t_0-\tau,t_0]}\mathbb{E}\left[r_0(t)\right]<+\infty, \end{equation} ensures that the assumptions \eqref{eq:IntCondr1} and \eqref{eq:IntCondr2} of Proposition \ref{prop:integrabilityCIR} hold true. 
\end{oss} \section{Term Structure for Bond Valuation}\label{sec:TSBV} Term structures of interest rates describe the relation between interest rates and bonds with different maturity times. We recall that, by convention, a unitary zero-coupon bond with maturity $T<+\infty$ pays one unit of cash at the prescribed date $T$ in the future, and its price is denoted by $B(t,T)$, at time $t\leq T$; it is thus clear that, necessarily, $B(T,T)=1$ for any maturity $T$.\\ At time $t$, the yield to maturity $R(t,T)$ of the uZCB $B(t,T)$ is the continuously compounded (constant) rate of return that causes the bond price to rise to one at time $T$, i.e., $$ B(t,T)\mathit{e}^{(T-t)R(t,T)}=1, $$ or, solving for the yield, \begin{equation}\label{eq:YieldToMaturity} R(t,T):=-\frac{1}{T-t}\ln(B(t,T)). \end{equation} For a fixed time $t$, the curve $T\,\mapsto\,R(t,T)$ determines the term structure of interest rates.\\ \begin{defn}\label{defn:ShortRate} The (instantaneous) spot rate $r(t)$ is defined by \begin{equation}\label{eq:ShortRate} r(t):=\lim_{T\rightarrow t}R(t,T). \end{equation} \end{defn} In an arbitrage-free market, the bond price is given by \begin{equation}\label{eq:BondPrice} B(t,T)=\mathbb{E}^{\mathbb{Q}}\left[\mathit{e}^{-\int_t^Tr(u)du}\bigg{|} \mathcal{F}_t\right]\quad\text{for all $t\in[t_0,T]$},\end{equation} where $\mathbb{E}^\mathbb{Q}$ is the expectation with respect to the risk-neutral measure used by the market, and $r(t)$ is the $\mathcal{F}_t$-adapted instantaneous interest rate (see, e.g., Lamberton and Lepeyre~\cite{lamberton_lapeyre} or Musiela and Rutkowski~\cite{MusieleRutkowski}). When $r(t)$ is a classical CIR process, the existence of a risk-neutral probability measure $\mathbb{Q}$ is guaranteed by the uniqueness of the martingale problem (see, e.g., Theorem $2.4$ in Cheridito, Filipovi\'c and Yor~\cite{ChereditoFilYor2005}).
Cheridito, Filipovi\'c and Kimmel~\cite{cheridito2007market} prove the existence of a risk-neutral probability measure $\mathbb{Q}$ taking advantage of the uniqueness in law of the involved processes (see Theorem~$1$ in Cheridito, Filipovi\'c and Kimmel~\cite{cheridito2007market}). We extend the result of Cheridito, Filipovi\'c and Kimmel to prove the existence of $\mathbb{Q}$ for a fixed delay CIR process. \\ In what follows we assume that the interest rate $r(t)$ is the fixed delay CIR process solution of Eq.~\eqref{eq:CIRdelay}, and since we need to consider different Brownian motions, under different probability measures, we will use the notation $W^\mathbb{P}_r(t)$ instead of $W_r(t)$. \\ \begin{thm}\label{thm:existenceQ}${}$\\ Under Assumptions~\ref{asss:hp_r}, let $r(t)$ be the solution of $$\begin{cases} r(t)=r_0(t_0)+ \int_{t_0}^t \mu^\mathbb{P}(s,r(\cdot))ds+ \int_{t_0}^t \sigma_r \sqrt{r(s)} \, dW^\mathbb{P}_r(s),\quad t\in [t_0,T],\FF \\ r(t)=r_0(t), \quad t\in [t_0-\tau, t_0],\FF \end{cases} $$ where $$\mu^\mathbb{P}(t,x(\cdot))=a_r(\gamma_r(t)-x(t))+b_r x(t-\tau)$$ and the parameters $a_r$ and $\sigma_r$ and the function~$\gamma_r(t)$ satisfy the Feller condition~\eqref{eq:feller's_condition_delay}.\\ Assume that $b^\mathbb{Q}_r\geq 0$, $a^\mathbb{Q}_r>0$, the function $\gamma^\mathbb{Q}_r(t)$ is measurable, positive, and bounded on every bounded interval, and finally that the Feller condition \begin{equation}\label{eq:feller's_condition_delay-Q} \sigma_r^2\leq2a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t) \end{equation} holds. Consider the functional $\mu^\mathbb{Q}(t,x(\cdot))$ defined by \begin{equation}\label{eq:mu-Q} \mu^\mathbb{Q}(t,x(\cdot)):=a^\mathbb{Q}_r\big(\gamma^\mathbb{Q}_r(t)-x(t)\big)+b^\mathbb{Q}_r x(t-\tau).
\end{equation} Then, there exists a probability measure $\mathbb{Q}$, such that \begin{enumerate} \item $\mathbb{Q}=\mathbb{P}$ on $(\Omega, \mathcal{F}_{t_0})$, \item for each $T>t_0$, $\mathbb{Q}$ is equivalent to $\mathbb{P}$ on $(\Omega, \mathcal{F}_T)$, \item there exists a process $W^\mathbb{Q}_r$, which is a Brownian motion under $\mathbb{Q}$, and such that $$ r(t)=r_0(t_0)+ \int_{t_0}^t \mu^\mathbb{Q}(s,r(\cdot))ds+ \int_{t_0}^t \sigma_r \sqrt{r(s)} \, dW^\mathbb{Q}_r(s),\quad t\in [t_0,T]. $$ \end{enumerate} Finally the probability measure $\mathbb{Q}$ on $(\Omega, \mathcal{F}_T)$ is defined by $d\mathbb{Q}=Z_T\, d\mathbb{P}$, where $$ Z_t :=\exp\left\{ -\int_{t_0}^t \xi_r(s,r(\cdot)) \, dW^\mathbb{P}(s)- \frac{1}{2}\,\int_{t_0}^t \xi^2_r(s,r(\cdot))\, ds\right\}, $$ with \begin{align}\notag \xi_r(t,r(\cdot))&:= \frac{\mu^\mathbb{P}(t,r(\cdot))-\mu^\mathbb{Q}(t,r(\cdot))}{\sigma_r\sqrt{r(t)}} \\&= \frac{a_r\gamma_r(t) - a^\mathbb{Q}_r\gamma_r^\mathbb{Q}(t) -(a_r- a_r^\mathbb{Q}) r(t) + (b_r-b_r^\mathbb{Q})r(t-\tau)}{\sigma_r\sqrt{r(t)}}. \label{eq:xi-r-t-r} \end{align} \end{thm} \begin{proof}${}$\\ (See Appendix~\ref{app:ProofsCIR}). \end{proof} Besides Assumptions~\ref{asss:hp_r}, we now assume some further conditions. \begin{ass}\label{ass:integrabilityUnidimensional}${}$\\ The fixed delay CIR process $r(t)$ satisfies the integrability condition~\eqref{eq:SCintegrability}. 
\end{ass} \begin{ass}\label{ass:MarketPriceOfRisk-OneDim}${}$\\ The market price of risk is a one-dimensional, right-continuous process $\xi_r(t)$, adapted with respect to the filtration~$\mathcal{F}_t$ and given by \begin{equation}\label{eq:MarketPriceOfRisk-OneDim} \xi_r(t)=\frac{\sqrt{r(t)}}{\sigma_r}\,\psi^r,\end{equation} where $\psi^r$ is a nonnegative constant, i.e., the risk-neutral measure $\mathbb{Q}$ is defined as the measure on the same space $(\Omega,\mathcal{F}_T)$ with Radon-Nikodym derivative given by \begin{equation}\label{eq:Q_measure-CIR}\frac{d\mathbb{Q}}{d\mathbb{P}}\bigg{|}_{\mathcal{F}_T}=Z^{\xi_r}(T),\quad\text{that is}\quad \mathbb{Q}(F)=\int_FZ^{\xi_r}(T)\mathbb{P}(d\omega),\; F\in\mathcal{F}_T, \end{equation} where $Z^{\xi_r}(t)$ is the following one-dimensional $\mathbb{P}$-martingale process \begin{equation}\label{eq:martingala_esponenziale_unidimensionale} Z^{\xi_r}(t)=\mathit{e}^{-\int_{t_0}^t\xi_r(s)dW_r(s)-\frac{1}{2}\int_{t_0}^t\xi^2_r(s)ds}\quad t\geq t_0.\end{equation} \end{ass} \begin{oss}\label{oss:MeasureQ}${}$\\ Under Assumption~\ref{ass:MarketPriceOfRisk-OneDim}, by construction, the probability measures $\mathbb{P}$ and $\mathbb{Q}$ are equal on $\mathcal{F}_{t_0}$.
Consequently, Assumption~\ref{ass:integrabilityUnidimensional} holds true also w.r.t.\@ the risk-neutral measure~$\mathbb{Q}$, and the process~$r(t)$ is again a fixed delay CIR process w.r.t.\@ the measure $\mathbb{Q}$, with dynamics described by \begin{equation}\label{eq:IRQequation}\begin{cases} dr(t)=[a_r^\mathbb{Q}(\gamma^\mathbb{Q}_r(t)-r(t))+b_r^{\mathbb{Q}}r(t-\tau)]dt+\sigma_r\sqrt{r(t)}dW^\mathbb{Q}_r(t),\\ r(t)=r_0(t)\quad t_0-\tau\leq t\leq t_0,\end{cases} \end{equation} where the $\mathbb{Q}$-parameters \begin{equation}\label{eq:Qparameter_r-BP}a^\mathbb{Q}_r=a_r+\psi^r,\quad \gamma^\mathbb{Q}_r(t)=\frac{a_r}{a_r+\psi^r} \,\gamma_r(t), \quad b_r^{\mathbb{Q}}=b_r \end{equation} are positive, the function $\gamma^\mathbb{Q}_r(t)$ is measurable, positive, and bounded on bounded intervals; moreover, the Feller condition~\eqref{eq:feller's_condition_delay} under $\mathbb{P}$ automatically implies~\eqref{eq:feller's_condition_delay-Q}, the Feller condition under $\mathbb{Q}$. Indeed, with the above positions the market price of risk $\xi_r(t)$~in~\eqref{eq:MarketPriceOfRisk-OneDim} coincides with the process $\xi_r(t,r(\cdot))$~in~\eqref{eq:xi-r-t-r}, and Theorem~\ref{thm:existenceQ} applies.
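The reason the Feller condition transfers from $\mathbb{P}$ to $\mathbb{Q}$ is that the change of parameters \eqref{eq:Qparameter_r-BP} leaves the product $a_r\gamma_r(t)$ invariant: $a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t)=(a_r+\psi^r)\,\frac{a_r}{a_r+\psi^r}\,\gamma_r(t)=a_r\gamma_r(t)$. A quick numerical sanity check of this (the parameter values, including $\psi^r$, are illustrative only):

```python
def q_parameters(a_r, gamma_r, b_r, psi):
    """Risk-neutral parameters from (eq:Qparameter_r-BP):
    a^Q = a + psi,  gamma^Q(t) = a/(a+psi) * gamma(t),  b^Q = b."""
    a_q = a_r + psi
    gamma_q = lambda t: a_r / (a_r + psi) * gamma_r(t)
    return a_q, gamma_q, b_r

def feller_holds(a, gamma, sigma, t):
    """Feller condition sigma^2 <= 2*a*gamma(t) at time t."""
    return sigma ** 2 <= 2.0 * a * gamma(t)
```

Since $a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t)=a_r\gamma_r(t)$, `feller_holds` returns the same answer under both sets of parameters, for any $\psi^r\geq 0$.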
\\ As far as Assumption~\ref{ass:MarketPriceOfRisk-OneDim} is concerned, actually, we could define the market price of risk also as follows $$ \xi_r(t)=\psi_0^r\frac{\sqrt{r(t)}}{\sigma_r}+\psi_1^r\frac{1}{\sigma_r\sqrt{r(t)}}+\psi_2^r\frac{r(t-\tau)}{\sigma_r\sqrt{r(t)}}, $$ where $\psi_0^r$, $\psi_1^r$ and $\psi_2^r$ are constants satisfying suitable conditions: indeed, if the $\mathbb{Q}$-parameters \begin{align}\label{eq:Q-parameters-Gen} a_r^\mathbb{Q}=a_r+\psi_0^r,\quad\gamma^\mathbb{Q}_r(t)=\frac{a_r\gamma_r(t)-\psi_1^r}{a_r+\psi_0^r},\quad b^\mathbb{Q}_r=b_r-\psi_2^r, \end{align} are positive and satisfy the Feller condition~\eqref{eq:feller's_condition_delay-Q}, then Theorem~\ref{thm:existenceQ} guarantees that the measure~$\mathbb{Q}$ is a probability measure, and that, under $\mathbb{Q}$, the process $r(t)$ has stochastic differential given by~\eqref{eq:IRQequation}. As already observed, with our choice, i.e., with $\psi_0^r=\psi^r\geq 0$, $\psi_1^r=\psi_2^r=0$, the above conditions are automatically satisfied, while, in general, this is not the case. \end{oss} In order to get the bond price for a fixed maturity $T$, the idea is to get a representation of the following functional (slightly more general than the functional~\eqref{eq:BondPrice} ) \begin{equation}\label{eq:FKformula} \mathbb{E}^\mathbb{Q}\left[\mathit{e}^{-\int_t^Tr (u)du-wr(T)}\bigg{|}\mathcal{F}_t \right],\quad t\in[t_0,T], \text{ with } T \text{ fixed, and $w\geq 0$}, \end{equation} as a deterministic function $v^\mathbb{Q}(t,T,r,y;w)$ evaluated in $(r,y)=(r(t),y^\mathbb{Q}(t,T;w))$, where the process $y^\mathbb{Q}(t,T;w)$ is defined as follows \begin{equation}\label{eq:def_yQ} y^\mathbb{Q}(t,T;w):=\int_{t-\tau}^t \Gamma^\mathbb{Q}(u,T;w) r(u)\mathbf{1}_{[t_0-\tau,T-\tau]}(u) du, \end{equation} with $\Gamma^\mathbb{Q}(t,T;w)$ a suitably chosen deterministic function (for its explicit definition see \eqref{eq:def_GammaQ}). 
Note that, independently of the definition of $\Gamma^\mathbb{Q}(t,T;w)$, the following final condition holds \begin{equation}\label{eq:def_yQ(T,T)} y^\mathbb{Q}(T,T;w)=0. \end{equation} \\ It turns out that the function $v^\mathbb{Q}$ is given by \begin{equation}\label{eq:term_structure_one-dimensional_CIR_delay} v^\mathbb{Q}(t,T,r,y;w)=\begin{cases} \begin{array}{cc} \mathit{e}^{-\alpha_0^\mathbb{Q}(t,T;w)-\alpha_r^\mathbb{Q}(t,T;w)r -y}& \text{for $t<T$},\\ \mathit{e}^{-wr-y}&\text{for $t=T$}, \end{array}\end{cases}\end{equation} where $w$ is a nonnegative parameter and the functions $\alpha_0^\mathbb{Q}(t,T;w)$ and $\alpha_r^\mathbb{Q}(t,T;w)$ are deterministic and positive, and such that $\alpha_0^\mathbb{Q}(T,T;w)=0$ and $\alpha_r^\mathbb{Q}(T,T;w)=w$. \\ In the following Theorem \ref{thm:term_structure_one-dimensional_CIR_delay}, we give the correct choice of the functions $\Gamma^\mathbb{Q}(t,T;w)$, $\alpha_0^\mathbb{Q}(t,T;w)$ and $\alpha_r^\mathbb{Q}(t,T;w)$. Then the bond price is obtained from \eqref{eq:FKformula}, with $w=0$, i.e., \begin{equation}\label{eq:BOND} B(t,T)=v^\mathbb{Q}\big(t,T,r(t),y^\mathbb{Q}(t,T;0);0\big)=\mathit{e}^{-\alpha_0^\mathbb{Q}(t,T;0) -\alpha_r^\mathbb{Q}(t,T;0)r(t) -y^\mathbb{Q}(t,T;0)}, \end{equation} and we recover $B(T,T)=1$ by the final condition in~\eqref{eq:term_structure_one-dimensional_CIR_delay}, and by~\eqref{eq:def_yQ(T,T)}.\\ \begin{thm}\label{thm:term_structure_one-dimensional_CIR_delay}${}$\\ With the notations and under the assumptions of Theorem~\ref{thm:existenceQ}, consider the following differential system \begin{equation}\label{eq:term_structure_diff_system_delay-BP}\begin{cases} \frac{d}{d t}\alpha_r (t)=&\frac{1}{2}\sigma_r^2(\alpha_r(t))^2+ a^\mathbb{Q}_r\alpha_r(t)-1\phantom{xxxxxxxxxxxx}\text{for $T-\tau\leq t\leq T$},\\ \vspace{2mm}\\ \frac{d}{dt}\alpha_r(t)=&\frac{1}{2}\sigma_r^2(\alpha_r(t))^2+a^\mathbb{Q}_r \alpha_r(t)-1-b_r\alpha_r(t+\tau)\phantom{x}\text{for $t_0\leq t\leq T-\tau$},\\
\vspace{2mm}\\ \frac{d}{dt}\alpha_0 (t )=&-a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t)\alpha_r (t) \phantom{xxxxxxxxxxxxxxxxxxxx}\text{for $t_0\leq t\leq T$},\end{cases}\end{equation} with the boundary conditions \begin{equation}\label{eq:boundary_con_system_delay}\begin{cases} \alpha_r(T)&=w,\\ \vspace{-2mm}\\ \alpha_r(T-\tau)&=\alpha_r((T-\tau)^+),\\ \vspace{-2mm}\\ \alpha_0(T)&=0.\end{cases}\end{equation} Then, for all $w \in \left[0,\frac{\sqrt{(a_r^\mathbb{Q})^2 +2\sigma_r^2}-a_r^\mathbb{Q}}{\sigma_r^2}\right)$, \begin{enumerate} \item the system \eqref{eq:term_structure_diff_system_delay-BP}-\eqref{eq:boundary_con_system_delay} has a unique solution $\left(\alpha_r^\mathbb{Q}(t,T;w);\alpha_0^\mathbb{Q}(t,T;w)\right)$. \item Moreover, the functions $\alpha_r^\mathbb{Q}(t,T;w)$ and $\alpha_0^\mathbb{Q}(t,T;w)$ are continuous, positive and right differentiable w.r.t.\@ $w$.\end{enumerate} If furthermore Assumption~\ref{ass:integrabilityUnidimensional} holds, and the deterministic function $\Gamma^\mathbb{Q}(t,T;w)$ is chosen as follows\footnote{Actually, the function $\Gamma^\mathbb{Q}(t,T;w)$ can assume whatever value for $T-\tau\leq t\leq T$. 
The choice in \eqref{eq:def_GammaQ} has been made in order to make $\Gamma^\mathbb{Q}(t,T;w)$ a continuous function.} \begin{equation}\label{eq:def_GammaQ} \Gamma^\mathbb{Q}(t,T;w)=\begin{cases}\begin{array}{cc} b_r\alpha^\mathbb{Q}_r(t+\tau,T;w)&\text{for $t_0\leq t\leq T-\tau$,}\ff\\ b_rw&\text{for $T-\tau\leq t\leq T$},\ff \end{array}\end{cases}\end{equation} then the generalized term structure for the one-dimensional fixed delay CIR model is given by the fun\-ction~$v^\mathbb{Q}$ (defined in \eqref{eq:term_structure_one-dimensional_CIR_delay}), i.e., \begin{equation}\label{eq:term_str}\mathbb{E}^\mathbb{Q}\left[\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\bigg{|}\mathcal{F}_t \right]=v^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w);w),\end{equation} where, for $t_0\leq t\leq T$, \begin{align} y^\mathbb{Q}(t,T;w)=\int_{t-\tau}^tb_r\alpha_r^\mathbb{Q}(u+\tau,T;w)r(u) \mathbf{1}_{[t_0-\tau,t-\tau]}(u)du.\label{eq:def_yQ_w-BP} \end{align} \end{thm} Before giving the proof, we make some observations. \begin{oss}\label{oss:A1-A2-A3}${}$\\ In the framework of Remark~\ref{oss:MeasureQ}, if Assumptions~\ref{asss:hp_r} and \ref{ass:integrabilityUnidimensional} hold, together with the Feller condition~\eqref{eq:feller's_condition_delay} under $\mathbb{P}$, the hypotheses of Theorem~\ref{thm:existenceQ} hold under the further condition that $a^\mathbb{Q}_r$, $b^\mathbb{Q}_r$ and~$\gamma^\mathbb{Q}_r(t)$ in~\eqref{eq:Q-parameters-Gen} be positive and satisfy the Feller condition~\eqref{eq:feller's_condition_delay-Q}, and therefore, replacing~$b_r$ with~$b^\mathbb{Q}_r$, Theorem~\ref{thm:term_structure_one-dimensional_CIR_delay} can be applied. 
In particular, Theorem~\ref{thm:term_structure_one-dimensional_CIR_delay} can be applied if Assumptions~\ref{asss:hp_r}, \ref{ass:integrabilityUnidimensional}, and \ref{ass:MarketPriceOfRisk-OneDim} hold, together with the Feller condition~\eqref{eq:feller's_condition_delay} under $\mathbb{P}$, with $a^\mathbb{Q}_r$, $b^\mathbb{Q}_r$ and $\gamma^\mathbb{Q}_r(t)$ as in~\eqref{eq:Qparameter_r-BP}, without any further assumption. \end{oss} In accordance with \eqref{eq:YieldToMaturity}, by \eqref{eq:BOND} and \eqref{eq:term_str}, the term structure is a linear function of $r(t)$ and of the process $y^\mathbb{Q}(t,T;0)$: \begin{equation}\label{eq:YtM-CIRdelay} R(t,T)=\frac{1}{T-t}\left[\alpha_0^\mathbb{Q}(t,T;0)+ \alpha_r^\mathbb{Q}(t,T;0)r(t) +\int_{t-\tau}^t b_r\, \alpha_r^\mathbb{Q}(u+\tau,T;0)\, r(u)\mathbf{1}_{[t_0-\tau,T-\tau]}(u) du\right]. \end{equation} The previous formula extends the term-structure formula of the classical CIR model, in which the rate $R(t,T)$ is an affine function of $r(t)$. \begin{proof}[Proof of Theorem~\ref{thm:term_structure_one-dimensional_CIR_delay}]${}$\\ We start with some preliminary observations.
Let $r(t)$ and $y^\mathbb{Q}(t,T;w)$ be the stochastic processes with dynamics described by \eqref{eq:IRQequation} and \eqref{eq:def_yQ}, with $\Gamma^\mathbb{Q}(t,T;w)$ still to be chosen.\\ The process $y^\mathbb{Q}(t,T;w)$ has stochastic differential given by \begin{equation}\label{eq:def_Diff_y-CIR}\begin{split} dy^\mathbb{Q}(t,T;w)=\, &\Gamma^\mathbb{Q}(t,T;w) r(t)\mathbf{1}_{[t_0-\tau,T-\tau]}(t)dt - \Gamma^\mathbb{Q}(t-\tau,T;w)\, r(t-\tau) \mathbf{1}_{[t_0-\tau,T-\tau]}(t-\tau)dt\\ =\, &\Gamma^\mathbb{Q}(t,T;w) r(t)\mathbf{1}_{[t_0-\tau,T-\tau]}(t)dt - \Gamma^\mathbb{Q}(t-\tau,T;w)\, r(t-\tau) \mathbf{1}_{[t_0,T]}(t)dt,\FF\end{split} \end{equation} and, by construction, the process $y^\mathbb{Q}(t,T;w)$, evaluated in $t=T$, is zero; actually, \[y^\mathbb{Q}(T,T;w)=\int_{T-\tau}^T \Gamma^\mathbb{Q}(u,T;w) r(u)\mathbf{1}_{[t_0-\tau,T-\tau]}(u) du=0,\] for any choice of $\Gamma^\mathbb{Q}(t,T;w)$.\\ Define the process $z(t)$ as follows \begin{equation}\label{eq:z(t)_BP}z(t):=\mathit{e}^{-\int_{t_0}^t r(u)du} v^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w);w),\end{equation} where $v^\mathbb{Q}(t,T, r,y;w)$ is defined in~\eqref{eq:term_structure_one-dimensional_CIR_delay}, with $\alpha^\mathbb{Q}_0(t,T;w)$ and $\alpha^\mathbb{Q}_r(t,T;w)$ nonnegative and continuous in $t$, and such that $\alpha_0^\mathbb{Q}(T,T;w)=0$ and $\alpha_r^\mathbb{Q}(T,T;w)=w$, i.e., satisfy the boundary conditions~\eqref{eq:boundary_con_system_delay}. The idea is to show that the process $z(t)$ is a $\mathbb{Q}$-martingale if the functions $\alpha_0^\mathbb{Q}(t,T;w)$ and $\alpha_r^\mathbb{Q}(t,T;w)$ satisfy the system \eqref{eq:term_structure_diff_system_delay-BP}-\eqref{eq:boundary_con_system_delay} and $\Gamma^\mathbb{Q}(t,T;w)$ is defined as in~\eqref{eq:def_GammaQ}. 
Indeed, if $z(t)$ is a martingale, taking into account that $y^\mathbb{Q}(T,T;w)=0$, and that therefore \[z(T)=\mathit{e}^{-\int_{t_0}^Tr(u)du-wr(T)-y^\mathbb{Q}(T,T;w)}=\mathit{e}^{-\int_{t_0}^Tr(u)du-wr(T)},\] we get the result, observing that \[z(t)=\mathbb{E}^\mathbb{Q}\left[z(T)\big{|}\mathcal{F}_t\right]=\mathbb{E}^\mathbb{Q}\left[\mathit{e}^{-\int_{t_0}^T r(u)du-wr(T)}\bigg{|}\mathcal{F}_t\right]\quad t_0\leq t\leq T,\] and that, by the definition \eqref{eq:z(t)_BP} of $z(t)$, we have \[\mathit{e}^{ -\int_{t_0}^t r(u)du} v^\mathbb{Q}\left(t,T,r(t), y^\mathbb{Q}(t,T;w);w\right)=\mathbb{E}^\mathbb{Q}\left[ \mathit{e}^{ -\int_{t_0}^Tr(u)du-wr(T)}\Big| \mathcal{F}_{t}\right],\] that is \[v^\mathbb{Q}\left(t,T,r(t), y^\mathbb{Q}(t,T;w);w\right)=\mathbb{E}^\mathbb{Q}\left[ \mathit{e}^{ -\int_{t}^Tr(u) du-wr(T)}\Big| \mathcal{F}_{t}\right].\] The rest of the proof is devoted to showing the martingale property of $z(t)$. To this end, an important observation is that, under Assumption~\ref{ass:integrabilityUnidimensional}, recalling Remark~\ref{oss:MeasureQ}, the process $r(t)$ is integrable w.r.t.\@ $\mathbb{Q}$, as immediately follows from Proposition~\ref{prop:integrabilityCIR} with $\mathbb{Q}$ instead of $\mathbb{P}$.\\ The process $z(t)$ (defined in \eqref{eq:z(t)_BP}) has stochastic differential given by \begin{align}\label{eq:differenziale_z(t)_BP} \nonumber dz(t)=&d\left(\mathit{e}^{-\int_{t_0}^t r(u)du} \right) v^\mathbb{Q}\left(t,T,r(t), y^\mathbb{Q}(t,T;w);w\right) \\ &+\mathit{e}^{-\int_{t_0}^tr(u)du} dv^\mathbb{Q}\left(t,T,r(t), y^\mathbb{Q}(t,T;w);w\right).
\end{align} By It\^{o}'s formula we obtain, for $t_0\leq t\leq T$, \begin{align*} dz(t)=&- r(t) z(t) dt + z(t) \left[-\left(\tfrac{\partial }{\partial t}\alpha^\mathbb{Q}_0(t,T;w)+ r(t) \tfrac{\partial }{\partial t} \alpha^\mathbb{Q}_r(t,T;w) \right) dt \right.\ff\\ &\left.-\alpha^\mathbb{Q}_r(t,T;w) dr(t)- dy^\mathbb{Q}(t,T;w) + \tfrac{1}{2} \sigma_r^2 r(t) (\alpha_r^\mathbb{Q}(t,T;w))^2 dt \right]\FF \\=&- r(t)z(t) dt + z(t) \left[-\left(\tfrac{\partial }{\partial t}\alpha^\mathbb{Q}_0(t,T;w)+ r(t) \tfrac{\partial }{\partial t} \alpha^\mathbb{Q}_r(t,T;w) \right) dt\right.\ff \\ &\left.-\alpha^\mathbb{Q}_r(t,T;w) \left[a^\mathbb{Q}_r(\gamma^\mathbb{Q}_r(t)- r(t))+b_r r(t-\tau)\right]dt -\alpha^\mathbb{Q}_r(t,T;w) \sigma_r\,\sqrt{|r(t)|}dW^\mathbb{Q}_r(t)\right.\ff \\ &\left.-\Gamma^\mathbb{Q}(t,T;w)r(t)\mathbf{1}_{[t_0-\tau,T-\tau]}(t)dt +\Gamma^\mathbb{Q}(t-\tau,T;w)r(t-\tau)\mathbf{1}_{[t_0,T]}(t)dt\right.\ff \\ &\left.+ \tfrac{1}{2} \sigma_r^2 \, r(t) (\alpha_r^\mathbb{Q}(t,T;w))^2 dt \right].\ff \end{align*} If $\Gamma^\mathbb{Q}$ is chosen as in \eqref{eq:def_GammaQ}, all the terms multiplying $r(t-\tau)$ disappear.\\ Then, the process $z(t)$ is a local martingale if and only if the finite variation term vanishes, i.e., if and only if, for $t_0\leq t\leq T$, \begin{equation}\label{eq:drift-z=0} \begin{split} {} &-\tfrac{\partial }{\partial t}\alpha^\mathbb{Q}_0(t,T;w)-\tfrac{\partial }{\partial t}\alpha^\mathbb{Q}_r(t,T;w) r(t)-r(t)-\alpha^\mathbb{Q}_r(t,T;w)[a^\mathbb{Q}_r(\gamma^\mathbb{Q}_r(t)- r(t))]\ff \\ {} &- b_r\alpha^\mathbb{Q}_r(t+\tau,T;w) r(t)\mathbf{1}_{[t_0,T-\tau]}(t) +\tfrac{1}{2}r(t)\sigma_r^2(\alpha_r^\mathbb{Q}(t,T;w))^2=0.
\end{split} \end{equation} Moreover, thanks to the previous observation on the integrability of $r(t)$, the process $z(t)$ is a (square integrable) martingale if $\alpha^\mathbb{Q}_0(u,T;w)$ and $\alpha^\mathbb{Q}_r(u,T;w)$ are nonnegative continuous functions; indeed, then $0\leq z(t)\leq 1$, and setting $m_r(t):=\sigma_r z(t)\sqrt{|r(t)|}\alpha^\mathbb{Q}_r(t,T;w)$, we have that \begin{align*} \mathbb{E}^\mathbb{Q}\left[\int_{t_0}^T|m_r(t)|^2dt\right] \leq&\sigma_r^2\max_{t_0\leq u\leq T}|\alpha^\mathbb{Q}_r(u,T;w)|^2\int_{t_0}^T\mathbb{E}^\mathbb{Q}\left[|r(t)|\right]dt<+\infty. \end{align*} Gathering in \eqref{eq:drift-z=0} the terms multiplying $r(t)$, we get the condition \begin{align*} &\left(-\tfrac{\partial}{\partial t}\alpha^\mathbb{Q}_r(t,T;w) +a^\mathbb{Q}_r\alpha^\mathbb{Q}_r (t,T;w)-b_r\alpha^\mathbb{Q}_r (t+\tau,T;w)\mathbf{1}_{[t_0,T-\tau]}(t)+\tfrac{1}{2}\sigma^2_r(\alpha^\mathbb{Q}_r(t,T;w))^2-1\right)r(t)\ff \\ &+\left(-\tfrac{\partial}{\partial t}\alpha^\mathbb{Q}_0(t,T;w)-a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t) \alpha^\mathbb{Q}_r(t,T;w)\right)=0. 
\end{align*} Since the previous equation holds for all $r(t)\geq0$, the functions $\alpha^\mathbb{Q}_r(t,T;w)$ and $\alpha^\mathbb{Q}_0(t,T;w)$ solve the system~\eqref{eq:term_structure_diff_system_delay-BP} with the respective boundary conditions~\eqref{eq:boundary_con_system_delay}.\\ By Lemma~\ref{lem:LemmaTecnico} (see Appendix~\ref{app:ProofsCIR}), with $a=a^\mathbb{Q}_r$, $b=b_r$, and $\sigma=\sigma_r$, the ordinary differential equation \[\begin{cases} -\tfrac{d}{d t}\alpha_r(t) +a^\mathbb{Q}_r\alpha_r(t)-b_r\alpha_r(t+\tau)\mathbf{1}_{[t_0,T-\tau]}(t)+\tfrac{1}{2}\sigma^2_r\alpha_r^2(t)-1=0,\FF\\ \alpha_r(T)=w,\FF \end{cases}\] has a unique solution $\alpha^\mathbb{Q}_r(t,T;w)$, positive and right differentiable w.r.t.\@ $w$.\\ Consequently, also the following ordinary differential equation \[\begin{cases} &-\tfrac{d}{d t}\alpha_0(t)-a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t) \alpha_r(t)=0\quad\text{for $t_0\leq t \leq T$},\\ &\alpha_0(T)=0, \end{cases}\] has a unique solution $\alpha^\mathbb{Q}_0(t,T;w)$, given by \[\alpha^\mathbb{Q}_0(t,T;w)=a^\mathbb{Q}_r\int_t^T\gamma^\mathbb{Q}_r(u)\alpha^\mathbb{Q}_r(u,T;w)du,\] positive and right differentiable w.r.t.\@ $w$. \end{proof} \section{Instantaneous Forward Rate}\label{sec:FR} Since traders are very often interested in determining the future yield on a bond, given by the instantaneous forward rate $f(t,T)$, we now focus on it.
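Before doing so, we note that the backward system \eqref{eq:term_structure_diff_system_delay-BP}--\eqref{eq:boundary_con_system_delay} lends itself to the method of steps: on $[T-\tau,T]$ the Riccati-type equation is integrated backward from $\alpha_r(T)=w$, and on $[t_0,T-\tau]$ the delayed value $\alpha_r(t+\tau)$ is already available from the previously computed segment. The sketch below (backward Euler with hypothetical parameters, our own discretization, not the paper's) illustrates this, with $\alpha_0$ obtained by quadrature.

```python
import numpy as np

def solve_alpha(a_q, b_r, sigma, gamma_q, t0, T, tau, w=0.0, dt=1e-4):
    """Backward Euler, by the method of steps, for the system
    alpha_r' = 0.5*sigma^2*alpha_r^2 + a_q*alpha_r - 1 - b_r*alpha_r(t+tau)*1_{t<=T-tau},
    alpha_0' = -a_q*gamma_q(t)*alpha_r,  with alpha_r(T) = w, alpha_0(T) = 0.
    The delayed value alpha_r(t + tau) is read off the already computed segment."""
    n = int(round((T - t0) / dt))
    n_lag = int(round(tau / dt))
    t = t0 + dt * np.arange(n + 1)
    ar = np.empty(n + 1)
    ar[n] = w
    for k in range(n, 0, -1):
        delay = b_r * ar[k + n_lag] if k + n_lag <= n else 0.0
        ar[k - 1] = ar[k] - dt * (0.5 * sigma**2 * ar[k]**2 + a_q * ar[k] - 1.0 - delay)
    # alpha_0(t) = a_q * int_t^T gamma_q(u) * alpha_r(u) du, by the trapezoidal rule
    g = np.array([gamma_q(u) for u in t]) * ar
    incr = 0.5 * (g[1:] + g[:-1]) * dt
    a0 = a_q * np.concatenate([np.cumsum(incr[::-1])[::-1], [0.0]])
    return ar, a0, t
```

With $b_r=0$ the delay disappears and $\alpha_r$ reduces to the classical CIR Riccati solution, which gives a convenient sanity check of the scheme.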
The main result of this section states that if the spot rate is a fixed delay CIR process $r(t)$, and if Assumptions~\ref{asss:hp_r}, \ref{ass:integrabilityUnidimensional}, and \ref{ass:MarketPriceOfRisk-OneDim} hold, then the instantaneous forward rate is a deterministic linear function of the process $r(t)$ and another suitable process $\tilde{y}^\mathbb{Q}(t,T;0)$; that is \begin{equation}\label{eq:FRclosedformula} f(t,T):=\beta_0^\mathbb{Q}(t,T;0)+\beta_r^\mathbb{Q}(t,T;0)r(t)+\tilde{y}^\mathbb{Q}(t,T;0), \end{equation} where $\beta_0^\mathbb{Q}(t,T;0)$ and $\beta_r^\mathbb{Q}(t,T;0)$ are deterministic functions (see Theorem~\ref{thm:term_structure_derivate_delay-one-dim} and the subsequent Remark~\ref{rem:linear-repr-FR}). Thus we obtain a generalization of the well-known property of the classical CIR model (\cite{CIR:SIR}). \\ More precisely, $\tilde{y}^\mathbb{Q}(t,T;0)$, $\beta_0^\mathbb{Q}(t,T;0)$ and $\beta_r^\mathbb{Q}(t,T;0)$ are obtained by taking the partial derivatives in $w=0$ of $y^\mathbb{Q}(t,T;w)$, $\alpha_0^\mathbb{Q}(t,T;w)$ and $\alpha_r^\mathbb{Q}(t,T;w)$, respectively (see \eqref{eq:def_tildeyQ} and \eqref{eq:beta_alpha}). In Theorem~\ref{thm:term_structure_derivate_delay-one-dim} we show that $\beta_0^\mathbb{Q}(t,T;0)$ and $\beta_r^\mathbb{Q}(t,T;0)$ are characterized as the solution of a deterministic linear system of differential equations, and finally the process $\tilde{y}^\mathbb{Q}(t,T;0)$ has an alternative expression (see \eqref{eq:def_tildeyQ_BP}). \\ We start by recalling the definition and some properties of the forward rate. Let $f(t,T,S)$ be the forward rate at time $t$ for the expiry time $T$ and maturity time $S$. In an arbitrage-free market, the following equality holds $$ \mathit{e}^{R(t,S)(S-t)}=\mathit{e}^{R(t,T)(T-t)}\mathit{e}^{f(t,T,S)(S-T)}, $$ so that \begin{equation}\label{eq:ForwardRate} f(t,T,S):= -\frac{\ln(B(t,S))-\ln(B(t,T))}{S-T}.
\end{equation} \begin{defn}\label{defn:InstanteousForwardRate}${}$\\ The instantaneous forward rate (forward rate, for short) at time $t$ with maturity time $T>t$, $f(t,T)$ is defined by \begin{equation}\label{eq:InstanteousForwardRate} f(t,T)=\lim_{S\rightarrow T}f(t,T,S)=-\frac{\partial}{\partial T} \left(\log B(t,T)\right)=-\frac{1}{B(t,T)}\frac{\partial}{\partial T}B(t,T). \end{equation} \end{defn} It corresponds to the instantaneous interest rate that one can contract at time $t$, on a risk-less loan that begins at date $T$ and is returned on a date later than $T$.\\ By \eqref{eq:InstanteousForwardRate}, we can compute the price of a uZCB as a functional of the instantaneous forward rate; that is, \begin{equation}\label{eq:BPwithFR1}B(t,T)=\mathit{e}^{-\int_t^Tf(t,u)du}.\end{equation} \begin{prop}\label{prop:FormulaTassoForward}${}$\\ Let the spot rate be a nonnegative process $r(t)$. Assume that the process $r(t)$ is integrable, uniformly on bounded intervals. Then, in order to ensure that this financial market satisfies the no-arbitrage condition, the following condition must hold \begin{equation}\label{eq:gen_forward_rate}f(t,T)=\frac{\mathbb{E}^{\mathbb{Q}}\left[r(T)\mathit{e}^ {-\int_t^Tr(u)du}\bigg{|}\mathcal{F}_t\right]}{\mathbb{E}^{\mathbb{Q}}\left[\mathit{e}^{-\int_t^Tr(u)du} \bigg{|}\mathcal{F}_t\right]}.\end{equation} Furthermore, the numerator in the previous equation can be evaluated as \begin{equation}\label{eq:gen_forward_rateNUM} \mathbb{E}^{\mathbb{Q}}\left[r(T)\mathit{e}^ {-\int_t^Tr(u)du}\bigg{|}\mathcal{F}_t\right]\, =\,-\,\frac{\partial}{\partial w^+}\,\mathbb{E}^\mathbb{Q}\left[\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\bigg{|}\mathcal{F}_t\right]\bigg|_{w=0}.
\end{equation} \end{prop} \begin{proof}${}$\\ Taking into account that $f(t,T)$ is $\mathcal{F}_t$-measurable, the equality~\eqref{eq:gen_forward_rate} is equivalent to \begin{equation}\label{eq:FR_equality} \mathbb{E}^{\mathbb{Q}}\left[f(t,T)\mathit{e}^{-\int_t^Tr(u)du}\bigg{|}\mathcal{F}_t\right] =\mathbb{E}^{\mathbb{Q}}\left[r(T)\mathit{e}^{-\int_t^Tr(u)du}\bigg{|}\mathcal{F}_t\right]. \end{equation} Observing that, by \eqref{eq:BPwithFR1} and \eqref{eq:BondPrice}, \[B(t,T)\,\frac{\mathit{e}^{-\int_T^{T+h}f(t,u)du}-1}{h}=\frac{B(t,T+h)-B(t,T)}{h} =\mathbb{E}^{\mathbb{Q}}\left[\mathit{e}^{-\int_t^Tr(u)du}\,\frac{\mathit{e}^{-\int_T^{T+h}r(u)du}-1}{h} \bigg{|}\mathcal{F}_t\right],\] and letting $h\rightarrow0^+$, the left-hand side converges to $ B(t,T)f(t,T)$, and the right-hand side converges to $\mathbb{E}^{\mathbb{Q}}\left[r(T)\mathit{e}^{-\int_t^Tr(u)du}\bigg{|}\mathcal{F}_t\right]$. The latter limit holds thanks to the observation that, $r(t)$ being nonnegative, $$ \left|\mathit{e}^{-\int_t^Tr(u)du}\,\frac{\mathit{e}^{-\int_T^{T+h}r(u)du}-1}{h}\right|\leq \sup_{T\leq u \leq T+1} r(u), \quad \text{for all} \, 0\leq h \leq 1, $$ and the integrability condition on $r(t)$. Similarly we get \[\frac{\partial}{\partial w^+}\mathbb{E}^\mathbb{Q}\left[\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\bigg{|}\mathcal{F}_t\right]=- \mathbb{E}^\mathbb{Q}\left[r(T)\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\bigg{|}\mathcal{F}_t\right],\] and therefore, taking $w=0$, we get \eqref{eq:gen_forward_rateNUM}. \end{proof} \bigskip To obtain formula \eqref{eq:FRclosedformula}, we need a representation formula for the numerator of~\eqref{eq:gen_forward_rate}. 
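The identity \eqref{eq:gen_forward_rateNUM} can also be checked numerically by Monte Carlo; the sketch below uses a plain CIR process without delay and hypothetical parameters (our own illustrative choice), since the identity itself does not depend on the delay structure, and approximates the right-hand side by a one-sided difference quotient in $w$.

```python
import numpy as np

# Monte Carlo sanity check of the derivative identity (eq:gen_forward_rateNUM)
rng = np.random.default_rng(1)
n_paths, n_steps = 2000, 200
dt = 1.0 / n_steps
a, theta, sigma, r0 = 0.5, 0.04, 0.1, 0.03   # hypothetical CIR parameters
r = np.full(n_paths, r0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += r * dt                    # left-endpoint approximation of int r du
    rp = np.maximum(r, 0.0)               # full truncation for the square root
    r = r + a * (theta - rp) * dt + sigma * np.sqrt(rp * dt) * rng.standard_normal(n_paths)

lhs = np.mean(r * np.exp(-integral))      # E[r(T) e^{-int r du}] at w = 0
eps = 1e-5                                # one-sided difference quotient in w
rhs = -(np.mean(np.exp(-integral - eps * r)) - np.mean(np.exp(-integral))) / eps
fwd = lhs / np.mean(np.exp(-integral))    # the forward rate ratio of (eq:gen_forward_rate)
```

Since both sides are estimated on the same sample paths, they agree up to the $O(\varepsilon)$ bias of the difference quotient, regardless of the Monte Carlo error.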
In this regard, as we have seen in the previous section (see Theorem~\ref{thm:term_structure_one-dimensional_CIR_delay}), if the spot rate $r(t)$ is a fixed delay CIR process, we can represent \[\mathbb{E}^\mathbb{Q}\left[\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\bigg{|}\mathcal{F}_t\right]=v^\mathbb{Q} (t,T,r(t),y^\mathbb{Q}(t,T;w);w),\] where the function $v^\mathbb{Q}(t,T,r,y;w)$ is defined in \eqref{eq:term_structure_one-dimensional_CIR_delay}. Then, according to~\eqref{eq:gen_forward_rateNUM} in Proposition~\ref{prop:FormulaTassoForward}, we can represent \[\mathbb{E}^\mathbb{Q}\left[r(T)\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\bigg{|}\mathcal{F}_t\right]\, =\,-\,\frac{\partial}{\partial w^+}v^\mathbb{Q} (t,T,r(t),y^\mathbb{Q}(t,T;w);w) \quad \text{$t\in[t_0,T]$ with $T$ fixed},\] where $y^\mathbb{Q}(t,T;w)$ is the process defined in~\eqref{eq:def_yQ}. \\ As we will prove below, the main observation is that the left-hand side of the previous equality can be expressed as a function $\tilde{v}^\mathbb{Q}(t,T,r,y,\tilde{y};w)$ (see its expression in~\eqref{eq:tilde_vQ}), evaluated in $(r,y,\tilde{y})=(r(t),y^\mathbb{Q}(t,T;w),\tilde{y}^\mathbb{Q}(t,T;w))$, where $\tilde{y}^\mathbb{Q}(t,T;w)$ is \begin{align}\notag \tilde{y}^\mathbb{Q}(t,T;w)&=\frac{\partial}{\partial w^+}y^\mathbb{Q}(t,T;w)= \int_{t-\tau}^t\frac{\partial}{\partial w^+}\Gamma^\mathbb{Q}(u,T;w)r(u)\mathbf{1}_{[t_0-\tau,t-\tau]}(u)du \intertext{(thanks to the expression~\eqref{eq:def_GammaQ} of $\Gamma^\mathbb{Q}(t,T;w)$)} &=\int_{t-\tau}^tb_r \frac{\partial}{\partial w^+}\alpha^\mathbb{Q}_r(u+\tau,T;w)r(u) \mathbf{1}_{[t_0-\tau,t-\tau]}(u)du. \label{eq:def_tildeyQ} \end{align} The function $\tilde{v}^\mathbb{Q}$ is given by \begin{equation}\label{eq:tilde_vQ} \tilde{v}^\mathbb{Q}(t,T,r,y,\tilde{y};w)=\begin{cases}\begin{array}{cc} \left(\beta^\mathbb{Q}_0(t,T;w)+r\beta^\mathbb{Q}_r(t,T;w)+\tilde{y}\right)\mathit{e}^{-\alpha_0^\mathbb{Q}(t,T;w) -\alpha_r^\mathbb{Q}(t,T;w)r -y}& \text{$t<T$}\\
(r+\tilde{y})\mathit{e}^{-wr-y}&\text{$t=T$},\end{array}\end{cases} \end{equation} where we have set \begin{align} \beta^\mathbb{Q}_0(t,T;w)=\frac{\partial}{\partial w^+}\alpha^\mathbb{Q}_0(t,T;w), \qquad \beta^\mathbb{Q}_r(t,T;w)=\frac{\partial}{\partial w^+}\alpha^\mathbb{Q}_r(t,T;w).\label{eq:beta_alpha}\end{align} Indeed, for $t<T$ \begin{align*} &\tilde{v}^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w),\tilde{y}^\mathbb{Q}(t,T;w);w)= -\tfrac{\partial}{\partial w^+}v^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w);w)\\ &=v^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w);w)\left(\tfrac{\partial}{\partial w^+}\alpha^\mathbb{Q}_0(t,T;w)+r(t)\tfrac{\partial}{\partial w^+}\alpha^\mathbb{Q}_r(t,T;w)+ \tfrac{\partial}{\partial w^+}y^\mathbb{Q}(t,T;w)\right)\\ &=v^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w);w)\left(\beta^\mathbb{Q}_0(t,T;w)+r(t)\beta^\mathbb{Q}_r(t,T;w) +\tilde{y}^\mathbb{Q}(t,T;w)\right), \end{align*} while for $t=T$ \begin{align*} \tilde{v}^\mathbb{Q}(T,T,r(T),y^\mathbb{Q}(T,T;w),\tilde{y}^\mathbb{Q}(T,T;w);w)= &-\tfrac{\partial}{\partial w^+}v^\mathbb{Q}(T,T,r(T),y^\mathbb{Q}(T,T;w);w)\\ =&v^\mathbb{Q}(T,T,r(T),y^\mathbb{Q}(T,T;w);w)(r(T)+\tilde{y}^\mathbb{Q}(T,T;w)). \end{align*} Observe that, since $y^\mathbb{Q}(T,T;w)=0$ and $\tilde{y}^\mathbb{Q}(T,T;w)=0$ for all $w$, the latter formula coincides with $v^\mathbb{Q}(T,T,r(T),0;w)r(T)$, from which one recovers the obvious identity $f(T,T)=r(T)$. \\ With the following theorem, we characterize the functions $\beta_0^\mathbb{Q}(t,T;w)$ and $\beta_r^\mathbb{Q}(t,T;w)$ as the solutions of a system of linear differential equations. \begin{thm}\label{thm:term_structure_derivate_delay-one-dim}${}$\\ Let the risk-free interest rate $r(t)$ be the process described by \eqref{eq:IRQequation}, under the probability measure~$\mathbb{Q}$. Let $\alpha_0^\mathbb{Q}(t,T;w)$, $\alpha_r^\mathbb{Q}(t,T;w)$ be the continuous solution of system \eqref{eq:term_structure_diff_system_delay-BP}-\eqref{eq:boundary_con_system_delay}.
Assume that the deterministic function $\Gamma^\mathbb{Q}(t,T;w)$ is chosen as in \eqref{eq:def_GammaQ}. Then, under Assumptions~\ref{asss:hp_r}, \ref{ass:integrabilityUnidimensional} and \ref{ass:MarketPriceOfRisk-OneDim}, we have that, for all $w \in \left[0,\frac{\sqrt{(a_r^\mathbb{Q})^2 +2\sigma_r^2}-a_r^\mathbb{Q}}{\sigma_r^2}\right)$, \begin{enumerate} \item the following linear differential system \begin{equation}\label{eq:term_struction_derivate_system_delay-one-dim}\begin{cases} \frac{d}{dt}\beta_r(t)=&\left(\sigma_r^2\alpha_r^\mathbb{Q}(t,T;w)+a^\mathbb{Q}_r\right) \beta_r(t) \phantom{xxxxxxxxxxxxx}\text{for $T-\tau\leq t\leq T$}\\ \vspace{2mm}\\ \frac{d}{dt}\beta_r(t)=&\left(\sigma_r^2\alpha^\mathbb{Q}_r(t,T;w)+a^\mathbb{Q}_r\right) \beta_r(t)-b_r\beta_r(t+\tau)\phantom{xxx}\text{for $t_0\leq t\leq T-\tau$}\\ \vspace{2mm}\\ \frac{d}{dt}\beta_0(t)=&-a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t)\beta_r(t) \phantom{xxxxxxxxxxxxxxxxxxxxxx}\text{for $t_0\leq t\leq T$} \end{cases} \end{equation} with the boundary conditions \begin{equation}\label{eq:BC_derivative_system}\begin{cases} \beta_r(T)&=1,\\ \beta_r(T-\tau)&=\beta_r((T-\tau)^+),\\ \beta_0(T)&=0, \end{cases}\end{equation} has a unique solution with components $\beta^\mathbb{Q}_r(t,T;w)$ and $\beta^\mathbb{Q}_0(t,T;w)$, coinciding with the functions defined in~\eqref{eq:beta_alpha}; \item the functions $\beta^\mathbb{Q}_r(t,T;w)$ and $\beta^\mathbb{Q}_0(t,T;w)$ are continuous and positive; \item the following representation formula holds: \begin{equation}\label{eq:num_FRw}\begin{split} &\mathbb{E}^\mathbb{Q}\left[r(T)\mathit{e}^{-\int_t^Tr(u)du-wr(T)}\left|\mathcal{F}_t\right.\right]= \tilde{v}^\mathbb{Q}(t,T,r(t),y^\mathbb{Q}(t,T;w), \tilde{y}^\mathbb{Q}(t,T;w);w) \\&= \left(\beta^\mathbb{Q}_0(t,T;w)+r(t)\beta^\mathbb{Q}_r(t,T;w)+\tilde{y}^\mathbb{Q}(t,T;w)\right) \mathit{e}^{-\alpha_0^\mathbb{Q}(t,T;w)-\alpha_r^\mathbb{Q}(t,T;w)r(t)-y^\mathbb{Q}(t,T;w)}, \end{split} \end{equation} where, for $t_0\leq t\leq T$, 
$y^\mathbb{Q}(t,T;w)$ is given in \eqref{eq:def_yQ_w-BP}, and \begin{align} \tilde{y}^\mathbb{Q}(t,T;w)=\int_{t-\tau}^tb_r\beta_r^\mathbb{Q}(u+\tau,T;w)r(u) \mathbf{1}_{[t_0-\tau,t-\tau]}(u)du.\label{eq:def_tildeyQ_BP} \end{align} \end{enumerate} \end{thm} \begin{oss}\label{rem:linear-repr-FR} The announced linear representation~\eqref{eq:FRclosedformula} of the instantaneous forward rate can now be easily derived. Indeed, under the assumptions of Theorem~\ref{thm:term_structure_derivate_delay-one-dim}, \eqref{eq:FRclosedformula} follows from the definition~\eqref{eq:gen_forward_rate} of the instantaneous forward rate, together with \eqref{eq:num_FRw}, \eqref{eq:term_str}, and \eqref{eq:term_structure_one-dimensional_CIR_delay}, all evaluated in $w=0$. Furthermore, as a direct consequence of~\eqref{eq:BPwithFR1}, we can also represent the zero-coupon bond price with the following relation \begin{equation}\label{eq:BPwithFR2} B(t,T)=\mathit{e}^{-\int_t^T\left[\beta^\mathbb{Q}_0(t,u;0)+r(t)\beta^\mathbb{Q}_r(t,u;0)+ \tilde{y}^\mathbb{Q}(t,u;0)\right]du}.
\end{equation} \end{oss} \begin{proof}[Proof of Theorem~\ref{thm:term_structure_derivate_delay-one-dim}] ${}$\\ We prove only points $1.$ and $2.$, since, thanks to \eqref{eq:tilde_vQ} and \eqref{eq:beta_alpha}, point $3.$ immediately follows.\\ Right-differentiating the first equation of system \eqref{eq:term_structure_diff_system_delay-BP} with respect to the variable $w$, we obtain, for $T-\tau\leq t\leq T$, \begin{align*}\tfrac{\partial}{\partial w^+}\left(\tfrac{\partial}{\partial t}\alpha^\mathbb{Q}_r(t,T;w)\right)&=\sigma_r^2\alpha^\mathbb{Q}_r(t,T;w)\tfrac{\partial }{\partial w^+}\alpha^\mathbb{Q}_r(t,T;w)+a^\mathbb{Q}_r\tfrac{\partial}{\partial w^+} \alpha^\mathbb{Q}_r(t,T;w)\\&=\left(\sigma_r^2\,\alpha^\mathbb{Q}_r(t,T;w)+a^\mathbb{Q}_r\right) \tfrac{\partial }{\partial w^+}\alpha^\mathbb{Q}_r(t,T;w).\end{align*} Then, formally, by the second definition in \eqref{eq:beta_alpha}, we have \[\tfrac{\partial }{\partial t}\beta^\mathbb{Q}_r(t,T;w)=\left(\sigma_r^2\,\alpha^\mathbb{Q}_r(t,T;w)+a^\mathbb{Q}_r\right) \beta^\mathbb{Q}_r(t,T;w).\] A rigorous proof of the above equation can be achieved by standard results on ordinary differential equations, depending on a parameter, under global Lipschitz conditions, thanks to Remark~\ref{rem:LemmaTecnico} (see Appendix~\ref{app:ProofsCIR}). Solving this equation with the boundary condition \[\beta^\mathbb{Q}_r(T,T;w)=\tfrac{\partial }{\partial w^+}\alpha^\mathbb{Q}_r(T,T;w)=1,\] we obtain that the unique solution is given by \begin{equation}\label{eq:betaQ_r1} \beta^\mathbb{Q}_r(t,T;w)=\mathit{e}^{-\int_t^T\left(\sigma_r^2\,\alpha^\mathbb{Q}_r(u,T;w)+ a^\mathbb{Q}_r\right)du},\quad\text{for $T-\tau\leq t \leq T$}, \end{equation} which is positive.
Similarly, we get \begin{equation}\label{eq:betaQ_r2_edo} \tfrac{\partial }{\partial t}\beta^\mathbb{Q}_r(t,T-\tau;w)=\left(\sigma_r^2\alpha^\mathbb{Q}_r(t,T;w)+a^\mathbb{Q}_r\right)\beta^\mathbb{Q}_r(t,T-\tau;w) -b_r\beta^\mathbb{Q}_r(t+\tau,T;w)\quad\text{for $t_0\leq t\leq T-\tau$}, \end{equation} with the boundary condition given by the solution of \eqref{eq:betaQ_r1}, evaluated in $t=T-\tau$; the unique solution is given by \begin{equation}\label{eq:betaQ_r2-}\begin{split} \beta^\mathbb{Q}_r(t,T-\tau;w)=\beta^\mathbb{Q}_r(t,T;w)+\,b_r\int_t^{T-\tau}\mathit{e}^{-\int_t^s \left(\sigma^2_r\alpha^\mathbb{Q}_r(u,T;w)+a^\mathbb{Q}_r\right)du}\beta^\mathbb{Q}_r(s+\tau,T;w)ds, \end{split}\end{equation} and is positive. The same procedure applies to the third equation of the system \eqref{eq:term_structure_diff_system_delay-BP}, and recalling the definition~\eqref{eq:beta_alpha} of $\beta^\mathbb{Q}_0(t,T;w)$, we obtain the equation \begin{align}\label{eq:betaQ_0_edo} \frac{\partial }{\partial t}\beta^\mathbb{Q}_0(t,T;w)&=-a^\mathbb{Q}_r\gamma^\mathbb{Q}_r(t)\beta^\mathbb{Q}_r(t,T;w)\quad\text{for $t_0\leq t\leq T$}, \intertext{with boundary condition} \beta^\mathbb{Q}_0(T,T;w)&=\tfrac{\partial }{\partial w^+}\alpha^\mathbb{Q}_0(T,T;w)=0.\notag \end{align} The unique solution of Eq.~\eqref{eq:betaQ_0_edo} is positive and is given by \begin{equation}\label{eq:betaQ_0} \beta^\mathbb{Q}_0(t,T;w)=a^\mathbb{Q}_r\int_t^T\gamma^\mathbb{Q}_r(u)\beta^\mathbb{Q}_r(u,T;w)du. \end{equation} \end{proof}
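As a numerical illustration of the closed form \eqref{eq:betaQ_r1} and of the definition \eqref{eq:beta_alpha}, one can compare $\beta_r^\mathbb{Q}$ on $[T-\tau,T]$, where the delay term is absent, computed from the exponential formula, with a one-sided difference quotient of $\alpha_r^\mathbb{Q}$ in $w$. The sketch below uses backward Euler with hypothetical parameters; it is our own check, not the paper's implementation.

```python
import numpy as np

def alpha_r_tail(a_q, sigma, w, tau, dt):
    """Backward Euler for alpha_r' = 0.5*sigma^2*alpha_r^2 + a_q*alpha_r - 1
    on [T - tau, T] (no delay term there), with terminal value alpha_r(T) = w."""
    n = int(round(tau / dt))
    ar = np.empty(n + 1)
    ar[n] = w
    for k in range(n, 0, -1):
        ar[k - 1] = ar[k] - dt * (0.5 * sigma**2 * ar[k]**2 + a_q * ar[k] - 1.0)
    return ar

a_q, sigma, tau, dt = 0.5, 0.2, 0.5, 1e-4   # hypothetical parameters
ar = alpha_r_tail(a_q, sigma, 0.0, tau, dt)
# beta_r from the closed form (eq:betaQ_r1): exp(-int_t^T (sigma^2 alpha_r + a_q) du)
g = sigma**2 * ar + a_q
tail_int = np.concatenate([np.cumsum((0.5 * (g[1:] + g[:-1]) * dt)[::-1])[::-1], [0.0]])
beta_closed = np.exp(-tail_int)
# beta_r as the one-sided difference quotient of alpha_r in w (definition eq:beta_alpha)
h = 1e-6
beta_fd = (alpha_r_tail(a_q, sigma, h, tau, dt) - ar) / h
```

Both computations agree up to the discretization error of the scheme, and both recover the boundary value $\beta_r^\mathbb{Q}(T,T;w)=1$.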
\section{Introduction} \label{intro} Deep neural networks have shown great success in learning representations from data, but effective training of a deep neural network requires a large number of training examples and many gradient-based optimization steps. This is mainly owing to a lack of prior knowledge when solving a new task. Meta-learning or ``learning to learn'' \cite{schmidhuber:1987:srl,mitchell1993explanation,vilalta2002perspective} addresses this limitation by acquiring meta-knowledge from the learning experience across many tasks. The knowledge acquired by the meta-learner provides inductive bias \cite{thrun1998lifelong} that gives rise to sample-efficient fast learning algorithms. Previous work on deep learning based meta-learning can be summarized into four categories: learning representations that encourage fast adaptation on new tasks \cite{finn2017model,finn2017one}, learning universal learning procedure approximators by supplying training examples to the meta-learner that outputs predictions on the testing examples \cite{hochreiter2001learning,vinyals2016matching,santoro2016meta,mishra2017meta}, learning to generate model parameters conditioned on training examples \cite{gomez2005evolving,munkhdalai2017meta,ha2016hypernetworks}, and learning optimization algorithms to exploit structures in related problems \cite{bengio1992optimization, ravi2016optimization,andrychowicz2016learning,li2017learning}. Although considerable research has been devoted to meta-learning, research until now has tended to focus on image classification and reinforcement learning, while less attention has been paid to text classification. In this work, we propose a meta-learning algorithm specifically designed for text classification. The proposed method is based on Model-Agnostic Meta-Learning \citep[MAML; see][]{finn2017model} that explicitly guides optimization towards adaptive representations.
While MAML does not discriminate different levels of representations and adapts all parameters for a new task, we introduce Attentive Task-Agnostic Meta-Learner (ATAML) that learns task-agnostic representation while fast-adapting attention parameters to distinguish different tasks. In effect, ATAML involves two levels of learning: a meta-learner that learns across many tasks to obtain task-agnostic representation in the form of a convolutional or recurrent network, and a base-learner that optimizes the attention parameters of each task for fast adaptation. Crucially, ATAML takes into account the importance of attention in document classification and aims to encourage task-specific attentive adaptation while learning task-agnostic text representations. We introduce smaller versions of the RCV1 and Reuters-21578 datasets---miniRCV1 and miniReuters-21578---tailored to few-shot text classification, and we show that on these datasets, our method leads to significant improvements when compared with randomly initialized, pretrained and MAML-learned models. We also analyze the impact of architectural choices for representation learning and show the effectiveness of dilated convolutional networks for few-shot text classification. Furthermore, the findings in both datasets support our claim on the importance of attention in text-based meta-learning. The contribution of this work is threefold: \begin{itemize} \item We propose a new meta-learning algorithm ATAML for text classification that separates task-agnostic representation learning and task-specific attentive adaptation. \item We show that an attentive base-learner together with a task-agnostic meta-learner generalizes better. \item We provide evidence as to how attention helps representation learning in ATAML.
\end{itemize} \section{Model-Agnostic Meta-Learning} In this section, we follow the same meta-learning problem formulation \cite{ravi2016optimization} and revisit the MAML \cite{finn2017model} algorithm, which we adapt for text classification in Section \ref{sec:method}. Model-Agnostic Meta-Learning \citep{finn2017model} is a meta-learning algorithm that aims to learn representations that encourage fast adaptation across different tasks. The meta-learner and base-learner share the same network structure, and the parameters learned by the meta-learner are used to initialize the base-learner on any given task. To form an ``episode'' \citep{vinyals2016matching} to optimize the meta-learner, we first sample a set of tasks $\{\mathcal{D}_{1},\mathcal{D}_{2},\dots,\mathcal{D}_{S}\}$ from the meta-training set $\mathscr{D}_{\mathrm{meta-train}}$, where $\mathcal{D}_{i}=\{ \mathcal{D}_i^{\mathrm{train}} , \mathcal{D}_i^{\mathrm{test}} \}$. For a meta-learner parameterized by $\theta$, we compute its adapted parameters $\theta_{i}$ for each sampled task $\mathcal{D}_{i}$: \begin{equation} \theta_{i} \leftarrow \theta - \beta_{\mathrm{T}} \nabla_{\theta}\mathcal{L}(\mathcal{D}_{i}^{\mathrm{train}};\theta), \end{equation} where $\beta_{\mathrm{T}}$ is the step size of the gradient. The adapted parameters $\theta_{i}$ are task-specific and indicate how effective the initialization $\theta$ is, i.e., whether it can achieve generalization through one or a few additional gradient steps. The meta-learner's objective is hence to minimize the generalization error of $\theta$ across all tasks: \begin{equation} \theta^{\ast}=\mathrm{argmin}_{\theta}\sum_{\mathcal{D}_i \sim \mathscr{D}_{\mathrm{meta-train}}}\mathcal{L}(\mathcal{D}_i^{\mathrm{test}}; \theta_{i}). \end{equation} Note that the meta-learner is not aimed at explicitly optimizing the task-specific parameters $\theta_{i}$. 
Rather, the objective of the meta-learner is to optimize the representation $\theta$ so that it can lead to good task-specific adaptations $\theta_{i}$ with a few gradient steps. In other words, the goal of fast learning is integrated into the meta-learner's objective. The meta-learner is optimized by backpropagating the error through the task-specific parameters $\theta_i$ to their common pre-update parameters $\theta$. The gradient-based updating rule is: \begin{equation} \theta \leftarrow \theta - \beta_{\mathrm{M}} \nabla_{\theta}\sum_{\mathcal{D}_i \sim \mathscr{D}_{\mathrm{meta-train}}}\mathcal{L}(\mathcal{D}_i^{\mathrm{test}}; \theta_{i}), \end{equation} where $\beta_{\mathrm{M}}$ is the learning rate of the meta-learner. The meta-learner performs slow learning at the meta-level across many tasks to support fast learning on new tasks. At meta-test time, we initialize the base-learner's parameters from the meta-learned representation $\theta^{\ast}$ and fine-tune the base-learner using gradient descent on task $\mathcal{D}_i^{\mathrm{train}} \sim \mathscr{D}_{\mathrm{meta-test}}$. The meta-learner is evaluated on $\mathcal{D}_i^{\mathrm{test}} \sim \mathscr{D}_{\mathrm{meta-test}}$. MAML works with any differentiable neural network structure and has been applied to various tasks including regression, image classification, reinforcement learning and imitation learning. Extensions of MAML include learning the base-learner's learning rate \cite{li2017meta} and applying a bias transformation to concatenate a vector of parameters to the hidden layer of the base-learner \cite{finn2017one}. It is also theorized that MAML has the same expressive power as other universal learning procedure approximators and generalizes well to out-of-distribution tasks \cite{finn2017meta}. 
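As a concrete illustration of the inner update in Eq. (1) and the outer update in Eq. (3), the following minimal sketch runs MAML on two scalar linear-regression tasks. The toy tasks, step sizes, and the first-order approximation of the meta-gradient (dropping second derivatives) are illustrative assumptions, not the setup used in this paper.

```python
# Hedged toy sketch of MAML's two-level updates on scalar linear regression
# y = theta * x with squared loss; tasks and hyperparameters are illustrative.

def loss_grad(theta, x, y):
    """Gradient of 0.5 * (theta * x - y)**2 with respect to theta."""
    return (theta * x - y) * x

def maml_step(theta, tasks, beta_T=0.1, beta_M=0.01):
    meta_grad = 0.0
    for (x_tr, y_tr), (x_te, y_te) in tasks:
        # Inner loop: one task-specific adaptation step (Eq. 1).
        theta_i = theta - beta_T * loss_grad(theta, x_tr, y_tr)
        # Outer loop: accumulate the test-set gradient at theta_i
        # (first-order approximation of the meta-gradient in Eq. 3).
        meta_grad += loss_grad(theta_i, x_te, y_te)
    return theta - beta_M * meta_grad

# Two tasks with true slopes 2.0 and 3.0: ((x_train, y_train), (x_test, y_test)).
tasks = [((1.0, 2.0), (2.0, 4.0)), ((1.0, 3.0), (2.0, 6.0))]
theta = 0.0
for _ in range(500):
    theta = maml_step(theta, tasks)

# The meta-learned initialization settles between the two task optima, so
# that a single inner step moves it close to either task's solution.
assert abs(theta - 2.5) < 1e-6
```

For this quadratic toy problem the fixed point can be verified analytically: after the inner step the test gradient of each task is $3.6(\theta - c)$ for task optimum $c$, so the meta-update vanishes at $\theta = 2.5$, midway between the tasks.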
\section{Few-Shot Text Classification} Few-shot learning is commonly characterized as $N$-way $K$-shot, or $N$-class $K$-shot, learning, which involves $N$ classes with only $K$ examples for each class. Few-shot learning is often accomplished by making use of knowledge learned from a collection of tasks at an earlier time, and it has made rapid progress on image classification problems \cite{fei2006one,lake2015human,koch2015siamese}, often represented by the Omniglot \cite{lake2011one} and MiniImageNet \cite{vinyals2016matching,ravi2016optimization} datasets. We extend few-shot learning from image classification to the text classification domain, with the goal of learning a text classification model from a few examples. Many important problems require learning text classification models from small amounts of data. As an example, predicting if a person is likely to commit suicide could save many lives, but it is often difficult to collect psychiatric case histories and suicide notes \cite{shneidman1956clues,matykiewicz2012effect}. Furthermore, in the biomedical domain, active learning can be used to classify clinical text data so as to reduce the burden of annotation \cite{figueroa2012active}. The ability to achieve fast and effective learning from a few annotated examples can jump-start the active learning process and improve the convergence of active learning, thereby maximizing the efficiency of human involvement. A large body of research in natural language processing emphasizes the importance of attention in a variety of tasks \cite{shen2018bi,lin2017structured, vaswani2017attention}. These papers show that attention can select, across a sequence of text encodings from a CNN or LSTM, the information needed to obtain a task-specific representation of the input. 
Attention can help decompose the contents of a document into ``subproblems'' \cite{parikh2016decomposable}, thus producing task-specific representations; this ability to decompose text encodings also allows us to learn shared representations across tasks. In the context of meta-learning for few-shot text classification, we empirically show that there is a synergistic relationship between meta-learning a shared text embedding across tasks and learning task-specific representations through attention. Intuitively, by constraining the text embedding parameters to be shared across different tasks in an episode, attention learns to be more task-specific and to better decompose the document according to the task at hand. \section{Attentive Task-Agnostic Meta-Learning} \label{sec:method} \subsection{The Attentive Base Learner} \label{sec:base_learner} The base-learner is a neural network trained on each text classification task $\mathcal{D}$ under a loss function $\mathcal{L}$. The base-learner reads the $T$-word input document $\mathbf{x}=\left [ x_1,x_2,\dots,x_T \right ]$, where $x_t$ denotes the $t$-th word, \begin{equation} \label{equ:bilstm} \mathbf{s}_t=f(x_t; \theta_\mathrm{E}). \end{equation} The base-learner in~\eqref{equ:bilstm} encodes the input sequence $\mathbf{x}$ into a corresponding sequence of states $\left [ \mathbf{s}_1,\mathbf{s}_2,\dots,\mathbf{s}_T \right ]$, where $f$ can take the form of a recurrent or convolutional network with parameters $\theta_\mathrm{E}$. We then apply a content-based attention mechanism \cite{bahdanau2014neural,hermann2015teaching,graves2014neural,sukhbaatar2015end} that enables the model to focus on different aspects of the document. 
The specific attention formulation used here is defined in~\eqref{equ:attention_layer} and is a form of feedforward attention \cite{raffel2015feed}, \begin{equation} \label{equ:attention_layer} \alpha_t=\bm{\theta}_{\mathrm{ATT}}^\intercal \mathbf{s}_t ,\qquad \mathbf{s}'_t = \alpha_t \mathbf{s}_t ,\qquad \mathbf{c}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{s}'_t, \end{equation} where $\bm{\theta}_{\mathrm{ATT}}$ represents the attention parameter vector. For each memory state $\mathbf{s}_t$, we calculate its inner product with the attention parameter, resulting in a scalar $\alpha_t$. The scalar $\alpha_t$ rescales each state $\mathbf{s}_t$ into $\mathbf{s}'_t$, and these rescaled states are averaged to obtain the final representation $\mathbf{c}$ of a document. The attention retrieves relevant information from a document and offers interpretability into the model's behavior by indicating, through the attention weight $\alpha_t$, how much each word contributes to the final prediction. Once an input document $\mathbf{x}$ is encoded into the vectorized representation $\mathbf{c}$, we apply a softmax classifier parameterized by $\bm{\theta}_W$ to obtain the predictions $\hat{y}$. The softmax classifier is replaced by a set of sigmoid classifiers if the labels are not mutually exclusive, as in multi-label classification, \begin{equation} \hat{y}=\mathrm{softmax}(\mathbf{c}; \bm{\theta}_W) \quad \mathrm{or} \quad \hat{y}=\mathrm{sigmoid}(\mathbf{c}; \bm{\theta}_W). \end{equation} \subsection{The Attentive Task-Agnostic Meta-Learner} \label{sec:meta_learner} ATAML learns to obtain common representations that can be shared across different tasks while retaining the fast learning ability to quickly adapt to new tasks. 
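The attentive base-learner described in Section \ref{sec:base_learner} (encoder states, feedforward attention, mean pooling, softmax classifier) can be sketched in a few lines of numpy. The random matrix standing in for the encoder output and all dimensions are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Hedged numpy sketch of the attentive base-learner; a random matrix stands
# in for the encoder states s_1..s_T, and all shapes are illustrative.
rng = np.random.default_rng(0)
T, d, n_classes = 6, 8, 5

S = rng.standard_normal((T, d))            # states s_t = f(x_t; theta_E)
theta_att = rng.standard_normal(d)         # attention vector theta_ATT

alpha = S @ theta_att                      # alpha_t = theta_ATT^T s_t
S_prime = alpha[:, None] * S               # s'_t = alpha_t * s_t
c = S_prime.mean(axis=0)                   # c = (1/T) sum_t s'_t

theta_W = rng.standard_normal((n_classes, d))
logits = theta_W @ c                       # linear layer of the classifier
y_hat = np.exp(logits - logits.max())      # numerically stable softmax
y_hat /= y_hat.sum()

assert y_hat.shape == (n_classes,) and abs(y_hat.sum() - 1.0) < 1e-9
```

Note that, mirroring the equations above, $\alpha_t$ is a raw inner product rather than a normalized distribution over positions; for multi-label tasks the softmax would be replaced by elementwise sigmoids.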
In contrast with MAML, which does not make any distinction between different parameters in the meta-learner, the proposed ATAML splits all parameters $\theta$ into two disjoint sets, shared parameters $\theta_\mathrm{E}$ and task-specific parameters $\theta_\mathrm{T}$, and employs discriminative strategies in the meta-training and meta-testing phases. The shared parameters $\theta_\mathrm{E}$ are aimed at representation learning while the task-specific parameters $\theta_\mathrm{T}$ are aimed at capturing task-specific information for classification. \subsubsection{Meta Training} \begin{algorithm} \caption{Attentive Task-Agnostic Meta-Learner}\label{alg:maml_text} \label{alg:MAML} \begin{algorithmic}[1] \Require $\mathscr{D}_{\mathrm{meta-train}}$: the meta-train set \Require $N$-way $K$-shot learning \Require $S$ classification tasks for each training episode \Require $\beta_{\mathrm{T}}, \beta_{\mathrm{M}}$: task and meta level learning rate \Require $\theta_{\mathrm{E}}$: shared parameters for representation learning \Require $\theta_\mathrm{T} = \{\bm{\theta}_W, \bm{\theta}_{\mathrm{ATT}} \}$: parameters to be adapted at the task level \State randomly initialize $\theta_{\mathrm{E}}$ and $\theta_{\mathrm{T}}$ \Comment{Initialize all parameters} \While{not done} \State Sample $S$ tasks: $\mathcal{D}_i \sim \mathscr{D}_{\mathrm{meta-train}}$\Comment{Sample tasks for meta-training} \ForAll {$\mathcal{D}_i$} \State $\theta_{\mathrm{T},i} = \theta_\mathrm{T} - \beta_{\mathrm{T}} \nabla_{\theta_{\mathrm{T}}}\mathcal{L}(\mathcal{D}_i^{\mathrm{train}};\theta_{\mathrm{T}})$ \Comment{Get task-specific parameters} \EndFor \State $\mathcal{L}_{\mathrm{meta}} = \sum_{\mathcal{D}_i}\mathcal{L}(\mathcal{D}_i^{\mathrm{test}}; \{ \theta_{\mathrm{T},i}, \theta_{\mathrm{E}} \})$\Comment{Get loss of the meta-learner} \State $\theta_\mathrm{T} \gets \theta_\mathrm{T} - \beta_{\mathrm{M}} \nabla_{\theta_\mathrm{T}}\mathcal{L}_{\mathrm{meta}}$\Comment{Update task-specific 
parameters} \State $\theta_\mathrm{E} \gets \theta_\mathrm{E} - \beta_{\mathrm{M}} \nabla_{\theta_\mathrm{E}}\mathcal{L}_{\mathrm{meta}}$\Comment{Update shared parameters} \EndWhile \end{algorithmic} \end{algorithm} The Attentive Task-Agnostic Meta-Learning training procedure is described in Algorithm \ref{alg:MAML}. We use $\theta$ to denote all parameters of the model ($\theta = \{\bm{\theta}_W, \bm{\theta}_{\mathrm{ATT}}, \theta_{\mathrm{E}} \}$), which are divided into shared parameters $\theta_\mathrm{E}$ and task-specific parameters $\theta_\mathrm{T}$, where $\theta_\mathrm{T} = \{\bm{\theta}_W, \bm{\theta}_{\mathrm{ATT}} \}$. To create one meta-training ``episode'' \citep{vinyals2016matching}, we sample $S$ tasks from $\mathscr{D}_{\mathrm{meta-train}}$ and optimize the model towards fast learning across all sampled tasks $\left [\mathcal{D}_1, \mathcal{D}_2, \dots,\mathcal{D}_S \right ]$. As we are sampling random tasks from $\mathscr{D}_{\mathrm{meta-train}}$ in each meta-training iteration, the goal of the meta-learner is to obtain a task-agnostic representation $\theta_\mathrm{E}$ that is reusable across different tasks. For every task $\mathcal{D}_i$ in the meta-training iteration, we only update the task-specific parameters of the base-learner, which are initialized with $\theta_\mathrm{T}$ and updated to $\theta_{\mathrm{T},i}$ using the task-specific gradients $\nabla_{\theta_{\mathrm{T}}}\mathcal{L}(\mathcal{D}_i^{\mathrm{train}};\theta_{\mathrm{T}})$. We then calculate the expected loss under the post-update parameters, which are composed of the task-specific fast weights $\theta_{\mathrm{T},i}$ and the shared slow weights $\theta_\mathrm{E}$, \begin{equation} \mathcal{L}_{\mathrm{meta}} = \sum_{\mathcal{D}_i}\mathcal{L}(\mathcal{D}_i^{\mathrm{test}}; \{ \theta_{\mathrm{T},i}, \theta_{\mathrm{E}} \}). \end{equation} The resulting loss $\mathcal{L}_{\mathrm{meta}}$ can be understood as the loss of the meta-learner. 
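One meta-training iteration of the algorithm above can be sketched as follows, with the inner loop adapting only the task-specific parameters while the shared parameters receive only the outer, meta-level update. The abstract scalar toy losses, the first-order treatment of the meta-gradient, and all hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of one ATAML meta-training iteration: only the task-specific
# parameters theta_T are adapted in the inner loop, while both theta_E and
# theta_T receive the (first-order) meta-gradient in the outer loop.

def ataml_step(theta_E, theta_T, tasks, beta_T=0.1, beta_M=0.01):
    grad_E, grad_T = 0.0, 0.0
    for grad_train, grad_test in tasks:
        # Inner loop: adapt ONLY theta_T into the fast weights theta_Ti.
        theta_Ti = theta_T - beta_T * grad_train(theta_E, theta_T)
        # Meta-loss gradients at the post-update parameters {theta_Ti, theta_E}.
        gE, gT = grad_test(theta_E, theta_Ti)
        grad_E += gE
        grad_T += gT
    # Outer loop: slow update of both shared and task-specific initializations.
    return theta_E - beta_M * grad_E, theta_T - beta_M * grad_T

def make_task(c):
    """Toy task with loss 0.5 * (theta_E + theta_T - c)**2 (illustrative)."""
    def grad_train(tE, tT):            # gradient w.r.t. theta_T
        return tE + tT - c
    def grad_test(tE, tTi):            # gradients w.r.t. theta_E and theta_Ti
        g = tE + tTi - c
        return g, g
    return grad_train, grad_test

tasks = [make_task(1.0), make_task(3.0)]
theta_E, theta_T = 0.0, 0.0
for _ in range(1000):
    theta_E, theta_T = ataml_step(theta_E, theta_T, tasks)

# The learned initialization settles midway between the task optima c = 1, 3.
assert abs(theta_E + theta_T - 2.0) < 1e-6
```

The same skeleton applies when `theta_E` and `theta_T` are network weights and the gradient callables come from automatic differentiation; only the parameter partition and the two update rules matter.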
$\mathcal{L}_{\mathrm{meta}}$ gives us an evaluation measure of how well the task-specific parameters $\theta_{\mathrm{T}}$ can adapt across all the sampled tasks $\mathcal{D}_i$, together with a measure of how well the shared parameters $\theta_{\mathrm{E}}$ can be reused across all tasks. The meta-optimization therefore consists of minimizing $\mathcal{L}_{\mathrm{meta}}$ with respect to all parameters $\theta$, optimizing the model's adaptability and reusability across different tasks. The meta-training iterations are repeated until the model converges, and the resulting parameters $\theta$ are then used as the initialization at meta-test time. \subsubsection{Meta Testing} Meta testing involves evaluating the meta-learned model on the meta-test set $\mathscr{D}_{\mathrm{meta-test}}$ by fine-tuning on $\mathcal{D}_i^{\mathrm{train}}$ and testing on $\mathcal{D}_i^{\mathrm{test}}$, where $\mathcal{D}_i \sim \mathscr{D}_{\mathrm{meta-test}}$. We introduce a meta-testing approach that freezes the shared representation learning parameters $\theta_{\mathrm{E}}$ and only applies gradient updates to the task-specific parameters $\theta_{\mathrm{T}}$. In contrast to fine-tuning all parameters for a new task, this discriminative meta-testing procedure is more coherent with the stratified meta-training strategy. It also regularizes few-shot learning, which improves generalization. \iffalse \subsection{Gradient Properties} \label{sec:attention_as_coupling_elimination} \begin{figure}% \centering \subfigure[Standard LSTM without attention]{% \includegraphics[width=2.6in]{./Images/grad_LSTM}}% \qquad \subfigure[LSTM with attention]{% \includegraphics[width=2.55in]{./Images/grad_LSTM_ATT}}% \caption{The forward pass of standard LSTM and attentive LSTM} \label{fig:forward_passes} \end{figure} We now draw connections between the base- and meta-learner to highlight the importance of attention. 
Since MAML is a gradient-based learning algorithm, the gradient properties of the base-learner play a vital role in the success of meta-learning. The LSTM makes larger gradient updates when there is a stronger match between the attention $\bm{\theta}_{\mathrm{ATT}}$ and the cell state $\mathbf{s}_t$. Summing up, attention not only enables the model to focus on different aspects of the LSTM states, it also results in a more effective learning procedure that allows fast adaptation and generalization. The detailed gradient analysis is included in the Appendix. \fi \section{Experiments} We provide three sets of empirical evaluations, on the single-label miniRCV1, multi-label miniRCV1 and multi-label miniReuters-21578 datasets, to analyze the proposed meta-learning framework. \label{sec:eval_homogeneous} \iffalse \begin{table}[t] \renewcommand{\arraystretch}{1.15} \caption{Number of classes in meta-split of miniRCV1.}\smallskip \centering \label{tab:rcv1_dataset} \begin{small} \begin{tabular}{lccc} \toprule & Meta-train & Meta-validation & Meta-test \\ \midrule Single-label & 30 & 13 & 12 \\ Multi-label & 70 & 12 & 20 \\ \bottomrule \end{tabular} \end{small} \vskip -0.1in \end{table} \fi \subsection{The Base Learners} We use a Temporal Convolutional Network (TCN), a type of dilated convolutional network \cite{van2016wavenet}, as the base learner. We have also conducted experiments with a bidirectional LSTM \cite{schuster1997bidirectional} as the base learner. Details on those experiments as well as the LSTM architecture are included in the Appendix due to lack of space. The TCN contains two layers of dilated causal convolutions with filter size 3 and dilation rate 3. Each convolutional layer is followed by a Leaky Rectified Linear Unit \cite{maas2013rectifier} with negative slope 0.01, which is followed by 50\% dropout \cite{srivastava2014dropout}. For word representation, we use 300-dimensional GloVe embeddings \cite{pennington2014glove}. 
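The dilated causal convolutions used in the TCN base learner can be sketched in numpy as follows; the random weights, the input length, and the channel sizes are illustrative assumptions, and dropout is omitted for brevity.

```python
import numpy as np

# Hedged sketch of a two-layer dilated *causal* convolution stack (filter
# size 3, dilation rate 3) followed by Leaky ReLU, as in the TCN base-learner.
# Weights, channel sizes and the input are illustrative; dropout is omitted.

def dilated_causal_conv(x, w, dilation):
    """x: (T, d_in); w: (k, d_in, d_out). The output at time t depends only
    on inputs at t, t - dilation, t - 2*dilation, ... (left zero-padding)."""
    T, d_in = x.shape
    k, _, d_out = w.shape
    pad = (k - 1) * dilation
    xp = np.vstack([np.zeros((pad, d_in)), x])
    out = np.zeros((T, d_out))
    for t in range(T):
        for j in range(k):                       # j steps back in time
            out[t] += xp[pad + t - j * dilation] @ w[k - 1 - j]
    return out

def leaky_relu(x, negative_slope=0.01):
    return np.where(x > 0, x, negative_slope * x)

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 300))               # T=20 GloVe-sized word vectors
w1 = 0.05 * rng.standard_normal((3, 300, 64))
w2 = 0.05 * rng.standard_normal((3, 64, 64))

h = leaky_relu(dilated_causal_conv(x, w1, dilation=3))
s = leaky_relu(dilated_causal_conv(h, w2, dilation=3))   # states s_1..s_T
assert s.shape == (20, 64)
```

The left-padding of $(k-1) \times \text{dilation}$ zeros is what makes the convolution causal: perturbing a later word never changes the state at an earlier position.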
For optimization, we use the Adam optimizer \cite{kingma2014adam}. For the loss function, we use the categorical cross-entropy error when each document contains only one label, and the sigmoid cross-entropy error when each document may contain multiple labels. Although it is common to use threshold calibration algorithms for multi-label classification, we use the constant 0.5 as the prediction threshold in order to reduce the impact of external algorithms. \subsection{Data} Reuters Corpus Volume I (RCV1) is an archive of news stories for research on text categorization \cite{lewis2004rcv1}. We create two versions of the miniRCV1 dataset by selecting a subset of the full RCV1 dataset to study the effect of few-shot learning in text classification: \begin{enumerate} \item \textit{miniRCV1 for single-label classification}, consisting of the 55 second-level topics as target classes. We sample 20 documents from each class, which are further divided into a training set that contains 5 documents and a testing set that contains 15 documents. Documents with overlapping topics are removed to ensure each document contains a single label. \item \textit{miniRCV1 for multi-label classification}, consisting of 102 out of 103 non-mutually exclusive labels. Each document is associated with a set of labels, and we exclude the one label that appears only once in the corpus. We sample about 20 documents for each class and divide them into training and testing sets in a similar manner. It is worthwhile to mention that, due to the inherent properties of multi-labeled data \cite{zhang2014review}, some classes may contain more examples than other classes. \end{enumerate} Similar to miniRCV1, we create a smaller version of the Reuters-21578 dataset by selecting about 20 examples for each label. \subsection{Few-shot Learning Setup} At the meta-level, we divide all classes into meta-train, meta-validation and meta-test sets. 
In the $N$-way $K$-shot setup, during meta-training, we randomly sample $N$ classes from the meta-training set where each class contains $K$ training examples. At meta-test time, we randomly sample $N$ classes from the meta-test set and calculate evaluation statistics across many runs. We evaluate 5-way 1-shot, 5-way 5-shot, 10-way 1-shot and 10-way 5-shot learning for both single-label and multi-label classification. The single-label classification task is evaluated on classification accuracy; the multi-label classification task is evaluated on micro- and macro-averaged F1-scores, which measure the average F1-scores across all labels. They differ in that micro-averaging gives equal weight to every example regardless of label imbalance, whereas macro-averaging treats all labels equally. \subsection{Results and Discussion} As with other meta-learning paradigms, we consider two baselines:~models trained from random initialization, i.e., ``random'', and~models pretrained across many sampled meta-train tasks, i.e., ``pretrained''. In addition, we also compare our proposed ATAML framework with MAML under a similar architecture. Our experiments show that while MAML achieves better accuracies than the aforementioned baselines, ATAML significantly outperforms MAML in all 1-shot learning experiments. Table~\ref{tab:rcv1_acc}, Table~\ref{tab:rcv1_multilabel} and Table~\ref{tab:r21578} summarize these results on the single-label miniRCV1, multi-label miniRCV1 and multi-label miniReuters-21578 experiments, wherein ``Meta'' denotes the type of meta learner, ``Base'' denotes the type of base learner, ``(A)'' denotes models trained with attention and the bold numbers highlight the best performing ones at a 95\% confidence interval. \paragraph{The difficulty of few-shot learning.} Few-shot text classification is a challenging task, as text data contain rich information from various aspects which is difficult to ascertain from a few training examples. 
This difficulty is manifested in the poor testing performance when training from random initialization. Meanwhile, in both multi-label classification tasks, the TCN models perform much better when we increase the number of training examples from 1 to 5 per class. Furthermore, we show in the Appendix that classic machine learning algorithms, such as the support vector machine, multinomial naive Bayes and K-nearest neighbors, as well as document embedding algorithms, such as doc2vec~\cite{levine1985effect} and doc2vecC~\cite{chen2017efficient}, also suffer from data scarcity in few-shot learning. \paragraph{The difficulty of pretraining in few-shot learning.} We empirically find it generally ineffective to make use of pretrained models in few-shot learning. This can be explained by the ``contradictory outputs'' of the pretraining tasks \cite{finn2017model}. Put differently, as each task contains a small number of examples, when we pretrain the model on many tasks from the meta-training set, the sampled tasks provide contradictory supervisory signals to the classifier, hence making it difficult to pretrain effectively. \paragraph{Why do pretrained $10$-way $K$-shot TCN models perform so poorly?} In multi-label classification tasks, some labels appear less frequently in the training data. This label imbalance causes uncalibrated output probabilities when using the constant 0.5 as the prediction threshold. Some pretrained models perform worse than random guessing because their output probabilities are not well distributed. \paragraph{The effect of meta learning.} Across all three experiments, the empirical results demonstrate that the basic MAML with attentive base learners performs notably better than the non-meta-learned baselines. More importantly, the proposed ATAML algorithm offers further improvements that are statistically significant in all the 1-shot learning experiments. These empirical findings support the need for meta-learning in few-shot text classification. 
That being the case, the empirical findings further support the importance of learning task-agnostic representations together with task-specific attentive adaptations. \begin{table}[t] \renewcommand{\arraystretch}{1.15} \centering \caption{Comparing single-label classification accuracies between baselines and ATAML on miniRCV1}\smallskip \vskip -0.15in \label{tab:rcv1_acc} \begin{center} \begin{small} \begin{tabular}{llcccc} \toprule \multicolumn{2}{c}{Method } & \multicolumn{2}{c}{5-way Accuracy} & \multicolumn{2}{c}{10-way Accuracy} \\ \cmidrule(r{4pt}){1-2} \cmidrule(l){3-4} \cmidrule(l){5-6} Meta & Base & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule random & TCN (A) & 41.52\% & 65.64\% & 28.32\% & 45.12\% \\ pretrained & TCN (A) & 24.06\% & 57.08\% & 18.60\% & 45.85\% \\ MAML & TCN (A) & 47.09\% & \textbf{72.65\%} & 31.57\% &\textbf{ 62.75\%} \\ \cmidrule(l){1-6} ATAML & TCN (A) & \textbf{54.05\%} & \textbf{72.79\%} & \textbf{39.48\%} & \textbf{61.74\%} \\ \bottomrule \end{tabular} \end{small} \end{center} \vskip -0.15in \end{table} \begin{table}[t] \renewcommand{\arraystretch}{1.15} \centering \caption{Comparing multi-label classification outcomes between baselines and ATAML on miniRCV1}\smallskip \vskip -0.12in \centering \label{tab:rcv1_multilabel} \begin{small} \begin{tabular}{llcccccccc} \toprule \multicolumn{2}{c}{Method } & \multicolumn{2}{c}{5-way Micro-F1} & \multicolumn{2}{c}{10-way Micro-F1} & \multicolumn{2}{c}{5-way Macro-F1} & \multicolumn{2}{c}{10-way Macro-F1} \\\cmidrule(r{4pt}){1-2} \cmidrule(r{4pt}){3-4} \cmidrule(l){5-6} \cmidrule(r{4pt}){7-8} \cmidrule(l){9-10} Meta & Base & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule random & TCN (A) & 38.9\% & 60.9\% & 40.6\% & 45.6\% & 31.4\% & 55.7\% & 22.9\% & 33.1\% \\ pretrained & TCN (A) & 26.9\% & 55.8\% & 33.5\% & 52.1\% & 17.0\% & 51.5\% & 14.9\% & 41.4\% \\ MAML & TCN (A) & 52.3\% & \textbf{69.1\%} & 44.9\% & 58.6\% & 43.2\% & \textbf{64.3\%} & 27.7\% & 
\textbf{48.4\%} \\ \cmidrule(l){1-10} ATAML & TCN (A) & \textbf{59.7\%} & \textbf{71.1\%} & \textbf{50.7\%} & \textbf{61.3\%} & \textbf{54.3\%} & \textbf{65.0\%} & \textbf{38.5\% }& \textbf{49.2\%} \\ \bottomrule \end{tabular} \end{small} \vskip -0.12in \end{table} \begin{table}[t] \renewcommand{\arraystretch}{1.15} \centering \caption{Comparing multi-label classification between baselines and ATAML on miniReuters-21578}\smallskip \vskip -0.12in \centering \label{tab:r21578} \begin{small} \begin{tabular}{llcccccccc} \toprule \multicolumn{2}{c}{Method } & \multicolumn{2}{c}{5-way Micro-F1} & \multicolumn{2}{c}{10-way Micro-F1} & \multicolumn{2}{c}{5-way Macro-F1} & \multicolumn{2}{c}{10-way Macro-F1} \\\cmidrule(r{4pt}){1-2} \cmidrule(r{4pt}){3-4} \cmidrule(l){5-6} \cmidrule(r{4pt}){7-8} \cmidrule(l){9-10} Meta & Base & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule random & TCN (A) & 38.2\% & 66.0\% & 25.1\% & 44.9\% & 30.6\% & 55.0\% & 17.9\% & 33.6\% \\ pretrained & TCN (A) & 23.5\% & 50.3\% & 18.4\% & 49.1\% & 16.4\% & 37.8\% & 12.0\% & 37.3\% \\ MAML & TCN (A) &52.4\% & \textbf{74.1\%} & 38.1\% & \textbf{61.2\%} & 44.3\% & 64.3\% & 29.9\% & \textbf{51.2\%} \\ \cmidrule(l){1-10} ATAML & TCN (A) & \textbf{66.3\%} & \textbf{76.5\%} & \textbf{42.6\% }& \textbf{60.8\%} & \textbf{60.9\%} & \textbf{69.4\%} & \textbf{34.9\%} & \textbf{51.2\%} \\ \bottomrule \end{tabular} \end{small} \vskip -0.12in \end{table} \subsection{The Importance of Attention} \label{sec:importance_of_attention} The importance of attention lies in its synergistic effect on the meta learners. Under the same meta learning framework, introducing attention to the base learner leads to improved generalization when compared with non-attentive base learners. From empirical results shown in Table~\ref{tab:ablation}, we find MAML trained with attention performs better than MAML without attention. 
We have similar findings on the miniRCV1 experiments detailed in the Appendix. Furthermore, the proposed ATAML framework implicitly evaluates the learned representations by freezing the meta-learned representation parameters when fine-tuning on a new task. Under those circumstances, a well-trained representation will facilitate the learning of a new task while a poorly trained representation will hinder effective adaptation. From the empirical studies, ATAML performs best across all experiments when we use TCN as the base learner. Focusing only on base learners that are equipped with attention mechanisms, we find that although MAML provides reasonable improvements over the baselines, models trained with ATAML offer substantial improvements in generalization when compared with the rest of the models. This hints at the benefits brought by shared representation learning and discriminative fine-tuning. Putting all these together, we empirically find that both the attention mechanism and the meta learner are crucial components for good generalization in few-shot text classification. To better understand the representation learning procedure as well as the role of attention in meta training, we undertake ablation studies to provide further insights into ATAML. \iffalse \begin{figure}[t] \centering \includegraphics[width=2.5in]{./Images/learning_curve} \caption{Learning curves on meta-validation sets when adapting a meta-learned model to a new task. The MAML methods converge faster and MAML with attention has a smaller generalization gap.} \label{fig:learning_curve} \vskip -0.17in \end{figure} \textit{The efficiency of meta-learning.} Figure~\ref{fig:learning_curve} shows the learning curve when adapting a meta-trained model to a validation task. The meta-learner converges much faster and takes fewer training steps than models trained from random initialization. 
Despite the promising speed improvements, questions remain: In contrast to previous work on image classification where only one gradient step could greatly improve learning, our finding suggests that more steps are required to adapt to a new task. One possible explanation lies in the difficulty of learning recurrent networks, which further sheds light on the difficulty of meta-learning recurrent networks. Further research should be undertaken to investigate the effect of meta-optimization in recurrent networks. \fi \begin{table}[t] \renewcommand{\arraystretch}{1.15} \centering \caption{Ablation studies on miniReuters-21578 for multi-label classification}\smallskip \centering \label{tab:ablation} \begin{small} \begin{tabular}{llcccccccc} \toprule \multicolumn{2}{c}{Method } & \multicolumn{2}{c}{5-way Micro-F1} & \multicolumn{2}{c}{10-way Micro-F1} & \multicolumn{2}{c}{5-way Macro-F1} & \multicolumn{2}{c}{10-way Macro-F1} \\\cmidrule(r{4pt}){1-2} \cmidrule(r{4pt}){3-4} \cmidrule(l){5-6} \cmidrule(r{4pt}){7-8} \cmidrule(l){9-10} Meta & Base & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\ \midrule random & E (A) & 36.7\% & 66.1\% & 25.2\% & 49.1\% & 29.2\% & 55.0\% & 18.2\% & 36.8\% \\ MAML & E (A) & 44.9\% & 72.3\% & 26.4\% & 59.2\% & 35.6\% & 61.7\% & 19.6\% & 47.4\% \\ \cmidrule(l){1-10} MAML & TCN & 26.4\% & 65.7\% & 11.4\% & 44.5\% & 19.1\% & 52.7\% & 7.6\% & 31.2\% \\ MAML & TCN (A) & 52.4\% & \textbf{74.1\%} & 38.1\% & \textbf{61.2\%} & 44.3\% & \textbf{64.3\%} & 29.9\% & \textbf{51.2\%} \\ \cmidrule(l){1-10} ATAML & TCN (A) & \textbf{66.3\%} & \textbf{76.5\%} & 42.6\% & \textbf{60.8\%} & \textbf{60.9\%} & \textbf{69.4\%} & 34.9\% & \textbf{51.2\%} \\ ATAML & TCN (-) & 62.7\% & \textbf{77.5\%} & \textbf{49.5\%} & \textbf{63.7\%} & \textbf{58.3\%} & \textbf{71.1\%} & \textbf{41.6\%} & \textbf{54.2\%} \\ \bottomrule \end{tabular} \end{small} \end{table} \subsection{Ablation Studies} With ablation studies we can offer evidence of the need to 
learn text in a structured manner as opposed to making classifications at the word level alone. We use ``E (A)'' to denote a base learner where an attention model is directly applied to the word embeddings. The goal of this model is to extract individual words to make predictions. This model provides a measure of classification performance when we only take into account individual word-level representations. The empirical results in Table~\ref{tab:ablation} suggest that classifying from word embeddings is inferior to the proposed ATAML model, indicating the need to learn text structures, such as phrase- or sentence-level representations. Moreover, learning from only a few examples exacerbates the effect of over-fitting, as spurious correlations are more likely at the word level than at the phrase or sentence level. It is therefore desirable to have the ability to learn text structures. To analyze the role of attention in meta training, we construct an attention-based meta-training strategy in which the attention parameters are not updated in each meta-training iteration. Although the attention parameters are not updated in meta training, they take task-specific fast weights as in regular ATAML, and these fast weights have a direct influence on the gradients of the TCN layers. The goal of this model is to exploit the fast weights of the attention parameters and examine whether this could produce a well-trained representation without learning the attention parameters in meta learning. This model, denoted ``TCN(-)'', has similar performance to the regular ATAML models in Table~\ref{tab:ablation}. Thus, the role of attention in meta training is to facilitate the learning of shared representations, rather than learning the attention parameters themselves. In addition, we show in the Appendix that the proposed ATAML works better than document embedding approaches, which further confirms its ability to aggregate information from substructures. 
\subsection{Visualizing Learned Attentions} \begin{figure} \centering \includegraphics[scale=0.68]{./Images/non_freeze.pdf} \caption{Visualizing attentions learned by MAML TCN(A).} \label{fig:attention_MAML} \end{figure} \begin{figure} \centering \includegraphics[scale=0.68]{./Images/freeze.pdf} \caption{Visualizing attentions learned by ATAML TCN(A).} \label{fig:attention_ATAML} \end{figure} Figure~\ref{fig:attention_MAML} and Figure~\ref{fig:attention_ATAML} are visualizations of the attention obtained by MAML and ATAML, respectively. The density of the blue color indicates the weight, or importance, of each word to the model's predictions. The target label for this document is ``REGULATION/POLICY'' and both models make correct predictions for this training example. Additional visualizations are provided in the Appendix. The MAML model illustrated in Figure~\ref{fig:attention_MAML} over-fits the training data and only attends to repetitive words, such as ``tobacco'' and ``drug'', that are merely spurious correlations. The proposed ATAML, on the other hand, suffers less from over-fitting and attends to phrases, such as ``accept regulation of'' and ``recommendation that'', which are relevant to the ``REGULATION/POLICY'' label. This suggests the proposed ATAML is able to discover local text substructures via attention on top of shared representation learning, which has a regularization effect when adapting to new tasks. \subsection{The Impact of Base Learner} Current research on meta learning typically uses LSTM as the meta-learner, while we experimented with both LSTM and TCN as the base learner. Although meta learning works with both LSTM and TCN, and both provide improvements over randomly initialized and pretrained models, it is worthwhile to highlight their different properties. Overall, TCN trains faster and generalizes better than LSTM.
One main problem when using LSTM as the base learner is that, in meta-training, the LSTM saturates at a very early stage owing to difficulties in optimization, preventing the meta-learner from obtaining sharable representations across different tasks. The detailed quantitative comparisons are included in the Appendix. \section{Conclusion} \label{conclusions} We propose a meta-learning approach that enables the development of text classification models from only a few training examples. The proposed Attentive Task-Agnostic Meta-Learner encourages the learning of shared representations across different tasks. The attention mechanism decomposes text into substructures for task-specific adaptation, and we found that attention also facilitates learning text representations that can be shared across different tasks. The importance of attention in meta-learning for few-shot text classification is clearly supported by our empirical studies on the miniRCV1 and miniReuters-21578 datasets. We also provided ablation analyses and visualizations to gain insight into how the different components of the model work together. To the best of our knowledge, this is the first work to raise the question of few-shot text classification. Future work should characterize what makes a good few-shot text classification algorithm. \medskip \small
\section{Introduction} For commercial fruit trees, the total light available to the tree and its distribution throughout the canopy is a primary factor in the production of quality fruit, as explained by \cite{McFadyen2004}, as well as in fruit characteristics including dry weight and oil concentration (\cite{connor2016relationships}). Certain crops require a relatively large amount of energy to meet minimal acceptable quality standards, like those described for avocado by \cite{Lee1983}, yet particularly dense trees can have high total light interception but inadequate light distribution, preventing significant parts of the tree from contributing to yield. Consequently, methods for estimating the magnitude and distribution of light throughout canopies are of interest, both for yield estimation and to support crop management decisions. The motivation of this paper is to create a model that relates the distribution of light to the geometric structure of a tree, so that future work can use this to recommend pruning actions that result in optimal light distribution. As the total light used by a tree for photosynthesis during a growing period is difficult to observe directly, it is commonly inferred from the amount of light intercepted by the tree at instantaneous measurement times, as discussed by~\cite{Cherbiy-Hoffmann2013,Cherbiy-Hoffmann2012}. Methods of modelling light interception have long been used to inform orchard decision processes with regard to optimal spacing and tree shape. \cite{charles1982physiological} describes how to analytically estimate light interception using a simplified model which incorporates canopy geometry, varying foliage density and leaf orientation, but uses minimal complexity due to the lack of computing power available at the time. More recent models promote a greater understanding of crop growth.
Functional-Structural Plant Modelling (FSPM) like that described by \cite{White2012} can be used to generate tree models with sufficient resolution to apply high-fidelity light environment modelling like QuasiMC, presented by \cite{cieslak2007quasi}, which can also be extended to provide high-quality information regarding the plant's response to changing management practices. \cite{Massonnet2008} demonstrated virtual systems for simulating respiration, transpiration and photosynthesis in apple trees, allowing growers to see what effect a particular pruning strategy might have on a typical tree. These methods require high-resolution digitisations of the trees, which are difficult to obtain for physical trees using sensors but can be achieved by simulating the growth of virtual trees. Lower-resolution methods exist to measure the energy absorption characteristics of real-world trees. Ceptometers, which measure the amount of Photosynthetically Active Radiation (PAR) at the sensor, can be used to estimate light interception by subtracting simultaneous above- and below-canopy measurements. In order to achieve an accurate estimate, measurements for any given tree must be taken multiple times in different light conditions and at many locations under the tree, to reduce the effect of spatial noise due to patchy light, as described by \cite{ibell2015preliminary}. This method is sufficiently useful that it is often employed despite the intensive labour and time required, so there is interest in faster and easier methods. Light interception can alternatively be estimated using plant characteristics, including the Leaf Area Index (LAI), which measures the leaf area per unit ground area. An early method used mechanical pin frames (\cite{wilson1963estimation}) but more recently sensing devices such as Pocket LAI may be used (\cite{confalonieri2013development,francone2014comparison}).
Here, a camera on a mobile phone is used to compute the gap fraction of a plant, that is, how much of the sky is visible through the foliage when the plant is viewed from the ground at a certain angle. Alternative methods to analyse individual trees can be used if accurate geometric models of the trees are available, and recent advances in Light Detection and Ranging (LiDAR) enable quick and accurate capture of such models at low cost (\cite{rosell2012review}). LiDAR systems mounted on ground vehicles can be used to measure tree parameters like height, volume and leaf area (\cite{Nielsen2009,Sanz-Cortiella2011,underwood2016mapping}), while statically mounted terrestrial LiDAR like that used by \cite{Kato2009} can generate higher quality models at the cost of time and scalability. \cite{hagstrom2011line} describe a LiDAR-based approach for estimating tree characteristics where the opacity of voxelised data is computed by calculating how much of each LiDAR beam passes through each voxel, compared to how much is reflected, to estimate the gap fraction similarly to the Pocket LAI method. While LiDAR scanning is limited in resolution, certain information beyond raw geometry can be deduced. \cite{Ma2016,Ma2016a} present a method for segmenting terrestrial LiDAR point clouds into photosynthetic and non-photosynthetic components with 91\% accuracy using purely geometric data, and further demonstrate that they can calculate the woody-to-total-area ratio of individual trees, which can be used to estimate the LAI and improve radiosity simulations. Further analysis can be applied by employing a combination of modelling and measurement. An early attempt using the Silhouette to Total Area Ratio (STAR) to measure light interception is presented by \cite{sinoquet2005foliage}. In this method, the ratio of silhouette size to total leaf area was calculated for digitised and simulated trees, integrated over a discretised sky model (as STAR is a directional measure).
The digitisation used in this method involves measuring the location and direction of each leaf on a tree, as well as using a leaf area meter to measure the size of sampled leaves. \cite{hadari2004three} investigates the impact of PAR availability and interception on the growth and yield of avocado trees using the ``Radiance'' lighting simulation software tool. The approach is applied on a whole-orchard scale with simplified uniform tree geometries, and provides useful conclusions for agricultural practices such as pruning angle and tree height. However, more accurate geometric modelling would allow higher-resolution simulation and specific recommendations for individual trees. In this paper, we develop a method for sensing and modelling light interception that is applicable to physical (not just virtual) fruit trees, yet scalable to whole orchards. By explicitly modelling tree geometry and light conditions, we estimate the distribution of energy throughout the tree in addition to the total light interception. With the motivation of developing a pruning recommendation system in future work, we here build the underlying model and verify its accuracy against ceptometer data. \section{Method} \label{sec:method} The method used to model light through a given tree canopy is outlined in Figure~\ref{fig:li-blockdiagram}. On-site weather station data (or national meteorological records) were used to create a model of the sky as a set of discrete light sources with energy values corresponding to a particular time period. Ray tracing was then used to calculate the distribution and absorption of light in the tree by superimposing the sky model and analysing the path of light from each sky node through the canopy. Finally, this light model was compared to ground-truth ceptometer data and the model parameters were tuned.
\begin{figure} \centering \includegraphics[width=\linewidth]{figures/LI-blockdiagram} \caption{Method used for light interception estimation} \label{fig:li-blockdiagram} \end{figure} \subsection{Data acquisition} \label{sec:method-data} Data were gathered from a commercial avocado farm in Bundaberg, Queensland, Australia. Three trees were selected, to include a low, medium and high vigour tree, following the method of \cite{robson2017evaluating}. These were scanned at multiple times during the growing season in order to capture the changing shape of each tree due to pruning and growth. To obtain models of tree geometry, the target trees were scanned using a Zebedee handheld LiDAR (\cite{bosse2012zebedee}). This scanner consists of a two-dimensional LiDAR scanner which oscillates about the user's hand and uses Simultaneous Localization and Mapping (SLAM) to generate a three-dimensional point cloud. This sensor was selected for its ability to scan tree geometry throughout the canopy, reducing occlusion. Knowledge of the orientation of the tree is critical to simulation of light interception, but the Zebedee LiDAR does not assign a geographical frame of reference to scanned point clouds. The local scans were therefore aligned to previously obtained georeferenced data from the mobile terrestrial LiDAR system (MTLS) presented by \cite{stein2016image}. This combination was chosen to facilitate repeated experiments within a commercial orchard, whereas future work will directly use repeated MTLS scan data. Weather data for modelling the sky were captured with a Davis Vantage Pro 2 weather station~(\cite{davis2016vantage}), which provided the global irradiance over 30-minute intervals throughout each day.
For periods when the local weather station was offline, we also extracted the daily global solar exposure from the public record provided by the Australian Bureau of Meteorology (\cite{bom2018daily}), and interpolated to the relevant times using calibration factors from periods when both sources were available concurrently. To validate the model, ceptometer measurements were taken in a regular grid below the data trees, with 1\,m spacing along the orchard row and 0.8\,m spacing perpendicular to the row direction, as shown in Figures \ref{fig:cepto-grid} and \ref{fig:cepto-grid-real}. Measurements were taken with the 80\,cm long ceptometer (see Figure~\ref{fig:ceptometer}) perpendicular to the tree row. The ceptometer takes eight readings evenly spaced along its length, measuring PAR in $\mu$mol s\textsuperscript{-1} m\textsuperscript{-2}. In addition to measurements within the grid, a second identical ceptometer was placed at the edge of the orchard block, in a constantly unshaded area, logging data continuously at one-minute intervals. This process was performed on the target trees multiple times a day during four different days in 2016--17. Overall, 33 distinct datasets were available. For 15 of these, data were available for each of the eight sensors in the ceptometer, and for the other 18 only averages of the eight were available, so in our comparisons we use the average of all eight readings to form each data point. Open-air ceptometer measurements were available for 24 datasets, and Vantage Pro weather station data were available for all but 9 datasets, for which Bureau of Meteorology data were used. The method used here for LiDAR scanning captured the data trees in great detail, but not the neighbouring trees. As such, the ceptometer data from the south side of the tree were representative of the light interception of the data tree, but the north side was shaded by geometries that were not modelled.
Therefore, north-side ceptometer measurements were excluded from our experiments. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/cepto-grid} \caption{The ceptometer measurements were taken in a regular grid, fixed for each time of day, under specific trees. Other trees could be close enough to cast a shadow in the grid as well. Note that this sketch only shows the grid in front of the tree, but measurements were taken in an equivalent grid behind it as well.} \label{fig:cepto-grid} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth,angle=180,origin=c]{figures/cepto-grid-real} \caption{A photograph showing the setup of the ceptometer measurement grid below an avocado tree.} \label{fig:cepto-grid-real} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/ceptometer} \caption{The ceptometer used to gather data. Eight light sensors measuring PAR in $\mu$mol s\textsuperscript{-1} m\textsuperscript{-2} are evenly spaced along its shaft.} \label{fig:ceptometer} \end{figure} \subsection{Sky modelling} \label{sec:method-sky-model} The sky was modelled as a set of discrete light sources evenly spaced on a hemispherical surface, generated deterministically using the method presented by \cite{weisstein2003geodesic} to produce a geodesic sphere. The sky resolution was controlled by a model parameter $S$, representing the number of discrete points in the sky. Global irradiance at a particular point on Earth, acquired as described in Section~\ref{sec:method-data}, can be decomposed into a direct component, which measures the light travelling straight from the sun, and a diffuse component, which integrates light from all indirect pathways travelled between the sun and that point.
The diffuse and direct components were calculated using the method presented by \cite{ridley2010modelling}: \begin{equation} D_{frac}=\frac{1}{1+e^{-5.38+6.63k_t+0.006AST-0.007\alpha+1.75K_t+1.31\phi}} \label{eq:dfrac} \end{equation} \begin{equation} k_t = \frac{I_{global}}{H_0} \end{equation} \begin{equation} K_t = \frac{\sum_{h=1}^{24} I_{global}}{\sum_{h=1}^{24} H_0} \end{equation} \begin{equation} \phi = \frac{k_{t-1}+k_{t+1}}{2}, \end{equation} where $D_{frac}$ is the diffuse fraction of light (which can be used to calculate the absolute diffuse and direct irradiance), $k_t$ the clearness index, $I_{global}$ the global irradiance on Earth, $H_0$ the extraterrestrial irradiance, $K_t$ the daily clearness, $\phi$ the persistence factor, $\alpha$ the solar angle and $AST$ the apparent solar time. According to \cite{kennewell2016equation}, $AST$ can be computed from mean solar time (UTC time): \begin{equation} B = \frac{360(N-81)}{365}\ [\mathrm{degrees}] \end{equation} \begin{equation} AST = UTC + 9.87\sin(2B) - 7.67\sin(B + 78.7), \end{equation} where $N$ is the day of the year, counting from January 1st when $N = 1$. The extraterrestrial irradiance $H_0$, which is the irradiance at the entry point of the atmosphere, can be estimated from the solar constant $S_C \approx 1370\,W/m^2$ by taking into account the deviation from the mean distance between the Sun and the Earth on day $N$ of the year ($N \in [1:366]$, counting from January 1st and considering leap years). This gives the following expression: \begin{equation} H_0 = S_C(1+0.033412\cos(2\pi(N-3)/365)). \end{equation} Figure \ref{fig:global-irradiance} demonstrates how global irradiance was split into direct and diffuse components on different days using the method described above.
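As a concrete illustration, the decomposition above can be sketched in Python (a minimal sketch of our own, not the authors' implementation; the daily clearness $K_t$ and persistence factor $\phi$ are passed in directly rather than computed from the adjacent hourly values, and the input numbers are hypothetical):

```python
import math

S_C = 1370.0  # solar constant, W/m^2

def extraterrestrial_irradiance(day_of_year):
    """H_0 on day N, accounting for the varying Sun-Earth distance."""
    return S_C * (1 + 0.033412 * math.cos(2 * math.pi * (day_of_year - 3) / 365))

def diffuse_fraction(k_t, ast_hours, solar_angle_deg, K_t, persistence):
    """Logistic model of Ridley et al. for the diffuse fraction D_frac."""
    z = (-5.38 + 6.63 * k_t + 0.006 * ast_hours
         - 0.007 * solar_angle_deg + 1.75 * K_t + 1.31 * persistence)
    return 1.0 / (1.0 + math.exp(z))

# Hypothetical reading: 900 W/m^2 measured global irradiance around solar noon
h0 = extraterrestrial_irradiance(172)
k_t = 900.0 / h0                       # clearness index
d_frac = diffuse_fraction(k_t, 12.0, 60.0, 0.65, k_t)
```

A high clearness index drives the exponent positive and pushes $D_{frac}$ towards small values, i.e. mostly direct light on clear days, as expected.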
\begin{figure} \centering \subfloat[Clear day]{% \includegraphics[width=0.8\linewidth]{figures/irradiance-clear} \label{fig:irradiance-clear}}\hfill \subfloat[Cloudy day]{% \includegraphics[width=0.8\linewidth]{figures/irradiance-cloudy} \label{fig:irradiance-cloudy}}\hfill \subfloat[Mixed day]{% \includegraphics[width=0.8\linewidth]{figures/irradiance-mixed} \label{fig:irradiance-mixed}}\hfill \caption{Irradiances for different sky types. The global measurements are from the weather station at the farm; the diffuse and direct components are estimated.} \label{fig:global-irradiance} \end{figure} For any latitude and longitude on Earth, it is straightforward to calculate the current solar position as described by \cite{reda2004solar}. In this work, the Python package Pysolar (\cite{stafford2014pysolar}) was used to obtain the solar elevation and azimuth. This allowed us to create a new sky node at this position, to which the direct irradiance component was added. The diffuse component can be represented as $S$ separate light sources placed at the vertices of the discretised sky, using a diffuse sky distribution model. The CIE General Sky Standard, specified by \cite{darula2002cie}, describes how to estimate the relative diffuse luminance of different points in the sky, here denoted $L_{rel}$: \begin{equation} L_{rel} = f(\chi)\phi(Z) \label{eq:lrel} \end{equation} \begin{equation} f(\chi) = 1 + c(\exp(d\chi) - \exp(d\pi/2)) + e\cos^2(\chi) \label{eq:cde} \end{equation} \begin{equation} \phi(Z) = 1 + a\exp(b/\cos(Z)) \label{eq:ab} \end{equation} \begin{equation} \chi = \arccos(\cos(Z_s)\cos(Z) + \sin(Z_s)\sin(Z)\cos(Az)), \end{equation} where $L_{rel}$ is the relative luminance of a certain sky location, $f(\chi)$ is the scattering indicatrix, $\phi(Z)$ is the luminance gradation and $\chi$ is the great arc distance between the sun and the sky element of interest.
The inputs required are: the sun's zenith angle $Z_s$, the zenith angle $Z$ of the element of interest, the difference in azimuthal angle $Az$ between the sun and the element of interest, and five parameters $a,b,c,d,e$ describing the type of sky (overcast, clear, polluted, etc.). The sky model to use (parameters $a, b, c, d, e$ in \eqref{eq:cde} and \eqref{eq:ab}) was determined from the diffuse fraction of light $D_{frac}$ using the rules shown in Table~\ref{tab:cie-types}, which use a subset of the 15 CIE sky types. \begin{table}[h] \centering \begin{tabular}{cccccccc} $D_{frac}$ & Type & Description & $a$ & $b$ & $c$ & $d$ & $e$ \\ \hline $[0.00,0.25]$ & 12 & Standard clear & -1 & -0.32 & 10 & -3 & 0.45 \\ $(0.25,0.50]$ & 11 & White-blue sky & -1 & -0.55 & 10 & -3 & 0.45 \\ $(0.50,0.75]$ & 7 & Partly cloudy & 0 & -1.0 & 5 & -2.5 & 0.30 \\ $(0.75,1.00]$ & 1 & Standard overcast & 4 & -0.7 & 2 & -1.5 & 0.15 \\ \end{tabular} \caption{CIE sky parameters selected according to the value of $D_{frac}$} \label{tab:cie-types} \end{table} As $L_{rel}$ in \eqref{eq:lrel} is a relative luminance, a reference measurement is needed to obtain the actual luminance. \cite{darula2002cie} normalises $L_{rel}$ by the luminance at the zenith, $L_{zenith}$: \begin{equation} L_{zenith} = f(Z_s)\phi(0), \end{equation} which means that the actual luminance can be obtained by normalising each relative luminance by a reference measurement at the zenith. The model presented by \cite{darula2002cie} is intended to distribute a single central luminance measure across the sky, while we instead had the integral of diffuse irradiance over the whole sky. Since the sky node area is uniform and the spectrum of the light source is assumed to be constant, irradiance and luminance differ only by a constant factor, so we could use the same equations for irradiance.
We distributed the integral across the nodes using the relative luminance equation and an integrated normalisation factor (since no zenith measurement was available): \begin{equation} I_{diffuse,node} = \frac{I_{diffuse}}{\sum_{sky} L_{rel}} L_{rel,node}. \end{equation} Figure~\ref{fig:modelled-sky} shows a visualisation of the diffuse light distribution over the sky on a clear and an overcast day generated by this method. Both visualisations are from the same day and time, and hence have the same total amount of diffuse light, and were generated using the same colour scale. The clear sky shows a higher concentration of diffuse light close to the sun, as opposed to the overcast sky, where the light is distributed over a larger area around the position of the sun. \begin{figure} \centering \subfloat[Clear sky]{% \includegraphics[width=0.4\linewidth]{figures/modelled-sky-clear} \label{fig:modelled-sky-clear}\hfill} \subfloat[Overcast sky]{% \includegraphics[width=0.4\linewidth]{figures/modelled-sky-cloudy} \label{fig:modelled-sky-cloudy}} \subfloat{% \includegraphics[width=0.125\linewidth]{figures/modelled-sky-scale} \label{fig:modelled-sky-mixed}} \caption{Distribution of diffuse light over a discretised clear and overcast sky at 14:00 on a winter day. The colour scales in both subfigures represent the same range of radiation. CIE standard skies 12 and 1 from~\cite{darula2002cie}, respectively, were used to generate the distributions.} \label{fig:modelled-sky} \end{figure} Whenever temporal integration is required, for instance to calculate the total light over a growing season, a composite sky for a given time span can be computed, in which we model the total energy in the sky over a known interval rather than the instantaneous power. This composite is simply the sum of skies generated at regular time intervals; an example can be seen in Figure~\ref{fig:composite-sky}.
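For illustration, the relative-luminance equations and this normalisation step can be sketched as follows (our own minimal sketch; the node angles and sun position are made-up values, and the parameters are the standard-overcast row of Table~\ref{tab:cie-types}):

```python
import math

def rel_luminance(Z, Zs, Az, a, b, c, d, e):
    """L_rel = f(chi) * phi(Z): CIE scattering indicatrix times luminance gradation.
    Z, Zs are zenith angles of the sky element and the sun; Az is the azimuth
    difference between them (all in radians)."""
    chi = math.acos(math.cos(Zs) * math.cos(Z)
                    + math.sin(Zs) * math.sin(Z) * math.cos(Az))
    f = 1 + c * (math.exp(d * chi) - math.exp(d * math.pi / 2)) + e * math.cos(chi) ** 2
    phi = 1 + a * math.exp(b / math.cos(Z))
    return f * phi

def distribute_diffuse(I_diffuse, nodes, Zs, params):
    """Split the total diffuse irradiance over the sky nodes, normalised by the
    summed relative luminances (no zenith reference measurement needed)."""
    L = [rel_luminance(Z, Zs, Az, *params) for Z, Az in nodes]
    total = sum(L)
    return [I_diffuse * Li / total for Li in L]

overcast = (4, -0.7, 2, -1.5, 0.15)  # CIE type 1 (standard overcast) parameters
nodes = [(math.radians(z), math.radians(az)) for z in (10, 40, 70) for az in (0, 120, 240)]
node_irr = distribute_diffuse(500.0, nodes, math.radians(30), overcast)
```

By construction, the per-node values sum back to the total diffuse irradiance.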
For composite skies, adding a sun node at the exact solar position, as done for instantaneous simulation, would generate an infeasibly high-resolution sky. We therefore instead added the direct component of light to the nearest existing sky node when integrating over multiple intervals. \begin{figure} \centering \subfloat[Single day]{% \includegraphics[width=0.4\linewidth]{figures/composite-sky-day} \label{fig:modelled-sky-day}\hfill} \subfloat[Full year]{% \includegraphics[width=0.4\linewidth]{figures/composite-sky-season} \label{fig:modelled-sky-season}} \subfloat{% \includegraphics[width=0.125\linewidth]{figures/modelled-sky-scale}} \caption{Two composite skies generated by summing sky models at discrete time points during a single day and over a full year. Note the images were scaled so that the pattern of the diffuse sky is visible, since the direct component of light is significantly brighter. The magnitude of radiation in each figure also differs due to the timescales involved. Best viewed digitally with zooming.} \label{fig:composite-sky} \end{figure} \subsection{Point cloud processing} \label{sec:method-pointcloud} As described in Section~\ref{sec:method-data}, the geometry of the trees on the farm was represented by geo-referenced LiDAR data captured by a handheld Zebedee scanner. Non-uniform point cloud sampling densities arose from the variable range from sensor to target and from the complex patterns of foreground occlusion. Consequently, the data density was regularised by voxelisation as described by \cite{douillard2011segmentation}, with each voxel represented as a cube of side length $s_{vox}$. As a form of noise rejection, a model parameter $w_{vox}$ was introduced as the minimum number of points a voxel must contain to be included.
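The voxelisation with minimum-weight noise rejection can be sketched as follows (our own illustrative Python, not the authors' implementation; the returned dictionary also preserves the point-to-voxel correspondence used to redistribute results back to the raw points):

```python
from collections import defaultdict

def voxelise(points, s_vox, w_vox):
    """Bin 3D points into cubes of side s_vox; keep only voxels containing at
    least w_vox points (noise rejection). Returns {voxel_index: [points]}."""
    voxels = defaultdict(list)
    for p in points:
        key = tuple(int(c // s_vox) for c in p)  # integer grid index per axis
        voxels[key].append(p)
    return {k: v for k, v in voxels.items() if len(v) >= w_vox}

cloud = [(0.01, 0.02, 0.00), (0.03, 0.04, 0.02), (1.50, 0.00, 0.00)]
vox = voxelise(cloud, s_vox=0.1, w_vox=2)  # the isolated third point is rejected
```

With $w_{vox}=2$, only the voxel containing the two clustered points survives; the lone distant point is treated as noise.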
We also maintained a correspondence between each original LiDAR point and its associated voxel, allowing computationally expensive operations such as ray tracing to be performed in the smaller voxel space and later redistributed to the original data with no loss of resolution. The point clouds were manually segmented into branches and foliage, as shown in Figure~\ref{fig:trunk-label}, allowing different parameters to be applied to each\footnote{Future work will consider automated approaches such as those presented by \cite{Lalonde2006}, \cite{Olofsson2014} and \cite{Ma2016}}. These parameters were the transmission coefficient $\beta$ and the absorption coefficient $\alpha$, which estimate the proportions of light transmitted through and absorbed by the voxel, respectively. The segmentation reflects that branches are opaque ($\beta_{b} = 0$) and not photosynthetically active ($\alpha_{b} = 0$), as distinct from regions of foliage, which have non-zero values that sum to one ($\beta_{f} > 0$, $\alpha_{f} + \beta_{f} = 1$). \begin{figure} \centering \includegraphics[width=\linewidth]{figures/trunk-labelled} \caption{A point cloud model of a target tree with points manually labelled as woody matter and foliage.} \label{fig:trunk-label} \end{figure} \subsection{Interception of radiation by a tree} \label{sec:method-raytracing} Knowing the energy in each sky node and assuming parallel rays, we traced the path of light through the tree from each node individually. The point cloud was first voxelised with a grid oriented towards the node, and the voxels were organised into columns parallel to the light rays, which were traversed sequentially to calculate the energy available to each voxel. This process is illustrated in Figure~\ref{fig:voxels}. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/dark-trees-with-voxels} \caption{Illustration of the raytracing process. The red wireframe represents a single column of voxels through the point cloud.
The blue arrows represent the available irradiance diminishing as it travels through the voxels. Smaller voxels were typically used when processing the data.} \label{fig:voxels} \end{figure} The energy in each sky node was distributed to the voxels along each sky-to-ground path, using a method based on that suggested by \cite{charles1982physiological}: \begin{equation} \label{eq:charles} I = I_0(s + (1-s)\beta)^{n-1}, \end{equation} which estimates the downward light flux density falling on the $n$th voxel in a column, knowing the incident irradiance $I_0$ and assuming a constant transmission coefficient $\beta$. In our work, the gap fraction $s$ in \eqref{eq:charles} was ignored because it is explicitly represented by the voxelised geometry of the tree. In other words, occupied voxels were considered to have a gap fraction of 0 (and unoccupied voxels a gap fraction of 1), so the gap fraction of a larger region is represented by its ratio of unoccupied to occupied voxels. Furthermore, we did not assume a constant transmission coefficient, as each voxel had a unique coefficient $\beta_i$, calculated as the average coefficient of its constituent points. Thus we calculated the irradiance available to the $n$th voxel in a column as: \begin{equation} \label{eq:incident-intensity-complex} I_n = I_0\prod_{i=0}^{n-1} \beta_i. \end{equation} The irradiance absorbed by this voxel can then be derived from Equation \eqref{eq:incident-intensity-complex} as $I_{abs,n} = \alpha_n I_n$. For comparison to ground-truth sensor data, we used the instantaneous irradiance absorbed ($W m^{-2}$) in this form. However, the absolute energy ($J$) is required for integration over a growing season, and this can easily be found as $E_{n}$ using: \begin{equation} E_{n} = I_{abs,n}A_{vox}\Delta t, \end{equation} where $A_{vox}$ is the voxel area facing the sky node, calculated as the voxel side length squared, and $\Delta t$ is the time step for which the incident irradiance is valid.
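The sequential traversal of a voxel column implied by Equation \eqref{eq:incident-intensity-complex} can be sketched as follows (our own sketch; the per-voxel $(\alpha, \beta)$ pairs follow the segmentation convention above, with $\alpha_f + \beta_f = 1$ for foliage and $\alpha_b = \beta_b = 0$ for branches, and the numbers are illustrative):

```python
def column_irradiance(I0, coefficients):
    """Traverse one voxel column top-down. coefficients is a list of
    (alpha, beta) pairs per voxel: alpha_i * I_n is absorbed by the voxel and
    beta_i * I_n is transmitted onwards, so I_n = I0 * prod(beta above)."""
    incident, absorbed = [], []
    I = I0
    for alpha, beta in coefficients:
        incident.append(I)
        absorbed.append(alpha * I)
        I *= beta  # remaining downward flux for the next voxel
    return incident, absorbed

# Two foliage voxels (beta_f = 0.8, alpha_f = 0.2) above an opaque branch voxel
inc, absd = column_irradiance(1000.0, [(0.2, 0.8), (0.2, 0.8), (0.0, 0.0)])
```

The branch voxel still receives the attenuated flux but absorbs nothing photosynthetically and transmits nothing, terminating the column.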
The mapping of LiDAR points to voxels changes every time the point cloud is oriented towards a different sky node. Consequently, it is not possible to tally the energy across all sky nodes in a consistent way per voxel. Instead, energy values were stored in the original LiDAR points within each voxel: the voxel energy was distributed evenly among its points, while the irradiance added to each point was set equal to the total irradiance of the voxel. In this way, the total light available became the sum of the contributions from every sky node, and we could calculate the total energy by summing over all LiDAR points, or sample the irradiance at any point. \subsection{Validation by Comparison to Ceptometer Data} In order to validate the model, we compared estimated energy values at known points with ceptometer data captured within a staked-out grid, as described in Section~\ref{sec:method-data}. Ceptometer locations within the point cloud were interpolated from the positions of the physical stakes visible in the LiDAR scans, as shown in Figure~\ref{fig:cepto-grid-pc}. The irradiances stored within the point cloud in the immediate neighbourhood of the ceptometer locations were then averaged to obtain a reading that could be compared to the ground-truth values. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/tree-with-posts} \caption{Ceptometer locations overlaid on the LiDAR point cloud. The initial offset was achieved by aligning the corner stakes and refined by optimisation.} \label{fig:cepto-grid-pc} \end{figure} Conversion from full-spectrum irradiance ($W m^{-2}$) to the Photosynthetically Active Radiation (PAR) range ($\mu$mol $s^{-1}$ $m^{-2}$) measured by the ceptometers involves scaling to convert to mols and then taking a subsection of the full spectrum. These operations can be combined into multiplication by a constant factor, assuming that light has a constant spectrum.
According to \cite{murphy2011maximum} and \cite{thimijan1983photometric}, conversion from irradiance to PAR can be done as: \begin{equation} \label{eq:parconvert} I_{PAR} = 1.72 I_{irr}, \end{equation} where $I_{irr}$ is irradiance in $W m^{-2}$ and $I_{PAR}$ is PAR in $\mu$mol $s^{-1}$ $m^{-2}$. As described in Section~\ref{sec:method-data}, a second ceptometer provided a measurement of open-air PAR at one-minute intervals for the duration of experimentation. We used data from this ceptometer for experimental validation against sub-canopy ceptometer data taken at specific instants in time. This was necessary to compensate for local, short-term, micro-climatic variations, such as patchy cloud cover, that can vary more rapidly and locally than the weather station can account for. Denoting this open-air PAR measurement $O_{PAR}$ and integrating the sky model to calculate an estimated open-air irradiance value $O_{irr}$, we calculated $I_{PAR}$ using: \begin{equation} I_{PAR} = I_{irr} (O_{PAR}/O_{irr}). \end{equation} This was necessary for our experimental validation against instantaneous data, whereas for whole-season calculations, when continuous open-air ceptometer data were unavailable, Equation \eqref{eq:parconvert} was used instead, since short-term phenomena are averaged out over whole days, months and seasons. The model produced PAR estimates for the ceptometer measurement at a particular grid location for a given tree, date and time. We directly compared these with the actual measurements in a scatter plot and calculated a best-fit line. The coefficient of determination ($R^2$) and gradient ($m$) were calculated and compared to their ideal values (1.0 in both cases). We also calculated the root mean squared error (RMSE) of all data points to the best-fit line. \subsection{Parameter Optimisation} The method introduced in Section~\ref{sec:method} has a number of free parameters, which are summarised in Table~\ref{tab:method-tuning}.
These parameters were optimised to maximise $R^2$ of the comparison to ceptometer measurements. The ceptometers used to generate the ground-truth data were aligned manually to the LiDAR scan by identifying the stakes in the corner of the ceptometer grid within the lidar data. Any error in placement of the physical or virtual stakes propagates to the query points used in simulation, so we varied the alignment by different x and y offsets to maximise $R^2$ in comparison to ceptometer data. \begin{table}[h] \centering \begin{tabular}{cc} Parameter & Name \\ \hline $S$ & Sky resolution \\ $\beta_{f}$ & Transmission coefficient for leafy matter \\ $s_{vox}$ & Voxel size \\ $w_{vox}$ & Minimum voxel weight \\ $[\Delta x,\Delta y]$ & Virtual ceptometer offset \end{tabular} \caption{Model parameters which require tuning} \label{tab:method-tuning} \end{table} Further experimentation was undertaken to validate different sub-components of the model output, to justify the added complexity of each component. The primary validation tests involved purposefully changing the major sub-components of the model to test the sensitivity of the output, and are summarised in Table~\ref{tab:method-validation}. \begin{table}[h] \centering \begin{tabular}{p{0.4\linewidth}p{0.4\linewidth}} Experiment & Hypothesis \\ \hline Point cloud of incorrect tree & Inter-tree differences affect results \\ Incorrect time of day & Changes in the sky model during the day affect the results \\ Incorrect date & Differences in sky on different days affect the results\\ Rotated trunk (various degrees) & Intra-tree differences affect results \\ Dedicated sun node & Exact sun position affects results \\ No diffuse light & Splitting total light into diffuse and direct components affects results \end{tabular} \caption{Validation experiments, and the hypotheses they are designed to test} \label{tab:method-validation} \end{table} One of these sub-components is the specific LiDAR scan used. 
By altering this, we measured the importance of inter-tree geometric differences and confirmed that avocado trees are not so similar to one another that the specific tree geometry is unimportant. Furthermore, we believe the structure of any one tree must be modelled accurately to achieve accurate light distribution results. To validate this, we rotated the virtual tree within its environment; this tests the alternative hypothesis that LiDAR scans merely capture approximate geometry, such as tree height, and that internal structural details are unimportant. The time of day and date were also varied to investigate how the sky changes both during a single day and over a season. We ascertained the importance of inserting an additional sky node at the exact solar position by comparing results when direct irradiance was instead snapped to the nearest sky node. Finally, if the critical quantity for calculating results is the direct light from the solar sky node, the light from diffuse sources may not be required. To test this, we performed an experiment in which the sky contained no diffuse nodes; instead, all light was accumulated in the solar node. \section{Results} \label{sec:results} \begin{figure} \centering \subfloat[1 ceptometer per data point]{% \includegraphics[width=\linewidth]{figures/v0_05-w1t1-b0_80} \label{fig:results-quantitative-1x1}}\hfill \subfloat[4 ceptometers per data point]{% \includegraphics[width=\linewidth]{figures/v0_05-w1t1-b0_80-n2} \label{fig:results-quantitative-2x2}} \caption{Estimated vs measured ceptometer readings. Experiment performed with model parameters $s_{vox} = 0.1m$, $w_{vox} = 1$, $S=19$ and $\beta = 0.80$. The green line plots $y=x$ while the red dashed line represents the line of best fit. 
(a) presents with $R^2 = 0.854$, slope $m = 0.826$ and $RMSE = 237 \mu mol s^{-1} m^{-2}$, while (b) presents with $R^2 = 0.923$, slope $m = 0.849$ and $RMSE = 157 \mu mol s^{-1} m^{-2}$} \label{fig:results-quantitative} \end{figure} The scatter plot in Figure~\ref{fig:results-quantitative-1x1} shows the results of comparing estimated energy values with ceptometer measurements across all available data sets, using the optimal parameters determined here. The green line plots $y=x$, which is the desired relationship between estimated and measured values. Meanwhile, the red dashed line represents the line of best fit, demonstrating a strong relationship with $R^2 = 0.854$, $m = 0.826$ and $RMSE = 237 \mu mol s^{-1} m^{-2}$. Several clusters are apparent within the scatter plot. The large cluster near the origin corresponds to points where the model and ceptometer agreed on heavy shade. Several smaller clusters occur along the line of equality at higher energy values, in full sun; these are smaller than the dark cluster because every dataset represents a different time of day on different days, so the maximum light available varied. The noise in the model is most evident in the intermediate regions of the graph, where the ground under the canopy is in partial shade. In these cases, the high spatial frequency of dappled light and shade patterns challenged the spatial resolution of the model. The plot in Figure~\ref{fig:results-quantitative-2x2} illustrates the result when this dappling was compensated for by averaging ceptometer readings and model estimates in a square sliding window of size 2m. This resulted in a strengthened relationship ($m=0.849$, $R^2 = 0.923$) and reduced noise ($RMSE = 157 \mu mol s^{-1} m^{-2}$). 
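The agreement metrics used throughout (best-fit slope $m$, coefficient of determination $R^2$, and the RMSE of the points about the best-fit line) can be computed as in the following sketch, a minimal pure-Python version rather than the original implementation:

```python
from math import sqrt


def fit_metrics(estimated, measured):
    """Least-squares line y = m*x + b through (estimated, measured) pairs,
    returning slope m, R^2, and the RMSE of points about the fitted line."""
    n = len(estimated)
    mean_x = sum(estimated) / n
    mean_y = sum(measured) / n
    sxx = sum((x - mean_x) ** 2 for x in estimated)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(estimated, measured))
    m = sxy / sxx                       # best-fit slope
    b = mean_y - m * mean_x             # intercept
    residuals = [y - (m * x + b) for x, y in zip(estimated, measured)]
    ss_res = sum(r * r for r in residuals)
    ss_tot = sum((y - mean_y) ** 2 for y in measured)
    r2 = 1.0 - ss_res / ss_tot
    rmse = sqrt(ss_res / n)
    return m, r2, rmse
```

For a perfect estimator the slope and $R^2$ are both 1.0 and the RMSE is 0.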
\subsection{Parameter Optimisation} \label{sec:results-parameters} \begin{figure} \centering \subfloat[Gradient $m$]{% \includegraphics[width=0.95\linewidth]{figures/voxelsize-m} \label{fig:tuning-beta-m}}\hfill \subfloat[$R^2$]{% \includegraphics[width=0.95\linewidth]{figures/voxelsize-r2} \label{fig:tuning-beta-r2}}\hfill \subfloat[$RMSE$]{% \includegraphics[width=0.95\linewidth]{figures/voxelsize-rmse}% \label{fig:tuning-beta-rmse}}\hfill \caption{Results when changing $\beta_{f}$. $w_{vox}=1$ and $S=19$ are fixed while $s_{vox}$ is varied as shown.} \label{fig:results-tuning-beta} \end{figure} The parameters in Table~\ref{tab:method-tuning} form a multi-dimensional optimisation problem, and the significant runtime per step prohibited a joint optimisation; therefore, a grid search was performed, starting with the two most influential parameters, the foliage transmission coefficient $\beta$ and the voxel resolution $s_{vox}$, before fixing those and varying the others. Figure~\ref{fig:results-tuning-beta} shows the performance of the model plotted for varying $\beta$, with a separate line for each voxel size $s_{vox}$. The results deteriorated when the voxel size was too large (0.5m) or too small (0.01m). Across most voxel sizes, a transmission coefficient of $\beta = 0.8$ was optimal. \begin{figure} \centering \subfloat[Plot]{% \includegraphics[width=\linewidth]{figures/beta-annotated}}\hfill \subfloat[Photo]{% \includegraphics[width=0.45\linewidth]{figures/photo} \label{fig:results-qualitative-photo}}\hfill \subfloat[$v_{size}=0.02m$]{% \includegraphics[width=0.45\linewidth]{figures/v0_02-b0_80} \label{fig:results-qualitative-1}}\hfill \subfloat[$v_{size}=0.05m$]{% \includegraphics[width=0.45\linewidth]{figures/v0_05-b0_80} \label{fig:results-qualitative-2}}\hfill \subfloat[$v_{size}=0.20m$]{% \includegraphics[width=0.45\linewidth]{figures/v0_2-b0_80} \label{fig:results-qualitative-3}}\hfill \caption{Effect on results when $\beta_f$ is fixed at 0.80 and $s_{vox}$ is varied. 
Other parameters are fixed at $S=19$ and $w_{vox}=1$.} \label{fig:results-tuning-vs} \end{figure} With $\beta$ fixed, Figure~\ref{fig:results-tuning-vs} plots the model performance for different voxel sizes at a higher resolution than Figure~\ref{fig:results-tuning-beta}. The performance degraded at the upper and lower extremes, but there was no single clear optimum. The slope was the most informative metric, suggesting an acceptable parameter range of $s_{vox} \in [0.04,0.1] m$. The data suggested that any voxel size in this range at $\beta=0.8$ would provide near-optimal results. Qualitative comparisons of shadow quality between models at different voxel sizes and the photo in Figure~\ref{fig:results-qualitative-photo} are displayed in Figure~\ref{fig:results-tuning-vs}. Small voxel sizes ($<0.03m$) demonstrated an overly distinct shadow, as seen in Figure~\ref{fig:results-qualitative-1}, while larger voxel sizes ($>0.07m$) degraded recognisable features, as seen in Figure~\ref{fig:results-qualitative-3}, despite comparable $R^2$ and $RMSE$. It was determined that $s_{vox} = 0.05m$ provided a good balance of qualitative and quantitative performance. While the plot in Figure~\ref{fig:results-tuning-vs} was generated with a fixed $\beta = 0.80$, we also explored the results at other values of $\beta$, to compare voxel sizes at their locally optimal transmission coefficient rather than at a single global optimum. These results are omitted for brevity, but demonstrated the same trends. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/voxelweight} \caption{Effect on results when $w_{vox}$ is varied. Other parameters are fixed at $S=19$, $\beta=0.80$ and $s_{vox}=0.1m$} \label{fig:results-tuning-vw} \end{figure} Once the optimal values $\beta=0.8$ and $s_{vox}=0.05m$ were selected, we investigated the effect of changing $w_{vox}$ (the minimum number of LiDAR points in a voxel for it to qualify as solid matter). 
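Voxel occupancy with this minimum-weight threshold can be sketched as follows; this is an illustrative reconstruction, not the original implementation:

```python
from collections import Counter
from math import floor


def occupied_voxels(points, s_vox, w_vox):
    """Map (x, y, z) lidar points to voxel indices at resolution s_vox and
    keep only voxels containing at least w_vox points (noise rejection).
    With w_vox <= 1, every non-empty voxel qualifies as solid matter."""
    counts = Counter(
        (floor(x / s_vox), floor(y / s_vox), floor(z / s_vox))
        for x, y, z in points
    )
    return {voxel for voxel, c in counts.items() if c >= w_vox}
```

Raising `w_vox` discards sparsely populated voxels; as noted below, too high a weight relative to the voxel size strips real detail from the cloud.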
This parameter is related to $s_{vox}$ and used primarily for noise rejection. Figure~\ref{fig:results-tuning-vw} shows the $R^2$ against $s_{vox}$ for different values of $w_{vox}$, to illustrate the relationship between the two parameters. $w_{vox} = 0$ never performed worse than any other value, suggesting that noise filtering was not required for our data. Further, when the weight was made too large compared to the voxel size, performance was reduced, as critical details were removed from the point cloud by excessive noise reduction. Once the size-to-weight ratio was sufficiently large, the weight used appeared to have no quantitative effect. \begin{table}[h] \centering \begin{tabular}{cccccccc} & \multicolumn{3}{c}{With sun node} & \multicolumn{3}{c}{Without sun node} &\\ $S$ & $m$ & $RMSE$ & $R^2$ & $m$ & $RMSE$ & $R^2$ & Runtime \\ \hline 19 & 0.845 & 225 & 0.866 & 0.732 & 461 & 0.606 & 1m53s \\ 121 & 0.847 & 224 & 0.867 & 0.826 & 275 & 0.812 & 15h28m6s\\ 315 & 0.847 & 224 & 0.867 & 0.841 & 252 & 0.837 & 39h38m8s \\ 841 & 0.847 & 224 & 0.867 & 0.843 & 239 & 0.851 & 106h13m8s \\ 1271 & 0.847 & 224 & 0.867 & 0.843 & 247 & 0.843 & 159h46m48s \\ 1983 & 0.847 & 224 & 0.867 & 0.847 & 233 & 0.858 & 249h24m30s \end{tabular} \caption{Model error metrics for different sky complexities. Experiments were performed with constant model parameters $s_{vox} = 0.1m$, $w_{vox} = 1$ and $\beta = 0.80$. Runtime was measured as the CPU time used by the whole process. The model was parallelised across 8 cores, so the real runtime was approximately one eighth of that reported. } \label{tab:results-tuning-sky} \end{table} Finally, the sky resolution $S$ was varied, both with and without the use of a dedicated sky node for the exact sun position, with the resulting slope, $R^2$ and $RMSE$ reported in Table~\ref{tab:results-tuning-sky}. This shows that the resolution of the sky is generally unimportant if a dedicated sun node is used. 
Without a dedicated node, lower resolutions suffered performance losses. A significant difference in runtime was also demonstrated. For this parameter, we used an $s_{vox}$ value of 0.1m to reduce model runtime; as shown in Figure~\ref{fig:results-tuning-vs}, this performed as well as the chosen voxel size of 0.05m. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/offset-rmse-all-trees.jpg} \caption{Heat maps of $RMSE$ across offsets for $\Delta x,\Delta y \in [-1,1]$ relative to $[0,0]$ at the centre, which represents the original manually generated ceptometer placement. Each individual LiDAR scan is represented, and all offset profiles use the same colour scale.} \label{fig:offset-optimisation} \end{figure} Figure~\ref{fig:offset-optimisation} shows the performance of the model as a heat map of RMSE for different offsets $[\Delta x,\Delta y] \in [-1,1]m$ for all data sets. The offset is relative to the manually selected ceptometer position at $[\Delta x,\Delta y]= [0,0]$. The response to these offsets demonstrated the sensitivity of the results to the precise shadow location on the measurement grid. Some trees showed a clear optimal offset, while others had a noticeable bias in X but not in Y. \begin{table}[h] \centering \begin{tabular}{cccc} Experiment & $m$ & $RMSE$ & $R^2$ \\ \hline Basic & 0.845 & 225 & 0.866 \\ Incorrect point cloud & 0.246 & 2050 & 0.0726 \\ Incorrect time & 0.663 & 504 & 0.564 \\ Incorrect date & 0.558 & 607 & 0.471 \\ No dedicated sun node & 0.732 & 461 & 0.606 \\ No diffuse light & 0.911 & 251 & 0.839 \end{tabular} \caption{Degradation of error metrics when intentionally mismatching data for validation experiments. 
All of these experiments were performed using model parameters $s_{vox} = 0.1m$, $w_{vox} = 1$, $S = 19$ and $\beta_{f} = 0.80$.} \label{tab:results-validation-incorrects} \end{table} Table~\ref{tab:results-validation-incorrects} demonstrates the difference in results when different aspects of the model were purposefully altered. Without any alterations, an $R^2$ of 0.866 was achieved, with an RMSE of 225. The incorrect time and date worsened the results, with $R^2 = 0.564$ and $R^2 = 0.471$ respectively, while removing the sun node caused a smaller deterioration in the relationship ($R^2 = 0.606$) but more than doubled the error ($RMSE=461$). As demonstrated in Table~\ref{tab:results-tuning-sky}, this effect would likely be far more significant at smaller sky resolutions, but is almost unnoticeable at higher resolutions. Removing the diffuse light component caused only a slight reduction in performance ($R^2 = 0.839$), though we would expect it to be worse for data sets from cloudy days with a higher diffuse fraction. Use of the point cloud from an incorrect tree destroyed the relationship, with an $R^2$ of 0.0726, implying that modelling the geometric characteristics of the tree and its neighbours (for instance, height, volume, density and specific geometries) was critical. \begin{figure} \centering \includegraphics[width=\linewidth]{figures/trunk-rotate} \caption{Error measures of the model as tree trunks were rotated from 0 to 360 degrees. Comparison performed using model parameters $s_{vox} = 0.1m$, $w_{vox} = 1$, $S = 19$ and $\beta_{f} = 0.80$.} \label{fig:results-validate-rot} \end{figure} The results of artificially rotating trees from their true alignment are shown in Figure~\ref{fig:results-validate-rot}, which reveals that the model performed best at the correct alignment and was sensitive to changes in alignment, with a clear drop in performance for angular errors as small as 5 degrees. 
Upon rotation, the tree maintained its large-scale features, such as volume and height, while the specific intra-tree geometry was varied. This showed that it would be insufficient to model the tree in a simplistic fashion, for instance using a generic tree shape with the same gross characteristics (such as height) but none of the specific geometric detail. \section{Discussion} The results presented in Section~\ref{sec:results} validated the model's energy estimates against measurements taken on the canopy floor, whereas the intended use of the model in future work is to estimate the energy distribution throughout entire 3D canopies (as seen in Figure~\ref{fig:results-tuning-vs}). It was infeasible to gather ceptometer measurements distributed in 3D, so the 2D canopy-floor validation serves as a reasonable proxy. We believe this is valid because, although the spatial arrangement of the ceptometer data is two dimensional, the ray paths intersect the full complexity of the 3D geometry prior to the planar intersection, and the importance of the specific 3D geometry was demonstrated. Validation was performed for instantaneous measurements because the cost of ceptometer sensors prohibited leaving one per node in the field. The model, however, permits estimation of energy absorption and distribution across time spans such as an entire growing season, by integrating the changing sky throughout the duration of the simulation. Figure~\ref{fig:composite-sky} demonstrated that, as the sky is integrated over time, variations in the diffuse component of light are averaged out. This means that short-term effects such as local cloud movements, which are unpredictable and may have introduced errors in the ceptometer validation in this study, are unlikely to have a significant impact on the composite sky. The more critical value to model in this case is the total diffuse light, which was extrapolated from public records and does not require sensors on location. 
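The time-span integration described above can be sketched as follows; the sky states and the per-point visibility map are illustrative stand-ins for the paper's sky model and ray tracing, not its implementation:

```python
# Sketch of integrating the model over a time span: for each timestamp the
# sky model assigns an irradiance to each sky node, and each query point
# accumulates the irradiance of the nodes it can see, attenuated by the
# transmission along the ray. `visibility` maps point -> {node: transmission
# in [0, 1]} and is assumed precomputed by ray tracing.


def integrate_energy(sky_series, visibility):
    """Sum irradiance over all timestamps and sky nodes, per query point."""
    totals = {p: 0.0 for p in visibility}
    for node_irradiance in sky_series:          # one sky state per timestamp
        for p, transmissions in visibility.items():
            for node, tau in transmissions.items():
                totals[p] += tau * node_irradiance.get(node, 0.0)
    return totals
```

Because the visibility map depends only on the static geometry, it can be reused for every timestamp; only the sky states change over the season.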
Introducing a dedicated sky node demonstrated a practical improvement in runtime for instantaneous estimates, since the direct light from the sun is dominant at any given time, and the diffuse light distributed across the remainder of the sky is unlikely to have a significant effect on individual ceptometer readings, as shown in Table~\ref{tab:results-validation-incorrects}. However, if this approach were used for time spans, the resolution of the sky would be inflated by the addition of many continuously placed nodes, which would prove infeasible beyond a certain length of time. For this reason, a sky resolution should be chosen which can match the instantaneous performance of the dedicated sun node without overly compromising runtime. The results reported in Table~\ref{tab:results-tuning-sky} suggest that a resolution of 841 sky nodes would be appropriate. Furthermore, the geometry of the neighbouring trees gains increased significance when the energy estimate includes the earlier and later hours of the day, as the direct light from the sun is more likely to pass through neighbours on its path to the tree of interest. In our data, trees in neighbouring rows were insufficiently scanned by LiDAR, such that the shadows cast by these trees could not be modelled. While this did not affect the ceptometer measurements used in validation, it may impact seasonal light estimates, so collecting comprehensive LiDAR scans of neighbouring trees would improve performance. These scans could be provided by more extensive use of the Zebedee handheld LiDAR, or by other mobile LiDAR systems such as those presented by \cite{underwood2016mapping} and \cite{stein2016image}, which are designed to capture per-tree geometry at the scale of the whole orchard. If a mobile platform were used to scan entire orchard blocks, there would be no concern of unmodelled neighbours. 
As mentioned, the model would require a high-resolution sky to achieve an appropriate level of accuracy, and with a larger point cloud the runtime of the model would increase significantly. However, the model is highly parallelisable, as the ray traced from each sky node can be processed in isolation. \subsection{Future work} The current model takes into account a classification of LiDAR points as trunk or foliage, which was provided by manual labelling for this study. Existing algorithms for branch/foliage classification will be explored to provide a completely automated end-to-end solution in future work. Further, since the ultimate aim of the model is to provide data that can help improve orchard yield, the predictive power of the model's output for estimating yield and fruit size will be tested and validated. While yield depends on a variety of factors that are not captured in the model, a proven correlation between estimated light intake and yield would enable the model to be applied to pruning recommendations as part of automated orchard decision support and management. Using the model described here and a digitised orchard (e.g. captured using a mobile LiDAR platform), existing pruning methods could be evaluated with the aim of maximising the total light interception or improving the light distribution of fruit trees. Further, such a system could suggest variations on traditional methods that are better optimised for this purpose. \section{Conclusion} A model for simulating the light energy captured by individual fruit trees in an orchard was presented. The sky was estimated at particular times and places, taking weather into account, and the light from each part of the sky was traced through a tree model created using a hand-held LiDAR. The model was validated and optimised using ceptometer measurements captured in parallel with the LiDAR scans. 
Strong agreement was observed between the model and ceptometer data on the canopy floor ($R^2 = 0.854$, $RMSE = 237 \mu mol s^{-1} m^{-2}$). An additional validation was performed to assess the importance of each major sub-component, which demonstrated the overall complexity of the algorithm was justified. The validated model is suitable for further development towards an orchard decision support system for pruning. \subsubsection*{Acknowledgements} This work is supported by the Australian Centre for Field Robotics (ACFR) at The University of Sydney and by funding from the Australian Government Department of Agriculture and Water Resources as part of its Rural R\&D for profit programme. Thanks to Sushil Pandey, Nicholas Anderson and Kerry Walsh from Central Queensland University for providing the ceptometer data, as well as Chad Simpson, Chris Searle and Simpson Farms for their support and to Andrew Robson from the University of New England for selecting target trees. Thanks also to Neil White for the ongoing discussions and collaboration. Finally, thanks to Vsevolod (Seva) Vlaskine and the software team for their continuous support, and to Salah Sukkerieh for his continuous direction and support of Agriculture Robotics at the ACFR. For more information about robots and systems for agriculture at the ACFR, please visit http://sydney.edu.au/acfr/agriculture. \subsubsection*{Conflicts of interest} The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses or interpretation of the data; in the writing of the manuscript; nor in the decision to publish the results.
\section{System Overview} The NECTEC team is a first-time participant in the VBS competition in 2018. We have presented a video retrieval system as an interactive browsing tool with a simple interface. The main system is designed to support two retrieval modes: text-based retrieval and sketch-based retrieval~\cite{Sloth2017}. Its features can be summarized as follows: \begin{itemize} \item indexing with 16,429 high-level concepts from three convolutional neural network (CNN) models, \item indexing with eight dominant color masks and the top 10 object localizations using masks of their bounding boxes, \item using a hashing technique to search the color masks and object localization masks, \item reducing the retrieval time by using the Elasticsearch system for full-text search, \item reducing the retrieval time by using the Redis system for sketch-based search, \item viewing all keyframes in a related video shot, \item reranking the results by grouping by video shot, \item combining the text-based and sketch-based search using a weighting technique, \item two display modes for the retrieval results: sorting all frames by similarity score, or grouping frames by video id. \end{itemize} \section{Final System Changes} The system described in our VBS2018 paper~\cite{Sloth2017} has been further improved after paper submission. The details of these changes are described in this section. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\linewidth]{figures/vbs_framework.png} \end{center} \caption{Overview of our indexing process.} \label{fig:overview} \end{figure} The final system was developed using Elasticsearch~\cite{Elasticseach2015} together with Redis~\cite{Redis2013}, as shown in Fig.~\ref{fig:overview}, to improve the retrieval time. For the indexing process, instead of using every frame in the whole dataset, we used the 335,944 keyframes provided with the TRECVID dataset~\cite{2017trecvidawad}. 
For each keyframe, concept labels, a color feature, and object spatial vectors were extracted. We used CNNs to extract objects, scenes, and image captions as concept labels. The objects were recognized and localized using Faster R-CNN, trained on the Open Images dataset~\cite{openimages} with 545 object classes. Scenes were extracted using AlexNet trained on the Places365 dataset~\cite{zhou2017places}. We used the image captioning model im2txt~\cite{Vinyals2017} to generate image captions. These extracted labels were indexed using Elasticsearch~\cite{Elasticseach2015} at the keyframe level. For color extraction, eight dominant colors (red, purple, dark blue, light blue, green, yellow, orange, and gray) were selected to generate 16x16 binary masks as a color feature vector~\cite{Sloth2017}. Similar to the color feature, we chose the top 10 object labels from the 545 object classes to construct 16x16 binary masks for spatial object extraction. The top 10 object labels comprise person, man, woman, face, clothing, tree, plant, car, window and poster. Both color and spatial feature vectors were hashed from the 16x16 binary masks using locality-sensitive hashes (LSHs)~\cite{Redis2013}, and the hashes were stored in Redis. \begin{figure}[b] \begin{center} \includegraphics[width=0.9\linewidth]{figures/textseach-1.png} \end{center} \caption{Screenshot of Sloth Search System} \label{fig:screenshot_gui} \end{figure} For text-based retrieval using Elasticsearch~\cite{Elasticseach2015}, the score of an image with respect to the search terms was calculated based on the Term Frequency/Inverse Document Frequency (TF/IDF) scoring model. For visual-based retrieval, the score was calculated based on the cosine distance of the hashes. In order to combine the results from textual and visual search, the weighted formulation in Eq.~\ref{eq:weight_dist} is used. 
\begin{equation} \label{eq:weight_dist} sim_{all}=w_{t} \cdot sim_{t}+w_{c} \cdot (1 - dist_{c})+ w_{o} \cdot (1-dist_{o}) \end{equation} where $sim_{all}$ is the combined similarity score, $sim_{t}$ is the similarity score from querying by text, and $dist_{c}$ and $dist_{o}$ are the cosine distances from querying by color sketch and object sketch, respectively. $w_t$, $w_c$ and $w_o$ are weight parameters that can be defined by the user. Our tool was designed with a simple, basic interface where a user can search by entering one or more text queries, or by uploading a sketch image, as shown in Fig.~\ref{fig:screenshot_gui}. For a visual query, the user can roughly sketch a color image or draw bounding boxes of a particular object represented by its color. A combined query can be performed, and the weight for each query type can be adjusted. There are two display modes for viewing the retrieval results. In the default mode, the results are displayed in a grid view sorted by similarity score, as shown in Fig.~\ref{fig:screenshot_gui}. In the grouping mode, frames with the same video id are grouped together as a row, as shown in Fig.~\ref{fig:video_grouping}, and the groups are sorted by their maximum score. Our system also has a shot view for each video, shown in Fig.~\ref{fig:viewall}, which allows a user to view all keyframes in a particular video by clicking the video id. \begin{figure}[tb] \begin{center} \includegraphics[width=0.9\linewidth]{figures/group-by-video.png} \end{center} \caption{Re-ranking by video grouping} \label{fig:video_grouping} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=0.9\linewidth]{figures/viewall.jpg} \end{center} \caption{Viewing all frames of each video} \label{fig:viewall} \end{figure} \section{System Usage at VBS2018} The text-based search was mainly used by both experts and novices during the competition. The search can be queried using multiple keywords to improve the result. 
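The combined scoring of Eq.~\ref{eq:weight_dist} can be sketched as follows; this is a minimal illustration, while the actual system obtains $sim_{t}$ from Elasticsearch and the distances from the LSH hashes in Redis:

```python
def combined_similarity(sim_text, dist_color, dist_object,
                        w_t=1.0, w_c=1.0, w_o=1.0):
    """Combined score of Eq. (1): text similarity plus (1 - cosine distance)
    for the color-sketch and object-sketch channels, each weighted by a
    user-defined factor."""
    return (w_t * sim_text
            + w_c * (1.0 - dist_color)
            + w_o * (1.0 - dist_object))
```

Setting a channel's weight to zero disables that channel, so pure text search and pure sketch search are special cases of the same formula.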
This type of search is faster and easier to use than the visual sketch. However, the sketch-based search serves as an alternative option when the text-based search fails, with the additional capabilities of specifying color distribution and object location. There are three types of queries in the competition: visual KIS, textual KIS and textual AVS. Our tool performed quite well on the AVS task. With an easy-to-use browsing interface and adequate labels for general objects, the user could search for the general objects of the AVS task quite effectively. Such text-based queries are quite generic, enabling our system to find many relevant frames. Moreover, for a particular retrieved frame, the user has the option to view all frames in that video; this feature helped users in the AVS task retrieve more related frames from the same video. Both expert and novice users could search and retrieve the desired images for the AVS task quite quickly, and our team was ranked first in this competition task. For visual KIS, only two queries in the expert session were solved using text-based queries, and none were solved in the novice session. In this visual KIS task, each frame usually contains different objects, making it difficult to search for a particular shot, as our current system can only search for an individual frame and not for a shot composed of multiple unique objects appearing in different frames. Our sketch-based query also did not perform as well as we expected, and it was not able to help find the shot when the text-based search failed. On the other hand, textual KIS is also quite difficult, as there is no visual information and we can rely only on text-based queries. This textual information is usually very specific, and our system was not able to extract such specific object details as ``a bearded man'' or ``a woman carries a large orange bag''. 
Our set of object types is also still limited, making the system unable to find uncommon objects. At the VBS2018 competition, our system was demonstrated to be a good interactive browsing tool. However, the system still needs some improvements. Currently, the number of keyframes is still too small, and the system lacks the ability to search for multiple components across a whole shot or for the attributes of an object. Reranking and filtering should be adopted for a better interactive browsing interface, and the sketch-based search should be improved to give better results as well.
\section{Introduction} Let $M$ be an $n$-dimensional complex manifold endowed with a Hermitian metric $g$. Then, on the space of $(p,q)$-forms on $M$ several self-adjoint elliptic differential operators are defined: of order two, such as the {\em Dolbeault Laplacian} $\Delta^g_{\overline{\del}}$, and of order four, such as the {\em Bott-Chern} and {\em Aeppli Laplacians}, respectively denoted by $\tilde{\Delta}^g_{BC}$ and $\tilde{\Delta}^g_{A}$, involving $\partial,\overline{\del},\partial^*,\overline{\del}^*$ and suitable combinations thereof. If $M$ is compact then, in particular, according to Schweitzer \cite{S}, a $(p,q)$-form $\varphi$ on $M$ satisfies $\tilde{\Delta}^g_{BC}\varphi=0$ if and only if $$\partial\varphi=0,\quad\overline{\del}\varphi=0,\quad \overline{\del}^*\partial^*\varphi=0. $$ Furthermore, if the Hermitian metric $g$ on the compact complex manifold $M$ is also K\"ahler, then, by ellipticity, the kernels of these differential operators are finite dimensional and, as a consequence of the K\"ahler identities, they coincide. For a non-compact complex manifold $M$, one can consider smooth $L^2$ forms and study the space of $L^2$ harmonic $(p,q)$-forms for the above Laplacians. In \cite{DF}, the space of $L^2$ harmonic forms with respect to the Dolbeault Laplacian $\Delta^g_{\overline{\del}}$ on a bounded strictly pseudoconvex domain $\Omega$ in $\C^n$ with smooth boundary, endowed with the Bergman metric, is studied. Precisely, denoting by $\mathcal{H}^{p,q}_{\overline{\del},2}$ the space of $L^2$ harmonic forms with respect to $\Delta^g_{\overline{\del}}$, Donnelly and Fefferman proved that, under the assumptions above, it holds that $$\left\{ \begin{array}{lll} \dim\mathcal{H}^{p,q}_{\overline{\del},2}=0 &\hbox{if}&\,\,\, p+q\neq n\\[10pt] \dim\mathcal{H}^{p,q}_{\overline{\del},2}=\infty&\hbox{if}&\,\,\, p+q= n. \end{array} \right. 
$$ In \cite{O}, Ohsawa proved that the middle $L^2$ $\overline{\del}$-cohomology of a domain in a complex manifold, admitting a non-degenerate regular boundary point and whose defining function satisfies suitable assumptions, is infinite dimensional. Later, Gromov \cite{G} introduced the notion of K\"ahler hyperbolicity, showing that if $X$ is a complete simply-connected K\"ahler manifold whose K\"ahler form $\omega$ is $d$-bounded, admitting a uniform discrete subgroup of isometries, then $$\left\{ \begin{array}{lll} \mathcal{H}^{p,q}_{\overline{\del},2}=\{0\} &\hbox{if}&\,\,\, p+q\neq n\\[10pt] \mathcal{H}^{p,q}_{\overline{\del},2}\neq\{0\}&\hbox{if}&\,\,\, p+q= n. \end{array} \right. $$ In the present paper we are interested in studying the space of $L^2$ $(p,q)$-harmonic forms with respect to the Bott-Chern Laplacian on complete Hermitian manifolds. More precisely, we consider a {\em $d$-bounded Stein manifold} $M$, i.e., a complex $n$-dimensional manifold $M$ admitting a smooth strictly plurisubharmonic exhaustion $\rho$ and endowed with the K\"ahler metric whose fundamental form is $\omega=i\partial\overline{\del}\rho$, such that $i\overline{\del}\rho$ has finite $L^\infty$ norm; examples of such manifolds are bounded strictly pseudoconvex domains in $\C^n$ with smooth boundary, endowed with the Bergman metric (see \cite{DO}). Denoting by $\mathcal{H}^{p,q}_{BC,2}$ the space of $W^{1,2}$ Bott-Chern harmonic $(p,q)$-forms, where $W^{1,2}$ denotes the Sobolev space, we prove the following vanishing result\vskip.2truecm\noindent {\bf Theorem} (see Theorem \ref{GromovBC}) {\em Let $M$ be a $d$-bounded Stein manifold of complex dimension $n$. Then $$\mathcal{H}^{p,q}_{BC,2}=\{0\},\quad \hbox{for}\,\, p+q\neq n. $$ } \vskip.2truecm\noindent The paper is organized as follows: in Section \ref{preliminaries} we fix notation and recall some well-known results in K\"ahler geometry.
In Section \ref{cut-off}, adapting an argument by Demailly \cite{DE}, we construct cut-off functions on a $d$-bounded Stein manifold, proving estimates on their second-order derivatives. Section \ref{pezzi} is mainly devoted to proving that a smooth $W^{1,2}$ $(p,q)$-form $\varphi$ satisfies $\tilde{\Delta}^g_{BC}\varphi=0$ if and only if $\partial\varphi=0,\quad\overline{\del}\varphi=0,\quad \overline{\del}^*\partial^*\varphi=0$ (see Theorem \ref{teo:pezzi}). Basic tools in the proof of this theorem are the estimates on the second-order derivatives of the cut-off functions, ensured by the $d$-bounded Stein assumption (see Lemma \ref{estimates-derivatives}). As a corollary, we also derive (see Theorem \ref{teo:harmonic}) that \begin{equation*} \mathcal{H}^{p,q}_{BC,2}\subset\mathcal{H}^{p,q}_{\overline{\del},2}=\mathcal{H}^{p,q}_{\partial,2}=\mathcal{H}^{p,q}_{d,2}, \end{equation*} where the last two sets are the spaces of $L^2$-harmonic forms with respect to the $\partial$-Laplacian $\Delta^g_{\partial}$ and to the Hodge-de Rham Laplacian $\Delta^g_{d}$. Then, combining Theorem \ref{teo:harmonic} with the results by Gromov, we obtain the proof of the vanishing Theorem \ref{GromovBC}. In \cite{G}, Gromov gave an $L^2$ Hodge decomposition theorem for complete Riemannian manifolds (see also \cite[Chap.VIII, Thm.3.2]{DE}). It seems harder to establish a link between $W^{1,2}$ Bott-Chern harmonic forms and $L^2$ reduced cohomology. For other results on $L^2$ cohomological decomposition in the complete almost Hermitian setting see \cite{HT}. By elliptic regularity, it suffices for our purposes to consider only $L^2$ forms that are also smooth. \medskip \noindent{\sl Acknowledgments.} We are grateful to Professor Boyong Chen for useful suggestions and remarks improving the presentation of the results. \section{Preliminaries}\label{preliminaries} Let $M$ be an $n$-dimensional complex manifold.
Denote by $\Omega^r(M;\C)$, respectively $\Omega^{p,q}(M)$, the space of smooth complex $r$-forms, respectively smooth $(p,q)$-forms, on $M$. Let $g$ be a Hermitian metric on $M$; denote by $\omega$ the fundamental form of $g$ and by $\hbox{Vol}_g:=\frac{\omega^n}{n!}$ the standard volume form. Let $\langle,\rangle$ be the pointwise Hermitian inner product induced by $g$ on the space of $(p,q)$-forms. Given any $\varphi\in\Omega^r(M;\C)$, set $$\vert\varphi(x)\vert^2_g=\langle\varphi,\varphi\rangle(x)$$ and \begin{gather*} \lVert \varphi\rVert^2_{L^2}:=\int_M\vert\varphi\vert^2_g\hbox{Vol}_g,\\ \lVert \varphi\rVert^2_{W^{1,2}}:=\int_M(\vert\varphi\vert^2_g+\vert\nabla\varphi\vert^2_g)\hbox{Vol}_g, \end{gather*} where $\nabla$ is the Levi-Civita connection of $(M,g)$ and $\vert\nabla\varphi\vert_g$ is the pointwise Hermitian norm of the covariant derivative of the $(p,q)$-form $\varphi$, induced by $g$ on the space of complex covariant tensors on $M$. Define $$ L^2(M):=\left\{\varphi\in\Omega^r(M;\C)\,\,\,\vert\,\,\, 0\le r\le 2n,\ \lVert \varphi\rVert_{L^2}<\infty \right\} $$ and $$ W^{1,2}(M):=\left\{\varphi\in\Omega^r(M;\C)\,\,\,\vert\,\,\, 0\le r\le 2n,\ \lVert \varphi\rVert_{W^{1,2}}<\infty \right\}. $$ For any given $\varphi\in\Omega^r(M;\C)$, we also set $$ \lVert \varphi\rVert_{L^\infty}:=\sup_{x\in M}\vert\varphi(x)\vert_g $$ and we call $\varphi$ {\em bounded} if $ \lVert \varphi\rVert_{L^\infty}<\infty$. Furthermore, if $\varphi=d\eta$, then $\varphi$ is said to be {\em $d$-bounded} if $\eta$ is bounded.
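For instance, the following standard one-dimensional computation, included only as an illustration, exhibits a $d$-bounded form. On the unit disc $\{z\in\C\,\,\,\vert\,\,\, |z|<1\}$, consider the strictly plurisubharmonic exhaustion $\rho(z)=-\log(1-|z|^2)$ and the associated K\"ahler form $$ \omega=i\partial\overline{\del}\rho=\frac{i\,dz\wedge d\bar{z}}{(1-|z|^2)^2}=d\eta,\qquad \eta=i\overline{\del}\rho=\frac{i\,z\,d\bar{z}}{1-|z|^2}. $$ Since $g^{\bar{1}1}=(1-|z|^2)^2$, one computes $$ \vert\eta\vert^2_g=(1-|z|^2)^2\,\frac{|z|^2}{(1-|z|^2)^2}=|z|^2<1, $$ so $\eta$ is bounded and $\omega$ is $d$-bounded; this is a special case of the $d$-boundedness of the Bergman metric on bounded strictly pseudoconvex domains (see \cite{DO}).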
For any $\varphi,\psi\in\Omega_c^{p,q}(M)$, denote by $\llangle,\rrangle$ the $L^2$-Hermitian product defined as $$ \llangle\varphi,\psi\rrangle =\int_M\langle\varphi,\psi\rangle \hbox{Vol}_g. $$ Denoting by $*:\Omega^{p,q}(M)\to \Omega^{n-p,n-q}(M)$ the complex anti-linear Hodge operator associated with $g$, the {\em Bott-Chern Laplacian} and {\em Aeppli Laplacian} $\tilde\Delta_{BC}^g$ and $ \tilde\Delta_{A}^g$ are the fourth-order elliptic self-adjoint differential operators defined respectively as (see \cite[p.8]{S}) $$ \tilde\Delta_{BC}^g \;:=\; \partial\overline{\del}\delbar^*\partial^*+ \overline{\del}^*\partial^*\partial\overline{\del}+\overline{\del}^*\partial\del^*\overline{\del}+ \partial^*\overline{\del}\delbar^*\partial+\overline{\del}^*\overline{\del}+\partial^*\partial $$ and $$ \tilde\Delta_{A}^g \;:=\; \partial\del^*+\overline{\del}\delbar^*+ \overline{\del}^*\partial^*\partial\overline{\del}+\partial\overline{\del}\delbar^*\partial^*+ \partial\overline{\del}^*\overline{\del}\partial^*+\overline{\del}\partial^*\partial\overline{\del}^*\,, $$ where, as usual, $$ \partial^*=-*\partial\, *\,,\qquad \overline{\del}^*=-*\overline{\del}\, *. $$ \begin{rem}\label{bc-kh-id}{If $g$ is a K\"ahler metric on $M$, then, as a consequence of the K\"ahler identities (see \cite[p.120]{H} and \cite[Prop.2.4]{S}), it holds that $$ \tilde\Delta_{BC}^g=\Delta^g_{\bar\partial}\Delta^g_{\bar\partial}+\overline{\del}^*\overline{\del}+\partial^*\partial,\quad \tilde\Delta_{A}^g=\Delta^g_{\bar\partial}\Delta^g_{\bar\partial}+\partial\del^*+\overline{\del}\delbar^*, $$ $\Delta^g_{\bar\partial}$ being the Dolbeault Laplacian on $(M,g)$. } \end{rem} Finally, we define the space of $W^{1,2}$ Bott-Chern harmonic forms by setting $$ \mathcal{H}^{p,q}_{BC,2}:=\left\{\varphi\in\Omega^{p,q}(M)\,\,\,\vert\,\,\,\tilde\Delta_{BC}^g\varphi=0,\ \varphi\in W^{1,2}(M)\right\}. $$ In the sequel we will need a local expression for the operator $\overline{\del}^*$.
To this purpose, let $(z^1,\ldots, z^n)$ be local holomorphic coordinates on $M$, let $$ g=\sum_{\alpha,\beta}g_{\alpha\bar\beta}dz^\alpha\otimes d\bar{z}^\beta $$ be the local expression of the Hermitian metric $g$, and let $(g^{\bar\alpha\beta})$ denote the inverse matrix of $(g_{\alpha\bar\beta})$. Given $\psi\in\Omega^{p,q}(M)$, for $$ A_p=(\alpha_1,\ldots,\alpha_p),\qquad B_q=(\beta_1,\ldots,\beta_q)$$ multiindices of length $p$, $q$ respectively, with $\alpha_1<\cdots <\alpha_p$ and $\beta_1<\cdots <\beta_q$, denote by $$ \psi=\sum_{A_p, B_q}\psi_{A_p\bar{B}_q}dz^{A_p}\wedge d\bar{z}^{B_q} $$ the local expression of $\psi$ and set $$ \psi^{\bar{A}_pB_q}=\sum_{\Gamma_p,\Lambda_q}g^{\bar{\alpha}_1\gamma_1}\cdots g^{\bar{\alpha}_p\gamma_p} g^{\bar{\lambda}_1\beta_1}\cdots g^{\bar{\lambda}_q\beta_q}\psi_{\gamma_1\ldots \gamma_p\bar{\lambda}_1\ldots \bar{\lambda}_q}. $$ Then, if $\varphi\in\Omega^{p,q}(M)$, the pointwise Hermitian inner product $\langle,\rangle$ on $\Omega^{p,q}(M)$ is locally given by $$ \langle \varphi,\psi\rangle=\sum_{A_p, B_q}\varphi_{A_p\bar{B}_q}\overline{\psi^{\bar{A}_pB_q}}. $$ The local expression of the pointwise Hermitian inner product induced by $g$ on the space of complex covariant tensors on $M$ is similar. According to \cite[Prop.2.3]{MK}, we recall the local formula for $\overline{\del}^*$: for any given $\psi\in\Omega^{p,q+1}(M)$, it is $$ (\overline{\del}^*\psi)^{\bar{A}_pB_q}=-\sum_{\gamma=1}^n \left(\frac{\partial}{\partial z^\gamma}+\frac{\partial\log \det(g_{\alpha\bar\beta})}{\partial z^\gamma}\right)\psi^{\gamma\bar{A}_pB_q}.
$$ In the special case that $g$ is a K\"ahler metric on $M$, as a consequence of the last formula, for every fixed $x_0\in M$, denoting by $(z^1,\ldots, z^n)$ local normal holomorphic coordinates at $x_0$, we obtain \begin{equation}\label{delbar-normal} (\overline{\del}^*\psi)_{\alpha_1\ldots\alpha_p\bar{\beta}_1\ldots\bar{\beta}_q}(x_0)=-\sum_{\gamma=1}^n \frac{\partial \psi_{\bar{\gamma}\alpha_1\ldots\alpha_p\bar{\beta}_1\ldots\bar{\beta}_q}}{\partial z^\gamma}(x_0). \end{equation} Finally, for any given $\varphi\in\Omega^{p,q}(M)$, still using local normal holomorphic coordinates at $x_0$, we have \begin{equation}\label{nabla-normal} |\nabla\varphi|^2_g(x_0)=2\sum_{A_p, B_q}\sum_{\gamma=1}^n \biggl(\biggl|\de{\varphi_{A_p\bar{B}_q}}{z^\gamma}\biggr|^2+\biggl|\de{\varphi_{A_p\bar{B}_q}}{\bar{z}^\gamma}\biggr|^2\biggr)(x_0). \end{equation} \section{Construction of cut-off functions in $d$-bounded Stein manifolds} \label{cut-off} Let $M$ be a Stein manifold and let $\rho$ be a smooth strictly plurisubharmonic exhaustion function. Denote by $\omega=i\partial\overline{\del}\rho$ the fundamental form of the associated K\"ahler metric $g$. We say that $M$ is {\em $d$-bounded} if the form $\eta=i\overline{\del}\rho$, which satisfies $\omega=d\eta$, is bounded; in particular, $\omega$ is then $d$-bounded. In the following, $\rho$, $\omega$, $g$, $\eta$ are considered fixed. We remark that any $d$-bounded Stein manifold is complete; see \cite[Chap.VIII, Lemma 2.4]{DE}. Examples of $d$-bounded Stein manifolds are bounded strictly pseudoconvex domains in $\C^n$ with smooth boundary, endowed with the Bergman metric; see \cite[Prop.3.4]{DO}. Now we prove the existence of cut-off functions with specific bounds on their second-order derivatives on a $d$-bounded Stein manifold. We need the following known lemma. \begin{lemma} Let $a,b\in\R$, $a<b$.
Then there exists a $\mathcal{C}^\infty$ function $\psi:\R\rightarrow [0,1]\subset\R$ such that the following properties hold: \begin{itemize} \item $\psi(t)=1 \ \iff\ t\le a$; \item $\psi(t)=0 \ \iff\ t\ge b$; \item $\exists C\in\R$ such that $|\psi'(t)|,|\psi''(t)|\le C\psi(t)^{\frac{1}{2}}\ \forall t\in\R$. \end{itemize} \end{lemma} \begin{proof} Let us define $\phi:\R\rightarrow\R$, a $\mathcal{C}^\infty$ function such that \begin{equation*} \phi(t)= \begin{cases} \exp(-\frac{1}{t^2}) & \text{ if } t>0\\ 0 & \text{ if } t\le 0. \end{cases} \end{equation*} Then we define \begin{equation*} \psi(t)=\frac{\phi(b-t)}{\phi(b-t)+\phi(t-a)}. \end{equation*} Note that $\psi(t)=1$ iff $t\le a$ and $\psi(t)=0$ iff $t\ge b$. After some calculations we obtain \begin{equation*} \begin{split} \psi'(t) & = \frac{-2\frac{\phi(b-t)}{(b-t)^3}(\phi(b-t)+\phi(t-a))- \phi(b-t)\biggl(-2\frac{\phi(b-t)}{(b-t)^3}+2\frac{\phi(t-a)}{(t-a)^3}\biggr)}{(\phi(b-t)+\phi(t-a))^2}\\ & = -2\frac{\phi(b-t)\phi(t-a)}{(\phi(b-t)+\phi(t-a))^2}\biggl(\frac{1}{(b-t)^3}+\frac{1}{(t-a)^3}\biggr)\\ & = -2\psi(t)^\frac{1}{2}\frac{\phi(b-t)^\frac{1}{2}\phi(t-a)}{(\phi(b-t)+\phi(t-a))^\frac{3}{2}} \biggl(\frac{1}{(b-t)^3}+\frac{1}{(t-a)^3}\biggr). \end{split} \end{equation*} This implies that $\exists C\in\R$ such that $|\psi'(t)|\le C\psi(t)^{\frac{1}{2}}\ \forall t\in\R$. The calculations of the estimate on $\psi''$ are analogous. \end{proof} The following lemma is inspired by \cite[Chap.VIII, Lemma 2.4]{DE}. \begin{lemma}\label{estimates-derivatives} Let $M$ be a d-bounded Stein manifold of complex dimension $n$. 
Then there exists a sequence $\{K_\nu\}_{\nu\in\N}$ of compact subsets of $M$ and a sequence $a_\nu : M \rightarrow [0,1]\subset\R$, $\nu \in \N$, of $\mathcal{C}^\infty$ functions with compact support, called cut-off functions, such that the following properties hold: \begin{itemize} \item $\bigcup_{\nu\in\N} K_\nu=M$ and $K_\nu\subset \mathring{K}_{\nu+1}$; \item $\forall\nu \in \N$ $a_\nu=1$ in a neighbourhood of $K_\nu$ and $\supp{a_\nu}\subset \mathring{K}_{\nu+1}$; \item $\exists C\in\R$ such that $|\partial a_\nu(x)|_g,|\overline{\del} a_\nu(x)|_g,|\partial\overline{\del} a_\nu(x)|_g\le 2^{-\nu}C a_\nu(x)^\frac{1}{2}$ $\forall x\in M$. \end{itemize} \end{lemma} \begin{proof} We define \begin{equation*} a_\nu(x)=\psi(2^{-\nu}\rho(x)) \ \forall x\in M \text{ and } K_\nu=\overline{\{x\in M\ |\ \rho(x)< 2^{\nu}\}}, \end{equation*} where $\psi$ is the function of the previous lemma, with $a=1.1$ and $b=1.9$. Let us check that the claimed properties hold. The subsets $K_\nu$ are compact since $\rho$ is an exhaustion function. If $x\in M$, then $\exists\nu\in\N$ such that $\rho(x)<2^\nu$, so $x\in K_\nu$; thus $\bigcup_{\nu\in\N} K_\nu=M$. The inclusions $K_\nu\subset \mathring{K}_{\nu+1}$ hold by the construction of $K_\nu$: in fact, if $x\in K_\nu$, then $\rho(x)\le 2^\nu$ by continuity and $x\in\{y\in M\ |\ \rho(y)<2^\nu\cdot 1.5\}\subset\mathring{K}_{\nu+1}$. The functions $a_\nu$ are $\mathcal{C}^\infty$ because $\psi$ and $\rho$ are $\mathcal{C}^\infty$. The function $\psi$ takes values in the interval $[0,1]$, hence so does $a_\nu$. On a sufficiently small neighbourhood of $K_\nu$ we have $2^{-\nu}\rho<1.1$, so $a_\nu=1$ on that neighbourhood.
In order to prove $\supp{a_\nu}\subset \mathring{K}_{\nu+1}$, let us take $\tilde{x}\in\supp{a_\nu}$ and a sequence $\{x_k\}_{k\in\N}$ of points in $M$ such that $a_\nu(x_k)>0\ \forall k\in \N$ and $x_k\rightarrow\tilde{x}$ as $k\rightarrow\infty$. By the construction of $\psi$, we have that, $\forall x\in M$, $a_\nu(x)>0$ if and only if $2^{-\nu}\rho(x)< 1.9$. Therefore, by the continuity of $\rho$, $2^{-\nu}\rho(\tilde{x})\le 1.9$, so that $\rho(\tilde{x})< 2^{\nu}\cdot 1.95$ and $\tilde{x}\in \mathring{K}_{\nu+1}$. Since $\supp{a_\nu}$ is a closed set contained in a compact set, it is compact. Finally, we have to prove the estimates on the differentials of $a_\nu$. Let $x\in M$; then \begin{equation*} \partial a_\nu(x)= 2^{-\nu}\psi'(2^{-\nu}\rho(x)) \sum\limits_{i=1}^n \de{\rho}{z^i}(x)dz^i, \end{equation*} and \begin{equation*} \begin{split} |\partial a_\nu(x)|_g & = 2^{-\nu}|\psi'(2^{-\nu}\rho(x))| |\partial\rho(x)|_g \\ & \le 2^{-\nu}C(\psi(2^{-\nu}\rho(x)))^{\frac{1}{2}} ||\partial\rho||_{L^\infty} \\ & \le 2^{-\nu}Ca_\nu(x)^{\frac{1}{2}}, \end{split} \end{equation*} where the constant $C$ is chosen large enough and may change from line to line. In the last step we used the hypothesis that $\omega$ is $d$-bounded and the fact that $\rho$ is real, so $\de{\rho}{\bar{z}^i}(x)=\overline{\de{\rho}{z^i}}(x)$ and $|\partial\rho(x)|_g=|\overline{\del}\rho(x)|_g$. By the same calculations, we also obtain the estimate on $|\overline{\del} a_\nu(x)|_g$.
Moreover, \begin{equation*} \begin{split} \overline{\del}\partial a_\nu(x) & = 2^{-\nu} \sum\limits_{i,j=1}^n \biggl(2^{-\nu}\psi''(2^{-\nu}\rho(x)) \de{\rho}{\bar{z}^j}(x) \de{\rho}{z^i}(x)+\\ & + \psi'(2^{-\nu}\rho(x)) \de{^2\rho}{\bar{z}^j \partial z^i}(x)\biggr) d\bar{z}^j\land dz^i, \end{split} \end{equation*} and \begin{equation*} \begin{split} |\overline{\del}\partial a_\nu(x)|_g & \le 2^{-\nu} \biggl(2^{-\nu}|\psi''(2^{-\nu}\rho(x))||\overline{\del}\rho(x)|_g|\partial\rho(x)|_g+\\ & + |\psi'(2^{-\nu}\rho(x))| |\sum\limits_{i,j=1}^n \de{^2\rho}{\bar{z}^j \partial z^i}(x) d\bar{z}^j\land dz^i|_g \biggr)\\ & \le 2^{-\nu}C(\psi(2^{-\nu}\rho(x)))^{\frac{1}{2}} \biggl(2^{-\nu}||\overline{\del}\rho||_{L^\infty}||\partial\rho||_{L^\infty}+|\omega(x)|_g\biggr)\\ & \le 2^{-\nu}Ca_\nu(x)^{\frac{1}{2}}. \end{split} \end{equation*} In fact, $\omega$ is $d$-bounded as before and $|\omega(x)|_g$ is constant, since $\omega$ is the fundamental form of the metric $g$. The proof is complete. \end{proof} \section{Vanishing of $L^2$ Bott-Chern harmonic forms}\label{pezzi} \label{vanishing} Our main theorem states that the following property, true in the compact case, also holds in our non-compact setting. \begin{theorem} \label{teo:pezzi} Let $M$ be a d-bounded Stein manifold of complex dimension $n$. Let $\varphi\in \Omega^{p,q}(M)\cap W^{1,2}(M)$. If $\tilde{\Delta}_{BC}^g\varphi=0$, then \begin{equation*} \partial\varphi=0,\quad \overline{\del}\varphi=0,\quad \overline{\del}^*\partial^*\varphi=0. \end{equation*} \end{theorem} The following lemma will be useful for the proof of Theorem \ref{teo:pezzi}. \begin{lemma}\label{estimates} Let $M$ be a K\"ahler manifold of complex dimension $n$ and denote its metric by $g$.
If $\varphi\in\Omega^{p,q}(M)$ and $\{a_\nu\}_\nu$ are $\mathcal{C}^\infty$ functions on $M$, then $\exists C>0$ such that $\forall\nu\in\N$ \begin{equation} \begin{split} \label{lemma:delbar} &|\overline{\del}^*(\overline{\del} a_\nu\land*\varphi)|_g\le C\bigl(|\partial\overline{\del} a_\nu|_g|\varphi|_g+|\overline{\del} a_\nu|_g|\nabla\varphi|_g\bigr),\\ &|\overline{\del}^*(\overline{\del} a_\nu\land\varphi)|_g \le C\bigl(|\partial\overline{\del} a_\nu|_g|\varphi|_g+|\overline{\del} a_\nu|_g|\nabla\varphi|_g\bigr). \end{split} \end{equation} \end{lemma} \begin{proof} The pointwise Hermitian norm on forms is invariant under change of coordinates, so we can prove the inequalities locally with a uniform constant $C$. For every fixed $x_0\in M$, denoting by $(z^1,\ldots, z^n)$ local normal holomorphic coordinates at $x_0$, we have \begin{gather*} \overline{\del} a_\nu=\sum_\beta \de{a_\nu}{\bar{z}^\beta}d\bar{z}^\beta, \\ \varphi=\sum_{A_p,B_q} \varphi_{A_p\bar{B}_q} dz^{A_p}\land d\bar{z}^{B_q}, \\ \overline{\del} a_\nu\land\varphi =\sum_{A_p,B_q,\beta} \de{a_\nu}{\bar{z}^\beta} \varphi_{A_p\bar{B}_q} d\bar{z}^\beta\land dz^{A_p}\land d\bar{z}^{B_q} \end{gather*} We can write \begin{equation*} \overline{\del} a_\nu\land\varphi= \frac{1}{p!(q+1)!}\sum_{\substack{\alpha_1,\dots,\alpha_p,\\ \beta_0,\beta_1,\dots,\beta_q}} (\overline{\del} a_\nu\land\varphi)_{\alpha_1\dots\alpha_p\bar{\beta}_0\bar{\beta}_1\dots\bar{\beta}_q} dz^{\alpha_1\dots\alpha_p}\land d\bar{z}^{\beta_0\beta_1\dots\beta_q} \end{equation*} where the coefficients $(\overline{\del} a_\nu\land\varphi)_{\alpha_1\dots\alpha_p\bar{\beta}_0\bar{\beta}_1\dots\bar{\beta}_q}$ are antisymmetric in the indices $\alpha_1,\dots,\alpha_p,\bar{\beta}_0,\bar{\beta}_1,\dots,\bar{\beta}_q$, so \begin{equation*} \begin{split} (\overline{\del} a_\nu\land\varphi)_{\bar{\beta}_0\alpha_1\dots\alpha_p\bar{\beta}_1\dots\bar{\beta}_q} & = (-1)^p (\overline{\del} 
a_\nu\land\varphi)_{\alpha_1\dots\alpha_p\bar{\beta}_0\bar{\beta}_1\dots\bar{\beta}_q} \\ & = \sum_{j=0}^q (-1)^{qj} \de{a_\nu}{\bar{z}^{\beta_j}} \varphi_{\alpha_1\dots\alpha_p\bar{\beta}_{j+1}\dots\bar{\beta}_{j+q}}, \end{split} \end{equation*} where we set $\beta_{j+q}:=\beta_{j-1}$ for $j=1,\dots,q$. Now we apply (\ref{delbar-normal}) and obtain \begin{equation*} \begin{split} (\overline{\del}^*(\overline{\del} a_\nu\land\varphi))&_{\alpha_1\ldots\alpha_p\bar{\beta}_1\ldots\bar{\beta}_q}(x_0) = \\ & = -\sum_{\beta_0=1}^n \de{}{z^{\beta_0}} \bigl( (\overline{\del} a_\nu\land\varphi)_{\bar{\beta}_0\alpha_1\ldots\alpha_p\bar{\beta}_1\ldots\bar{\beta}_q}\bigr)(x_0)\\ & = -\sum_{\beta_0=1}^n \sum_{j=0}^q \de{}{z^{\beta_0}} \bigl( (-1)^{qj} \de{a_\nu}{\bar{z}^{\beta_j}} \varphi_{\alpha_1\dots\alpha_p\bar{\beta}_{j+1}\dots\bar{\beta}_{j+q}} \bigr) (x_0) \\ & = -\sum_{\beta_0=1}^n \sum_{j=0}^q (-1)^{qj} \bigl( \de{^2a_\nu}{z^{\beta_0}\partial\bar{z}^{\beta_j}}\varphi_{\alpha_1\dots\alpha_p\bar{\beta}_{j+1}\dots\bar{\beta}_{j+q}}(x_0)+\\ & \quad\quad\quad\quad\quad\quad\quad\quad + \de{a_\nu}{\bar{z}^{\beta_j}}\de{\varphi_{\alpha_1\dots\alpha_p\bar{\beta}_{j+1}\dots\bar{\beta}_{j+q}}}{z^{\beta_0}}(x_0) \bigr) . \end{split} \end{equation*} This yields \begin{equation*} |\overline{\del}^*(\overline{\del} a_\nu\land\varphi)|_g(x_0) \le C\bigl(|\partial\overline{\del} a_\nu|_g(x_0)|\varphi|_g(x_0)+|\overline{\del} a_\nu|_g(x_0)|\nabla\varphi|_g(x_0)\bigr), \end{equation*} where $C$ depends only on $n,p,q$. To prove this, we carry out some calculations.
We set \begin{equation*} \gamma_{\beta_0j}:=\de{^2a_\nu}{z^{\beta_0}\partial\bar{z}^{\beta_j}}\varphi_{\alpha_1\dots\alpha_p\bar{\beta}_{j+1}\dots\bar{\beta}_{j+q}} \text{ and } \lambda_{\beta_0j}:=\de{a_\nu}{\bar{z}^{\beta_j}}\de{\varphi_{\alpha_1\dots\alpha_p\bar{\beta}_{j+1}\dots\bar{\beta}_{j+q}}}{z^{\beta_0}}, \end{equation*} with $\alpha_1,\dots,\alpha_p,\beta_1,\dots,\beta_q=1,\dots,n$, $\beta_0=1,\dots,n$ and $j=0,\dots,q$. So we have, using (\ref{nabla-normal}), \begin{equation*} \begin{split} |\overline{\del}^*(\overline{\del} a_\nu\land\varphi)|_g^2(x_0) & = \frac{1}{p!q!}\sum_{\substack{\alpha_1,\dots,\alpha_p,\\ \beta_1,\dots,\beta_q}} \sum_{\beta_0,\beta'_0=1}^n \sum_{j,j'=0}^q (\gamma_{\beta_0j}+\lambda_{\beta_0j})(\bar{\gamma}_{\beta'_0j'}+\bar{\lambda}_{\beta'_0j'})(x_0) \\ & = \frac{1}{p!q!}\sum (\gamma_{\beta_0j}\bar{\gamma}_{\beta'_0j'}+\gamma_{\beta_0j}\bar{\lambda}_{\beta'_0j'}+\lambda_{\beta_0j} \bar{\gamma}_{\beta'_0j'}+\lambda_{\beta_0j}\bar{\lambda}_{\beta'_0j'})(x_0) \\ & = \frac{1}{p!q!}\sum (\real(\gamma_{\beta_0j}\bar{\gamma}_{\beta'_0j'})+2\real(\gamma_{\beta_0j}\bar{\lambda}_{\beta'_0j'})+ \real(\lambda_{\beta_0j}\bar{\lambda}_{\beta'_0j'}))(x_0) \\ & \le \frac{2}{p!q!}\sum (|\gamma_{\beta_0j}|^2+|\lambda_{\beta_0j}|^2)(x_0) \\ & \le C \bigl(|\partial\overline{\del} a_\nu|_g(x_0)|\varphi|_g(x_0)+|\overline{\del} a_\nu|_g(x_0)|\nabla\varphi|_g(x_0)\bigr)^2. 
\end{split} \end{equation*} To prove the other inequality in (\ref{lemma:delbar}), we want to estimate $|\overline{\del}^*(\overline{\del} a_\nu\land*\varphi)|_g(x_0)$; the calculations are analogous, except that we use the following formula (see \cite[p.94]{MK}) for the Hodge star operator: \begin{equation*} *\varphi=(i)^n(-1)^{\frac{1}{2}n(n-1)+qn}\sum_{A_p,B_q} \sgn A\sgn B \det(g_{h\bar{k}}) \bar{\varphi}^{\bar{B}_qA_p} dz^{A_{n-p}}\land d\bar{z}^{B_{n-q}}, \end{equation*} where $A$ and $B$ are, respectively, the permutations sending $(1,\dots,n)$ to $(A_p,A_{n-p})$ and to $(B_q,B_{n-q})$. \end{proof} \begin{proof}[Proof of Theorem \ref{teo:pezzi}] First of all, by Remark \ref{bc-kh-id}, we have $$\tilde\Delta_{BC}^g=\Delta^g_{\bar\partial}\Delta^g_{\bar\partial}+\overline{\del}^*\overline{\del}+\partial^*\partial. $$ Thanks to the cut-off functions of Lemma \ref{estimates-derivatives}, we can now integrate by parts, using Stokes' theorem: \begin{equation*} \begin{split} 0 & = \llangle\tilde{\Delta}^g_{BC}\varphi,a_\nu\varphi\rrangle \\ & =\llangle\overline{\del}\delbar^*\overline{\del}\delbar^*\varphi,a_\nu\varphi\rrangle+\llangle\overline{\del}^*\overline{\del}\delbar^*\overline{\del}\varphi,a_\nu\varphi\rrangle+ \llangle\overline{\del}^*\overline{\del}\varphi,a_\nu\varphi\rrangle+\llangle\partial^*\partial\varphi,a_\nu\varphi\rrangle \\ & =\llangle\overline{\del}\delbar^*\varphi,\overline{\del}\delbar^*(a_\nu\varphi)\rrangle+\llangle\overline{\del}^*\overline{\del}\varphi,\overline{\del}^*\overline{\del}(a_\nu\varphi)\rrangle+ \llangle\overline{\del}\varphi,\overline{\del}(a_\nu\varphi)\rrangle+\llangle\partial\varphi,\partial(a_\nu\varphi)\rrangle. \end{split} \end{equation*} Now we compute every differential appearing in the right-hand sides of the inner products.
\begin{equation*} \begin{split} \overline{\del}\delbar^*(a_\nu\varphi) & = -\overline{\del}*(\overline{\del} a_\nu\land*\varphi+a_\nu\overline{\del}*\varphi) \\ & =(-1)^{p+q}*\overline{\del}^*(\overline{\del} a_\nu\land*\varphi) +\overline{\del} a_\nu\land \overline{\del}^*\varphi+a_\nu\overline{\del}\delbar^*\varphi \\ \overline{\del}^*\overline{\del}(a_\nu\varphi) & = \overline{\del}^*(\overline{\del} a_\nu\land\varphi+a_\nu\overline{\del}\varphi) \\ & =\overline{\del}^*(\overline{\del} a_\nu\land\varphi) -*(\overline{\del} a_\nu\land *\overline{\del}\varphi)+a_\nu\overline{\del}^*\overline{\del}\varphi \\ \overline{\del}(a_\nu\varphi) & =\overline{\del} a_\nu\land\varphi+a_\nu\overline{\del}\varphi \\ \partial(a_\nu\varphi) & =\partial a_\nu\land\varphi+a_\nu\partial\varphi \end{split} \end{equation*} Therefore, \begin{equation*} 0 = \llangle\tilde{\Delta}^g_{BC}\varphi,a_\nu\varphi\rrangle= I_1(\nu)+I_2(\nu), \end{equation*} where \begin{equation*} I_1(\nu)= \int_M a_\nu\bigl(|\overline{\del}\delbar^*\varphi|_g^2+|\overline{\del}^*\overline{\del}\varphi|_g^2+ |\overline{\del}\varphi|_g^2+|\partial\varphi|_g^2\bigr)\,\vol, \end{equation*} and \begin{equation*} \begin{split} I_2(\nu) & = \int_M \bigl( \langle\overline{\del}\delbar^*\varphi,(-1)^{p+q}*\overline{\del}^*(\overline{\del} a_\nu\land*\varphi) +\overline{\del} a_\nu\land \overline{\del}^*\varphi\rangle+\\ & + \langle\overline{\del}^*\overline{\del}\varphi,\overline{\del}^*(\overline{\del} a_\nu\land\varphi) -*(\overline{\del} a_\nu\land *\overline{\del}\varphi)\rangle+\\ & + \langle\overline{\del}\varphi,\overline{\del} a_\nu\land\varphi\rangle+\langle\partial\varphi,\partial a_\nu\land\varphi\rangle\bigr)\,\vol. 
\end{split} \end{equation*} We have $I_1(\nu)=|I_2(\nu)|$ and, by the monotone convergence theorem, as $\nu\rightarrow\infty$, \begin{equation*} I_1(\nu)\rightarrow \int_M \bigl(|\overline{\del}\delbar^*\varphi|_g^2+|\overline{\del}^*\overline{\del}\varphi|_g^2+ |\overline{\del}\varphi|_g^2+|\partial\varphi|_g^2\bigr)\,\vol. \end{equation*} Thus, if we show that $I_1(\nu)\rightarrow 0$ as $\nu\rightarrow\infty$, we have \begin{equation}\label{I1} \partial\varphi=0,\qquad \overline{\del}\varphi=0,\qquad\overline{\del}\delbar^*\varphi=0,\qquad\overline{\del}^*\overline{\del}\varphi=0. \end{equation} Estimating $|I_2(\nu)|$, we obtain \begin{equation*} \begin{split} |I_2(\nu)| & \le \int_M \bigl( |\overline{\del}\delbar^*\varphi|_g(|\overline{\del}^*(\overline{\del} a_\nu\land*\varphi)|_g+|\overline{\del} a_\nu|_g|\overline{\del}^*\varphi|_g)+\\ & +|\overline{\del}^*\overline{\del}\varphi|_g(|\overline{\del}^*(\overline{\del} a_\nu\land\varphi)|_g+ |\overline{\del} a_\nu|_g|\overline{\del}\varphi|_g)+\\ & +|\overline{\del}\varphi|_g|\overline{\del} a_\nu|_g|\varphi|_g+|\partial\varphi|_g|\partial a_\nu|_g|\varphi|_g\bigr)\,\vol. \end{split} \end{equation*} By Lemma \ref{estimates} there exists a constant $C>0$ such that \begin{equation*} \begin{split} &|\overline{\del}^*(\overline{\del} a_\nu\land*\varphi)|_g\le C\bigl(|\partial\overline{\del} a_\nu|_g|\varphi|_g+|\overline{\del} a_\nu|_g|\nabla\varphi|_g\bigr),\\ &|\overline{\del}^*(\overline{\del} a_\nu\land\varphi)|_g \le C\bigl(|\partial\overline{\del} a_\nu|_g|\varphi|_g+|\overline{\del} a_\nu|_g|\nabla\varphi|_g\bigr).
\end{split} \end{equation*} Therefore we have \begin{equation*} \begin{split} |I_2(\nu)| & \le C \int_M \bigl( |\overline{\del}\delbar^*\varphi|_g(|\partial\overline{\del} a_\nu|_g|\varphi|_g+|\overline{\del} a_\nu|_g|\nabla\varphi|_g)+\\ & +|\overline{\del}^*\overline{\del}\varphi|_g(|\partial\overline{\del} a_\nu|_g|\varphi|_g+|\overline{\del} a_\nu|_g|\nabla\varphi|_g)+\\ & +|\overline{\del}\varphi|_g|\overline{\del} a_\nu|_g|\varphi|_g+|\partial\varphi|_g|\partial a_\nu|_g|\varphi|_g\bigr)\,\vol. \end{split} \end{equation*} The estimates on the cut-off functions, i.e., $$|\partial a_\nu(x)|_g,|\overline{\del} a_\nu(x)|_g,|\partial\overline{\del} a_\nu(x)|_g\le 2^{-\nu}C a_\nu(x)^\frac{1}{2},$$ yield \begin{equation*} \begin{split} I_1(\nu)=|I_2(\nu)| & \le 2^{-\nu}C \int_M a_\nu(x)^\frac{1}{2} \bigl( |\overline{\del}\delbar^*\varphi|_g(|\varphi|_g+|\nabla\varphi|_g)+\\ & +|\overline{\del}^*\overline{\del}\varphi|_g(|\varphi|_g+|\nabla\varphi|_g)+\\ & +|\overline{\del}\varphi|_g|\varphi|_g+|\partial\varphi|_g|\varphi|_g\bigr)\,\vol \\ & \le 2^{-\nu}C \int_M a_\nu(x)^\frac{1}{2} \bigl(|\overline{\del}\delbar^*\varphi|_g+|\overline{\del}^*\overline{\del}\varphi|_g+|\overline{\del}\varphi|_g+|\partial\varphi|_g\bigr)\cdot \\ & \cdot \bigl(|\varphi|_g+|\nabla\varphi|_g\bigr)\,\vol \\ & \le 2^{-\nu}C \biggl( \int_M a_\nu \bigl( |\overline{\del}\delbar^*\varphi|_g+|\overline{\del}^*\overline{\del}\varphi|_g+|\overline{\del}\varphi|_g+|\partial\varphi|_g\bigr)^2\,\vol\biggr)^{\frac{1}{2}}\cdot\\ & \cdot \biggl( \int_M \bigl(|\varphi|_g+|\nabla\varphi|_g\bigr)^2\,\vol\biggr)^{\frac{1}{2}} \\ & \le 2^{-\nu}C\bigl(I_1(\nu)\bigr)^{\frac{1}{2}}\cdot \bigl(||\varphi||_{L_2}+||\nabla\varphi||_{L_2}\bigr), \end{split} \end{equation*} and consequently $$ \bigl(I_1(\nu)\bigr)^{\frac{1}{2}}\leq C\,2^{-\nu}. $$ Thus $\varphi$ is $\partial$-closed and $\overline{\del}$-closed. 
This implies $$\tilde{\Delta}_{BC}^g\varphi=\partial\overline{\del}\delbar^*\partial^*\varphi.$$ Now we apply essentially the same argument to this reduced form of the Bott-Chern Laplacian. We have \begin{equation*} 0 = \llangle\tilde{\Delta}^g_{BC}\varphi,a_\nu\varphi\rrangle = \llangle\partial\overline{\del}\delbar^*\partial^*\varphi,a_\nu\varphi\rrangle = \llangle\overline{\del}^*\partial^*\varphi,\overline{\del}^*\partial^*(a_\nu\varphi)\rrangle, \end{equation*} and \begin{equation*} \begin{split} \overline{\del}^*\partial^*(a_\nu\varphi) & = \overline{\del}^*(-*(\partial a_\nu\land*\varphi)+a_\nu\partial^*\varphi) \\ & =(-1)^{p+q-1}*(\overline{\del}\partial a_\nu\land*\varphi) -(-1)^{p+q-1}*(\partial a_\nu\land\overline{\del}*\varphi)+\\ & -*(\overline{\del} a_\nu\land*\partial^*\varphi)+a_\nu\overline{\del}^*\partial^*\varphi, \end{split} \end{equation*} thus \begin{equation*} 0 = \llangle\tilde{\Delta}^g_{BC}\varphi,a_\nu\varphi\rrangle= I'_1(\nu)+I'_2(\nu), \end{equation*} where \begin{equation*} I'_1(\nu)= \int_M a_\nu|\overline{\del}^*\partial^*\varphi|_g^2\,\vol, \end{equation*} and \begin{equation*} \begin{split} |I'_2(\nu)| & \le 2^{-\nu}C \int_M a_\nu(x)^\frac{1}{2} |\overline{\del}^*\partial^*\varphi|_g(|\varphi|_g+|\overline{\del}^*\varphi|_g+|\partial^*\varphi|_g)\,\vol \\ & \le 2^{-\nu}C \bigl(I'_1(\nu)\bigr)^{\frac{1}{2}}\cdot \bigl(||\varphi||_{L^2}+||\nabla\varphi||_{L^2}\bigr). \end{split} \end{equation*} Arguing as before, we also obtain $\overline{\del}^*\partial^*\varphi=0$. This ends the proof. \end{proof} \begin{rem} From Theorem \ref{teo:pezzi} we immediately obtain that $$ \varphi\in \mathcal{H}^{p,q}_{BC,2}\quad\hbox{\rm if and only if}\qquad \varphi\in W^{1,2}(M),\qquad \partial\varphi=0,\qquad \overline{\del}\varphi=0, \qquad \overline{\del}^*\partial^*\varphi=0, $$ which ``extends'' the characterization of the space of Bott-Chern harmonic forms on a compact Hermitian manifold to any $d$-bounded Stein manifold.
\end{rem} As a straightforward consequence of Theorem \ref{teo:pezzi}, we obtain the following \begin{theorem} \label{teo:harmonic} Let $M$ be a $d$-bounded Stein manifold of complex dimension $n$. Then \begin{equation*} \mathcal{H}^{p,q}_{BC,2}\subset\mathcal{H}^{p,q}_{\overline{\del},2}=\mathcal{H}^{p,q}_{\partial,2}=\mathcal{H}^{p,q}_{d,2}, \end{equation*} where the last three sets are the spaces of $L^2$-harmonic $(p,q)$-forms with respect to $\Delta^g_{\overline{\del}}$, $\Delta^g_{\partial}$ and $\Delta^g_{d}$. \end{theorem} \begin{proof} We note that the equalities in the statement hold because $M$ is K\"ahler; in fact, $\Delta^g_d=2\Delta^g_{\partial}=2\Delta^g_{\overline{\del}}$ by the K\"ahler identities. Therefore it is enough to prove $\mathcal{H}^{p,q}_{BC,2}\subset\mathcal{H}^{p,q}_{\overline{\del},2}$. Let $\varphi\in\mathcal{H}^{p,q}_{BC,2}$; from (\ref{I1}) in the proof of Theorem \ref{teo:pezzi} we have $$\overline{\del}\delbar^*\varphi=0,\quad \overline{\del}^*\overline{\del}\varphi=0. $$ Thus $\Delta^g_{\bar\partial}\varphi=0$ and $\varphi\in\mathcal{H}^{p,q}_{\overline{\del},2}$. \end{proof} We are ready to prove the following \begin{theorem}\label{GromovBC} Let $M$ be a $d$-bounded Stein manifold of complex dimension $n$. Then $$\mathcal{H}^{p,q}_{BC,2}=\{0\},\quad \hbox{for}\,\, p+q\neq n. $$ \end{theorem} \begin{proof} By Theorem \ref{teo:harmonic}, it is enough to prove that $\mathcal{H}^{p,q}_{d,2}=\{0\}$ for $p+q\neq n$. This last fact is a consequence of \cite[Thm.1.2.B.]{G}. For the sake of completeness, we recall Gromov's argument. Let us consider the Lefschetz operator $$ L:\Omega^{p,q}(M)\longrightarrow\Omega^{p+1,q+1}(M) $$ defined by $$L\varphi=\omega\land\varphi $$ for every $\varphi\in\Omega^{p,q}(M)$.
By \cite[rem.3.2.7.iii)]{H}, the map \begin{equation*} L^{n-p-q}:\mathcal{H}^{p,q}_{d}\longrightarrow\mathcal{H}^{n-q,n-p}_{d} \end{equation*} is an isomorphism for $p+q\le n$, where $\mathcal{H}^{p,q}_{d}$ denotes the space of $\Delta^g_{d}$-harmonic $(p,q)$-forms. Now set $k=n-p-q$ and consider the form $L^k\varphi$, where $\varphi\in\Omega^{(p,q)}(M)\cap L^2(M)$ is a $d$-closed form. Since $\omega=d\eta$, if $k>0$, then we get \begin{equation*} L^k\varphi=\omega^k\land\varphi=(d\eta)^k\land\varphi=d(\eta\land(d\eta)^{k-1}\land\varphi). \end{equation*} Furthermore, \begin{itemize} \item $\eta\land(d\eta)^{k-1}$ is bounded, since $\eta$ is bounded and $|d\eta|_g$ is constant; \item $\eta\land(d\eta)^{k-1}\land\varphi\in L^2(M)$, since $\varphi\in L^2(M)$. \end{itemize} Moreover, if $\varphi\in\Omega^{(p,q)}(M)\cap L^2(M)$ is a $\Delta^g_d$-harmonic form, then $L^k\varphi$ is also $\Delta^g_d$-harmonic. Thus, in view of the $L^2$ Hodge decomposition theorem (see \cite[Chap.VIII, Thm.3.2]{DE}), we obtain that $L^k\varphi=0$. Now let $\varphi\in\mathcal{H}^{p,q}_{d,2}$. If $p+q<n$, then $k>0$ and $L^k\varphi=0$; therefore $\varphi=0$ since $L^k$ is injective. Conversely, if $p+q>n$, then $*\varphi\in\mathcal{H}^{n-p,n-q}_{d,2}$ and $(n-p)+(n-q)<n$; by the previous argument $*\varphi=0$ and consequently $\varphi=0$. Summing up, we have shown that $\mathcal{H}^{p,q}_{d,2}=\{0\}$ for $p+q\ne n$. This ends the proof. \end{proof} \begin{rem} The Lefschetz argument, in the proof of Theorem \ref{GromovBC}, uses only the K\"ahler and the $d$-bounded assumptions. \end{rem} \begin{rem} By similar computations, Theorem \ref{GromovBC} can also be stated and proved for the Aeppli Laplacian. Indeed, it is sufficient to repeat the proof of Theorem \ref{teo:pezzi} with $\tilde\Delta_{A}^g=\overline{\del}^*\partial^*\partial\overline{\del}+\partial\del^*+\overline{\del}\delbar^*$. \end{rem}
\section{Introduction} Compressive sensing (CS) provides ways to sample and compress signals at the same time, at the cost of relatively long signal reconstruction times~\cite{Candes:2006eq, Donoho:2006ci}. The idea of combining image acquisition and compression immediately drew great attention in application areas such as MRI~\cite{Lustig:2007cu,ravishankar2011mr}, CT~\cite{Choi:2010gd}, hyperspectral imaging~\cite{Zhang:2015jn, Zhang:2015ea}, coded aperture imaging~\cite{Arce:2013jm}, radar imaging~\cite{Potter:2010gr} and radio astronomy~\cite{Wiaux:2009ev}. CS applications have been investigated intensively for the last decade and some systems are now commercialized for practical use, such as low-dose CT and accelerated MR. \subsection{Conventional compressive image recovery} CS is usually modeled as a linear equation: \begin{equation} {\boldsymbol y} = {\boldsymbol A} {\boldsymbol x} + {\boldsymbol \epsilon} \label{eq:lin} \end{equation} where ${\boldsymbol y} \in \mathbb{R}^M$ is a measurement vector, ${\boldsymbol A} \in \mathbb{R}^{M\times N}$ is a sensing matrix with $M \ll N$, ${\boldsymbol x} \in \mathbb{R}^N$ is an unknown signal to reconstruct, and ${\boldsymbol \epsilon} \in \mathbb{R}^M$ is a noise vector. Estimating $\boldsymbol x$ from the undersampled measurements ${\boldsymbol y}$ with $M \ll N$ is a challenging ill-posed inverse problem. Sparsity has been investigated as a prior to regularize this ill-posed problem. CS theory allows the $l_1$ norm to be used in place of the $l_0$ norm for good signal recovery~\cite{Candes:2006eq,Donoho:2006ci}, at the cost of more samples (still significantly fewer than $N$). Minimizing the $l_1$ norm is advantageous for large-scale inverse problems since it is convex, so conventional convex optimization algorithms can be used for signal recovery.
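As a minimal numerical illustration of $l_1$-based recovery for the model (\ref{eq:lin}) (a sketch with arbitrary problem sizes and regularization weight, not taken from the paper), iterative soft thresholding (ISTA) recovers a sparse signal from $M \ll N$ Gaussian measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 128, 60, 5                 # ambient dimension, measurements (M << N), sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = 1.0
A = rng.standard_normal((M, N)) / np.sqrt(M)   # i.i.d. Gaussian sensing matrix
y = A @ x_true                                  # noiseless measurements for simplicity

# ISTA: gradient step on ||y - Ax||^2 / 2, then soft thresholding (the l1 prox)
lam = 0.02                                      # arbitrary regularization weight
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(500):
    g = x - A.T @ (A @ x - y) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print(np.linalg.norm(x - x_true))               # small recovery error
```

The recovered $x$ is close to the true sparse signal even though the linear system alone is underdetermined, which is exactly the role the sparsity prior plays.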
Many convex optimization algorithms have been developed to solve the CS image recovery problem with the non-differentiable $l_1$ norm, such as the iterative shrinkage-thresholding algorithm (ISTA), the fast iterative shrinkage-thresholding algorithm (FISTA)~\cite{Beck:2009gh}, alternating direction minimization (ADM)~\cite{Yang:2011dm}, and approximate message passing (AMP)~\cite{Donoho:2009bs}, to name a few. Even though these advanced algorithms achieved remarkable speedups compared to conventional convex optimization algorithms, CS image recovery is still slow in some application areas. The image ${\boldsymbol x}$ itself is usually not sparse, but a transformed image often is. For example, images are sparse in the wavelet domain and/or the discrete cosine transform (DCT) domain. In high-resolution imaging, images have sparse edges, which are often promoted by total variation (TV) minimization~\cite{li2009user}. Sparse MR image recovery used both wavelet and TV priors~\cite{Lustig:2007cu} or a dictionary learning prior from highly undersampled measurements~\cite{ravishankar2011mr}. Similarly, CS color video and depth recovery used both wavelet and DCT priors~\cite{Yuan:2014ij}. Hyperspectral imaging utilized a manifold-structured sparsity prior~\cite{Zhang:2015jn} or a reweighted Laplace prior~\cite{Zhang:2015ea}. Self-similarity in images has also been used as a prior for CS image recovery, as in NLR-CS~\cite{dong2014compressive} and denoiser-based AMP (D-AMP)~\cite{metzler2016denoising}. D-AMP utilized powerful modern denoisers such as BM3D~\cite{dabov2007image} and has recently been extended to sparse MRI~\cite{eksioglu2018denoising}. \subsection{Deep learning in compressive image recovery} Deep learning with massive amounts of training data has revolutionized many computer vision tasks~\cite{LeCun:2015dt}.
It has also influenced many low-level computer vision tasks such as image denoising~\cite{burger2012image,jain2009natural,Vincent:2010vu,xie2012image,zhang2017beyond} and CS recovery~\cite{Gupta:2018ia,Hammernik:2017ku, Jin:2017iz,Kulkarni:2016je,metzler2017learned,Xu:2018cd,Zhang:2018wz}. There are broadly two approaches to using deep neural networks for CS recovery. One is to use a deep network to map an initial low-quality reconstruction from CS samples directly to a high-quality ground truth image~\cite{Jin:2017iz,Kulkarni:2016je}. The other approach is to use deep neural network structures that unfold optimization algorithms and learned image priors, inspired by learned ISTA (LISTA)~\cite{gregor2010learning}. In sparse MRI, ADMM-Net~\cite{Yang:2016vc} and the variational network~\cite{Hammernik:2017ku} were proposed with excellent performance. Both methods learned a parametrized shrinkage function as well as a transformation operator for sparse representation from given training data. Recently, instead of using an explicit parametrization of the shrinkage operator, deep neural networks were used to unfold optimization algorithms for CS image recovery, as in learned D-AMP (LDAMP)~\cite{metzler2017learned}, ISTA-Net~\cite{Zhang:2018wz}, CNN-projected gradient descent for sparse CT~\cite{Gupta:2018ia}, and the Laplacian pyramid reconstructive adversarial network~\cite{Xu:2018cd}. Utilizing a generative adversarial network (GAN) for CS was also investigated~\cite{Bora:2017tz}. All these methods have one important requirement: ``clean'' ground truth images must be available for training. \subsection{Deep learning without ground truth} All deep learning based approaches for CS image recovery solve the ill-posed inverse problem with undersampled data by using deep neural networks.
Ironically, training deep neural networks requires ``clean'' ground truth images, but obtaining the best quality images from undersampled data requires well-trained deep neural networks. It is often expensive or infeasible to acquire clean data, for example, in high-resolution medical imaging (long acquisition time for MR, high radiation dose for CT) or high-resolution hyperspectral imaging. In this paper, we address this dilemma. Recently, there have been a few attempts to train deep neural networks for low-level computer vision tasks in unsupervised ways. Lehtinen \textit{et al.} proposed noise2noise to train deep neural networks for image denoising, inpainting, and MR reconstruction~\cite{Lehtinen:2018un}. This work implemented MR reconstruction using a direct mapping instead of an unfolded optimization scheme. However, it was not evaluated on various CS applications and it requires two noisy realizations of each image, which may not be available in some cases. Bora \textit{et al.} proposed AmbientGAN, a training method for GANs with contaminated images, and applied it to CS image recovery~\cite{Bora:2017tz,Bora:2018tl}. However, AmbientGAN was trained with artificially contaminated images, rather than with CS measurements. Moreover, the method of~\cite{Bora:2017tz} is theoretically limited to $i.i.d.$ Gaussian measurement matrices and was evaluated with relatively low-resolution images. Soltanayev \textit{et al.} proposed a Stein's unbiased risk estimator (SURE) based training method for deep learning based denoisers~\cite{soltanayev2018}. This method requires only one realization per image, but it is limited to $i.i.d.$ Gaussian noise. Moreover, it is non-trivial to extend it to CS image recovery. In this paper, we propose unsupervised training methods for deep learning based CS recovery using two theories: D-AMP and SURE.
Our proposed methods can train deep denoisers from undersampled measurements without ground truth and without image priors, and can recover images with state-of-the-art quality. The contributions of our work are: 1) Proposing a method to train deep learning based denoisers from undersampled measurements without ground truth and without image priors. Only one measurement realization per image is required. An accurate noise estimation method was also developed for training deep denoisers. 2) Proposing a CS image recovery method that modifies LDAMP to use as few as one denoiser instead of nine, with comparable performance, to reduce training time. 3) Extensive evaluations of the proposed method using high-resolution natural images and MR images for three CS recovery problems with Gaussian random, coded diffraction pattern, and realistic CS-MRI measurement matrices. \section{Background} \subsection{Denoiser-based AMP (D-AMP)} D-AMP is an algorithm designed to solve CS problems, where one needs to recover an image vector ${\boldsymbol x} \in\mathbb{R}^N$ from the measurements ${\boldsymbol y} \in\mathbb{R}^M$ using prior information about ${\boldsymbol x}$. Based on the model (\ref{eq:lin}), the problem can be formulated as: \begin{equation} \min_{\boldsymbol x} \|{\boldsymbol y}-{\boldsymbol A}{\boldsymbol x}\|^2_2 \quad \textrm{subject to} \quad {\boldsymbol x} \in C \label{eq1} \end{equation} where $C$ is the set of natural images. D-AMP solves (\ref{eq1}) relying on AMP theory. It employs an appropriate Onsager correction term ${\boldsymbol b}_t$ at each iteration so that ${\boldsymbol x}_t + {\boldsymbol A}^H {\boldsymbol z}_t$ in Algorithm~\ref{algorithmDAMP} becomes close to the ground truth image plus $i.i.d.$ Gaussian noise.
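To make the recursion concrete, here is a toy numpy sketch of the D-AMP iteration (our illustration, not the paper's implementation): a simple soft-thresholding denoiser stands in for BM3D/DnCNN, its divergence is just the number of surviving coefficients, and the threshold of twice the estimated noise level is an ad hoc choice:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding denoiser; its divergence equals the number of nonzeros."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
N, M, k = 500, 250, 20
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((M, N)) / np.sqrt(M)
y = A @ x_true

x, z = np.zeros(N), y.copy()
for _ in range(30):
    sigma = np.linalg.norm(z) / np.sqrt(M)   # noise level estimate from the residual
    pseudo = x + A.T @ z                     # behaves like ground truth + Gaussian noise
    x_new = soft(pseudo, 2.0 * sigma)        # denoising step (ad hoc threshold)
    b = z * np.count_nonzero(x_new) / M      # Onsager correction term
    z = y - A @ x_new + b
    x = x_new

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # near-exact recovery
```

With the Onsager term included, the effective noise in `pseudo` stays approximately Gaussian across iterations, which is the property the proposed training method relies on.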
D-AMP can utilize any state-of-the-art denoiser as the mapping operator ${\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma_t})}( \cdot )$ in CS image recovery (Algorithm~\ref{algorithmDAMP}) for reducing $i.i.d.$ Gaussian noise, as long as the divergence of the denoiser can be obtained. \begin{algorithm}[b] \SetKwInOut{input}{input} \SetKwInOut{output}{output} \caption{(Learned) D-AMP algorithm \cite{metzler2016denoising,metzler2017learned}} \label{algorithmDAMP} \input{${\boldsymbol x}_0 = {\boldsymbol 0}, {\boldsymbol y}, {\boldsymbol A}$} \For{$t = 1$ to $T$} { ${\boldsymbol b}_t \gets {\boldsymbol z}_{t-1} \mathrm{div} {\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma}_{t-1})}({\boldsymbol x}_{t-1} + {\boldsymbol A}^H {\boldsymbol z}_{t-1}) / M$ ${\boldsymbol z}_t \gets {\boldsymbol y} - {\boldsymbol A} {\boldsymbol x}_t + {\boldsymbol b}_t$ $\hat{\sigma}_t \gets \| {\boldsymbol z}_t \|_2 / \sqrt{M}$ ${\boldsymbol x}_{t+1} \gets {\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma_t})}({\boldsymbol x}_t + {\boldsymbol A}^H {\boldsymbol z}_t)$ } \output{${\boldsymbol x}_T$} \end{algorithm} D-AMP first utilized conventional state-of-the-art denoisers such as BM3D~\cite{dabov2007image} for ${\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma_t})}( \cdot )$ in Algorithm~\ref{algorithmDAMP}~\cite{metzler2015bm3d}. Given a noise standard deviation $\hat{\sigma}_t$ at iteration $t$, BM3D was applied to the noisy image ${\boldsymbol x_t} + {\boldsymbol A^H}{\boldsymbol z_t}$ to yield the estimated image ${\boldsymbol x_{t+1}}$. Since BM3D cannot be represented as a linear function, an analytical form of its divergence is not available for the Onsager term.
This issue was resolved by using a Monte-Carlo (MC) approximation of the divergence term $\mathrm{div} {\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma}_{t})}(\cdot)$~\cite{ramani2008monte}: For $\epsilon>0$, \begin{equation} \mathrm{div} {\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma}_{t})}(\cdot) \approx \frac{\tilde{\boldsymbol n}'}{\epsilon} \left( {\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma}_{t})}(\cdot + \epsilon \tilde{\boldsymbol n}) - {\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma}_{t})}(\cdot) \right) \label{eq:divD} \end{equation} where $\tilde{\boldsymbol n}$ is a standard normal random vector. Recently, LDAMP was proposed to use deep learning based denoisers for ${\boldsymbol D}_{{\boldsymbol w}(\hat{\sigma_t})}( \cdot )$ in Algorithm~\ref{algorithmDAMP}~\cite{metzler2017learned}. Nine deep neural network denoisers were trained with noiseless ground truth data for different noise levels. LDAMP consists of 10 D-AMP layers (iterations), and in each layer the state-of-the-art DnCNN~\cite{zhang2017beyond} was used as the denoiser operator. Unlike other unrolled neural network versions of iterative algorithms such as Learned-AMP~\cite{borgerding2017amp} and LISTA~\cite{gregor2010learning}, LDAMP exploited imaging system models and fixed the ${\boldsymbol A}$ and ${\boldsymbol A^H}$ operators while the parameters of the DnCNN denoisers were trained with ground truth data in the image domain. \subsection{Stein's unbiased risk estimator (SURE) based deep neural network denoisers} Over the past years, deep neural network based denoisers have been investigated extensively~\cite{burger2012image, jain2009natural,Vincent:2010vu,xie2012image,zhang2017beyond} and they often outperformed conventional state-of-the-art denoisers such as BM3D.
Deep neural network denoisers such as DnCNN~\cite{zhang2017beyond} yielded state-of-the-art denoising performance at multiple noise levels and are typically trained by minimizing the mean square error (MSE) between the output image of the denoiser and the noiseless ground truth image: \begin{equation} \frac{1}{K}\sum_{j=1}^{K} \| {\boldsymbol D}_{{\boldsymbol w}(\sigma)}( {\boldsymbol z}^{(j)} ) - {\boldsymbol x}^{(j)} \|^2 \label{eq3} \end{equation} where ${\boldsymbol z} \in\mathbb{R}^N$ is a noisy version of the ground truth image ${\boldsymbol x}$, contaminated with $i.i.d.$ Gaussian noise with zero mean and fixed variance $\sigma^2$, ${\boldsymbol D}_{{\boldsymbol w}(\sigma)}( \cdot )$ is a deep learning based denoiser with large-scale parameters ${\boldsymbol w}$ to train, and ${({\boldsymbol z}^{(1)}, {\boldsymbol x}^{(1)})}, \ldots, {({\boldsymbol z}^{(K)}, {\boldsymbol x}^{(K)})}$ is a training dataset with $K$ samples in the image domain. Recently, a method to train deep learning based denoisers only with noisy images was proposed~\cite{soltanayev2018}. Instead of minimizing the MSE, the following Monte-Carlo Stein's unbiased risk estimator (MC-SURE), which approximates the MSE, was minimized with respect to the large-scale weights of a deep neural network without noiseless ground truth images: \begin{equation} \begin{aligned} \frac{1}{K}\sum_{j=1}^{K} \Bigl[ \|{\boldsymbol z}^{(j)} - {\boldsymbol D}_{{\boldsymbol w}(\sigma)}( {\boldsymbol z}^{(j)} ) \|^2 - N \sigma^2 + {} \\ \frac{2\sigma^2 \tilde{\boldsymbol n}'}{\epsilon} \left( {\boldsymbol D}_{{\boldsymbol w}(\sigma)}({\boldsymbol z}^{(j)} + \epsilon \tilde{\boldsymbol n}) - {\boldsymbol D}_{{\boldsymbol w}(\sigma)}({\boldsymbol z}^{(j)}) \right) \Bigr]. \label{eq4} \end{aligned} \end{equation} In CS image recovery applications, there are often cases where neither ground truth data nor Gaussian-contaminated images are available; only CS samples in the measurement domain are available for training.
Thus, it is not straightforward to use MSE based or MC-SURE based deep denoiser networks for CS image recovery. The goal of this article is to propose a method to train deep neural network denoisers directly from CS samples without an image prior and to simultaneously recover high quality images. \section{Method} \subsection{Training deep denoisers from undersampled measurements without ground truth} Our proposed method exploits D-AMP (LDAMP)~\cite{metzler2016denoising,metzler2017learned} to yield Gaussian noise contaminated images during CS image recovery from large-scale undersampled measurements and to train a single deep neural network denoiser with these noisy images at different noise levels using MC-SURE based denoiser learning~\cite{soltanayev2018}. Since the Onsager correction term in D-AMP keeps ${\boldsymbol x}_t + {\boldsymbol A}^H {\boldsymbol z}_t$ close to the ground truth image plus Gaussian noise, we conjecture that these images can be utilized for MC-SURE based denoiser training. We investigate this further in the next subsection. Our joint algorithm is detailed in Algorithm~\ref{algorithm2}. Note that for large-scale CS measurements ${\boldsymbol y}^{(1)}, \ldots, {\boldsymbol y}^{(K)}$, both the images $\hat{\boldsymbol x}_{L}^{(1)}, \ldots, \hat{\boldsymbol x}_{L}^{(K)}$ and the trained denoising deep network ${\boldsymbol D}_{{\boldsymbol w}_{L}(\sigma)}(\cdot)$ can be obtained. After training, fast and high performance CS image recovery is possible without further training of the deep denoising network.
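The MC-SURE objective of (\ref{eq4}) can be sanity-checked numerically: for a toy linear shrinkage ``denoiser'' (an assumption for illustration only, not the DnCNN used in this work), MC-SURE closely tracks the true MSE without ever accessing the clean signal:

```python
import numpy as np

def mc_sure(denoiser, z, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE: estimates ||D(z) - x||^2 using only the noisy z."""
    rng = rng or np.random.default_rng(0)
    n = rng.standard_normal(z.shape)
    div = n @ (denoiser(z + eps * n) - denoiser(z)) / eps   # MC divergence term
    return np.sum((z - denoiser(z)) ** 2) - z.size * sigma ** 2 + 2 * sigma ** 2 * div

rng = np.random.default_rng(2)
x = rng.standard_normal(10000)                 # clean signal (used only to verify SURE)
sigma = 0.5
z = x + sigma * rng.standard_normal(x.shape)   # noisy observation
D = lambda v: 0.8 * v                          # toy linear shrinkage denoiser

sure = mc_sure(D, z, sigma)
mse = np.sum((D(z) - x) ** 2)
print(sure, mse)   # the two values closely agree
```

Because the loss depends only on `z` and an estimate of `sigma`, it can be minimized over denoiser weights with noisy images alone, which is what makes the joint scheme below possible.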
\begin{algorithm}[!t] \SetKwInOut{input}{input} \SetKwInOut{output}{output} \caption{Simultaneous LDAMP and MC-SURE deep denoiser learning algorithm} \label{algorithm2} \input{${\boldsymbol y}^{(1)}, \ldots, {\boldsymbol y}^{(K)}, {\boldsymbol A}$} \For{$l = 1$ to $L$} { \For{$k = 1$ to $K$} { \For{$t = 1$ to $T$} { ${\boldsymbol b}_t \gets {\boldsymbol z}_{t-1} \mathrm{div} {\boldsymbol D}_{{\boldsymbol w}_{l}(\hat{\sigma}_{t-1})}({\boldsymbol x}_{t-1} + {\boldsymbol A}^H {\boldsymbol z}_{t-1}) / M$ ${\boldsymbol z}_t \gets {\boldsymbol y}^{(k)} - {\boldsymbol A} {\boldsymbol x}_t + {\boldsymbol b}_t$ $\hat{\sigma}_t \gets \| {\boldsymbol z}_t \|_2 / \sqrt{M}$ \eIf{$\hat{\sigma}_t \le 55$}{ ${\boldsymbol x}_{t+1} \gets {\boldsymbol D}_{{\boldsymbol w}_{l}(\hat{\sigma_t})}({\boldsymbol x}_t + {\boldsymbol A}^H {\boldsymbol z}_t)$ \ }{ ${\boldsymbol x}_{t+1} \gets BM3D_{\hat{\sigma_t}}({\boldsymbol x}_t + {\boldsymbol A}^H {\boldsymbol z}_t)$ \ } } $\hat{\boldsymbol x}_{l}^{(k)} \gets {\boldsymbol x}_{T+1}$ ${\boldsymbol s}_l^{(k)} \gets {\boldsymbol x}_T + {\boldsymbol A}^H {\boldsymbol z}_T$ } Train ${\boldsymbol D}_{{\boldsymbol w}_{l}(\sigma)}(\cdot)$ with ${\boldsymbol s}_l^{(1)}, \ldots, {\boldsymbol s}_l^{(K)}$ at different noise levels $\sigma$ } \output{$\hat{\boldsymbol x}_{L}^{(1)}, \ldots, \hat{\boldsymbol x}_{L}^{(K)}, {\boldsymbol D}_{{\boldsymbol w}_{L}(\sigma)}(\cdot)$ } \end{algorithm} The original LDAMP utilized 9 DnCNN denoisers trained on ``clean'' images for different noise levels ($\sigma = 0-10, 10-20, 20-40, 40-60, 60-80, 80-100, 100-150, 150-300, 300-500$)~\cite{metzler2017learned}. However, in our work we argue that training a single DnCNN denoiser can be enough to achieve almost the same results. The network was pre-trained with images reconstructed by D-AMP with BM3D, with added Gaussian noise in the $\sigma \in [0, 55]$ range.
The pre-trained DnCNN blind denoiser ${\boldsymbol D}_{{\boldsymbol w}_{l}(\hat{\sigma_t})}$ cleans ${\boldsymbol x}_{t-1} + {\boldsymbol A}^H {\boldsymbol z}_{t-1}$ with noise levels in $[0, 55]$ (line 8), while $BM3D_{\hat{\sigma_t}}$ is used for higher-level noise reduction (line 10). Depending on the sampling ratio and the forward operator $\boldsymbol A$, only the initial 2-4 iterations need BM3D to decrease the noise level sufficiently for DnCNN. Then, after $T$ iterations, the set of training data ${\boldsymbol s}_l^{(1)}, \ldots, {\boldsymbol s}_l^{(K)}$ can be generated using LDAMP with the pre-trained deep denoiser. These noisy training images were utilized for further training the pre-trained DnCNN with MC-SURE. It is worth noting that the noise level range for DnCNN may change depending on the particular problem. For example, we found that for $i.i.d.$ Gaussian and coded diffraction pattern (CDP) matrices, training DnCNN in the $\sigma \in [0, 55]$ range was optimal, while for the CS-MRI case, the range was shortened to $\sigma \in [0, 10]$. For noisier measurements, the noise level range of the denoiser may need to be optimized. \subsection{Accuracy of standard deviation estimation for MC-SURE based denoiser learning} In D-AMP and LDAMP~\cite{metzler2016denoising,metzler2017learned}, the noise level was estimated in the measurement domain using \begin{equation} \hat{\sigma}_t \gets \| {\boldsymbol z}_t \|_2 / \sqrt{M}. \label{stdv} \end{equation} The accuracy of this estimation was not critical for D-AMP or LDAMP since the denoisers in both methods were not sensitive to different noise levels. However, accurate noise level estimation is quite important for MC-SURE based deep denoiser network learning. We investigated the accuracy of (\ref{stdv}) and found that it depends on the measurement matrix.
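As a quick numerical sanity check (an illustration, not an experiment from the paper): for a real $i.i.d.$ Gaussian ${\boldsymbol A}$ and a white-noise residual, the measurement-domain estimate (\ref{stdv}) and the image-domain estimate $\|\operatorname{Re}({\boldsymbol A}^H {\boldsymbol z}_t)\|_2/\sqrt{N}$, proposed below in (\ref{new_sigma}), both recover the true $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 2000, 8000
A = rng.standard_normal((M, N)) / np.sqrt(M)   # real i.i.d. Gaussian sensing matrix
sigma = 0.7
z = sigma * rng.standard_normal(M)             # residual modeled as white Gaussian noise

sigma_meas = np.linalg.norm(z) / np.sqrt(M)                      # measurement-domain estimate
sigma_img = np.linalg.norm((A.conj().T @ z).real) / np.sqrt(N)   # image-domain estimate
# For a real A the real part is trivial; for a complex-valued CDP matrix it matters.
print(sigma_meas, sigma_img)   # both close to 0.7
```

The two estimates agree here because the matrix is real; the discussion below shows where they diverge for complex CDP measurements.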
With an $i.i.d.$ Gaussian measurement matrix ${\boldsymbol A}$, (\ref{stdv}) was very accurate and comparable to the ground truth standard deviation obtained from the true residual $({\boldsymbol x_t}+{\boldsymbol A}^H {\boldsymbol z_t}) - {\boldsymbol x_{true}}$. However, with a CDP measurement matrix ${\boldsymbol A}$ that yields complex measurements, it turned out that (\ref{stdv}) yielded overestimated noise levels for multiple examples. Since the image ${\boldsymbol x_t}$ is real, we propose a new standard deviation estimation method for D-AMP: \begin{equation} \hat{\sigma}_t \gets \| \operatorname{Re}({\boldsymbol A}^H {\boldsymbol z}_t ) \|_2 / \sqrt{N}. \label{new_sigma} \end{equation} We performed comparison studies among (\ref{stdv}), (\ref{new_sigma}), and the ground truth from the true residual $({\boldsymbol x_t}+{\boldsymbol A}^H {\boldsymbol z_t}) - {\boldsymbol x_{true}}$ and found that they are all similar for the $i.i.d.$ Gaussian measurement matrix. However, our proposed method (\ref{new_sigma}) yielded more accurate estimates of the standard deviation than the previous method (\ref{stdv}) for the complex-valued CDP matrix. \begin{figure}[!t] \begin{center} \begin{subfigure}[t]{0.48\linewidth} \centering \scriptsize (a) True residual \includegraphics[width=1\textwidth]{den1} \label{sfig1} \end{subfigure} \begin{subfigure}[t]{0.48\linewidth} \centering \scriptsize (b) D-AMP \includegraphics[width=1\textwidth]{den2} \label{sfig2} \end{subfigure} \begin{subfigure}[t]{0.48\linewidth} \centering \scriptsize (c) Proposed \includegraphics[width=1\textwidth]{den3} \label{sfig3} \end{subfigure} \begin{subfigure}[t]{0.48\linewidth} \begin{minipage}{.1cm} \vfill \end{minipage} \end{subfigure} \caption{Normalized residual histograms of the ``Boat'' image after 10 iterations using LDAMP-BM3D for the CDP matrix.
Normalization was done with the sigma estimated from (a) the true residual, (b) ${\boldsymbol z}_T$ (D-AMP), and (c) $\operatorname{Re}({\boldsymbol A}^H {\boldsymbol z}_T )$ (Proposed).} \label{PDF} \end{center} \end{figure} Figure~\ref{PDF} illustrates the accuracy of our noise estimator compared to the previous one. Normalizing the true residual histogram with the true noise level yielded a good fit to the standard normal density (red line), as shown in Figure~\ref{PDF}(a). The normalized histogram of the true residual with the previous noise estimation method was sharper, as shown in Figure~\ref{PDF}(b), due to the overestimation of noise levels. However, our proposed standard deviation estimation yielded a normalized histogram that fits the ground truth (red line) well. In the simulations, we will show that our proposed estimation is critical to achieving high performance with our proposed methods for CDP and CS-MRI. Moreover, we found that the proposed noise estimator (\ref{new_sigma}) is also applicable to CS-MRI when the k-space data is moderately undersampled: for sampling rates larger than 35-40\%, the true residual follows a Gaussian distribution, whose level can be accurately estimated by (\ref{new_sigma}) and further utilized for training deep denoisers with MC-SURE. \section{Simulation Results} \subsection{Setup} \paragraph{Datasets} We used images from the DIV2K~\cite{agustsson2017ntire} and Berkeley BSD-500~\cite{MartinFTM01} datasets, as well as standard test images, for training and testing our proposed method on $i.i.d.$ Gaussian and CDP matrices. The training dataset comprised all 500 images from BSD-500, while a test set of 100 images included 75 randomly chosen images from DIV2K and 25 standard test images. Since the proposed method uses a fixed linear operator for image reconstruction, all test and training images had the same size. Thus, all images were subsampled and cropped to a size of 180 $\times$ 180 and compressively sampled to generate measurement data.
For CS-MRI reconstruction, training data were pulled from an open repository (http://mridata.org/). The knee dataset includes 256 slices of 320 $\times$ 320 knee images per patient for 20 patients. We randomly chose images from 3 patients for training and images from 1 other patient for testing. The images were transformed into the k-space domain and subsampled with realistic radial sampling patterns at various sampling rates. We implemented all methods on the Tensorflow framework and used the Adam optimizer~\cite{kinga2015method} with a learning rate of 10\textsuperscript{-3}, which was dropped to 10\textsuperscript{-4} after 40 epochs, followed by 10 more epochs of training. The batch size was set to 128, and training the DnCNN denoiser took approximately 12-14 hours on an NVIDIA Titan Xp. \paragraph{Initialization of DnCNN denoiser} The primary idea of this stage is to decouple the DnCNN denoiser from LDAMP SURE and pre-train it given the measurement data $\boldsymbol y$ and linear operator $\boldsymbol A$. To do so, the BSD-500 images were first reconstructed using BM3D-AMP. The recovered images were rescaled, cropped, flipped, and rotated to generate 298,000 50$\times$50 patches. These patches were used as ground truth to pre-train the DnCNN denoiser with MSE. We also simulated another scenario. Since our approach does not require a dataset with ground truth, it is possible to use measurement data from the test set. Thus, we generated 357,600 50$\times$50 patches from reconstructed test and train images. The DnCNN denoiser pre-trained on those patches was utilized in the LDAMP network, which is labeled in the tables as ``LDAMP BM3D-T'' or ``LDAMP BM3D'' depending on whether test measurements were included for training or not. Our DnCNN denoiser was trained for the $\sigma \in [0, 55]$ noise level range. In the CS-MRI case, BM3D-AMP-MRI \cite{eksioglu2018denoising} was specifically tailored for CS-MRI reconstruction and thus yielded significantly better results than conventional BM3D-AMP.
Therefore, the k-space knee dataset was reconstructed using BM3D-AMP-MRI \cite{eksioglu2018denoising} and the resulting images were rescaled, cropped, flipped, and rotated to generate 267,240 50$\times$50 patches for LDAMP BM3D and 350,320 50$\times$50 patches for LDAMP BM3D-T training. We trained the DnCNN denoisers for the $\sigma \in [0, 10]$ noise range. \paragraph{Training LDAMP SURE} First, LDAMP SURE was run for $T=10$ iterations using the pre-trained DnCNN denoiser and BM3D. At the last iteration, we collected images and estimated the noise standard deviation with (\ref{new_sigma}). Then, all images with noise levels in the $[0, 55]$ range (CS-MRI case: $\sigma \in [0, 10]$) were grouped into one set, while outliers with larger noise levels were replaced by BM3D-AMP recovered images with added Gaussian noise. Overall, we have a dataset of images with $\sigma \in [0, 55]$ (CS-MRI case: $\sigma \in [0, 10]$) to train the DnCNN denoiser with MC-SURE. These steps were repeated $L$ times to further improve the performance of our proposed method. Although training DnCNN with MC-SURE involves estimating a noise standard deviation for an entire image, we assume that a patch from an image has the same noise level as the image itself. Thus, to train LDAMP SURE we generated patches without rescaling, to avoid distorting the noise. To train DnCNN with SURE, we used the weights of the pre-trained DnCNN and trained with the Adam optimizer \cite{kinga2015method} at a learning rate of 10\textsuperscript{-4} and batch size 128 for 10 epochs. Then, we decreased the learning rate to 10\textsuperscript{-5} and trained for 10 more epochs. The training process lasted about 3-4 hours for LDAMP SURE or LDAMP SURE-T. We empirically found that after $L$=2 iterations (Algorithm~\ref{algorithm2}, line 1) of training LDAMP SURE, the results converge for both the CDP and $i.i.d.$ Gaussian cases, while $L = 1$ suffices for CS-MRI.
The accuracy of the MC-SURE approximation depends on the selection of the constant $\epsilon$, which is directly proportional to $\sigma$ \cite{deledalle2014stein, soltanayev2018}. Therefore, for training DnCNN with SURE, the $\epsilon$ value was calculated for each patch based on its noise level. \subsection{Results} \paragraph{Gaussian measurement matrix} We compared our proposed LDAMP SURE with the state-of-the-art CS methods, namely BM3D-AMP~\cite{metzler2015bm3d}, NLR-CS~\cite{dong2014compressive}, and TVAL3~\cite{li2009user}. BM3D-AMP was used with default parameters and run for 30 iterations to reduce the high variation in the results, although the PSNR\footnote{PSNR stands for peak signal-to-noise ratio and is calculated by the expression $10\log_{10}\left(\frac{255^2}{\mathrm{mean}((\hat{x} - x_{gt})^2)}\right)$ for the pixel range $[0, 255]$.} approached its maximum after 10 iterations \cite{metzler2016denoising}. The proposed LDAMP SURE algorithm was run for 30 iterations but also showed convergence after 8-10 iterations. NLR-CS was initialized with 8 iterations of BM3D as justified in \cite{metzler2016denoising}, while TVAL3 was set to its default parameters. Also, we included the results of LDAMP trained on ground truth images to see the performance gap. From Table \ref{gaussian}, the proposed LDAMP SURE and LDAMP SURE-T outperform the other methods at higher CS ratios by 0.26-0.46 dB, while in the highly undersampled case they are inferior to NLR-CS. Nevertheless, it is clear that SURE based LDAMP is able to improve the performance of pretrained LDAMP BM3D and surpasses BM3D-AMP by 0.28-1.56 dB. In Figure~\ref{fig-Gaussian_recon}, reconstructions by all methods on a test image are shown for a 25\% sampling ratio. The proposed LDAMP SURE and LDAMP SURE-T provide sharper edges and preserve more details. In terms of run time, the dominant source of computation is the BM3D denoiser at the initial iterations, while DnCNN takes less than a second for inference.
LDAMP SURE utilizes the CPU for BM3D and the GPU for DnCNN. Consequently, the proposed LDAMP SURE is comparatively faster than the BM3D-AMP, NLR-CS, and TVAL3 methods. \paragraph{Coded diffraction pattern measurements} LDAMP SURE was tested with randomly sampled coded diffraction patterns~\cite{candes2015phase} and yielded the best quantitative performance at higher sampling rates (see Table \ref{cdp} and Figure \ref{fig-CDP_recon}). LDAMP SURE and LDAMP SURE-T achieved about a 1.8 dB performance gain over BM3D-AMP. However, at extremely low sampling ratios, our method slightly falls behind TVAL3. LDAMP SURE requires a better pre-training dataset than the images BM3D-AMP reconstructs from highly undersampled data. Therefore, one way to surpass TVAL3 in the highly undersampled case is to pretrain DnCNN with TVAL3-reconstructed images. \paragraph{CS MR measurement matrix} LDAMP SURE was applied to the CS-MRI reconstruction problem to demonstrate its generality and to show its performance on images that contain structures different from natural image datasets. We compared LDAMP SURE with the state-of-the-art BM3D-AMP-MRI algorithm \cite{eksioglu2018denoising} for CS-MR image reconstruction, along with TVAL3, BM3D-AMP, and the dictionary learning method DL-MRI \cite{ravishankar2011mr}. Average image recovery PSNRs and run times are tabulated in Table \ref{csmri}. Figure~\ref{fig-CSMRI_recon40} shows that our proposed method yielded state-of-the-art performance, close to the ground truth. The results reveal that the proposed LDAMP SURE-T outperforms existing algorithms at all sampling ratios. \section{Discussion and Conclusion} We proposed a method for simultaneous compressive image recovery and deep learning based denoiser training from undersampled measurements. Our proposed method yielded better image quality than conventional methods at higher sampling rates for $i.i.d.$ Gaussian, CDP, and CS MR measurements.
Thus, this work may be helpful in areas where obtaining ground truth images is challenging, such as hyperspectral or medical imaging. Note that training deep-learning-based image denoisers from undersampled data still requires the undersampled measurements to contain enough information. Tables~\ref{gaussian} and \ref{cdp} show that 5\% of the full samples is not enough to achieve state-of-the-art performance, possibly due to a lack of information in the measurements. Also note that since we assume a single CS measurement per image and evaluate with various CS matrices on high-resolution images, it was not possible to compare our method to noise2noise~\cite{Lehtinen:2018un} and AmbientGAN~\cite{Bora:2017tz,Bora:2018tl}. Lastly, our proposed method can potentially be used with more advanced deep denoisers, as long as they are trainable with the MC-SURE loss~\cite{soltanayev2018}. \begin{table*}[t] \begin{center} \begin{tabular} {lccccccc} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Training Time} & \multicolumn{2}{c}{${M}/{N} = 5\%$} & \multicolumn{2}{c}{${M}/{N} = 15\%$} & \multicolumn{2}{c}{${M}/{N} = 25\%$} \\ \cmidrule(r){3-8} & & PSNR & Time & PSNR & Time & PSNR & Time \\ \midrule TVAL3 &N/A &20.46 &9.71 &24.14 &22.96 &26.77 &34.87 \\ NLR-CS &N/A &\textbf{21.88} &128.73 &27.58 &312.92 &31.20 &452.23 \\ BM3D-AMP &N/A &21.40 &25.98 & 26.74 &24.21 & 30.10 &23.08 \\ \midrule LDAMP BM3D &10.90 hrs &21.41 &8.98 & 27.54 &3.94 & 31.20 &2.89 \\ LDAMP BM3D-T &14.30 hrs &21.42 &8.98 & 27.61 &3.94 & 31.32 &2.89 \\ LDAMP SURE &15.05 hrs &21.44 &8.98 & 27.65 &3.94 & 31.46 &2.89 \\ LDAMP SURE-T &17.97 hrs &21.68 &\textbf{8.98} &\textbf{27.84} &\textbf{3.94} &\textbf{31.66} &\textbf{2.89}\\ \midrule LDAMP MSE & 10.17 hrs &22.07 &8.98 &27.78 &3.94 &31.65 &2.89 \\ \bottomrule \end{tabular} \caption{Average PSNRs (dB) and run times (sec) of 100 180$\times$180 image reconstructions for i.i.d.
Gaussian measurements case (no measurement noise) at various sampling rates ($M/N\times100\%$).} \label{gaussian} \end{center} \end{table*} \begin{table*}[!h] \begin{center} \begin{tabular} {lccccccc} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Training Time} & \multicolumn{2}{c}{${M}/{N} = 5\%$} & \multicolumn{2}{c}{${M}/{N} = 15\%$} & \multicolumn{2}{c}{${M}/{N} = 25\%$} \\ \cmidrule(r){3-8} & & PSNR & Time & PSNR & Time & PSNR & Time \\ \midrule TVAL3 &N/A &\textbf{22.57} &\textbf{0.85} &27.99 &\textbf{0.75} &32.82 &\textbf{0.67} \\ NLR-CS &N/A &19.00 &93.05 &22.98 &86.90 &31.24 &119.70\\ BM3D-AMP &N/A &21.66 &22.15 & 27.29 &22.28 &31.40 &17.00 \\ \midrule LDAMP BM3D &10.56 hrs &21.97 &23.43 &28.04 &7.01 &31.65 &2.71 \\ LDAMP BM3D-T &12.67 hrs &21.93 &23.43 &28.01 &7.01 &32.12 &2.71 \\ LDAMP SURE &15.22 hrs &22.18 &23.43 &29.14 &7.01 &33.26 &2.71 \\ LDAMP SURE-T &17.61 hrs &22.06 &23.43 &\textbf{29.17} &7.01 &\textbf{33.51} &2.71 \\ \midrule LDAMP MSE & 10.17 hrs &22.12 &23.43 &28.87 &7.01 &33.88 &2.71 \\ \bottomrule \end{tabular} \caption{Average PSNRs (dB) and run times (sec) of 100 180$\times$180 image reconstructions for CDP measurements case (no measurement noise) at various sampling rates ($M/N \times100\%$).} \label{cdp} \end{center} \end{table*} \begin{table*}[!h] \begin{center} \begin{tabular} {lccccccc} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{Training Time} & \multicolumn{2}{c}{${M}/{N} = 40\%$} & \multicolumn{2}{c}{${M}/{N} = 50\%$} & \multicolumn{2}{c}{${M}/{N} = 60\%$} \\ \cmidrule(r){3-8} & & PSNR & Time & PSNR & Time & PSNR & Time \\ \midrule TVAL3 & N/A &36.76 &\textbf{0.58} &37.13 &\textbf{0.24} &38.35 &\textbf{0.21} \\ DL-MRI & N/A &36.60 &98.51 &37.81 &97.58 &39.13 &99.44\\ BM3D-AMP-MRI & N/A &37.42 &14.76 &38.94 &15.00 &40.51 &15.36 \\ BM3D-AMP & N/A &36.15 &96.23 &36.29 &84.34 &39.53 &98.01 \\ \midrule LDAMP BM3D & 9.31 hrs &37.12 &6.26 &38.63 &6.14 &39.53 &6.10 \\ LDAMP BM3D-T &12.41 hrs &37.65 &6.26 &38.92 &6.14 &39.87 &6.10 \\ 
LDAMP SURE &12.04 hrs &37.40 &6.26 &38.70 &6.14 &40.62 &6.10 \\ LDAMP SURE-T & 16.05 hrs &\textbf{37.77} &6.26 &\textbf{39.09} &6.14 & \textbf{40.71} &6.10 \\ \bottomrule \end{tabular} \caption{Average PSNRs (dB) and run times (sec) of 100 180$\times$180 image reconstructions for CS-MRI measurements case (no measurement noise) at various sampling rates ($M/N \times100\%$).} \label{csmri} \end{center} \end{table*} \begin{figure*}[!t] \begin{center} \begin{subfigure}[!b]{0.16\textwidth} \centering Ground truth \includegraphics[width=1\textwidth]{gss_25/gt1.png} \caption{PSNR} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering TVAL3 \includegraphics[width=1\textwidth]{gss_25/tval1.png} \caption{26.10 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering BM3D-AMP \includegraphics[width=1\textwidth]{gss_25/bm3d1.png} \caption{31.90 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering NLR-CS \includegraphics[width=1\textwidth]{gss_25/nlr1.png} \caption{33.59 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering LDAMP SURE \includegraphics[width=1\textwidth]{gss_25/sure1.png} \caption{34.53 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering LDAMP SURE-T \includegraphics[width=1\textwidth]{gss_25/suret1.png} \caption{\textbf{35.19 dB}} \end{subfigure} \caption{Reconstructions of 180$\times$180 test ``Butterfly'' image with i.i.d.
Gaussian matrix with ${M}/{N}=0.25$ sampling rate.} \label{fig-Gaussian_recon} \end{center} \end{figure*} \begin{figure*}[!t] \begin{center} \begin{subfigure}[!b]{0.16\textwidth} \centering Ground truth \includegraphics[width=1\textwidth]{cdp_15/gt1.png} \caption{PSNR} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering TVAL3 \includegraphics[width=1\textwidth]{cdp_15/tval1.png} \caption{27.52 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering BM3D-AMP \includegraphics[width=1\textwidth]{cdp_15/bm3d1.png} \caption{24.08 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering NLR-CS \includegraphics[width=1\textwidth]{cdp_15/nlr1.png} \caption{22.29 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering LDAMP SURE \includegraphics[width=1\textwidth]{cdp_15/sure1.png} \caption{\textbf{29.17 dB}} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering LDAMP SURE-T \includegraphics[width=1\textwidth]{cdp_15/suret1.png} \caption{28.92 dB} \end{subfigure} \caption{Reconstructions of 180$\times$180 test image with CDP measurement matrix for ${M}/{N}=0.15$ sampling rate.} \label{fig-CDP_recon} \end{center} \end{figure*} \begin{figure*}[!t] \begin{center} \begin{subfigure}[!b]{0.16\textwidth} \centering Ground truth \includegraphics[width=1\textwidth]{csmri_40/gt1.png} \caption{PSNR} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering TVAL3 \includegraphics[width=1\textwidth]{csmri_40/tval1.png} \caption{37.44 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering BM3D-AMP \includegraphics[width=1\textwidth]{csmri_40/bm3d1.png} \caption{36.54 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering DL-MRI \includegraphics[width=1\textwidth]{csmri_40/dlmri1.png} \caption{36.76 dB} \end{subfigure} \begin{subfigure}[!b]{0.16\textwidth} \centering BM3D-AMP-MRI \includegraphics[width=1\textwidth]{csmri_40/bm3dmri1.png} \caption{37.85 dB} \end{subfigure} 
\begin{subfigure}[!b]{0.16\textwidth} \centering LDAMP SURE-T \includegraphics[width=1\textwidth]{csmri_40/suret1.png} \caption{\textbf{38.22 dB}} \end{subfigure} \caption{Reconstructions with CS-MRI measurement matrix for ${M}/{N}=0.40$. Residual errors are shown in red boxes. } \label{fig-CSMRI_recon40} \end{center} \end{figure*} \section*{Acknowledgments} This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B05035810). {\small \bibliographystyle{ieee}
\section{Introduction}\label{Sec:intro} Today's data-intensive applications, such as big data processing, live streaming~\cite{anysee}, and graph analytics, place heavy demands on memory capacity. DRAM scaling, however, cannot keep pace with the increasing memory requirements of petabyte-scale big data processing. Emerging byte-addressable non-volatile memory (NVM) technologies, such as phase change memory (PCM) and 3D XPoint~\cite{3DXPoint}, offer high memory density, low cost per bit, and near-zero standby power consumption, at the expense of lower performance and limited write endurance~\cite{dhiman2009pdram,Qureshi:2009}. Despite these promising features, NVM is not a direct substitute for DRAM. Thus, it is more practical to use NVM in conjunction with DRAM in hybrid memory systems~\cite{Qureshi:2009,dhiman2009pdram,ramos2011page,wei2015exploiting,Liu:2017}. With continuously increasing application footprints and a corresponding growth of memory capacity, virtual-to-physical memory address translation tends to become a new bottleneck of system performance~\cite{Alam:2017:DVM:3079856.3080209}. Modern computer systems typically employ a translation lookaside buffer (TLB) to cache recent virtual-to-physical address translations for faster retrieval. Upon each memory reference, the TLB is consulted first. If the requested address is not present in the TLB (i.e., a TLB miss), the CPU must retrieve the missing translation through a hardware page table walk, which incurs a significant performance penalty due to four memory references in x86-64 systems~\cite{Yaniv:2016:HDC:2896377.2901456}. Previous studies have shown that TLB misses can degrade system performance by up to 50\% when the application's memory footprint becomes very large~\cite{mccurdy2008investigating,basu2013efficient,Barr:2010:TCS:1815961.1815970,Bhattacharjee:2013:LMM:2540708.2540741,du2015supporting}. 
There has been a large body of work on improving TLB coverage, i.e., the total memory address space that can be directly translated through the TLB. As the number of TLB entries does not scale well due to speed, power, and space constraints, superpages have been widely exploited to increase TLB coverage~\cite{du2015supporting,Swanson:1998,Romer:1995,Cox:2017,Agarwal:2017}. A superpage is a large virtual page that maps to a number of contiguous physical small (base) pages. The use of superpages can significantly broaden TLB coverage (by a factor of 512 for typical 2 MB superpages compared to 4 KB small pages). The side effect, however, is that superpages hamper lightweight and agile memory management, such as page migration. On the other hand, hybrid DRAM/NVM memory systems often rely on page migration to achieve higher performance~\cite{Qureshi:2009,ramos2011page,wei2015exploiting,Liu:2017} and energy efficiency~\cite{park2011power,wei2015exploiting,Salkhordeh:2016,Liu:2017}. This in turn requires lightweight page migration schemes that move frequently-accessed (hot) pages from the slow NVM to the fast DRAM. However, page migration at the superpage granularity (e.g., 2 MB) can incur unbearable performance overhead, wasting DRAM capacity and bandwidth when most memory references fall in a small region of the superpage (see Section~\ref{Sec:observation}); the cost may even exceed the benefit of the migration. This places the use of superpages in a dilemma, since the benefit of lightweight page migration can outweigh that of extended TLB coverage. In this paper, we study how to exploit superpages for wide TLB coverage while supporting lightweight page migration in hybrid memory systems. To achieve this goal, several challenging issues should be addressed. 
(1) \textbf{Lightweight hot page identification:} to support lightweight page migration, a large body of work advocates monitoring memory accesses through the memory controller~\cite{park2011power,ramos2011page}. However, access counters at per-page granularity (i.e., 4 KB) lead to prohibitively high storage overhead when main memory becomes large (e.g., 1 TB of memory needs $\frac{1\,TB}{4\,KB}\times 2\,B = 512$ MB of storage on a 2-bytes-per-page basis). Storing those records in on-chip SRAM is impractical. An alternative is to store them in main memory; however, this would add memory access latency and record lookup overhead for each memory reference. (2) \textbf{Impact of lightweight page migration on TLB coverage}: page migrations often fragment superpages and thus break physical address continuity. Previous work has advocated splintering superpages to enable lightweight memory management such as page migration and sharing, at the cost of address translation performance~\cite{Pham:2015,Guo:2015}. It remains a challenge to retain the improved TLB coverage when hot small pages within superpages are migrated to the DRAM. (3) \textbf{Efficiency of hot page addressing}: as hot pages contribute a major portion of applications' memory references, it is essential to further improve address translation performance for those hot pages in the DRAM. To address the above problems, we propose \textit{Rainbow}, a novel memory management mechanism that bridges the gap between superpaging and lightweight page migration for hybrid DRAM/NVM memory systems. \textit{Rainbow} manages NVM and DRAM at different page granularities. Correspondingly, \textit{Rainbow} exploits the available hardware feature of split TLBs~\cite{papadopoulou2015prediction,Cox:2017,AMDFamily,IntelSkylake} to support different page sizes, with one TLB for addressing superpages and another for small pages. 
Rainbow migrates hot small pages within superpages to the DRAM without compromising the integrity of the superpage TLB. As a result, Rainbow effectively architects the DRAM as a cache for the NVM. Rainbow has the following novel designs to address the aforementioned challenges in supporting both superpages and lightweight page migration: \begin{itemize}[leftmargin=*] \item To mitigate the storage overhead of fine-grained page access counting, we propose two-stage memory access counting. In a given time interval, Rainbow first counts NVM memory accesses at the superpage granularity, and then selects the top $N$ hot superpages as targets. In the second stage, we monitor only those hot superpages at the small-page (4 KB) granularity to identify hot small pages. This history-based policy avoids monitoring the sub-blocks (4 KB pages) of a large number of cold superpages, and thus significantly reduces the overhead of hot page identification. \item We adopt split TLBs to accelerate address translation for both DRAM and NVM references. To preserve the superpage TLB entries when some small pages are migrated to the DRAM, we use a bitmap in the memory controller to identify the migrated hot pages, without splintering the superpages. \item We propose a physical address remapping mechanism to access the migrated hot pages in the DRAM without suffering costly page table walks. To achieve this, we store a migrated hot page's destination address in its original residence (the superpage). When a hot page misses in the 4 KB page TLB, the DRAM page is addressed through an indirect access of its superpage. This design logically leverages the superpage TLB as a next-level cache of the 4 KB page TLB. Because the superpage TLB hit rate is rather high, Rainbow can significantly speed up DRAM page addressing. 
\end{itemize} Putting these designs together, we implement Rainbow in an integrated simulator based on zsim~\cite{Sanchez:2013} and NVMain~\cite{NVMain}. To the best of our knowledge, this is the first work that supports both superpages and lightweight page migration in hybrid memory systems. We compare Rainbow with several alternatives using a wide range of workloads. Experimental results show that Rainbow significantly reduces the address translation overhead for applications with large memory footprints, and thus improves application performance by up to 2.9X (43.0\% on average) compared to a hybrid memory migration policy without superpage support. Rainbow also demonstrates higher energy efficiency than other policies. The remainder of this paper is organized as follows. Section~\ref{sec:motivation} introduces the background and motivates our design for DRAM/NVM hybrid memories. Section~\ref{sec:design-and-impl} gives the detailed design of Rainbow. Experimental results are presented in Section~\ref{sec:evaluation}. We discuss related work in Section~\ref{relatedwork} and conclude in Section~\ref{sec:conc}. \section{Background and Motivation} \label{sec:motivation} We first introduce superpages and split TLBs. Next, we experimentally study the memory access statistics of typical applications to motivate the design of Rainbow. \subsection{Superpage and Split TLBs} The evidence of performance degradation due to address translation has been well corroborated~\cite{mccurdy2008investigating,basu2013efficient,Barr:2010:TCS:1815961.1815970,Bhattacharjee:2013:LMM:2540708.2540741}. Modern data-centric applications are characterized by large memory footprints and low data locality, and are expected to incur even higher address translation overheads due to TLB misses. On the other hand, emerging NVM technologies are much denser and cheaper than conventional DRAM, and consequently we expect a rapid growth of memory capacity in the near future. 
These trends toward big memory systems make the address translation problem more pressing than ever. TLB misses can be mitigated by improving TLB coverage (or TLB reach). There are two fundamental ways to enlarge TLB coverage: using more TLB entries, or letting each entry map a larger memory page. Increasing the TLB size implies a larger die area, higher energy consumption, and higher access latency. The alternative is to use superpages, which have been proposed to improve TLB coverage since the 1990s~\cite{Talluri:1994,Romer:1995}. Most modern computer systems support superpages at both the hardware and software levels. For example, x86-64 supports 4 KB, 2 MB, and 1 GB page sizes, and processor vendors provide split TLBs for the different page sizes~\cite{papadopoulou2015prediction,Cox:2017,AMDFamily,IntelSkylake}. A virtual address can be looked up in all split TLBs in parallel to shorten the address translation latency. Although split TLBs are simple to implement and perform well, they would be underutilized without judicious allocation of superpages at different sizes. For example, if the OS only allocates 4 KB pages, the 2 MB superpage TLB is wasted. \subsection{Memory Access Analytics of Superpages}\label{Sec:observation} Emerging NVM offers higher density than DRAM, but at the expense of higher access latency and lower bandwidth. In particular, for write operations, NVM is about 5-10x slower than DRAM and consumes up to 10x more energy~\cite{Raoux:2008}. As a result, page migration is widely utilized to improve performance and energy efficiency in hybrid memory systems~\cite{Qureshi:2009,park2011power,ramos2011page,wei2015exploiting,Liu:2017}. However, the use of superpages in hybrid memory systems precludes lightweight page migration because a superpage is required to be contiguous and aligned in both the physical and virtual address spaces. 
Fine-grained (e.g., 4 KB) page migration breaks the continuity of the physical address space, and thus splinters superpages. Page migration at the superpage granularity (e.g., 2 MB) retains the advantages of wide TLB coverage but is prohibitively costly. To evaluate the side effects of superpage migrations, we run several representative applications using 2 MB superpages, and profile their memory usage at the granularity of 4 KB pages in intervals of $10^8$ cycles. These applications are selected from SPEC CPU2006\cite{Spec_CPU2006}, Parsec\cite{Parsec}, the Problem Based Benchmark Suite (PBBS)\cite{PBBS}, Graph500\cite{Graph500}, Linpack\cite{Linpack}, NPB-CG~\cite{CG}, and the HPC Challenge benchmark GUPS~\cite{GUPS}. CactusADM, mcf, and soplex are selected from SPEC CPU2006. CactusADM is a computational kernel representative of many applications in numerical relativity. Soplex solves a linear program using the simplex algorithm. Mcf is a program used for single-depot vehicle scheduling in public mass transportation. Canneal, bodytrack, and streamcluster are multi-threaded applications selected from Parsec. DICT, BFS, setCover, and MST are selected from PBBS. Both BFS and MST solve graph problems. SetCover is a computational biology problem. DICT is a dictionary matching algorithm. Linpack is a traditional supercomputer benchmark performing numerical linear algebra. Graph500 is a supercomputer benchmark based on large-scale data-intensive graph analysis. NPB-CG measures irregular memory access and communication performance. GUPS measures the rate of random integer updates of memory. These workloads cover a wide range of memory access patterns, and their memory footprints are shown in Table~\ref{tab:hotpage}. All experiments are conducted on a simulated platform, as presented in Section~\ref{test-setup}. We have the following observations. 
\emph{Observation 1: For most applications, only a small portion of the 4 KB pages in each superpage are actually touched during each sampling interval.} Figure~\ref{fig:memory_footprint} shows the cumulative distribution function (CDF) of superpages as a function of the fraction of touched small pages within one superpage. For many applications, we find that almost 80\% of superpages have their memory accesses distributed over only a few small pages in a given interval. For cactusADM, the total number of touched small pages is even less than 100 across all superpages. This indicates that migrating a whole superpage often wastes DRAM bandwidth and CPU time, and makes inefficient use of the limited DRAM capacity; the cost may even exceed the benefit of the migration. \begin{figure}[tbp] \vspace{-0.0ex} \centering \includegraphics[width=0.9\columnwidth]{figure/touched_pages.pdf} \vspace{-0.5ex} \caption{The cumulative distribution function of superpages versus the number of touched 4 KB small pages within a superpage in a given interval} \label{fig:memory_footprint} \vspace{-1ex} \end{figure} \emph{Observation 2: Most applications' memory references are mainly distributed within a small portion of 4 KB hot pages.} Similar to CHOP~\cite{jiang2010chop}, hot pages are defined as the top $N$ pages ranked by access count whose accesses together constitute 70\% of the application's memory accesses. For each application, Table~\ref{tab:hotpage} shows the minimum access count of hot pages in the working set measured every $10^8$ cycles, and the application's total memory footprint. The hot page percentage is calculated as the total volume of hot small pages relative to the working set in the sampling interval. Given the small fraction of touched small pages in each superpage, the proportion of hot small pages is much smaller still for many applications, such as mcf, canneal, and bodytrack. 
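The CHOP-style hot-page rule just described (the smallest set of top-ranked pages whose accesses cover 70\% of all references) can be sketched as follows; this is an illustrative Python sketch, not the simulator's implementation, and the function name is ours:

```python
def hot_pages(access_counts, coverage=0.70):
    """Return the top-ranked pages whose accesses cover `coverage` of all references.

    access_counts: dict mapping page id -> access count in the sampling interval.
    """
    total = sum(access_counts.values())
    # Rank pages by access count, hottest first.
    ranked = sorted(access_counts.items(), key=lambda kv: kv[1], reverse=True)
    hot, covered = [], 0
    for page, count in ranked:
        if covered >= coverage * total:
            break
        hot.append(page)
        covered += count
    return hot
```

For instance, with counts \{a: 50, b: 30, c: 15, d: 5\}, pages a and b already cover 80\% of the 100 accesses, so only they are classified as hot.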
Table~\ref{tab:hotsmallpage} shows how the hot small pages are distributed among superpages. For many applications, we find that most superpages' memory references are distributed over fewer than 128 hot small pages. This is especially evident for data-intensive benchmarks: for GUPS and Graph500, 95.5\% and 61.48\% of superpages, respectively, are covered by fewer than 32 hot small pages. This implies that it is more beneficial and lightweight to migrate only the hot small pages rather than whole superpages from NVM to DRAM. \begin{table}[tbp] \scriptsize \centering \caption{Hot Page (4 KB) Access Statistics} \label{tab:hotpage} \vspace{-0ex} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{1.5cm}{Application} & \multicolumn{3}{c|}{\tabincell{c}{{Page access statistics} ($10^8$ cycles)}} & \multirow{2}{1.1cm}{{Total memory footprint}} \\ \cline{2-4} & \tabincell{c}{Hot page\\ min\# access} &\tabincell{c}{Working\\ set (MB)}& \tabincell{c}{Hot page\\percent} &\\ \hline cactusADM &64 & 74.6 MB & 4.71\% & 776 MB \\ mcf &30 & 1089 MB & 2.36\% & 1698 MB \\ soplex &51 & 70.9 MB & 19.63\% & 1888 MB \\ \hline canneal &2 & 891.6 MB & 8.52\%& 972 MB \\ bodytrack &19 & 16.2 MB & 1\% & 620 MB \\ streamcluster &10 & 105.5 MB & 27.60\% &150 MB \\ \hline DICT &53 & 20.3 MB& 37.20\% &384 MB \\ BFS &30 & 404.1 MB& 20.51\% &3718 MB \\ setCover &34 & 49.8 MB& 37.53\% &2520 MB \\ MST &35 & 121.2 MB& 32.42\% &6660 MB \\ \hline Graph500 &64 & 7.20 MB & 6.35\% & 27.4 GB \\ \hline Linpack &63 &40 MB & 21.19\% & 23.9 GB \\ \hline NPB-CG &64 &40.9 MB & 24.7\% & 22.9 GB \\ \hline GUPS &4 &7.6 GB & 5.8\% & 8.06 GB \\ \hline \end{tabular} \vspace{-1ex} \end{table} The above observations motivate us to design a new memory management mechanism that supports both superpages and fine-grained page migration for hybrid memory systems. 
\begin{table}[tbp] \scriptsize \centering \caption{Distribution of Hot 4 KB Pages within Superpages} \label{tab:hotsmallpage} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{1cm}{Application} & \multicolumn{6}{c|}{Percent of superpages covered by a number of hot 4 KB pages } \\ \cline{2-7} &1-32 &33-64 &65-128 &129-256 &257-384 &385-512 \\ \hline cactusADM &28.01\% &34.1\% &29.32\% &0.65\% &7.45\% &0.47\%\\ mcf &57.56\% &16.48\% &10.84\% &9.95\% &4.78\% &0.39\%\\ soplex &45.69\% &10.88\% &22.76\% &9.28\% &6.77\% &4.62\%\\ \hline canneal &62.18\% &15.86\% &8.9\% &11.57\% &0.91\% &0.58\% \\ bodytrack &83.19\% &6.01\% &7.66\% &2.18\% &0.63\% &0.33\% \\ streamcluster &23.77\% &30.55\% &14.38\% &13.71\% &17.5\% &0.09\%\\ \hline DICT &23.86\% &14.53\% &28.27\% &22.14\% &11.06\% &0.14\%\\ BFS &3.94\% &18.19\% &57.42\% &6.35\% &5.6\% &8.5\%\\ setCover &16.26\% &24.28\% &27.58\% &17.36\% &7.5\% &7.02\%\\ MST &13.44\% &21.28\% &21.77\% &25.8\% &16.31\% &1.4\%\\ \hline Graph500 &61.48\% &38.46\% &0.06\% &0\% &0\% &0\%\\ \hline Linpack &22.21\% &14.71\% &29.18\% &16.3\% &9.64\% &7.96\% \\ \hline NPB-CG &0.05\% &96.29\% &2.66\% &1.0\% &0\% &0\%\\ \hline GUPS &95.5\% &4.5\% &0\% &0\% &0\% &0\% \\ \hline \end{tabular} \end{table} \section{Design and Implementation}\label{sec:design-and-impl} In this section, we first give an overview of Rainbow, and then present the technical details of hot page identification, utility-based page migration, and split TLBs. Finally, we describe some other implementation issues, such as cache/TLB consistency guarantees. \subsection{Architecture Overview} Figure~\ref{fig:Rainbow_arch} depicts the architecture of our system. Rainbow allocates only 2 MB superpages in the NVM, and uses the DRAM to cache hot 4 KB NVM pages. Correspondingly, each processor uses two split TLBs to accelerate the address translation of superpages and small pages. 
In the memory controller, we design a lightweight page access monitoring mechanism to identify hot small pages in the NVM. We also use a migration bitmap to flag migrated small pages on a one-bit-per-page basis. \begin{figure}[tbp] \centering \includegraphics[width=0.95\columnwidth]{figure/architecture.pdf} \vspace{-1.0ex} \caption{Architecture of Rainbow} \label{fig:Rainbow_arch} \vspace{-1.0ex} \end{figure} In the operating system (OS), we develop three modules. The hot page identifier periodically reads the page access counts from the memory controller and identifies the hot small pages within the monitored superpages. The page migration controller exploits a utility-based scheme to migrate small pages when the migration benefit is expected to exceed the migration cost. The DRAM manager is responsible for page allocation and replacement. We modified the buddy allocator in the OS for DRAM memory allocation. As the DRAM capacity is often much larger than the on-chip cache, conventional LRU-based replacement policies can cause a significant performance penalty when implemented in the software layer. Like HSCC~\cite{Liu:2017}, we use three lists to manage the DRAM memory: a free list for unused pages, a clean list for unmodified pages, and a dirty list for dirty (modified) pages. Because dirty pages must be written back to the NVM (which is costly), Rainbow preferentially selects free and clean pages for DRAM replacement, and evicts dirty pages only when the free and clean lists are both empty. \vspace{-1ex} \subsection{Lightweight Hot Page Identification} \label{sec:Page_tracking} Memory access monitoring at the page granularity (4 KB) is costly when the memory size becomes very large. For example, if we use only 2 bytes to record the access count of a 4 KB page, 1 TB of memory requires a total of $\frac{1\,TB}{4\,KB}\times 2\,B = 512$ MB of storage. It is impractical to store those records in on-chip SRAM. 
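The storage argument above can be made concrete with a back-of-the-envelope sketch (2-byte counters as stated in the text; binary units assumed):

```python
KB = 1024
MB = 1024 ** 2
TB = 1024 ** 4

mem = 1 * TB
counter = 2  # bytes per access counter

# Naive per-4KB-page counting vs. counting at the 2 MB superpage granularity.
per_4k_page = mem // (4 * KB) * counter     # 512 MB: impractical for on-chip SRAM
per_superpage = mem // (2 * MB) * counter   # 1 MB: a 512x reduction

assert per_4k_page == 512 * MB
assert per_superpage == 1 * MB
```

This 512x gap is what motivates counting at superpage granularity first and descending to 4 KB granularity only for the few hot superpages.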
\begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=\columnwidth]{figure/counting.pdf} \vspace{-3.0ex} \caption{The two-stage memory access monitoring for small hot page identification in \textit{Rainbow}} \label{fig:interval} \vspace{-2.0ex} \end{figure} We propose a two-stage memory access monitoring mechanism to mitigate the storage overhead. As shown in Figure~\ref{fig:interval}, Rainbow divides the process of memory access monitoring into two phases. In Phase 1, Rainbow first counts NVM memory accesses at the superpage granularity (\textcircled{1}). We use two bytes to store the access count of each superpage in an interval of $10^8$ cycles. For each memory reference, the memory controller determines which superpage the physical address belongs to and updates the access count. Note that NVM write operations carry a higher weight in the counter value than NVM read operations. After a given time interval of superpage access counting, Rainbow then selects the top $N$ hot superpages and further monitors them at the granularity of 4 KB pages (\textcircled{2}). Even though application footprints may be very large, their working sets in a short interval are often much smaller, so the superpage sorting latency of a software approach is acceptable. In Phase 2, Rainbow monitors the small pages within those hot superpages and records their memory access counts (\textcircled{3}) in a small table. As shown in Figure~\ref{fig:small_page_monitoring}, we need 4 bytes to record the physical superpage number and 2 bytes to record the access count of each small page. Note that the access counter uses 15 bits to store the data value and 1 bit as an overflow flag; an overflow implies that the superpage is definitely hot. Thus, monitoring a hot superpage in this fine-grained manner requires a total of $4\,B + 512\times2\,B = 1028$ bytes of storage. 
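The packed counter format just described (a 15-bit value plus a sticky overflow bit, with writes weighted more heavily than reads) might be modeled as in the following sketch; the specific read/write weights are illustrative assumptions, not values from the paper:

```python
VALUE_MASK = (1 << 15) - 1    # low 15 bits hold the weighted access count
OVERFLOW_BIT = 1 << 15        # high bit marks a definitely-hot superpage

def bump(counter, is_write, write_weight=4, read_weight=1):
    """Increment a packed 16-bit access counter, saturating into the overflow bit."""
    value = counter & VALUE_MASK
    value += write_weight if is_write else read_weight
    if value > VALUE_MASK:
        # Saturate the value and set the sticky overflow flag.
        return OVERFLOW_BIT | VALUE_MASK
    return (counter & OVERFLOW_BIT) | value
```

Weighting writes more heavily reflects the NVM asymmetry discussed above: a page that is written often gains more from migrating to DRAM than one that is only read.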
Finally, Rainbow classifies the monitored small pages into hot and cold pages via threshold-based classification (\textcircled{4}). A page is deemed hot only when its migration benefit exceeds a given threshold (see Section~\ref{sec:multi_migration}). This history-based policy avoids monitoring cold superpages at the small-page granularity, and thus significantly reduces the overhead of hot page identification. \begin{figure}[tbp] \centering \includegraphics[width=0.7\columnwidth]{figure/small_page_counting.pdf} \vspace{-0ex} \caption{The data structure for small page access counting} \label{fig:small_page_monitoring} \vspace{-2ex} \end{figure} \subsection{Utility-based Hot Page Migration} \label{sec:multi_migration} Page migration from NVM to DRAM can improve memory access performance and energy efficiency. However, it also increases the access latency of the requested data during migration. Moreover, indiscriminate page migration can result in page thrashing between DRAM and NVM when the memory pressure in DRAM becomes high. We thus need to trade off the benefit and cost of page migrations. Table~\ref{tab:parameters} presents the parameters for evaluating the benefit of page migrations in a time interval ($10^8$ cycles in our experiments). When the DRAM has free pages to cache an NVM page, we check whether the benefit of the migration exceeds its cost. We assume the migrated page will be read $C_{r}$ times and written $C_{w}$ times in the next interval. The benefit of a page migration is calculated as the total cycles saved by accessing the data from DRAM instead of NVM, while the total cycles spent on the migration can be treated as a constant, as shown in Equation~\ref{equ:migration_benefit1}. 
\vspace{-0.5ex} \begin{equation} Benefit_{mig} = (t_{nr}-t_{dr})C_{r} + (t_{nw}-t_{dw})C_{w} -T_{mig} \label{equ:migration_benefit1} \end{equation} \begin{table}[tbp] \vspace{-0ex} \scriptsize \centering \caption{{\textnormal{Parameters for Evaluating Page Migrations}}} \vspace{-1ex} \begin{tabular}{|c|l|} \hline \textbf{Notations} & \textbf{ Descriptions} \\ \hline $C_{r}$, $C_{w}$ & total number of reads and writes on a page in a time interval\\ \hline $t_{nr}$, $t_{nw}$ & NVM read and write latencies\\ \hline $t_{dr}$, $t_{dw}$ & DRAM read and write latencies \\ \hline $T_{mig}$ & cycles spent in migrating a page from NVM to DRAM \\ \hline $T_{writeback}$ & cycles spent in writing a dirty DRAM page to NVM\\ \hline \end{tabular} \label{tab:parameters} \vspace{-4ex} \end{table} When the DRAM utilization becomes high, Rainbow may need to reclaim DRAM pages to hold newly migrated pages. Rainbow preferentially reclaims clean pages. However, if there are no clean pages, Rainbow needs to evict dirty pages to the NVM. This results in bidirectional page migrations and less migration benefit. Assuming a DRAM page $p1$ is evicted to hold a newly migrated NVM page $p2$, the total migration benefit should be offset by the cost of page swapping, as illustrated in Equation~\ref{equ:migration_benefit2}. To mitigate the cost of page swapping, we monitor the data traffic of bidirectional page migrations, and dynamically increase the migration benefit threshold so that only hotter small pages within each superpage are selected. \vspace{-1.0ex} \begin{equation} \begin{split} \Delta Benefit_{mig} & = (t_{nr}-t_{dr})(C_{r}^{p2}-C_{r}^{p1}) \\ &+(t_{nw}-t_{dw})(C_{w}^{p2}-C_{w}^{p1})\\ & -T_{mig}-T_{writeback} \end{split} \label{equ:migration_benefit2} \end{equation} \subsection{A Small Cache for Page Migration Bitmap} When a page is migrated to the DRAM, Rainbow sets the corresponding bit in the migration bitmap to indicate that the page now resides in the DRAM.
For each 2 MB superpage, we need a 512-bit bitmap to record the migration flags of all 4 KB small pages. The storage overhead ($\frac{1}{4096 \times 8}$ of the NVM capacity) is acceptable for a moderate-sized NVM device, and thus the migration bitmaps of all superpages can be placed in the memory controller (SRAM) for high performance. However, for large-capacity memory systems, it is impractical to store all migration bitmaps in SRAM. For example, 1 TB NVM leads to 32 MB of bitmap data. For better scalability, we design a small cache in the memory controller to store the migration bitmaps of recently-accessed superpages, while the complete migration bitmaps are still stored in main memory. The migration bitmap cache is implemented as an 8-way set-associative cache, as shown in Figure~\ref{fig:bitmap_cache}. The physical superpage number (PSN) is used to index the migration bitmap of a superpage, and bits 12 to 20 of the physical address are used to index the migration flag of a small page within the bitmap. Locating a migration flag requires only a few bit-shift operations. Due to space constraints in the memory controller, Rainbow uses only 4000 entries to cache the page migration flags of 8 GB of memory in total. Each cache entry requires 4 bytes for the PSN and 512 bits for the migration bitmap. The total storage overhead is only 272 KB of SRAM. Generally, the migration bitmap cache is filled upon a superpage TLB miss. As the number of migration bitmap cache entries is much larger than the number of superpage TLB entries in Rainbow, the hit rate of the migration bitmap cache is also higher than that of the superpage TLBs. We further evaluate the timing parameters of the bitmap cache using CACTI 3.0~\cite{CACTI-3.0}. The cache adds only 9 cycles of latency (similar to the L2 cache latency) before accessing the NVM, which is an order of magnitude lower than the inherent NVM access latencies.
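The flag lookup can be illustrated with a small functional model (a sketch only: a dictionary stands in for the 8-way set-associative SRAM structure, and the helper names are ours):

```python
SUPERPAGE_SHIFT = 21      # 2 MB superpages
SMALL_PAGE_SHIFT = 12     # 4 KB small pages
SMALL_PAGE_MASK = 0x1FF   # bits 12..20 select one of the 512 small pages

def psn(phys_addr):
    """Physical superpage number of a byte address."""
    return phys_addr >> SUPERPAGE_SHIFT

def small_page_index(phys_addr):
    """Index of the 4 KB small page within its 2 MB superpage."""
    return (phys_addr >> SMALL_PAGE_SHIFT) & SMALL_PAGE_MASK

def is_migrated(bitmap_cache, phys_addr):
    """bitmap_cache maps a PSN to a 512-bit integer holding the migration bitmap."""
    bitmap = bitmap_cache.get(psn(phys_addr), 0)
    return (bitmap >> small_page_index(phys_addr)) & 1 == 1

# Storage check: 4000 entries x (4 B PSN tag + 64 B bitmap) = 272,000 B = 272 KB.
assert 4000 * (4 + 512 // 8) == 272_000
```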
\begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=0.7\columnwidth]{figure/migration_bitmap.pdf} \vspace{-0.5ex} \caption{The migration bitmap cache in \textit{Rainbow}} \label{fig:bitmap_cache} \vspace{-2ex} \end{figure} \subsection{Split TLBs and Address Remapping}\label{sec:addressing} Once a page has been migrated to the DRAM, Rainbow stores the destination address (the DRAM page number) in the page's original place. More specifically, Rainbow overwrites the first 8 bytes of the page with its new physical address, which points to a DRAM page. Meanwhile, we set the corresponding flag in the migration bitmap. When the DRAM page is evicted, if its data has not been modified (clean), we only need to write back the first 8 bytes of the page. \begin{figure}[tbp] \vspace{-0.0ex} \centering \includegraphics[width=\columnwidth]{figure/addressing.pdf} \vspace{-3.5ex} \caption{Four cases of memory addressing in \textit{Rainbow}} \label{fig:addressing} \vspace{-3ex} \end{figure} Rainbow leverages split TLBs cooperatively to accelerate virtual-to-physical address translation for both DRAM and NVM. Upon a memory reference, the split TLBs are consulted in parallel, as shown in Figure~\ref{fig:addressing}. Generally, there are four cases: (1) 4 KB page TLB hit and superpage TLB hit; (2) 4 KB page TLB hit and superpage TLB miss; (3) 4 KB page TLB miss and superpage TLB hit; (4) 4 KB page TLB miss and superpage TLB miss. For the first and second cases, Rainbow chooses the physical address returned by the 4 KB page TLB (path \textcircled{1} in Figure~\ref{fig:addressing}). These two cases imply that the accessed data has been cached in the DRAM, and the stale data in the NVM is invalid. For the third case, Rainbow needs to check the migration bitmap with the returned physical address.
If the corresponding migration bit is set, meaning that the small page within the superpage has been cached in the DRAM, Rainbow reads the first 8 bytes of this page in the NVM to obtain the destination physical address, which points to a page in the DRAM (\textcircled{2} in Figure~\ref{fig:addressing}). Otherwise (the small page has not been migrated), Rainbow uses the physical address translated by the superpage TLB (\textcircled{3} in Figure~\ref{fig:addressing}). Finally, Rainbow sends the translated physical address to the on-chip cache or to main memory (upon LLC misses) to access the requested data. In the fourth case, Rainbow performs a hardware page table walk for the superpage address translation (\textcircled{4} in Figure~\ref{fig:addressing}). When the page tables return the physical address, Rainbow also needs to check the migration bitmap, and the following operations are the same as in the third case. As illustrated in Figure~\ref{fig:addressing}, although hot pages are migrated between DRAM and NVM, Rainbow does not need to splinter the NVM superpages or the corresponding TLB entries. Any memory reference to a migrated hot page is redirected to the DRAM through only one access to the superpage. This address remapping mechanism guarantees the transparency of hot page migration from the applications' point of view. In the following, we analytically compare the cost of DRAM page addressing in Rainbow with the traditional page table walking mechanism~\cite{Yaniv:2016:HDC:2896377.2901456}. Upon a 4 KB page TLB miss, a page table walk results in four memory references to the four-level page tables, and thus the address translation overhead is $4\times t_{dr}$. For Rainbow, we need to read the DRAM page's physical address from the corresponding superpage in NVM. Since superpages use only three-level page tables, there are three memory references to the page tables and one memory reference for reading the DRAM page address.
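The four lookup cases above can be condensed into a decision procedure (a behavioral sketch with hypothetical helpers: the TLBs are modeled as dictionaries, `migrated` stands for the bitmap check, and `read_redirect` for reading the first 8 bytes of the NVM page):

```python
SMALL_PAGES_PER_SUPERPAGE = 512   # 2 MB / 4 KB

def translate(vpage, small_tlb, super_tlb, migrated, walk_superpage_tables, read_redirect):
    """Return the physical 4 KB page to access for virtual page number vpage."""
    # Cases 1 and 2: a 4 KB page TLB hit means the data is already in DRAM (path 1).
    if vpage in small_tlb:
        return small_tlb[vpage]
    vsuper, offset = divmod(vpage, SMALL_PAGES_PER_SUPERPAGE)
    if vsuper in super_tlb:
        # Case 3: superpage TLB hit.
        phys = super_tlb[vsuper] * SMALL_PAGES_PER_SUPERPAGE + offset
    else:
        # Case 4: superpage table walk (path 4), then proceed as in case 3.
        phys = walk_superpage_tables(vsuper) * SMALL_PAGES_PER_SUPERPAGE + offset
    if migrated(phys):
        # The small page was migrated: follow the redirect into DRAM (path 2).
        return read_redirect(phys)
    return phys                   # not migrated: access the NVM page directly (path 3)
```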
Assuming the hit rate of the superpage TLBs is $R_{hit}$, the DRAM page addressing overhead becomes $R_{hit} \times t_{nr} +(1-R_{hit}) \times 4 t_{nr}$. Because $t_{nr}$ is almost twice $t_{dr}$, we deduce that Rainbow incurs lower DRAM page addressing overhead than the page table walking mechanism when $R_{hit}>67\%$, and reduces the DRAM page addressing overhead by 42.5\% when $R_{hit}=95\%$. Since the hit rate of superpage TLBs is rather high for many applications (over 99\% in our experiments), Rainbow is able to significantly reduce the overhead of DRAM page addressing. Because Rainbow fully utilizes the capacity of the split TLBs, it essentially enables the superpage TLB to act as another, larger cache for the 4 KB page TLBs. \subsection{Data Consistency} \textbf{Data Consistency between DRAM and NVM}. As mentioned before, hot data has two replicas, one in DRAM and one resident in NVM. Correspondingly, a virtual address may be present in both the superpage TLBs and the 4 KB page TLBs. To guarantee data consistency, we use a migration bitmap in the memory controller to mark the migrated hot pages. For each memory reference to the NVM, the migration flag is checked first to make sure that Rainbow always accesses the copy cached in the DRAM. \textbf{Cache Consistency.} Some processors employ write-back caches, in which write operations are directed to the cache and completion is immediately confirmed to the host CPU. Dirty data blocks are written to main memory only at specified intervals or upon cache evictions. This mechanism often brings higher performance but may cause inconsistency problems. Because a page may be referenced by a set of cache lines in the on-chip caches, a page migration may copy stale data to another place, leaving a portion of the migrated data inconsistent with the replica in the on-chip caches. Rainbow utilizes \textit{clflush} instructions to address this problem.
To be more specific, the \textit{clflush} instruction invalidates all cache lines associated with the migrated page from all levels of the processor's cache hierarchy. The invalidation is broadcast throughout the cache coherence domain. If a cache line contains modified (dirty) data at any level of the cache hierarchy, the cache line is written back to main memory before invalidation. In this way, when a page is migrated, the corresponding dirty cache lines are written to main memory, and clean cache lines are invalidated. \textbf{TLB Consistency.} Similar to the cache consistency issue, page migration may also cause TLB inconsistency problems~\cite{Liu:2017} because a page may be referenced by multiple cores' TLB entries. A simple solution is to adopt the TLB shootdown mechanism~\cite{Black:1989:TLB:68182.68193, Liu:2017}. That is, when a processor's TLB changes an address mapping, the same TLB entries in other cores should be invalidated. However, in Rainbow, an NVM-to-DRAM page migration does not lead to a TLB inconsistency problem. As mentioned before, our address remapping mechanism logically guarantees the contiguity of a superpage, and thus a hot page migration does not need to be perceived by the superpage TLB. Also, because a migrated page in DRAM is not necessarily accessed in the immediate future, the 4 KB page TLB entry for the DRAM page is created on its first access. When a DRAM page is written back to the NVM, we adopt the TLB shootdown mechanism~\cite{Black:1989:TLB:68182.68193} to invalidate all cores' 4 KB TLB entries corresponding to the DRAM page. \section{Evaluation}\label{sec:evaluation} \subsection{Experimental Methodology}\label{test-setup} We implement Rainbow in an integrated simulator based on zsim~\cite{Sanchez:2013} and NVMain~\cite{NVMain}. Zsim is a fast x86-64 multi-core simulator built on Pin~\cite{Luk:2005}. We extend zsim to support many OS-level functions, such as a buddy allocator, page tables, and TLB management operations.
NVMain is a cycle-accurate memory simulator that can model both DRAM and NVM in detail. In our experiments, NVMain is used to simulate the hybrid main memory composed of DRAM and NVM, each with an individual memory controller. \textbf{Configuration.} The platform and the detailed configuration used in our experiments are shown in Table~\ref{tab:testbed}. PCM is chosen as the storage medium of main memory as it is a widely studied NVM. The timing and energy parameters of PCM are taken from~\cite{Lee:2009:APC:1555754.1555758,Liu:2017}. We also model the latencies of clflush, TLB shootdown, and data movement in detail according to the timing parameters of the CPU and DRAM/NVM. \begin{table}[tbp] \vspace{-0.5ex} \scriptsize \centering \caption{{\textnormal{System Configuration in Rainbow}}} \vspace{-2ex} \setlength{\tabcolsep}{0.9mm}{ \begin{tabular}{|l|l|} \hline CPU & 8 cores, 3.2 GHz, out-of-order \\ \hline L1 TLB & \tabincell{l}{32 Data TLB entries for 2 MB superpages, and 32 Data TLB entries\\ for 4 KB small pages per core, 4-way, 1-cycle latency} \\ \hline L2 TLB & \tabincell{l}{512 unified TLB entries for 2 MB superpages, and 512 unified TLB\\ entries for 4 KB small pages, 8-way, 8-cycle latency} \\ \hline L1 Cache & \tabincell{l}{private 64 KB per core, 4-way, split D/I, 3-cycle latency} \\ \hline L2 Cache & \tabincell{l}{private 256 KB per core, 8-way, 10-cycle latency} \\ \hline L3 Cache & \tabincell{l}{shared 8 MB, 16-way, 34-cycle latency} \\ \hline Bitmap cache & \tabincell{l}{272 KB, 8-way, 9-cycle latency} \\ \hline \tabincell{c}{DRAM} & \tabincell{l}{4 GB: 1 channel, 4 rank, 32 banks, 32768 rows, 64 cols,\\ Bandwidth: 10.7 GB/Sec, FR-FCFS request scheduling, \\ Timing (tCAS-tRCD-tRP-tRAS) : 7-7-7-18 (cycles),\\ 13.5 ns read latency, 28.5 ns write latency }\\ \hline \tabincell{c}{PCM} & \tabincell{l}{32 GB: 4 channels, 8 ranks, 8 banks per rank, 65536 rows, 32 cols,\\ Bandwidth: 10.7 GB/Sec, FR-FCFS request scheduling, \\ Timing (tCAS-tRCD-tRP-tRAS):
9-37-100-53 (cycles), \\19.5 ns read latency, 171 ns write latency \\ }\\ \hline \multicolumn{2}{|c|}{\textbf{Power/Energy consumption}}\\ \hline DRAM & \tabincell{l}{Voltage: 1.5V, Standby: 77 mA, Refresh: 160 mA, Precharge: 37 mA;\\ Read and write on row buffer hit: 120 and 125 mA; \\ Read and write on row buffer miss: 237 and 242 mA }\\ \hline PCM & \tabincell{l}{Read/write on row buffer hit: 1.616 pJ/bit; \\ Read and write on row buffer miss: 81.2 pJ/bit and 1684.8 pJ/bit}\\ \hline \end{tabular}} \label{tab:testbed} \vspace{-1.5ex} \end{table} \textbf{Alternative policies.} We compare Rainbow with several alternative page migration policies for hybrid memories, as follows. (1) \textbf{Flat-static}: 4 GB DRAM and 32 GB NVM are organized in a flat address space~\cite{Liu:2017}, and are managed in 4 KB small pages. Data is evenly distributed across DRAM and NVM according to the ratio of DRAM to NVM capacity (1:8). There is no page migration between NVM and DRAM. We use this system as the baseline for comparison. (2) \textbf{HSCC-4KB-mig}: HSCC is a state-of-the-art hybrid memory system that supports utility-based page migration at the granularity of 4 KB pages~\cite{Liu:2017}. The major difference between Rainbow and HSCC is the support for superpages. This comparison is made to evaluate the effectiveness of using superpages in hybrid memory systems. (3) \textbf{HSCC-2MB-mig}: we modify HSCC to support superpages and memory migration at the 2 MB superpage granularity. This comparison is made to evaluate the performance and energy penalties of superpage migrations. (4) \textbf{DRAM-only}: this is a system with only 32 GB DRAM, supporting only 2 MB superpages. We use it as the applications' performance upper bound because they can benefit from superpages without any page migrations.
\textbf{Benchmarks.} We evaluate a number of workloads with different memory access patterns from SPEC CPU 2006~\cite{Spec_CPU2006}, Parsec~\cite{Parsec}, the Problem Based Benchmark Suite (PBBS)~\cite{PBBS}, Graph500~\cite{Graph500}, Linpack~\cite{Linpack}, NPB-CG~\cite{CG}, and GUPS~\cite{GUPS}, as listed in Table~\ref{tab:workload}. Detailed memory access patterns of these applications are shown in Table~\ref{tab:hotpage}. In addition, we evaluate three multi-programmed workloads, as shown in Table~\ref{tab:workload}. \begin{table}[tbp] \vspace{-0.0ex} \centering \caption{{\textnormal{Workloads for Evaluation}}} \vspace{-1ex} \setlength{\tabcolsep}{1mm}{ \begin{tabular}{|c|c|} \hline \textbf{Workloads} & \textbf{Applications} \\ \hline SPEC CPU2006 & cactusADM, mcf, soplex \\ \hline Parsec & canneal, bodytrack, streamcluster\\ \hline PBBS & \tabincell{l} {DICT, BFS, setCover, MST} \\ \hline {Large footprints} & {Graph500, Linpack, NPB-CG, GUPS} \\ \hline {mix1} & {cactusADM+soplex+setCover+MST}\\ {mix2} & {setCover+BFS+DICT+mcf}\\ {mix3} & {canneal+DICT+MST+soplex}\\ \hline \end{tabular}} \label{tab:workload} \vspace{-3ex} \end{table} \subsection{Address Translation Overhead}\label{sec:results} \begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=1\columnwidth]{figure/MPKI.pdf} \vspace{-3ex} \caption{MPKI of applications} \label{fig:TLB_perf} \vspace{-2ex} \end{figure} Figure~\ref{fig:TLB_perf} shows that superpages significantly reduce TLB misses per kilo-instructions (MPKI), by several orders of magnitude on average. Although Rainbow supports different page sizes, it shows TLB performance similar to HSCC-2MB-mig and DRAM-only (2 MB). The reason is that Rainbow logically uses the superpage TLBs as a larger next-level cache for the 4 KB page TLBs. For applications with small memory footprints, such as bodytrack and streamcluster, the MPKI is significantly reduced because of the wider TLB coverage (1 GB) offered by the superpages.
As shown in Table~\ref{tab:hotpage}, GUPS and \textit{canneal} are memory-intensive benchmarks and show very large working sets in a short sampling interval. As a result, GUPS and \textit{canneal} show relatively high MPKI even when using superpages. Mix2 shows both a large working set and a large memory footprint, leading to a large amount of page swapping between DRAM and PCM in HSCC-2MB-mig. Thus, HSCC-2MB-mig incurs many TLB shootdown operations, causing a relatively high MPKI. In contrast, Rainbow does not cause TLB shootdowns when migrating hot small pages within superpages to DRAM. Figure~\ref{fig:addr_trans_overhead} shows the percentage of execution cycles spent on servicing TLB misses for different applications. When memory is managed in 4 KB small pages, the TLB miss overhead approaches 60\% of total execution cycles for soplex, Graph500, and mix2. For mcf, canneal, GUPS, and mix3, since their working sets approach or exceed the superpage TLB coverage, they cause relatively high address translation overheads even when using superpages. Overall, superpages reduce TLB miss overhead by 99.8\% on average. \begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=1\columnwidth]{figure/addr_trans_overhead.pdf} \vspace{-3ex} \caption{Percent of total cycles spent on servicing TLB misses. The very small values show TLB miss overheads in Rainbow.} \label{fig:addr_trans_overhead} \vspace{-2ex} \end{figure} \begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=1\columnwidth]{figure/rainbow_addressing_overhead.pdf} \vspace{-3ex} \caption{The breakdown of detailed address translation overheads in Rainbow.} \label{fig:addr_trans_overhead_rainbow} \vspace{-2.5ex} \end{figure} We further study the detailed address translation overheads in Rainbow.
Figure~\ref{fig:addr_trans_overhead_rainbow} shows the breakdown of execution cycles spent on split TLB hits, bitmap cache hits/misses, superpage table walks (SPTWs), and address remapping. Overall, address translation causes only 12\% performance overhead on average. Since the split TLBs are on the critical path of address translation, they introduce 78.5\% of the total address translation overhead although the TLB latencies are very short. The bitmap cache accounts for nearly 20\% of the total address translation overhead because it must be consulted for each access to the NVM. For addressing DRAM pages when the corresponding 4 KB page TLB misses, the address remapping mechanism contributes 1.4\% of the total address translation overhead on average. A bitmap cache miss can result in relatively high latency; however, we observe negligible bitmap cache misses even for applications with very large footprints, such as Graph500, NPB-CG, and Linpack. As the superpage TLB hit rate exceeds 99.9\% on average, the average cost of superpage table walks is as low as 0.1\%. We observe notable SPTW cost only for \textit{canneal} and GUPS, because their large working sets lead to relatively lower superpage TLB hit rates. \subsection{Application Performance}\label{sec:IPC} Figure~\ref{fig:ipc} shows the \textit{instructions per cycle} (IPC) of each application normalized to the baseline system (Flat-static). Rainbow achieves 72.7\%, 22.8\%, and 17.3\% performance improvement on average over Flat-static, HSCC-4KB-mig, and HSCC-2MB-mig, respectively. The performance gap between Rainbow and the upper bound (DRAM-only) is only 14.0\% on average. Compared to HSCC-4KB-mig, Rainbow significantly improves the IPC of \textit{mcf}, \textit{soplex}, and \textit{Graph500} by 2.1X, 1.2X, and 2.9X, respectively. For \textit{mcf}, since superpages reduce the MPKI by approximately 99\%, they deliver a 2.1X performance improvement over HSCC-4KB-mig.
This suggests that superpages can significantly reduce address translation overheads for memory-intensive applications. \textit{Soplex}, \textit{SetCover}, GUPS, and \textit{Graph500} all show rather poor data locality. Nevertheless, they also show significant performance improvement over the systems without superpage support. This implies that applications with poor data locality can also benefit from superpages because of the improved TLB coverage. We also find that HSCC-2MB-mig may result in lower application performance than HSCC-4KB-mig, for example for \textit{cactusADM, streamcluster, DICT, setCover, NPB-CG} and \textit{MST}. This implies that page migrations at the superpage granularity are extremely costly. The benefit of using superpages is significantly offset by the cost of superpage migrations. In contrast, Rainbow exploits the advantages of both superpages and lightweight page migrations, and thus achieves much better application performance. The DRAM-only system with 2 MB superpage support shows the best performance among all policies because it takes full advantage of superpages without any page migrations. We also note that this is not a completely fair comparison, since DRAM-only uses more DRAM. \begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=1\columnwidth]{figure/IPC.pdf} \vspace{-3ex} \caption{Normalized IPC relative to the Flat-static system} \label{fig:ipc} \vspace{-2.5ex} \end{figure} \textbf{Insight}. \textit{(1) For applications with intensive memory accesses or poor data locality, Rainbow can significantly improve application performance by up to 2.9X. (2) For other applications, the cost of superpage migrations can offset the advantages of superpages. Using the proposed lightweight page migration scheme without splintering superpages, Rainbow can still improve application performance by 37.4\%.
} \subsection{Page Migration Traffic} Figure~\ref{fig:migrate_data} shows the ratio of migration traffic to total memory footprint for each application. Generally, HSCC-2MB-mig shows larger migration traffic than the other migration policies because of the large granularity of its page migrations. As a result, superpage migrations waste memory bandwidth on copying the cold data within superpages. Rainbow reduces page migration traffic by 50\% on average compared to HSCC-2MB-mig. For memory-intensive applications, such as soplex, canneal, and Graph500, HSCC-4KB-mig shows more migration traffic than Rainbow because the page access counting scheme in HSCC is implemented in the TLB and does not filter out the memory references served by on-chip caches, and thus more pages are migrated to the DRAM. For MST, GUPS, and Linpack, because their memory footprints are larger than the capacity of DRAM (4 GB), HSCC-2MB-mig leads to a large amount of page swapping between DRAM and PCM. Thus, the migration traffic is even larger than their total memory footprints. In contrast, Rainbow selects only the hot small pages for migration, and thus significantly mitigates frequent page swapping. We also observe that page migrations consume at most 1.35\% of total memory bandwidth. Thus, page migrations lead to negligible memory bandwidth contention with these applications.
\begin{figure}[tbp] \vspace{-0ex} \centering \includegraphics[width=1\columnwidth]{figure/migration_data.pdf} \vspace{-3.0ex} \caption{Normalized page migration traffic relative to the applications' total memory footprints} \label{fig:migrate_data} \vspace{-1.0ex} \end{figure} \subsection{Energy Consumption} \begin{figure}[tbp] \centering \includegraphics[width=1\columnwidth]{figure/energy.pdf} \vspace{-3.0ex} \caption{Normalized energy consumption relative to the baseline system} \label{fig:energy} \vspace{-3ex} \end{figure} DRAM consumes a large amount of energy due to periodic refreshing, while NVM incurs near-zero static energy consumption. To evaluate the energy efficiency of Rainbow, we compare the energy consumption of the migration schemes using Flat-static as the baseline, as shown in Figure~\ref{fig:energy}. Generally, the DRAM-only system consumes much more energy than the hybrid memory systems on average. Rainbow reduces energy consumption by 45.1\% and 68.5\% on average compared to Flat-static and DRAM-only (2 MB), respectively. Although Flat-static does not introduce additional energy consumption due to page migrations, it causes more energy consumption than HSCC and Rainbow. The reason is that a large number of memory references are served by the PCM, resulting in higher active energy consumption, since write operations on PCM consume 20 times more energy per bit than on DRAM~\cite{Lee:2009:APC:1555754.1555758}. This phenomenon is clearer for \textit{mcf}, which shows that the misuse of hybrid memories can even cause higher energy consumption than the DRAM-only system. In contrast, Rainbow and HSCC migrate hot pages to the DRAM, which services a large portion of memory accesses with higher energy efficiency. \subsection{Sensitivity Studies} To study how application performance in Rainbow is affected by the time interval for hot page monitoring, we run selected applications with different sampling intervals.
Note that we increase the interval and the number of monitored top $N$ hot superpages by the same factor (10). Figure~\ref{fig:subfig:sensitive_interval_migrate_data} and Figure~\ref{fig:subfig:sensitive_interval_ipc} show how the normalized migration traffic and application IPC vary with the sampling interval, respectively. All experimental results are normalized to the setting of $10^5$ cycles. Generally, a longer sampling interval causes less software overhead for hot page identification. However, we find that fewer hot pages are migrated to the DRAM when the sampling interval exceeds $10^8$ cycles. Correspondingly, the applications' IPC also decreases as the sampling interval grows. As a result, we set the sampling interval to $10^8$ cycles in Rainbow for better performance. \begin{figure} \centering \vspace{-0ex} \subfigure{ \label{fig:subfig:sensitive_interval_migrate_data} \includegraphics[width=1.56in]{figure/sensitive_migrate_data_subfigure.pdf} } \subfigure{ \label{fig:subfig:sensitive_interval_ipc} \includegraphics[width=1.56in]{figure/sensitive_ipc_subfigure.pdf} } \vspace{-2.5ex} \caption{ Migration traffic and IPC vary with the sampling intervals in Rainbow} \vspace{-2ex} \end{figure} To evaluate how the number of selected top \textit{N} hot superpages affects the page migration traffic and application performance in a given time interval (10$^8$ cycles), we run some memory-intensive applications with different settings of \textit{N}. Figure~\ref{fig:subfig:sensitive_topn_migrate_data} shows that there is only trivial growth of migration traffic for those applications when the number of selected hot superpages exceeds 50. Figure~\ref{fig:subfig:sensitive_topn_ipc} also shows that those applications' IPC becomes stable when the value of $N$ is larger than 50. This suggests that the majority of applications' hot small pages are distributed over only a few hot superpages. As a result, we conservatively set $N$ to 100 in Rainbow.
We argue that the top 100 hot superpages are sufficient for hot small page identification, because many applications' working sets are much smaller than 200 MB in each sampling interval. \begin{figure} \centering \vspace{-0ex} \subfigure{ \label{fig:subfig:sensitive_topn_migrate_data} \includegraphics[width=1.56in]{figure/TopN_migration_traffic.pdf} } \subfigure{ \label{fig:subfig:sensitive_topn_ipc} \includegraphics[width=1.56in]{figure/TopN_IPC.pdf} } \vspace{-1.0ex} \caption{Migration traffic and IPC vary with the number of top \textit{N} hot superpages in Rainbow} \vspace{-1ex} \end{figure} We have also studied the sensitivity of other settings in Rainbow. Due to space limitations, we only describe the results here. The first setting is the threshold for hot page identification. We find that fewer hot pages are migrated to DRAM as the threshold increases. Correspondingly, the applications' IPC also becomes lower. We have also studied the impact of different NVM access latencies on the effectiveness of page migration. When the NVM read/write latencies increase linearly, slightly more pages are migrated to DRAM because the migration benefit increases according to Equation~\ref{equ:migration_benefit1} and Equation~\ref{equ:migration_benefit2}. However, the applications' performance degrades because the large portion of cold data remaining on the NVM introduces higher accumulated access delay.
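The sensitivity to the threshold and to NVM latencies follows directly from the utility model of Equations~\ref{equ:migration_benefit1} and~\ref{equ:migration_benefit2}, which can be sketched as follows (the default cycle latencies roughly correspond to the timing parameters of Table~\ref{tab:testbed} at 3.2 GHz, and the constant $T_{mig}$ and $T_{writeback}$ values are illustrative assumptions):

```python
def migration_benefit(c_r, c_w, t_nr=62, t_nw=547, t_dr=43, t_dw=91, t_mig=2000):
    """Equation 1: cycles saved by serving a page from DRAM instead of NVM,
    minus the (constant) cost of migrating it."""
    return (t_nr - t_dr) * c_r + (t_nw - t_dw) * c_w - t_mig

def swap_benefit(p2_r, p2_w, p1_r, p1_w, t_nr=62, t_nw=547, t_dr=43, t_dw=91,
                 t_mig=2000, t_writeback=2000):
    """Equation 2: net benefit when migrating NVM page p2 into DRAM requires
    evicting a dirty DRAM page p1 back to NVM."""
    return ((t_nr - t_dr) * (p2_r - p1_r)
            + (t_nw - t_dw) * (p2_w - p1_w)
            - t_mig - t_writeback)

def should_migrate(c_r, c_w, threshold=0):
    """Threshold-based classification: migrate only if the benefit exceeds the
    (dynamically adjustable) threshold."""
    return migration_benefit(c_r, c_w) > threshold
```

Raising the NVM latencies $t_{nr}$ and $t_{nw}$ inflates both benefit terms, so more pages cross the threshold, matching the observed increase in migrated pages.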
\subsection{Storage and Runtime Overhead}\label{sec:overheads} \begin{table}[tbp] \vspace{-0ex} \scriptsize \centering \caption{{\textnormal{Storage Overhead of Rainbow with 1 TB PCM}}} \vspace{-1.5ex} \begin{tabular}{|c|c|} \hline \textbf{Data Structure} & \textbf{Storage overhead} \\ \hline \tabincell{l} {Migration bitmap (1 bit per 4 KB-sized page )} & \tabincell{l} {\textcolor{black}{272 KB} SRAM }\\ \hline \tabincell{l} {Access counters for superpages in PCM (2 Byte per 2 MB )}& 1 MB SRAM \\ \hline \tabincell{l} {Physical Superpage number (PSN) of top $N$ hot superpages \\(4 Byte per superpage )} & $4N$ Bytes SRAM\\ \hline \tabincell{l} {Access counters for split small pages in the top $ N$ hot \\ superpages ($2B \times 512=1 KB$ per hot superpage) }& { $N$ KB} SRAM \\ \hline \hline Total storage overhead (if $N=100$) & \tabincell{l}{ \textcolor{black}{1.372 MB SRAM} }\\ \hline \end{tabular} \label{tab:storageoverhead} \vspace{-2.5ex} \end{table} We analyze the storage overhead of Rainbow in a hybrid memory system comprising 1 TB of PCM. \textcolor{black}{The storage overheads mainly come from migration bitmaps and superpage access counters.} We list all costs in Table~\ref{tab:storageoverhead}. \textcolor{black}{For the migration bitmaps, 1 TB PCM needs $\frac{1 TB}{4 KB\times 8} = 32 MB$ of storage in total for all superpages' migration bitmaps. We place the whole bitmaps in main memory, and cache only a portion of recently-accessed ones} \textcolor{black} {(272 KB)} in the memory controller. We use 2 bytes to record each superpage's and small page's access count. There are 512K superpages in total for 1 TB PCM, consuming 1 MB of SRAM for the superpage access counters. As described in Section~\ref{sec:motivation}, although many applications may have a very large memory footprint during execution, they show relatively small working sets in the sampling time interval ($10^8$ cycles).
As a result, we only select the top 100 hot superpages for fine-grained page access counting in the second stage, which requires only $1.004\times100=100.4$ KB of additional storage. Overall, Rainbow incurs only 1.372 MB of SRAM storage overhead for a big-memory system with 1 TB PCM, and the hardware die area overhead modeled by CACTI~\cite{CACTI-3.0} is only 7\%. \begin{figure}[tbp] \centering \includegraphics[width=1\columnwidth]{figure/overhead.pdf} \vspace{-3.0ex} \caption{Breakdown of runtime overhead in Rainbow} \label{fig:runtimeoverhead} \vspace{-3ex} \end{figure} Figure~\ref{fig:runtimeoverhead} shows the breakdown of performance overhead due to the address remapping mechanism, bitmap cache, page migration, TLB shootdown, and clflush. We model all these operations in our simulator by adding reasonable latencies accordingly. We find that the applications show significantly different performance overheads at runtime. For soplex, mix2, and mix3, DRAM page addressing accounts for the majority of the runtime overhead, implying that these workloads show relatively high miss rates in the 4 KB page TLBs. DICT, BFS, setCover, MST, and Graph500 all spend a relatively large portion of time accessing the bitmap cache, suggesting that many memory accesses are directed to NVM due to the poor data locality of these workloads. MST, Linpack, and NPB-CG show very large memory footprints, and thus a large fraction of their execution time is spent on page migrations. Overall, the runtime performance overhead of Rainbow is 9.8\% on average. However, it is offset by the significant benefit of using superpages and lightweight page migrations. \section{Related Work} \label{relatedwork} We summarize the related work in the following categories. \textbf{Superpages and TLBs}.
There have been many studies on mitigating the performance overhead of virtual-to-physical address translations~\cite{basu2013efficient,pham2012colt,Bhattacharjee2010Inter-core,bhattacharjee2011shared, srikantaiah2010synergistic}. Due to energy and latency constraints on TLB designs, a majority of studies have focused on superpages for improving TLB coverage. Talluri \textit{et al.} have discussed the challenges and tradeoffs of supporting superpages in hardware~\cite{Talluri1992tradeoffs}. Libhugetlbfs~\cite{libhugetlbfs} and Ingens~\cite{Kwon:2016} are OS-level supports for hugepage management. TLB coalescing~\cite{pham2012colt, pham2014increasing} and MMU cache coalescing~\cite{Bhattacharjee:2013:LMM:2540708.2540741} have been proposed to increase the coverage of the TLB and the MMU cache. GLUE~\cite{Pham:2015} groups contiguous, aligned small page translations under a single speculative huge page translation in the TLB. Redundant memory mappings (RMM)~\cite{Karakostas2015RMM} extends TLB coverage by mapping ranges of virtually and physically contiguous pages in a range TLB. Many studies have focused on improving the availability of superpages. Navarro \textit{et al.} propose reservation-based allocation and deferred promotion~\cite{Navarro2002practical} to support superpages in the OS layer. Gorman \textit{et al.}~\cite{gorman2008supporting} propose a physical page allocator to mitigate memory fragmentation and promote page contiguity. Zhang \textit{et al.} propose Enigma to map superpages to discontinuous physical pages using an intermediate address (IA) space~\cite{zhang2010enigma}. GTSM~\cite{du2015supporting} leverages contiguity of physical memory extents to construct superpages even when pages have been retired due to bit errors. MIX TLB~\cite{Cox:2017} exploits superpage allocation patterns to concurrently support multiple page sizes. Those studies are orthogonal to our work as the design space is different.
\textit{Rainbow} mainly aims to address the thorny problem of enabling lightweight page migration in a superpage-supported hybrid memory system. \textbf{Page Migration in Hybrid Memory Systems}. There have been a number of studies on page migration for hybrid memory systems~\cite{dhiman2009pdram, ramos2011page,Liu:2017,DI-MMAP}. Both PDRAM~\cite{dhiman2009pdram} and CLOCK-DWF~\cite{lee2014clock} migrate frequently-written NVM pages to DRAM, while keeping read-intensive pages in the NVM. AIMR~\cite{AIMR2015} exploits data write ``recency'' and ``frequency'' to identify write-intensive pages, and only migrates these NVM pages to DRAM. RaPP~\cite{ramos2011page} ranks pages according to their access frequency and recency, and migrates the top-ranked NVM pages to DRAM. HSCC~\cite{Liu:2017} extends the TLB and page tables to count NVM page accesses, and explores a utility-based model to migrate hot NVM pages to DRAM. Bock \textit{et al.} propose CMMP~\cite{bock2016concurrent} to concurrently migrate multiple pages. Those studies have assumed that the hybrid main memories are uniformly managed at the granularity of 4 KB pages, and thus naturally support lightweight page migration~\cite{Qureshi:2009}. The context of \textit{Rainbow} is different from those studies: Rainbow mainly focuses on supporting lightweight page migration in hybrid memory systems while still preserving the benefit of superpages. Probably the most relevant work to this paper, Thermostat~\cite{Agarwal:2017} supports page migration at the granularity of 2 MB or 4 KB pages for a two-tiered hybrid memory system. Rainbow differs from Thermostat in two respects. First, to migrate small pages, Thermostat needs to splinter the corresponding superpages and then migrate the small pages. In contrast, Rainbow supports lightweight page migration without splintering superpages, and thus preserves the benefit of superpages on TLB performance.
Second, Thermostat exploits an OS-level extension for intercepting TLB misses to estimate access counts at the 4 KB page granularity. The software overhead is usually rather high, and thus Thermostat makes a tradeoff between the precision of hot-page monitoring and the performance penalty. In contrast, the two-stage page access counting mechanism in Rainbow is more precise and efficient than that of Thermostat, and thus leads to lower page migration cost. \vspace{-0.5ex} \section{Conclusion}\label{sec:conc} Superpages are able to significantly improve TLB coverage and reduce address translation overhead. Hybrid memory systems composed of DRAM and NVM usually provide very large memory capacity, and thus benefit even more from superpage support. However, superpages often preclude lightweight page migration, which is a key technique for improving performance and energy efficiency in hybrid memory systems. In this paper, we propose a novel hybrid memory management mechanism called \textit{Rainbow} to support both superpages and lightweight page migration. \textit{Rainbow} manages the NVM at the superpage granularity, and uses the DRAM to cache frequently-accessed (hot) small pages within the superpages. Correspondingly, Rainbow utilizes split TLBs to support different page sizes. We propose an NVM-to-DRAM address remapping mechanism to identify the migrated small pages without splintering the superpages. Experimental results show that Rainbow can significantly reduce the address translation overhead for applications with large memory footprints, and improves application performance by up to 2.9X (43.0\% on average) compared to a state-of-the-art memory migration policy without superpage support. \vspace{-0.5ex} \bibliographystyle{unsrt}
\section{Algorithm} \label{sec:algorithm} \vspace{-5pt} Learning in conservative interleaving bandits is non-trivial. For instance, one cannot simply construct exploratory sets $\rnd{D}_t$ using a non-conservative matroid bandit algorithm \citep{kveton2014matroid, talebi2016optimal}, and then take actions $\rnd{A}_t$ containing a $(1-\alpha)$ fraction of items from the initial baseline set $B_0$ and the remaining items from $\rnd{D}_t$. If the set $B_0$ contains sub-optimal items, the regret of this policy is linear since its actions never converge to the optimal action $A^\ast$. In this section, we introduce our \emph{Interleaving Upper Confidence Bound} (${\tt I\mhyphen UCB}$) algorithm, which achieves sub-linear regret by maintaining a baseline set $\rnd{B}_t$ that continuously improves over the initial baseline set $B_0$ with high probability. We present two variants of the algorithm: one where the agent knows the expected rewards of the initial baseline set $\{\bar{w}(e): e \in B_0\}$, which we call ${\tt I\mhyphen UCB1}$; and one where the learner does not know them, which we call ${\tt I\mhyphen UCB2}$. The expected rewards of items in $B_0$ may be known in practice, for instance if the baseline policy has been deployed for a while. We refer to the common aspects of both algorithms as ${\tt I\mhyphen UCB}$. The pseudocode of both algorithms is in \cref{alg:main}. We highlight differences in comments. Recall that $K$ is the rank of the matroid, or equivalently the number of items in any action. ${\tt I\mhyphen UCB}$ operates in rounds, which are indexed by $t$, and takes $K$ actions in each round. We assume that ${\tt I\mhyphen UCB}$ has access to an oracle {\sc MaxBasis} that takes in a matroid and a vector of weights $w \in (\mathbb{R}^+)^E$, and returns the maximum weight basis with respect to the weights $w$. {\sc MaxBasis} is a greedy algorithm for matroids and hence can run in $O(L \log L)$ time \citep{edmonds1971matroids}. Each round has three stages.
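As a concrete illustration, the greedy {\sc MaxBasis} oracle can be sketched in a few lines. This is only a sketch assuming an abstract independence oracle; the rank-$K$ uniform matroid in the usage example (any set of at most $K$ items is independent) is an illustrative choice, not part of the paper's setting.

```python
def max_basis(items, weight, is_independent):
    """Greedy max-weight basis of a matroid: scan items in decreasing
    weight order and keep each item that preserves independence
    (O(L log L) comparisons plus L independence-oracle calls)."""
    basis = []
    for e in sorted(items, key=weight, reverse=True):
        if is_independent(basis + [e]):
            basis.append(e)
    return basis

# Usage: a rank-K uniform matroid, where the maximum weight basis is
# simply the K highest-weight items.
K = 3
weights = {"a": 0.9, "b": 0.1, "c": 0.7, "d": 0.4, "e": 0.8}
basis = max_basis(weights, weights.get, lambda s: len(s) <= K)
print(sorted(basis))  # ['a', 'c', 'e']
```

For a general matroid, only the independence oracle changes; the greedy scan is what justifies the $O(L \log L)$ running time quoted above.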
In the first stage (lines $5$--$8$), ${\tt I\mhyphen UCB}$ computes \emph{upper confidence bounds (UCBs)} $\rnd{U}_t \in (\mathbb{R}^+)^E$ and \emph{lower confidence bounds (LCBs)} $\rnd{L}_t \in (\mathbb{R}^+)^E$ on the rewards of all items. For any item $e \in E$, let \begin{align} \rnd{U}_t(e) = \hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e) + c_{n, \rnd{T}_{t-1}(e)}, \qquad \rnd{L}_t(e) = \max \{\hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e) - c_{n, \rnd{T}_{t-1}(e)}, 0\} \label{eq:ucblcb} \end{align} where $\hat{\rnd{w}}_{s}(e)$ is the average of $s$ observed weights of item $e$, $\rnd{T}_{t}(e)$ is the number of times item $e$ has been observed in $t$ steps, and \begin{align} c_{n,s} = \sqrt{1.5 \log (n) / s} \label{eq:ucb1} \end{align} is the radius of a confidence interval around $\hat{\rnd{w}}_{s}(e)$ such that $\bar{w}(e) \in [\hat{\rnd{w}}_{s}(e)-c_{n,s}, \hat{\rnd{w}}_{s}(e)+c_{n,s}]$ holds with a high probability. We adopt UCB1 confidence intervals \cite{auer2002finite} to simplify analysis, but it is possible to use tighter ${\tt KL\mhyphen UCB}$ confidence intervals \cite{garivier2011kl}. \begin{algorithm}[t!] 
\caption{${\tt I\mhyphen UCB}$ for conservative interleaving bandits.} \label{alg:main} \begin{algorithmic}[1] \STATE \textbf{Input:} Set of items $E$, Collection of exchangeable actions $\mathcal{B}$, baseline action $B_0 \in \mathcal{B}$ \STATE \STATE Observe $\rnd{w}_0 \sim P$, $\forall \, e \in E:$ $\rnd{T}_0(e) \gets 1, \hat{\rnd{w}}_0(e) \gets \rnd{w}_0(e)$ \hfill // Initialization \STATE \FOR{$t = 1, 2, \dots$} \FOR[// Compute UCBs and LCBs]{$e \in E$} \STATE $\rnd{U}_t(e) = \hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e) + c_{n,\rnd{T}_{t-1}(e)}$ \STATE $\rnd{L}_t(e) = \max \{\hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e) - c_{n,\rnd{T}_{t-1}(e)}, 0\}$ \ENDFOR \STATE \STATE $\rnd{D}_t \gets ${\sc MaxBasis}$( (E,\mathcal{B}), \, \rnd{U}_t)$ \hfill // Compute decision set \STATE \FOR[// Compute baseline set]{$e \in B_0$} \IF[// $\cmucb1$]{$\bar{w}(e)$ is known} \STATE $\rnd{v}_t(e) \gets \bar{w}(e)$ \ELSE[// $\cmucb2$] \STATE $\rnd{v}_t(e) \gets \rnd{U}_t(e)$ \ENDIF \ENDFOR \FOR{$e \in E \setminus B_0$} \STATE $\rnd{v}_t(e) \gets \rnd{L}_{t}(e)$ \ENDFOR \STATE $\rnd{B}_t \gets ${\sc MaxBasis}$( (E, \mathcal{B}), \, \rnd{v}_t)$ \STATE \STATE // Take $K$ combined actions of $\rnd{D}_t$ and $\rnd{B}_t$ \STATE Let $\rnd{\rho}_t:\rnd{B}_t \rightarrow \rnd{D}_t$ be the bijection in \cref{lem:bijectiveexchange} \FOR{$e \in \rnd{B}_t$} \STATE Take action $\rnd{A}_t = \rnd{B}_t \setminus \{e\}\cup \{\rnd{\rho}_t(e)\}$ \STATE Observe $\{\rnd{w}_t(e): e \in \rnd{A}_t\}$, where $\rnd{w}_t \sim P$ \STATE Update statistics $\hat{\rnd{w}}$ and $\rnd{T}$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} In line $10$, ${\tt I\mhyphen UCB}$ chooses a \emph{decision set} $\rnd{D}_t$ which is the maximum weight basis with respect to $\rnd{U}_t$, an optimistic estimate of $\bar{w}$. The same approach was used in \emph{Optimistic Matroid Maximization} (OMM) of \citet{kveton2014matroid}. 
However, unlike OMM, we cannot take action $\rnd{D}_t$ because this action may not satisfy our conservative constraint in \eqref{eq:conservative constraint}. In the second stage (lines $12$--$19$), ${\tt I\mhyphen UCB}$ computes a \emph{baseline set} $\rnd{B}_t$ which, with high probability, improves over the initial baseline set $B_0$ item by item. The set $\rnd{B}_t$ is the maximum weight basis with respect to weights $\rnd{v}_t$, which are chosen as follows. For items $e \in B_0$, we set $\rnd{v}_t(e) = \bar{w}(e)$ if $\bar{w}(e)$ is known, and $\rnd{v}_t(e) = \rnd{U}_t(e)$ if it is not. For items $e \in E \setminus B_0$, we set $\rnd{v}_t(e) = \rnd{L}_t(e)$. This setting guarantees that an item $e \in E \setminus B_0$ is selected over an item $e' \in B_0$ only if its expected reward is higher than that of item $e'$ with a high probability. In the last stage (lines $22$--$26$), ${\tt I\mhyphen UCB}$ takes $K$ combined actions of $\rnd{D}_t$ and $\rnd{B}_t$, which are guaranteed to be bases by \cref{lem:bijectiveexchange}. Let $\rnd{\rho}_t: \rnd{B}_t \rightarrow \rnd{D}_t$ be the bijection in \cref{lem:bijectiveexchange}. Then in round $t$, ${\tt I\mhyphen UCB}$ takes actions $\rnd{A}_t = \rnd{B}_t \setminus \{e\} \cup \{\rnd{\rho}_t(e)\}$ for all $e \in \rnd{B}_t$. Since each $\rnd{A}_t$ contains at least $K - 1$ baseline items, all of which improve over $B_0$ with a high probability, the conservative constraint in \eqref{eq:conservative constraint} is satisfied. \section{Analysis} \label{sec:analysis} \vspace{-5pt} This section is organized as follows. In \cref{sec:analysis 1}, we state theorems about the conservativeness of ${\tt I\mhyphen UCB1}$ and bound its regret. In \cref{sec:analysis 2}, we state analogous theorems for ${\tt I\mhyphen UCB2}$. In \cref{sec:discussion}, we discuss our theoretical results. We only explain the main ideas in the proofs. The details can be found in the Appendix.
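For intuition, one round of the three-stage procedure in \cref{sec:algorithm} can be sketched for the special case of a rank-$K$ uniform matroid, where every basis is a top-$K$ set and the exchange bijection of \cref{lem:bijectiveexchange} can be taken to map shared items to themselves. The function name and the numbers in the usage example are illustrative only, not the authors' implementation.

```python
import math

def iucb_round(w_hat, T, n, B0, K, baseline_means=None):
    """One round of an I-UCB-style selection for a rank-K uniform matroid.
    w_hat: item -> empirical mean reward; T: item -> observation count;
    baseline_means: known means of B0 items (I-UCB1) or None (I-UCB2)."""
    E = list(w_hat)
    c = {e: math.sqrt(1.5 * math.log(n) / T[e]) for e in E}
    U = {e: w_hat[e] + c[e] for e in E}              # upper confidence bounds
    Lo = {e: max(w_hat[e] - c[e], 0.0) for e in E}   # lower confidence bounds

    # Stage 1 -- decision set: max-weight basis w.r.t. UCBs (top-K here).
    D = sorted(E, key=lambda e: U[e], reverse=True)[:K]

    # Stage 2 -- baseline set: known mean (or UCB) on B0 items, LCB elsewhere.
    v = {e: (baseline_means[e] if baseline_means else U[e]) if e in B0 else Lo[e]
         for e in E}
    B = sorted(E, key=lambda e: v[e], reverse=True)[:K]

    # Exchange bijection: shared items map to themselves, the rest pair up.
    rho = {e: e for e in B if e in D}
    rho.update(zip([e for e in B if e not in D], [e for e in D if e not in B]))

    # Stage 3 -- K combined actions, each keeping at least K-1 baseline items.
    actions = [sorted((set(B) - {e}) | {rho[e]}) for e in B]
    return D, B, actions

# Illustrative usage (hypothetical numbers): 5 items, K = 2, B0 = {d, e}.
w_hat = {"a": 0.9, "b": 0.8, "c": 0.5, "d": 0.2, "e": 0.1}
T = {e: 400 for e in w_hat}
D, B, actions = iucb_round(w_hat, T, n=10_000, B0={"d", "e"}, K=2,
                           baseline_means={"d": 0.2, "e": 0.1})
print(D, B, actions)
```

Each combined action swaps a single baseline item for its matched decision-set item, so at least $K-1$ of its items come from $\rnd{B}_t$, which is what makes the conservative constraint hold.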
We use the following conventions in our analysis. Without loss of generality, we assume that items in $E$ are sorted such that $\bar{w}(1) \ge \dots \ge \bar{w}(L)$. The decision set at time $t$ is denoted by $\rnd{D}_t$, the baseline set at time $t$ is denoted by $\rnd{B}_t$, and the optimal set is denoted by $A^\ast$. Recall that $A^\ast$, $\rnd{D}_t$, and $\rnd{B}_t$ are bases. Let $\rnd{\pi}_t: A^\ast \rightarrow \rnd{D}_t$ and $\rnd{\sigma}_t: \rnd{D}_t \rightarrow \rnd{B}_t$ be the bijections guaranteed by \cref{lem:bijectiveexchange}. For any item $e$ and item $e'$ such that $\bar{w}(e') > \bar{w}(e)$, we define the \emph{gap} $\Delta_{e,e'} = \bar{w}(e') - \bar{w}(e)$. \subsection{$\cmucb1$: Known Baseline Means} \label{sec:analysis 1} We first prove that $\cmucb1$ is conservative in \cref{thm:conservative 1}. Then we prove a gap-dependent upper bound on its regret in \cref{thm:regret bound cmucb1}. \begin{restatable}{theorem}{TheoremCmucbOneAccuracy} \label{thm:conservative 1} ${\tt I\mhyphen UCB1}$ satisfies \eqref{eq:conservative constraint} for $\alpha = 1 / K$ at all time steps $t \in [n]$ with probability of at least $1 - 2 L / (K n)$. \end{restatable} The regret upper bound of $\cmucb1$ involves two kinds of gaps. For every suboptimal item $e$, we define its minimum gap from the closest optimal item $e^\ast$ whose mean is higher than that of $e$ as \begin{align} \textstyle \Delta_{e,\min} = \min_{e^\ast \in A^\ast:\Delta_{e,e^\ast}>0} \Delta_{e,e^\ast}. \label{eq:Deltaemin} \end{align} This gap is standard in matroid bandits \citep{kveton2014matroid}. For any optimal item $e^\ast$, we define its minimum gap from the closest sub-optimal item $e$ whose mean is lower than that of $e^\ast$ as \vspace{-10pt} \begin{align} \textstyle \Delta^\ast_{e^\ast,\min} = \min_{e \in E\setminus A^\ast: \Delta_{e,e^\ast}>0} \Delta_{e,e^\ast}. 
\label{eq:Deltaestarmin} \end{align} \begin{restatable}[Regret of $\cmucb1$]{theorem}{TheoremRegretCmucbOne} \label{thm:regret bound cmucb1} The expected $n$-step regret of $\cmucb1$ is bounded as \begin{align*} (K-1)\left( 12\sum_{e^\ast \in A^\ast} \frac{1}{\Delta^\ast_{e^\ast,\min}} + 24\sum_{e \in E\setminus A^\ast} \frac{1}{\Delta_{e,\min}} \right)\log n + 12\sum_{e \in E\setminus A^\ast} \frac{1}{\Delta_{e,\min}} \log n + c, \end{align*} where $\Delta_{e,\min}$ and $\Delta^\ast_{e^\ast,\min}$ are defined in \eqref{eq:Deltaemin} and \eqref{eq:Deltaestarmin} respectively, and $c = O(KL\sqrt{\log n})$. \end{restatable} \begin{proof} The standard UCB counting argument does not work because the baseline set is selected using lower confidence bounds (LCBs). Instead, we use the exchangeability property of matroids (\cref{lem:bijectiveexchange}) to match every item in the baseline set with an item in the decision set. Since the baseline set is selected using LCBs, the LCBs of the baseline items must be higher than those of the corresponding decision set items. We use this to bound the regret of the baseline set by the confidence intervals of the decision set items (\cref{lem:optimal baseline}). We then consider two cases depending on whether an item from the decision set is optimal or not. The first case leads to the first term containing the gap $\Delta^\ast_{e^\ast,\min}$, and the second case gives rise to the second term containing the gap $\Delta_{e,\min}$. \end{proof} \subsection{$\cmucb2$: Unknown Baseline Means} \label{sec:analysis 2} We first prove that $\cmucb2$ is conservative in \cref{thm:conservative 2}. Then we prove a gap-dependent upper bound on its regret in \cref{thm:regret bound cmucb2}. \begin{restatable}{theorem}{TheoremCmucbTwoAccuracy} \label{thm:conservative 2} ${\tt I\mhyphen UCB2}$ satisfies \eqref{eq:conservative constraint} for $\alpha = 1 / K$ at all time steps $t \in [n]$ with probability of at least $1 - 2 L / (K n)$. 
\end{restatable} The upper bound on the regret of $\cmucb2$ requires a third kind of gap in addition to those defined in \eqref{eq:Deltaemin} and \eqref{eq:Deltaestarmin}. For each item $e' \in B_0$, we define its minimum gap from the closest item $e \in E \setminus B_0$ whose mean is higher than that of $e'$ as \vspace{-10pt} \begin{align} \textstyle \Delta'_{e',\min} = \min_{e \in E \setminus B_0:\bar{w}(e)>\bar{w}(e')} \Delta_{e',e}. \label{eq:Deltaetildemin} \end{align} \vspace{-15pt} \begin{restatable}[Regret of $\cmucb2$]{theorem}{TheoremRegretCmucbTwo} \label{thm:regret bound cmucb2} The expected $n$-step regret of $\cmucb2$ is bounded as \begin{align*} (K-1)\left(48\hspace{-5pt}\sum_{e^\ast \in A^\ast}\hspace{-5pt} \frac{1}{\Delta^\ast_{e^\ast, \min}} +36\hspace{-8pt}\sum_{e \in E\setminus A^\ast}\hspace{-5pt} \frac{1}{\Delta_{e,\min}} +48\hspace{-5pt}\sum_{e'\in B_0}\hspace{-5pt} \frac{1}{\Delta'_{e',\min}} \right) \log n + 24\hspace{-8pt}\sum_{e \in E\setminus A^\ast}\hspace{-5pt} \frac{1}{\Delta_{e,\min}} \log n + c, \end{align*} where $\Delta_{e,\min}$, $\Delta^\ast_{e^\ast,\min}$ and $\Delta'_{e',\min}$ are defined in \eqref{eq:Deltaemin}, \eqref{eq:Deltaestarmin}, and \eqref{eq:Deltaetildemin} respectively, and $c = O(KL\sqrt{\log n})$. \end{restatable} \begin{proof} The first two terms in the regret upper bound arise similarly to \cref{thm:regret bound cmucb1}. The additional complexity in the analysis of $\cmucb2$ stems from the fact that items in the initial baseline set $B_0$ are selected in $\rnd{B}_t$ using their UCBs, while other items are selected using their LCBs. Because of this, the regret due to items in $\rnd{B}_t \cap B_0$ is bounded using the sum of the confidence intervals of items in $\rnd{B}_t \cap B_0$ and those of the corresponding items in $\rnd{D}_t$ (\cref{lem:cmucb2}). We then consider two cases depending on whether the confidence intervals of the items in $B_0$ are smaller or larger than those of their corresponding decision set items.
The latter case gives rise to the third gap term $\Delta'_{e',\min}$. \end{proof} \vspace{-5pt} \subsection{Discussion} \label{sec:discussion} \vspace{-5pt} We note three points. First, the regret bound of ${\tt I\mhyphen UCB}$ contains an extra $(K-1)$ factor as compared to the bound of non-conservative matroid bandit algorithms \citep{kveton2014matroid, talebi2016optimal}. This is because ${\tt I\mhyphen UCB}$ needs $K$ steps to explore a new action that non-conservative algorithms can explore in a single step. Note that we set $\alpha=1/K$ in our conservative constraint \eqref{eq:conservative constraint}. If the action space allows exchanging multiple items in \cref{eq:exchangeability definition}, our algorithm can be generalized to any $\alpha=m/K$ for $m \in [K]$ by interleaving multiple items simultaneously in lines $23$--$26$. It is clear from our proofs that the regret bound of this algorithm for general $\alpha$ will contain an extra factor of $K(1-\alpha)$. \emph{This is the price we pay for conservatism.} As $\alpha$ approaches $1$, this extra factor disappears and our regret upper bound matches existing regret bounds of non-conservative matroid algorithms \citep{kveton2014matroid, talebi2016optimal}. Second, by using the standard technique of decomposing the gaps into those that are larger than $\varepsilon$ and those that are smaller than $\varepsilon$, one can show that the gap-free regret bound is $O(K\sqrt{KLn\log n})$. This again is $K$ times the gap-free regret of non-conservative matroid algorithms \citep{kveton2014matroid}. Finally, the regret of $\cmucb1$ contains two gaps $\Delta^\ast_{e^\ast,\min}$ and $\Delta_{e,\min}$, while the regret of $\cmucb2$ contains an additional gap $\Delta'_{e',\min}$ that is defined for items $e' \in B_0$. The gap $\Delta_{e,\min}$ also appears in the regret of non-conservative matroid algorithms \citep{kveton2014matroid}.
The gap $\Delta^\ast_{e^\ast,\min}$ measures the distance of every optimal item to the closest suboptimal item, and is similar to that appearing in top-$K$ best arm identification problems \citep{kalyanakrishnan2012pac}. We believe the $\Delta'_{e',\min}$ gap in the $\cmucb2$ regret bound is not necessary and our analysis can be improved; however, note that it only appears for items in $B_0$, which contains $K$ items, and hence its contribution is small. It also does not affect the gap-free bound. \section{Appendix} \label{sec:appendix} We define a ``good'' event \begin{align} \mathcal{E}_t =\{\forall \, e \in E: |\bar{w}(e) - \hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e)| \leq c_{n,\rnd{T}_{t-1}(e) }\}\,, \label{eq:goodevent} \end{align} which states that $\bar{w}(e)$ is inside the high-probability confidence interval around $\hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e)$ for all items $e$ at the beginning of time $t$. \begin{lemma} \label{lem:failureevent} Let $\mathcal{E}_r$ be the good event in \eqref{eq:goodevent}. Then \begin{align*} \mathbb{P}\left(\bigcup_{r = 1}^{n / K} \bar{\mathcal{E}}_r\right) \leq \sum_{r = 1}^{n / K} \E{}{\1{}{\bar{\mathcal{E}}_r}} \leq \frac{2 L}{K n}\,. \end{align*} \end{lemma} \begin{proof} From the definition of our confidence intervals and Hoeffding's inequality \cite{boucheron2013concentration}, \begin{align*} \mathbb{P}({|\bar{w}(e) - \hat{\rnd{w}}_s(e)| \geq c_{t,s}}) \leq 2 \exp[-3 \log t] \end{align*} for any $e \in E$, $s \in [n]$, and $t \in [n]$. Therefore, \begin{align*} \mathbb{P}\left(\bigcup_{r = 1}^{n / K} \bar{\mathcal{E}}_r\right) & \leq \sum_{r = 1}^{n / K} \mathbb{P}(\bar{\mathcal{E}}_r) \\ & \leq \sum_{r = 1}^{n / K} \sum_{e \in E} \sum_{s = 1}^{r K} \mathbb{P}(|\bar{w}(e) - \hat{\rnd{w}}_s(e)| \geq c_{n, s}) \\ & \leq 2 \sum_{e \in E} \frac{1}{K n}\,. \end{align*} This concludes our proof.
\end{proof} \begin{restatable}{lemma}{LemBaselineConstruction} \label{lem:baseline construction} Let $A$ be the maximum weight basis with respect to weights $w$. Let $B$ be any basis and let $\rho: A \to B$ be the bijection in \cref{lem:bijectiveexchange}. Then \begin{align*} \forall a \in A: w(a) \geq w(\rho(a))\,. \end{align*} \end{restatable} \begin{proof} Fix $a \in A$ and let $b = \rho(a)$. By \cref{lem:bijectiveexchange}, $A^a_b = A \setminus \{a\} \cup \{b\} \in \mathcal{B}$. Now note that $A$ is the maximum weight basis with respect to $w$. Therefore, \begin{align*} w(a) - w(b) = \sum_{e \in A} w(e) - \sum_{e \in A^a_b} w(e) \geq 0\,. \end{align*} This concludes our proof. \end{proof} \TheoremCmucbOneAccuracy* \begin{proof} At time $t$, the baseline set $\rnd{B}_t$ is the maximum weight basis with respect to $\rnd{v}_t$. Therefore, by \cref{lem:baseline construction}, there exists a bijection $\rnd{\rho}: \rnd{B}_t \rightarrow B_0$ such that \begin{align*} \forall b \in \rnd{B}_t: \rnd{v}_t(b) \geq \rnd{v}_t(\rnd{\rho}(b))\,. \end{align*} From the definition of $\rnd{v}_t$, $\rnd{v}_t(\rnd{\rho}(b)) = \bar{w}(\rnd{\rho}(b))$ for any $b \in \rnd{B}_t$, and thus \begin{align*} \forall b \in \rnd{B}_t: \rnd{v}_t(b) \geq \bar{w}(\rnd{\rho}(b))\,. \end{align*} Now suppose that event $\mathcal{E}_t$ in \eqref{eq:goodevent} happens. Then $\bar{w}(e) \geq \rnd{L}_{t}(e)$ for any $e \in E$, and it follows that \begin{align*} \forall b \in \rnd{B}_t: \bar{w}(b) \geq \bar{w}(\rnd{\rho}(b))\,. \end{align*} Since any action at time $t$ contains $K - 1$ items from $\rnd{B}_t$, the constraint in \eqref{eq:conservative constraint} is satisfied when event $\mathcal{E}_t$ happens. Finally, we prove that $ \mathbb{P}(\cup_t \bar{\mathcal{E}}_t) \leq 2 L / (K n)$ in \cref{lem:failureevent}. Therefore, $\mathbb{P}(\mathcal{E}_t) \geq \mathbb{P}(\cap_t \mathcal{E}_t) \geq 1 - 2 L / (K n)$. This concludes our proof. 
\end{proof} \begin{restatable}{lemma}{LemOptimalDecision} For any $e,e^\ast$, if $e \in \rnd{D}_t$ and $e = \rnd{\pi}_t(e^\ast)$, we have that \begin{align} 2c_{n,\rnd{T}_{t-1}(e)} \ge \bar{w}(e^\ast) - \bar{w}(e),\qquad \text{and}\qquad \rnd{T}_{t-1}(e) \le \frac{6 \log n}{\Delta_{e,e^\ast}^2} \le \frac{6 \log n}{\Delta_{e,\min}^2}, \label{eq:optimal decision} \end{align} where $\Delta_{e,\min}$ is defined in \eqref{eq:Deltaemin}. \label{lem:optimal decision} \end{restatable} \begin{proof} Since the decision set $\rnd{D}_t$ is chosen using upper confidence bounds, we have that $\rnd{U}_t(e) \ge \rnd{U}_t(e^\ast)$. This gives us: $$\bar{w}(e) + 2c_{n,\rnd{T}_{t-1}(e)} \ge \hat{\rnd{w}}_{\rnd{T}_{t-1}(e)}(e) + c_{n,\rnd{T}_{t-1}(e)} = \rnd{U}_t(e) \ge \rnd{U}_t(e^\ast) \ge \bar{w}(e^\ast).$$ This implies the first inequality in \eqref{eq:optimal decision}. Substituting the expression for $c_{n,\rnd{T}_{t-1}(e)}$ from \eqref{eq:ucb1} yields the bound on $\rnd{T}_{t-1}(e)$ in \eqref{eq:optimal decision}. \end{proof} \begin{restatable}{lemma}{LemOptimalBaseline} For any $e^\ast \in A^\ast$, $e \in \rnd{D}_t$, and $e' \in \rnd{B}_t$ such that $e = \rnd{\pi}_t(e^\ast)$ and $e'=\rnd{\sigma}_t(e)$, \begin{itemize} \item[(a)] If $e \in A^\ast$, then $e=e^\ast$ and \begin{align} 2c_{n,\rnd{T}_{t-1}(e^\ast)} \ge \bar{w}(e^\ast)-\bar{w}(e'), \qquad \text{and}\qquad \rnd{T}_{t-1}(e^\ast) \le \frac{6 \log n}{\Delta_{e',e^\ast}^2} \le \frac{6 \log n}{\Delta_{e^\ast,\min}^{\ast 2}}, \label{eq:optimal baseline e=e*} \end{align} where $\Delta^\ast_{e^\ast, \min}$ is defined in \eqref{eq:Deltaestarmin}. \item[(b)] If $e \notin A^\ast$, \begin{align} 4c_{n,\rnd{T}_{t-1}(e)} \ge \bar{w}(e^\ast) - \bar{w}(e'). \label{eq:optimal baseline ci} \end{align} \end{itemize} \label{lem:optimal baseline} \end{restatable} \begin{proof} Since the baseline set is selected using lower confidence bounds, we have that $\rnd{L}_t(e') \ge \rnd{L}_t(e)$.
This gives us: $$\bar{w}(e') \ge \rnd{L}_t(e') \ge \rnd{L}_t(e) \ge \bar{w}(e)-2c_{n,\rnd{T}_{t-1}(e)}.$$ This implies that \begin{align} 2c_{n,\rnd{T}_{t-1}(e)} \ge \bar{w}(e)-\bar{w}(e'). \label{eq:decision baseline ci} \end{align} \begin{itemize} \item[(a)] If $e \in A^\ast$, then since $e = \rnd{\pi}_t(e^\ast)$, we must have that $e = e^\ast$. Assume otherwise. Then $A^\ast \setminus \{e^\ast\} \cup \{e\}$ is a basis (by \cref{lem:bijectiveexchange}) of size $(K-1)$, which contradicts the fact that all bases have the same cardinality $K$. Substituting $e=e^\ast$ in \eqref{eq:decision baseline ci} gives the first inequality in \eqref{eq:optimal baseline e=e*}. The $\rnd{T}_{t-1}(e^\ast)$ bound in \eqref{eq:optimal baseline e=e*} follows by substituting the expression of $c_{n,\rnd{T}_{t-1}(e)}$ from \eqref{eq:ucb1}.
%
\item[(b)] If $e \notin A^\ast$, note that the confidence interval inequality in \eqref{eq:optimal decision} from \cref{lem:optimal decision} still holds because $e \in \rnd{D}_t$. \eqref{eq:optimal baseline ci} then follows by adding this and \eqref{eq:decision baseline ci}. \end{itemize} \end{proof} \TheoremRegretCmucbOne* \begin{proof} We first decompose the regret depending on whether the event $\bar{\mathcal{E}} = \bigcup\limits_{t=1}^{n/K} \bar{\mathcal{E}}_t$ happens or not, where $\mathcal{E}_t$ is defined in \eqref{eq:goodevent}. Let $\rnd{R}_t$ denote the regret at time $t$. Then, we can decompose the regret of $\cmucb1$ as: \begin{align} R(n) &= \E{}{\1{}{\bar{\mathcal{E}}} \sum\limits_{t=1}^{n/K} \rnd{R}_t} + \E{}{\1{}{\mathcal{E}}\sum\limits_{t=1}^{n/K} \rnd{R}_t} \label{eq:regretdecomposition} \end{align} Let us first analyze the case when $\bar{\mathcal{E}}$ holds. By \cref{lem:failureevent}, the probability of this event is at most $\frac{2L}{Kn}$. Since the maximum regret in $n$ steps is at most $Kn$, the contribution of the first term is at most $2L$. We assume $\mathcal{E}$ holds in the remaining proof.
The expected regret at time $t$ can be written as \begin{align} \E{}{\rnd{R}_t} &= K \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - (K-1) \sum_{e' \in \rnd{B}_t} \bar{w}(e') - \sum_{e \in \rnd{D}_t} \bar{w}(e) \nonumber \\ &= \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e \in \rnd{D}_t} \bar{w}(e) \right) + (K-1) \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t} \bar{w}(e')\right). \label{eq:regret time t decomposition} \end{align} Let us first bound the regret due to the first term. When we sum the first term in \eqref{eq:regret time t decomposition} over all times $t$, we get \begin{align*} \sum_{t=1}^{n/K} \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e \in \rnd{D}_t} \bar{w}(e) \right) &\overset{(a)}{\le} \sum_{t=1}^{n/K} \sum_{e \in \rnd{D}_t} 2c_{n,\rnd{T}_{t-1}(e)} \le \sum_{e \in E\setminus A^\ast} \sum_{t=1}^{n/K} 2\sqrt{\frac{1.5\log n}{\rnd{T}_{t-1}(e)}}\1{}{e \in \rnd{D}_t} \end{align*} where $(a)$ follows from the first inequality in \eqref{eq:optimal decision} in \cref{lem:optimal decision}. Since a) the counter $\rnd{T}_{t-1}(e)$ increments every time $e$ is played, b) the second inequality in \eqref{eq:optimal decision} holds by \cref{lem:optimal decision}, and \begin{align} \sum_{s=1}^{m} \frac{1}{\sqrt{s}} \le 1+2\sqrt{m}, \label{eq:sum of sqrts} \end{align} we can bound the regret due to the first term as \begin{align} \sum_{t=1}^{n/K} \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e \in \rnd{D}_t} \bar{w}(e) \right) &\le \sum_{e \in E\setminus A^\ast} 2\sqrt{1.5 \log n} \left(1+2\sqrt{\frac{6 \log n}{\Delta_{e,\min}^2}} \right) \nonumber \\ &\le 12 \sum_{e \in E\setminus A^\ast} \frac{1}{\Delta_{e,\min}} \log n + L\sqrt{6 \log n}. \label{eq:regret bound first term} \end{align} Let us now bound the regret due to the second term in \eqref{eq:regret time t decomposition}.
When we sum the second term in \eqref{eq:regret time t decomposition} over all times $t$, we get \begin{align} &(K-1) \sum_{t=1}^{n/K} \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t} \bar{w}(e') \right) \nonumber \\ \overset{(a)}{\le}\,& (K-1) \left( \sum_{t=1}^{n/K} \sum_{e \in \rnd{D}_t \cap A^\ast} 2 c_{n,\rnd{T}_{t-1}(e)} + \sum_{t=1}^{n/K} \sum_{e \in \rnd{D}_t \setminus A^\ast} 4 c_{n,\rnd{T}_{t-1}(e)} \right) \nonumber \\ =\,& (K-1) \left( \sum_{e \in A^\ast} \sum_{t=1}^{n/K} 2 \sqrt{\frac{1.5 \log n}{\rnd{T}_{t-1}(e)}} \1{}{e \in \rnd{D}_t} + \sum_{e \in E\setminus A^\ast} \sum_{t=1}^{n/K} 4 \sqrt{\frac{1.5 \log n}{\rnd{T}_{t-1}(e)}} \1{}{e \in \rnd{D}_t}\right) \label{eq:second term decomposition} \end{align} where $(a)$ follows from \eqref{eq:optimal baseline e=e*} and \eqref{eq:optimal baseline ci} in \cref{lem:optimal baseline}. We use the $\rnd{T}_{t-1}(e^\ast)$ bound in \eqref{eq:optimal baseline e=e*} to bound the first term, and the $\rnd{T}_{t-1}(e)$ bound in \eqref{eq:optimal decision} to bound the second term in \eqref{eq:second term decomposition}. 
Then, from the fact that the counter $\rnd{T}_{t-1}(e)$ is incremented every time $e$ is chosen, and \eqref{eq:sum of sqrts}, we can bound the regret due to the second term in \eqref{eq:regret time t decomposition} as \begin{align} &(K-1) \sum_{t=1}^{n/K} \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t} \bar{w}(e') \right) \nonumber \\ \le\, &(K-1) \left( \sum_{e^\ast \in A^\ast}2 \sqrt{1.5 \log n} \left( 1+2\sqrt{\frac{6\log n}{\Delta_{e^\ast,\min}^{\ast 2}}}\right) + \sum_{e \in E\setminus A^\ast} 4\sqrt{1.5\log n} \left(1+2\sqrt{\frac{6 \log n}{\Delta_{e,\min}^2}} \right)\right) \nonumber \\ \le\, &24(K-1) \sum_{e \in E\setminus A^\ast} \frac{1}{\Delta_{e,\min}} \log n + 12(K-1) \sum_{e^\ast \in A^\ast} \frac{1}{\Delta^\ast_{e^\ast,\min}} \log n \nonumber \\ &\qquad + L(K-1)\sqrt{24 \log n} + K(K-1)\sqrt{6 \log n} \label{eq:regret bound second term} \end{align} Adding \eqref{eq:regret bound first term}, \eqref{eq:regret bound second term}, and the contribution from the failure event $\bar{\mathcal{E}}$ yields the upper bound in the theorem statement. \end{proof} \TheoremCmucbTwoAccuracy* \begin{proof} At time $t$, the baseline set $\rnd{B}_t$ is the maximum weight basis with respect to $\rnd{v}_t$. Therefore, by \cref{lem:baseline construction}, there exists a bijection $\rnd{\rho}:\rnd{B}_t \rightarrow B_0$ such that \begin{align*} \forall b \in \rnd{B}_t: \rnd{v}_t(b) \geq \rnd{v}_t(\rnd{\rho}(b))\,. \end{align*} Now we consider two cases. First, suppose that $b \in B_0$. Then by \cref{lem:baseline construction}, $b = \rnd{\rho}(b)$, and $\bar{w}(b) \geq \bar{w}(\rnd{\rho}(b))$ from our assumption. Second, suppose that $b \notin B_0$. Then $\rnd{v}_t(b) = \rnd{L}_{t}(b)$ and $\rnd{v}_t(\rnd{\rho}(b)) = \rnd{U}_{t}(\rnd{\rho}(b))$, and thus \begin{align*} \bar{w}(b) \geq \rnd{L}_{t}(b) \geq \rnd{U}_{t}(\rnd{\rho}(b)) \geq \bar{w}(\rnd{\rho}(b)) \end{align*} under event $\mathcal{E}_t$.
Since any action at time $t$ contains $K - 1$ items from $\rnd{B}_t$, the constraint in \eqref{eq:conservative constraint} is satisfied when event $\mathcal{E}_t$ happens. Finally, we prove that $ \mathbb{P}(\cup_t \bar{\mathcal{E}}_t) \leq 2 L / (K n)$ in \cref{lem:failureevent}. Therefore, $\mathbb{P}(\mathcal{E}_t) \geq \mathbb{P}(\cap_t \mathcal{E}_t) \geq 1 - 2 L / (K n)$. This concludes our proof. \end{proof} \begin{restatable}{lemma}{LemCmucbTwo} For any $e^\ast \in A^\ast$, $e \in \rnd{D}_t$, and $e' \in \rnd{B}_t$ such that $e' \in B_0$, $e = \rnd{\pi}_t(e^\ast)$, and $e'=\rnd{\sigma}_t(e)$, \begin{itemize} \item[(a)] If $e \in A^\ast$, and $c_{n,\rnd{T}_{t-1}(e')} \le c_{n,\rnd{T}_{t-1}(e)}$, then $e = e^\ast$, and \begin{align} 4c_{n,\rnd{T}_{t-1}(e^\ast)} \ge \bar{w}(e^\ast)-\bar{w}(e'), \qquad \text{and} \qquad \rnd{T}_{t-1}(e^\ast) \le \frac{24 \log n}{\Delta_{e',e^\ast}^2}. \label{eq:optimal b0 e=e*} \end{align} \item[(b)] If $e \in \rnd{D}_t \setminus A^\ast$ and $c_{n,\rnd{T}_{t-1}(e')} \le c_{n,\rnd{T}_{t-1}(e)}$, then \begin{align} 6c_{n,\rnd{T}_{t-1}(e)} \ge \bar{w}(e^\ast) - \bar{w}(e'). \label{eq:optimal b0 ci} \end{align} \item[(c)] If $c_{n,\rnd{T}_{t-1}(e')} > c_{n,\rnd{T}_{t-1}(e)}$, then \begin{align} 4c_{n,\rnd{T}_{t-1}(e')} \ge \bar{w}(e) - \bar{w}(e'), \qquad \text{and} \qquad \rnd{T}_{t-1}(e') \le \frac{24 \log n}{\Delta_{e',e}^2} \le \frac{24 \log n}{\Delta_{e',\min}^{'2}}, \label{eq:decision b0 case2} \end{align} where $\Delta'_{e',\min}$ is defined in \eqref{eq:Deltaetildemin}. \end{itemize} \label{lem:cmucb2} \end{restatable} \begin{proof} For items $e' \in B_0 \cap \rnd{B}_t$, we have that $\rnd{U}_t(e') \ge \rnd{L}_t(e)$. This gives us $$\bar{w}(e') + 2c_{n,\rnd{T}_{t-1}(e')} \ge \rnd{U}_t(e') \ge \rnd{L}_t(e) \ge \bar{w}(e)-2c_{n,\rnd{T}_{t-1}(e)}$$ This implies that \begin{align} 2c_{n,\rnd{T}_{t-1}(e)} + 2c_{n,\rnd{T}_{t-1}(e')} \ge \bar{w}(e)-\bar{w}(e'). 
\label{eq:decision b0 ci} \end{align}
\begin{itemize}
\item[(a)] If $e \in A^\ast$, then $e=e^\ast$ by the same argument as in the proof of \cref{lem:optimal baseline}(a). Substituting $e=e^\ast$ in \eqref{eq:decision b0 ci} gives the first inequality in \eqref{eq:optimal b0 e=e*}. Substituting the expression for $c_{n,\rnd{T}_{t-1}(e^\ast)}$ from \eqref{eq:ucb1} gives the second inequality in \eqref{eq:optimal b0 e=e*}.
%
\item[(b)] If $e \in \rnd{D}_t \setminus A^\ast$ and $c_{n,\rnd{T}_{t-1}(e')} \le c_{n,\rnd{T}_{t-1}(e)}$, adding the confidence interval inequalities in \eqref{eq:decision b0 ci} and \eqref{eq:optimal decision} gives \eqref{eq:optimal b0 ci}.
%
\item[(c)] We assume $\bar{w}(e) > \bar{w}(e')$, because otherwise the regret contribution is bounded by $0$. Then, $c_{n,\rnd{T}_{t-1}(e')} > c_{n,\rnd{T}_{t-1}(e)}$ and \eqref{eq:decision b0 ci} imply the first inequality in \eqref{eq:decision b0 case2}. Substituting the expression for $c_{n, \rnd{T}_{t-1}(e')}$ from \eqref{eq:ucb1} gives the bound on $\rnd{T}_{t-1}(e')$ in \eqref{eq:decision b0 case2}.
\end{itemize}
\end{proof}
\begin{corollary} For any $e^\ast \in A^\ast \cap \rnd{D}_t$ and $e' \in \rnd{B}_t$ such that $e'=\rnd{\sigma}_t(e^\ast)$, if a) $e' \notin B_0$, or b) $e' \in B_0$ and $c_{n,\rnd{T}_{t-1}(e')} \le c_{n,\rnd{T}_{t-1}(e^\ast)}$, we have \begin{align} \rnd{T}_{t-1}(e^\ast) \le \frac{24 \log n}{\Delta_{e^\ast,\min}^{\ast 2}}, \label{eq:Ttestar bound cmucb2} \end{align} where $\Delta^\ast_{e^\ast,\min}$ is defined in \eqref{eq:Deltaestarmin}. \label{corr:Ttestar cmucb1 cmucb2} \end{corollary}
\begin{proof} The proof follows by taking the maximum of the upper bounds in \eqref{eq:optimal baseline e=e*} and \eqref{eq:optimal b0 e=e*} over all $e'$ that satisfy the conditions of \cref{lem:optimal baseline}(a) or \cref{lem:cmucb2}(a).
\end{proof} \TheoremRegretCmucbTwo* \begin{proof} Similar to the proof of $\cmucb1$, we use \eqref{eq:regretdecomposition} to break down the regret depending on whether the failure event $\bar{\mathcal{E}} = \bigcup\limits_{t=1}^{n/K} \bar{\mathcal{E}}_t$ holds or not. The contribution from the event $\bar{\mathcal{E}}$ is again bounded by $2L$. We assume $\mathcal{E}$ holds in the remaining proof. We again use \eqref{eq:regret time t decomposition} to decompose the regret, and the bound on the first term from \eqref{eq:regret bound first term} holds. The difference in $\cmucb2$ compared to $\cmucb1$ is that while selecting the baseline set $\rnd{B}_t$ in $\cmucb2$, we use upper confidence intervals for items in $B_0$. We now sum the second term in \eqref{eq:regret time t decomposition} over all times $t$, \begin{align} &(K-1) \sum_{t=1}^{n/K} \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t} \bar{w}(e')\right) \nonumber \\ = &(K-1) \sum_{t=1}^{n/K} \left( \left( \sum_{\stackrel{e^\ast \in A^\ast,}{\rnd{\sigma}_t(\rnd{\pi}_t(e^\ast)) \notin B_0}} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t \setminus B_0} \bar{w}(e') \right) + \left( \sum_{\stackrel{e^\ast \in A^\ast,}{\rnd{\sigma}_t(\rnd{\pi}_t(e^\ast)) \in B_0}} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t \cap B_0} \bar{w}(e') \right) \right) \nonumber \\ % \le &(K-1) \sum_{t=1}^{n/K} \left( \left( \sum_{\stackrel{e \in \rnd{D}_t \cap A^\ast,}{\rnd{\pi}_t(e) \notin B_0}} 2 c_{n,\rnd{T}_{t-1}(e)} + \sum_{\stackrel{e \in \rnd{D}_t \setminus A^\ast,}{\rnd{\pi}_t(e) \notin B_0}} 4 c_{n,\rnd{T}_{t-1}(e)} \right) \right. \nonumber \\ + &\left. 
\left( \sum_{\stackrel{e \in A^\ast \cap \rnd{D}_t, \rnd{\pi}_t(e) =e' \in B_0}{c_{n,\rnd{T}_{t-1}(e)} > c_{n,\rnd{T}_{t-1}(e')}}} 4c_{n,\rnd{T}_{t-1}(e)} + \sum_{\stackrel{e \in \rnd{D}_t \setminus A^\ast, \rnd{\pi}_t(e) =e' \in B_0}{c_{n,\rnd{T}_{t-1}(e)} > c_{n,\rnd{T}_{t-1}(e')}}} 6c_{n,\rnd{T}_{t-1}(e)} + \sum_{\stackrel{e \in \rnd{D}_t,\rnd{\pi}_t(e) =e' \in B_0}{c_{n,\rnd{T}_{t-1}(e)} > c_{n,\rnd{T}_{t-1}(e')}}} 4c_{n,\rnd{T}_{t-1}(e')} \right) \right) \nonumber \\ % \le &(K-1)\left( \sum_{e^\ast \in A^\ast} \sum_{t=1}^{n/K} 4c_{n,\rnd{T}_{t-1}(e^\ast)} \1{}{e^\ast \in \rnd{D}_t} + \sum_{e \in E\setminus A^\ast} \sum_{t=1}^{n/K} 6c_{n, \rnd{T}_{t-1}(e)} \1{}{e \in \rnd{D}_t} \right. \nonumber \\ \qquad &\left. + \sum_{e' \in B_0} \sum_{t=1}^{n/K} 4c_{n,\rnd{T}_{t-1}(e')} \1{}{e' \in \rnd{B}_t} \right) \nonumber \end{align} Similar to the proof of $\cmucb1$, we substitute for the confidence intervals using \eqref{eq:ucb1}. We then bound the first term using \eqref{eq:Ttestar bound cmucb2}, second term using \eqref{eq:optimal decision}, and third term using \eqref{eq:decision b0 case2}. \begin{align} (K-1) &\sum_{t=1}^{n/K} \left( \sum_{e^\ast \in A^\ast} \bar{w}(e^\ast) - \sum_{e' \in \rnd{B}_t} \bar{w}(e')\right) \nonumber \\ \le (K-1) &\left( \sum_{e^\ast \in A^\ast} \frac{48 \log n}{\Delta^\ast_{e^\ast,\min}} + \sum_{e \in E\setminus A^\ast \setminus B_0} \frac{36 \log n}{\Delta_{e,\min}} + \sum_{e' \in B_0} \frac{48 \log n}{\Delta'_{e',\min}} \right) \nonumber \\ + (K-1)&\left( K\sqrt{24 \log n} + L\sqrt{48 \log n} + K\sqrt{24 \log n} \right) \label{eq:regret bound second term cmucb2} \end{align} Adding \eqref{eq:regret bound first term}, \eqref{eq:regret bound second term cmucb2} and the contribution from the failure event $\bar{\mathcal{E}}$ yields the upper bound in the theorem statement. 
\end{proof}
\section{Conclusions}
\label{sec:conclusions}
\vspace{-6pt}
In this paper, we study controlled exploration in combinatorial action spaces using interleaving, and precisely formulate the learning problem in the action space of matroids. Our conservative formulation is more suitable for combinatorial spaces than existing notions of conservatism. We propose an algorithm for solving our problem, ${\tt I\mhyphen UCB}$, and prove gap-dependent upper bounds on its regret. ${\tt I\mhyphen UCB}$ exploits the idea of interleaving, and hence can evaluate an action without ever taking that action. We leave open several questions of interest. First, we only study the case of $\alpha = 1 / K$. Our algorithm generalizes to higher values of $\alpha$ in uniform and partition matroids, because they satisfy the property that $\forall\,B_1,B_2\in \mathcal{B}$, there exists a bijection $\sigma_{B_1,B_2}: B_1 \to B_2$ such that $(B_1 \setminus X) \cup \sigma_{B_1,B_2}(X) \in \mathcal{B}$ $\forall X \subseteq B_1$. Matroids that satisfy this property are called \emph{strongly base-orderable}, and one can generalize ${\tt I\mhyphen UCB}$ and its analysis to these matroids for higher values of $\alpha$ (see \cref{sec:discussion}). It is not clear how to extend our results beyond $\alpha = 1 / K$ when the matroid is not strongly base-orderable. Second, we exploit the modularity of our reward function. In general, it may not be possible to build unbiased estimators with interleaving. For example, clicks are known to be position-biased, and click models that take this into account have non-linear reward functions \cite{chuklin2015click}. But it may be possible to build biased estimators with the right bias, such that a more attractive item never appears to be less attractive than a less attractive item \cite{zoghi2017online}. Third, \cref{lem:bijectiveexchange} only guarantees the \emph{existence} of a bijection, but it is not constructive.
The construction is straightforward for uniform and partition matroids in our experiments. Fourth, we also leave open the question of a lower bound. Finally, note that our new analysis based on \cref{lem:bijectiveexchange} significantly simplifies the original analysis of OMM in \citet{kveton2014matroid}. \section{Experiments} \label{sec:experiments} \vspace{-5pt} We conduct two experiments. In \cref{sec:regret scaling}, we validate that the regret of ${\tt I\mhyphen UCB}$ grows as per our upper bounds in \cref{sec:analysis}. In \cref{sec:recommender system experiment}, we solve two recommendation problems using ${\tt I\mhyphen UCB}$, and validate that its regret is no higher than $K-1$ times that of a non-conservative matroid bandit algorithm ${\tt OMM}$ \citep{kveton2014matroid}. ${\tt OMM}$ violates our conservative constraint multiple times. \begin{figure*}[t] \label{fig:experiments} \centering \includegraphics[width=1.8in]{Figures/RegretScaling} \includegraphics[width=1.8in]{Figures/TopK} \includegraphics[width=1.8in]{Figures/DiverseTopK} \\ \hspace{0.35in} (a) \hspace{1.6in} (b) \hspace{1.75in} (c) \vspace{-0.07in} \caption{\textbf{a}. The $n$-step regret of ${\tt I\mhyphen UCB1}$ in the synthetic problem in \cref{sec:regret scaling} as a function of $K$. \textbf{b}. The regret of ${\tt I\mhyphen UCB1}$, ${\tt I\mhyphen UCB2}$, and ${\tt OMM}$ in the top-$K$ recommendation problem in \cref{sec:recommender system experiment}. \textbf{c}. The regret of ${\tt I\mhyphen UCB1}$, ${\tt I\mhyphen UCB2}$, and ${\tt OMM}$ in the diverse top-$K$ recommendation problem in \cref{sec:recommender system experiment}.} \vspace{-12pt} \end{figure*} \vspace{-7pt} \subsection{Regret Scaling} \label{sec:regret scaling} \vspace{-5pt} The first experiment shows that the regret of ${\tt I\mhyphen UCB1}$ grows as suggested by our gap-dependent upper bound in \cref{thm:regret bound cmucb1}. We experiment with uniform matroids of rank $K$ where the ground set is $E = [K^2]$. 
The $i$-th entry of $\rnd{w}_t$, $\rnd{w}_t(i)$, is an independent Bernoulli variable with mean $\bar{w}(i) = 0.5 (1 - \Delta \1{}{i > K})$ for $\Delta \in (0, 1)$. The baseline set is the last $K$ items in $E$, $B_0 = [K^2] \setminus [K (K - 1)]$. The key property of our class of problems is that the regret of any item in $B_0$ is the same as that of any suboptimal item, and therefore the regret of ${\tt I\mhyphen UCB1}$ should be dominated by the gap-dependent term in \cref{thm:regret bound cmucb1}. This term is $O(K^3)$ because $L = K^2$. We vary $K$ and report the $n$-step regret over $n = 100\text{k}$ steps for multiple values of $\Delta$. \cref{fig:experiments}a shows log-log plots of the regret of ${\tt I\mhyphen UCB1}$ as a function of $K$ for three values of $\Delta$. The slopes of the plots are $2.99$ ($\Delta = 0.8$), $2.98$ ($\Delta = 0.4$), and $2.99$ ($\Delta = 0.2$). This means that the regret is cubic in $K$, as suggested by our upper bound.
\vspace{-7pt}
\subsection{Recommender System Experiment}
\label{sec:recommender system experiment}
\vspace{-5pt}
In the second experiment, we apply ${\tt I\mhyphen UCB}$ to the two recommendation problems discussed in \cref{sec:conmatbandit}. In each problem, we recommend the $K$ most attractive movies out of $L$ subject to a different matroid constraint. We experiment with the \emph{MovieLens} dataset from February 2003 \cite{movielens}, where $6$ thousand users gave one million ratings to $4$ thousand movies. Our learning problems are formulated as follows. The set $E$ consists of $200$ movies from the MovieLens dataset. The set is partitioned as $E = \bigcup_{i = 1}^{10} E_i$, where $E_i$ contains the $20$ most popular movies in the $i$-th most popular MovieLens movie genre that are not in $E_1, \dots, E_{i - 1}$. The weight of item $e$ at time $t$, $\rnd{w}_t(e)$, indicates whether item $e$ attracts the user at time $t$. We assume that $\rnd{w}_t(e) = 1$ if and only if the user rated item $e$ in our dataset.
This indicates that the user watched movie $e$ at some point in time, perhaps because the movie was attractive. The user at time $t$ is drawn randomly from all MovieLens users. The goal of the learning agent is to learn a list of items with the highest expected number of attractive movies, subject to a constraint. We experiment with two constraints. The first problem is a uniform matroid of rank $K = 10$. The optimal solution is the set of $K$ most attractive movies. This setting is also known as \emph{top-$K$ recommendations}. The baseline set $B_0$ consists of the $11$th to $20$th most attractive movies. The second problem is a partition matroid of rank $K = 10$, where the partition is $\{E_i\}_{i = 1}^{10}$. The optimal solution consists of the most attractive movie in each $E_i$. This setting can be viewed as \emph{diverse top-$K$ recommendations}. The baseline set $B_0$ consists of the second most attractive movie in each $E_i$. Our results are reported in Figures \ref{fig:experiments}b and \ref{fig:experiments}c. We observe several trends. First, the regret of all algorithms flattens over time, which shows that they learn near-optimal solutions. Second, the regret of ${\tt I\mhyphen UCB2}$ is higher than that of ${\tt I\mhyphen UCB1}$. This is because ${\tt I\mhyphen UCB2}$ is a variant of ${\tt I\mhyphen UCB1}$ that does not know the values of suboptimal items, and therefore needs to estimate them. Both of our algorithms satisfy our conservative constraint in \eqref{eq:conservative constraint} at each time $t$. Third, we observe that ${\tt OMM}$ achieves the lowest regret. But it also violates our conservative constraint. In Figures \ref{fig:experiments}b and \ref{fig:experiments}c, the numbers of violated constraints are more than $16$ thousand and $158$ thousand, respectively. In the latter problem, this is one violated constraint in every three actions on average.
Finally, note that the regret of ${\tt I\mhyphen UCB1}$ and ${\tt I\mhyphen UCB2}$ is less than $K-1$ times ($K=10$ here) the regret of ${\tt OMM}$, as predicted by our regret bounds.
\section{Introduction}
\label{sec:introduction}
Recommender systems are an integral component of many industries, with applications in content personalization, advertising, and landing page design \cite{resnick1997recommender, adomavicius2015context, broder2008computational}. Multi-armed bandit algorithms provide adaptive techniques for content recommendation, and although theoretically well-understood, they have not been widely adopted in production systems \citep{cremonesi2011looking,schnabel2018short}. This is primarily due to concerns that the output of the bandit algorithm can be sub-optimal or even disastrous, especially when the algorithm explores sub-optimal arms. To address this issue, most companies have a static recommendation engine in production that has been well-optimized and tested over many years, and a promising new policy is often evaluated using A/B testing \cite{siroker2013b} by allocating a small fraction $\alpha$ of the traffic to the new policy. When the utilities of actions are independent, this is a reasonable solution that allows the new policy to explore non-aggressively. Many recommendation problems, however, involve \emph{structured actions}, such as ranked lists of items (movies, products, etc.). For such actions, the total utility of the action can be decomposed into the utilities of its individual items. Therefore, it is conceivable that the new policy can be evaluated in a controlled and principled fashion by \emph{interleaving} items in the new and production actions, instead of splitting the traffic as is done in A/B testing. As a concrete example, consider the problem of recommending top-$K$ movies to a new visitor \citep{deshpande2004item}.
A company may have a production policy that recommends a default set of $K$ movies and performs reasonably well, but may intend to test a new algorithm that promises to learn better movies. The A/B testing method would show the new algorithm's recommendations to a visitor with probability $\alpha$. In the initial stages, the new algorithm is expected to explore a lot in order to learn, and may hurt engagement with a visitor who is shown a disastrous set of movies, just to learn that these movies are not good. However, an arguably better approach, which does not hurt any visitor's engagement as much and gathers the same feedback on average, is to show the default well-tested movies interleaved with an $\alpha$ fraction of new recommendations. A recent study by \citet{schnabel2018short} concluded that this latter approach is in fact better: \begin{quote} ``These findings indicate that for improving recommendation systems in practice, it is preferable to mix a limited amount of exploration into every impression – as opposed to having a few impressions that do pure exploration.'' \end{quote} In this paper, we formalize the above idea and study the general case where actions are \emph{exchangeable}, which is a mathematical formulation of the notion of interleaving. One fairly general and important class of exchangeable actions is the set of bases of a matroid, and this is the setting we focus on in our theorems and experiments. In particular, we study learning variants of maximizing an unknown modular function on a matroid subject to a conservative constraint. In the recommendation problem discussed above, our conservative constraint requires that the recommendations always be above a certain baseline quality. The question we wish to answer is: \emph{what is the price of being this conservative?} In this work, we answer this question and make five contributions.
First, we introduce the idea of \emph{conservative multi-armed bandits in combinatorial action spaces}, and formulate a conservative constraint that addresses the issues raised in \citet{schnabel2018short}. Existing conservative constraints for multi-armed bandit problems fail in this respect, and hence our constraint is more appropriate for combinatorial action spaces. Second, we propose interleaving as a solution, and show how it naturally leads to the idea of \emph{exchangeable} action spaces. We precisely formulate an online learning problem, \emph{conservative interleaving bandits}, in one such space, that of matroids. Third, we present \emph{Interleaving Upper Confidence Bound (${\tt I\mhyphen UCB}$)}, a computationally and sample-efficient algorithm for solving our problem. The algorithm satisfies our conservative constraint by design. Fourth, we prove gap-dependent upper bounds on its expected cumulative regret, and show that the regret scales logarithmically in the number of steps $n$, at most linearly in the number of items $L$, and at most quadratically in the number of items $K$ in any action. Finally, we evaluate ${\tt I\mhyphen UCB}$ on both synthetic and real-world problems. In the synthetic experiments, we validate the extra factor in our regret bounds, which is the price of being conservative. In the real-world experiments, we illustrate how to formulate and solve top-$K$ recommendation problems in our setting. To the best of our knowledge, this is the first work that studies conservatism in the context of combinatorial bandit problems.
\section{Related Work}
\label{sec:relatedwork}
\vspace{-8pt}
Online learning with matroids was introduced by \citet{kveton2014matroid}, and also studied by \citet{talebi2016optimal}. However, they do not consider any notion of conservatism. Our ${\tt I\mhyphen UCB}$ algorithm borrows ideas and the {\sc MaxBasis} method from their algorithm.
Conservatism in online learning was introduced by \citet{wu2016conservative}. They consider the standard multi-armed bandit problem with no structural assumptions about the actions. Their constraint is cumulative, and this allows the learner to take bad actions once in a while, but our instantaneous constraint \eqref{eq:conservative constraint} explicitly forbids this by design. However, note that our setting and algorithm apply to combinatorial action spaces, and hence are less general. \citet{kazerouni2017conservative} study conservatism in linear bandits. Their constraint is also cumulative; furthermore, the time complexity of their algorithm grows with time when the rewards of the baseline policy are unknown. ${\tt I\mhyphen UCB}$ is efficient because it exploits the matroid structure of the action space. \citet{bastani2017exploiting} study contextual bandits and propose diversity assumptions on the environment. Intuitively, if contexts vary a lot over time, the environment explores on the learner's behalf, and the learner need not explore. In our setting, the learner actively explores, albeit in a constrained fashion. \citet{radlinski2006minimally} propose randomizing the order of presented items to estimate their true relevance in the presence of item and position biases. While their algorithm guarantees that the quality of the presented items is unaffected, it does not learn a better policy. The idea of interleaving has been used to evaluate information retrieval systems and \citet{chapelle2012large} validate its efficacy, but they too do not learn a better policy. Our algorithm learns a better policy, as seen in our regret plots. While we do not consider item and position biases in this work, we hope to do so in future work.
\section{Setting} \label{sec:setting} We focus on linear reward functions and formulate our learning problem as a stochastic combinatorial semi-bandit \citep{kveton2015tight, gai2012combinatorial, chen2013combinatorial}, which we first review in \cref{sec:semi-bandit}. Stochastic combinatorial semi-bandits have been used for recommendation problems before \citep{kveton2014learning, kveton2014matroid}. In \cref{sec:exchangeability}, we motivate our notion of conservativeness, and suggest interleaving as a solution, which can be mathematically formulated using exchangeable action spaces. Finally, in \cref{sec:conmatbandit}, we show that actions that are bases of a matroid are exchangeable, and phrase our problem using the terminology of matroids. To simplify exposition, we write all random variables in bold. We use $[K]$ to denote the set $\{1,\dots,K\}$. \subsection{Stochastic Combinatorial Semi-Bandits} \label{sec:semi-bandit} A \emph{stochastic combinatorial semi-bandit} \citep{kveton2015tight, gai2012combinatorial, chen2013combinatorial} is a tuple $(E, \mathcal{B}, P)$, where $E=[L]$ is a finite set of $L$ items, $\mathcal{B} \subseteq \Pi_K(E)$ is a non-empty set of feasible subsets of $E$ of size $K$, and $P$ is a probability distribution over a unit cube $[0,1]^E$. Here $\Pi_K(E)$ is the set of all $K$-permutations of $E$. Let $(\rnd{w}_t)_{t = 1}^n$ be an i.i.d. sequence of $n$ weights drawn according to $P$, where $\rnd{w}_t(e)$ is the weight of item $e \in E$ at time $t$. The learning agent interacts with our problem as follows. At time $t$, it takes an action $\rnd{A}_t \in \mathcal{B}$, which is a set of items from $E$. The reward for taking the action is $f(\rnd{A}_t, \rnd{w}_t)$, where $f(A,w) = \sum_{e \in A} w(e)$ is the sum of the weights of items in $A$ in weight vector $w$. After taking action $\rnd{A}_t$, the agent observes the weight $\rnd{w}_t(e)$ for each item $e \in \rnd{A}_t$. 
This model of feedback is known as \emph{semi-bandit} \cite{audibert2013regret}. The learning agent is evaluated by its \emph{expected $n$-step regret} $R(n) = \E{}{\sum_{t = 1}^n R(\rnd{A}_t, \rnd{w}_t)}$, where $R(\rnd{A}_t, \rnd{w}_t) = f(A_\ast, \rnd{w}_t) - f(\rnd{A}_t, \rnd{w}_t)$ is the \emph{instantaneous stochastic regret} of the agent at time $t$ and $A_\ast = \argmax_{A \in \mathcal{B}} f(A, \bar{w})$ is the \emph{maximum weight action in hindsight}.
\subsection{Conservativeness and Exchangeable Actions}
\label{sec:exchangeability}
The idea of controlled exploration is not new. \citet{wu2016conservative} studied conservatism in multi-armed bandits, and their learning agent is constrained to have its cumulative reward no worse than a $1 - \alpha$ fraction of that of the default action. In this sense, their conservative constraint is \emph{cumulative}. Roughly speaking, the constraint means that the learning agent can explore once in every $1 / \alpha$ steps. A/B testing can also be thought of as the solution to a constrained exploration problem where the constraint is \emph{instantaneous} (instead of cumulative); here the constraint requires that the actions at any time be at least $(1-\alpha)$ times as good as the default action \emph{in expectation}, where the expectation is taken over multiple runs of the A/B test. When actions are combinatorial, as in the top-$K$ movie recommendation problem in \cref{sec:introduction}, both of these forms of conservatism allow the learning agent to occasionally take actions containing items that are all disastrous (for example, have very low popularity). We consider a stricter conservative constraint that explicitly forbids this possibility. We state our conservative constraint next. Let $K$ be the number of items in any action. Let $B_0$ be the default baseline action, where $|B_0|=K$.
Our constraint requires that at any time $t$, the action $\rnd{A}_t$ should be at least as good as the baseline set $B_0$, in the sense that most items in $\rnd{A}_t$ are at least as good as those in $B_0$. Mathematically, we require that there exists a bijection $\rho_{\rnd{A}_t,B_0}: \rnd{A}_t \rightarrow B_0$ such that \begin{align} \sum_{e \in \rnd{A}_t} \1{}{\bar{w}(e) \geq \bar{w}(\rho_{\rnd{A}_t,B_0}(e))} \geq (1 - \alpha) K \label{eq:conservative constraint} \end{align} holds with high probability at any time $t$. That is, the items in $\rnd{A}_t$ and $B_0$ can be matched such that no more than an $\alpha$ fraction of the items in $\rnd{A}_t$ have a lower expected reward than their matched items in $B_0$. For simplicity of exposition, we only consider the special case of $\alpha = 1 / K$ in this work. We discuss the case $\alpha > 1/K$ in \cref{sec:discussion}. Given an algorithm that explores and suggests new actions that could potentially be disastrous, a simple way to satisfy \eqref{eq:conservative constraint} is to \emph{interleave} most items from the default action with a few from the new action. This is possible if the set of feasible actions $\mathcal{B} \subseteq 2^E$ is \emph{exchangeable}, which we define next. \begin{definition} \label{defn:exchangeable} A set $\mathcal{B} \subseteq 2^E$ is exchangeable if for any two actions $A_1,A_2 \in \mathcal{B}$, there exists a bijection $\rho_{A_1,A_2}:A_1 \rightarrow A_2$ such that \begin{align} \forall\,e \in A_1: A_1 \setminus \{e\} \cup \{\rho_{A_1,A_2}(e)\} \in \mathcal{B}\,. \label{eq:exchangeability definition} \end{align} \end{definition} In our motivating top-$K$ movie recommendation example, $A_1$ is the default action (recommendation) and $A_2$ is the new action, with $|A_1|=|A_2|=K$. If the action space is exchangeable, we can explore all items in a new action $A_2$ over $K$ time steps by taking $K$ interleaved actions.
Each interleaved action substitutes an item $e \in A_1$ with the item $\rho_{A_1,A_2}(e) \in A_2$. \vspace{-5pt} \subsection{Conservative Interleaving Bandits} \label{sec:conmatbandit} \vspace{-5pt} In this section, we consider an important exchangeable action space, the bases of a matroid. A matroid $M$ is a pair $(E,\mathcal{B})$ where $E=[L]$ is a finite set, and $\mathcal{B}\subseteq \Pi_K(E)$ is a collection of subsets of $E$ called \emph{bases} \citep{welsh1976matroid}. $K$ is called the rank of the matroid. Matroids have many interesting properties \citep{oxley2006matroid}; the one that is relevant to our work is the \emph{bijective exchange lemma for matroids} \cite{brualdi1969comments}, which states that the collection $\mathcal{B}$ is exchangeable. \begin{lemma}[Bijective Exchange Lemma] \label{lem:bijectiveexchange} For any two bases $B_1, B_2 \in \mathcal{B}$, there exists a bijection $\rho_{B_1,B_2}: B_1 \rightarrow B_2$ such that $(B_1 \setminus \{e\}) \cup \{\rho_{B_1,B_2}(e)\}$ is a basis for any $e \in B_1$. \end{lemma} The recommendations for the top-$K$ movie problem in \cref{sec:introduction} are bases of a \emph{uniform matroid}, which is a matroid whose items $E$ are movies and whose feasible sets are all $K$-permutations of these items, i.e., $\mathcal{B} = \Pi_K(E)$. One can also enforce diversity in the recommendations by formulating actions as the feasible set of a \emph{partition matroid}, which is defined as follows. Let $\mathcal{P}_1, \dots, \mathcal{P}_K$ be a partition of $[L]$. The feasible set of the partition matroid is $\mathcal{B} = \{A \in \Pi_K([L]): A(1) \in \mathcal{P}_1, \dots, A(K) \in \mathcal{P}_K\}$. The members of the partition in this case correspond to the movie categories, and the partition matroid ensures that the recommended movies contain a movie from every category. In both the above matroids, $\rho_{A,B}$ maps the $k$-th item in $A$, $A(k)$, to the $k$-th item in $B$, $B(k)$. 
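To make the positional bijection concrete, the following sketch (illustrative Python, not code from the paper; the item labels are hypothetical) constructs the $K$ interleaved actions induced by $\rho_{A_1,A_2}$, here for a rank-$3$ uniform matroid with disjoint bases so that every exchange yields a valid basis:

```python
# Illustrative sketch (not the paper's code): the positional exchange bijection
# rho_{A1,A2} maps the k-th item of basis A1 to the k-th item of basis A2.
# Bases are ordered K-tuples, matching the paper's Pi_K(E) formulation.

def interleaved_actions(A1, A2):
    """Return the K interleaved actions: each substitutes a single item
    A1[k] with its image rho(A1[k]) = A2[k], keeping the other K-1 items."""
    K = len(A1)
    assert len(A2) == K
    return [tuple(A2[j] if j == k else A1[j] for j in range(K))
            for k in range(K)]

# Rank-3 uniform matroid: A1 is the default basis, A2 a new one. The two are
# disjoint here, so each exchange produces 3 distinct items, i.e., a basis.
A1, A2 = (0, 1, 2), (3, 4, 5)
print(interleaved_actions(A1, A2))  # [(3, 1, 2), (0, 4, 2), (0, 1, 5)]
```

Over $K$ consecutive steps, these actions collectively observe every item of the new basis $A_2$, while each individual action retains $K-1$ items of the default basis $A_1$.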
We study both of these examples in our experiments (\cref{sec:experiments}). In addition to these examples, many important combinatorial optimization problems can be formulated as optimization on a matroid. We formulate our learning problem using the terminology of matroids as a conservative interleaving bandit. A \emph{conservative interleaving bandit} is a tuple $(E, \mathcal{B}, P, B_0, \alpha)$, where $E=[L]$ is a set of items, $\mathcal{B} \subseteq \Pi_K(E)$ is the collection of bases, $P$ is a probability distribution over the weights $\rnd{w} \in \mathbb{R}^L$ of items $E$, the \emph{input baseline set} $B_0\in \mathcal{B}$ is a basis, and $\alpha \in [0,1]$ is a tolerance parameter. We assume that the matroid $(E, \mathcal{B})$, input baseline set $B_0$, and tolerance $\alpha$ are known and that the distribution $P$ is unknown. Without loss of generality, we assume that the support of $P$ is a bounded subset of $[0,1]^L$. We denote the expected weights of items by $\bar{w} = \mathbb{E}[\rnd{w}]$.
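As a concrete illustration of the constraint in \eqref{eq:conservative constraint}, the sketch below (illustrative Python, not the paper's code; the mean weights are hypothetical) checks the constraint under the positional bijection used for uniform and partition matroids, in the special case $\alpha = 1/K$, where at most one matched position may be worse than the baseline:

```python
# Illustrative check of the conservative constraint under the positional
# bijection rho(A_t[k]) = B_0[k]; max_bad plays the role of alpha*K,
# so max_bad = 1 corresponds to the paper's special case alpha = 1/K.

def satisfies_constraint(A_t, B_0, w_bar, max_bad=1):
    """True iff at most max_bad positions of A_t have a lower mean weight
    than the baseline item matched to them."""
    bad = sum(w_bar[a] < w_bar[b] for a, b in zip(A_t, B_0))
    return bad <= max_bad

# Hypothetical mean weights over E = {0, ..., 4}; B_0 is the baseline basis
# of a rank-3 uniform matroid.
w_bar = {0: 0.9, 1: 0.8, 2: 0.7, 3: 0.6, 4: 0.3}
B_0 = (1, 2, 3)

print(satisfies_constraint((0, 2, 4), B_0, w_bar))  # True: one worse position
print(satisfies_constraint((4, 3, 2), B_0, w_bar))  # False: two worse positions
```

This also shows why interleaving is conservative by construction for $\alpha = 1/K$: an interleaved action swaps a single baseline item for a new one, so at most one matched position can be worse than in $B_0$.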
\section{Introduction}\label{sec:intro}
In the standard Maxwell theory of vacuum electrodynamics, the field strength tensor $F_{ab}$ (which comprises the vector fields $\vec{E}$ and $\vec{B}$) is related to the excitation tensor (which comprises the vector fields $\vec{D}$ and $\vec{H}$) by a linear constitutive law. However, there are good reasons to assume that this law has to be replaced by a nonlinear relation for very strong electromagnetic fields. In the course of history, several such nonlinear modifications of the vacuum Maxwell theory have been suggested. One of the best known examples is the theory of Born and Infeld \cite{BornInfeld1934} from 1934. Its introduction was motivated by the observation that in the standard Maxwell vacuum theory the field energy in an arbitrarily small ball around a point charge is infinite, which leads to an infinite self-force, and that this infinity might be overcome if one modifies the constitutive law of the vacuum in a nonlinear fashion. The Born-Infeld theory introduces a new hypothetical constant of Nature, $b_0$, with the dimension of a (magnetic) field strength. In the limit $b_0 \to \infty$ the theory approaches the standard Maxwell theory, i.e., the fact that the latter is in good agreement with experiments can be understood if one assumes that $b_0$ is very large. On the basis of the Born-Infeld theory one would have to expect measurable deviations from the vacuum Maxwell theory in electromagnetic fields that are of the same order of magnitude as $b_0$. Although not exactly in the mainstream of physics, the Born-Infeld theory was always taken seriously by many scientists. In the late 1990s this theory received an additional strong push when Tseytlin \cite{Tseytlin1999} realized that it can be derived, as an effective theory, from certain kinds of string theories. Another very well known nonlinear modification of the vacuum Maxwell theory is the Heisenberg-Euler theory \cite{HeisenbergEuler1936} from 1936.
It is a classical field theory which comes about, as an effective theory, if one-loop corrections from quantum electrodynamics are taken into account. In contrast to the Born-Infeld theory, it does not involve any new hypothetical constant of Nature, i.e., it numerically predicts how strong an electromagnetic field has to be in order to produce measurable deviations from the standard vacuum Maxwell theory. For several years now, (magnetic) fields of this strength have been produced in the laboratory. The Born-Infeld theory and the Heisenberg-Euler theory are Lorentz invariant, they are gauge invariant, and they derive from a Lagrangian. The entire class of theories that share these properties was systematically studied by Pleba{\'n}ski \cite{Plebanski1970}, with important early contributions by Boillat \cite{Boillat1970}. We refer to it as the \emph{Pleba{\'n}ski class} of electromagnetic theories. The Born-Infeld theory and the Heisenberg-Euler theory are the best known examples in this class, but there are many more. In particular, there are theories of the Pleba{\'n}ski class that allow for regular black-hole solutions when coupled to Einstein's field equations. The first two examples were found by Ay{\'o}n-Beato and Garc{\'\i}a \cite{AyonGarcia1998,AyonGarcia1999}. It is a general feature of nonlinear theories that the superposition principle is no longer satisfied. As a consequence, the propagation of light is influenced by electromagnetic background fields. This effect is known as ``light-by-light scattering'' and it was observed in 1997, see Burke et al. \cite{BurkeEtAl1997}, in good agreement with the prediction by the Heisenberg-Euler theory. Another effect predicted by most theories of the Pleba{\'n}ski class, with the notable exception of the Born-Infeld theory, is birefringence in vacuo.
This means that, according to these theories, a light beam that enters a region with a sufficiently strong electromagnetic background field splits into two beams, just as a light beam entering a crystal does according to ordinary optics, with the two beams corresponding to two different polarization states. Such a birefringence in vacuo is predicted, in particular, by the Heisenberg-Euler theory. Experimentalists have been trying to observe this effect for several years, and it is generally expected that these attempts will be successful soon, see in particular the most recent status report on the so-called PVLAS experiment by Della Valle et al. \cite{DellavalleEtAl2016}. This experiment aims to measure not only the birefringence in vacuo but also the dichroism predicted by the Heisenberg-Euler theory. The latter is the effect that the two polarization states have different absorption coefficients, which results in an apparent rotation of the polarization plane. Finally, we mention that there are also attempts to verify effects from nonlinear electrodynamics with astrophysical observations. A particularly promising idea is to observe the birefringence in vacuo when light passes through a very strong magnetic field, such as in the neighborhood of a magnetar. A first observation that might indicate such an effect has already been made, see Mignani et al. \cite{MignaniEtAl2017}. We emphasize that experiments searching for birefringence in vacuo cannot be used as tests of the Born-Infeld theory because in the latter there is no such effect. As an alternative, the Born-Infeld theory may be tested with the help of Michelson interferometry.
Such an experiment was discussed for the Heisenberg-Euler theory by Boer and van Holten \cite{BoerHolten2002}, D{\"o}brich and Gies \cite{DoebrichGies2009}, Zavattini and Calloni \cite{ZavattiniCalloni2009} and Grote \cite{Grote2015}, for the Heisenberg-Euler and the Born-Infeld theories by Denisov, Krivchenkov and Kravtsov \cite{DenisovKrivchenkovKravtsov2004}, and in detail for a general theory of the Pleba{\'n}ski class by Schellstede et al. \cite{SchellstedePerlickLaemmerzahl2015}. Moreover, there are suggestions to test the Born-Infeld theory with waveguides, see Ferraro \cite{Ferraro2007}, or with fluid motions in a magnetic background field, see Dereli and Tucker \cite{DereliTucker2010}. As of now, none of these experiments has actually been carried out. In this paper we want to study, for a general theory of the Pleba{\'n}ski class, the effect of a background field on the transport law of the polarization plane along a light ray. This will give us a new way of testing these theories, in particular the Born-Infeld theory, experimentally. We emphasize that this is to be distinguished from all the experiments mentioned above. In particular, it is not to be confused with the planned observation of dichroism by the PVLAS experiment: The latter is an effect on the absorption of light, depending on the polarization state. Here we want to investigate the direct effect of an electromagnetic background field on the polarization plane. To that end we start out from an approximate-plane-harmonic-wave ansatz, taking the generation of higher harmonics into account. By this ansatz the electromagnetic field is written as an asymptotic series with respect to a parameter $\alpha$. Sending $\alpha$ to zero corresponds to sending the frequency to infinity. We will see that we have to consider the generalized Maxwell equations to zeroth order and to first order with respect to $\alpha$ in order to determine the transport law for the polarization plane.
Earlier studies of the high-frequency limit in nonlinear theories were restricted to the derivation of the eikonal equation from the zeroth order of the generalized Maxwell equations. It is well known that, as a result, one finds that the light rays are the null geodesics of two optical metrics; this was first shown by Novello et al. \cite{Novelloetal2000} and later, in different representations, by Obukhov and Rubilar \cite{ObukhovRubilar2002} and by Schellstede et al. \cite{SchellstedePerlickLaemmerzahl2015}. To the best of our knowledge, the transport law of the polarization plane has not yet been considered for an arbitrary theory of the Pleba{\'n}ski class. We will not specify the nonlinear electromagnetic theory, apart from the fact that we require it to be of the Pleba{\'n}ski class. However, we mention that not all theories of this type are to be considered as physically meaningful: Some of them violate causality in the sense that the light cones of the optical metrics are not inside the light cone of the spacetime metric, see Schellstede et al. \cite{SchellstedePerlickLaemmerzahl2016}. Also, not all of them give rise to a well-posed initial-value problem, see Abalos et al. \cite{Abalosetal2015}. The paper is organized as follows. In Section \ref{sec:Pleb} we briefly review the basic features of theories of the Pleba{\'n}ski class. In Section \ref{sec:phw} we introduce our approximate-plane-wave ansatz on an arbitrary general-relativistic spacetime and for an arbitrary electromagnetic background field. In Section \ref{sec:zerothorder} we evaluate the generalized Maxwell equations to zeroth order which gives us the eikonal equation and an algebraic condition on the polarization plane. In Section \ref{sec:firstorder} we consider the generalized Maxwell equations to first order and discuss the additional conditions they give us on the polarization plane. In Section \ref{sec:BI} we exemplify the results with the Born-Infeld theory.
\section{The Pleba{\'n}ski class of non-linear electrodynamical theories}\label{sec:Pleb} We consider a general-relativistic spacetime, i.e., an oriented 4-dimensional manifold with a metric tensor $g_{ab}$ of Lorentzian signature. The covariant derivative associated with the Levi-Civita connection of the metric will be denoted $\nabla _a$. Latin indices take values 0,1,2,3 and are lowered with $g_{ab}$ and raised with its inverse $g^{bc}$. The Pleba{\'n}ski class \cite{Plebanski1970} consists of all non-linear electrodynamical theories that derive from an action of the form \begin{equation}\label{eq:action} S[A_c]=\frac{1}{4\pi c}\int _M \left(\mathcal{L}(F,G)+ \frac{4\pi}{c}\,j^aA_a \right)\, \sqrt{| \mathrm{det}(g_{bc})|} \; d^4x \,. \end{equation} Here $M$ is a domain of the spacetime, $d^4x=dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3$, $j^a$ is a \emph{given} current density, $A_a$ is the electromagnetic potential, \begin{equation}\label{eq:Fab} F_{ab} = \nabla _a A_b-\nabla _b A_a \end{equation} is the electromagnetic field strength and $\mathcal{L}$ is the Lagrangian for the electromagnetic field. It is assumed that the latter depends only on the two invariants \begin{equation}\label{eq:FG} F=\frac{1}{2}\,F_{ab}F^{ab} \quad \text{and} \quad G=-\frac{1}{4}\,F_{ab} \, ^{\star \!} F^{ab} \, . \end{equation} Here and in the following, $^{\star \,}$denotes the Hodge star operator, i.e., \begin{equation} ^{\star \!} F_{ab}\, = \, \frac{1}{2} \, \varepsilon_{abcd}F^{cd} \end{equation} where $\varepsilon _{abcd}$ is the totally antisymmetric Levi-Civita tensor field (volume form) associated with the spacetime metric. By (\ref{eq:Fab}) the homogeneous Maxwell equation is automatically satisfied, \begin{equation}\label{eq:Max1} \varepsilon ^{abcd} \nabla_b F_{cd}=0 \, . 
\end{equation} Requiring that the variational derivative of the action (\ref{eq:action}) with respect to the potential $A_c$ vanishes, for all compact domains $M$ and all variations that keep $A_c$ fixed on the boundary of $M$, leads to the inhomogeneous Maxwell equation, \begin{equation}\label{eq:Max2} \nabla_a H^{ab} \, = \, - \, \frac{4\pi}{c}\,j^b\,, \end{equation} where \begin{equation}\label{eq:const} H^{ab}=-\frac{\partial \mathcal{L}}{\partial F_{ab}} = -2\, \mathcal{L}_F \, F^{ab}+\mathcal{L}_G \, ^{\star \!} F^{ab} \end{equation} is the electromagnetic excitation. For the sake of brevity, we write \begin{equation}\label{eq:LFLG} \mathcal{L}_F = \dfrac{\partial \mathcal{L}}{\partial F} \, , \quad \mathcal{L}_G = \dfrac{\partial \mathcal{L}}{\partial G} \end{equation} and \begin{equation}\label{eq:LFLG2} \mathcal{L}_{FF} = \dfrac{\partial ^2 \mathcal{L}}{\partial F ^2} \, , \quad \mathcal{L}_{GG} = \dfrac{\partial ^2 \mathcal{L}}{\partial G ^2} \, , \quad \mathcal{L}_{FG} = \dfrac{\partial ^2 \mathcal{L}}{\partial F \partial G} \, . \end{equation} It is the constitutive law (\ref{eq:const}) that distinguishes different theories, while the Maxwell equations (\ref{eq:Max1}) and (\ref{eq:Max2}) are always the same. Each particular theory of the Pleba{\'n}ski class is characterized by a particular Lagrangian and, thereby, by a particular constitutive law. Let us mention the two most important examples: For the Born-Infeld theory \cite{BornInfeld1934}, the Lagrangian reads \begin{equation}\label{eq:LBI} \mathcal{L} = b_0^2 - b_0^2 \sqrt{1+ \dfrac{F}{b_0^2}-\dfrac{G^2}{b_0^4}} \end{equation} where $b_0$ is a hypothetical constant of Nature with the dimension of a magnetic field strength. For $b_0 \to \infty$ the Born-Infeld theory reproduces the standard Maxwell vacuum theory. 
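As a short side calculation (not needed in the following, but instructive), the Maxwell limit can be made explicit by expanding (\ref{eq:LBI}) with $\sqrt{1+x}=1+\tfrac{x}{2}-\tfrac{x^2}{8}+O(x^3)$, where $x = F/b_0^2 - G^2/b_0^4$:
\[
\mathcal{L} \, = \, - \, \dfrac{F}{2} \, + \, \dfrac{F^2 + 4\, G^2}{8 \, b_0^2} \, + \, O \big( b_0^{-4} \big) \, .
\]
The leading term is the Maxwell Lagrangian, and the nonlinear corrections are suppressed by $b_0^{-2}$.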
For the Heisenberg-Euler theory \cite{HeisenbergEuler1936}, \begin{equation}\label{eq:LHE} \mathcal{L} = E_0^2 \left( - \dfrac{1}{2} \dfrac{F}{E_0^2} + \Lambda \Big( \dfrac{F^2}{E_0^4} + 7 \dfrac{G^2}{E_0^4} \Big) + \, \dots \right) \end{equation} where $E_0=m^2c^4/e^3$ and $\Lambda = \hbar c/(90 \pi e^2)$. Here $m$ is the electron mass, $e$ is the electron charge, $\hbar$ is the reduced Planck constant and the ellipsis in (\ref{eq:LHE}) stands for terms of third and higher order in $F$ and $G$. \section{Approximate-plane-harmonic-wave~ansatz}\label{sec:phw} An approximate-plane-harmonic wave is a one-parameter family $F^{\alpha}_{cd}$ of field strength tensors, depending on a real parameter $\alpha$, of the form \begin{equation}\label{eq:aphwF} F^{\alpha}_{cd} = F_{cd} + \alpha \, F_{cd}^{(1)} + \sum _{K=2}^{\infty} \alpha ^K F_{cd} ^{(K)} \end{equation} where \begin{equation}\label{eq:f1} F_{cd}^{(1)} = \mathrm{Re} \Big\{ e^{iS/\alpha} f_{cd} ^{(11)} \Big\} \end{equation} and \begin{equation}\label{eq:f} F_{cd}^{(K)} = \sum _{\tilde{K} =0} ^{K} \mathrm{Re} \Big\{ e^{i \tilde{K} S/\alpha} f_{cd} ^{(K \tilde{K})} \Big\} \, \quad \text{for} \; K \ge 2 \, . \end{equation} Here $F_{cd}$ is a given electromagnetic background field that is independent of $\alpha$, $S$ is a real-valued function and $f^{(K \tilde{K})}_{cd}$ is a complex-valued antisymmetric tensor field for each pair of integers $(K, \tilde{K})$ that occurs. We assume that, on the spacetime region considered, the tensor fields $\nabla _a S$ and $f_{cd}^{(11)}$ have no zeros. The series is to be understood as an asymptotic series, not as a convergent series. The function $S$ is called the \emph{eikonal function}. On a sufficiently small neighborhood, the field $F^{(1)}_{ab}$ is approximately a plane harmonic wave: The surfaces $S= \mathrm{constant}$ are the wave-fronts and the gradient of $S$ divided by $\alpha$ defines the wave four-covector.
Correspondingly, the frequency measured by an observer with four-velocity $U^a$ is $\omega = U^a \nabla _aS / \alpha$. The limit $\alpha \to 0$ corresponds to sending the frequency to infinity. The idea is to feed the ansatz (\ref{eq:aphwF}) into Maxwell's equations, to solve these equations iteratively order by order in $\alpha$ and, in this way, to asymptotically approach a one-parameter family of exact solutions. Our ansatz (\ref{eq:aphwF}) is a generalization of the standard approximate-plane-harmonic-wave ansatz. The latter goes back to Rudolf Luneburg and is detailed, for wave propagation in linear and isotropic media, e.g., in the textbook by Kline and Kay \cite{KlineKay1965}. Our ansatz is more general in two respects: Firstly, we take a non-zero background field into account. In a linear theory, it suffices to consider the case with zero background field because, by the superposition principle, the propagation of the approximate-plane-harmonic wave is independent of a background field. In a non-linear theory, however, the propagation is influenced by a background field. Secondly, the higher-order fields, $F_{cd}^{(K)}$ for $K \ge 2$, come not only with the same frequency as the first-order field $F_{cd}^{(1)}$ (the terms with $\tilde{K} = 1$) but also with integer multiples of this frequency (the terms with $\tilde{K} \neq 1$). This reflects the \emph{generation of higher harmonics} which is well-known from optics in non-linear media. It should not come as a surprise that it has to be taken into account also in the non-linear vacuum theories of the Pleba{\'n}ski class. Higher harmonics play no role if one considers Maxwell's equations only to the lowest order (i.e., $\alpha ^0$). This is the reason why it was not necessary to take them into account in \cite{Perlick2011} where the eikonal equation was derived for Maxwell's equations with a local but otherwise arbitrary constitutive law.
In the present paper, however, we want to derive the transport law for the polarization plane which requires considering Maxwell's equations also to the next order (i.e. $\alpha ^1$). We will see that these equations cannot in general be solved if we set all terms $f_{cd}^{(K \tilde{K})}$ with $\tilde{K} \neq 1$ equal to zero. For our purpose we need the series (\ref{eq:aphwF}) up to second order, \begin{gather} F^{\alpha}_{cd} = F_{cd} + \alpha \mathrm{Re} \Big\{ e^{iS/ \alpha} f_{cd}^{(11)} \Big\} \qquad \nonumber \\ + \alpha ^2 \mathrm{Re} \Big\{ f_{cd}^{(20)} + e^{iS/ \alpha} f_{cd}^{(21)} + e^{2iS/ \alpha} f_{cd}^{(22)} \Big\} + \, \dots \label{eq:aphwF2} \end{gather} which includes \emph{frequency doubling} $(\tilde{K}=2)$ and the generation of a non-oscillatory mode, known from non-linear media as \emph{optical rectification} $(\tilde{K}=0)$. The homogeneous Maxwell equation (\ref{eq:Max1}) is automatically satisfied for all $\alpha$ if we assume that (\ref{eq:aphwF2}) derives from a potential, \begin{equation}\label{eq:Aalpha} F^{\alpha}_{cd} = \nabla _c A^{\alpha}_d- \nabla _d A^{\alpha}_c \, . 
\end{equation} It is easy to see that such a potential (up to an arbitrary gradient term) must be of the form \begin{gather} A^{\alpha}_{d} = A_{d} + \alpha ^2 \mathrm{Re} \Big\{ a_d^{(10)} + e^{iS/ \alpha} a_d^{(11)} \Big\} \qquad \quad \nonumber \\ + \alpha ^3 \mathrm{Re} \Big\{ a_d^{(20)} + e^{iS/ \alpha} a_d^{(21)} + e^{2iS/ \alpha} a_d^{(22)} \Big\} + \, \dots \label{eq:aphwA0} \end{gather} Then (\ref{eq:Aalpha}) holds to zeroth order in $\alpha$ with \begin{equation}\label{eq:Fa} F_{cd} = \nabla _c A_d- \nabla _d A_c \, , \end{equation} to first order with \begin{equation}\label{eq:f11a} f_{cd}^{(11)} = i \big( \nabla _c S \, a^{(11)}_d-\nabla _d S \, a^{(11)}_c \big) \, , \end{equation} and to second order with \begin{gather} f_{cd}^{(20)} = \nabla _c a^{(10)}_d - \nabla _d a^{(10)}_c \, , \label{eq:f20a} \\ f_{cd}^{(21)} = \nabla _c a^{(11)}_d - \nabla _d a^{(11)}_c \nonumber \\ + i \big( \nabla_c S \, a_d^{(21)} - \nabla_d S \, a_c^{(21)} \big) \, , \label{eq:f21a} \\ f_{cd}^{(22)} = 2 \, i \big( \nabla_c S \, a_d^{(22)} - \nabla_d S \, a_c^{(22)} \big) \, . \label{eq:f22a} \end{gather} Here we have used our assumption that the gradient of $S$ has no zeros which implies that $S \neq 0$ almost everywhere and that, accordingly, the functions $1$, $\mathrm{sin} \big( S(x)/\alpha \big)$, $\mathrm{cos} \big( S(x)/\alpha \big)$, $\mathrm{sin} \big( 2 S(x)/\alpha \big)$ and $\mathrm{cos} \big( 2 S(x)/\alpha \big)$ are linearly independent.
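The order counting behind (\ref{eq:aphwA0}) can be illustrated in a one-dimensional scalar model (an illustrative sketch only, not part of the formalism): each derivative hitting the phase $e^{iS/\alpha}$ lowers the order in $\alpha$ by one, so an oscillating potential term of order $\alpha^2$ contributes to the field strength at order $\alpha$, while the derivative of the slowly varying amplitude contributes at order $\alpha^2$.

```python
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
S = sp.Function('S')(x)   # eikonal function (placeholder)
a = sp.Function('a')(x)   # slowly varying amplitude (placeholder)

# scalar analogue of the oscillating potential term  alpha^2 Re{e^{iS/alpha} a}
A = alpha**2 * sp.cos(S / alpha) * a

# the "field strength" is the derivative of the potential
F_field = sp.diff(A, x).expand()

# differentiating the phase produces a term of order alpha^1 (the analogue of
# f^{(11)}), while the derivative of the amplitude stays at order alpha^2
expected = (-alpha * sp.sin(S / alpha) * sp.diff(S, x) * a
            + alpha**2 * sp.cos(S / alpha) * sp.diff(a, x))
assert sp.simplify(F_field - expected) == 0
print(F_field)
```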
Feeding the approximate-plane-harmonic wave (\ref{eq:aphwF}) into the constitutive law (\ref{eq:const}) gives, after a rather long but straightforward calculation, an excitation of the form \begin{gather}\label{eq:aphwH} H^{\alpha}_{ab} = H_{ab} + \alpha \, \mathrm{Re} \Big\{ e^{iS/\alpha} h^{(11)}_{ab} \Big\} \qquad \\ \nonumber + \alpha ^2 \, \mathrm{Re} \Big\{ h^{(20)}_{ab} + e^{iS/\alpha} h^{(21)}_{ab} + e^{i2S/\alpha} h^{(22)}_{ab} \Big\} + \, \dots \end{gather} The zeroth order term in (\ref{eq:aphwH}) is just the excitation of the background field, \begin{equation}\label{eq:aphwH0} H_{ab} = -2\, \mathcal{L}_F \, F_{ab}+\mathcal{L}_G {} ^{\star \!} F_{ab} \, , \end{equation} the first-order amplitude is \begin{equation}\label{eq:h11} h_{ab}^{(11)} = \dfrac{1}{2} \chi _{ab}{}^{cd} f_{cd}^{(11)} \, , \end{equation} and the second-order amplitudes are \begin{equation}\label{eq:h20} h_{ab}^{(20)} = \dfrac{1}{2} \chi _{ab}{}^{cd} f_{cd}^{(20)} + \dfrac{1}{2} \psi _{ab}{}^{cdef} f_{cd}^{(11)} \overline{f}{}_{ef}^{(11)}\, , \end{equation} \begin{equation}\label{eq:h21} h_{ab}^{(21)} = \dfrac{1}{2} \chi _{ab}{}^{cd} f_{cd}^{(21)} \, , \end{equation} \begin{equation}\label{eq:h22} h_{ab}^{(22)} = \dfrac{1}{2} \chi _{ab}{}^{cd} f_{cd}^{(22)} + \dfrac{1}{2} \psi _{ab}{}^{cdef} f_{cd}^{(11)} f_{ef}^{(11)}\, , \end{equation} with \begin{gather} \chi _{ab}{}^{cd} = \mathcal{L} _G \varepsilon _{ab}{}^{cd} - 2 \mathcal{L} _F \big( \delta _a^c \delta _b^d-\delta _a^d \delta _b^c \big) -4 \mathcal{L} _{FF} F_{ab}F^{cd} \nonumber \\ \label{eq:chi} + 2 \mathcal{L}_{FG} \big( F_{ab}{} ^{\star \!}F^{cd} +{} ^{\star \!}F_{ab} F^{cd} \big) - \mathcal{L}_{GG}{} ^{\star \!}F_{ab}{} ^{\star \!}F^{cd} \end{gather} and \begin{gather} \psi _{ab}{}^{cdef} = \dfrac{1}{4} \Big(-2 \mathcal{L}_{FF}F_{ab} +\mathcal{L}_{FG} {} ^{\star \!}F_{ab} \Big) \Big( g^{ce}g^{df}-g^{de}g^{cf} \Big) \nonumber \\ +\dfrac{1}{2} \Big( \delta _a^c \delta _b^d -\delta _b^c \delta _a^d \Big) \Big(-2
\mathcal{L}_{FF}F^{ef}+ \mathcal{L}_{FG} {} ^{\star \!}F^{ef} \Big) \nonumber \\ - \dfrac{1}{8} \Big(-2 \mathcal{L}_{FG}F_{ab} + \mathcal{L}_{GG} {} ^{\star \!}F_{ab} \Big) \varepsilon ^{cdef} \nonumber \\ - \dfrac{1}{4} \varepsilon _{ab}{}^{cd} \Big(-2 \mathcal{L}_{FG}F^{ef} +\mathcal{L}_{GG} {} ^{\star \!}F^{ef} \Big) \nonumber \\ + \dfrac{1}{2} \Big(-2 \mathcal{L}_{FFF}F_{ab} + \mathcal{L}_{FFG} {} ^{\star \!}F_{ab} \Big) F^{cd}F^{ef} \nonumber \\ - \dfrac{1}{2} \Big(-2 \mathcal{L}_{FFG}F_{ab} + \mathcal{L}_{FGG} {} ^{\star \!}F_{ab} \Big) {} F^{cd} {} ^{\star \!}F^{ef} \nonumber \\ + \dfrac{1}{8} \Big(-2 \mathcal{L}_{FGG}F_{ab} + \mathcal{L}_{GGG} {} ^{\star \!}F_{ab} \Big) {} ^{\star \!}F^{cd} {} ^{\star \!}F^{ef} \, . \end{gather} We see that the first-order constitutive law (\ref{eq:h11}) is of the same form as the constitutive law of a linear medium, but now with a constitutive tensor $\chi _{ab}{}^{cd}$ that depends on the invariants $F$ and $G$ of the background field. Quite generally, such a constitutive tensor can be decomposed into principal part, skewon part and axion part (see Hehl and Obukhov \cite{HehlObukhov2003}). In (\ref{eq:chi}), the first term is the axion part, the rest is the principal part and the skewon part is zero. It is known \cite{HehlObukhov2003} that the skewon part always vanishes if the theory derives from a variational principle. At the second order, we get for each of the three amplitudes $h_{ab}^{(2\tilde{K})}$ a linear law with the same constitutive tensor $\chi_{ab}{}^{cd}$ as for the first order, but for $\tilde{K}=0$ and $\tilde{K}=2$ additional quadratic terms with a second-order constitutive tensor $\psi _{ab}{}^{cdef}$ which looks rather complicated. We will now evaluate the Maxwell equations. The homogeneous Maxwell equation is satisfied if we express the amplitudes $f_{cd}^{(K \tilde{K})}$ in terms of the potential according to (\ref{eq:f11a}), (\ref{eq:f20a}), (\ref{eq:f21a}) and (\ref{eq:f22a}).
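As a sanity check of (\ref{eq:chi}) (a sketch, not part of the derivation): for the Maxwell Lagrangian $\mathcal{L}=-F/2$ one has $\mathcal{L}_F=-1/2$ while all other derivatives vanish, so the constitutive tensor must reduce to $\delta_a^c \delta_b^d - \delta_a^d \delta_b^c$ and (\ref{eq:h11}) must return the linear vacuum law $h^{(11)}_{ab}=f^{(11)}_{ab}$. This can be verified numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Maxwell limit of the constitutive tensor chi: only the term
# -2 L_F (delta delta - delta delta) with L_F = -1/2 survives
delta = np.eye(4)
L_F = -0.5
chi = -2.0 * L_F * (np.einsum('ac,bd->abcd', delta, delta)
                    - np.einsum('ad,bc->abcd', delta, delta))

# random antisymmetric first-order amplitude f_{cd}
f = rng.normal(size=(4, 4))
f = f - f.T

# h_{ab} = (1/2) chi_{ab}^{cd} f_{cd}
h = 0.5 * np.einsum('abcd,cd->ab', chi, f)

assert np.allclose(h, f)   # linear vacuum law: excitation equals field strength
print("Maxwell limit reproduced")
```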
Feeding the excitation (\ref{eq:aphwH}) into the inhomogeneous Maxwell equation requires at zeroth order \begin{equation}\label{eq:Maxzero1} - \dfrac{4 \pi}{c} j_b = \nabla ^a H_{ab} \, , \end{equation} \begin{equation}\label{eq:Maxzero2} 0 = \nabla ^a S \, h_{ab} ^{(11)} \, , \end{equation} and at first order \begin{equation}\label{eq:Maxfirst1} 0 = \nabla ^a h_{ab}^{(11)} + i \nabla ^aS \, h_{ab}^{(21)} \, , \end{equation} \begin{equation}\label{eq:Maxfirst2} 0 = \nabla ^a S \, h_{ab}^{(22)} \, . \end{equation} Here we have assumed that the current $j_b$ is independent of $\alpha$, i.e., that only the background field may have a source whereas our approximate-plane-harmonic wave is source-free. Moreover, we have again used our assumption that the gradient of $S$ has no zeros which implies that the functions $1$, $\mathrm{sin} \big( S(x)/\alpha \big)$, $\mathrm{cos} \big( S(x)/\alpha \big)$, $\mathrm{sin} \big( 2 S(x)/\alpha \big)$ and $\mathrm{cos} \big( 2 S(x)/\alpha \big)$ are linearly independent. At zeroth order we get one equation, (\ref{eq:Maxzero2}), that has to be satisfied. With (\ref{eq:f11a}) and (\ref{eq:h11}) this equation reads \begin{equation}\label{eq:Maxzeroa} 0 = \nabla ^a S \chi _{ab}{}^{cd} \nabla _c S \, a_d^{(11)} \, . \end{equation} We will evaluate this equation in the next section. We will see that it gives us the \emph{eikonal equation} for $S$ and an algebraic condition on $a_d^{(11)}$ which is known as the zeroth order \emph{polarization condition}. Note that $a_d^{(11)}$ is not gauge-invariant: As can be read off from (\ref{eq:f11a}), the field strength $f_{cd}^{(11)}$ is unchanged if a multiple of $\nabla _d S$ is added to $a^{(11)}_d$. We will see that the zeroth order polarization condition is actually a condition on the (gauge-invariant) plane spanned by $a_d^{(11)}$ and $\nabla _d S$. We refer to this plane as the \emph{polarization plane}.
At first order we get two equations, (\ref{eq:Maxfirst1}) and (\ref{eq:Maxfirst2}), that have to be satisfied. With (\ref{eq:f11a}), (\ref{eq:f20a}), (\ref{eq:f21a}), (\ref{eq:h11}), (\ref{eq:h21}) and (\ref{eq:h22}) these equations read \begin{gather} 0 = \nabla ^a \Big( \chi _{ab}{}^{cd} \nabla _c S \, a_d^{(11)} \Big) \nonumber \\ + \nabla ^a S \, \chi _{ab}{}^{cd} \nabla _c a_d^{(11)} + i \nabla ^a S \, \chi _{ab}{}^{cd} \nabla _c S \, a_d ^{(21)} \, , \label{eq:Max2first1a} \end{gather} \begin{equation}\label{eq:Max2first2a} 0 = \nabla ^a S \, \chi _{ab}{}^{cd} \nabla _c S \, a_d ^{(22)} - \psi _{ab}{}^{cdef} \nabla ^a S \, \nabla _c S \, a_d ^{(11)} a_f^{(11)} \, . \end{equation} We will evaluate these two equations, as far as necessary for our purpose, in Section \ref{sec:firstorder} below. They will give us a differential equation for $a_d^{(11)}$ which is known as the first-order \emph{transport equation} and algebraic conditions on $a_d^{(21)}$ and $a_d^{(22)}$ which are the first-order polarization conditions. These equations are not in general satisfied if $a_d^{(22)}=0$, i.e., frequency doubling has to be taken into account if Maxwell's equations are to be solved to first order. If one wants to go beyond the first order, one can do this step by step. At the $K^{\mathrm{th}}$ level one gets transport equations for the amplitudes $a_d^{(K \tilde{K})}$ and polarization conditions on the amplitudes $a_d^{((K+1)\tilde{K})}$. \section{Evaluation of the zeroth-order field equation}\label{sec:zerothorder} In Section \ref{subsec:eikonal} we will derive the eikonal equation from the zeroth-order field equation (\ref{eq:Maxzeroa}), in Section \ref{subsec:rays} we will determine the Hamiltonian for the rays and in Section \ref{subsec:polcon} we will evaluate the zeroth-order polarization condition. The main results of Sections \ref{subsec:eikonal} and \ref{subsec:rays} are not new. 
In particular, it is known that for any theory of the Pleba{\'n}ski class the rays are the null geodesics of two optical metrics. This was first demonstrated by Novello et al. \cite{Novelloetal2000}. The same result was re\-de\-rived, using a different representation, by Obukhov and Rubilar \cite{ObukhovRubilar2002} who also showed that the optical metrics have Lorentzian signature if they are non-degenerate. Still another form of the optical metrics was derived by Schellstede et al. \cite{SchellstedePerlickLaemmerzahl2015}. However, we have to rederive these known results here because in doing so we will also establish a number of new relations that will be needed later. We will use the same representation as in \cite{SchellstedePerlickLaemmerzahl2015}. \subsection{Derivation of the eikonal equation}\label{subsec:eikonal} In the following we write \begin{equation}\label{eq:puv} p_a= \nabla _a S \, , \quad u_a= F_{ab} \nabla ^b S \, , \quad v_a= {\,}{}^{*}{\!}F{}_{ab} \nabla ^b S \end{equation} which implies \begin{equation}\label{eq:puvorth} p_au^a = p_a v^a = 0 \, . \end{equation} Then the zeroth-order field equation (\ref{eq:Maxzeroa}) can be rewritten as \begin{equation}\label{eq:MA} M_b{}^d a^{(11)}_d = 0 \end{equation} where \begin{gather}\label{eq:M} M_b{}^d = \chi _{ab}{}^{cd} p ^a p _c = -2 \mathcal{L}_F p _a p ^a \delta _b ^d + 2\mathcal{L} _F p _b p ^d \\ \nonumber - 4 \mathcal{L} _{FF} u_b u^d + 2 \mathcal{L} _{FG} \big( u _b v^d + v_b u^d \big) - \mathcal{L} _{GG} v_b v^d \, . \end{gather} Note that $M_b{}^d$ is self-adjoint with respect to the spacetime metric, i.e. $M_{ab}=M_{ba}$. This is a consequence of the above-mentioned fact that the skewon part of the constitutive tensor vanishes. Also note that the axion part gives no contribution to (\ref{eq:M}) which is a general result \cite{HehlObukhov2003,Itin2007}. 
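For completeness we spell out why the axion part drops out of (\ref{eq:M}): the $\varepsilon$ term of (\ref{eq:chi}) is contracted with $p^a p_c$, and writing $\varepsilon _{ab}{}^{cd} = \varepsilon _{abef} \, g^{ec} g^{fd}$ gives
\[
\mathcal{L}_G \, \varepsilon _{ab}{}^{cd} \, p^a p_c \, = \, \mathcal{L}_G \, \varepsilon _{abef} \, p^a p^e \, g^{fd} \, = \, 0 \, ,
\]
because the totally antisymmetric $\varepsilon _{abef}$ is contracted with the symmetric product $p^a p^e$.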
From (\ref{eq:M}) we read off that $p_d$ is in the kernel of $M_b{}^d$, so (\ref{eq:MA}) is satisfied by $a^{(11)}_d = \psi \, p _d$ with any scalar factor $\psi$. However, by (\ref{eq:f11a}) such a potential gives a trivial first-order field strength. As we require $f_{cd}^{(11)} \neq 0$, we need a solution $a^{(11)}_d$ of (\ref{eq:MA}) that is linearly independent of $p_d$, i.e., the kernel of $M_d{}^b$ has to be at least two-dimensional. This is the case if and only if the \emph{adjugate} $A_d{}^b$ of $M_d{}^b$ (also known as the \emph{classical adjoint}) vanishes, cf. Itin \cite{Itin2009}. A straightforward (though tedious) calculation shows that the adjugate is given by \begin{gather}\label{eq:adj} A_b{}^a = - 8 \mathcal{L} _F \big( M (p_cp^c)^2 + N p_cp^c u_du^d + P (u_du^d)^2 \big) p_b p ^a \end{gather} where \begin{gather} M=\mathcal{L}_F^2+2\,\mathcal{L}_F \mathcal{L}_{FG}\,G -\frac12\,\mathcal{L}_F\mathcal{L}_{GG}\,F -PG^2 \, , \label{eq:coeff1} \\ N=2\,\mathcal{L}_F \mathcal{L}_{FF} +\frac12\,\mathcal{L}_F\mathcal{L}_{GG} -PF \, , \label{eq:coeff2} \\ P=\mathcal{L}_{FF}\mathcal{L}_{GG}-\mathcal{L}^2_{FG}\,. \label{eq:coeff3} \end{gather} Here we have used the well-known \cite{Plebanski1970} identities \begin{equation}\label{eq:FFid} {}^{*}{\!}F_{ac} F^{bc} = - G \delta _a ^b \, , \quad F_{ac} F^{bc} - {}^{*}{\!}F_{ac} {}^{*}{\!} F^{bc} = F \delta _a^b \end{equation} which imply \begin{equation}\label{eq:uvorth} u_cv^c = - G p_cp^c \, , \quad u_cu^c-v_cv^c=Fp_cp^c \, . \end{equation} By (\ref{eq:adj}), the zeroth-order field equation (\ref{eq:MA}) admits a solution $a_d^{(11)}$ giving a non-trivial field strength if and only if \begin{gather}\label{eq:eikonal} 0 = \mathcal{L} _F \big( M (p_cp^c)^2 + N p_cp^c u_du^d + P (u_du^d)^2 \big) \, . \end{gather} This is the \emph{eikonal equation}. It is a first-order partial differential equation for the function $S$.
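The identities (\ref{eq:FFid}) and the orthogonality relations (\ref{eq:puvorth}) can be verified numerically for a random field strength on flat spacetime (an independent check, not part of the derivation; we assume the signature $(-+++)$ and the normalization $\varepsilon_{0123}=+1$):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# flat metric with signature (-+++); eta is its own inverse
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def perm_sign(p):
    """Sign of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

# totally antisymmetric Levi-Civita tensor, lower indices, eps_{0123} = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = perm_sign(p)

# random antisymmetric field strength F_{ab} and its raised version F^{ab}
F_lo = rng.normal(size=(4, 4))
F_lo = F_lo - F_lo.T
F_up = eta @ F_lo @ eta

# Hodge dual *F_{ab} = (1/2) eps_{abcd} F^{cd}
sF_lo = 0.5 * np.einsum('abcd,cd->ab', eps, F_up)
sF_up = eta @ sF_lo @ eta

# invariants F and G as defined in the text
F_inv = 0.5 * np.einsum('ab,ab->', F_lo, F_up)
G_inv = -0.25 * np.einsum('ab,ab->', F_lo, sF_up)

# first identity:  *F_{ac} F^{bc} = -G delta_a^b
id1 = np.einsum('ac,bc->ab', sF_lo, F_up)
assert np.allclose(id1, -G_inv * np.eye(4))

# second identity:  F_{ac} F^{bc} - *F_{ac} *F^{bc} = F delta_a^b
id2 = np.einsum('ac,bc->ab', F_lo, F_up) - np.einsum('ac,bc->ab', sF_lo, sF_up)
assert np.allclose(id2, F_inv * np.eye(4))

# orthogonality: with arbitrary p_a, u_a = F_{ab} p^b and v_a = *F_{ab} p^b
p_lo = rng.normal(size=4)
p_up = eta @ p_lo
u_lo = F_lo @ p_up
v_lo = sF_lo @ p_up
assert np.isclose(p_up @ u_lo, 0.0) and np.isclose(p_up @ v_lo, 0.0)
print("identities verified")
```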
Each solution to this equation determines a family of light rays, in the same way as in Hamiltonian mechanics each solution to the Hamilton-Jacobi equation determines a family of trajectories, see the next subsection. If viewed as an algebraic condition on the covector $p_a$, (\ref{eq:eikonal}) is known as the \emph{dispersion relation}, as the \emph{characteristic equation} or as the \emph{Fresnel equation}. From now on we require $\mathcal{L}_F \neq 0$ because otherwise the eikonal equation is an identity, so there is no well-defined notion of rays. If in addition $M \neq 0$, (\ref{eq:eikonal}) factorizes according to \begin{equation}\label{eq:eikonalfact} \big( \tilde{g}{}^{bc}_+ p_b p_c\big) \big( \tilde{g}{}^{de}_- p_d p_e \big) = 0 \end{equation} where \begin{gather} \nonumber \tilde{g}{}^{bc}_{\pm} = g^{bc} + \sigma _{\pm} F^{bd}F^c{}_d \\ = \big( 1 + \sigma _{\pm} F \big) g^{bc} + \sigma _{\pm} {}^{*} {\!} F^{bd} {}^{*} {\!} F^c{}_d \label{eq:optmet} \end{gather} and \begin{equation}\label{eq:sigma} \sigma _{\pm} = \dfrac{N}{2M} \pm \sqrt{\dfrac{N^2}{4M^2}-\dfrac{P}{M} \,} \, . \end{equation} $\tilde{g}{}^{bc}_{+}$ and $\tilde{g}{}^{bc}_{-}$ are known as the \emph{optical metrics}. Note that $\sigma_{\pm}$ is always real because $N^2-4MP$ can be rewritten as the sum of two squares, \begin{gather}\label{eq:sigmareal} N^2-4MP= \\ \nonumber \Big( \mathcal{L}_F \mathcal{L}_{GG} - N \Big) ^2 + 4 \Big( \mathcal{L}_F \mathcal{L}_{FG}-PG \Big) ^2 \, . \end{gather} The determinant of $\tilde{g}{}^{cd}_{\pm}$ is \begin{equation}\label{eq:nondeg} \mathrm{det} \big( \tilde{g}{}^{cd}_{\pm} \big) = \big( 1 + \sigma _{\pm} F - \sigma _{\pm}^2 G^2 \big)^2 \mathrm{det} \big( g^{cd} \big) \, . \end{equation} As $(g^{cd})$ is of Lorentzian signature, the right-hand side of (\ref{eq:nondeg}) is either zero or negative.
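Both the sum-of-squares form (\ref{eq:sigmareal}) of the discriminant and the factorization (\ref{eq:eikonalfact}) of the eikonal polynomial (\ref{eq:eikonal}) can be confirmed symbolically; the following sketch treats the derivatives of $\mathcal{L}$ (and, for the second check, $M$, $N$, $P$) as independent symbols:

```python
import sympy as sp

L_F, L_FF, L_FG, L_GG, F, G = sp.symbols('L_F L_FF L_FG L_GG F G', real=True)

# coefficients M, N, P of the eikonal polynomial
P = L_FF * L_GG - L_FG**2
M = L_F**2 + 2 * L_F * L_FG * G - sp.Rational(1, 2) * L_F * L_GG * F - P * G**2
N = 2 * L_F * L_FF + sp.Rational(1, 2) * L_F * L_GG - P * F

# the discriminant is a sum of two squares, so sigma_+ and sigma_- are real
lhs = N**2 - 4 * M * P
rhs = (L_F * L_GG - N)**2 + 4 * (L_F * L_FG - P * G)**2
assert sp.expand(lhs - rhs) == 0

# factorization into the two optical-metric factors, with x, y standing
# in for p_a p^a and u_a u^a
Ms, Ns, Ps, x, y = sp.symbols('Ms Ns Ps x y', positive=True)
disc = sp.sqrt(Ns**2 / (4 * Ms**2) - Ps / Ms)
sigma_plus = Ns / (2 * Ms) + disc
sigma_minus = Ns / (2 * Ms) - disc
fact = Ms * (x + sigma_plus * y) * (x + sigma_minus * y)
assert sp.expand(fact - (Ms * x**2 + Ns * x * y + Ps * y**2)) == 0
print("discriminant identity and factorization verified")
```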
Equation (\ref{eq:nondeg}) thus shows that the optical metrics are either degenerate or Lorentzian (i.e., of signature $(-+++)$ or $(---+)$), as was already observed by Obukhov and Rubilar \cite{ObukhovRubilar2002}. If the determinant is non-zero, the covariant components of the optical metrics are \begin{gather} \nonumber \big( \tilde{g}{}^{-1} \big)^{\pm}_{cd} = \dfrac{ g_{cd} - \sigma _{\pm} {}^{*} {\!} F_c{}^{b \,} {}^{*} {\!} F_{db} }{ 1+ \sigma _{\pm} F - \sigma _{\pm}^2 G^2 } \\ = \dfrac{ \big( 1 + \sigma _{\pm} F \big) g_{cd} - \sigma _{\pm} F_c{}^b F_{db} }{ 1+ \sigma _{\pm} F - \sigma _{\pm}^2 G^2 } \, . \label{eq:covoptmet} \end{gather} Indeed, with the help of the identities (\ref{eq:FFid}) it is easy to check that (\ref{eq:optmet}) and (\ref{eq:covoptmet}) imply $\big( \tilde{g}{}^{-1} \big) ^{\pm}_{ac} \tilde{g}{}^{cb}_{\pm} = \delta _a^b$. If $M=0$, the eikonal equation factorizes as well, but we will not consider this case because it shows some pathologies, see \cite{SchellstedePerlickLaemmerzahl2016}. For the rest of the paper we restrict to background fields for which $\mathcal{L}_F \neq0$, $M \neq 0$ and $(1+\sigma _{\pm} F - \sigma _{\pm}^2 G^2 ) \neq 0$, so that we have two optical metrics of Lorentzian signature. Then the eikonal equation is of the form (\ref{eq:eikonalfact}), i.e., it requires $p_a = \nabla _a S$ to be a null covector of at least one of the two optical metrics. This is true if and only if \begin{equation}\label{eq:eikonsigma} p_ap^a + \sigma _{\pm} u_au^a =0 \end{equation} holds with at least one of the two signs, where $\sigma _{\pm}$ is given by (\ref{eq:sigma}). We refer to the two equations (\ref{eq:eikonsigma}) with $p_a = \nabla _a S$ as the two \emph{partial eikonal equations}. We end this section with two useful results. \begin{proposition}\label{prop:nobiref} Let $\sigma$ be one of the two solutions, $\sigma = \sigma _+$ or $\sigma = \sigma _-$, to \emph{(\ref{eq:sigma})}.
Then the following conditions are mutually equivalent: \begin{itemize} \item[\emph{(a)}] $N^2=4MP$, i.e., the two optical metrics coincide, $\sigma = \dfrac{N}{2M}$. \item[\emph{(b)}] $\mathcal{L}_F \mathcal{L}_{GG} = N$ and $\mathcal{L}_F \mathcal{L}_{FG}=PG$. \item[\emph{(c)}] $D M = \mathcal{L}_F^2$, $D N = 2 \mathcal{L}_F ^2 \sigma$ and $D P = \mathcal{L}_F^2 \sigma ^2$. \item[\emph{(d)}] $2 D \mathcal{L}_{FF} = \mathcal{L}_F \sigma ( 1+F \sigma )$, $D \mathcal{L}_{GG} = 2 \mathcal{L}_F \sigma$ and $D \mathcal{L}_{FG} = \mathcal{L}_F G \sigma ^2$. \end{itemize} In \emph{(c)} and \emph{(d)}, $D = 1 + F \sigma - G^2 \sigma ^2$. \end{proposition} \begin{proof} (a) $\Leftrightarrow$ (b) is obvious from (\ref{eq:sigmareal}). We now assume that one, and thus also the other, of these conditions is true. Then we find from (a) that \begin{equation}\label{eq:NPsigma} N=2 M \sigma \, , \quad P = M \sigma ^2 \end{equation} and from inserting (b) into (\ref{eq:coeff1}) that \begin{equation}\label{eq:Msigma} M= \mathcal{L}_F^2-PG^2- \dfrac{NF}{2} \, . \end{equation} Equations (\ref{eq:NPsigma}) and (\ref{eq:Msigma}) demonstrate that (c) is true. Conversely, (c) obviously implies (a), so we have proven that (a), (b), and (c) are mutually equivalent. Finally, we observe that (a) and (c) together with (\ref{eq:coeff2}) imply (d) and that (d), if inserted into (\ref{eq:coeff1}), (\ref{eq:coeff2}) and (\ref{eq:coeff3}), implies (a), so all four conditions are indeed mutually equivalent. \end{proof} \begin{proposition}\label{prop:eigen} Assume that $p_a = \nabla _a S$ is a solution to the eikonal equation $p_a p^a + \sigma u_au^a =0$ with $\sigma = \sigma _+$ or $\sigma = \sigma _-$.
Then the eigenvalues of the matrix $\big( M_b{}^d \big)$ are $\lambda _1 = \lambda _2 =0$ and \begin{equation}\label{eq:lambda4} \lambda _3 = 2 \mathcal{L}_F \sigma u_au^a \, , \end{equation} \begin{equation}\label{eq:lambda3} \lambda _4 = \big( 4 \mathcal{L} _F \sigma - 4 \mathcal{L}_{FF} + 4 \mathcal{L}_{FG} G \sigma -\mathcal{L}_{GG} (1+F \sigma ) \big) u_au^a \, . \end{equation} \end{proposition} \begin{proof} By assumption, zero is a double-eigenvalue of the matrix (\ref{eq:M}). Then the remaining two eigenvalues $\lambda _3$ and $\lambda _4$ can be determined in the following way. The formulas for the trace of a matrix and for the trace of the square of a matrix in terms of its eigenvalues yield \begin{equation}\label{eq:trc1} M_b{}^b = \lambda _3 + \lambda _4 \, , \end{equation} \begin{equation}\label{eq:trc2} M_b{}^dM_d{}^b = \lambda _3^2 + \lambda _4^2 \, . \end{equation} Upon calculating the traces with the help of (\ref{eq:puvorth}), solving (\ref{eq:trc1}) and (\ref{eq:trc2}) for the eigenvalues results in the given expressions for $\lambda _3$ and $\lambda _4$. \end{proof} \subsection{Hamiltonian for rays and transport vector fields} \label{subsec:rays} We say that $S$ is a solution to the eikonal equation of multiplicity two if $p_a = \nabla _a S$ satisfies the equation (\ref{eq:eikonsigma}) with both signs, and we say that it is a solution of multiplicity one if (\ref{eq:eikonsigma}) holds with one sign but not with the other. The multiplicity may change from point to point. Each of the two partial eikonal equations has the form of the Hamilton-Jacobi equation, $H(x,\nabla S)=0$, with the Hamiltonian \begin{equation}\label{eq:Hpart} H_{\pm} (x,p ) = \dfrac{1}{2} \tilde{g}{}^{bc}_{\pm} (x) p_b p_c \, . \end{equation} The solutions to Hamilton's equations \begin{equation}\label{eq:Hampart} \dot{x}{}^a \! = \! \dfrac{\partial H_{\pm}(x,p)}{\partial p_a} \, , \; \dot{p}{}_a \! = \! - \dfrac{\partial H_{\pm}(x,p)}{\partial x^a} \, , \; H_{\pm}(x,p) \! = \!
0 \end{equation} are known as the \emph{bicharacteristic curves} or as the \emph{rays}. They are the null geodesics of the optical metric. Every solution $S$ to the eikonal equation is associated with a congruence of rays whose tangent vector field is given by \begin{equation}\label{eq:Ka} K^b_{\pm} (x) = \dfrac{\partial H_{\pm} (x,p)}{\partial p_b} \Big| _{p = \nabla S(x)} = \tilde{g}{}^{bc}_{\pm}(x)\nabla _c S(x) \, , \end{equation} i.e., \begin{equation}\label{eq:Ksigma} K^b_{\pm} = p^b - \sigma _{\pm} F^{bc} u_c \, . \end{equation} This vector field is known as the \emph{transport vector field} associated with the solution $S$ of the eikonal equation. For solutions of multiplicity two, we have two transport vector fields $K^b_+$ and $K^b_-$. However, they are always proportional to each other so that the rays (as unparametrized curves) are uniquely determined. We will prove this in the next section. Note that the non-degeneracy of the optical metric implies that the transport vector field cannot have zeros if we assume that $p_a = \nabla _a S$ has no zeros (as required for an eikonal function of an approximately plane wave), i.e., that ``rays cannot stand still''. The following proposition establishes a property of the transport vector field that will be crucial for the next section. \begin{proposition}\label{prop:aux1} Assume that $p_a = \nabla _a S$ satisfies the eikonal equation $p_ap^a + \sigma u_au^a=0$ where $\sigma = \sigma _+$ or $\sigma = \sigma _-$. Let $\tilde{g}{}^{ab}=g^{ab} + \sigma F^{ac}F^b{}_c$ and $K^a = \tilde{g}{}^{ab}p_b$. Then \begin{gather} \label{eq:optpuv1} \tilde{g}{}^{cd} p_c u_d= \tilde{g}{}^{cd} p_c v_d = 0 \, , \quad \tilde{g}{}^{cd} u_c v_d= 0 \, , \\ \label{eq:optpuv2} \tilde{g}{}^{cd} u_c u_d= \tilde{g}{}^{cd} v_c v_d= u^cu_c(1+\sigma F - \sigma ^2 G^2) \, . \end{gather} As a consequence, the transport vector field satisfies \begin{equation}\label{eq:Kpuv} K^ap_a = K^a u_a = K^a v_a = 0 \, .
\end{equation} \end{proposition} \begin{proof} This can be verified in a straightforward manner with the help of the identities (\ref{eq:FFid}). \end{proof} \subsection{Polarization condition} \label{subsec:polcon} If we fix a solution $p_a=\nabla _aS$ to the eikonal equation $p_ap^a+ \sigma u_au^a = 0$ with $\sigma = \sigma _+$ or $\sigma = \sigma _-$, the zeroth-order field equation (\ref{eq:MA}) gives an algebraic restriction on $a^{(11)}_b$. This is the zeroth-order polarization condition. In this section we investigate to what extent the polarization condition fixes the allowed values for $a^{(11)}_b$ and, thereby, for the lowest-order field-strength amplitude $f_{cd}^{(11)}$. Here we have to distinguish solutions of multiplicity two from solutions of multiplicity one. Clearly, if the two optical metrics coincide, $\sigma _+ = \sigma _-$, every solution is of multiplicity two. In a background field with $\sigma _+ \neq \sigma _-$, a solution is of multiplicity two if and only if $u_au^a =0$. In this case $p_a$ is a \emph{principal null covector}, i.e., a covector with $p_ap^a=0$ for which $u_a$ and $v_a$ are multiples of $p_a$. In the following proposition we determine the general form of the matrix $M_b{}^d$ for this special case. For more details on principal null solutions to the eikonal equation we refer to Abalos et al. \cite{Abalosetal2015}, where pictures of the cones of the optical metrics can also be found. \begin{proposition}\label{prop:aux2a} Assume that $p_a =\nabla _a S$ satisfies $p_ap^a=0$ and $u_au^a=0$. Then $p_a$ is a solution of multiplicity two to the eikonal equation.
The covectors $u_a$ and $v_a$ are multiples of $p_a$, \begin{equation}\label{eq:princnull} u_c = F_{c}{}^ap_a = \mu p_c \, , \quad v_c = {}^{*} {\!} F_{c}{}^ap_a = \nu p_c \, , \end{equation} where the coefficients $\mu$ and $\nu$ satisfy \begin{gather} \mu ^2 = - \dfrac{F}{2} + \sqrt{ \dfrac{F^2}{4}+G^2} \, , \quad \nu ^2 = \dfrac{F}{2} + \sqrt{ \dfrac{F^2}{4}+G^2} \, , \nonumber \\ \mu \nu = -G \, . \label{eq:munu} \end{gather} The transport vector fields are proportional to $p^a$, \begin{equation}\label{eq:Kprinc} K^a _{\pm} = \xi _{\pm} \, p^a \, , \end{equation} where \begin{equation}\label{eq:xi} \xi _{\pm}= 1 - \sigma _{\pm} \mu ^2 \, . \end{equation} The matrix $M_b{}^d$ reduces to \begin{equation}\label{eq:Mprinc} M_b{}^d= \Big( 2 \mathcal{L}_F - 4 \mathcal{L}_{FF} \mu ^2 + 4 \mathcal{L}_{FG} \mu \nu - \mathcal{L}_{GG} \nu ^2 \Big) p_b p^d \, . \end{equation} \end{proposition} \begin{proof} If $p_ap^a=0$ and $u_au^a=0$, (\ref{eq:eikonsigma}) is trivially satisfied with both signs, i.e., the covector $p_a$ is lightlike with respect to both optical metrics. Moreover, we read from (\ref{eq:optpuv1}) and (\ref{eq:optpuv2}) that with respect to either of the two optical metrics the covectors $u_a$ and $v_a$ are orthogonal to $p_a$ and lightlike. As two lightlike vectors are orthogonal with respect to a Lorentzian metric if and only if they are linearly dependent, this proves that (\ref{eq:princnull}) has to hold with some coefficients $\mu$ and $\nu$. Then (\ref{eq:munu}) follows from (\ref{eq:FFid}). Inserting (\ref{eq:princnull}) into (\ref{eq:Ksigma}) and (\ref{eq:M}), respectively, yields (\ref{eq:Kprinc}) and (\ref{eq:Mprinc}). \end{proof} Recall that the eikonal equation requires the kernel of $M_b{}^d$ to be at least two-dimensional. Proposition \ref{prop:aux2a} implies that the kernel is even three-dimensional if $u_au^a=0$. We will now consider the case $u_au^a \neq 0$.
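The rank statement in Proposition \ref{prop:aux2a} admits a quick numerical illustration: a matrix of the form $C \, p_b p^d$ with lightlike $p_a$ has rank one, hence a three-dimensional kernel, and this kernel contains $p_d$ itself. The following sketch (Python with NumPy; the covector and the coefficient $C$ are arbitrary hypothetical choices standing in for the bracket in (\ref{eq:Mprinc})) checks this:

```python
import numpy as np

# Minkowski metric, signature (-+++), and a lightlike covector p_a
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
p = np.array([-1.0, 0.0, 0.0, 1.0])      # p_a with p_a p^a = 0
p_up = eta @ p                           # p^a (eta is its own inverse)

# A matrix of the rank-one form M_b{}^d = C p_b p^d, with C an arbitrary
# non-zero coefficient (hypothetical stand-in for the bracket in eq:Mprinc)
C = 2.7
M = C * np.outer(p, p_up)

kernel_dim = 4 - np.linalg.matrix_rank(M)
print(kernel_dim)                        # -> 3 (three-dimensional kernel)
print(bool(np.allclose(M @ p, 0.0)))     # -> True (p_d itself lies in it)
```

The same check works for any non-zero lightlike covector $p_a$ and any non-zero coefficient.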
\begin{proposition}\label{prop:aux2b} Assume that $p_a = \nabla _a S$ is a solution to one of the two eikonal equations, $p_ap^a + \sigma u_au^a =0$ where $\sigma$ stands for $\sigma _+$ or for $\sigma _-$. Let $\tilde{g}^{ab} = g^{ab} + \sigma F^{ac}F^b{}_c$ be the corresponding optical metric and $K^a=\tilde{g}^{ab}p_b$ be the corresponding transport vector field. If $u_au^a \neq 0$, the three covectors $p_a$, $u_a$ and $v_a$ are linearly independent. They span the orthocomplement of $p_a$ with respect to $\tilde{g}^{ab}$. The kernel of the matrix $M_b{}^d$ consists of all covectors \begin{equation}\label{eq:alphabetagamma} a^{(11)}_b = \alpha u_b + \beta v_b + \gamma p_b \end{equation} where $\gamma$ is arbitrary and $\alpha$ and $\beta$ satisfy \begin{equation}\label{eq:alphabeta} \begin{pmatrix} m_1{}^1 & m_1{}^2 \\ m_2{}^1 & m_2{}^2 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \end{equation} where \begin{gather} \begin{pmatrix} \nonumber m_1{}^1 & m_1{}^2 \\[0.1cm] m_2{}^1 & m_2{}^2 \end{pmatrix} = 2 \mathcal{L}_F \sigma \begin{pmatrix} 1 & 0 \\[0.1cm] 0 & 1 \end{pmatrix} \\[0.2cm] - \begin{pmatrix} -4 \mathcal{L}_{FF} & 2\mathcal{L}_{FG} \\[0.1cm] 2 \mathcal{L}_{FG} & - \mathcal{L}_{GG} \end{pmatrix} \begin{pmatrix} 1 & G \sigma \\[0.1cm] \; G \sigma \; & 1+F \sigma \end{pmatrix} \, . \label{eq:m} \end{gather} The kernel is three-dimensional if and only if $p_a = \nabla _aS$ is a solution of multiplicity two. The kernel is then spanned by $p_d$, $u_d$ and $v_d$, i.e., it coincides with the orthocomplement of $p_b$ with respect to the optical metric. \end{proposition} \begin{proof} Our assumption that $u_au^a \neq 0$ implies, by (\ref{eq:optpuv1}) and (\ref{eq:optpuv2}) together with $\tilde{g}^{ab}p_ap_b=0$, that $p_a$, $u_a$ and $v_a$ are linearly independent and that they span the $\tilde{g}^{ab}$-orthocomplement of $p_a$. 
After normalizing $u_a$ and $v_a$ with the help of (\ref{eq:optpuv2}) we may complement these three covectors to a Newman-Penrose tetrad by choosing a covector $w_a$ with \begin{gather} \tilde{g}^{ab}w_ap_b=1 \, , \quad \tilde{g}^{ab}w_aw_b= 0 \, , \nonumber \\ \tilde{g}^{ab}w_au_b= 0 \, , \quad \tilde{g}^{ab}w_av_b=0 \, . \label{eq:w} \end{gather} From (\ref{eq:M}) we calculate with the help of (\ref{eq:uvorth}) \begin{gather} M_b{}^dw_d= 2 \mathcal{L}_F \sigma u_au^a w_b \nonumber \\ + 2 \mathcal{L}_F \big( 1+ \sigma F^{fg} F^e{}_g w_fp_e \big) p_b \, , \label{eq:Mw} \end{gather} \begin{equation}\label{eq:Mp} M_b{}^dp_d= 0 \, , \end{equation} \begin{gather} M_b{}^du_d= u_au^a \big(2 \mathcal{L} _F \sigma - 4 \mathcal{L}_{FF} + 2 \mathcal{L}_{FG} G \sigma \big) u_b \nonumber \\ + u_au^a \big(2 \mathcal{L} _{FG} - \mathcal{L}_{GG} G \sigma \big) v_b \, , \label{eq:Mu} \end{gather} \begin{gather} M_b{}^dv_d= u_au^a \big(2 \mathcal{L} _{FG} (1+F \sigma)- 4 \mathcal{L}_{GG} G \sigma \big) u_b \nonumber \\ + u_au^a \big(2 \mathcal{L} _F \sigma - \mathcal{L}_{GG} (1+F \sigma ) + 2 \mathcal{L}_{FG} G \sigma \big) v_b \, . \label{eq:Mv} \end{gather} The first two equations (\ref{eq:Mw}) and (\ref{eq:Mp}) demonstrate that $M_b{}^d$ leaves the two-space spanned by $w_d$ and $p_d$ invariant and that it has a one-dimensional kernel on this two-space. The last statement follows from the fact that $w_d$ is not in the kernel: It is mapped onto a covector $M_b{}^dw_d$ that is non-zero if $\sigma =0$ (because then it is a non-zero multiple of $p_b$) and also if $\sigma \neq 0$ (because then it has a non-zero component in the direction of $w_b$). The other two equations (\ref{eq:Mu}) and (\ref{eq:Mv}) demonstrate that the two-space spanned by $u_d$ and $v_d$ is left invariant as well. 
On this two-space the matrix $M_b{}^d$ must have a one-dimensional or two-dimensional kernel because the eikonal equation requires that the kernel of the full matrix $M_b{}^d$ is at least two-dimensional. By (\ref{eq:Mu}) and (\ref{eq:Mv}), a covector $\alpha u_b + \beta v_b$ is in the kernel if and only if (\ref{eq:alphabeta}) holds with (\ref{eq:m}). The determinant of the matrix (\ref{eq:m}) vanishes as a consequence of the eikonal equation. Clearly, a ($2 \times 2$)-matrix has a two-dimensional kernel if and only if it is the zero matrix. The matrix (\ref{eq:m}) is the zero matrix if and only if the symmetric matrix \begin{gather} \nonumber \dfrac{1}{u_au^a D} \begin{pmatrix} m_1{}^1 & m_1{}^2 \\[0.1cm] m_2{}^1 & m_2{}^2 \end{pmatrix} \begin{pmatrix} 1 + F \sigma & -G \sigma \\[0.1cm] - G \sigma & 1 \end{pmatrix} = \\[0.2cm] \dfrac{2 \mathcal{L}_F \sigma}{D} \begin{pmatrix} 1+ F \sigma & - G \sigma \\[0.1cm] - G \sigma & 1 \end{pmatrix} + \begin{pmatrix} - 4 \mathcal{L}_{FF} & 2 \mathcal{L}_{FG} \\[0.1cm] 2 \mathcal{L}_{FG} & - \mathcal{L}_{GG} \end{pmatrix} \label{eq:msym} \end{gather} is the zero matrix, where $D=1+F \sigma -G^2 \sigma ^2$. By comparison with part (d) of Proposition \ref{prop:nobiref} we see that this is the case if and only if the two optical metrics coincide. As we assume that $u_au^a \neq 0$ this is true if and only if $p_a=\nabla _a S$ is a solution of multiplicity two. \end{proof} With these results at hand it is now easy to evaluate the polarization condition. We do this first for solutions of multiplicity two. \begin{proposition}\label{prop:polconmult2} Let $p_a= \nabla _a S$ be a solution of multiplicity two to the eikonal equation, i.e. $p_ap^a + \sigma _+ u_au^a = 0$ and $p_ap^a + \sigma _- u_au^a = 0$. Then the two transport vector fields $K^a_+ = p^a - \sigma _+ F^{ab}u_b$ and $K^a_- = p^a - \sigma _- F^{ab}u_b$ are linearly dependent. 
The polarization condition $M_b{}^d a^{(11)}_d=0$ is equivalent to $K^d_{\pm} a^{(11)}_d =0$ (which holds with one sign if and only if it holds with the other sign), i.e., it restricts $a^{(11)}_d$ to a three-dimensional subspace which contains $p_a$. \end{proposition} \begin{proof} If $u_au^a=0$, this follows from Proposition \ref{prop:aux2a}. If $u_au^a \neq 0$ it follows from Proposition \ref{prop:aux2b}. \end{proof} We now prove the analogous statement for solutions of multiplicity one. \begin{proposition}\label{prop:polconmult1} Let $p_a= \nabla _a S$ be a solution of multiplicity one to the eikonal equation, i.e., $p_ap^a + \sigma u_au^a = 0$ with $\sigma = \sigma _+$ or $\sigma = \sigma _-$ but not with both. Then the polarization condition $M_b{}^da^{(11)}_d=0$ is true if and only if $a_b^{(11)} = \alpha u_b + \beta v_b + \gamma p_b$ where $\alpha$ and $\beta$ satisfy \emph{(\ref{eq:alphabeta})} with \emph{(\ref{eq:m})}. This condition restricts $a^{(11)}_b$ to a two-dimensional subspace that contains $p_b$. \end{proposition} \begin{proof} This is an immediate consequence of Proposition \ref{prop:aux2b}. \end{proof} We summarize the results of this section in the following way. For every solution $p_a=\nabla _aS$ to the eikonal equation the polarization condition requires that $a^{(11)}_b$ satisfies $K^ba^{(11)}_b =0$ where $K^b$ is the corresponding transport vector field. This may be interpreted as a transversality condition. For a solution of multiplicity two there is no additional restriction, i.e., $a^{(11)}_b$ is confined to a three-dimensional subspace that contains $p_b$. By contrast, for a solution of multiplicity one the polarization condition restricts $a^{(11)}_b$ to a two-dimensional space that contains $p_b$, i.e., it fixes the polarization plane (the plane spanned by $a_b^{(11)}$ and $p_b$) uniquely.
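The dimension count behind this dichotomy can be seen in miniature: a singular but non-vanishing coefficient matrix in (\ref{eq:alphabeta}) fixes $(\alpha, \beta)$ up to an overall factor (multiplicity one), whereas only the zero matrix leaves a two-parameter freedom (multiplicity two). A small numerical sketch (hypothetical matrices, null-space dimension computed from singular values):

```python
import numpy as np

def kernel_dim(m, tol=1e-12):
    """Null-space dimension of a square matrix, via its singular values."""
    s = np.linalg.svd(m, compute_uv=False)
    return int(np.sum(s < tol))

# Multiplicity one: a singular but non-zero coefficient matrix fixes
# (alpha, beta) up to an overall factor -> one-dimensional kernel.
m_single = np.array([[2.0, 4.0],
                     [1.0, 2.0]])        # det = 0, matrix != 0
print(kernel_dim(m_single))              # -> 1

# Multiplicity two: only the zero matrix leaves (alpha, beta) free.
print(kernel_dim(np.zeros((2, 2))))      # -> 2
```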
\section{Evaluation of the first-order field equation}\label{sec:firstorder} We now turn to the first-order field equation which gives us the two conditions (\ref{eq:Max2first1a}) and (\ref{eq:Max2first2a}). We can write them, in a slightly more compact form, as \begin{equation}\label{eq:Max2first1b} 0 = \nabla ^a \Big( \chi _{ab}{}^{cd} p_c \, a_d^{(11)} \Big) + p ^a \, \chi _{ab}{}^{cd} \nabla _c a_d^{(11)} + i \, M_b{}^d \, a_d ^{(21)} \, , \end{equation} \begin{equation}\label{eq:Max2first2b} 0 = M_b{}^d \, a_d ^{(22)} - \psi _{ab}{}^{cdef} p ^a \, p _c \, a_d ^{(11)} a_f^{(11)} \, . \end{equation} We want to determine what kind of information these equations give us on the polarization plane spanned by $a_b^{(11)}$ and $p_b$. We know from the preceding section that for a solution of multiplicity one this plane is already uniquely fixed at the zeroth-order level, so the first-order equations cannot give us any additional information on this plane. One just has to check for consistency, i.e., one has to verify that the sum of the first two terms in (\ref{eq:Max2first1b}) is in the image space of $M_b{}^d$ and that the second term in (\ref{eq:Max2first2b}) is in the image space of $\psi _{ab}{}^{cdef}$. Then (\ref{eq:Max2first1b}) and (\ref{eq:Max2first2b}) give us polarization conditions on $a_d^{(21)}$ and $a_d^{(22)}$. We have already emphasized that (\ref{eq:Max2first2b}) is not in general satisfied by $a_d^{(22)} = 0$, i.e., that frequency doubling has to be taken into account if the field equation is to hold at first order, and that at the next order in general also a non-zero $a^{(20)}$ is needed. As in this paper we will be satisfied with determining the potential up to first order, there is nothing else to be done for solutions of multiplicity one. Therefore, in the following we will restrict ourselves to solutions of multiplicity two.
We know from the preceding section that then $a_d^{(11)}$ is restricted at the zeroth-order level only by the condition $K^d a_d^{(11)} =0$. This condition restricts the polarization plane to a three-dimensional space, i.e., it still allows the polarization plane to arbitrarily rotate along a ray. We will now demonstrate that the first-order equation (\ref{eq:Max2first1b}) gives us a transport law which uniquely determines the polarization plane along a ray if it is given at one point of this ray. We will consider first solutions of multiplicity two with $u_a u^a =0$ and then with $u_au^a \neq 0$. \subsection{Transport equation in the case $\boldsymbol{u_au^a=0}$}\label{sec:transport1} For a solution of multiplicity two with $u_au^a=0$ the rays are lightlike geodesics not only with respect to each of the two optical metrics but also with respect to the spacetime metric. (The affine parametrizations are in general different.) For such a solution we have $u_a$ and $v_a$ parallel to $p_a$ and the matrix $M_b{}^d$ projects onto the line spanned by $p_b$, recall Proposition \ref{prop:aux2a}. As a consequence, (\ref{eq:Max2first1b}) reduces to \begin{gather} 4 \, \mathcal{L} _F p^a \nabla _a a_b^{(11)} + 2 \nabla _a \big( \mathcal{L} _F p^a \big) a_b^{(11)} \nonumber \\ - \nabla ^a \mathcal{L} _G \, \varepsilon _{ab}{}^{cd} p_c a_d^{(11)} = \psi \, p_b \label{eq:transprinc} \end{gather} where $\psi$ is an undetermined scalar function. Recall from Proposition \ref{prop:aux2a} that in the case at hand the two transport vector fields $K^a_+$ and $K^a_-$ are multiples of $p^a$, i.e., that $p^a$ is tangent to the rays. Therefore, (\ref{eq:transprinc}) gives us a first-order ordinary differential equation for $a_b^{(11)}$ along each ray. As $\psi$ is arbitrary, for each initial condition this differential equation has a solution that is unique up to a multiple of $p_b$. In other words, (\ref{eq:transprinc}) gives us a unique transport law for the polarization plane. 
If $\nabla ^a \mathcal{L} _G \, \varepsilon _{ab}{}^{cd} p_c a_d^{(11)} =0$, we read from (\ref{eq:transprinc}) that the polarization plane is parallel with respect to the transport law defined by the Levi-Civita derivative of the spacetime metric, as it is in the standard Maxwell vacuum theory; in general, however, in a theory of the Pleba{\'n}ski class a background field with non-constant $\mathcal{L} _G$ produces a rotation of the polarization plane. This gives us a new experimental test of this type of theories in situations where the rays behave as in the standard vacuum Maxwell theory but the polarization plane does not. We will exemplify this with the Born-Infeld theory in the next section. \subsection{Transport equation in the case $\boldsymbol{u_au^a \neq 0}$}\label{sec:transport2} We now consider a solution of multiplicity two with $u_au^a \neq 0$. For such solutions we know from Proposition \ref{prop:aux2b} that the matrix $M_b{}^d$ has a three-dimensional kernel spanned by $p_b$, $u_b$ and $v_b$, i.e., that $a_d^{(11)}$ is of the form \begin{equation}\label{eq:albega} a_d^{(11)} = \alpha u_d + \beta v_d + \gamma p_d \, . \end{equation} By the same token, as the matrix $M_b{}^d$ is self-adjoint with respect to the spacetime metric, (\ref{eq:Max2first1b}) is true with some $a_d^{(21)}$ if and only if the equation \begin{equation}\label{transportx} 0 = z^b \Big\{\nabla ^a \Big( \chi _{ab}{}^{cd} p_c \, a_d^{(11)} \Big) + p ^a \, \chi _{ab}{}^{cd} \nabla _c a_d^{(11)} \Big\} \end{equation} is true for $z^b=p^b$, $z^b=u^b$ and $z^b=v^b$. It is easy to check that for $z^b=p^b$ the equation is identically satisfied, for all $a_d^{(11)}$ of the form (\ref{eq:albega}). Therefore, we only have to consider it for $z^b=u^b$ and $z^b=v^b$. To that end, we recall that a solution of multiplicity two with $u_au^a \neq 0$ exists only if the two optical metrics coincide. In the following we write $\sigma$ for $\sigma _+ = \sigma _-$ and $K^a$ for $K^a_+ = K^a_-$. 
If we express $\mathcal{L}_{FF}$, $\mathcal{L}_{FG}$ and $\mathcal{L}_{GG}$ with the help of part (d) of Proposition \ref{prop:nobiref}, we see that $\chi _{ab}{}^{cd}$ can be written as \begin{gather} \chi _{ab}{}^{cd} = \mathcal{L} _G \varepsilon _{ab}{}^{cd} - 2 \, \mathcal{L}_F \big( \delta _a^c \delta _b^d - \delta _a^d \delta _b^c \big) \nonumber \\ - \dfrac{2 \, \mathcal{L}_F \sigma}{D} F_{ab} \big( (1+ \sigma F ) F^{cd} - \sigma G {}^{*} {\!} F^{cd} \big) \nonumber \\ + \dfrac{2 \, \mathcal{L}_F \sigma}{D} {}^{*} {\!} F_{ab} \big( \sigma G F^{cd} - {}^{*} {\!} F^{cd} \big) \label{eq:chimulttwo} \end{gather} where $D=1+\sigma F - \sigma ^2 G^2$. If we insert this expression and (\ref{eq:albega}) into (\ref{transportx}) with $z^b=u^b$ and with $z^b=v^b$, we get after some lengthy algebra the two equations \begin{equation}\label{eq:transport1} 4 \, \mathcal{L}_F u_bu^b K^a \nabla _a \alpha = a \, \alpha + b \, \beta \, , \end{equation} \begin{equation}\label{eq:transport2} 4 \, \mathcal{L}_F u_bu^b K^a \nabla _a \beta = a \, \beta - b \, \alpha \, , \end{equation} where \begin{equation}\label{eq:defa} a = \, - \, 2 \, \nabla _a \big( \mathcal{L}_F u_cu^c K^a \big) \, , \end{equation} \begin{gather} b = \nabla ^a \mathcal{L} _G \, \varepsilon _{abcd} p^bv^cu^d \nonumber \\ + 2 \, \mathcal{L}_F p^b \big( p^ap^c-p_ep^e g^{ac} \big) \big( {}^{*} {\!} F_a{}^d \nabla _cF_{db} +F_a{}^d \nabla _c {}^{*} {\!} F_{db} \big) \, . \label{eq:defb} \end{gather} These equations determine the change of $\alpha$ and $\beta$ and, thus, of the polarization plane, along each ray. In particular, $b$ determines the rotation of the polarization plane with respect to the basis covectors $u_b$ and $v_b$ which are orthogonal to each other, but not parallelly transported, with respect to the optical metric.
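The qualitative content of (\ref{eq:transport1}) and (\ref{eq:transport2}) can be seen by writing $z = \alpha + i \beta$: the pair is then equivalent to $4 \, \mathcal{L}_F u_bu^b \, K^a \nabla _a z = (a - i b) \, z$, so $a$ rescales the amplitude while $b$ rotates the polarization plane. The following toy integration (constant coefficients $A$ and $B$ standing in for $a/(4\mathcal{L}_F u_bu^b)$ and $b/(4\mathcal{L}_F u_bu^b)$, which in general vary along the ray; values are hypothetical) illustrates this behavior:

```python
import numpy as np

# Toy model of the coupled transport equations with constant coefficients:
#   d(alpha)/ds = A*alpha + B*beta
#   d(beta)/ds  = A*beta  - B*alpha
A, B = 0.3, 2.0

def rhs(y):
    a_, b_ = y
    return np.array([A * a_ + B * b_, A * b_ - B * a_])

y = np.array([1.0, 0.0])          # initial (alpha, beta)
ds, steps = 1e-4, 10_000          # integrate over s in [0, 1]
for _ in range(steps):            # classical 4th-order Runge-Kutta
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * ds * k1)
    k3 = rhs(y + 0.5 * ds * k2)
    k4 = rhs(y + ds * k3)
    y = y + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# In complex form z = alpha + i*beta obeys dz/ds = (A - iB) z, so the
# amplitude grows like e^{A s} while the angle rotates at the rate -B.
amplitude = np.hypot(*y)
angle = np.arctan2(y[1], y[0])
print(round(float(amplitude), 4))   # -> 1.3499 (= e^{0.3})
print(round(float(angle), 4))       # -> -2.0
```

Replacing the constants by ray-dependent functions reproduces the general behavior: the rotation angle is the integral of $b/(4\mathcal{L}_F u_bu^b)$ along the ray.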
\section{Example: Born-Infeld theory}\label{sec:BI} As an example, we consider the transport law for the polarization plane in the Born-Infeld theory \cite{BornInfeld1934} where the Lagrangian is given by (\ref{eq:LBI}). In this case, for any background field, the two optical metrics coincide, \begin{equation}\label{eq:sigmaBI} \sigma _+ = \sigma _- = \dfrac{-1}{b_0^2+F} \, , \end{equation} \begin{equation}\label{eq:gBI} \tilde{g}{}_{+}^{ab} = \tilde{g}{}_{-}^{ab} = g^{ab} - \dfrac{F^{ac}F^b{}_c}{b_0^2 +F} \end{equation} so there is no birefringence and every solution to the eikonal equation is a solution of multiplicity two. We assume that the underlying spacetime is the Minkowski spacetime with standard inertial coordinates $(x^0=ct,x^1,x^2,x^3)$, i.e., $g_{ab} = \eta _{ab}$ where $(\eta _{ab} ) = \mathrm{diag} (-1,1,1,1)$. As the background field we choose the superposition of a time-dependent electric field and a constant magnetic field, both in $x^3$ direction; so the only non-vanishing components of the field strength tensor are \begin{equation}\label{eq:BIF} F_{03} = - F_{30} = \dfrac{E(t)}{c} \, , \quad F_{12}= - F_{21} = B_0 \, . \end{equation} Note that the homogeneous Maxwell equation is indeed satisfied, $\varepsilon ^{abcd} \nabla _b F_{cd} =0$. For this electromagnetic field, \begin{equation}\label{eq:BIp} p_a = \nabla _a S \, , \quad S(x^0,x^1,x^2,x^3) = \dfrac{1}{c} \big( x^3-x^0 \big) = \dfrac{x^3}{c} - t \end{equation} is a principal null covector field, $p_bp^b=0$ and \begin{equation}\label{eq:BIu} u_a=F_{ab}p^b = - E ( t ) \, p_a \, , \end{equation} \begin{equation}\label{eq:BIv} v_a= {}^{*} {\!} F_{ab}p^b = - c B_0 p_a \, , \end{equation} so the eikonal equation is satisfied with $u_au^a=0$. The transport vector field $K^a$ is proportional to $p^a$, i.e., the rays are straight lightlike lines in $x^3$ direction.
Note that they are lightlike not only with respect to the spacetime metric; they are lightlike geodesics also with respect to the optical metric. Whereas $K^a$ is adapted to an affine parameter with respect to the optical metric, $p^a$ is adapted to the parametrization with the time coordinate $t$ which is an affine parameter with respect to the spacetime metric. The amplitude $a_b^{(11)}$ must be orthogonal to $p_c$ with respect to the spacetime metric, so we may write it in the form \begin{equation}\label{eq:BIa} a_b^{(11)} = \zeta \big( \delta _b^1 \, \mathrm{cos} \, \varphi + \delta _b^2 \, \mathrm{sin} \, \varphi \big) + \gamma \, p_b \end{equation} with scalar coefficients $\zeta$ and $\gamma$ and an angle $\varphi$ which gives, at each point along the ray, the rotation of the polarization plane with respect to the $(x^1,x^2)$ basis vectors which are parallel with respect to the Levi-Civita derivative of the metric along the ray. For the electromagnetic field (\ref{eq:BIF}), the partial derivatives of the Lagrangian are \begin{equation}\label{eq:BILF} \mathcal{L} _F = \dfrac{- \, 1 }{2 \sqrt{1+ \dfrac{B_0^2}{b_0^2}} \sqrt{ 1 - \dfrac{E (t) ^2}{c^2b_0^2} }} \, , \end{equation} \begin{equation}\label{eq:BILG} \mathcal{L} _G = \dfrac{- \, B_0 E ( t )}{ c \, b_0^2 \sqrt{1+ \dfrac{B_0^2}{b_0^2}} \sqrt{ 1 - \dfrac{E (t) ^2}{c^2b_0^2} }} \, . \end{equation} With these results, inserting (\ref{eq:BIa}) into (\ref{eq:transprinc}) yields \begin{equation}\label{eq:dotphi} \dot{\varphi} = \dfrac{- \, B_0 \dfrac{d E(t)}{dt}}{2 \, c \, b_0^2 \Big( 1 - \dfrac{E (t)^2}{c^2b_0^2}\Big)} \end{equation} where the overdot means derivative with respect to $t$ along the ray, $p^a \nabla _a \varphi = \dot{\varphi}$. We see that the time-dependence of $\mathcal{L}_G$ produces a rotation of the polarization plane. 
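As a consistency check, (\ref{eq:dotphi}) can be integrated numerically. The following sketch (units with $c = 1$ and hypothetical field values) confirms that, up to sign, the accumulated rotation depends only on the endpoint values of $E(t)$, not on the shape of the ramp, and agrees with $\tfrac{B_0}{2 b_0} \, \mathrm{arctanh} \big( \tfrac{E_0}{b_0} \big)$:

```python
import numpy as np

# Numerical check of the rotation angle, in units with c = 1 and
# hypothetical field values: integrating
#   dphi/dt = -B0 * dE/dt / (2 b0^2 (1 - E(t)^2 / b0^2))
# over a smooth ramp of E from 0 to E0.
b0, B0, E0 = 1.0, 0.5, 0.4

t = np.linspace(0.0, 1.0, 200_001)
E = E0 * np.sin(0.5 * np.pi * t) ** 2        # smooth ramp, E(0)=0, E(1)=E0
dE_dt = np.gradient(E, t)
dphi_dt = -B0 * dE_dt / (2.0 * b0**2 * (1.0 - (E / b0) ** 2))

# trapezoidal rule
delta_phi = np.sum(0.5 * (dphi_dt[1:] + dphi_dt[:-1]) * np.diff(t))

closed_form = (B0 / (2.0 * b0)) * np.arctanh(E0 / b0)
print(round(float(abs(delta_phi)), 6))       # -> 0.105912
print(round(float(closed_form), 6))          # -> 0.105912
```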
Integrating (\ref{eq:dotphi}) from $t_1= 0$ with $E(t_1)=0$ to a time $t_2$ with $E(t_2)=E_0$ gives \begin{equation}\label{eq:phi} \Delta \varphi = \dfrac{B_0}{2 b_0} \, \mathrm{arctanh} \Big( \dfrac{E_0}{c b_0} \Big) = \dfrac{B_0 E_0}{2 c b_0 ^2} \, \left( 1 + O \Big( \dfrac{E_0^2}{c^2 b_0^2} \Big) \right) \, . \end{equation} In principle, this can be utilized for a new laboratory test of the Born-Infeld theory. If in the considered constellation with strong fields $E_0$ and $B_0$ no rotation of the polarization plane is detected, this gives a lower bound on $b_0$. However, the effect on the polarization plane is so small that with present-day technology this test of the Born-Infeld theory is not yet competitive with other tests. E.g., in Ref. \cite{SchellstedePerlickLaemmerzahl2015} we have seen that with interferometric methods one could find a bound on $b_0$ of the order of $b_0 \gtrsim 7 \times 10^7$ T which corresponds to $b_0 \gtrsim 7 \times 10^{11} \sqrt{\mathrm{g}} / \big( \sqrt{\mathrm{cm}} \, \mathrm{s} \big)$ in Gaussian units. Assuming that a rotation of the polarization plane by one arcminute could be measured, $\Delta \varphi \approx 3 \times 10^{-4}$ rad, we would need electric and magnetic fields of \begin{equation}\label{eq:E0B0} \left| B_0 \, \dfrac{E_0}{c} \right| \approx 3 \times 10^{12} \, \mathrm{T}^2 \end{equation} to be competitive with the interferometric test. This is not reachable in a laboratory experiment in the near future. \section{Conclusions}\label{sec:conclusions} It was the main purpose of this paper to derive the transport law of the polarization plane in nonlinear vacuum electrodynamics. We have done this, on an unspecified general-relativistic spacetime, for a theory of the Pleba{\'n}ski class and an electromagnetic background field which were arbitrary except for some non-degeneracy conditions.
To that end we have utilized an approximate-plane-harmonic-wave ansatz which takes the generation of higher harmonics and frequency rectification into account. According to this ansatz, the electromagnetic field is written as an asymptotic series with respect to a parameter $\alpha$ where the limit $\alpha \to 0$ refers to sending the frequency to infinity. We have seen that the generalized Maxwell equations have to be solved to zeroth and to first order with respect to $\alpha$ for determining the transport law of the polarization plane in lowest non-trivial order. When considering the generalized Maxwell equations to zeroth order, we have rederived the known result that, for every theory of the Pleba{\'n}ski class and every background field that satisfy the assumed non-degeneracy conditions, there are two optical metrics which are both of Lorentzian signature. For a solution of the zeroth-order equations one needs a scalar function, the so-called eikonal function $S$, whose gradient $p_b = \nabla _b S$ is lightlike with respect to at least one of the two optical metrics, and a covector field, $a_b^{(11)}$, which has to satisfy an algebraic equation known as the zeroth-order polarization condition. We have seen that two cases have to be distinguished. The first case is that of a solution of multiplicity one, i.e., the case that $p_b=\nabla _bS$ is lightlike with respect to only one of the optical metrics. Then the polarization plane (i.e., the plane spanned by $a_b^{(11)}$ and $p_b$) is uniquely determined by the zeroth-order polarization condition. The first-order equations give no additional information on the polarization plane and have to be checked only for consistency. The second case is that of a solution of multiplicity two, i.e., the case that $p_b = \nabla _b S$ is lightlike with respect to both optical metrics. Then the zeroth-order polarization condition allows an arbitrary rotation of the polarization plane along each ray. 
However, the generalized Maxwell equations at first order give us a transport equation which determines the polarization plane uniquely along a ray if it is given at one point of this ray. This transport law has a fairly simple form in the case that $p_b$ is a principal null covector of the electromagnetic background field, $F_a{}^bp_b \sim p_a$, see Section \ref{sec:transport1}. It is much more awkward if this is not the case, see Section \ref{sec:transport2}. We have exemplified the general results with the Born-Infeld theory. In this theory the two optical metrics coincide, i.e., all solutions of the eikonal equation are of multiplicity two. We have considered a particular solution where $p_b$ is a principal null covector of a background field on Minkowski spacetime. In this example, the rays are straight lightlike lines, i.e., the light propagation is the same as in the standard Maxwell vacuum theory. However, the behavior of the polarization plane is different: Whereas in the standard Maxwell theory it is parallelly transported along each ray, here it rotates with respect to a parallelly transported plane by an angle $\Delta \varphi$. This is a feature that could be observed in a laboratory experiment, sometime in the future, when sufficiently strong fields are available. \section*{Acknowledgements} We thank Gerold Schellstede for helpful discussions. V.P. is grateful to Deutsche Forschungsgemeinschaft for financial support under Grant No. LA 905/14-1. Moreover, C.L. and V.P. gratefully acknowledge support from the Deutsche Forschungsgemeinschaft within the Research Training Group 1620 ``Models of Gravity.'' A.M. acknowledges support from DAAD.
\section{Introduction} A major theme in set theory is the question of how much compactness can consistently exist in the universe. We regard an object as satisfying a {\it compactness principle} if whenever a property holds for all strictly smaller substructures of the object, then this property holds for the object itself. Such principles typically follow from large cardinals, but can often occur at successor cardinals as well. So the aforementioned question can be rephrased as: What combinatorial properties of large cardinals can consistently hold at small ones? A key instance of such a combinatorial principle is the \emph{tree property}. A regular cardinal $\mu$ has the tree property if every tree of height $\mu$, all of whose levels have size less than $\mu$, has a cofinal branch. For inaccessible $\mu$, the tree property is equivalent to weak compactness of $\mu$; but this combinatorial principle can consistently hold at successor cardinals. Results of Mitchell and Silver show that the tree property at $\aleph_2$ is equiconsistent with a weakly compact cardinal. Mitchell's 1972 proof \cite{mitchell} obtaining the consistency of the tree property at $\aleph_2$ initiated a long, ongoing project in set theory: Obtain the tree property at all regular cardinals greater than $\aleph_1$. There are strengthenings of the tree property that capture the essence of larger cardinals in a similar way. Jech defined a principle which we call the strong tree property that characterizes strongly compact cardinals. Then Magidor isolated a further strengthening, $\operatorname{ITP}$ (or the super tree property) that can characterize supercompact cardinals. We will give the precise definitions in the next section, but the highlight is the following: \begin{fact} Let $\mu$ be an inaccessible cardinal. \begin{enumerate} \item (Jech, 1973 \cite{Jech}) The strong tree property holds at $\mu$ if and only if $\mu$ is strongly compact. 
\item (Magidor, 1974 \cite{Magidor}) $\operatorname{ITP}$ holds at $\mu$ if and only if $\mu$ is supercompact. \end{enumerate} \end{fact} Much as with the tree property, these combinatorial characterizations can consistently hold at successor cardinals. Indeed, if one starts with a supercompact $\lambda$ and forces with the Mitchell poset to make the tree property hold at $\lambda=\aleph_2=2^\omega$, then even $\operatorname{ITP}$ will hold at $\aleph_2$ in the generic extension. Similar remarks hold for the strong tree property. So we have an even more ambitious version of the project to obtain the tree property everywhere: Can we consistently obtain the strong or super tree properties at every regular cardinal greater than $\aleph_1$? Knowing that the tree property and its strengthenings can be obtained at $\aleph_2$, the natural follow-up question is what happens at higher $\aleph_n$'s. Below we summarize some history on the subject: \begin{enumerate} \item (Abraham, 1983 \cite{abraham}) Starting from a supercompact and a weakly compact above it, one can force the tree property simultaneously at $\aleph_2$ and $\aleph_3$. \item (Cummings-Foreman, 1998 \cite{Cummings-Foreman}) Starting from $\omega$-many supercompact cardinals, the tree property can be forced to hold simultaneously at every $\aleph_n$, for $n>1$. \item (Fontanella \cite{FontanellaCF}; Unger \cite{UngerCF}, independently, 2013--14) In the Cummings-Foreman model, $\operatorname{ITP}$ holds at every $\aleph_n$, for $n>1$. \end{enumerate} A classical theorem of Aronszajn shows that the tree property, hence also its strengthenings, fails at $\aleph_1$. An old generalization by Specker of this theorem implies that even obtaining the tree property at $\nu^+$ and $\nu^{++}$ with $\nu$ strong limit requires a violation of the singular cardinals hypothesis (SCH) at $\nu$. We note that obtaining this situation just at $\nu = \aleph_{\omega}$ is an open problem.
The crux of this program will thus likely be encountered at successors of singulars. Our focus here is on this region. The first result in that direction is by Magidor and Shelah \cite{MagidorShelah}, who showed that if $\nu$ is a singular limit of supercompact cardinals, then $\mu = \nu^+$ has the tree property. They also showed that the tree property can be forced at $\aleph_{\omega+1}$. The original large cardinal hypothesis included a huge cardinal. It was later reduced to $\omega$-many supercompact cardinals, using a Prikry construction in \cite{sinapova-TP}. More recently, Neeman \cite{Neeman} showed that from this situation, one can force $\mu = \aleph_{\omega+1}$ to have the tree property with a product of Levy collapses. A few years ago, Fontanella \cite{Fontanella} generalized both arguments in \cite{MagidorShelah} and \cite{Neeman}, showing that the strong tree property holds at the successor of a singular limit of strongly compact cardinals, and also can consistently hold at $\aleph_{\omega+1}$. In this paper we show that even $\operatorname{ITP}$ can hold at the successor of a singular cardinal. We prove the following theorems: \begin{theorem} Suppose that $\langle\kappa_n\rangle_{n<\omega}$ is an increasing sequence of supercompact cardinals, $\nu = \sup_n \kappa_n$, and $\mu=\nu^+$. Then $\operatorname{ITP}$ holds at $\mu$. \end{theorem} Then we show this can be forced at smaller cardinals: \begin{theorem} Suppose that $\langle\kappa_n\rangle_{n<\omega}$ is an increasing sequence of supercompact cardinals, $\nu = \sup_n \kappa_n$ and $\mu=\nu^+$. Then there is a forcing extension in which $\mu=\aleph_{\omega+1}$ and $\operatorname{ITP}$ holds at $\aleph_{\omega+1}$. \end{theorem} We also consider a strengthening of $\operatorname{ITP}$, the so-called ineffable slender list property ($\operatorname{ISP}$). This principle has been of interest in connection with the SCH as well as with the consistency strength of the proper forcing axiom (PFA).
Viale and Wei{\ss} showed that under PFA, $\operatorname{ISP}$ holds at $\aleph_2$ \cite{viale-weiss}. In \cite{viale} Viale showed that $\operatorname{ISP}$ at $\aleph_2$ together with stationarily many internally unbounded models implies that SCH holds; it is still open whether $\operatorname{ISP}$ by itself is enough. Viale and Wei{\ss} gave a striking application \cite{viale-weiss}, showing that any standard iteration to force PFA must start with a strongly compact cardinal. If in addition the iteration is proper, then there must be a supercompact cardinal in the ground model. We consider both $\operatorname{ISP}$ and certain weakenings, $\operatorname{ISP}(\delta, \mu, \lambda)$. Here $\operatorname{ISP}_\mu$ as defined by Wei{\ss} corresponds to $\operatorname{ISP}(\aleph_1, \mu, \lambda)$ for all $\lambda$; the principle is weakened as $\delta$ is increased; and $\operatorname{ISP}(\mu, \mu, \lambda)$ for all $\lambda$ implies $\operatorname{ITP}_\mu$. The precise definitions are in the next section. We determine exactly which level of ISP can hold at a successor of a singular. \begin{theorem} Suppose $\nu$ is a singular strong limit cardinal, and $\mu = \nu^+$. Then for all $\delta \leq \nu$, $\operatorname{ISP}(\delta,\mu,2^\nu)$ fails. In particular, if SCH holds at $\nu$, then $\operatorname{ISP}(\nu,\mu,\mu)$ fails. \end{theorem} \begin{theorem} Suppose that $\langle\kappa_n\mid n<\omega\rangle$ is an increasing sequence of supercompact cardinals with limit $\nu$, $\mu=\nu^+$, and $\lambda \geq \mu$. Then $\operatorname{ISP}(\mu,\mu,\lambda)$ holds. \end{theorem} This paper is organized as follows. In section 2, we give the definitions of the principles discussed above and fix some notation. In section 3 we prove that $\operatorname{ITP}$ holds at the successor of a limit of supercompacts. In section 4, we prove the theorems regarding $\operatorname{ISP}$, and in section 5 we prove the consistency of $\operatorname{ITP}$ at $\aleph_{\omega+1}$.
We then conclude with some open questions. \section{Preliminaries and notation} In this section we define the notion of lists and the strengthenings of the tree property discussed in the last section. \begin{definition} Suppose $\mu$ is a regular cardinal and $\lambda \geq \mu$. We say that ${d} = \langle d_z \rangle_{z \in \mathcal{P}_\mu(\lambda)}$ is a \emph{$\mathcal{P}_\mu(\lambda)$-list} if for all $z \in \mathcal{P}_\mu(\lambda)$, $d_z\subseteq z$. A tree-like structure is obtained from a list by regarding the levels, indexed by $z \in \mathcal{P}_\mu(\lambda)$, as consisting of restrictions of $d_y$'s above: \[ \Lev{d}{z} = \{d_y \cap z \mid z \subseteq y, y \in \mathcal{P}_\mu(\lambda)\}. \] A \emph{cofinal branch through $d$} is a set $b\subseteq \lambda$ so that for all $z \in \mathcal{P}_\mu(\lambda)$, $b \cap z \in \Lev{d}{z}$. A $\mathcal{P}_\mu(\lambda)$-list is \emph{thin} if $|\Lev{d}{z}|<\mu$ for all $z \in \mathcal{P}_\mu(\lambda)$. \end{definition} Note that if $\mu$ is inaccessible, then every $\mathcal{P}_\mu(\lambda)$-list is thin. \begin{definition} We say \emph{$\operatorname{TP}(\mu,\lambda)$ holds} if for every thin $\mathcal{P}_\mu(\lambda)$-list $d$, there is a cofinal branch $b$ through $d$. The \emph{strong tree property} holds at $\mu$ if $\operatorname{TP}(\mu,\lambda)$ holds for all $\lambda \geq \mu$. A set $b\subseteq \lambda$ is an \emph{ineffable branch} through $d$ if $\{z \in \mathcal{P}_\mu(\lambda) \mid b \cap z = d_z\}$ is stationary. We say \emph{$\operatorname{ITP}(\mu,\lambda)$ holds} if every thin $\mathcal{P}_\mu(\lambda)$-list has an ineffable branch. The \emph{super tree property} holds at $\mu$ ($\operatorname{ITP}_\mu$ holds) if $\operatorname{ITP}(\mu,\lambda)$ holds for all $\lambda \geq \mu$. \end{definition} Thus the levels of a list $d$ consist of small approximations to a subset of $\lambda$, and a cofinal branch is a subset of $\lambda$ that is approximated at every level. 
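To make the tree analogy concrete (the following formalization is ours, though it is the standard one), consider the case $\lambda = \mu$. Since $\mu$ is club in $\mathcal{P}_\mu(\mu)$, a $\mathcal{P}_\mu(\mu)$-list may be identified with a sequence $\langle d_\alpha \mid \alpha < \mu \rangle$ with $d_\alpha \subseteq \alpha$, and the associated tree-like structure becomes an honest tree once nodes are tagged with their levels:
\[
T_d = \{ (\alpha, d_\beta \cap \alpha) \mid \alpha \leq \beta < \mu \}, \qquad (\alpha, s) \leq_{T_d} (\alpha', s') \iff \alpha \leq \alpha' \text{ and } s = s' \cap \alpha.
\]
Then $T_d$ has height $\mu$, its $\alpha$-th level is a copy of $\Lev{d}{\alpha}$, the list is thin exactly when every level of $T_d$ has size less than $\mu$, and cofinal branches through the list correspond to cofinal branches through $T_d$.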
Note that $\operatorname{TP}(\mu,\mu)$ is equivalent to the tree property at $\mu$. A further strengthening of $\operatorname{ITP}$ called $\operatorname{ISP}$ was defined by Wei{\ss} in \cite{Weiss}. \begin{definition} A $\mathcal{P}_\mu(\lambda)$-list is \emph{slender} if for all sufficiently large $\theta$, for club many $M\in\mathcal{P}_\mu(H_\theta)$, for all $b\in M\cap\mathcal{P}_{\omega_1}(\lambda)$, $d_{M\cap\lambda}\cap b\in M$. $\operatorname{ISP}(\mu)$ holds if for every $\lambda\geq \mu$, every slender $\mathcal{P}_\mu(\lambda)$-list has an ineffable branch. \end{definition} All thin lists are slender, though the converse can fail. Consequently, $\operatorname{ISP}$ implies $\operatorname{ITP}$. Moreover, like $\operatorname{ITP}$, $\operatorname{ISP}$ can consistently hold at $\aleph_2$. For example, in \cite{viale-weiss} it is shown that PFA implies $\operatorname{ISP}$ holds at $\aleph_2$. Viale and Wei{\ss} gave a characterization of $\operatorname{ISP}$ via \emph{guessing models}. We will discuss this in more detail later, together with a refinement of slenderness to analyze what happens when $\mu$ is the successor of a singular cardinal. We use standard notation. For conditions in a forcing poset $\mathbb{P}$, $p\leq q$ denotes that $p$ is stronger than $q$. We say that $\mathbb{P}$ is $\kappa$-closed to mean that decreasing sequences of length less than $\kappa$ have a lower bound. $\mathbb{P}$ is ${<}\kappa$-distributive if it adds no new sequences of length less than $\kappa$. \section{ITP at the successor of a singular limit of supercompacts} We begin with the simplest instance of the super tree property, namely $\operatorname{ITP}(\mu,\mu)$. \begin{theorem}\label{baby} Suppose that $\langle\kappa_n\rangle_{n<\omega}$ is an increasing sequence of supercompact cardinals with limit $\nu$ and $\mu=\nu^+$. Then we have $\operatorname{ITP}(\mu, \mu)$.
\end{theorem} \begin{proof} Since $\mu$ is club in $\mathcal{P}_\mu(\mu)$, we may assume that our list is indexed by $\mu$. So let $d=\langle d_\alpha\mid\alpha<\mu\rangle$ be a thin $\mu$-list. Each $\Lev{d}{\alpha}$ has size at most $\nu$, so we enumerate these as $\Lev{d}{\alpha} = \{ \lev{\alpha}{\xi} \mid \xi < \nu\}$. We will show that there is a $b\subset\mu$ such that $\{\alpha<\mu\mid d_\alpha=b\cap\alpha\}$ is stationary. The ineffability of $b$ will come from a basic fact about sets in supercompactness measures. \begin{fact}\label{stationarysups} Let $\mu$ be regular and suppose $U$ is a normal measure on $\mathcal{P}_\kappa(\mu)$ with $\kappa \leq \mu$. Then for all $A \in U$, $\{\sup x \mid x \in A\}$ is stationary. \end{fact} For each $n$, let $U_n$ be a normal measure on $\mathcal{P}_{\kappa_n}(\mu)$, and let $j_{U_n}$ be the corresponding ultrapower embedding. \begin{lemma} There are $n<\omega$, an unbounded $S\subset\mu$, and $A\in U_0$ such that for all $x\in A$ and $\alpha\in x\cap S$, there is $\xi<\kappa_n$ with $d_{\sup x}\cap\alpha=\lev{\alpha}{\xi}$. \end{lemma} \begin{proof} Let $i=j_{U_0}$ and consider $id_{\sup i"\mu}$ (that is, $i(d)_{\sup i"\mu}$). For all $\alpha<\mu$, there is some $n$ and $\xi<i(\kappa_n)$ such that $id_{\sup i"\mu}\cap i(\alpha)=i\lev{i(\alpha)}{\xi}$. Then for some $n$, there is a cofinal set $S\subset\mu$ such that for all $\alpha\in S$, $id_{\sup i"\mu}\cap i(\alpha)$ has index below $i(\kappa_n)$ in $\Lev{id}{i(\alpha)}$; namely, for some $\xi< i(\kappa_n)$, $$id_{\sup i"\mu}\cap i(\alpha)=i\lev{i(\alpha)}{\xi}.$$ Then for all $\alpha\in S$, there is a measure one set $A_\alpha\in U_0$ such that for all $x\in A_\alpha$, for some $\xi<\kappa_n$, $d_{\sup x}\cap \alpha=\lev{\alpha}{\xi}$. Set $A:=\triangle_{\alpha\in S}A_\alpha$. Then $S, A$ are as desired. \end{proof} Fix $S, A, n$ as in the conclusion of the lemma.
Note that then for all $\alpha<\beta$ both in $S$, there are $\xi, \delta<\kappa_n$ such that $\lev{\alpha}{\xi}=\lev{\beta}{\delta}\cap\alpha$. For the purposes of the following lemma, define a (not necessarily cofinal) branch through $d$ as a set $b \subseteq \mu$ so that $b \cap \alpha \in \Lev{d}{\alpha}$ for all $\alpha \leq \sup b$. (Note $b$ may be a cofinal branch even if $b$ is bounded as a subset of $\mu$.) \begin{lemma} There is a sequence $\langle b_\delta\mid\delta<\kappa_n\rangle$ of (possibly bounded) branches through $d$ and a measure one set $A'\in U_0$ such that for all $x\in A'$ and $\alpha\in x$, there is $\delta<\kappa_n$ such that $d_{\sup x}\cap\alpha=b_\delta\cap\alpha$. \end{lemma} \begin{proof} Let $j=j_{U_{n+1}}:V\rightarrow M$, and let $\gamma\in j(S)\setminus\sup j"\mu$. By elementarity, for all $\alpha\in S$, there are $\xi, \delta<\kappa_n$ such that $j(\lev{\alpha}{\xi})=j\lev{j(\alpha)}{\xi}=j\lev{\gamma}{\delta}\cap j(\alpha)$. For each $\delta<\kappa_n$, let $$b_\delta=\bigcup\{\lev{\alpha}{\xi}\mid \alpha\in S, j(\lev{\alpha}{\xi})=j\lev{\gamma}{\delta}\cap j(\alpha)\}.$$ That is, $b_\delta$ is the pullback by $j$ of ``the predecessors" of $j\lev{\gamma}{\delta}$. We have the following: \begin{itemize} \item Each $b_\delta$ is a branch, as it is the union of a coherent sequence of elements of the $\alpha$-th level of $d$ ranging over $\alpha<\mu$. \item There is some $\delta<\kappa_n$ such that $\{ \alpha < \mu \mid b_\delta \cap \alpha \in \Lev{d}{\alpha}\}$ is unbounded in $\mu$. Such $b_\delta$ is a cofinal branch through $d$. \item For each $\alpha < \mu$, there is $\delta<\kappa_n$ such that $b_\delta\cap\alpha \in \Lev{d}{\alpha}$. \end{itemize} The last item follows by the second item, but we gain more information from the following direct argument. Fix $\alpha<\mu$. Choose $x\in j(A)$, such that $\gamma\in x$ and there is some $\alpha'\in S\setminus\alpha$ with $j(\alpha')\in x$. Then we apply elementarity. 
In particular, for all such $x$'s there are some $\xi,\delta<\kappa_n$ such that: \begin{itemize} \item $jd_{\sup x}\cap\gamma=j\lev{\gamma}{\delta}$, and \item $jd_{\sup x}\cap j(\alpha')=j \lev{j(\alpha')}{\xi}=j(\lev{\alpha'}{\xi})$. \end{itemize} Then $jd_{\sup x}\cap j(\alpha')=j\lev{\gamma}{\delta}\cap j(\alpha')=j(b_\delta)\cap j(\alpha')$. The last equality holds since, by definition, $b_\delta\cap\alpha'=\lev{\alpha'}{\xi}$. So, $jd_{\sup x}\cap j(\alpha)=j(b_\delta)\cap j(\alpha)=j(b_\delta\cap\alpha)$. So we have in $M$ that for all $\alpha<\mu$, and for $j(U_0)$-measure one many $x\in\mathcal{P}_{\kappa_0}(j(\mu))$, there is some $\delta$, such that $jd_{\sup x}\cap j(\alpha)=j(b_\delta\cap\alpha)$. Note that $j(\langle b_\delta\mid \delta<\kappa_n\rangle)=\langle j(b_\delta)\mid \delta<\kappa_n\rangle$, since $\kappa_n$ is below the critical point. Then by elementarity, in $V$ we have for all $\alpha<\mu$ that there is a measure one set $A_\alpha\in U_0$ such that for all $x\in A_\alpha$, there is $\delta<\kappa_n$ such that $d_{\sup x}\cap\alpha=b_\delta\cap \alpha$. Set $A':=\triangle A_\alpha$. $A'$ is as desired. \end{proof} Fix the branches $\langle b_\delta\mid\delta<\kappa_n\rangle$ and $A'\in U_0$ as in the above lemma. By restricting to a subset of $\kappa_n$, if necessary, assume that for $\eta<\delta$, $b_\eta$ and $b_\delta$ are distinct branches. (Note that this is not automatic for all $\eta<\delta$, since our top node $\gamma$ may be strictly above $\sup j"\mu$.) Then for distinct $\eta, \delta<\kappa_n$, let $\alpha_{\eta, \delta}$ be such that for all $\alpha\geq\alpha_{\eta, \delta}$, $b_\eta\cap\alpha\neq b_\delta\cap\alpha$ (such an $\alpha_{\eta, \delta}$ exists, since otherwise they would be the same branch). Let $\bar{\alpha}=\sup_{\eta, \delta<\kappa_n}\alpha_{\eta,\delta}<\mu$. But then for all $x\in A'$ with $\bar{\alpha}\in x$, there is a unique $\delta<\kappa_n$, such that for all $\alpha\in x\setminus \bar{\alpha}$, $d_{\sup x}\cap\alpha=b_\delta\cap\alpha$.
By intersecting with a measure one set, we may assume that for all $x\in A'$, $x$ is unbounded in $\sup x$. Then $d_{\sup x}=b_\delta\cap\sup x$. So the set $T:=\{\beta<\mu\mid (\exists \delta<\kappa_n) d_\beta=b_\delta\cap\beta\}\supset\{\sup x\mid \bar{\alpha}\in x, x\in A'\}$, and by Fact~\ref{stationarysups}, this last set is stationary. For each $\delta$, let $T_\delta:=\{\beta<\mu\mid d_\beta=b_\delta\cap\beta\}$. Since $T=\bigcup_{\delta<\kappa_n} T_\delta$, there is some $\delta$ so that $T_\delta$ is stationary. This completes the proof of Theorem~\ref{baby}. \end{proof} Next, we argue for the two-cardinal version. \begin{theorem}\label{two cardinal} Suppose that $\langle\kappa_n\rangle_{n<\omega}$ is an increasing sequence of supercompact cardinals with limit $\nu$ and $\mu=\nu^+$, and let $\lambda>\mu$ be inaccessible. Then we have $\operatorname{ITP}(\mu, \lambda)$. \end{theorem} \begin{proof} Suppose that $d=\langle d_z\mid z\in \mathcal{P}_\mu(\lambda)\rangle$ is a thin $\mathcal{P}_\mu(\lambda)$-list. Recall for each $z\in\mathcal{P}_\mu(\lambda)$ that the $z$-th level of $d$ is $\Lev{d}{z}=\{z\cap d_y\mid y\supset z\}$. Since $d$ is thin, we enumerate each level as $\{\lev{z}{\xi}\mid \xi<\nu\}$. For each $n$, let $U_n$ be a normal measure on $\mathcal{P}_{\kappa_n}(\lambda)$. Note that $\lambda=|\mathcal{P}_\mu(\lambda)|$, as $\lambda$ is inaccessible. Let $i=j_{U_0}$ and set $z^*:=\bigcup i"\mathcal{P}_\mu(\lambda)$, and let $g:\mathcal{P}_{\kappa_0}(\lambda)\rightarrow\mathcal{P}_{\mu}(\lambda)$ be such that $z^*=[g]_{U_0}$. Then $id_{z^*}$ (that is, $i(d)_{z^*}$) is $[x\mapsto d_{g(x)}]_{U_0}$. \begin{claim}\label{g-image is stationary} If $A\in U_0$, then $\bar{A}:=\{g(x)\mid x\in A\}$ is stationary in $\mathcal{P}_\mu(\lambda)$. \end{claim} \begin{proof} Suppose that $C$ is a club in $\mathcal{P}_\mu(\lambda)$. Then in $M$, $i"C$ is a directed subset of $i(C)$ of size less than $i(\mu)$. So, $z^*=\bigcup i"\mathcal{P}_\mu(\lambda)=\bigcup i"C\in i(C)$.
Also, by definition of $g$, $z^*\in i(\bar{A})$. So, $C\cap\bar{A}$ is nonempty. \end{proof} For all $z\in\mathcal{P}_\mu(\lambda)$, there is some $n$ and $\xi<i(\kappa_n)$, such that $id_{z^*}\cap i(z)=i\lev{i(z)}{\xi}$. Then for some $n$, there is a stationary set $S\subset\mathcal{P}_\mu(\lambda)$, such that for all $z\in S$, there is $\xi<i(\kappa_n)$, $$id_{z^*}\cap i(z)=i\lev{i(z)}{\xi}.$$ Then for all $z\in S$, there is a measure one set $A_z\in U_0$, such that for all $x\in A_z$, there is $\xi<\kappa_n$, such that $d_{g(x)}\cap z=\lev{z}{\xi}$. Next we want to take a diagonal intersection of the $A_z$'s. To that end, fix a bijection $c:\mathcal{P}_\mu(\lambda)\rightarrow \lambda$. Let $h$ be a function with domain $\mathcal{P}_{\kappa_0}(\lambda)$, such that $h(x)=\{z\in \mathcal{P}_\mu(\lambda)\mid c(z)\in x\}$. \begin{claim} $[h]_{U_0}=i"\mathcal{P}_\mu(\lambda)$. \end{claim} \begin{proof} Clearly, for each $z\in \mathcal{P}_\mu(\lambda)$, $i(z)\in [h]_{U_0}$. For the other direction, if $[f]_{U_0}\in [h]_{U_0}$, then for $U_0$-almost every $x$, $f(x)=z\in\mathcal{P}_\mu(\lambda)$ for some $z$ with $c(z)\in x$, i.e. $c(f(x))\in x$. By normality, $c\circ f$ is constant on a measure one set, say with value $\alpha$. Setting $z:=c^{-1}(\alpha)$, we have $[f]_{U_0}=i(z)$. \end{proof} So let us assume that for all $x\in\mathcal{P}_{\kappa_0}(\lambda)$, $g(x)=\bigcup h(x)=\bigcup\{z\mid c(z)\in x\}$. Now set $A:=\triangle_{z\in S}A_z=\{x\in\mathcal{P}_{\kappa_0}(\lambda)\mid x\in \bigcap_{c(z)\in x} A_z\}\in U_0$. Then if $x\in A$, $z\in S$, and $c(z)\in x$, there is $\xi<\kappa_n$, such that $d_{g(x)}\cap z=\lev{z}{\xi}$. As a corollary, we have that for all $z\subset w$, both in $S$, there are $\xi, \delta<\kappa_n$, such that $\lev{z}{\xi}=\lev{w}{\delta}\cap z$. \bigskip Next we prove the analogue of the second lemma in the proof of Theorem \ref{baby}.
\begin{lemma} There is a sequence $\langle b_\delta\mid\delta<\kappa_n\rangle$ of (possibly bounded) branches through the list and a measure one set $A'\in U_0$, such that for all $x\in A'$, for all $z\in\mathcal{P}_\mu(\lambda)$ with $c(z)\in x$, there is $\delta<\kappa_n$, such that $d_{g(x)}\cap z=b_\delta\cap z$. \end{lemma} \begin{proof} Let $j=j_{U_{n+1}}:V\rightarrow M$. By elementarity, $j(A)\subset\mathcal{P}_{\kappa_0}(j(\lambda))$ is in $j(U_0)$ and $jc:\mathcal{P}_{j(\mu)}(j(\lambda))\rightarrow j(\lambda)$ is a bijection. And if $x\in j(A)$ and $z\in j(S)$ is such that $jc(z)\in x$, then there is $\delta<\kappa_n$, such that $jd_{jg(x)}\cap z=j\lev{z}{\delta}$. Now let $u\in j(S)$ be such that $\bigcup j"\mathcal{P}_\mu(\lambda)\subset u$. Then it follows that, for all $z\in S$, there are $\xi, \delta<\kappa_n$, such that $j(\lev{z}{\xi})=j\lev{j(z)}{\xi}=j\lev{u}{\delta}\cap j(z)$. For each $\delta<\kappa_n$, let $$b_\delta=\bigcup\{\lev{z}{\xi}\mid z\in S, j(\lev{z}{\xi})=j\lev{u}{\delta}\cap j(z)\}.$$ I.e., analogously as before, $b_\delta$ is the pullback of ``the predecessors" of $j\lev{u}{\delta}$. Then each $b_\delta$ is the union of a coherent sequence of elements of the $z$-th level of $d$ ranging over $z\in\mathcal{P}_\mu(\lambda)$ (it may be bounded). And for each $z\in\mathcal{P}_\mu(\lambda)$, there is $\delta<\kappa_n$, such that $b_\delta\cap z$ is in the $z$-th level of $d$. \begin{claim} For all $z\in\mathcal{P}_\mu(\lambda)$, in $M$, for $j(U_0)$-measure one many $x\in\mathcal{P}_{\kappa_0}(j(\lambda))$, there is some $\delta<\kappa_n$, such that $jd_{jg(x)}\cap j(z)=j(b_\delta\cap z)$. \end{claim} \begin{proof} Fix $z\in \mathcal{P}_\mu(\lambda)$. Choose $x\in j(A)$, such that $jc(u)\in x$ and there is some $z'\in S$, $z\subset z'$ with $jc(j(z'))=j(c(z'))\in x$. 
Then there is some $\delta<\kappa_n$, such that $jd_{jg(x)}\cap u=j\lev{u}{\delta}$, and for some $\xi<\kappa_n$, $jd_{jg(x)}\cap j(z')=j \lev{j(z')}{\xi}=j(\lev{z'}{\xi})=j\lev{u}{\delta}\cap j(z')=j(b_\delta)\cap j(z')$. The last equality holds since, by definition, $b_\delta\cap z'=\lev{z'}{\xi}$. So $jd_{jg(x)}\cap j(z)=j(b_\delta)\cap j(z)=j(b_\delta\cap z)$. \end{proof} Then by elementarity, in $V$, for all $z\in\mathcal{P}_\mu(\lambda)$, there is a measure one set $A_z\in U_0$, such that for all $x\in A_z$, there is $\delta<\kappa_n$, such that $d_{g(x)}\cap z=b_\delta\cap z$. Set $A':=\triangle A_z=\{x\mid x\in\bigcap_{c(z)\in x}A_z\}$. This is as desired. \end{proof} Fix the branches $\langle b_\delta\mid\delta<\kappa_n\rangle$ and $A'\in U_0$ as in the above lemma. By passing to a subset of $\kappa_n$ if necessary, assume that for $\eta<\delta$, $b_\eta$ and $b_\delta$ are distinct branches. As before, for $\eta<\delta<\kappa_n$, let $z_{\eta, \delta}$ be such that for all $z\supset z_{\eta, \delta}$, $b_\eta\cap z\neq b_\delta\cap z$. Let $\bar{z}=\bigcup_{\eta<\delta<\kappa_n}z_{\eta,\delta}\in\mathcal{P}_\mu(\lambda)$. Let $A''=\{x\in A'\mid c(\bar{z})\in x, g(x)=\bigcup\{z\mid \bar{z}\subset z, c(z)\in x\}\}\in U_0$. Then for all $x\in A''$, there is a unique $\delta<\kappa_n$, such that for all $z\in \mathcal{P}_\mu(\lambda)$, with $c(z)\in x, \bar{z}\subset z$, we have $d_{g(x)}\cap z=b_\delta\cap z$. It follows that $d_{g(x)}=b_\delta\cap g(x)$. So the set $T:=\{z\in\mathcal{P}_\mu(\lambda)\mid (\exists \delta<\kappa_n) d_z=b_\delta\cap z\}\supset\{g(x)\mid x\in A''\}$, which is stationary by Claim \ref{g-image is stationary}. For each $\delta$, let $T_\delta:=\{z\in\mathcal{P}_\mu(\lambda)\mid d_z=b_\delta\cap z\}$. Since $T=\bigcup_{\delta<\kappa_n} T_\delta$, for some $\delta$, $T_\delta$ is stationary. This completes the proof of Theorem \ref{two cardinal}.
\end{proof} \section{ISP at the successor of a singular cardinal} In this section we analyze a somewhat stronger principle at the successor of a singular, called ISP. Let us begin with some definitions. \begin{definition} Let $M \prec H_\theta$ for some $\theta$, and suppose $z \subseteq a$ for some $a \in M$, and $\delta \in M$ is a cardinal. We say \emph{$M$ $\delta$-approximates $z$} if for all $x \in M \cap \mathcal{P}_\delta(a)$, we have $x \cap z \in M$. Let $\delta \leq \mu \leq \lambda$ be cardinals with $\mu$ regular. A $\mathcal{P}_\mu(\lambda)$-list $\langle d_x \rangle_{x \in \mathcal{P}_\mu(\lambda)}$ is \emph{$\delta$-slender} if for all cardinals $\theta$ that are sufficiently large, the set \[ \{ M \in \mathcal{P}_\mu(H_\theta) \mid \text{$M$ $\delta$-approximates }d_{M \cap \lambda}\} \] contains a club. We say the principle $\operatorname{ISP}(\delta,\mu,\lambda)$ holds if every $\delta$-slender $\mathcal{P}_\mu(\lambda)$-list has an ineffable branch. \end{definition} This principle is made stronger if $\delta$ is decreased or $\lambda$ is increased. Note that $\operatorname{ISP}(\mu,\lambda)$ as originally defined by Wei{\ss} in \cite{Weiss} is equivalent to our $\operatorname{ISP}(\aleph_1,\mu,\lambda)$. Moreover, $\operatorname{ISP}(\mu,\mu,\lambda)$ for all $\lambda$ implies $\operatorname{ITP}_\mu$. In \cite{viale-weiss}, Viale and Wei{\ss} gave a characterization of $\operatorname{ISP}$ via \emph{guessing models}. A model $M$ is \emph{$\delta$-guessing} if whenever $M$ $\delta$-approximates $x$ with $x$ a subset of some $a \in M$, then $x$ is $M$-guessed, i.e. there is $b\in M$ such that $b\cap a=x$. For more on these objects, see \cite{VialeGuessing}.
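Two immediate remarks may help orient the reader among these definitions (they are routine, and we note them only for convenience). First, any $x \in M$ with $x \subseteq a \in M$ is trivially $M$-guessed, witnessed by $b = x$; the content of the guessing property thus concerns sets outside of $M$. Second, guessing is monotone in $\delta$: if $\delta \leq \delta'$ and $M$ is $\delta$-guessing, then $M$ is $\delta'$-guessing, since
\[
M \cap \mathcal{P}_\delta(a) \subseteq M \cap \mathcal{P}_{\delta'}(a),
\]
so every set that $M$ $\delta'$-approximates is in particular $\delta$-approximated, and hence guessed. This parallels the monotonicity of $\operatorname{ISP}(\delta,\mu,\lambda)$ in $\delta$ noted above.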
Viale and Wei{\ss} showed that if $\operatorname{ISP}(\aleph_1,\mu,|H_\theta|)$ holds, then there are stationarily many $\aleph_1$-guessing models $M \prec H_\theta$ with $|M|<\mu$; and $\operatorname{ISP}(\aleph_1,\mu,\lambda)$ holds for all $\lambda \geq \mu$ if and only if there are stationarily many $\aleph_1$-guessing models of size less than $\mu$ in $H_\theta$ for all large $\theta$. Similarly, we have: \begin{fact}\label{ISPgivesguessers} $\operatorname{ISP}(\delta,\mu,|H_\theta|)$ implies that there are stationarily many $\delta$-guessing models $M\prec H_\theta$ with $|M|<\mu$. \end{fact} Let us now observe a limitation on the extent to which this principle can hold for $\mu$ the successor of a strong limit singular cardinal. In \cite{Sherwood-Dima}, it was shown that the principle $\operatorname{ISP}$ as defined by Wei{\ss} (i.e. $\operatorname{ISP}(\aleph_1,\mu,\lambda)$ for all $\lambda \geq \mu$) cannot hold at the successor or at the double successor of a singular strong limit cardinal. The next theorem generalizes this fact. Note the proof of Fact~\ref{ISPgivesguessers} is embedded in this argument. \begin{theorem} Suppose $\nu$ is a singular strong limit cardinal, and $\mu = \nu^+$. Then $\operatorname{ISP}(\nu,\mu,2^\nu)$ fails. \end{theorem} \begin{proof} Suppose for contradiction $\operatorname{ISP}(\nu,\mu,2^\nu)$ holds; write $\tau = 2^\nu$. We seek a $\nu$-slender $\mathcal{P}_{\mu}(\tau)$-list $d$ with no ineffable branch. Note that $2^\nu=|H_\mu|$; fix a bijection $f:\tau \to H_\mu$. There is a club $C \subseteq \mathcal{P}_{\mu}(H_\mu)$ of models $M$ such that $M \prec H_\mu$, $f \in M$, and $M \cap \mu$ is an ordinal $\alpha$ with $\nu < \alpha < \mu$. Note that no such $M$ can be $\nu$-guessing: Since $\nu$ is strong limit, $2^{<\nu} \subseteq M$, and so every subset of $\nu$ is trivially $\nu$-approximated by $M$; but if $M$ guessed every subset of $\nu$, we would have $\mathcal{P}(\nu) \subseteq M$, contradicting $|M|<\mu\leq 2^\nu$.
So for each $z \in \mathcal{P}_\mu(\tau)$ such that $f"z \in C$, denote $M_z=f"z$ and let $x_z$ be a subset of $\nu$ that is not $M_z$-guessed. Then put $d_z = \{\alpha \in \tau \mid f(\alpha) \in x_z\}$. Since $f \in M_z$, $d_z \subseteq z$, and (any $\mathcal{P}_\mu(\tau)$-list extending) $\langle d_z \rangle_{M_z \in C}$ is clearly $\nu$-slender. Let $b$ be an ineffable branch through $d$. Since by construction $f"d_z \subseteq \nu$ for a club of $z$, we have $f"b \subseteq \nu$. So fix $z$ such that $M_z \in C$, $f"b \in M_z$, and $d_z = b \cap z = b$. But $x_z = f"d_z = f"b \in M_z$ was defined as a subset of $\nu$ that was not $M_z$-guessed. This is a contradiction. \end{proof} It turns out that the above theorem is sharp. In particular, next we show that a modification of the arguments from the previous section yield $\operatorname{ISP}(\mu,\mu,\lambda)$ when $\mu$ is the successor of a limit of supercompacts (note that in this situation, $2^\nu = \mu$). \begin{theorem}\label{ISP} Suppose that $\langle\kappa_n\mid n<\omega\rangle$ is an increasing sequence of supercompact cardinals with limit $\nu$, $\mu=\nu^+$, and $\lambda \geq \mu$. Then $\operatorname{ISP}(\mu,\mu,\lambda)$ holds. \end{theorem} \begin{proof} We are given $d = \langle d_x \mid x \in \mathcal{P}_\mu \lambda \rangle$ a $\mu$-slender list. Let $\theta \gg \lambda$, and fix a $\theta$-supercompactness measure $U_0$ on $\mathcal{P}_{\kappa_0}(\theta)$, with embedding $i=j_{U_0}:V\to M$. In particular, $i(\kappa_0) > \theta$ and $i"\theta \in M$. Let $N^* = \bigcup i" \mathcal{P}_\mu H_\theta$. Then $N^*\in M$ and in $M$, $|N^*| = i(\nu)$ and $N^* \subseteq H_{i(\theta)}$. Let $g:\mathcal{P}_{\kappa_0}(\theta) \to \mathcal{P}_\mu H_\theta$ be such that $[g]_{U_0} = N^*$. As before, we have that for all clubs $C \subseteq \mathcal{P}_\mu H_\theta$ in $V$, $N^* \in i(C)$. In particular, by slenderness of $d$, $M$ satisfies that $N^*$ $i(\mu)$-approximates $id_{N^* \cap i(\lambda)}$. 
Thus for any $z \in \mathcal{P}_\mu(\lambda)$, we have $i(z) \cap id_{N^*\cap i(\lambda)} \in N^*$. It follows that for each $z\in \mathcal{P}_\mu(\lambda)$, there is an elementary substructure $M_z \in \mathcal{P}_\mu(H_\theta)$ such that $z \cup \{z\} \subseteq M_z$, and \[i(z) \cap id_{N^*\cap i(\lambda)} \in i(M_z); \] we may assume $|M_z|=\nu$. For each such $z$, let us enumerate $\mathcal{P}_\mu(z) \cap M_z$ as $\langle \lev{z}{\xi} \mid \xi < \nu \rangle$. Let $c:\mathcal{P}_\mu(\lambda) \to \theta$ be injective. \begin{lemma} There exist $n< \omega$, a stationary $S\subseteq \mathcal{P}_\mu(\lambda)$, and $A \in U_0$ such that for all $z \in S$ and $x \in A$ with $c(z)\in x$, there is some $\xi < \kappa_n$ such that $z \cap d_{g(x) \cap \lambda} = \lev{z}{\xi}$. \end{lemma} \begin{proof} For each $z \in \mathcal{P}_\mu(\lambda)$, there is some $\xi < i(\nu)$ so that $i(z) \cap id_{N^* \cap i(\lambda)} = i\lev{i(z)}{\xi}$; let $n_z$ be least so that $\xi < i(\kappa_{n_z})$. The map $z \mapsto n_z$ is constant on a stationary $S$, say with value $n$. For each $z \in S$, we have by {\L}o\'{s}'s theorem \[ A_z = \{x \in \mathcal{P}_{\kappa_0} (\theta) \mid \exists \xi < \kappa_n \; z \cap d_{g(x) \cap \lambda} = \lev{z}{\xi} \} \in U_0. \] As before we wish to take a diagonal intersection of the $A_z$ over $z \in S$. Recall that we fixed an injective $c:\mathcal{P}_\mu(\lambda) \to \theta$; define $h:\mathcal{P}_{\kappa_0}(\theta) \to \mathcal{P}(\mathcal{P}_\mu(\lambda))$ by \[ h(x) = \{z \in \mathcal{P}_\mu(\lambda) \mid c(z) \in x\}. \] It's not hard to see that $[h] = i"\mathcal{P}_\mu(\lambda)$; and for $U_0$-many $x$, $g(x) \cap \lambda = \bigcup h(x)$. Then let \[ \triangle_{z \in S} A_z = \{x \in \mathcal{P}_{\kappa_0}(\theta) \mid (\forall z \in S)\; c(z) \in x \to x \in A_z\}. \] Then $A = \triangle_{z \in S} A_z$ is as in the statement of the lemma. \end{proof} Next, we reduce the number of potential branches as before.
Let $j=j_{U_{n+1}}:V\to M'$ witness $\theta$-supercompactness of $\kappa_{n+1}$. By elementarity, $M'$ satisfies that $j(S)$ is stationary in $\mathcal{P}_{j(\mu)}(j(\lambda))$, and that for all $z \in j(S)$, if $jc(z) \in x \in j(A)$, then $jd_{jg(x)\cap j(\lambda)} \cap z = j\lev{z}{\xi}$ for some $\xi < j(\kappa_n)=\kappa_n$; in particular, for $z\in S$ we have $j\lev{j(z)}{\xi} = j(\lev{z}{\xi})$. Let $u \in j(S)$ satisfy $u \supseteq \bigcup j" \mathcal{P}_\mu(\lambda)$. Then for any $z \in S$, we have $j(z),u \in j(S)$, and there is some $x \in j(A) \subseteq \mathcal{P}_{\kappa_0}(j(\theta))$ so that $jc(j(z))$ and $jc(u)$ are both in $x$. Since $j(z) \subseteq u$, it follows that for some $\xi,\delta < \kappa_n$, \[ j(\lev{z}{\xi}) = j\lev{j(z)}{\xi} = jd_{jg(x) \cap j(\lambda)} \cap j(z) = \bigl(jd_{jg(x)\cap j(\lambda)} \cap u\bigr) \cap j(z) = j\lev{u}{\delta} \cap j(z). \] Define, for $\delta < \kappa_n$, \[ b_\delta = \bigcup\{\lev{z}{\xi} \mid j(\lev{z}{\xi}) = j\lev{u}{\delta} \cap j(z)\}. \] We have just shown that for every $z \in S$ there are some $\xi,\delta<\kappa_n$ with $b_\delta \cap z = \lev{z}{\xi}$. And more precisely, for every $z\in S$ and $U_0$-a.e.\ $x \in \mathcal{P}_{\kappa_0}(\theta)$, there is $\xi<\kappa_n$ such that $d_{g(x)\cap \lambda} \cap z = \lev{z}{\xi}$. Let $A'_z$ be this measure one set. We again take the diagonal intersection, $A' = \triangle_{z \in S} A'_z$: \[ A' = \{x \in \mathcal{P}_{\kappa_0}(\theta) \mid (\forall z \in S)\; c(z) \in x \to \exists \delta<\kappa_n,\; d_{g(x)\cap\lambda} \cap z = b_\delta \cap z\}. \] Again, by passing to a subset of $\kappa_n$ if necessary, we assume that for all $\delta<\eta<\kappa_n$, $b_\delta\neq b_\eta$. Take $\bar{z}$ ``above all the splitting'', so that $b_\eta \cap \bar{z} \neq b_\delta \cap \bar{z}$ for all $\eta \neq \delta < \kappa_n$. Let \[ T = \{z \in \mathcal{P}_\mu(\lambda) \mid \bar{z} \subseteq z \text{ and for some }\delta<\kappa_n, \; d_z = b_\delta \cap z\}.
\] Now by the above remarks, there are measure one many $x \in A'$ so that $d_{g(x) \cap \lambda} = d_{\bigcup \{z \mid c(z) \in x\}}$. Thus $T \supseteq \{g(x)\cap \lambda \mid x \in A'\}$, and so by (the same argument as in) Claim~\ref{g-image is stationary}, $T$ is a stationary set. Again, there is some $\delta$ so that $T_\delta = \{z \in \mathcal{P}_\mu(\lambda) \mid d_z = b_\delta \cap z\}$ is stationary. This $b_\delta$ is the desired ineffable branch. \end{proof} Combining Fact~\ref{ISPgivesguessers} with the previous two theorems, we obtain the following Corollary, answering Questions 8.2 and 8.3 of Viale \cite{VialeGuessing}. \begin{cor} Suppose $\mu =\nu^+$ with $\nu$ the limit of $\omega$-many supercompacts. Then for all sufficiently large cardinals $\theta$, there are stationarily many $\mu$-guessing models of size $\nu$ in $H_\theta$; and none of these is $\delta$-guessing for any $\delta \leq \nu$. \end{cor} \section{ITP at $\aleph_{\omega+1}$} We next show, assuming the existence of infinitely many supercompacts, that it is consistent for $\aleph_{\omega+1}$ to have the super tree property. We begin by showing it is consistent to have $\operatorname{ITP}(\aleph_{\omega+1},\aleph_{\omega+1})$. The forcing will be almost the same as that used by Neeman \cite{Neeman} to obtain the tree property. We first take the product of Levy collapses to turn $\kappa_{n}$ into $\kappa_0^{+n}$; we then show that there exists some inaccessible $\rho<\kappa_0$ such that collapsing to make $\rho^{+\omega+1}$ become $\aleph_1$, and $\kappa_0$ become $\aleph_3$, forces the tree property at $\aleph_{\omega+1}$. In fact the argument will show there are measure one many (in the normal measure on $\kappa_0$ induced by our supercompactness measure) such $\rho$. Suppose that $\langle\kappa_n\mid n<\omega\rangle$ is an increasing sequence of indestructibly supercompact cardinals, $\nu:=\sup_n\kappa_n$, and $\mu=\nu^+$.
For a successor cardinal $\tau=\delta^+$, where $\delta<\kappa_0$ is a singular cardinal of countable cofinality, let $\mathbb{L}_\tau:=\operatorname{Col}(\omega, \delta)\times \operatorname{Col}(\tau^+, {<}\kappa_0)$. Note that forcing with $\mathbb{L}_\tau$ makes $\tau$ into $\aleph_1$ and $\kappa_0$ into $\aleph_3$ in the extension. \begin{theorem}\label{forcebaby} Let $H=\prod_{n}H_n$ be $\prod_{n<\omega} \operatorname{Col}(\kappa_n, {<}\kappa_{n+1})$-generic over $V$. Then there is a $\tau<\kappa_0$ such that $\tau = \delta^+$ for $\delta$ a strong limit cardinal of cofinality $\omega$, and in the extension of $V[H]$ by $\mathbb{L}_\tau$, $\operatorname{ITP}(\mu, \mu)$ holds. \end{theorem} Work in $V[H]$. Supposing otherwise, we have, for every such $\tau<\kappa_0$, an $\mathbb{L}_\tau$-name $\dot{d}^\tau$ for a thin $\mu$-list forced by $\mathbb{L}_\tau$ to have no ineffable branch. Assume that for all $\alpha<\mu$, $\mathbbm{1}_{\mathbb{L}_\tau}$ forces that the $\alpha$-th level of $\dot{d}^\tau$ is enumerated by the names $\{\flev{\tau}{\alpha}{\xi}\mid \xi<\nu\}$; we may furthermore assume that for sufficiently large $\alpha < \mu$, it is forced that there are no repetitions in the sequence $\langle \dot\sigma^\tau_\alpha(\xi)\mid\xi<\nu\rangle$. By indestructibility, let $U_0$ be a normal measure on $\mathcal{P}_{\kappa_0}(\mu)$ in $V[H]$, and for each $n>0$, let $U_n$ be a normal measure on $\mathcal{P}_{\kappa_n}(\mu)$ in $V$. Let $i=j_{U_0}:V[H] \to M_0$ be the ultrapower embedding; for ease of notation we set $\kappa := \kappa_0$. Recall for $x \in \mathcal{P}_{\kappa}(\mu)$ that $\kappa_x = \sup (x \cap \kappa)$ (which is just $x\cap\kappa$ on a measure one set), and $[x \mapsto \kappa_x]_{U_0} = \kappa$. Therefore $x \mapsto \kappa_x^{+\omega+1}$ represents $\mu = \kappa^{+\omega+1}$ in $M_0$. Let $\mu_x$ denote $\kappa_x^{+\omega+1}$ in what follows.
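For orientation, the cardinal arithmetic behind the choice of $\mathbb{L}_\tau$ in Theorem~\ref{forcebaby} can be spelled out. The following display is only a sketch of what is implicit in the definitions above, using that the collapses in $H$ arrange $\kappa_n=\kappa_0^{+n}$ in $V[H]$:

```latex
\begin{align*}
&\text{In } V[H]\colon && \kappa_n=\kappa_0^{+n},\qquad
  \nu=\sup_n\kappa_n=\kappa_0^{+\omega},\qquad
  \mu=\nu^+=\kappa_0^{+\omega+1};\\
&\text{after } \operatorname{Col}(\omega,\delta)\colon &&
  \delta\text{ is countable, so }\tau=\delta^+\text{ becomes }\aleph_1;\\
&\text{after } \operatorname{Col}(\tau^+,{<}\kappa_0)\colon &&
  \kappa_0=\tau^{++}=\aleph_3,\quad \kappa_n=\aleph_{n+3},\quad
  \nu=\aleph_\omega,\quad \mu=\aleph_{\omega+1}.
\end{align*}
```

In particular, in the extension of $V[H]$ by $\mathbb{L}_\tau$ the cardinal $\mu$ is indeed $\aleph_{\omega+1}$, as the section title requires.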
\begin{lemma}\label{forcelemma1} There exist $n<\omega$, an unbounded $S\subset\mu$, a set $A\in U_0$, and a map $x\mapsto (p_x,q_x)$, such that for all $x\in A$ and $\alpha\in x\cap S$, there is $\xi<\kappa_n$ such that $(p_x,q_x)\Vdash_{\mathbb{L}_{\mu_x}} \dot{d}^{\mu_x}_{\sup x}\cap \alpha=\flev{\mu_x}{\alpha}{\xi}$. \end{lemma} \begin{proof} By the above remarks, $[x\mapsto\mathbb{L}_{\mu_x}]_{U_0}=\operatorname{Col}(\omega, \nu)\times \operatorname{Col}(\mu^+, {<}i(\kappa))=\mathbb{L}_\mu$. Now $i\dot{d}^\mu_{\sup i"\mu}=[x\mapsto\dot{d}^{\mu_x}_{\sup x}]_{U_0}$. For all $\alpha<\mu$, there are some $n_\alpha<\omega$, $\xi<i(\kappa_{n_\alpha})$, and $(p_\alpha,q_\alpha)\in\mathbb{L}_\mu$ such that $(p_\alpha,q_\alpha)\Vdash i\dot{d}^\mu_{\sup i"\mu}\cap i(\alpha)=i\flev{\mu}{i(\alpha)}{\xi}$. Choosing the $q_\alpha$ inductively, we arrange that $\langle q_\alpha\mid \alpha<\mu\rangle$ is decreasing. Note that the $p_\alpha \in \operatorname{Col}(\omega,\nu)$ are finite conditions; then there are an unbounded $S\subset \mu$ and fixed $n$ and $p$ such that for all $\alpha\in S$, $p_\alpha=p$ and $n_\alpha=n$. By $\mu$-closure of $\operatorname{Col}(\mu^+,{<}i(\kappa))$, we can take $q$ to be a common strengthening of the $q_\alpha$. Let $[x\mapsto p_x]_{U_0}=p$ and $[x\mapsto q_x]_{U_0}=q$. Then for all $\alpha\in S$, there is a measure one set $A_\alpha\in U_0$ such that for all $x\in A_\alpha$, there is $\xi<\kappa_n$ such that $(p_x,q_x)\Vdash_{\mathbb{L}_{\mu_x}} \dot{d}^{\mu_x}_{\sup x}\cap \alpha=\flev{\mu_x}{\alpha}{\xi}$. Set $A:=\triangle_{\alpha\in S}A_\alpha$. Note that it follows that, for all $\alpha<\beta$ both in $S$, there are $\xi,\eta<\kappa_n$, $\tau<\kappa_0$ and $(p,q)\in \mathbb{L}_\tau$ such that $(p,q)\Vdash \flev{\tau}{\alpha}{\xi}=\flev{\tau}{\beta}{\eta}\cap\alpha$; this will be witnessed by any $x \in A$ with $\alpha,\beta \in x$. Then $S, A$ are as desired. \end{proof} Fix $n, S,A, x\mapsto (p_x, q_x)$ as in the conclusion of the above lemma.
Much as in \cite{MagidorShelah} and later \cite{Neeman}, we require the notion of a \emph{system}. \begin{definition} Let $D \subseteq \operatorname{Ord}$, $\rho \in \operatorname{Ord}$, and $I$ be an index set. A \emph{system on $D \times \rho$} is a family $\langle R_s \rangle_{s \in I}$ of transitive, reflexive relations on $D \times \rho$, so that \begin{enumerate} \item If $( \alpha, \xi ) R_s ( \beta, \zeta)$ and $( \alpha, \xi ) \neq ( \beta, \zeta)$ then $\alpha < \beta$. \item If $( \alpha_0, \xi_0)$ and $( \alpha_1, \xi_1)$ are both $R_s$-below $( \beta, \zeta)$, then $( \alpha_0,\xi_0)$ and $(\alpha_1,\xi_1)$ are comparable in $R_s$. \item For every $\alpha < \beta$ both in $D$, there are $s \in I$ and $\xi,\zeta \in \rho$ so that $( \alpha, \xi ) R_s ( \beta, \zeta )$. \end{enumerate} A \emph{branch} through $R_s$ is a subset of $D \times \rho$ that is linearly ordered by $R_s$ and downwards $R_s$-closed (in particular, a branch is a partial function $b:D \rightharpoonup \rho$). A \emph{system of branches through $\langle R_s \rangle_{s \in I}$} is a family $\langle b_\eta \rangle_{\eta \in J}$ so that each $b_\eta$ is a branch through some $R_{s(\eta)}$, and $D = \bigcup_{\eta \in J} \mathop{\mathrm{dom}}\nolimits(b_\eta)$. \end{definition} As before, branches in a system need not be cofinal; however, note that now a branch $b_\eta$ through $R_s$ is cofinal iff $\mathop{\mathrm{dom}}\nolimits(b_\eta)$ is cofinal in $D$. Let $I=\{(\tau, p,q)\mid \tau<\kappa$, $\tau = \delta^+$ for some singular strong limit $\delta$ of countable cofinality, and $(p,q)\in\mathbb{L}_\tau\}$. Restricting to a final segment if necessary, we can assume for all $\alpha \in S$ that it is forced by $\mathbbm{1}_{\mathbb{L}_\tau}$ that $\dot\sigma^\tau_\alpha(\xi) \neq \dot\sigma^\tau_\alpha(\eta)$ whenever $\xi \neq \eta$ are in $\kappa_n$.
For all $s=(\tau,p,q)\in I$, define the relation $R_s$ on $S\times\kappa_n$ by $(\alpha, \xi)R_s(\beta, \eta)$ iff $\alpha \leq \beta$ and $(p,q)\Vdash_{\mathbb{L}_\tau}\flev{\tau}{\alpha}{\xi}=\flev{\tau}{\beta}{\eta}\cap\alpha$. \begin{prop} $\langle R_s \rangle_{s \in I}$ is a system on $S\times \kappa_n$. \end{prop} \begin{proof} That (1) and (2) hold is immediate by definition and the preceding paragraph, and the above lemma gives (3). \end{proof} \begin{lemma}\label{forcelemma2} There exists, in $V[H]$, an unbounded $S' \subseteq S$ and a system of branches $\langle b_{s,\delta}\mid s\in I,\delta<\kappa_n\rangle$ through $\langle R_s \upharpoonright S' \times \kappa_n \rangle_{s \in I}$ such that each $b_{s,\delta}$ is a branch through $R_s \upharpoonright S' \times \kappa_n$. \end{lemma} \begin{proof} Let $j=j_{U_{n+2}}:V\rightarrow M$. Working in a forcing extension $V[H][H^*]$ of $V[H]$, where $H^*$ is generic for $\operatorname{Col}(\kappa_{n+1},j(\kappa_{n+3}))^V$, we may extend the embedding $j$ and regard it as a map $j:V[H]\rightarrow M^*$. This poset is $\kappa_{n+1}$-closed in $V[\prod_{m \geq n+1} H_m]$, and $V[H]$ is a $\kappa_{n+1}$-c.c.\ extension of this model; in particular, $V[H]$ satisfies hypothesis (2) of the branch preservation lemma, Lemma 3.3 of \cite{Neeman}. Also, note that the poset to add the embedding is ${<}\kappa_{n+1}$-distributive in $V[H]$. Let $\gamma\in j(S)\setminus\sup j"\mu$. Since $\kappa_n<\operatorname{crit}(j)$, by elementarity applied to Lemma~\ref{forcelemma1}, we have for all $\alpha\in S$ that if $x \in j(A)$ is such that $j(\alpha),\gamma \in x$, then there exist $\xi, \delta<\kappa_n$ and $s=(\mu_x, p_x,q_x)\in j(I)=I$ such that \[ (p_x,q_x)\Vdash j(\flev{\mu_x}{\alpha}{\xi})=j\flev{\mu_x}{j(\alpha)}{\xi}=j\flev{\mu_x}{\gamma}{\delta}\cap j(\alpha).
\] For each $\delta<\kappa_n$ and $s=(\tau, p,q)\in I$, let \[ b_{s,\delta}=\{(\alpha,\xi)\mid \alpha\in S,\xi<\kappa_n, (p,q)\Vdash_{\mathbb{L}_\tau} j(\flev{\tau}{\alpha}{\xi})=j\flev{\tau}{\gamma}{\delta}\cap j(\alpha)\}. \] We have that $\langle b_{s, \delta}\mid s\in I, \delta<\kappa_n\rangle$ is a system of branches through $\langle R_s \rangle_{s \in I}$: Each is clearly linearly ordered and downward closed; and we have just shown that any $x \in j(A)$ with $\gamma,j(\alpha) \in x$ witnesses $\alpha \in \mathop{\mathrm{dom}}\nolimits{b_{s,\delta}}$ for some $\delta < \kappa_n$, so that $\bigcup \mathop{\mathrm{dom}}\nolimits{b_{s,\delta}} = S$. This system may not belong to $V[H]$, but we now satisfy precisely hypothesis (1) of the branch preservation lemma, Lemma 3.3, of \cite{Neeman}. So there is some $(s,\delta) \in I \times \kappa_n$ so that $b_{s,\delta}$ is cofinal and belongs to $V[H]$. Let $\mathcal{D} = \{(s, \delta) \mid b_{s,\delta} \in V[H]\}$. By ${<}\kappa_{n+1}$-distributivity, $\mathcal{D} \in V[H]$, and we have just shown that $\mathcal{D}$ contains at least one pair $(s,\delta)$ corresponding to a cofinal branch. Again, by distributivity, $\langle b_{(s,\delta)}\mid (s,\delta)\in\mathcal{D}\rangle$ is in $V[H]$. So the set $S' := \bigcup_{(s,\delta) \in \mathcal{D}} \mathop{\mathrm{dom}}\nolimits(b_{s,\delta})$ is unbounded in $\mu$, and we have that $\langle b_{s,\delta} \rangle_{(s,\delta)\in\mathcal{D}}$ is a system of branches through $\langle R_s \upharpoonright S' \times \kappa_n \rangle_{s \in I}$. Also, by passing to a subset of $I\times \kappa_n$ if necessary (any such will be in $V[H]$ by distributivity), we may assume that for all $s\in I$ and $\eta<\delta<\kappa_n$, if $b_{s,\eta}$ and $b_{s,\delta}$ are both cofinal, then they are distinct. 
This is done by simply removing duplicates (which may exist if the splitting between $j\dot{\sigma}^\tau_\gamma(\eta)$ and $j\dot{\sigma}^\tau_\gamma(\delta)$ is forced to be above $\sup j"\mu$). \end{proof} Note that if $s' = (\tau,p',q')$ and $s = (\tau,p,q)$ are such that $(p',q')\leq (p,q)$, then $R_{s'} \supseteq R_s$ and $b_{s',\delta} \supseteq b_{s,\delta}$ for all $\delta$; similarly, if $b_{s,\delta}$ is cofinal in $R_s$, then $(s,\delta) \in \mathcal{D}$ implies that $(s',\delta) \in \mathcal{D}$. For each $s=(\tau,p,q)$ and $\delta$ such that $(s,\delta) \in \mathcal{D}$, let us fix an $\mathbb{L}_\tau$-name $\dot{\pi}_{s,\delta} = \bigcup\{ \flev{\tau}{\alpha}{\xi} \mid (\alpha,\xi) \in b_{s,\delta}\}$. Note if $b_{s,\delta}$ is a cofinal branch, then $\dot{\pi}_{s,\delta}$ is forced by $(p,q)$ to name a cofinal branch through $\dot{d}^\tau$. Our next goal is to show that these ground model branches are enough for us to repeat the final argument of Theorem~\ref{baby}. What we need is a strengthened version of Lemma~\ref{forcelemma1} for those branches from $\mathcal{D}$. First, we bound the splitting for all branches (not just those in $V[H]$). Working in $V[H][H^*]$ from the proof of Lemma~\ref{forcelemma2}, for each $\eta<\delta<\kappa_n$ and $s \in I$, let $\alpha_{s,\eta,\delta}$ be the least $\alpha$ so that $b_{s,\eta}(\alpha)$ and $b_{s,\delta}(\alpha)$ are (both defined and) not equal, if such exists; otherwise, let $\alpha_{s,\eta,\delta} = \sup\bigl(\mathop{\mathrm{dom}}\nolimits(b_{s,\eta}) \cup \mathop{\mathrm{dom}}\nolimits(b_{s,\delta})\bigr)$. Let $\bar{\alpha} = \bigl(\sup_{s \in I, \eta<\delta<\kappa_n} \alpha_{s,\eta,\delta}\bigr) + 1$. For $x \in A$ let $(p_x,q_x) \in \mathbb{L}_{\mu_x}$ be as in the conclusion of Lemma~\ref{forcelemma1}; and set $s_x = (\mu_x,p_x,q_x)$.
\begin{lemma}\label{forcelemma3} There exist an unbounded $\bar{S} \subseteq S'$ and $\bar{A} \in U_0$ with $\bar{A} \subseteq A$, so that for all $x \in \bar{A}$, for all $\alpha \in \bar{S} \cap x$ we have \begin{equation*}\label{dagger} \tag{$\dagger_{x,\alpha}$} \text{for some } \delta < \kappa_n, \;(s_x,\delta) \in \mathcal{D}\text{ and } (p_x,q_x) \Vdash_{\mathbb{L}_{\mu_x}} \dot{d}^{\mu_x}_{\sup x} \cap \alpha = \dot{\pi}_{s_x,\delta} \cap \alpha. \end{equation*} \end{lemma} First let us see how to finish the proof of Theorem~\ref{forcebaby} assuming the lemma. By our choice of $\bar{\alpha}$, any names $\dot{\pi}_{s,\delta}$ and $\dot{\pi}_{s,\eta}$ corresponding to cofinal branches of $V[H]$ are, by elementarity and the definition of these names, forced outright to disagree below $\bar{\alpha}$. Suppose we have $x \in \bar{A}$ with $x\cap\bar{S}$ unbounded in $\sup x$, $\bar{\alpha} < \sup x$, and let $G_x$ be generic for $\mathbb{L}_{\mu_x}$ with $(p_x,q_x) \in G_x$. By our definition of $\bar{\alpha}$, there exists a strengthening $(p'_x,q'_x) \in G_x$ of $(p_x,q_x)$ that forces $\dot{\pi}_{s,\delta} \cap \alpha \neq \dot{\pi}_{s,\eta} \cap \alpha$ for all $\alpha > \bar{\alpha}$ and distinct $\eta,\delta$ such that $(s,\eta),(s,\delta) \in \mathcal{D}$ represent cofinal branches; and $\alpha$ is above the domains of all bounded $b_{s,\delta}$'s. Now for any $\alpha \in x \cap \bar{S}$ with $\alpha>\bar{\alpha}$, we have some $\delta < \kappa_n$ so that $(p_x,q_x) \Vdash \dot{d}^{\mu_x}_{\sup x} \cap \alpha = \dot{\pi}_{s,\delta} \cap \alpha = \dot{\pi}_{s',\delta} \cap \alpha$, where $s = s_x$ and $s' = (\mu_x,p'_x,q'_x)$. Since we are above the splitting, these must be the same branch, and so without loss of generality this $\delta$ must be the same for each $\alpha \in x \cap \bar{S}$. It follows that $(p_x,q_x) \Vdash \dot{d}^{\mu_x}_{\sup x} = \dot{\pi}_{s,\delta} \cap \sup x$.
Letting, for $(s,\delta) \in \mathcal{D}$ with $s = (\tau,p,q)$, \[ T_{s,\delta} = \{\gamma \in \mu \mid (p,q) \Vdash_{\mathbb{L}_\tau} \dot{d}^{\tau}_\gamma = \dot{\pi}_{s,\delta} \cap \gamma \}, \] what we have shown is that $T := \bigcup_{(s,\delta) \in \mathcal{D}} T_{s,\delta} \supseteq \{ \sup x \mid x \in \bar{A}, \bar{\alpha}<\sup x, x\cap\bar{S}\text{ unbounded in }\sup x \}$. So $T$ is stationary; since $|\mathcal{D}| \leq \kappa < \mu$, there is some fixed $(s,\delta)$ so that $T_{s,\delta}$ is stationary. But since $\mathbb{L}_{\tau}$ preserves stationarity of subsets of $\mu$, we have that $b_{s,\delta}$ defines an ineffable branch through $\dot{d}^\tau$ in any extension containing $(p,q)$, a contradiction. \begin{proof}[Proof of Lemma~\ref{forcelemma3}] It is sufficient to show that if we set $A_\alpha = \{x \in A \mid \text{(\ref{dagger}) holds}\}$, then $\bar{S} = \{\alpha \in S' \mid A_\alpha \in U_0\}$ is unbounded in $\mu$; since in that case, $\bar{A} = A \cap \triangle_{\alpha \in \bar{S}} A_\alpha$ is as desired. So suppose $\bar{S}$ is bounded. Fix $\alpha_0 < \mu$ so that $\bar{\alpha} < \alpha_0$, and $A_\alpha \notin U_0$ whenever $\alpha > \alpha_0$ is in $S'$. Put $A' := A \cap \triangle_{\alpha_0<\alpha \in S'} \bigl(\mathcal{P}_{\kappa_0}(\mu) \setminus A_\alpha\bigr)$; so $A' \in U_0$, and (\ref{dagger}) fails whenever $\alpha \in x \in A'$ with $\alpha \in S'$ and $\alpha > \alpha_0$. We wish to show that if $R'_{s}$ is obtained by deleting all ground model branches $b_{s,\delta}$ from $R_s$, then the resulting family $\langle R'_{s} \rangle_{s \in I}$ is a system on $(S' \setminus \alpha_0) \times \kappa_n$. That is, for each $s$, let \begin{align*} (\alpha,\xi) \mathbin{R'_s} (\beta,\zeta) \iff &\alpha_0 < \alpha,\; \alpha,\beta \in S', \;(\alpha,\xi) \mathbin{R_s} (\beta,\zeta), \text{ and for all }\delta < \kappa_n,\\ &\text{ if }(s,\delta) \in \mathcal{D} \text{ then }(\alpha,\xi) \notin b_{s,\delta}. \end{align*} The first two properties of a system are clear.
For (3), suppose $\alpha_0 < \alpha < \beta$ with $\alpha,\beta$ both in $S'$. By elementarity, we have some $x' \in j(A')$ so that $j(\alpha),j(\beta),\gamma \in x'$, where here $\gamma$ is the element of $j(S)\setminus \sup j"\mu$ we used to define the system of branches in Lemma~\ref{forcelemma2}. Now by Lemma \ref{forcelemma1}, and since $j(\alpha),j(\beta)\in j(S)$ as well, we have some $\xi,\zeta,\delta < \kappa_n$ so that \[ (p_{x'},q_{x'}) \Vdash j\flev{\mu_{x'}}{j(\alpha)}{\xi} = j\flev{\mu_{x'}}{\gamma}{\delta} \cap j(\alpha), \quad j\flev{\mu_{x'}}{j(\beta)}{\zeta} = j\flev{\mu_{x'}}{\gamma}{\delta} \cap j(\beta), \] and moreover, each of these nodes is forced by $(p_{x'},q_{x'})$ to cohere with $j\dot{d}^{\mu_{x'}}_{\sup x'}$. Now $x' \notin j" V[H]$, but by elementarity we have some $x \in A'$ so that $s_x = s_{x'}$ with $\alpha,\beta \in x$. In particular, we have $(\alpha,\xi) \mathbin{R_{s_x}} (\beta,\zeta)$; indeed, $(\alpha,\xi)$ and $(\beta,\zeta)$ are both in $b_{s_x,\delta}$. Since we are above the splitting, we have for any $\delta'$ with $(\alpha,\xi) \in b_{s_x,\delta'}$ that this branch coincides with $b_{s_x,\delta}$. So to obtain condition (3), we just need to show $(s_x,\delta) \notin \mathcal{D}$. Suppose $(s_x,\delta) \in \mathcal{D}$. Since $\alpha \in x\cap S'$ with $x \in A'$, (\ref{dagger}) fails. Then we have \[ (p_x,q_x) \not\Vdash \dot{d}^{\mu_x}_{\sup x} \cap \alpha = \dot{\pi}_{s_x,\delta} \cap \alpha. \] But by our choice of $x$, \[ (p_x,q_x) \Vdash \dot{d}^{\mu_x}_{\sup x} \cap \alpha = \flev{\mu_x}{\alpha}{\xi}. \] But these conditions and our definition of $\dot{\pi}_{s_x,\delta}$ contradict $(\alpha,\xi) \in b_{s_x,\delta}$. Now that $\langle R'_s \rangle_{s \in I}$ is a system, for each $(s,\delta) \notin \mathcal{D}$ let $b'_{s,\delta}$ be the restriction of $b_{s,\delta}$ to $R'_s$. Then $\langle b'_{s,\delta}\mid (s,\delta)\notin \mathcal{D}\rangle$ is a system of branches through the system $\langle R_s'\mid s\in I\rangle$.
Now recapitulating the argument of Lemma~\ref{forcelemma2}, we see that there exist some $s$ and $\delta$ so that $b'_{s,\delta}$ is cofinal and belongs to $V[H]$. Since $b_{s,\delta}$ can be recovered from any cofinal subset, we must have $(s,\delta) \in \mathcal{D}$. But this contradicts our definition of the system $\langle R'_s \rangle_{s \in I}$. This contradiction completes the proof. \end{proof} Next we prove the full, two cardinal version. We make a slight modification: For $\tau<\kappa=\kappa_0$ of countable cofinality, define $\mathbb{L}_\tau=\operatorname{Col}(\omega, \tau)\times \operatorname{Col}(\tau^{+3}, {<}\kappa)$. \begin{theorem}\label{force two cardinal} Let $H=\prod_{n}H_n$ be $\prod_{n<\omega} \operatorname{Col}(\kappa_n, {<}\kappa_{n+1})$-generic over $V$. Then there exists $\tau<\kappa$ with countable cofinality such that in the extension of $V[H]$ by $\mathbb{L}_\tau$, $\mu = \aleph_{\omega+1}$ and $\operatorname{ITP}_\mu$ holds. \end{theorem} \begin{proof} Suppose otherwise. Then for every $\tau<\kappa$, there is some $\lambda$ such that $\operatorname{ITP}(\mu,\lambda)$ fails in the extension of $V[H]$ by $\mathbb{L}_\tau$. By taking a supremum, assume that $\lambda$ is the same for all $\tau$ and that $\lambda^{\mu}=\lambda$. So for each $\tau$, let $\dot{d}^\tau$ be a name for a thin $\mathcal{P}_\mu(\lambda)$-list which is forced by $1_{\mathbb{L}_\tau}$ not to have an ineffable branch. Let $K$ be $\operatorname{Col}^{V[H]}(\mu, {<}\lambda)$-generic over $V[H]$. \begin{theorem}\label{muplus} There is $\tau<\kappa$ such that in the extension of $V[H][K]$ by $\mathbb{L}_\tau$, $d^\tau$ has an ineffable branch. \end{theorem} Assuming this, let us show that this is enough to prove Theorem \ref{force two cardinal}. Let $b$ be an ineffable branch for $d^\tau$ in $V[H][K][\mathbb{L}_\tau]$. Since $\operatorname{Col}(\mu, {<}\lambda)$ is $\mu$-closed in $V[H]$, and $\mathbb{L}_\tau$ is $\kappa$-c.c.
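The reduction in the proof above to a single $\lambda$ with $\lambda^\mu=\lambda$ is routine; the following is only a sketch of the computation, using the standard monotonicity fact that $\operatorname{ITP}(\mu,\lambda)$ implies $\operatorname{ITP}(\mu,\lambda')$ for $\lambda'\leq\lambda$ (so failure is preserved upward in the second coordinate):

```latex
% Given lambda_tau witnessing failure for each tau < kappa, set
\[
\lambda:=\Big(\sup_{\tau<\kappa}\lambda_\tau\Big)^{\mu}.
\]
% Then lambda >= lambda_tau for every tau, so ITP(mu,lambda) fails in each
% extension by L_tau; and lambda^mu = lambda, since
\[
\lambda^{\mu}
=\Big(\big(\sup_{\tau<\kappa}\lambda_\tau\big)^{\mu}\Big)^{\mu}
=\big(\sup_{\tau<\kappa}\lambda_\tau\big)^{\mu\cdot\mu}
=\big(\sup_{\tau<\kappa}\lambda_\tau\big)^{\mu}
=\lambda.
\]
```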
(actually we only need $\mu$-c.c.), we have that in $V[H]$, $1_{\mathbb{L}_\tau}$ forces that $\operatorname{Col}(\mu, {<}\lambda)$ has the $\mu$-thin approximation property. Therefore, $b\in V[H][\mathbb{L}_\tau]$. Since stationarity is downwards absolute, $b$ is an ineffable branch for $d^\tau$ in $V[H][\mathbb{L}_\tau]$. But this contradicts the choice of $\dot{d}^\tau$, and so the result follows. \end{proof} Now for the proof of Theorem \ref{muplus}, first note that in $V[H][K]$, $\kappa$ is still supercompact, each $\kappa_n$, $n>0$, is generically supercompact for the right type of quotient, and $\lambda$ is $\mu^+$. So the argument will be similar to that of Theorem \ref{forcebaby}, except that here we work in $V[H][K]$ instead of $V[H]$, and consider $\mathcal{P}_{\mu}(\lambda)=\mathcal{P}_\mu(\mu^+)$-lists and a $\lambda=\mu^+$-supercompactness embedding. We outline the proof, skipping some of the details. \begin{proof}[Proof of Theorem \ref{muplus}] Let $U_0$ be a normal measure on $\mathcal{P}_\kappa(\lambda)$ in $V[H][K]$, and set $i=j_{U_0}:V[H][K]\rightarrow M$ to be the corresponding $\lambda$-supercompactness embedding with critical point $\kappa$. For each $n>0$, let $U_n$ be a normal measure on $\mathcal{P}_{\kappa_n}(\lambda)$ in $V$. For each $x\in\mathcal{P}_\mu(\lambda)$, set $\tau_x=\kappa_x^{+\omega}$. Then $[x\mapsto \mathbb{L}_{\tau_x}]_{U_0}=\operatorname{Col}(\omega, \nu)\times \operatorname{Col}(\mu^{++}, {<}i(\kappa))=\mathbb{L}_\nu$ (of course here $\mu^{++}=\kappa^{+\omega+3}=\lambda^+$, both in $V[H][K]$ and in the ultrapower). Note that $\mathcal{P}^{V[H]}_\mu(\lambda)=\mathcal{P}^{V[H][K]}_\mu(\lambda)$, so we just denote it by $\mathcal{P}_\mu(\lambda)$. As before, for each $\tau$ and each $z\in \mathcal{P}_\mu(\lambda)$, assume that $1_{\mathbb{L}_{\tau}}$ forces that the $z$-th level of $\dot{d}^{\tau}$ is enumerated by the names $\{\flev{\tau}{z}{\xi}\mid \xi<\nu\}$.
As in Theorem \ref{two cardinal}, fix a bijection $c:\mathcal{P}_\mu(\lambda)\rightarrow\lambda$, and set $z^*:=\bigcup i"\mathcal{P}_\mu(\lambda)=[g]_{U_0}$. Then we have: \begin{itemize} \item If $A\in U_0$, then $\bar{A}:=\{g(x)\mid x\in A\}$ is stationary in $\mathcal{P}_\mu(\lambda)$. \item $i\dot{d}^\nu_{z^*}=[x\mapsto \dot{d}^{\tau_x}_{g(x)}]_{U_0}$. \item We may assume that for all $x$, $g(x)=\bigcup\{z\mid c(z)\in x\}$. \end{itemize} \begin{lemma}\label{lemma1muplus} There exist $n<\omega$, a cofinal $S\subset\mathcal{P}_\mu(\lambda)$, a set $A\in U_0$, and a map $x\mapsto(p_x,q_x)$ with $(p_x,q_x)\in \mathbb{L}_{\tau_x}$, such that for all $x\in A$ and $z\in S$ with $c(z)\in x$, there is $\xi<\kappa_n$ such that $(p_x,q_x)\Vdash \dot{d}^{\tau_x}_{g(x)}\cap z=\flev{\tau_x}{z}{\xi}$. \end{lemma} \begin{proof} The proof is analogous to that of Lemma \ref{forcelemma1}. Here for every $z\in\mathcal{P}_\mu(\lambda)$, we find conditions $(p^z, q^z)$ forcing over $\mathbb{L}_\nu$ that $i\dot{d}^\nu_{z^*}\cap i(z)=i\flev{\nu}{i(z)}{\xi}$ for some $\xi<i(\nu)$. Since the closure of the second factor is $\lambda^+$, we can define the $q^z$'s inductively to be decreasing according to some enumeration and take $q\in \operatorname{Col}(\mu^{++}, {<}i(\kappa))$ to be a lower bound. Then find a cofinal $S\subset\mathcal{P}_\mu(\lambda)$, $n<\omega$, and $p\in\operatorname{Col}(\omega, \nu)$ such that for all $z\in S$, the witnessing $\xi$ is less than $i(\kappa_n)$ and $p^z=p$. Let $p=[x\mapsto p_x]$, $q=[x\mapsto q_x]$, and for each $z\in S$, get a measure one set $A_z$ witnessing $(p, q)\Vdash_{\mathbb{L}_\nu}i\dot{d}^\nu_{z^*}\cap i(z)=i\flev{\nu}{i(z)}{\xi}$ for some $\xi<i(\kappa_n)$. Take $A:=\triangle_{z\in S}A_z$. Then $n$, $S$, $A$, and $x\mapsto(p_x,q_x)$ are as desired. \end{proof} Let $I=\{(\tau, p,q)\mid \tau<\kappa,(p,q)\in\mathbb{L}_\tau\}$. For all $s=(\tau,p,q)\in I$, define the relation $R_s$ on $S\times\kappa_n$ by $(z, \xi)R_s(z', \eta)$ iff $z \subset z'$ and $(p,q)\Vdash_{\mathbb{L}_\tau}\flev{\tau}{z}{\xi}=\flev{\tau}{z'}{\eta}\cap z$.
Then, $\langle R_s \rangle_{s \in I}$ is a system on $S\times \kappa_n$. Here the definition of system uses $\subset$ instead of $\leq$. More precisely: \begin{definition} Let $D$ be a cofinal subset of $\mathcal{P}_\mu(\lambda)$, $\rho \in \operatorname{Ord}$, and $I$ be an index set. A \emph{system on $D \times \rho$} is a family $\langle R_s \rangle_{s \in I}$ of transitive, reflexive relations on $D \times \rho$, so that \begin{enumerate} \item If $( x, \xi ) R_s ( y, \zeta)$ and $( x, \xi ) \neq ( y, \zeta)$ then $x\subsetneq y$. \item If $( x_0, \xi_0)$ and $( x_1, \xi_1)$ are both $R_s$-below $( y, \zeta)$ and $x_0\subset x_1$, then $( x_0,\xi_0) R_s(x_1,\xi_1)$. \item For every $x,y$ both in $D$, there are $z\in D$, $s \in I$ and $\xi,\xi',\zeta \in \rho$ so that $x\cup y\subset z$, $(x, \xi ) R_s (z, \zeta )$ and $(y,\xi')R_s(z,\zeta)$. \end{enumerate} A \emph{branch} through $R_s$ is a partial function $b:D \rightharpoonup \rho$, such that \begin{enumerate} \item if $x\subset y$ are both in $\mathop{\mathrm{dom}}\nolimits(b)$, then $(x, b(x))R_s(y, b(y))$; \item if $y\in \mathop{\mathrm{dom}}\nolimits(b)$, and $(x,\xi) R_s (y,b(y))$, then $(x,\xi)\in b$, i.e.\ $b$ is downwards $R_s$-closed. \end{enumerate} A \emph{system of branches} through $\langle R_s \rangle_{s \in I}$ is a family $\langle b_\eta \rangle_{\eta \in J}$ so that each $b_\eta$ is a branch through some $R_{s(\eta)}$, and $D = \bigcup_{\eta \in J} \mathop{\mathrm{dom}}\nolimits(b_\eta)$. \end{definition} Next we show the branch preservation lemma we will use. The proof closely follows Lemma 3.3 of \cite{Neeman}, adapted to the two cardinal version. We include it for completeness. The key fact is that if a forcing adds a branch, then this branch is thinly $\mu$-approximated. \begin{lemma}\label{branch} Let $V\subset W$ be models of set theory such that $W$ is a $\tau$-c.c.\ forcing extension of $V$, and let $\mathbb{Q}\in V$ be $\tau$-closed in $V$.
In $W$ suppose $\langle R_s \rangle_{s \in I}$ is a system on $D\times \rho$, for some cofinal $D\subset\mathcal{P}_\mu(\lambda)$, such that forcing with $\mathbb{Q}$ over $W$ adds a system of branches $\langle b_j \rangle_{j\in J}$ through this system. Finally suppose $\chi:=\max(|J|, |I|, \rho)^+<\tau<\mu$. Then there is a cofinal branch $b_j\in W$. \end{lemma} \begin{proof} Suppose otherwise; say $1_{\mathbb{Q}}$ forces that each $\dot{b}_j\notin W$, where $\dot{b}_j$ is a $\mathbb{Q}$-name in $W$ for the $j$-th branch. Let $G=\langle G_\xi\mid \xi<\chi\rangle$ be $\mathbb{Q}^{\chi}$-generic over $W$. Here $\mathbb{Q}^{\chi}$ is the full-support $\chi$-th power of $\mathbb{Q}$. For every $\xi<\chi, j\in J$, let $b^\xi_j=\dot{b}_j[G_\xi]$. First note that since $\mathbb{Q}^\chi$ is $\tau$-closed in $V$, it must be ${<}\tau$-distributive in $W$. Working in $W[G]$, for each non-cofinal branch $b^\xi_j$, there is $z^\xi_j\in\mathcal{P}^W_\mu(\lambda)$ such that for all $z\supset z^\xi_j$, $z\notin\mathop{\mathrm{dom}}\nolimits(b^\xi_j)$. Let $z_0$ be their union. By distributivity, $z_0\in \mathcal{P}^W_\mu(\lambda)$. Similarly, we can find $z_1\supset z_0$ in $\mathcal{P}^W_\mu(\lambda)$ such that for all cofinal $b_j^\xi, b_j^\eta$ with $\xi<\eta$ that are branches through the same relation, for all $z\supset z_1$, $b_j^\xi(z)\neq b_j^\eta(z)$ (possibly because one of them is not defined). This splitting follows by mutual genericity. Let $z\in \mathcal{P}^W_\mu(\lambda)$ with $z_1\subset z$ be such that $z\in D$. Since we have a system of branches, for every $\xi<\chi$, there is $j_\xi\in J$ such that $z\in \mathop{\mathrm{dom}}\nolimits(b^\xi_{j_\xi})$ and $b^\xi_{j_\xi}$ is a branch through $R_{s_\xi}$. Let $\alpha_\xi=b^\xi_{j_\xi}(z)$. Then $\xi\mapsto (j_\xi, s_\xi, \alpha_\xi)$ is a map from $\chi$ to $J\times I\times\rho$, and $|J|,|I|,\rho<\chi$. Since $\chi$ is regular, there are $\xi<\eta<\chi$ and $j,s,\alpha$ such that $j=j_\xi=j_\eta$, $s=s_\xi=s_\eta$, and $\alpha=\alpha_\xi=\alpha_\eta$.
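The counting in this last step can be made explicit; assuming $J$, $I$, and $\rho$ are infinite (the finite cases are trivial), the point is just the regularity of $\chi$:

```latex
\[
|J\times I\times\rho|=\max(|J|,|I|,\rho)<\max(|J|,|I|,\rho)^+=\chi,
\]
% and chi, being a successor cardinal, is regular; so the map
\[
\xi\longmapsto (j_\xi,s_\xi,\alpha_\xi)\colon \chi\to J\times I\times\rho
\]
% takes some value chi-many times, and in particular repeats a triple.
```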
But then $b^\xi_j(z)=b^\eta_j(z)$, and they are both branches through $R_s$. Contradiction. \end{proof} \begin{lemma}\label{lemma2muplus} There exists, in $V[H][K]$, a cofinal $S' \subseteq S$ and a system of branches $\langle b_{s,\delta}\mid s\in I,\delta<\kappa_n\rangle$ through $\langle R_s \upharpoonright S' \times \kappa_n \rangle_{s \in I}$ such that each $b_{s,\delta}$ is a branch through $R_s \upharpoonright S' \times \kappa_n$. \end{lemma} \begin{proof} Let $j=j_{U_{n+2}}:V\rightarrow M$ be the $\lambda$-supercompactness embedding obtained from $U_{n+2}$. We lift $j$ to $j:V[H][K]\rightarrow M^*$ in a forcing extension $V[H][K][H^*]$ of $V[H][K]$, where $H^*$ is generic for $\operatorname{Col}(\kappa_{n+1},j(\kappa_{n+3}))^V$. This poset is ${<}\kappa_{n+1}$-distributive in $V[H][K]$, $\kappa_{n+1}$-closed in $V[\prod_{m \geq n+1} H_m][K]$, and $V[H][K]$ is a $\kappa_{n+1}$-c.c.\ extension of this model. In particular, $V[H][K]$ satisfies the hypotheses of Lemma~\ref{branch}. Let $u\in j(S)$ be such that $u\supseteq \bigcup j"\mathcal{P}_\mu(\lambda)$. For each $\delta<\kappa_n$ and $s=(\tau, p,q)\in I$, let \[ b_{s,\delta}=\{(z,\xi)\mid z\in S,\xi<\kappa_n, (p,q)\Vdash_{\mathbb{L}_\tau} j(\flev{\tau}{z}{\xi})=j\flev{\tau}{u}{\delta}\cap j(z)\}. \] Then $\langle b_{s, \delta}\mid s\in I, \delta<\kappa_n\rangle$ is a system of branches through $\langle R_s \rangle_{s \in I}$. By passing to a subset of $I\times\kappa_n$ if necessary, we may assume that any two $b_{s,\eta}$ and $b_{s,\delta}$ are distinct. Set $\mathcal{D} = \{(s, \delta) \mid b_{s,\delta} \in V[H][K]\}$. By ${<}\kappa_{n+1}$-distributivity, $\mathcal{D}\in V[H][K]$, as is $\langle b_{s,\delta}\mid (s,\delta)\in\mathcal{D}\rangle$. By Lemma \ref{branch}, there is a cofinal branch among these in $V[H][K]$.
So the set $S' := \bigcup_{(s,\delta) \in \mathcal{D}} \mathop{\mathrm{dom}}\nolimits(b_{s,\delta})$ is cofinal in $\mathcal{P}_\mu(\lambda)$, and we have that $\langle b_{s,\delta} \rangle_{(s,\delta)\in\mathcal{D}}$ is a system of branches through $\langle R_s \upharpoonright S' \times \kappa_n \rangle_{s \in I}$. \end{proof} Let $\dot{\pi}_{s,\delta}$ be the corresponding name for a branch obtained from $b_{s,\delta}$. Fix $\bar{z}\in\mathcal{P}_\mu(\lambda)$ such that for any two distinct $(s,\eta)$ and $(s,\delta)$ in $\mathcal{D}$, the names $\dot{\pi}_{s,\delta}$ and $\dot{\pi}_{s,\eta}$ are forced densely often to disagree at $\bar{z}$, if they are cofinal, and are bounded by $\bar{z}$ otherwise. \begin{lemma}\label{lemma3 muplus} There exist an unbounded $\bar{S} \subseteq S'$ and $\bar{A} \in U_0$ with $\bar{A} \subseteq A$, so that for all $x \in \bar{A}$ and all $z \in \bar{S}$ with $c(z)\in x$, there exists a $\delta < \kappa_n$ so that, letting $s_x = (\tau_x,p_x,q_x)$, we have $(s_x,\delta) \in \mathcal{D}$, and \[ (p_x,q_x) \Vdash_{\mathbb{L}_{\tau_x}} \dot{d}^{\tau_x}_{g(x)} \cap z = \dot{\pi}_{s_x,\delta} \cap z. \] \end{lemma} \begin{proof} The proof is as in Lemma \ref{forcelemma3}. Again, we assume otherwise. Let $(\dagger_{x,z})$ be the statement that there exists a $\delta < \kappa_n$ so that, letting $s_x = (\tau_x,p_x,q_x)$, we have $(s_x,\delta) \in \mathcal{D}$, and \[ (p_x,q_x) \Vdash_{\mathbb{L}_{\tau_x}} \dot{d}^{\tau_x}_{g(x)} \cap z = \dot{\pi}_{s_x,\delta} \cap z. \] Then there must be some $z_0 \in \mathcal{P}_\mu(\lambda)$ with $\bar{z} \subset z_0$, and a measure-one set $A'\in U_0$, such that $(\dagger_{x,z})$ fails whenever $c(z) \in x \in A'$ with $z \in S'$.
Define a system $\langle R'_{s} \rangle_{s \in I}$ on $\{z\in S'\mid z_0\subset z\} \times \kappa_n$ from $\langle R_{s} \rangle_{s \in I}$ by deleting all ground model branches $b_{s,\delta}$ as follows: \begin{align*} (z,\xi) \mathbin{R'_s} (z',\zeta) \iff &z_0 \subset z,\; z' \in S', \;(z,\xi) \mathbin{R_s} (z',\zeta), \text{ and for all }\delta < \kappa_n,\\ &\text{ if }(s,\delta) \in \mathcal{D} \text{ then }(z,\xi) \notin b_{s,\delta}. \end{align*} Setting $b'_{s,\delta}$ to be the restriction of $b_{s,\delta}$ to $R'_s$, we get that $\langle b'_{s,\delta}\mid (s,\delta)\notin \mathcal{D}\rangle$ is a system of branches through $\langle R'_s \rangle_{s \in I}$. But then, by branch preservation, there exists some $(s,\delta)$ so that $b'_{s,\delta}$ is cofinal and belongs to $V[H][K]$, and so $(s,\delta) \in \mathcal{D}$. This contradicts the definition of $\langle R'_s \rangle_{s \in I}$. \end{proof} Finally, we can complete the argument. For all $x \in \bar{A}$ with $c(\bar{z})\in x$, there is a unique $\delta<\kappa_n$ such that for all $z\in \bar{S}$ with $c(z)\in x$, $(p_x,q_x) \Vdash \dot{d}^{\tau_x}_{g(x)} \cap z = \dot{\pi}_{s,\delta } \cap z$. Let $A''=\{x\in \bar{A}\mid c(\bar{z})\in x, g(x)=\bigcup\{z\mid \bar{z}\subset z, c(z)\in x\}\}\in U_0$. Then for all $x\in A''$, there is some $\delta<\kappa_n$ such that $(p_x,q_x) \Vdash \dot{d}^{\tau_x}_{g(x)} = \dot{\pi}_{s,\delta } \cap g(x)$. For $(s,\delta) \in \mathcal{D}$ with $s = (\tau,p,q)$, let \[ T_{s,\delta} := \{z \in \mathcal{P}_\mu(\lambda) \mid (p,q) \Vdash \dot{d}^{\tau}_z = \dot{\pi}_{s,\delta} \cap z \}. \] We have shown that $T := \bigcup_{(s,\delta) \in \mathcal{D}} T_{s,\delta} \supseteq \{ g(x) \mid x\in A''\}$ is stationary. Since $|\mathcal{D}| \leq \kappa_n < \mu$, for some $(s,\delta)$, $T_{s,\delta}$ is stationary.
Denoting $s=(\tau,p,q)$, it follows that $b_{s,\delta}$ generates an ineffable branch through $\dot{d}^\tau$ in any extension containing $(p,q)$, a contradiction. \end{proof} \section{Open problems} Having obtained ITP at the successor of a singular, the direction of forcing ITP at successive cardinals past a singular looks very promising. As a first step, one can try to combine the construction in \cite{Neeman} with the results in this paper to force ITP at every $\aleph_n$, $n>1$, together with ITP at $\aleph_{\omega+1}$. We conjecture that this should be possible. Next, as mentioned in the introduction, in order to get the tree property everywhere one needs failures of SCH, because the tree property (or the strong tree property, or ITP) at $\kappa^{++}$ for a singular strong limit $\kappa$ implies that SCH fails at $\kappa$. So here are some questions to consider: \begin{question} Can we obtain $\operatorname{ITP}$ at $\kappa^+$ for a singular strong limit cardinal $\kappa$ together with failure of SCH at $\kappa$? \end{question} \begin{question} Can we obtain $\operatorname{ITP}$ at $\kappa^+$ and $\kappa^{++}$ for a singular strong limit cardinal $\kappa$? \end{question} \begin{question} Can we get the above for $\kappa=\aleph_{\omega^2}$? Or, much more ambitiously, for $\kappa=\aleph_\omega$? \end{question} \bibliographystyle{asl}
\section{1. Introduction} \noindent As studying social media has become a major cross-disciplinary endeavor in the past decade, scholars have grappled with the numerous ethical and methodological questions that Internet researchers inevitably face on a daily basis. In the significant body of work about online research ethics, which touches on everything from anonymization to the researcher-subject relationship, relatively little has been written about notions of vulnerability. Who are vulnerable subjects in online research? Interestingly, Internet researchers seem to generally conceptualize vulnerability in the biomedical research tradition --- focusing on children, for instance, or other populations unable to fully provide informed consent --- as opposed to other forms of vulnerability, especially that which we term \textit{political vulnerability}. In this short paper, we discuss the methodological and ethical quandaries that can emerge when studying politically vulnerable communities, such as political dissidents, bloggers in authoritarian countries, refugees, and others. Our central question is as follows: to what extent are current social media research guidelines acceptable for studying these types of communities, or should researchers go beyond current ethics standards to treat them with a unique set of principles? The first section of the paper will engage with the literature on Internet ethics in order to (a) outline the central concepts for ethical online research and (b) see how scholars of Internet research ethics conceptualize the notion of the ``vulnerable subject" or ``vulnerable community." The second section will aim to contextualize this literature with a number of short case studies (relevant journal articles and reports), to demonstrate whether or not the best practices and ethics guidelines discussed in the first section are generally applied by researchers.
At the ICWSM workshop, we hope to build on these two sections by discussing alternative approaches for minimizing harm in online research that deals with politically vulnerable communities. A preliminary discussion is presented in the final section. \section{2. Vulnerability and Vulnerable Communities} Internet researchers generally accept that certain populations are often more vulnerable in all types of research, whether offline or online, and that some sort of duty to protect those subjects exists, depending on the circumstances (Markham and Buchanan, 2012). While the definition of vulnerability presented by scholars of Internet ethics is often broad, they tend to define ``vulnerable persons" in the biomedical research tradition as ``young people, the elderly, and people with mental health issues" (Markham and Buchanan, 2012: 29). Buchanan (2011: 85) illustrates this reasoning when she offers an example of vulnerable populations in Internet research that includes pregnant women and prisoners. These are persons vulnerable in the traditional sense of human-subjects research: people who may not be able to provide properly informed consent, or could be harmed by direct experimentation. However, many communities exist online that are vulnerable in a different sense, wherein their information must remain private because their identification could be problematic in some way. For example, Bruckman (2002) classifies LGBTQ+ individuals and those with serious diseases as vulnerable populations that could experience significant harm if ``outed" and de-anonymized. Her work presents a broader notion of vulnerability, where the issue is not just with subjects being unable to provide truly informed consent, but also more broadly with the sensitive nature of the data being collected by the researcher (and therefore, with the practices that the researcher takes to protect their subjects).
We would like to introduce a notion of political vulnerability, where research subjects are vulnerable for reasons that go beyond them simply not being able to provide properly informed consent, and extend specifically into harms that could arise due to their political and social context. For example, in many studies, especially those conducted on the ``authoritarian Internet,'' data collected by the researcher could prove politically damning and harmful to the research subjects. Interestingly, direct mentions of political vulnerability seem to be entirely absent from the Internet ethics literature, and one must look towards the conventional ethics literature to find allusions to this concept (Birman, 2006) --- a potentially concerning gap given the amount of research currently being done on politically vulnerable communities online. However, although the current notions of vulnerability commonly discussed in the Internet ethics community can seem a bit narrow, the general principles espoused by that community are far broader. The first guiding principle established by the Association of Internet Researchers in 2012 actually concerns vulnerability, stating that ``The greater the vulnerability of the community / author / participant, the greater the obligation of the researcher to protect the community / author / participant." This flexible principle would allow for the researcher to protect politically vulnerable communities as long as they properly identify the community as such. Approaches which emphasize context are crucial, as several factors should be taken into account when dealing with politically vulnerable communities, such as political dissidents, that may not be as important in other scenarios. Hongladarom and Ess (2007) have proposed a set of ``good Samaritan ethics" which could provide a good starting point for those studying politically vulnerable subjects.
They advocate a set of best practices that hold for both qualitative and quantitative work, and include: removing all names and pseudonyms from the published work, only accessing publicly available content, not sneaking into gated online forums or communities, identifying oneself as a researcher rather than masquerading as a user, and not linking to direct sources in the published work to help maintain anonymity. Leaving aside the point that even without links it may take only a quick copy-and-paste online search to find quotations, provided that they have been reproduced in their language of origin, this set of principles raises several methodological questions which apply to most approaches that deal with vulnerable subjects in online research. The central issue is one of credibility and trustworthiness. As Bruckman (2002) has noted, the more a researcher protects sources and anonymizes, the more that they may be seen as reducing the accuracy and replicability of their study. This may be more of an issue for quantitative scholars than for qualitative scholars (whose work is generally not designed to be reproducible), but an overall assumption seems to be that there can be a trade-off between the robustness of one’s research and the amount of protection granted to one’s subjects. How best to ensure that one’s findings are reliable, while one’s sources remain safely protected? Eynon, Fry, and Schroeder (2008: 35) have argued that a key question relates to ``the extent to which Internet researchers should be concerned with the collection and use of potentially harmful data --- given that [they] cannot anticipate all the ways in which it might be reused and by whom." If, in the case of a politically vulnerable community, the Internet researcher is very concerned about the protection of data, and yet still wants to maintain methodological rigor, how should they best proceed?
In the following section, we shall present some brief case studies to evaluate how scholars deal with this problem. \section{3. Case Studies} Curious to understand whether or not the guidelines and ethical frameworks proposed by Internet ethics scholars are broadly employed in actual research, we collected a convenience sample of 15 articles on the topic of online political debate in authoritarian countries. Using the Scopus database aggregator to search for articles with the Boolean search function ``political debate AND online AND authoritarian," we selected a diverse set of articles published between 2000 and 2015, which employ a variety of methods --- social media scraping, surveys, interviews, ethnography, participant observation, discourse analysis, and more --- and deal with five countries: China, Iran, Burma, Cuba, and Uzbekistan. Some articles are specifically about blogging, while others are predominantly concerned with forums, micro-blogs, and other forms of social media. Given the length constraints of this paper, we will present what we believe to be the most interesting findings, rather than discuss each article individually in depth. In their article on ``News Blogging in Cross-Cultural Contexts," Katz and Lai (2009) provide an interesting method for protecting source material. They seek to understand the motives of bloggers in South-East Asian countries, especially in countries such as Burma and Thailand where speech is restricted and bloggers can be jailed or physically punished for their online activities. Instead of simply quoting online content, which, as noted earlier, can easily be de-anonymized through a cursory Google search, they reach out to interesting bloggers and interview them via e-mail. The authors can then publish anonymous, original quotations which provide interesting insights into blogging/online dissent as a practice, without providing searchable, potentially incriminating material.
The authors do not discuss their email procedure in great detail, but this approach seems to be a good option, with the caveat that the researchers (and the bloggers) must maintain an adequate level of operational security, and ensure that they use encrypted email (PGP) and secure communications channels. This research design stands in contrast to the technique employed by Yang (2003), who is interested in debates around democratization on Chinese social media. Yang performs a discourse analysis of various Chinese forums and bulletin boards, and reproduces many posts from these forums in order to illustrate his main arguments. He provides usernames and the post dates for each quote, in order to ``give electronic voice to those posting in the forum" (Yang, 2003: 416). This technique could be perceived as problematic from an ethical standpoint, especially since most of the quotes reproduced are of users critiquing the government and proclaiming the importance of free speech above all else. It is interesting to note his rationale --- he not only attaches these identifiers for citation purposes (in what he surely considers to be a transparent and robust practice), but also to credit the authors and give them a ``voice." While Yang notes that government censors are very active on these sites (and, ostensibly, posts which are deemed unsuitable by the government will simply be removed), he does not mention the potential implications of capturing posts before they are censored, or that publishing them and linking them back to a user could provide them with more voice than they may have wished. He also does not mention whether these forums are fully private, and whether a site registration was required to gain access to this material. If so, social media research guidelines generally suggest that some sort of consent and moderator permission should be required (Markham and Buchanan, 2012). Shen (2009) employs a similar approach.
He provides exact (translated) quotes taken from forums and bulletin boards, and then gives the reference numbers and usernames for each post --- again, a potentially problematic practice. In an influential article, King, Pan, and Roberts (2013) perform a large-scale quantitative analysis of Chinese censorship online. Their highly publicized finding was that the Chinese government does not necessarily censor commentary critical of the government, as commonly believed, but rather predominantly censors posts that advocate for some sort of collective action. They anonymize all of the social media content used to corroborate their argument; however, when presenting quotes, they choose to provide the original Chinese posts alongside an English translation. While this approach allows greater transparency as far as findings go (given the amount of flexibility that is inherent in the act of translation, authors could unconsciously or consciously provide subtle nudges to the translations in order to better support their argument), it also significantly increases the chances that the author of the quote can be determined via a Chinese search engine. This could be problematic, especially since many of the posts they choose to publish are quite inflammatory and highly critical of the government, and because they know that several of these posts were eventually censored. They even go as far as to comment that these posts constitute ``detailed information that the Chinese government does not want anyone to see and goes to great lengths to prevent anyone from accessing" (King, Pan, and Roberts, 2013: 328), and yet they do not discuss whether reproducing this information could perhaps endanger the (possibly unwitting) participants of this study. Kelly and Etling's study of Iranian blogging (2008), published by Harvard's Berkman Center, provides a potential response to the transparency-vulnerability trade-off described above.
They conduct a network analysis of more than 60,000 blogs in order to map key political issues in the Iranian blogosphere, and employ a team of Iranian language coders, who assess important blogs individually and provide short summaries of their content in English. Rather than providing quotes directly from the blogs or linking to the blogs themselves, the authors choose to publish these summaries. For example, one reads ``Blogger believes that Iran lacks basic freedoms and democracy and posts articles, poems, and pictures to reflect his beliefs." (Kelly and Etling, 2008: 14). These brief quotes are used to drive the analysis, and provide an admirable level of anonymity and security to the potentially vulnerable bloggers who are being studied. However, the authors do not employ any tests for intercoder reliability (where multiple coders code the same randomly sampled content, so that the authors can ensure that they are all similar), potentially raising questions about the robustness of the study. Because it appears that the authors themselves do not speak Farsi, it is possible that coders could affect the study by coding erroneously (either accidentally, or on purpose) without the authors realizing it. Two studies of political dissidents and bloggers online, by Venegas (2010) and Kendzior (2011), utilize a similar technique in order to maintain reliability and address the vulnerability of their sources. In Venegas' study of Cuban dissidents, and Kendzior's study of Uzbek exiles, both scholars only cite material that is produced by highly visible political dissidents/bloggers/intellectuals. Kendzior focuses on the online forums and communities through which Uzbek diaspora communities engage politically with their counterparts still in Uzbekistan, and punctuates her account with the online writings of Xoldor Vulqon, a high-profile poet and political commentator.
Similarly, Venegas’ critical analysis of the ``biopolitics of Cuban blogging" only quotes material written by well-known Cuban bloggers/intellectuals, such as Yoani Sanchez (an award-winning blogger and writer). While neither author explicitly acknowledges that this technique is intended to protect the identities of other, less public bloggers, it may provide an effective solution to the vulnerability problem. The authors do not have to anonymize their sources, as their subjects are already very public and well known, and therefore have in effect personally absorbed the risks associated with their dissent. This is not to say that these risks do not exist --- Venegas provides several examples of bloggers describing, in excruciating detail, the many threats levied against them by the state --- but the calculus of ethical vulnerability is slightly different, as the threshold of harm that could possibly face the research subject is unlikely to be raised by a published piece of research, given their high public profile and history of public dissent. Taken together, these brief examples provide some insight into the ways that researchers deal with the political vulnerability of their data and their subjects. In the next section, we will draw upon these examples to inform a discussion of best practices for future research in this area. \section{4. Discussion} What is the best way to maintain methodological rigor without sacrificing the safety of potentially politically vulnerable subjects? The examples provided above implicitly hint at a trade-off between robustness and the level of protection granted to sources. After all, in a perfect world, scholars would be able to reproduce all of the content that they are using to corroborate their argument, and would be able to release their data publicly to maximize transparency (and the validity of their study).
However, in the case of politically vulnerable communities or politically sensitive material, this is clearly not possible, and many of the countries discussed in the previous section --- China, Burma, Iran --- have a history of imprisoning or physically punishing those who cross the line with their speech online. As such, the standard of harm for the research subjects is often much higher than in North America or Europe, and scholars performing online research in these areas should be cognizant of the potential vulnerability of their subjects. Interestingly, although none of the authors mentioned above directly discuss the potential ethical issues posed by their study, they seem to have an implicit understanding that their subjects may be vulnerable, and they operationalize these assumptions at least to a certain extent. For example, King, Pan, and Roberts assert the need to maintain anonymity for their data, and state that releasing their full dataset would constitute an ethical breach. However, as mentioned in the previous section, they still reproduce full quotations in a way which could be problematic. Since we can assume the good intentions of most researchers, and also assume that every scholar wishes for their work to be as robust as possible, what are the best ways to move beyond this trade-off? How can we ensure that research is robust, and yet still adequately protects its potentially politically vulnerable subjects? When discussing best practices for this sort of research in the future, perhaps the first step would be to propose that scholars make an effort to be more open about the potential political vulnerability of their subjects, and that they discuss these issues in the methodology sections of their articles. A second step would be to push for creative solutions to the searchability problem --- the issue that direct quotations can possibly be traced back to the author even if they are anonymized.
Katz and Lai's interview method provides one such solution: they conduct online interviews with their subjects, instead of quoting online content. As long as they maintain sufficient operational security, and take care to use encrypted email and secure their communications well, it will be far more difficult to de-anonymize those quoted. This method is also fairly robust, as it allows them to provide direct quotations, although the sources must, of course, remain anonymous. The technique employed by Venegas and Kendzior is also effective: they quote only high-profile bloggers and public intellectuals. These individuals, while not necessarily less vulnerable, are less likely to face harm as a direct result of the research, given their existing levels of high-profile public dissent. Kelly and Etling's method of summarizing the content, rather than quoting it explicitly, could provide another acceptable solution, as long as intercoder reliability is maintained. As well, some sort of third-party review could be introduced, where other trusted academics or researchers could assess qualitative data and vouch for its validity (to thwart, for example, the potential scenario that journal editors worry about, involving the fabrication of content by unscrupulous researchers), thereby increasing robustness while maintaining high levels of anonymity. Their technique is taken a step further by Markham (2012: 334), who argues that the only way to truly ensure the privacy of the source material is through ``creative, bricolage-style transfiguration of original data into composite accounts or representational interactions." While this method is very well suited for studying politically vulnerable communities, according to Markham, several papers written by herself and her colleagues were rejected on the grounds that they had ``fabricated" data, while they were really trying to protect their sensitive source material.
This example illustrates that both qualitative and quantitative scholars may face institutional pressures to make at least certain aspects of their data public or ``transparent," at a potential cost to the safety of their research subjects. In sum, these are only some possible solutions to the robustness-anonymization trade-off. Social media researchers must emphasize that context is key, and that the best solutions will have to be determined on a case-by-case basis. It should be noted that not all of these best practices will be acceptable for those with a more positivist bent, while post-positivists are more likely to employ more involved techniques such as the one proposed by Markham. Also, both qualitative and quantitative scholars should find ways to account for political vulnerability, although the way this is accomplished may vary. The notion of what constitutes robust research can vary between qualitative and quantitative approaches, with quantitative scholars in certain disciplines (such as Psychology or Political Science) more likely to emphasize the importance of reproducibility. However, we would suggest that these short case studies illustrate that it is entirely possible to produce robust work which also accounts for political vulnerability: in the small number of case studies provided here, we have seen several creative solutions. Further discussion and experimentation in this area would surely demonstrate that many others are possible as well, depending on the context and subject population in question. \section{5. Conclusion} The Internet ethics community has spent almost two decades discussing critical ethical questions associated with Internet research.
In this paper, we have suggested that the notion of \textit{political vulnerability} has been underrepresented in these debates, and conclude that it would be worthwhile for researchers to (a) take extra care when working with politically vulnerable communities, and (b) be cognizant that their work could have direct ramifications for the politically sensitive persons who (often unwittingly) provide data. Given that Internet researchers emphasize the importance of making informed methodological decisions based on context, it would be valuable for them to ensure that they consider \textit{political context} as well. A credible research method should not only be methodologically rigorous and well suited to the task at hand, but also be ethically credible and robust. It should demonstrate that the researcher has treated their data (and the populations that could be affected by said data) with the appropriate measure of care, and that they have thought through some of the issues associated with anonymity presented above. After all, no researcher wants to be informed that someone has been physically harmed as a result of their study. In order to avoid this possibility, responsible researchers would do well to strive towards best practices for research that deals with politically vulnerable populations. \section{6. References} \smallskip \noindent Bassett, E., and O'Riordan, K. 2002. Ethics of Internet Research: Contesting the Human Subjects Research Model. \textit{Ethics and Information Technology} 4 (3): 233--247. \smallskip \noindent Birman, D. 2006. Ethical Issues in Research With Immigrants and Refugees. In \textit{The Handbook of Ethical Research with Ethnocultural Populations and Communities}, Trimble and Fisher, eds. Thousand Oaks, California: SAGE. \smallskip \noindent Bruckman, A. 2002. Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet. \textit{Ethics and Information Technology} 4 (3): 217--231.
\smallskip \noindent Buchanan, E. 2011. Internet Research Ethics: Past, Present, and Future. In \textit{The Handbook of Internet Studies}, Consalvo and Ess, eds. 83--108. Oxford, UK: Wiley-Blackwell. \smallskip \noindent Elgesem, D. 2002. What Is Special about the Ethical Issues in Online Research? \textit{Ethics and Information Technology} 4 (3): 195. \smallskip \noindent Ess, C., and the Association of Internet Researchers Working Group. 2002. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee. \smallskip \noindent Eynon, R., Fry, J., and Schroeder, R. 2008. The Ethics of Internet Research. In \textit{The Handbook of Online Research Methods}, edited by Blank, Lee, and Fielding, 23--41. London, UK: SAGE. \smallskip \noindent Eynon, R., Schroeder, R., and Fry, J. 2009. New Techniques in Online Research: Challenges for Research Ethics. \textit{Twenty-First Century Society} 4 (2): 187--199. \smallskip \noindent Hongladarom, S., and Ess, C., eds. 2007. \textit{Information Technology Ethics: Cultural Perspectives}. Hershey, PA: Idea Group. \smallskip \noindent Katz, J., and Lai, C.H. 2009. News Blogging in Cross-Cultural Contexts: A Report on the Struggle for Voice. \textit{Knowledge, Technology and Policy} 22 (2): 95--107. \smallskip \noindent Kelly, J., and Etling, B. 2008. Mapping Iran’s Online Public: Politics and Culture in the Persian Blogosphere. Berkman Center Research Report. \smallskip \noindent King, G., Pan, J., and Roberts, M.E. 2013. How Censorship in China Allows Government Criticism but Silences Collective Expression. \textit{American Political Science Review} 107 (2): 326--343. \smallskip \noindent Kendzior, S. 2011. Digital Distrust: Uzbek Cynicism and Solidarity in the Internet Age. \textit{American Ethnologist} 38 (3): 559--575. \smallskip \noindent Markham, A., and Buchanan, E. 2012. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0).
\smallskip \noindent Markham, A. 2012. Fabrication as Ethical Practice: Qualitative Inquiry in Ambiguous Internet Contexts. \textit{Information, Communication and Society} 15 (3): 334--353. \smallskip \noindent Schroeder, R. 2007. An Overview of Ethical and Social Issues in Shared Virtual Environments. \textit{Futures} 39 (6): 704--717. \smallskip \noindent Shen, S. 2009. A Constructed (un)reality on China's Re-Entry into Africa: The Chinese Online Community Perception of Africa. \textit{Journal of Modern Africa Studies} 47 (3): 425--448. \smallskip \noindent Venegas, C. 2010. `Liberating' the Self. \textit{The Journal of International Communication} 16 (2): 43--54. \smallskip \noindent Walther, J.B. 2002. Research Ethics in Internet-Enabled Research: Human Subjects Issues and Methodological Myopia. \textit{Ethics and Information Technology} 4 (3): 205. \smallskip \noindent White, M. 2002. Representations or People? \textit{Ethics and Information Technology} 4 (3): 249--266. \smallskip \noindent Yang, G. 2003. The Co-Evolution of the Internet and Civil Society in China. \textit{Asian Survey} 43 (3): 405--422. \end{document}
\section{Introduction} \label{sec:intro}
The production of circularly polarized light from either an unpolarized or a partially polarized source involves the removal of either the left-circularly polarized or the right-circularly polarized component. One way of accomplishing this removal is by inserting a rotated linear polarizer between two orthogonally oriented quarter-wave plates. Relying on the anisotropy of the material that it is made of, a quarter-wave plate \cite[pp.~20-21]{Collett} converts linearly polarized light into circularly polarized light and vice versa. A Fresnel rhomb \cite[p.~49]{Collett} also converts linearly polarized light into circularly polarized light, but without reliance on anisotropy. Instead, its operation is based on the difference between the phase shifts of totally internally reflected light of two orthogonal linear polarization states. Another way is to use a slab of a structurally chiral medium (SCM), exemplified by cholesteric liquid crystals (CLCs) \cite{Chan,deG} and chiral sculptured thin films (CSTFs) \cite{YK1959}. As an SCM is helicoidally non\-homogeneous along a fixed axis, it displays the circular Bragg phenomenon in a specific spectral regime called the circular Bragg regime. This phenomenon is the almost total reflection of incident light of one circular polarization state but very little reflection of incident light of the other circular polarization state, provided that the thickness $L$ of the SCM is sufficiently large \cite{FLcbp}. Depending on the structural period $P$ as well as the relative permittivity dyadic ${\=\eps}_{\rm rel}$ of the SCM, the spatiotemporal manifestation of the circular Bragg phenomenon is the formation of a light pipe that bleeds energy backward inside the SCM under appropriate conditions \cite{GML,GLepjap}. The center wavelength $\lambda_{\scriptscriptstyle 0}^{\rm Br}$ of the circular Bragg regime depends on $P$.
Typically, $\lambda_{\scriptscriptstyle 0}^{\rm Br}/P > 1$ for normal incidence because all eigenvalues of the relative permittivity dyadics of CLCs and CSTFs exceed unity, and the ratio $L/P$ has to be in the neighborhood of $10$ or higher for adequately strong manifestation of the circular Bragg phenomenon \cite{FLcbp,StJohn}. Both CLCs \cite{Adams,Scheffer,Isihara,Jacobs} and CSTFs \cite{WHL00,PSH08,PMSPL,ZYLZLX} have been demonstrated to serve as circular-polarization filters in the visible and the near-infrared (NIR) regimes, because CLCs with $P\lesssim 700$~nm \cite{Kim2010,Sato,Kulkarni} and CSTFs with $P\leq 800$~nm \cite{PSH08,PMSPL,ELmotl} are commonplace. To our knowledge, CLCs with periods as large as about $6000$~nm have been reported \cite{Li2017}. But the fabrication of stable CLCs with the ratio $L/P$ in the neighborhood of $10$ becomes challenging as $P$ increases \cite{Denisov,Kiyoto}. One reason is that surface alignment forces invoked by the use of a textured substrate become less efficacious as the ratio $L/P$ increases \cite{Denisov}. Another reason is the buildup of internal stresses, which are inimical to the structural chirality needed in the ordered layering of aciculate molecules \cite{Kiyoto}. CLCs are also sensitive to changes in temperature and pressure \cite{Chan,deG}. Therefore, although CLCs are suitable for use as circular-polarization filters also in the short-wavelength infrared (SWIR) regime (wavelength $\lambda_{\scriptscriptstyle 0}\in(1400,3000)$~nm) \cite{Zhang}, similar use in the mid-wavelength infrared (MWIR) regime ($\lambda_{\scriptscriptstyle 0}\in(3000,8000)$~nm) appears not to be easily possible. CSTFs are solid-state analogs of chiral liquid crystals. Ideally, a CSTF is an assembly of parallel helixes of pitch $P$ and rise angle $\chi$. Structural chirality is thus rigidly built into the CSTF morphology, so that neither surface alignment forces have to be invoked nor are internal stresses deleterious.
The CSTF morphology is immune to small changes of pressure and temperature, but protection against moisture intake may be necessary \cite{HWM}. Therefore, with the aim of developing MWIR circular-polarization filters, we decided to fabricate CSTFs with $P$ as high as 5880~nm. Zinc selenide (ZnSe) was chosen because its bulk refractive index varies quite slowly with $\lambda_{\scriptscriptstyle 0}$ in the MWIR regime \cite{Marple} and because it is easy to deposit CSTFs \cite{ELmotl,McAtee} of this material by oblique-angle thermal evaporation \cite{Mattox,STFbook}. The ratio $L/P$ was kept as large as practicable with our thermal evaporation system, which was designed to grow CSTFs functioning in the visible regime and is suboptimal, compared to industrial systems, for fabricating the very thick CSTFs needed at the longer MWIR wavelengths. We also measured the normal-incidence transmittance characteristics of the fabricated CSTFs.
The plan of this paper is as follows. A brief description of the theory underlying the transmission of light is presented in Sec.~2.\ref{reftrans}, followed by a discussion in Sec.~2.\ref{sec:rp} of the premise of this research. Fabrication of five different CSTFs is described in Sec.~3.\ref{sec:fab}, and experimental methods for their morphological and optical characterization are presented in Secs.~3.\ref{sec:mc} and 3.\ref{opt-char}, respectively. Scanning-electron micrographs of all CSTFs fabricated for this research are presented in Sec.~4.\ref{sec:morph}. Theoretical spectrums of the transmittances of a reference CSTF are provided in Sec.~4.\ref{sec:rtr} to understand the very limited experimental spectrums of the fabricated CSTFs in Sec.~4.\ref{sec:er}. The paper ends with some remarks in Sec.~5.
\section{Theoretical Preliminaries}\label{sec:theory} \subsection{Optical Transmission}\label{reftrans} Suppose the region $0 \leq z \leq L$ is occupied by a CSTF, while the half spaces $z \leq 0$ and $z \geq L$ are vacuous.
The linear dielectric properties of the CSTF are delineated by the unidirectionally non\-homogeneous relative permittivity dyadic \cite{STFbook} \begin{eqnarray} \nonumber &&\=\varepsilon_{\,r} (z,\lambda_{\scriptscriptstyle 0}) = \=S_z(h,z,P)\.\=S_y(\chi)\. \Big[ \eps_{\rm a}(\lambda_{\scriptscriptstyle 0}) \,\#u_{\rm z}\uz +\eps_{\rm b}(\lambda_{\scriptscriptstyle 0})\,\#u_{\rm x}\ux\\ &&\quad +\,\eps_{\rm c}(\lambda_{\scriptscriptstyle 0})\,\#u_{\rm y}\uy\Big]\.\=S_y^{-1}(\chi)\. \=S_z^{-1}(h,z,P)\, , \,0 \leq z \leq L\, . \label{epsbasic} \end{eqnarray} Here and hereafter, an $\exp( - i \omega t)$ dependence on time $t$ is implicit with $\omega=2\pic_{\scriptscriptstyle 0}/\lambda_{\scriptscriptstyle 0}$ as the angular frequency and $i=\sqrt{-1}$; $c_{\scriptscriptstyle 0}$ is the speed of light in free space; and $\#u_{\rm x}$, $\#u_{\rm y}$, and $\#u_{\rm z}$ are the unit vectors in a Cartesian coordinate system. The helicoidal nonhomogeneity of the CSTF is captured by the rotation dyadic \begin{eqnarray} \nonumber && \=S_z(h,z,P)= \#u_{\rm z}\uz + \left( \#u_{\rm x}\ux+\#u_{\rm y}\uy\right)\,\cos\left(\frac{2\pi z}{P} \right) \\ [5pt] && \qquad +\,h\, \left( \#u_{\rm y}\#u_{\rm x}-\#u_{\rm x}\#u_{\rm y}\right)\,\sin\left(\frac{2\pi z}{P}\right)\,. \end{eqnarray} The direction of the non\-homogeneity is parallel to the $z$ axis. The CSTF is structurally right-handed when $h = 1$, but is structurally left-handed when $h = -1$. The dyadic \begin{equation} \=S_y(\chi) = \#u_{\rm y}\uy + (\#u_{\rm x}\ux + \#u_{\rm z}\uz) \, \cos\chi + (\#u_{\rm z}\#u_{\rm x}-\#u_{\rm x}\#u_{\rm z})\, \sin\chi \end{equation} represents the {\it locally\/} aciculate morphology of the CSTF, with $\chi > 0$~deg being the rise angle. A 4$\times$4-matrix--based procedure to calculate the linear and circular remittances of the CSTF of thickness $L$ for normally incident monochromatic light is explained elsewhere \cite[Chap.~9]{STFbook} in detail. 
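As an illustration, the dyadic composition in Eq.~(\ref{epsbasic}) can be sketched numerically. The following is a minimal sketch with assumed placeholder eigenvalues, not the code used for the remittance calculations, which follow the 4$\times$4-matrix procedure of \cite[Chap.~9]{STFbook}:

```python
import numpy as np

def S_z(h, z, P):
    """Rotation dyadic S_z(h, z, P) about the z axis, Eq. (2)."""
    c, s = np.cos(2 * np.pi * z / P), np.sin(2 * np.pi * z / P)
    return np.array([[c, -h * s, 0.0],
                     [h * s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def S_y(chi):
    """Tilt dyadic S_y(chi) for the locally aciculate morphology, Eq. (3)."""
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def eps_r(z, P, chi, h, eps_a, eps_b, eps_c):
    """Relative permittivity dyadic of Eq. (1).

    In the local frame, eps_b, eps_c, and eps_a sit on the
    x, y, and z axes, respectively, before the two rotations.
    """
    local = np.diag([eps_b, eps_c, eps_a])
    Sz, Sy = S_z(h, z, P), S_y(chi)
    # Mirrors the formula S_z . S_y . (local) . S_y^{-1} . S_z^{-1}.
    return Sz @ Sy @ local @ np.linalg.inv(Sy) @ np.linalg.inv(Sz)
```

The sketch makes the period-$P$ repetition and the symmetry of the dyadic explicit: $\=\varepsilon_{\,r}(z+P,\lambda_{\scriptscriptstyle 0})=\=\varepsilon_{\,r}(z,\lambda_{\scriptscriptstyle 0})$ for any $z$.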
Using this procedure, we calculated the four linear transmittances ($T_{\rm ss,sp,ps,pp}$) and the four circular transmittances ($T_{\rm RR,RL,LR,LL}$) as functions of $\lambda_{\scriptscriptstyle 0}$. Here, $T_{\rm sp}$ is the fraction of the incident power transmitted via an $s$-polarized plane wave when the incident plane wave is $p$ polarized, $T_{\rm LR}$ is the fraction of the incident power transmitted via a \textit{L}eft-circularly polarized plane wave when the incident plane wave is \textit{R}ight-circularly polarized, and so on. \subsection{Research Premise}\label{sec:rp} The premise of our research effort can now be explained using the sourceless versions of the frequency-domain Maxwell postulates: \begin{equation} \left.\begin{array}{l} \nabla\.\#E(\#r,\lambda_{\scriptscriptstyle 0})=0 \\[5pt] \nabla\.\#H(\#r,\lambda_{\scriptscriptstyle 0})=0 \\[5pt] \displaystyle{\nabla\times\#E(\#r,\lambda_{\scriptscriptstyle 0})=i\frac{2\pic_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}}\mu_{\scriptscriptstyle 0} \#H(\#r,\lambda_{\scriptscriptstyle 0})} \\[5pt] \displaystyle{\nabla\times\#H(\#r,\lambda_{\scriptscriptstyle 0})=-i\frac{2\pic_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}}\eps_{\scriptscriptstyle 0} \=\varepsilon_{\,r}(z,\lambda_{\scriptscriptstyle 0})\. \#E(\#r,\lambda_{\scriptscriptstyle 0})} \end{array} \right\}\,. \label{eq7} \end{equation} Here, $\eps_{\scriptscriptstyle 0}$ is the permittivity and $\mu_{\scriptscriptstyle 0}$ is the permeability of free space. Let us scale all space isotropically as \begin{equation} \#r^\prime = \alpha \#r\,, \end{equation} where the scaling parameter $\alpha>1$ is real. 
If we also scale the free-space wavelength as \begin{equation} \label{lp} \lambda_{\scriptscriptstyle 0}^\prime=\alpha\lambda_{\scriptscriptstyle 0}\,, \end{equation} Eqs.~(\ref{eq7}) can be recast as \begin{equation} \left.\begin{array}{l} \nabla^\prime\.\#E(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=0 \\[5pt] \nabla^\prime\.\#H(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=0 \\[5pt] \displaystyle{\nabla^\prime\times\#E(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=i\frac{2\pic_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}^\prime}\mu_{\scriptscriptstyle 0} \#H(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)} \\[5pt] \displaystyle{\nabla^\prime\times\#H(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=-i\frac{2\pic_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}^\prime}\eps_{\scriptscriptstyle 0} \=\varepsilon_{\,r}(z^\prime,\lambda_{\scriptscriptstyle 0}^\prime)\. \#E(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)} \end{array} \right\}\,. \label{eq9} \end{equation} Provided that \begin{equation} \=\varepsilon_{\,r}(z^\prime,\lambda_{\scriptscriptstyle 0}^\prime)= \=\varepsilon_{\,r}(z,\lambda_{\scriptscriptstyle 0})\,, \label{eq11} \end{equation} the solutions of Eqs.~(\ref{eq7}) and (\ref{eq9}) shall be identical \cite{Sinclair}. 
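The intermediate step, spelled out schematically for one of the curl postulates (suppressing the field arguments): since $\#r^\prime=\alpha\#r$ implies $\nabla^\prime=\alpha^{-1}\nabla$,

```latex
\[
\nabla^\prime\times\#E
  = \frac{1}{\alpha}\,\nabla\times\#E
  = i\,\frac{2\pi c_{\scriptscriptstyle 0}}{\alpha\lambda_{\scriptscriptstyle 0}}\,\mu_{\scriptscriptstyle 0}\,\#H
  = i\,\frac{2\pi c_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}^\prime}\,\mu_{\scriptscriptstyle 0}\,\#H\,.
\]
```

The other curl postulate transforms likewise, and the two divergence postulates are unaffected by the overall factor $\alpha^{-1}$.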
In other words, if we can fabricate two CSTFs such that
\begin{itemize} \item[(i)] their helixes have rise angles $\chi_1$ and $\chi_2=\chi_1$, \item[(ii)] their helixes have pitches $P_1$ and $P_2=\alpha{P_1}$, and \item[(iii)] their thicknesses are $L_1$ and $L_2=\alpha{L_1}$, \end{itemize}
then the transmittances of the first CSTF at $\lambda_{\scriptscriptstyle 0}=\lambda_{0_1}$ shall be the same as the transmittances of the second CSTF at $\lambda_{\scriptscriptstyle 0}=\lambda_{0_2}=\alpha\lambda_{0_1}>\lambda_{0_1}$ if the relative permittivity dyadic $\=\varepsilon_{\,r}^{(1)}$ of the first CSTF and the relative permittivity dyadic $\=\varepsilon_{\,r}^{(2)}$ of the second CSTF satisfy the condition
\begin{equation} \=\varepsilon_{\,r}^{(2)}(\alpha{z},\lambda_{0_2})= \=\varepsilon_{\,r}^{(1)}(z,\lambda_{0_1})\,. \label{eq12} \end{equation}
Therefore, if both CSTFs are made by evaporating a dielectric material whose bulk relative permittivity is uniform in a spectral regime that encompasses both $\lambda_{0_1}$ and $\alpha\lambda_{0_1}$, and if the first CSTF can serve as a circular-polarization filter in a spectral regime encompassing $\lambda_{0_1}$, then the second CSTF must serve as a circular-polarization filter in a spectral regime encompassing $\alpha\lambda_{0_1}$. ZnSe was chosen for evaporation because its bulk refractive index $n_{\rm ZnSe}$ varies weakly with $\lambda_{\scriptscriptstyle 0}$ in the MWIR regime \cite{Marple,CERDEC}. Since $n_{\rm ZnSe}$ generally decreases as $\lambda_{\scriptscriptstyle 0}$ increases, as shown in Fig.~\ref{Fig1}, the second CSTF will perform as a circular-polarization filter in a spectral regime encompassing wavelengths somewhat smaller than $\alpha\lambda_{0_1}$.
\begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig1} \captionof{figure}{Real part of $n_{\rm ZnSe}$ as a function of $\lambda_{\scriptscriptstyle 0}$.
The imaginary part of $n_{\rm ZnSe}$ does not exceed $10^{-6}$ in the same spectral regime \cite{CERDEC}. } \label{Fig1} \end{center}
\section{Experimental Methods} \subsection{Fabrication of CSTFs}\label{sec:fab} Five different structurally right-handed CSTFs were fabricated using oblique-angle thermal evaporation, the targeted values of their structural period $P$ and number of periods $L/P$ being listed in Table~\ref{tab:chiralDesigns}.
\begin{table}[ht] \caption{{\bf CSTF Samples Fabricated} } \label{tab:chiralDesigns} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \rule[-1ex]{0pt}{3.5ex} Sample & Targeted & Measured & & & Number of \\ \rule[-1ex]{0pt}{3.5ex} No. & $P$ & $P$ & $L/P$ & $\tau$ & Depositions \\ \rule[-1ex]{0pt}{3.5ex} & (nm) & (nm) & & (s) & \\ \hline \rule[-1ex]{0pt}{3.5ex} 1 & 366 & 364 & 10 & 5.955 & 3 \\ \hline \rule[-1ex]{0pt}{3.5ex} 2 & 732 & 727 & 10 & 12.627 & 5 \\ \hline \rule[-1ex]{0pt}{3.5ex} 3 & 1464 & 1410 & 10 & 25.971 & 10 \\ \hline \rule[-1ex]{0pt}{3.5ex} 4 & 2928 & 2900 & 10 & 52.658 & 20 \\ \hline \rule[-1ex]{0pt}{3.5ex} 5 & 5856 & 5880 & 1 & 106.033 & 4 \\ \hline \end{tabular} \end{center} \end{table}
Thermal evaporation was carried out in a low-pressure chamber (Torr International, New Windsor, New York). Inside this chamber, the material to be evaporated is kept in a tungsten boat (S22-.005W, R. D. Mathis, Long Beach, California) which can be heated by passing a current through it. About 15~cm above the boat is a substrate holder whose rotations about two mutually orthogonal axes are controlled by two stepper motors. One axis of rotation passes normally through the substrate holder to serve as the $z$ axis, and the second serves as the $y$ axis in the substrate ($xy$) plane. There is also a quartz crystal monitor (QCM) in the chamber which has been calibrated to measure the thickness of the film as it is being deposited.
Finally, a shutter between the boat and the substrate holder allows the user to abruptly start or halt the deposition as needed. 99.995 \% pure ZnSe (Alfa Aesar, Ward Hill, Massachusetts) was the material of choice for the reasons discussed previously. The manufacturer supplied ZnSe lumps that were crushed into a fine powder. When crushing the lumps, a respirator mask, gloves, and a lab coat were worn to avoid the toxic effects of ZnSe exposure \cite{MSDS}. Each CSTF was deposited on two substrates simultaneously, one being either glass or silicon and the other being silicon. The sample grown on the first substrate was optically characterized. If the value of $P$ was chosen for the circular Bragg regime to lie in the visible regime or the NIR regime or the SWIR regime, a pre-cleaned glass slide (48300-0025, VWR, Radnor, Pennsylvania) was used as the first substrate. If the value of $P$ was chosen for the circular Bragg regime to lie in the MWIR regime, silicon was used as the first substrate. The morphology of the sample grown on the second substrate was characterized on a scanning-electron microscope. Each substrate was cleaned in an ethanol bath using an ultrasonicator for 9~min on each side; thereafter, the substrate was immediately dried with pressurized nitrogen gas. Both substrates were secured to the substrate holder using Kapton tape (S-14532, Uline, Pleasant Prairie, Wisconsin), being positioned as close to the center of the holder as possible to ensure that they would be directly above the tungsten boat. The shutter was rotated to prevent any vapor from reaching the two substrates. To begin the deposition process, the low-pressure chamber was pumped down to approximately 1~$\mu$Torr. Next, the current was gradually increased to $\thicksim$100 A and the shutter was rotated to allow a collimated portion of the ZnSe vapor to reach the substrates. The deposition rate was manually maintained at $0.4 \pm 0.02$~nm s$^{-1}$, using the QCM. 
Upon completion of the deposition, the shutter was rotated to prevent the vapor from reaching the substrates and the current was quickly brought down to 0 A to prevent any further deposition. After the deposition, the chamber was allowed to cool for at least 30~min before it was opened. Given the desired thickness of the CSTF, the deposition rate, and the limited amount of ZnSe that could be put in the boat, multiple depositions were needed to fabricate the CSTF. The number of depositions for each of the five samples is shown in Table~\ref{tab:chiralDesigns}. During each deposition, a film of maximum thickness $1500$~nm could be deposited before the ZnSe powder loaded in the boat was depleted. Between these depositions, the boat was refilled with ZnSe powder and, if needed, the quartz crystal in the QCM was replaced. During every deposition, the angle of the collimated vapor flux with respect to the substrate plane was set at $\chi_v = 20$~deg. Furthermore, the substrate was rotated about the $z$ axis in accordance with the M:1 asymmetric serial bi-deposition technique \cite{McAtee,Pursel} as follows. A subdeposit was made for time $\tau$, followed by a rapid rotation about the $z$ axis by $180$~deg in $0.406$~s, followed by another subdeposit for time $\tau/M$, followed by a rapid rotation about the $z$ axis by $183$~deg in $0.413$~s. This 4-step process was iterated 120 times in order to deposit a single period. Based on a detailed comparative study to improve the exhibition of the circular Bragg phenomenon \cite{McAtee}, we fixed $M=7$ for all CSTFs. The value of $\tau$ chosen for each CSTF fabricated is listed in Table~\ref{tab:chiralDesigns}. \subsection{Morphological Characterization}\label{sec:mc} The cross-sectional morphologies of all five CSTFs fabricated on a silicon substrate were characterized using the FEI Nova\texttrademark~ NanoSEM 630 (FEI, Hillsboro, Oregon) field-emission scanning-electron microscope. 
Prior to taking the images, the sample was cleaved using the freeze-fracture technique \cite{Severs} so that cross-sectional images could be taken on a clean cleaved edge free from edge-growth effects. The sample was then sputtered with iridium using a Quorum Emitech K575X (Quorum Technologies, Ashford, Kent, United Kingdom) sputter coater before imaging.
\subsection{Optical Characterization} \label{opt-char} Optical characterization of two of the five CSTFs fabricated was performed within 24~h of fabrication, each sample being kept in a desiccator before characterization to prevent degradation due to moisture adsorption. A custom-built apparatus was used to measure the linear and circular transmittances of sample~1 \cite{McAtee,Vepachedu}, and a Vertex 70 spectrophotometer (Bruker, Billerica, Massachusetts) was used to measure the linear transmittances of sample~4. The transmittance-measurement apparatus for $\lambda_{\scriptscriptstyle 0}\in[600\ \text{nm},900\ \text{nm}]$ is described in detail elsewhere \cite{McAtee,Vepachedu}. Briefly, light from a halogen source (HL-2000, Ocean Optics, Dunedin, Florida) was passed through a fiber-optic cable and then through a linear polarizer (GT10, ThorLabs, Newton, New Jersey); it was transmitted through the sample to be characterized and then passed through a second linear polarizer (GT10, ThorLabs) and a fiber-optic cable to a CCD spectrometer (HRS-BD1-025, Mightex Systems, Pleasanton, California) \cite{Vepachedu}. The linear transmittances were thus measured for normal incidence. For measurements of the circular transmittances, a Fresnel rhomb (LMR1, ThorLabs) was introduced directly after the first linear polarizer and another Fresnel rhomb directly before the second linear polarizer \cite{McAtee}. All measurements were taken in a dark room to avoid noise from external sources. The experimental setup for the Vertex 70 spectrophotometer was different.
As only one linear polarizer was available, it was used on the incidence side so that only the sums $T_{\rm s}=T_{\rm ss}+T_{\rm ps}$ and $T_{\rm p}=T_{\rm pp}+T_{\rm sp}$ were measured for normal incidence. \section{Results and Discussion} \subsection{Morphology}\label{sec:morph} Figure~\ref{Fig2} presents cross-sectional scanning-electron micrographs of all five samples listed in Table~\ref{tab:chiralDesigns}. Table~\ref{tab:chiralDesigns} presents the value of $P$ of every sample fabricated, as estimated from its micrograph. The discrepancy between the targeted and measured values of $P$ is less than 1\% for four of the five samples, and just 3.7\% for sample~3. The discrepancy is due to the variability inherent in manual control of the deposition rate. We expect that even better agreement will be obtained after the fabrication process has been optimized for industrial production. \begin{center} \includegraphics[width=1\columnwidth, height=15cm, keepaspectratio]{Vepachedu_18_2v8Fig2small} \captionof{figure}{Cross-sectional scanning-electron micrographs of all five CSTF samples listed in Table~\ref{tab:chiralDesigns}.} \label{Fig2} \end{center} Each of the five micrographs clearly shows that the fabricated CSTF is an array of very similar helixes. Even sample~5---which has just one period because the long duration required to fabricate many periods of that CSTF is infeasible with the resources of an academic laboratory---is an array of helixes. The morphologies of all five samples being the same, except for the scale factor $\alpha$ in Eq.~(\ref{lp}), we conclude that it is possible to fabricate CSTFs with periods on the order of several micrometers to serve as circular-polarization filters in the MWIR regime. 
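The quoted discrepancies follow directly from the targeted and measured periods in Table~\ref{tab:chiralDesigns}; as a quick arithmetic check (an illustrative snippet, not part of the original analysis):

```python
# Targeted and measured structural periods P (nm) from Table 1.
targeted = {1: 366, 2: 732, 3: 1464, 4: 2928, 5: 5856}
measured = {1: 364, 2: 727, 3: 1410, 4: 2900, 5: 5880}

# Percentage discrepancy relative to the targeted period.
discrepancy = {k: 100 * abs(measured[k] - targeted[k]) / targeted[k]
               for k in targeted}
```

Relative to the targeted periods, samples 1, 2, 4, and 5 come in under 1\%, and sample 3 at about 3.7\%.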
\subsection{Reference Theoretical Results\label{sec:rtr}} In order to provide theoretical results to serve as a reference for our experimental findings, we assumed the eigenvalues $\eps_{\rm a}$, $\eps_{\rm b}$, and $\eps_{\rm c}$ of $\=\varepsilon_{\,r}$ to have single-resonance Lorentzian dependences \cite{Kittel} on $\lambda_{\scriptscriptstyle 0}$. Thus,
\begin{equation} \label{Lor} \varepsilon_{\ell}(\lambda_{\scriptscriptstyle 0}) = 1 + \displaystyle\frac{p_{\ell}} { 1 + \left( 1/N_{\ell} - i \lambda_{\ell}/\lambda_{\scriptscriptstyle 0}\right)^{2} }\, , \quad\ell\in\left\{a,b,c\right\}\,, \end{equation}
with the oscillator strengths denoted by $p_{\ell}$. Accordingly, the resonance wavelengths are $\lambda_{\ell} \left( 1 + N^{-2}_{\ell}\right)^{-1/2}$ and the linewidths are $ \lambda_{\ell}/N_{\ell}$, $\ell\in\left\{a,b,c\right\}$. Calculations of all transmittances, linear as well as circular, of a reference CSTF for normal incidence were made by setting \cite{ErtenCBP}: $p_{\rm a}=4.7$, $p_{\rm b}=5.2$, $p_{\rm c}=4.6$, $\lambda_{\rm a,c}=260$~nm, $\lambda_{\rm b}=270$~nm, $N_{\rm a,b,c}=130$, $\chi=50$~deg, $h=1$, $P=324$~nm, and $L=20P$. Figure~\ref{Fig3} shows all four circular transmittances of the reference CSTF for normal incidence. As this CSTF is structurally right-handed, the circular Bragg phenomenon is evident as a trough centered at $\lambda_{\scriptscriptstyle 0}\approx 818$~nm in the spectrum of $T_{\rm RR}$, but that feature is absent in the spectrums of the other three circular transmittances. A comparison of the spectrums of $T_{\rm RR}$ and $T_{\rm LL}$ shows clearly that the reference CSTF can function as a circular-polarization filter at and in the neighborhood of $\lambda_{\scriptscriptstyle 0}=818$~nm, which is the center wavelength of the circular Bragg regime.
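Equation~(\ref{Lor}) is straightforward to evaluate at the Bragg center; the following sketch (our own illustrative code, using the parameter values listed above) gives the three eigenvalues at $\lambda_{\scriptscriptstyle 0}=818$~nm:

```python
# Single-resonance Lorentzian model, Eq. (10):
# eps(lam0) = 1 + p / (1 + (1/N - i*lam_res/lam0)**2)
def eps_lorentz(lam0, p, lam_res, N):
    return 1 + p / (1 + (1 / N - 1j * lam_res / lam0) ** 2)

# Oscillator parameters of the reference CSTF: (p, lambda_ell in nm, N).
params = {"a": (4.7, 260.0, 130.0),
          "b": (5.2, 270.0, 130.0),
          "c": (4.6, 260.0, 130.0)}

# Eigenvalues at the center of the circular Bragg regime, 818 nm.
eps = {ell: eps_lorentz(818.0, *pr) for ell, pr in params.items()}
```

The real parts come out between about 6.1 and 6.9, i.e., local refractive indices near 2.5, consistent with a circular Bragg regime centered well above $P=324$~nm.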
\begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig3b} \captionof{figure}{Calculated spectrums of the circular transmittances of the reference CSTF described in Sec.~4.\ref{sec:rtr}. } \label{Fig3} \end{center} The center wavelength is about $2.52$ times the period $324$~nm used for the calculations. If the period were increased by a factor $\alpha>1$, then the center wavelength would also be increased by the same factor provided that the bulk refractive index of the evaporated material remained invariant, as discussed in Sec.~2.\ref{sec:rp}. ZnSe \textit{almost} fulfills that requirement, as shown in Fig.~\ref{Fig1}. Hence, we can conclude that samples 2--4 should function as circular-polarization filters in progressively longer-wavelength regimes than the one in which sample~1 does; furthermore, the same would be true if sample~5 were to be fabricated with a sufficiently large number of periods \cite{FLcbp,StJohn} instead of just one. As apparatus to measure the circular transmittances at longer wavelengths may not be available, we provide the spectrums of all four linear transmittances of the reference CSTF in Fig.~\ref{Fig4}. These spectrums could allow us to properly compare the experimental data on linear transmittances of the larger-$P$ samples with theory. In Fig.~\ref{Fig4} we notice a distinctive feature in the circular Bragg regime. The troughs of $T_{\rm ss}$ and $T_{\rm pp}$ intersect at $\lambda_{\scriptscriptstyle 0}\approx 818$~nm. If that pattern were to be observed in the experimental results for a given sample in a spectral regime in which the circular Bragg phenomenon is expected, one could infer the existence of that phenomenon even if circular transmittances could not be measured. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig4b} \captionof{figure}{Calculated spectrums of the linear transmittances of the reference CSTF described in Sec.~4.\ref{sec:rtr}. 
} \label{Fig4} \end{center}
\subsection{Experimental Results}\label{sec:er} Figure~\ref{Fig5} contains the experimentally determined spectrums of the four circular transmittances of sample~1 ($P=364$~nm, $L=10P$). These spectrums qualitatively match those of the reference CSTF in Fig.~\ref{Fig3}, sample~1 exhibiting a circular Bragg regime centered at $\lambda_{\scriptscriptstyle 0}\approx780$~nm. At this wavelength, the difference between $T_{\rm LL}=0.691$ and $T_{\rm RR}=0.239$ is large enough that sample~1 can be taken to function as a rejection filter for incident right-circularly polarized light but not for incident left-circularly polarized light. Further improvement may come by increasing the number of periods, i.e., by increasing the ratio $L/P$, as has been established theoretically \cite{FLcbp,StJohn}.
\begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig5} \captionof{figure}{Measured spectrums of the circular transmittances of sample~1. } \label{Fig5} \end{center}
Figure~\ref{Fig6} contains the experimentally determined spectrums of the four linear transmittances of sample~1. These spectrums qualitatively match those of the reference CSTF in Fig.~\ref{Fig4}, with the troughs of $T_{\rm ss}$ and $T_{\rm pp}$ intersecting at $\lambda_{\scriptscriptstyle 0}=761$~nm in Fig.~\ref{Fig6}. This intersection is very close to 780~nm, the center wavelength of the circular Bragg regime in Fig.~\ref{Fig5} for the same CSTF. Thus, by observing the same intersections for samples with larger $P$, we can verify the exhibition of the circular Bragg phenomenon by those samples.
\begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig6b} \captionof{figure}{Measured spectrums of the linear transmittances of sample~1. } \label{Fig6} \end{center}
Accordingly, we can predict that the center wavelength of the circular Bragg regime of sample~4 would be $780\alpha=6214$~nm, with $\alpha=2900/364=7.97$.
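The same scaling arithmetic extends to every sample in Table~\ref{tab:chiralDesigns} (an illustrative snippet assuming exact scale invariance; the real dispersion of ZnSe pushes each value somewhat lower):

```python
# Measured periods P (nm) from Table 1; sample 1's circular Bragg regime
# is centered at 780 nm, so predictions are scaled off that reference.
P_meas = {1: 364, 2: 727, 3: 1410, 4: 2900, 5: 5880}
lam_ref, P_ref = 780.0, P_meas[1]

# Predicted Bragg center wavelengths (nm) under exact scale invariance.
lam_pred = {k: lam_ref * P / P_ref for k, P in P_meas.items()}
```

This reproduces the $6214$~nm prediction for sample~4 and gives $12600$~nm for a multiple-period version of sample~5.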
The graphs of $T_{\rm s}$ and $T_{\rm p}$ in Fig.~\ref{Fig8} have peaks that intersect at $\lambda_{\scriptscriptstyle 0}=5512$~nm, which is lower than $6214$~nm. This lowering can be explained by reduction of the bulk relative permittivity of ZnSe by about 3.05\% in Fig.~\ref{Fig1} as $\lambda_{\scriptscriptstyle 0}$ increases from 780~nm to 6000~nm. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig8} \captionof{figure}{Measured spectrums of $T_{\rm s}=T_{\rm ss}+T_{\rm ps}$ and $T_{\rm p}=T_{\rm pp}+T_{\rm sp}$ of sample~4. } \label{Fig8} \end{center} Likewise, if a multiple-period version of sample~5 were to be made, then $\alpha=5880/364=16.15$ and the center wavelength of the circular Bragg regime of sample~5 would be somewhat lower than $780\alpha=12600$~nm. Thus, we have demonstrated that CSTFs can be fabricated to serve as circular-polarization filters in the MWIR regime. Before closing this section, we must remark that our apparatus were not appropriate to measure diffuse scattering in nonspecular directions. Nevertheless, our conclusion on the performance of CSTFs as circular-polarization filters for normal incidence in the MWIR regime holds. \section{Concluding Remarks} Relying on the scale invariance of the frequency-domain Maxwell postulates \cite{Sinclair}, we selected a material whose bulk refractive index is very weakly dependent on the wavelength. This material was used as the evaporant material to fabricate five chiral sculptured thin films---of structural periods ranging from $\sim$360~nm to $\sim$5900~nm---by oblique-angle thermal evaporation. The fabrication conditions of all five CSTFs were thus identical except for a change in scale. Morphological characterization confirmed that each of the five CSTFs was an assembly of parallel helixes, the helical pitches in those CTSFs ranging from $\sim$360~nm to $\sim$5900~nm. 
As expected from the literature \cite{FLcbp,ErtenCBP,Kulkarni,Sato}, the measured circular transmittances of the CSTF of the smallest period demonstrated the circular Bragg phenomenon in the NIR regime, confirming that CSTFs of sufficient thickness can function as circular-polarization filters in the same regime. The linear transmittances of the CSTF of the smallest period were also measured to identify a spectral feature that would indicate the occurrence of the circular Bragg phenomenon, this spectral feature also being confirmed theoretically. The same feature was experimentally demonstrated to exist in the spectrums of the linear transmittances of a CSTF whose structural period was about $8$ times larger than that of the CSTF with the smallest period. Even though the center wavelength of the circular Bragg regime concomitantly increased by a factor of about $7.1$, at least in part because the bulk refractive index of the evaporant material decreased a little with $\lambda_{\scriptscriptstyle 0}$, we can conclude that CSTFs can be fabricated to function as circular-polarization filters in the MWIR regime. \vspace{3mm} \noindent {\bf Acknowledgments.} VV thanks Pittsburgh Plate and Glass, Inc., for an undergraduate research fellowship. AL thanks the Charles Godfrey Binder Endowment at Penn State for ongoing support of his research activities. \section{Introduction} \label{sec:intro} The production of circularly polarized light from either an unpolarized or a partially polarized source involves the removal of either the left-circularly polarized or the right-circularly polarized component. One way of accomplishing this removal is by inserting a rotated linear polarizer between two orthogonally oriented quarter-wave plates. Relying on the anisotropy of the material that it is made of, a quarter-wave plate \cite[pp.~20-21]{Collett} converts linearly polarized light into circularly polarized light and vice versa. 
A Fresnel rhomb \cite[p.~49]{Collett} also converts linearly polarized light into circularly polarized light, but without reliance on anisotropy. Instead, its operation is based on the difference between the phase shifts of totally internally reflected light of two orthogonal linear polarization states. Another way is to use a slab of a structurally chiral medium (SCM), exemplified by cholesteric liquid crystals (CLCs) \cite{Chan,deG} and chiral sculptured thin films (CSTFs) \cite{YK1959}. As an SCM is helicoidally non\-homogeneous along a fixed axis, it displays the circular Bragg phenomenon in a specific spectral regime called the circular Bragg regime. This phenomenon is the almost total reflection of the incident light of one circular polarization state state but very little reflection of the incident light of the other circular polarization state, provided that the thickness $L$ of the SCM is sufficiently high \cite{FLcbp}. Depending on the structural period $P$ as well as the relative permittivity dyadic ${\=\eps}_{\rm rel}$ of the SCM, the spatiotemporal manifestation of the circular Bragg phenomenon is the formation of a light pipe that bleeds energy backward inside the SCM under appropriate conditions \cite{GML,GLepjap}. The center wavelength $\lambda_{\scriptscriptstyle 0}^{\rm Br}$ of the circular Bragg regime depends on $P$. Typically, $\lambda_{\scriptscriptstyle 0}^{\rm Br}/P > 1$ for normal incidence because all eigenvalues of the relative permittivity dyadics of CLCs and CSTFs exceed unity, and the ratio $L/P$ has to be in the neighborhood of $10$ or higher for adequately strong manifestation of the circular Bragg phenomenon \cite{FLcbp,StJohn}. 
Both CLCs \cite{Adams,Scheffer,Isihara,Jacobs} and CSTFs \cite{WHL00,PSH08,PMSPL,ZYLZLX} have been demonstrated to serve as circular-polarization filters in the visible and the near-infrared (NIR) regimes, because CLCs with $P\lesssim 700$~nm \cite{Kim2010,Sato,Kulkarni} and CSTFs with $P\leq 800$~nm \cite{PSH08,PMSPL,ELmotl} are commonplace. To our knowledge, CLCs with periods as large as about $6000$~nm have been reported \cite{Li2017}. But the fabrication of stable CLCs with the ratio $L/P$ in the neighborhood of $10$ becomes challenging as $P$ increases \cite{Denisov,Kiyoto}. One reason is that surface alignment forces invoked by the use of a textured substrate become less efficacious as the ratio $L/P$ increases \cite{Denisov}. Another reason is the enhancement of internal stresses, which are inimical to the needed structural chirality of the ordered layering of aciculate molecules \cite{Kiyoto}. CLCs are also sensitive to changes in temperature and pressure \cite{Chan,deG}. Therefore, although CLCs are suitable for use as circular-polarization filters also in the short-wavelength infrared (SWIR) regime (wavelength $\lambda_{\scriptscriptstyle 0}\in(1400,3000)$~nm) \cite{Zhang}, similar use in the mid-wavelength infrared (MWIR) regime ($\lambda_{\scriptscriptstyle 0}\in(3000,8000)$~nm) appears not to be easily possible. CSTFs are solid-state analogs of chiral liquid crystals. Ideally, a CSTF is an assembly of parallel helixes of pitch $P$ and rise angle $\chi$. Structural chirality is thus rigidly built into the CSTF morphology, so that surface alignment forces do not have to be invoked and internal stresses are not deleterious. The CSTF morphology is immune to small changes of pressure and temperature, but protection against moisture intake may be necessary \cite{HWM}. Therefore, with the aim of developing MWIR circular-polarization filters, we decided to fabricate CSTFs with $P$ as high as 5880~nm. 
Zinc selenide (ZnSe) was chosen because its bulk refractive index varies quite slowly with $\lambda_{\scriptscriptstyle 0}$ in the MWIR regime \cite{Marple} and because it is easy to deposit CSTFs \cite{ELmotl,McAtee} of this material by oblique-angle thermal evaporation \cite{Mattox,STFbook}. The ratio $L/P$ was kept as large as practicable with our thermal evaporation system, which was designed for CSTFs that function in the visible regime and is definitely suboptimal for industrial use to fabricate the very thick CSTFs needed for the longer wavelengths in the MWIR regime. We also measured the normal-incidence transmittance characteristics of the fabricated CSTFs. The plan of this paper is as follows. A brief description of the theory underlying the transmission of light is presented in Sec.~2.\ref{reftrans}, followed by a discussion in Sec.~2.\ref{sec:rp} of the premise of this research. Fabrication of five different CSTFs is described in Sec.~3.\ref{sec:fab} and experimental methods for their morphological and optical characterization are presented in Secs.~3.\ref{sec:mc} and 3.\ref{opt-char}, respectively. Scanning-electron micrographs of all CSTFs fabricated for this research are presented in Sec.~4.\ref{sec:morph}. Theoretical spectrums of the transmittances of a reference CSTF are provided in Sec.~4.\ref{sec:rtr} to understand the very limited experimental spectrums of the fabricated CSTFs in Sec.~4.\ref{sec:er}. The paper ends with some remarks in Sec.~5. \section{Theoretical Preliminaries}\label{sec:theory} \subsection{Optical Transmission}\label{reftrans} Suppose the region $0 \leq z \leq L$ is occupied by a CSTF, while the half spaces $z \leq 0$ and $z \geq L$ are vacuous. The linear dielectric properties of the CSTF are delineated by the unidirectionally non\-homogeneous relative permittivity dyadic \cite{STFbook} \begin{eqnarray} \nonumber &&\=\varepsilon_{\,r} (z,\lambda_{\scriptscriptstyle 0}) = \=S_z(h,z,P)\.\=S_y(\chi)\. 
\Big[ \eps_{\rm a}(\lambda_{\scriptscriptstyle 0}) \,\#u_{\rm z}\uz +\eps_{\rm b}(\lambda_{\scriptscriptstyle 0})\,\#u_{\rm x}\ux\\ &&\quad +\,\eps_{\rm c}(\lambda_{\scriptscriptstyle 0})\,\#u_{\rm y}\uy\Big]\.\=S_y^{-1}(\chi)\. \=S_z^{-1}(h,z,P)\, , \,0 \leq z \leq L\, . \label{epsbasic} \end{eqnarray} Here and hereafter, an $\exp( - i \omega t)$ dependence on time $t$ is implicit with $\omega=2\pi c_{\scriptscriptstyle 0}/\lambda_{\scriptscriptstyle 0}$ as the angular frequency and $i=\sqrt{-1}$; $c_{\scriptscriptstyle 0}$ is the speed of light in free space; and $\#u_{\rm x}$, $\#u_{\rm y}$, and $\#u_{\rm z}$ are the unit vectors in a Cartesian coordinate system. The helicoidal nonhomogeneity of the CSTF is captured by the rotation dyadic \begin{eqnarray} \nonumber && \=S_z(h,z,P)= \#u_{\rm z}\uz + \left( \#u_{\rm x}\ux+\#u_{\rm y}\uy\right)\,\cos\left(\frac{2\pi z}{P} \right) \\ [5pt] && \qquad +\,h\, \left( \#u_{\rm y}\#u_{\rm x}-\#u_{\rm x}\#u_{\rm y}\right)\,\sin\left(\frac{2\pi z}{P}\right)\,. \end{eqnarray} The direction of the non\-homogeneity is parallel to the $z$ axis. The CSTF is structurally right-handed when $h = 1$, but is structurally left-handed when $h = -1$. The dyadic \begin{equation} \=S_y(\chi) = \#u_{\rm y}\uy + (\#u_{\rm x}\ux + \#u_{\rm z}\uz) \, \cos\chi + (\#u_{\rm z}\#u_{\rm x}-\#u_{\rm x}\#u_{\rm z})\, \sin\chi \end{equation} represents the {\it locally\/} aciculate morphology of the CSTF, with $\chi > 0$~deg being the rise angle. A 4$\times$4-matrix--based procedure to calculate the linear and circular remittances of the CSTF of thickness $L$ for normally incident monochromatic light is explained elsewhere \cite[Chap.~9]{STFbook} in detail. Using this procedure, we calculated the four linear transmittances ($T_{\rm ss,sp,ps,pp}$) and the four circular transmittances ($T_{\rm RR,RL,LR,LL}$) as functions of $\lambda_{\scriptscriptstyle 0}$. 
Here, $T_{\rm sp}$ is the fraction of the incident power transmitted via an $s$-polarized plane wave when the incident plane wave is $p$ polarized, $T_{\rm LR}$ is the fraction of the incident power transmitted via a \textit{L}eft-circularly polarized plane wave when the incident plane wave is \textit{R}ight-circularly polarized, and so on. \subsection{Research Premise}\label{sec:rp} The premise of our research effort can now be explained using the sourceless versions of the frequency-domain Maxwell postulates: \begin{equation} \left.\begin{array}{l} \nabla\.\#E(\#r,\lambda_{\scriptscriptstyle 0})=0 \\[5pt] \nabla\.\#H(\#r,\lambda_{\scriptscriptstyle 0})=0 \\[5pt] \displaystyle{\nabla\times\#E(\#r,\lambda_{\scriptscriptstyle 0})=i\frac{2\pi c_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}}\mu_{\scriptscriptstyle 0} \#H(\#r,\lambda_{\scriptscriptstyle 0})} \\[5pt] \displaystyle{\nabla\times\#H(\#r,\lambda_{\scriptscriptstyle 0})=-i\frac{2\pi c_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}}\eps_{\scriptscriptstyle 0} \=\varepsilon_{\,r}(z,\lambda_{\scriptscriptstyle 0})\. \#E(\#r,\lambda_{\scriptscriptstyle 0})} \end{array} \right\}\,. \label{eq7} \end{equation} Here, $\eps_{\scriptscriptstyle 0}$ is the permittivity and $\mu_{\scriptscriptstyle 0}$ is the permeability of free space. Let us scale all space isotropically as \begin{equation} \#r^\prime = \alpha \#r\,, \end{equation} where the scaling parameter $\alpha>1$ is real. 
If we also scale the free-space wavelength as \begin{equation} \label{lp} \lambda_{\scriptscriptstyle 0}^\prime=\alpha\lambda_{\scriptscriptstyle 0}\,, \end{equation} Eqs.~(\ref{eq7}) can be recast as \begin{equation} \left.\begin{array}{l} \nabla^\prime\.\#E(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=0 \\[5pt] \nabla^\prime\.\#H(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=0 \\[5pt] \displaystyle{\nabla^\prime\times\#E(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=i\frac{2\pi c_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}^\prime}\mu_{\scriptscriptstyle 0} \#H(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)} \\[5pt] \displaystyle{\nabla^\prime\times\#H(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)=-i\frac{2\pi c_{\scriptscriptstyle 0}}{\lambda_{\scriptscriptstyle 0}^\prime}\eps_{\scriptscriptstyle 0} \=\varepsilon_{\,r}(z^\prime,\lambda_{\scriptscriptstyle 0}^\prime)\. \#E(\#r^\prime,\lambda_{\scriptscriptstyle 0}^\prime)} \end{array} \right\}\,. \label{eq9} \end{equation} Provided that \begin{equation} \=\varepsilon_{\,r}(z^\prime,\lambda_{\scriptscriptstyle 0}^\prime)= \=\varepsilon_{\,r}(z,\lambda_{\scriptscriptstyle 0})\,, \label{eq11} \end{equation} the solutions of Eqs.~(\ref{eq7}) and (\ref{eq9}) shall be identical \cite{Sinclair}. 
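This invariance can be checked directly at the level of the permittivity dyadic: for a dispersion-free evaporant, scaling $z\rightarrow\alpha z$ and $P\rightarrow\alpha P$ leaves $\=\varepsilon_{\,r}$ unchanged. A minimal NumPy sketch (function names and the illustrative eigenvalues are ours, not from the paper):

```python
import numpy as np

def S_z(h, z, P):
    """Rotation dyadic S_z(h, z, P): rotation about the z axis by 2*pi*z/P."""
    c, s = np.cos(2 * np.pi * z / P), np.sin(2 * np.pi * z / P)
    return np.array([[c, -h * s, 0.0],
                     [h * s, c, 0.0],
                     [0.0, 0.0, 1.0]])

def S_y(chi):
    """Tilt dyadic S_y(chi) for the locally aciculate morphology."""
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[c, 0.0, -s],
                     [0.0, 1.0, 0.0],
                     [s, 0.0, c]])

def eps_r(z, P, eps_abc, chi, h=1):
    """Relative permittivity dyadic of the CSTF; eps_abc = (eps_a, eps_b, eps_c)."""
    eps_a, eps_b, eps_c = eps_abc
    # In the reference frame: eps_b on u_x u_x, eps_c on u_y u_y, eps_a on u_z u_z.
    ref = np.diag([eps_b, eps_c, eps_a])
    Sz, Sy = S_z(h, z, P), S_y(chi)
    return Sz @ Sy @ ref @ np.linalg.inv(Sy) @ np.linalg.inv(Sz)

# Scaling z -> alpha*z and P -> alpha*P leaves the dyadic unchanged, which is
# the condition on eps_r needed for the scaled Maxwell postulates to agree.
alpha, eig, chi = 7.97, (5.0, 6.0, 5.5), np.deg2rad(50.0)
A = eps_r(100.0, 364.0, eig, chi)
B = eps_r(alpha * 100.0, alpha * 364.0, eig, chi)
```

Here `np.allclose(A, B)` holds; with a dispersive evaporant the eigenvalues at $\lambda_{\scriptscriptstyle 0}$ and $\alpha\lambda_{\scriptscriptstyle 0}$ would differ and the identity would be only approximate.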
In other words, if we can fabricate two CSTFs such that \begin{itemize} \item[(i)] their helixes have rise angles $\chi_1$ and $\chi_2=\chi_1$, \item[(ii)] their helixes have pitches $P_1$ and $P_2=\alpha{P_1}$, and \item[(iii)] their thicknesses are $L_1$ and $L_2=\alpha{L_1}$, \end{itemize} then the transmittances of the first CSTF at $\lambda_{\scriptscriptstyle 0}=\lambda_{0_1}$ shall be the same as the transmittances of the second CSTF at $\lambda_{\scriptscriptstyle 0}=\lambda_{0_2}=\alpha\lambda_{0_1}>\lambda_{0_1}$ if the relative permittivity dyadic $\=\varepsilon_{\,r}^{(1)}$ of the first CSTF and the relative permittivity dyadic $\=\varepsilon_{\,r}^{(2)}$ of the second CSTF satisfy the condition \begin{equation} \=\varepsilon_{\,r}^{(2)}(\alpha{z},\lambda_{0_2})= \=\varepsilon_{\,r}^{(1)}(z,\lambda_{0_1})\,. \label{eq12} \end{equation} Therefore, if both CSTFs are made by evaporating a dielectric material whose bulk relative permittivity is uniform in a spectral regime that encompasses both $\lambda_{0_1}$ and $\alpha\lambda_{0_1}$, and if the first CSTF can serve as a circular-polarization filter in a spectral regime encompassing $\lambda_{0_1}$, then the second CSTF must serve as a circular-polarization filter in a spectral regime encompassing $\alpha\lambda_{0_1}$. ZnSe was chosen for evaporation because its bulk refractive index $n_{\rm ZnSe}$ varies weakly with $\lambda_{\scriptscriptstyle 0}$ in the MWIR regime \cite{Marple,CERDEC}. Since $n_{\rm ZnSe}$ generally decreases as $\lambda_{\scriptscriptstyle 0}$ increases, as shown in Fig.~\ref{Fig1}, the second CSTF will perform as a circular-polarization filter in a spectral regime encompassing wavelengths somewhat smaller than $\alpha\lambda_{0_1}$. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig1} \captionof{figure}{Real part of $n_{\rm ZnSe}$ as a function of $\lambda_{\scriptscriptstyle 0}$. 
The imaginary part of $n_{\rm ZnSe}$ does not exceed $10^{-6}$ in the same spectral regime \cite{CERDEC}. } \label{Fig1} \end{center} \section{Experimental Methods} \subsection{Fabrication of CSTFs}\label{sec:fab} Five different structurally right-handed CSTFs were fabricated using oblique-angle thermal evaporation, the targeted values of their structural period $P$ and number of periods $L/P$ being listed in Table~\ref{tab:chiralDesigns}. \begin{table}[ht] \caption{{\bf CSTF Samples Fabricated} } \label{tab:chiralDesigns} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \rule[-1ex]{0pt}{3.5ex} Sample & Targeted & Measured & & & Number of \\ \rule[-1ex]{0pt}{3.5ex} No. & $P$ & $P$ & $L/P$ & $\tau$ & Depositions \\ \rule[-1ex]{0pt}{3.5ex} & (nm) & (nm) & & (s) & \\ \hline \rule[-1ex]{0pt}{3.5ex} 1& 366 & 364 & 10 & 5.955 & 3\\ \hline \rule[-1ex]{0pt}{3.5ex} 2& 732 & 727 & 10 & 12.627 & 5 \\ \hline \rule[-1ex]{0pt}{3.5ex} 3& 1464 & 1410 & 10 & 25.971 & 10 \\ \hline \rule[-1ex]{0pt}{3.5ex} 4& 2928 & 2900 & 10 & 52.658 & 20 \\ \hline \rule[-1ex]{0pt}{3.5ex} 5& 5856 & 5880 & 1 & 106.033 & 4 \\ \hline \end{tabular} \end{center} \end{table} Thermal evaporation was carried out in a low-pressure chamber (Torr International, New Windsor, New York). Inside this chamber, the material to be evaporated is kept in a tungsten boat (S22-.005W, R. D. Mathis, Long Beach, California) which can be heated by passing a current through it. About 15~cm above the boat is a substrate holder whose rotations about two mutually orthogonal axes are controlled by two stepper motors. One axis of rotation passes normally through the substrate holder to serve as the $z$ axis, and the second serves as the $y$ axis in the substrate ($xy$) plane. There is also a quartz crystal monitor (QCM) in the chamber which has been calibrated to measure the thickness of the film as it is being deposited. 
Finally, a shutter between the boat and the substrate holder allows the user to abruptly start or halt the deposition as needed. 99.995 \% pure ZnSe (Alfa Aesar, Ward Hill, Massachusetts) was the material of choice for the reasons discussed previously. The manufacturer supplied ZnSe lumps that were crushed into a fine powder. When crushing the lumps, a respirator mask, gloves, and a lab coat were worn to avoid the toxic effects of ZnSe exposure \cite{MSDS}. Each CSTF was deposited on two substrates simultaneously, one being either glass or silicon and the other being silicon. The sample grown on the first substrate was optically characterized. If the value of $P$ was chosen for the circular Bragg regime to lie in the visible regime or the NIR regime or the SWIR regime, a pre-cleaned glass slide (48300-0025, VWR, Radnor, Pennsylvania) was used as the first substrate. If the value of $P$ was chosen for the circular Bragg regime to lie in the MWIR regime, silicon was used as the first substrate. The morphology of the sample grown on the second substrate was characterized on a scanning-electron microscope. Each substrate was cleaned in an ethanol bath using an ultrasonicator for 9~min on each side; thereafter, the substrate was immediately dried with pressurized nitrogen gas. Both substrates were secured to the substrate holder using Kapton tape (S-14532, Uline, Pleasant Prairie, Wisconsin), being positioned as close to the center of the holder as possible to ensure that they would be directly above the tungsten boat. The shutter was rotated to prevent any vapor from reaching the two substrates. To begin the deposition process, the low-pressure chamber was pumped down to approximately 1~$\mu$Torr. Next, the current was gradually increased to $\thicksim$100 A and the shutter was rotated to allow a collimated portion of the ZnSe vapor to reach the substrates. The deposition rate was manually maintained at $0.4 \pm 0.02$~nm s$^{-1}$, using the QCM. 
Upon completion of the deposition, the shutter was rotated to prevent the vapor from reaching the substrates and the current was quickly brought down to 0 A to prevent any further deposition. After the deposition, the chamber was allowed to cool for at least 30~min before it was opened. Given the desired thickness of the CSTF, the deposition rate, and the limited amount of ZnSe that could be put in the boat, multiple depositions were needed to fabricate the CSTF. The number of depositions for each of the five samples is shown in Table~\ref{tab:chiralDesigns}. During each deposition, a film of maximum thickness $1500$~nm could be deposited before the ZnSe powder loaded in the boat was depleted. Between these depositions, the boat was refilled with ZnSe powder and, if needed, the quartz crystal in the QCM was replaced. During every deposition, the angle of the collimated vapor flux with respect to the substrate plane was set at $\chi_v = 20$~deg. Furthermore, the substrate was rotated about the $z$ axis in accordance with the M:1 asymmetric serial bi-deposition technique \cite{McAtee,Pursel} as follows. A subdeposit was made for time $\tau$, followed by a rapid rotation about the $z$ axis by $180$~deg in $0.406$~s, followed by another subdeposit for time $\tau/M$, followed by a rapid rotation about the $z$ axis by $183$~deg in $0.413$~s. This 4-step process was iterated 120 times in order to deposit a single period. Based on a detailed comparative study to improve the exhibition of the circular Bragg phenomenon \cite{McAtee}, we fixed $M=7$ for all CSTFs. The value of $\tau$ chosen for each CSTF fabricated is listed in Table~\ref{tab:chiralDesigns}. \subsection{Morphological Characterization}\label{sec:mc} The cross-sectional morphologies of all five CSTFs fabricated on a silicon substrate were characterized using the FEI Nova\texttrademark~ NanoSEM 630 (FEI, Hillsboro, Oregon) field-emission scanning-electron microscope. 
Prior to taking the images, the sample was cleaved using the freeze-fracture technique \cite{Severs} so that cross-sectional images could be taken on a clean cleaved edge free from edge-growth effects. The sample was then sputtered with iridium using a Quorum Emitech K575X (Quorum Technologies, Ashford, Kent, United Kingdom) sputter coater before imaging. \subsection{Optical Characterization} \label{opt-char} Optical characterization of two of the five CSTFs fabricated was performed within 24~h of fabrication, the sample being contained in a desiccator before characterization to prevent degradation due to moisture adsorption. A custom-built apparatus was used to measure the linear and circular transmittances of sample~1 \cite{McAtee,Vepachedu}, and a Vertex 70 spectrophotometer (Bruker, Billerica, Massachusetts) was used to measure the linear transmittances of sample~4. The transmittance-measurement apparatus for $\lambda_{\scriptscriptstyle 0}\in[600\ \text{nm},900\ \text{nm}]$ is described in detail elsewhere \cite{McAtee,Vepachedu}. Briefly, light from a halogen source (HL-2000, Ocean Optics, Dunedin, Florida) was passed through a fiber-optic cable and then through a linear polarizer (GT10, ThorLabs, Newton, New Jersey); it was transmitted through the sample to be characterized and then passed through a second linear polarizer (GT10, ThorLabs) and a fiber-optic cable to a CCD spectrometer (HRS-BD1-025, Mightex Systems, Pleasanton, California) \cite{Vepachedu}. The linear transmittances were thus measured for normal incidence. For measurements of the circular transmittances, a Fresnel rhomb (LMR1, ThorLabs) was introduced directly after the first linear polarizer and another Fresnel rhomb directly before the second linear polarizer \cite{McAtee}. All measurements were taken in a dark room to avoid noise from external sources. The experimental setup for the Vertex 70 spectrophotometer was different. 
As only one linear polarizer was available, it was used on the incidence side so that only the sums $T_{\rm s}=T_{\rm ss}+T_{\rm ps}$ and $T_{\rm p}=T_{\rm pp}+T_{\rm sp}$ were measured for normal incidence. \section{Results and Discussion} \subsection{Morphology}\label{sec:morph} Figure~\ref{Fig2} presents cross-sectional scanning-electron micrographs of all five samples listed in Table~\ref{tab:chiralDesigns}. Table~\ref{tab:chiralDesigns} presents the value of $P$ of every sample fabricated, as estimated from its micrograph. The discrepancy between the targeted and measured values of $P$ is less than 1\% for four of the five samples, and just 3.7\% for sample~3. The discrepancy is due to the variability inherent in manual control of the deposition rate. We expect that even better agreement will be obtained after the fabrication process has been optimized for industrial production. \begin{center} \includegraphics[width=1\columnwidth, height=15cm, keepaspectratio]{Vepachedu_18_2v8Fig2small} \captionof{figure}{Cross-sectional scanning-electron micrographs of all five CSTF samples listed in Table~\ref{tab:chiralDesigns}.} \label{Fig2} \end{center} Each of the five micrographs clearly shows that the fabricated CSTF is an array of very similar helixes. Even sample~5---which has just one period because fabricating many periods of that CSTF would require a duration infeasible with the resources of an academic laboratory---is an array of helixes. The morphologies of all five samples being the same, except for the scale factor $\alpha$ in Eq.~(\ref{lp}), we conclude that it is possible to fabricate CSTFs with periods on the order of several micrometers to serve as circular-polarization filters in the MWIR regime. 
\subsection{Reference Theoretical Results\label{sec:rtr}} In order to provide theoretical results to serve as a reference for our experimental findings, we assumed the eigenvalues $\eps_{\rm a}$, $\eps_{\rm b}$, and $\eps_{\rm c}$ of $\=\varepsilon_{\,r}$ to have single-resonance Lorentzian dependences \cite{Kittel} on $\lambda_{\scriptscriptstyle 0}$. Thus, \begin{equation} \label{Lor} \varepsilon_{\ell}(\lambda_{\scriptscriptstyle 0}) = 1 + \displaystyle\frac{p_{\ell}} { 1 + \left( 1/N_{\ell} - i \lambda_{\ell}/\lambda_{\scriptscriptstyle 0}\right)^{2} }\, , \quad\ell\in\left\{a,b,c\right\}\,, \end{equation} with the oscillator strengths denoted by $p_{\ell}$. Accordingly, the resonance wavelengths are $\lambda_{\ell} \left( 1 + N^{-2}_{\ell}\right)^{-1/2}$ and the linewidths are $ \lambda_{\ell}/N_{\ell}$, $\ell\in\left\{a,b,c\right\}$. Calculations of all transmittances, linear as well as circular, of a reference CSTF for normal incidence were made by setting \cite{ErtenCBP}: $p_{\rm a}=4.7$, $p_{\rm b}=5.2$, $p_{\rm c}=4.6$, $\lambda_{\rm a,c}=260$~nm, $\lambda_{\rm b}=270$~nm, $N_{\rm a,b,c}=130$, $\chi=50$~deg, $h=1$, $P=324$~nm, and $L=20P$. Figure~\ref{Fig3} shows all four circular transmittances of the reference CSTF for normal incidence. As this CSTF is structurally right-handed, the circular Bragg phenomenon is evident as a trough centered at $\lambda_{\scriptscriptstyle 0}\approx 818$~nm in the spectrum of $T_{\rm RR}$, but that feature is absent in the spectrums of the other three circular transmittances. A comparison of the spectrums of $T_{\rm RR}$ and $T_{\rm LL}$ shows clearly that the reference CSTF can function as a circular-polarization filter at and in the neighborhood of $\lambda_{\scriptscriptstyle 0}=818$~nm, which is the center wavelength of the circular Bragg regime. 
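The Lorentzian model and the quoted parameters can be evaluated directly. The sketch below (our own code, not the 4$\times$4-matrix procedure used for the actual spectrums) computes the resonance wavelengths and linewidths, and a crude Bragg-center estimate $\lambda_{\scriptscriptstyle 0}^{\rm Br}\approx P\,(n_{\rm b}+n_{\rm c})/2$ that ignores the tilt $\chi$ yet lands close to the computed 818~nm:

```python
import numpy as np

def eps_lorentz(lam0, p, lam, N):
    """Single-resonance Lorentzian eigenvalue as a function of lam0 (nm)."""
    return 1 + p / (1 + (1 / N - 1j * lam / lam0) ** 2)

# Reference-CSTF parameters: (oscillator strength p, lambda_ell in nm, N_ell).
osc = {"a": (4.7, 260.0, 130.0), "b": (5.2, 270.0, 130.0), "c": (4.6, 260.0, 130.0)}

for ell, (p, lam, N) in osc.items():
    res_wl = lam / np.sqrt(1 + N ** -2)   # resonance wavelength, ~lam for large N
    width = lam / N                        # linewidth (2.0 nm for a and c)

# Crude Bragg-center estimate from the two transverse-ish indices at 818 nm.
n_b = np.sqrt(eps_lorentz(818.0, *osc["b"])).real
n_c = np.sqrt(eps_lorentz(818.0, *osc["c"])).real
bragg_estimate = 324.0 * 0.5 * (n_b + n_c)   # within a few nm of 818 nm
```

The estimate is deliberately rough; the full calculation with the rise angle $\chi=50$~deg and thickness $L=20P$ is what yields the trough in Fig.~\ref{Fig3}.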
\begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig3b} \captionof{figure}{Calculated spectrums of the circular transmittances of the reference CSTF described in Sec.~4.\ref{sec:rtr}. } \label{Fig3} \end{center} The center wavelength is about $2.52$ times the period $324$~nm used for the calculations. If the period were increased by a factor $\alpha>1$, then the center wavelength would also be increased by the same factor provided that the bulk refractive index of the evaporated material remained invariant, as discussed in Sec.~2.\ref{sec:rp}. ZnSe \textit{almost} fulfills that requirement, as shown in Fig.~\ref{Fig1}. Hence, we can conclude that samples 2--4 should function as circular-polarization filters in progressively longer-wavelength regimes than the one in which sample~1 does; furthermore, the same would be true if sample~5 were to be fabricated with a sufficiently large number of periods \cite{FLcbp,StJohn} instead of just one. As apparatus to measure the circular transmittances at longer wavelengths may not be available, we provide the spectrums of all four linear transmittances of the reference CSTF in Fig.~\ref{Fig4}. These spectrums could allow us to properly compare the experimental data on linear transmittances of the larger-$P$ samples with theory. In Fig.~\ref{Fig4} we notice a distinctive feature in the circular Bragg regime. The troughs of $T_{\rm ss}$ and $T_{\rm pp}$ intersect at $\lambda_{\scriptscriptstyle 0}\approx 818$~nm. If that pattern were to be observed in the experimental results for a given sample in a spectral regime in which the circular Bragg phenomenon is expected, one could infer the existence of that phenomenon even if circular transmittances could not be measured. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig4b} \captionof{figure}{Calculated spectrums of the linear transmittances of the reference CSTF described in Sec.~4.\ref{sec:rtr}. 
} \label{Fig4} \end{center} \subsection{Experimental Results}\label{sec:er} Figure~\ref{Fig5} contains the experimentally determined spectrums of the four circular transmittances of sample~1 ($P=364$~nm, $L=10P$). These spectrums qualitatively match those of the reference CSTF in Fig.~\ref{Fig3}, sample~1 exhibiting a circular Bragg regime centered at $\lambda_{\scriptscriptstyle 0}\approx780$~nm. At this wavelength, the difference between $T_{\rm LL}=0.691$ and $T_{\rm RR}=0.239$ is large enough so that sample~1 can be taken to function as a rejection filter for incident right-circularly polarized light but not for incident left-circularly polarized light. Further improvement may come by increasing the number of periods, i.e., by increasing the ratio $L/P$, as has been established theoretically \cite{FLcbp,StJohn}. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig5} \captionof{figure}{Measured spectrums of the circular transmittances of sample~1. } \label{Fig5} \end{center} Figure~\ref{Fig6} contains the experimentally determined spectrums of the four linear transmittances of sample~1. These spectrums qualitatively match those of the reference CSTF in Fig.~\ref{Fig4}, with the troughs of $T_{\rm ss}$ and $T_{\rm pp}$ intersecting at $\lambda_{\scriptscriptstyle 0}=761$~nm in Fig.~\ref{Fig6}. This intersection is very close to 780~nm, the center wavelength of the circular Bragg regime in Fig.~\ref{Fig5} for the same CSTF. Thus, by observing the same intersections for samples with larger $P$, we can verify the exhibition of the circular Bragg phenomenon by those samples. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig6b} \captionof{figure}{Measured spectrums of the linear transmittances of sample~1. } \label{Fig6} \end{center} Accordingly, we can predict that the center wavelength of the circular Bragg regime of sample~4 would be $780\alpha=6214$~nm, with $\alpha=2900/364=7.97$. 
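The scaling predictions follow from simple arithmetic on the measured periods in Table~\ref{tab:chiralDesigns} and the 780~nm Bragg center of sample~1; a short sketch (our own, for illustration only):

```python
# Scale-invariance predictions from the measured periods in Table 1, using
# sample 1's measured circular Bragg center (780 nm) as the reference.
P_1, bragg_1 = 364.0, 780.0                    # nm

for P_big in (2900.0, 5880.0):                 # samples 4 and 5
    alpha = P_big / P_1
    predicted = alpha * bragg_1                # an upper estimate: ZnSe dispersion
                                               # shifts the actual center lower
    print(f"alpha = {alpha:.2f}, predicted center = {predicted:.0f} nm")

# Fractional deficit of sample 4's measured T_s/T_p intersection (5512 nm)
# relative to the dispersion-free prediction.
shift = 1 - 5512.0 / (780.0 * 2900.0 / 364.0)
```

This prints $\alpha=7.97$ with a predicted center of 6214~nm for sample~4 and $\alpha=16.15$ with 12600~nm for sample~5, matching the values quoted in the text.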
The graphs of $T_{\rm s}$ and $T_{\rm p}$ in Fig.~\ref{Fig8} have peaks that intersect at $\lambda_{\scriptscriptstyle 0}=5512$~nm, which is lower than $6214$~nm. This lowering can be explained by reduction of the bulk relative permittivity of ZnSe by about 3.05\% in Fig.~\ref{Fig1} as $\lambda_{\scriptscriptstyle 0}$ increases from 780~nm to 6000~nm. \begin{center} \includegraphics[width=1\columnwidth]{Vepachedu_18_2v8Fig8} \captionof{figure}{Measured spectrums of $T_{\rm s}=T_{\rm ss}+T_{\rm ps}$ and $T_{\rm p}=T_{\rm pp}+T_{\rm sp}$ of sample~4. } \label{Fig8} \end{center} Likewise, if a multiple-period version of sample~5 were to be made, then $\alpha=5880/364=16.15$ and the center wavelength of the circular Bragg regime of sample~5 would be somewhat lower than $780\alpha=12600$~nm. Thus, we have demonstrated that CSTFs can be fabricated to serve as circular-polarization filters in the MWIR regime. Before closing this section, we must remark that our apparatus was not appropriate for measuring diffuse scattering in nonspecular directions. Nevertheless, our conclusion on the performance of CSTFs as circular-polarization filters for normal incidence in the MWIR regime holds. \section{Concluding Remarks} Relying on the scale invariance of the frequency-domain Maxwell postulates \cite{Sinclair}, we selected a material whose bulk refractive index is very weakly dependent on the wavelength. This material was used as the evaporant material to fabricate five chiral sculptured thin films---of structural periods ranging from $\sim$360~nm to $\sim$5900~nm---by oblique-angle thermal evaporation. The fabrication conditions of all five CSTFs were thus identical except for a change in scale. Morphological characterization confirmed that each of the five CSTFs was an assembly of parallel helixes, the helical pitches in those CSTFs ranging from $\sim$360~nm to $\sim$5900~nm. 
As expected from the literature \cite{FLcbp,ErtenCBP,Kulkarni,Sato}, the measured circular transmittances of the CSTF of the smallest period demonstrated the circular Bragg phenomenon in the NIR regime, confirming that CSTFs of sufficient thickness can function as circular-polarization filters in the same regime. The linear transmittances of the CSTF of the smallest period were also measured to identify a spectral feature that would indicate the occurrence of the circular Bragg phenomenon, this spectral feature also being confirmed theoretically. The same feature was experimentally demonstrated to exist in the spectrums of the linear transmittances of a CSTF whose structural period was about $8$ times larger than that of the CSTF with the smallest period. Even though the center wavelength of the circular Bragg regime concomitantly increased by a factor of about $7.1$, at least in part because the bulk refractive index of the evaporant material decreased a little with $\lambda_{\scriptscriptstyle 0}$, we can conclude that CSTFs can be fabricated to function as circular-polarization filters in the MWIR regime. \vspace{3mm} \noindent {\bf Acknowledgments.} VV thanks Pittsburgh Plate and Glass, Inc., for an undergraduate research fellowship. AL thanks the Charles Godfrey Binder Endowment at Penn State for ongoing support of his research activities.
\section{Introduction} Let $p$ and $q$ denote prime numbers. The binary (or strong) Goldbach conjecture asserts that any even $n > 2$ is of the form $p+q$. There is both strong heuristic evidence that this is true for sufficiently large $n$ and enormous numerical evidence that this is true for $n > 2$. The same heuristic evidence, together with equidistribution of primes in congruence classes mod $m$, suggests the following: \begin{conj} \label{conj1} Fix $a, b, m \in \mathbb Z$ with $\gcd(a,m) = \gcd(b,m) = 1$. For sufficiently large even $n \equiv a + b \mod m$, we can write $n=p+q$ for some primes $p \equiv a \mod m$, $q \equiv b \mod m$. \end{conj} While we do not know an explicit statement of this conjecture in the literature, we do not claim any originality in its formulation. Denote by $E_{a,b,m}$ the set of positive even $n \equiv a + b \mod m$ which are not of the form asserted in the conjecture. This is called the exceptional set for $(a, b, m)$, and the conjecture asserts $E_{a,b,m}$ is finite. Note that for $a=b=1$ and $m=2$, this is a (still unknown) weak form of the binary Goldbach conjecture. Specifically, the binary Goldbach conjecture is equivalent to the statement that $E_{1,1,2} = \{ 2, 4 \}$. In this note, we present some heuristic and numerical investigations on the behavior of these exceptional sets. This leads to both explicit forms of Goldbach's conjecture with primes in arithmetic progressions and conjectures about the growth of $E_{a,b,m}$. 
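Truncations of these exceptional sets are easy to compute by brute force; a minimal Python sketch (function names ours, with a small truncation bound standing in for the much larger bounds used for the actual computations):

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i in range(n + 1) if sieve[i]]

def exceptional_set(a, b, m, bound):
    """Even n <= bound with n = a + b (mod m) having no representation
    n = p + q with primes p = a (mod m), q = b (mod m); i.e., the
    intersection of E_{a,b,m} with [2, bound]."""
    primes = primes_up_to(bound)
    pa = [p for p in primes if p % m == a % m]
    pb = {p for p in primes if p % m == b % m}
    return [n for n in range(2, bound + 1, 2)
            if (n - a - b) % m == 0
            and not any(n - p in pb for p in pa if p < n)]
```

For example, `exceptional_set(1, 1, 2, 100)` returns `[2, 4]`, consistent with the equivalence stated above.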
First we state a few explicit Goldbach-type conjectures: \begin{conj} \label{conj:mod4} Any positive even $n$ as below is of the form $p+q$ with $p$ and $q$ satisfying the following congruence conditions: \begin{enumerate}[(i)] \item any even $n > 4$, where $p \equiv 3 \mod 4$; \item any $n \equiv 0 \mod 4$ except $n=4$, where $p \equiv 1 \mod 4$, $q \equiv 3 \mod 4$; \item any $n \equiv 2 \mod 4$ except $n=2$, where $p \equiv q \equiv 3 \mod 4$; \item any $n \equiv 2 \mod 4$ except $n=2, 6, 14, 38, 62$, where $p \equiv q \equiv 1 \mod 4$. \end{enumerate} \end{conj} We note that (ii) is already implied by Goldbach's conjecture, whereas (iii) and (iv) are not. Then (ii) and (iii) imply (i), which may be viewed as a refinement of the binary Goldbach conjecture. The notion that the exceptional set should be smaller in case (iii) than in case (iv) makes sense in light of prime number races, specifically that there are more small primes which are 3 mod 4 than 1 mod 4. For moduli $m \ne 4$, we just list some sample conjectures about when every (or almost every) multiple of $m$ is of the form $p+q$ with $p$ and $q$ each coming from single progressions mod $m$: \begin{conj} \label{conj3} Any positive even $n$ as below is of the form $p+q$ with $p$ and $q$ satisfying the following congruence conditions: \begin{enumerate}[(i)] \item any even $n \equiv 0 \mod 3$ except $n=6$, where $p \equiv -q \equiv 1 \mod 3$; \item any even $n \equiv 0 \mod 5$ (resp.\ except $n=10, 20$), where $p \equiv -q \equiv 2 \mod 5$ (resp.\ $p \equiv -q \equiv 1 \mod 5$); \item any even $n \equiv 0 \mod 7$, where $p \equiv -q \equiv 3 \mod 7$; \item any even $n \equiv 0 \mod 11$, where $p \equiv -q \equiv 3 \mod 11$; \item any $n \equiv 0 \mod 8$, where $p \equiv -q \equiv 3 \mod 8$; \item any $n \equiv 0 \mod 16$, where $p \equiv -q \equiv 3 \mod 16$; \item any $n \equiv 0 \mod 60$, where $p \equiv -q \equiv a \mod 60$ and $a$ is any fixed integer coprime to $60$ which is not $\pm 1, \pm 11 \mod 
60$. \end{enumerate} \end{conj} Of these, only the first is a direct consequence of the binary Goldbach conjecture, and we list it simply for comparison. The others come from calculations that we describe in \cref{sec:numerics}. Namely, we compute the following. First, we may as well assume $m$ is even. Then we call $(a,b)$ or $(a, b, m)$ admissible if $a, b \in (\mathbb Z/m\mathbb Z)^\times$. We compute the exceptions $n$ in $E_{a,b,m}$ for all admissible $(a, b, m)$ with $m \le 200$ up to at least $n = 10^7$. This bound appears sufficiently large in the cases under consideration to believe that we have found the full exceptional sets for such $(a, b, m)$. Likely of more interest than numerous such explicit conjectures is a general understanding of the behaviour of the exceptional sets. The first question to ask is: \begin{question} How fast can $E_{a, b, m}$ grow? \end{question} There are a few ways to interpret this: we can look at the growth of $|E_{a, b, m}|$ or the growth of the individual exceptions $n \in E_{a, b, m}$, and either of these can be interpreted in an average sense or in the sense of looking for an absolute or asymptotic bound. Let $E_{\max}(m)$ be the maximum of the exceptions in $E_{a,b,m}$ (ranging over $a, b \in (\mathbb Z/m\mathbb Z)^\times$) and let $L_\mathrm{avg}(m)$ be the average length (size) of the exceptional sets $E_{a, b, m}$ for a fixed $m$. Then the heuristics we discuss in \cref{sec:heuristics} suggest the following: \begin{conj} \label{conj:asy} As $m \to \infty$, we have \begin{enumerate}[(i)] \item $E_{\max}(m) :=\max \{ n \in E_{a,b,m} : a, b \in (\mathbb Z/m\mathbb Z)^\times \} = O(m^2 (\log m)^2)$; and \item $L_\mathrm{avg}(m) := \frac{1}{\phi(m)^2} \sum_{a, b \in (\mathbb Z/m\mathbb Z)^\times} |E_{a,b,m}| = O(m^\varepsilon)$ for any $\varepsilon > 0$.
\end{enumerate} \end{conj} Admittedly our heuristics are rather simplistic (they are too simplistic to suggest precise asymptotics), but since the numerical data we present in \cref{sec:numerics} is in strong agreement with these heuristics, it seems reasonable to believe the above conjecture. In addition, our numerical data suggests that $E_{\max}(m)$ grows roughly like a quadratic function of $m$, which suggests that the growth bound in (i) is not too much of an overestimate (see \cref{fig2}). Our data also suggests that as $m$ grows, while the proportion of admissible $(a, b)$ with $E_{a, b, m} = \emptyset$ may decrease on average, there are still many $(a, b)$ with empty exceptional sets (see \cref{tab2}), leading to: \begin{conj} \label{conj:empty} There are infinitely many tuples $(a, b, m)$ such that $E_{a,b,m} = \emptyset$. \end{conj} We are less confident in this conjecture, as precise heuristics about it seem a bit more delicate (we do not attempt them here), but it at least seems plausible in connection with \cref{conj:asy}. Namely, for fixed $m$ the expected length of an exceptional set may be something roughly logarithmic in $m$, but we have $\phi(m)^2$ such exceptional sets, so there will be a good chance some of them are empty as long as the variance of $|E_{a,b,m}|$ is not too small. \medskip We note that a number of authors (e.g., \cite{lavrik}, \cite{bauer-wang}, \cite{bauer}) have studied the behaviour of $E_{a,b,m}$ analytically. However, present analytic methods seem to still be far from showing finiteness of $E_{a, b, m}$, let alone attacking the finer questions we explore here. \medskip Lastly, we remark that one can similarly look at versions of the ternary Goldbach conjecture with primes in progressions. However, an answer to the binary case will also give results about the ternary case, in the same way that the strong Goldbach conjecture implies the weak Goldbach conjecture (the latter of which is now a theorem \cite{helfgott}).
We simply state one ternary analogue of \cref{conj:mod4}(i): \begin{conj} Any odd integer $n > 5$ is of the form $n=p+q+r$ for primes $p, q, r$ with $p \equiv q \equiv 2 \mod 3$. \end{conj} To see why, note that our calculations suggest $E_{5,5,6} = \{ 4 \}$ (see \cref{sec:data}), which would imply $E_{2,2,3} = \emptyset$, so in the conjecture we can take $p+q$ to be an arbitrary even number which is $1 \mod 3$, and then $r \in \{ 3, 5, 7 \}$. Unlike the usual ternary Goldbach conjecture, this does not seem to be known at present. See \cite{li-pan}, \cite{shao}, \cite{shen} for results in this direction. \subsection*{Acknowledgements} The author is partially supported by a Simons Foundation Collaboration Grant. \section{Heuristics} \label{sec:heuristics} Let $n > 2$ be even, and let $g_2(n)$ be the number of ways to write $n=p+q$ for primes $p$ and $q$. In 1922, Hardy and Littlewood conjectured \[ g_2(n) \sim \mathfrak S(n) \frac{n}{(\log n)^2}, \] where $\mathfrak S(n)$ is the singular series, which is 0 for odd $n$ and is 2 on average for even $n$. For a refinement, see \cite{granville}. For our heuristics, we will naively approximate $g_2(n)$ by the conjectural average $\frac{2n}{(\log n)^2}$. This simplification is justified in our heuristic upper bounds as $\mathfrak S(n) > 1$ for even $n$. Fix an even modulus $m$. Let $r$ be the integer part of the expected number of admissible $(a, b) \in ((\mathbb Z/m\mathbb Z)^\times)^2$ such that $a+b \equiv n \mod m$ (averaged over even $0 \le n < m$), i.e., $r = [\frac{2\phi(m)^2}{m}]$. We want to estimate the probability that $n \in E_{a, b, m}$ for some admissible pair $(a, b) \in ((\mathbb Z/m\mathbb Z)^\times)^2$. We will use the following simplistic but reasonable model: we think of ordered pairs of primes $(p, q)$ solving $p+q=n$ as a collection of $g_2(n)$ independent random events, with the reduction mod $m$ of $(p, q)$ landing in any of the $r$ admissible classes $(a+m\mathbb Z, b + m\mathbb Z)$ with equal probability.
(Obviously $(p,q)$ and $(q,p)$ are not independent, but this is not so important for our heuristics.) Immediately this suggests \cref{conj1}, but we want to speculate more precisely on the growth rate of the exceptional sets $E_{a, b, m}$ as $m \to \infty$. We recall the coupon collector problem. Say we have $r$ initially empty boxes, and at each time $t \in \mathbb N$, a coupon is placed in one box chosen at random. Assume at each stage, each box is selected with equal probability $\frac 1r$. Let $W=W_r$ be the random variable representing the waiting time until all boxes have at least 1 coupon. Let $X_s$ denote a geometric random variable such that $P(X_s = k) = (1-s)^{k-1} s$ is the probability of the first success occurring on exactly the $k$-th trial, where each trial has independent probability of success $s$. The problem is to determine the expected value $E[W]$. It is easy to see that $W = X_{r/r} + X_{(r-1)/r} + \cdots + X_{1/r}$ (where the $X_s$'s are independent), and thus $E[W] = r H_r$, where $H_r = \sum_{j=1}^r \frac 1j$ is the $r$-th harmonic number. Thus, in our model, the probability that $n \in E_{a, b, m}$ for some $(a, b)$ is simply $P(W_r > g_2(n))$. One has \[ P(W > k) = 1 - \sum_{j=1}^k \frac{r!}{r^j} \ssk {j-1}{r-1} = 1 - \frac{r!}{r^k} \ssk kr = \sum_{j=0}^{r-1} (-1)^{r-j+1} \binom{r}{j} \left( \frac j r \right)^k, \] where $\ssk kr$ denotes the Stirling number of the second kind, i.e., the number of ways to partition a set of size $k$ into $r$ nonempty subsets. Now we can bound each term on the right by $\binom{r}{[r/2]} (1-1/r)^k$, which will be less than $\frac \varepsilon {k^2 r^4}$ for $r$ large if \[ -k \log(1 - \frac 1 r) = k( \frac 1r + \frac 1{2r^2} + \cdots ) \gtrsim r \log 2 + 2 \log k + 4 \log r - \log \varepsilon \gtrsim \log \binom{r}{[r/2]} - \log \frac \varepsilon {k^2 r^4}. \] We can make this asymptotic inequality hold by taking $k = Cr^2$, where $C$ depends on $\varepsilon$, and then \[ P(W_r > k) < \frac{\varepsilon}{k^2 r^3}.
\] Consider the set $\Sigma_C = \{ (r, k) : r = [\frac{2 \phi(m)^2}m], \, k = [ \frac {2n}{(\log n)^2} ] > C r^2 \}$. Note that for $r$ and $k$ of this form, the condition $k > Cr^2$ is satisfied when $\frac n{(\log n)^2} > 2 C m^2$, and thus when $n > cm^2 (\log m)^2$ for a suitable constant $c$. Thus, for suitably large $c$, we have a heuristic upper bound on the probability that some $E_{a,b,m}$ contains an element $n > cm^2 (\log m)^2$ of \[ \sum_{(r,k) \in \Sigma_C} r P(W_r > k) \le \sum_{(r, k) \in \Sigma_C} \frac{\varepsilon}{k^2 r^2} < c_0 \varepsilon, \] for a uniform constant $c_0$. The factor of $r$ on the left inside the sum comes from accounting for each of the (on average) $r$ classes of pairs $(a, b)$ given $m$, $n$. This suggests the following bound on the growth of exceptional sets stated in \cref{conj:asy}(i): \begin{equation} E_{\max}(m) = O(m^2 (\log m)^2) \quad \text{as } m \to \infty. \end{equation} \medskip Now let us consider the lengths $|E_{a,b,m}|$ of the exceptional sets. For fixed $m$, we will model $|E_{a,b,m}|$ as a random variable $L(m)$. For a fixed $n \equiv a + b \mod m$, the probability (using the model described above) that $n \in E_{a, b, m}$ is simply $(1 - \frac 1r)^{g_2(n)}$. Hence the expected size of an exceptional set is \[ E[L(m)] = \sum_{n \equiv a + b \mod m} \alpha^{g_2(n)}, \quad \alpha = 1 - \frac 1r. \] We approximate this with the sum (over all integers $n \ge 2$): \[ E[L(m)] \approx \frac 1{m} \sum_{n=2}^\infty \alpha^{\frac{2n}{(\log n)^2}}. \] Then for $0 < \delta < 1$, since $\frac{2n}{(\log n)^2} \ge 2n^\delta$ for all sufficiently large $n$, we have (substituting $u = 2 |\ln \alpha| x^\delta$) \[ m E[L(m)] \ll \int_0^\infty \alpha^{2x^\delta} \, dx = \frac 1{\delta \, (2 | \ln \alpha |)^{1/\delta} } \int_0^\infty u^{1/\delta - 1} e^{-u} \, du = \frac 1{\delta \, (2 | \ln \alpha |)^{1/\delta} } \Gamma ( \frac 1 \delta ). \] Note $| \ln \alpha | = -\ln ( 1 - \frac 1r ) = \frac 1r + O( \frac 1{r^2} )$, so as $r \to \infty$, we have $\frac 1{|\ln \alpha|} \sim r$.
\] This gives \begin{equation} E[L(m)] \ll \frac{r^{1/\delta}}{2m} = O(m^\varepsilon), \quad \varepsilon = \frac 1\delta - 1, \end{equation} as stated in \cref{conj:asy}(ii). \section{Numerics} \label{sec:numerics} Now we present numerical data on the exceptional sets $E_{a, b, m}$ for (even) $m \le 200$. \subsection{The method and computational issues} Our approach, similar to many numerical verifications of Goldbach's conjecture, was roughly as follows. To find $\{ n \in E_{a, b, m} : n \le N \}$, we start with two sets of primes $P = \{ p \equiv a \mod m : p \le M \}$ and $Q = \{ q \equiv b \mod m : q \le N \}$ and determine which $n \le N$ are not of the form $p+q$ for $p \in P$, $q \in Q$, with $M$ on the order of $10^4$ or $10^5$ depending on $m$. Then any potential exceptions below $M$ are guaranteed to actually lie in $E_{a, b, m}$, while any larger potential exceptions we checked individually by testing primality (deterministically) of $n-p$ for various $p$. For each even $m \le 200$ and $a, b \in (\mathbb Z/m\mathbb Z)^\times$, we checked up to at least $N = 10^7$ using Sage. (We checked \cref{conj:mod4} up to $N=10^8$.) We note that the binary Goldbach conjecture has been numerically verified for a much, much larger range (up to $4\cdot 10^{18}$ in \cite{oliveira}). While one could certainly extend our calculations to larger $N$ (and $m$) with a more efficient implementation and more computing resources, our goal here is not to push the limits of calculation, but rather to generate a reasonable amount of data to help formulate and support our conjectures. That said, there are a couple of obstacles to doing a similar amount of verification for various $E_{a,b,m}$. First of all, we want to test many triples $(a, b, m)$, which increases the amount of computation involved.
Second, and much more significantly, when we look for representations $n=p+q$ with $p$ and $q$ in arithmetic progressions, the minimum value of $p$, say, for which such a representation is possible seems to increase much faster than without placing congruence conditions on $p$ and $q$. In other words, to rule out almost all potential exceptions in the first stage of our algorithm above, for the same $N$ we need to take $M$ larger and larger with $m$. For instance, when $m=2$ (the usual Goldbach conjecture) one can always take $M < 10^4$ (i.e., the least prime in some Goldbach partition is always below $10^4$) to rule out all exceptions for $N \le 4 \cdot 10^{18}$ (see \cite{oliveira}); however, this is not a sufficiently large value of $M$ for many of our calculations. Already when $N=10^6$ and $M=10^4$, there are 41 non-exceptions $< 10^6$ that we cannot rule out when $m=50$, 24981 when $m=100$, and 1148651 when $m=148$. \subsection{Data and observations} \label{sec:data} For simplicity of exposition, we define our notation under the hypothesis: \emph{there are no exceptions $n > 10^7$ for $m \le 200$.} This is believable as the largest exception we find is approximately $10^5$, and for a given $m$ (with $a, b$ varying) the gaps between one exception and the next largest exception appear to grow at most quadratically in the number of total exceptions. Fix $m$. Let $E_{\max} = E_{\max}(m)$ and $L_{\mathrm{avg}} = L_{\mathrm{avg}}(m)$ be as in the introduction. Let $L_{\min} = L_{\min}(m)$ (resp.\ $L_{\max} = L_{\max}(m)$) be the minimum (resp.\ maximum) of the lengths $|E_{a,b,m}|$ over $a, b \in (\mathbb Z/m\mathbb Z)^\times$. Let $e_m$ (resp.\ $\tilde e_m$) denote the number of exceptions without (resp.\ with) multiplicity, i.e., the size of the set (resp.\ multiset) $\bigcup_{(a,b)} E_{a,b,m}$, where $a,b$ run over $(\mathbb Z/m\mathbb Z)^\times$.
We also consider the above quantities with the additional restriction that $b \equiv -a \mod m$ so as to treat the special case where $n$ is a multiple of $m$. In this situation, we denote the analogous quantities with a superscript $0$, e.g., $E_{\max}^0$ is the maximal $n \in E_{a, b, m}$ such that $n$ is a multiple of $m$. We list the first few explicit calculations of exceptional sets (under our hypothesis): \begin{itemize} \item $E_{1,1,2} = \{ 2, 4 \}$, which is equivalent to the binary Goldbach conjecture \item $E_{1,1,4} = \{ 2, 6, 14, 38, 62 \}$, $E_{1, 3, 4} = \{ 4 \}$, and $E_{3, 3, 4} = \{ 2 \}$ \item $E_{1, 1, 6} = \{ 2, 8 \}$, $E_{1, 5, 6} = \{ 6 \}$, and $E_{5, 5, 6} = \{ 4 \}$ \item $E_{1, 1, 8} = \{ 2, 10, 18, 26, 42, 50, 66, 74, 98, 122, 218, 242, 362, 458 \}$, $E_{1, 3, 8} = \{ 4, 12, 68, 188 \}$, $E_{1, 5, 8} = \{ 6, 14, 38, 62 \}$, $E_{1, 7, 8} = \{ 8, 16, 32, 56 \}$, $E_{3, 3, 8} = E_{3, 5, 8} = \emptyset$, $E_{3, 7, 8} = E_{5, 5, 8} = \{ 2 \}$, $E_{5, 7, 8} = \{ 4 \}$, and $E_{7, 7, 8} = \{ 6, 22, 166 \}$ \item $E_{1, 1, 10} = \{ 2, 12, 32, 152 \}$, $E_{1, 3, 10} = E_{7, 7, 10} = \{ 4 \}$, $E_{1, 7, 10} = \{ 8 \}$, $E_{1, 9, 10} = \{ 10, 20 \}$, $E_{3, 3, 10} = E_{3, 7, 10} = \emptyset$, $E_{3, 9, 10} = \{ 2, 12 \}$, $E_{7, 9, 10} = \{ 6, 16 \}$, and $E_{9, 9, 10} = \{ 8, 18, 28, 68 \}$ \end{itemize} We summarize the data from our calculations in \cref{tab1}. Note that $E_{\max}$ and $E_{\max}^0$, as well as the total number of exceptions, tend to be relatively larger when $m$ is a power of 2 or twice a prime. In these cases $\phi(m)$ is relatively large, i.e., we have relatively more admissible pairs $(a, b)$ to consider, so it makes sense that we pick up more exceptions. This correlation between the fluctuations of $E_{\max}(m)$ and $\phi(m)$ is illustrated in \cref{fig1}.
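The small sets listed above are straightforward to reproduce independently. The following Python sketch implements a simplified version of the first stage of the method described in \cref{sec:numerics}, taking $M = N$ so that every reported exception is genuine; it is only an illustration with small search bounds, not the Sage code used for our actual computations.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, flag in enumerate(sieve) if flag]

def exceptional_set(a, b, m, N):
    """Even n <= N with n = a + b (mod m) having no representation
    n = p + q with primes p = a (mod m), q = b (mod m)."""
    primes = primes_up_to(N)
    P = [p for p in primes if p % m == a % m]
    Q = {q for q in primes if q % m == b % m}
    exceptions = []
    for n in range(2, N + 1, 2):
        if n % m == (a + b) % m and not any(n - p in Q for p in P if p < n):
            exceptions.append(n)
    return exceptions

# Reproduce some of the sets listed above (with a small search bound):
print(exceptional_set(1, 1, 2, 10**4))   # [2, 4]  (weak binary Goldbach)
print(exceptional_set(1, 3, 4, 10**4))   # [4]
print(exceptional_set(9, 9, 10, 10**4))  # [8, 18, 28, 68]
```

For bounds this small the computation takes well under a second; the two-stage approach with $M \ll N$ described above is needed precisely because taking $M = N = 10^7$ for every admissible triple would be far too slow.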
\begin{table} \begin{tabular}{r|rrrrrr|rrrrrr} $m$ & $L^0_{\min}$ & $L^0_{\mathrm{avg}}$ & $L^0_{\max}$ & $E^0_{\max}$ & $e_m^0$ & $\tilde e^0_m$ & $L_{\min}$ & $L_{\mathrm{avg}}$ & $L_{\max}$ & $E_{\max}$& $e_m$ & $\tilde e_m$ \\ \hline 2 & 2 & 2.0 & 2 & 4 & 2 & 2 & 2 & 2.0 & 2 & 4 & 2 & 2 \\ 4 & 1 & 1.0 & 1 & 4 & 1 & 2 & 1 & 2.0 & 5 & 62 & 6 & 8 \\ 6 & 1 & 1.0 & 1 & 6 & 1 & 2 & 1 & 1.25 & 2 & 8 & 4 & 5 \\ 8 & 0 & 2.0 & 4 & 56 & 4 & 8 & 0 & 2.875 & 14 & 458 & 28 & 46 \\ 10 & 0 & 1.0 & 2 & 20 & 2 & 4 & 0 & 1.563 & 4 & 152 & 13 & 25 \\ 12 & 0 & 0.5 & 1 & 12 & 1 & 2 & 0 & 1.0 & 4 & 62 & 9 & 16 \\ 14 & 0 & 1.333 & 3 & 98 & 3 & 8 & 0 & 2.056 & 7 & 512 & 32 & 74 \\ 16 & 0 & 2.5 & 4 & 368 & 6 & 20 & 0 & 3.469 & 17 & 1298 & 94 & 222 \\ 18 & 0 & 0.333 & 1 & 18 & 1 & 2 & 0 & 0.861 & 2 & 52 & 13 & 31 \\ 20 & 0 & 1.5 & 3 & 200 & 4 & 12 & 0 & 1.828 & 8 & 542 & 46 & 117 \\ 22 & 0 & 2.2 & 4 & 418 & 7 & 22 & 0 & 2.86 & 9 & 1568 & 102 & 286 \\ 24 & 0 & 1.0 & 4 & 192 & 4 & 8 & 0 & 1.344 & 10 & 458 & 39 & 86 \\ 26 & 1 & 3.0 & 7 & 754 & 11 & 36 & 0 & 3.313 & 13 & 4688 & 146 & 477 \\ 28 & 1 & 2.833 & 7 & 616 & 11 & 34 & 0 & 3.063 & 15 & 1598 & 145 & 441 \\ 30 & 0 & 0.25 & 1 & 30 & 1 & 2 & 0 & 0.719 & 3 & 152 & 17 & 46 \\ 32 & 0 & 3.875 & 8 & 1184 & 15 & 62 & 0 & 4.969 & 26 & 5014 & 316 & 1272 \\ 34 & 1 & 3.5 & 7 & 1088 & 14 & 56 & 0 & 4.355 & 17 & 5228 & 289 & 1115 \\ 36 & 0 & 0.833 & 2 & 216 & 4 & 10 & 0 & 1.319 & 5 & 478 & 69 & 190 \\ 38 & 2 & 3.889 & 6 & 1558 & 16 & 70 & 0 & 4.864 & 21 & 5032 & 373 & 1576 \\ 40 & 1 & 2.375 & 6 & 920 & 9 & 38 & 0 & 2.973 & 19 & 2282 & 225 & 761 \\ 42 & 0 & 0.333 & 1 & 42 & 1 & 4 & 0 & 0.896 & 5 & 512 & 39 & 129 \\ 44 & 2 & 4.0 & 6 & 3344 & 18 & 80 & 0 & 4.42 & 24 & 6106 & 415 & 1768 \\ 46 & 2 & 4.273 & 8 & 1564 & 20 & 94 & 1 & 5.285 & 20 & 8104 & 541 & 2558 \\ 48 & 0 & 1.0 & 3 & 288 & 5 & 16 & 0 & 1.629 & 12 & 1298 & 132 & 417 \\ 50 & 0 & 2.3 & 4 & 550 & 8 & 46 & 0 & 2.893 & 10 & 3182 & 273 & 1157 \\ 52 & 2 & 4.333 & 8 & 3380 & 19 & 104 & 0 & 5.035 
& 23 & 8546 & 580 & 2900 \\ 54 & 0 & 0.889 & 2 & 216 & 4 & 16 & 0 & 1.503 & 6 & 1096 & 130 & 487 \\ 56 & 1 & 3.583 & 6 & 2072 & 12 & 86 & 0 & 4.224 & 24 & 8318 & 491 & 2433 \\ 58 & 3 & 6.071 & 13 & 3422 & 29 & 170 & 0 & 6.342 & 31 & 10366 & 870 & 4972 \\ 60 & 0 & 0.375 & 2 & 180 & 2 & 6 & 0 & 0.855 & 5 & 542 & 66 & 219 \\ 62 & 3 & 6.267 & 10 & 4712 & 28 & 188 & 0 & 6.714 & 28 & 11416 & 975 & 6043 \\ 64 & 2 & 6.438 & 12 & 4736 & 32 & 206 & 0 & 7.262 & 35 & 16126 & 1173 & 7436 \\ 66 & 0 & 0.9 & 2 & 198 & 3 & 18 & 0 & 1.417 & 6 & 1568 & 141 & 567 \\ 68 & 3 & 6.0 & 10 & 5848 & 26 & 192 & 0 & 6.387 & 25 & 13718 & 1044 & 6540 \\ 70 & 0 & 1.833 & 3 & 1540 & 9 & 44 & 0 & 2.092 & 9 & 5002 & 280 & 1205 \\ 72 & 0 & 1.333 & 4 & 864 & 6 & 32 & 0 & 1.922 & 13 & 2834 & 268 & 1107 \\ 74 & 2 & 6.611 & 11 & 6068 & 31 & 238 & 0 & 7.355 & 28 & 16046 & 1370 & 9532 \\ 76 & 3 & 6.333 & 11 & 5624 & 30 & 228 & 0 & 6.679 & 29 & 23426 & 1250 & 8656 \\ 78 & 0 & 1.0 & 3 & 624 & 5 & 24 & 0 & 1.51 & 7 & 4688 & 205 & 870 \\ 80 & 1 & 3.813 & 11 & 3680 & 21 & 122 & 0 & 3.982 & 19 & 6200 & 729 & 4078 \\ 82 & 3 & 7.2 & 14 & 8528 & 39 & 288 & 0 & 7.764 & 27 & 25616 & 1651 & 12423 \\ 84 & 0 & 0.833 & 4 & 1008 & 5 & 20 & 0 & 1.368 & 11 & 1598 & 202 & 788 \\ 86 & 3 & 7.714 & 15 & 12556 & 41 & 324 & 0 & 7.942 & 34 & 26782 & 1805 & 14009 \\ 88 & 2 & 5.8 & 13 & 5720 & 29 & 232 & 0 & 6.382 & 32 & 19274 & 1378 & 10211 \\ 90 & 0 & 0.333 & 2 & 180 & 2 & 8 & 0 & 1.017 & 5 & 976 & 148 & 586 \\ \end{tabular} \end{table} \begin{table} \begin{tabular}{r|rrrrrr|rrrrrr} $m$ & $L^0_{\min}$ & $L^0_{\mathrm{avg}}$ & $L^0_{\max}$ & $E^0_{\max}$ & $e_m^0$ & $\tilde e^0_m$ & $L_{\min}$ & $L_{\mathrm{avg}}$ & $L_{\max}$ & $E_{\max}$& $e_m$ & $\tilde e_m$ \\ \hline 92 & 2 & 7.0 & 13 & 7636 & 36 & 308 & 0 & 7.73 & 35 & 21538 & 1891 & 14966 \\ 94 & 3 & 8.0 & 16 & 12032 & 39 & 368 & 0 & 8.43 & 38 & 30916 & 2098 & 17838 \\ 96 & 0 & 1.375 & 3 & 864 & 6 & 44 & 0 & 2.194 & 17 & 5014 & 437 & 2247 \\ 98 & 2 & 5.286 & 10 & 4214 & 25 & 
222 & 0 & 5.503 & 28 & 17014 & 1389 & 9708 \\ 100 & 0 & 3.8 & 6 & 4300 & 19 & 152 & 0 & 4.366 & 21 & 12134 & 1043 & 6986 \\ 102 & 0 & 1.25 & 2 & 918 & 8 & 40 & 0 & 1.848 & 11 & 5228 & 383 & 1892 \\ 104 & 3 & 6.708 & 13 & 9256 & 36 & 322 & 0 & 7.236 & 36 & 22886 & 1990 & 16671 \\ 106 & 4 & 8.577 & 16 & 11978 & 48 & 446 & 0 & 9.009 & 42 & 39842 & 2661 & 24359 \\ 108 & 0 & 2.0 & 4 & 1944 & 13 & 72 & 0 & 2.279 & 13 & 7142 & 548 & 2954 \\ 110 & 1 & 2.85 & 8 & 3410 & 15 & 114 & 0 & 3.331 & 14 & 13316 & 822 & 5329 \\ 112 & 3 & 5.667 & 12 & 6272 & 29 & 272 & 0 & 6.11 & 34 & 23038 & 1750 & 14077 \\ 114 & 0 & 1.278 & 3 & 684 & 6 & 46 & 0 & 2.064 & 12 & 5032 & 482 & 2675 \\ 116 & 4 & 8.714 & 15 & 13688 & 44 & 488 & 1 & 8.961 & 42 & 36326 & 2892 & 28102 \\ 118 & 4 & 8.793 & 17 & 17228 & 50 & 510 & 0 & 9.446 & 45 & 53614 & 3144 & 31776 \\ 120 & 0 & 0.625 & 3 & 360 & 3 & 20 & 0 & 1.3 & 12 & 2282 & 297 & 1331 \\ 122 & 4 & 9.4 & 18 & 13298 & 48 & 564 & 0 & 9.601 & 43 & 39818 & 3377 & 34563 \\ 124 & 4 & 8.667 & 15 & 12896 & 49 & 520 & 0 & 9.114 & 40 & 42778 & 3265 & 32811 \\ 126 & 0 & 1.056 & 4 & 1008 & 7 & 38 & 0 & 1.535 & 11 & 7598 & 389 & 1989 \\ 128 & 5 & 9.25 & 15 & 13568 & 47 & 592 & 0 & 10.406 & 43 & 48346 & 3933 & 42622 \\ 130 & 1 & 3.458 & 8 & 5590 & 18 & 166 & 0 & 3.955 & 19 & 19406 & 1242 & 9112 \\ 132 & 0 & 1.2 & 4 & 1584 & 9 & 48 & 0 & 1.915 & 14 & 6106 & 540 & 3064 \\ 134 & 2 & 9.455 & 16 & 14204 & 52 & 624 & 0 & 10.166 & 48 & 52894 & 3992 & 44281 \\ 136 & 3 & 8.219 & 17 & 16048 & 47 & 526 & 0 & 8.813 & 52 & 42734 & 3413 & 36100 \\ 138 & 0 & 1.682 & 3 & 1656 & 8 & 74 & 0 & 2.332 & 11 & 8104 & 715 & 4515 \\ 140 & 0 & 2.792 & 6 & 5180 & 17 & 134 & 0 & 3.18 & 17 & 14198 & 1077 & 7327 \\ 142 & 5 & 9.657 & 17 & 23146 & 55 & 676 & 0 & 10.306 & 45 & 54526 & 4328 & 50498 \\ 144 & 0 & 2.167 & 5 & 2592 & 15 & 104 & 0 & 2.771 & 18 & 19858 & 959 & 6384 \\ 146 & 5 & 10.222 & 22 & 15476 & 54 & 736 & 0 & 10.663 & 54 & 54928 & 4684 & 55278 \\ 148 & 2 & 9.528 & 17 & 21608 & 55 & 686 & 
0 & 10.128 & 43 & 70222 & 4374 & 52501 \\ 150 & 0 & 0.85 & 2 & 1800 & 7 & 34 & 0 & 1.295 & 6 & 3182 & 387 & 2072 \\ 152 & 4 & 8.972 & 18 & 17176 & 54 & 646 & 0 & 9.674 & 49 & 62554 & 4331 & 50152 \\ 154 & 2 & 4.6 & 9 & 10472 & 25 & 276 & 0 & 4.954 & 22 & 29476 & 1983 & 17836 \\ 156 & 0 & 1.625 & 4 & 4212 & 9 & 78 & 0 & 2.204 & 16 & 8546 & 778 & 5078 \\ 158 & 6 & 10.718 & 21 & 21488 & 66 & 836 & 1 & 11.011 & 47 & 51248 & 5245 & 66991 \\ 160 & 1 & 5.5 & 15 & 13760 & 32 & 352 & 0 & 5.83 & 30 & 31942 & 2479 & 23879 \\ 162 & 0 & 2.148 & 7 & 1944 & 12 & 116 & 0 & 2.826 & 15 & 9908 & 1157 & 8241 \\ 164 & 5 & 10.6 & 20 & 19352 & 65 & 848 & 0 & 10.768 & 52 & 67546 & 5438 & 68912 \\ 166 & 6 & 10.268 & 19 & 22576 & 64 & 842 & 1 & 11.067 & 58 & 65776 & 5817 & 74414 \\ 168 & 0 & 1.583 & 8 & 2352 & 11 & 76 & 0 & 1.849 & 15 & 8318 & 675 & 4260 \\ 170 & 0 & 4.313 & 10 & 10880 & 27 & 276 & 0 & 4.765 & 31 & 42862 & 2162 & 19517 \\ 172 & 4 & 11.024 & 18 & 59168 & 65 & 926 & 0 & 11.098 & 47 & 104494 & 5771 & 78306 \\ 174 & 0 & 2.286 & 5 & 7482 & 16 & 128 & 0 & 2.768 & 15 & 10366 & 1166 & 8681 \\ 176 & 3 & 8.5 & 16 & 18832 & 53 & 680 & 0 & 8.926 & 52 & 62002 & 4683 & 57129 \\ 178 & 5 & 10.955 & 18 & 38092 & 71 & 964 & 1 & 11.528 & 50 & 88622 & 6344 & 89275 \\ 180 & 0 & 0.875 & 3 & 1080 & 5 & 42 & 0 & 1.481 & 9 & 7562 & 560 & 3412 \\ \end{tabular} \end{table} \begin{table} \begin{tabular}{r|rrrrrr|rrrrrr} $m$ & $L^0_{\min}$ & $L^0_{\mathrm{avg}}$ & $L^0_{\max}$ & $E^0_{\max}$ & $e_m^0$ & $\tilde e^0_m$ & $L_{\min}$ & $L_{\mathrm{avg}}$ & $L_{\max}$ & $E_{\max}$& $e_m$ & $\tilde e_m$ \\ \hline 182 & 2 & 5.139 & 14 & 14378 & 33 & 370 & 0 & 5.818 & 30 & 33536 & 2932 & 30161 \\ 184 & 5 & 10.023 & 18 & 41768 & 65 & 882 & 1 & 10.665 & 57 & 88106 & 6112 & 82588 \\ 186 & 0 & 1.967 & 5 & 2232 & 11 & 118 & 0 & 2.869 & 18 & 11784 & 1285 & 10329 \\ 188 & 5 & 10.826 & 17 & 34028 & 70 & 996 & 0 & 11.423 & 55 & 82594 & 6778 & 96686 \\ 190 & 2 & 4.639 & 9 & 15200 & 27 & 334 & 0 & 5.052 & 27 & 34652 & 
2669 & 26189 \\ 192 & 0 & 2.469 & 4 & 7296 & 15 & 158 & 0 & 3.167 & 20 & 16126 & 1552 & 12972 \\ 194 & 6 & 11.417 & 24 & 33368 & 75 & 1096 & 1 & 12.054 & 56 & 96728 & 7361 & 111093 \\ 196 & 2 & 7.119 & 15 & 20776 & 41 & 598 & 0 & 7.783 & 39 & 56738 & 4514 & 54914 \\ 198 & 0 & 1.533 & 4 & 4356 & 9 & 92 & 0 & 2.373 & 16 & 13436 & 1152 & 8542 \\ 200 & 2 & 5.6 & 11 & 16400 & 33 & 448 & 0 & 6.228 & 32 & 46922 & 3621 & 39862 \\ \end{tabular} \caption{Data on exceptional sets mod $m$ for $p \equiv a \mod m$, $q \equiv b \mod m$} \label{tab1} \end{table} \begin{figure} \begin{tikzpicture} \begin{axis}[xlabel=$m$, axis y line*=right, legend pos = north west] \addplot[smooth,mark=x, gray] plot coordinates { ( 4 , 2 ) ( 6 , 2 ) ( 8 , 4 ) ( 10 , 4 ) ( 12 , 4 ) ( 14 , 6 ) ( 16 , 8 ) ( 18 , 6 ) ( 20 , 8 ) ( 22 , 10 ) ( 24 , 8 ) ( 26 , 12 ) ( 28 , 12 ) ( 30 , 8 ) ( 32 , 16 ) ( 34 , 16 ) ( 36 , 12 ) ( 38 , 18 ) ( 40 , 16 ) ( 42 , 12 ) ( 44 , 20 ) ( 46 , 22 ) ( 48 , 16 ) ( 50 , 20 ) }; \addlegendentry{$\phi(m)$} \end{axis} \begin{axis}[ylabel = $E_{\max}(m)$, axis y line*=left] \addplot[smooth,mark=*] plot coordinates { ( 4 , 62 ) ( 6 , 8 ) ( 8 , 458 ) ( 10 , 152 ) ( 12 , 62 ) ( 14 , 512 ) ( 16 , 1298 ) ( 18 , 52 ) ( 20 , 542 ) ( 22 , 1568 ) ( 24 , 458 ) ( 26 , 4688 ) ( 28 , 1598 ) ( 30 , 152 ) ( 32 , 5014 ) ( 34 , 5228 ) ( 36 , 478 ) ( 38 , 5032 ) ( 40 , 2282 ) ( 42 , 512 ) ( 44 , 6106 ) ( 46 , 8104 ) ( 48 , 1298 ) ( 50 , 3182 ) }; \end{axis} \end{tikzpicture} \caption{Comparing $E_{\max}(m)$ with $\phi(m)$} \label{fig1} \end{figure} \begin{figure} \begin{tikzpicture} \begin{axis}[xlabel=$m$, ylabel = $E_{\max}(m)$, legend pos = north west] \addplot[smooth,mark=., gray] plot coordinates { ( 4 , 11 ) ( 6 , 39 ) ( 8 , 88 ) ( 10 , 160 ) ( 12 , 258 ) ( 14 , 381 ) ( 16 , 532 ) ( 18 , 711 ) ( 20 , 921 ) ( 22 , 1160 ) ( 24 , 1431 ) ( 26 , 1733 ) ( 28 , 2069 ) ( 30 , 2437 ) ( 32 , 2839 ) ( 34 , 3275 ) ( 36 , 3745 ) ( 38 , 4251 ) ( 40 , 4793 ) ( 42 , 5370 ) ( 44 , 5984 ) ( 46 , 6634 ) ( 48 
, 7322 ) ( 50 , 8047 ) ( 52 , 8809 ) ( 54 , 9610 ) ( 56 , 10449 ) ( 58 , 11327 ) ( 60 , 12244 ) ( 62 , 13200 ) ( 64 , 14195 ) ( 66 , 15230 ) ( 68 , 16305 ) ( 70 , 17421 ) ( 72 , 18576 ) ( 74 , 19773 ) ( 76 , 21010 ) ( 78 , 22289 ) ( 80 , 23608 ) ( 82 , 24970 ) ( 84 , 26372 ) ( 86 , 27817 ) ( 88 , 29304 ) ( 90 , 30833 ) ( 92 , 32405 ) ( 94 , 34019 ) ( 96 , 35676 ) ( 98 , 37377 ) ( 100 , 39120 ) ( 102 , 40906 ) ( 104 , 42736 ) ( 106 , 44610 ) ( 108 , 46527 ) ( 110 , 48488 ) ( 112 , 50494 ) ( 114 , 52543 ) ( 116 , 54637 ) ( 118 , 56775 ) ( 120 , 58958 ) ( 122 , 61186 ) ( 124 , 63458 ) ( 126 , 65776 ) ( 128 , 68139 ) ( 130 , 70547 ) ( 132 , 73000 ) ( 134 , 75499 ) ( 136 , 78044 ) ( 138 , 80634 ) ( 140 , 83270 ) ( 142 , 85952 ) ( 144 , 88680 ) ( 146 , 91455 ) ( 148 , 94276 ) ( 150 , 97143 ) ( 152 , 100057 ) ( 154 , 103017 ) ( 156 , 106024 ) ( 158 , 109078 ) ( 160 , 112179 ) ( 162 , 115327 ) ( 164 , 118523 ) ( 166 , 121765 ) ( 168 , 125055 ) ( 170 , 128392 ) ( 172 , 131777 ) ( 174 , 135209 ) ( 176 , 138689 ) ( 178 , 142217 ) ( 180 , 145793 ) ( 182 , 149417 ) ( 184 , 153089 ) ( 186 , 156809 ) ( 188 , 160578 ) ( 190 , 164394 ) ( 192 , 168260 ) ( 194 , 172173 ) ( 196 , 176136 ) ( 198 , 180147 ) ( 200 , 184206 ) }; \addlegendentry{$4m^2 \log(m)$} \addplot[smooth,mark=., green] plot coordinates { ( 4 , 64 ) ( 6 , 144 ) ( 8 , 256 ) ( 10 , 400 ) ( 12 , 576 ) ( 14 , 784 ) ( 16 , 1024 ) ( 18 , 1296 ) ( 20 , 1600 ) ( 22 , 1936 ) ( 24 , 2304 ) ( 26 , 2704 ) ( 28 , 3136 ) ( 30 , 3600 ) ( 32 , 4096 ) ( 34 , 4624 ) ( 36 , 5184 ) ( 38 , 5776 ) ( 40 , 6400 ) ( 42 , 7056 ) ( 44 , 7744 ) ( 46 , 8464 ) ( 48 , 9216 ) ( 50 , 10000 ) ( 52 , 10816 ) ( 54 , 11664 ) ( 56 , 12544 ) ( 58 , 13456 ) ( 60 , 14400 ) ( 62 , 15376 ) ( 64 , 16384 ) ( 66 , 17424 ) ( 68 , 18496 ) ( 70 , 19600 ) ( 72 , 20736 ) ( 74 , 21904 ) ( 76 , 23104 ) ( 78 , 24336 ) ( 80 , 25600 ) ( 82 , 26896 ) ( 84 , 28224 ) ( 86 , 29584 ) ( 88 , 30976 ) ( 90 , 32400 ) ( 92 , 33856 ) ( 94 , 35344 ) ( 96 , 36864 ) ( 98 , 38416 ) ( 100 
, 40000 ) ( 102 , 41616 ) ( 104 , 43264 ) ( 106 , 44944 ) ( 108 , 46656 ) ( 110 , 48400 ) ( 112 , 50176 ) ( 114 , 51984 ) ( 116 , 53824 ) ( 118 , 55696 ) ( 120 , 57600 ) ( 122 , 59536 ) ( 124 , 61504 ) ( 126 , 63504 ) ( 128 , 65536 ) ( 130 , 67600 ) ( 132 , 69696 ) ( 134 , 71824 ) ( 136 , 73984 ) ( 138 , 76176 ) ( 140 , 78400 ) ( 142 , 80656 ) ( 144 , 82944 ) ( 146 , 85264 ) ( 148 , 87616 ) ( 150 , 90000 ) ( 152 , 92416 ) ( 154 , 94864 ) ( 156 , 97344 ) ( 158 , 99856 ) ( 160 , 102400 ) ( 162 , 104976 ) ( 164 , 107584 ) ( 166 , 110224 ) ( 168 , 112896 ) ( 170 , 115600 ) ( 172 , 118336 ) ( 174 , 121104 ) ( 176 , 123904 ) ( 178 , 126736 ) ( 180 , 129600 ) ( 182 , 132496 ) ( 184 , 135424 ) ( 186 , 138384 ) ( 188 , 141376 ) ( 190 , 144400 ) ( 192 , 147456 ) ( 194 , 150544 ) ( 196 , 153664 ) ( 198 , 156816 ) ( 200 , 160000 ) }; \addlegendentry{$16m^2$} \addplot[smooth,mark=., red] plot coordinates { ( 4 , 5 ) ( 6 , 19 ) ( 8 , 44 ) ( 10 , 80 ) ( 12 , 129 ) ( 14 , 190 ) ( 16 , 266 ) ( 18 , 355 ) ( 20 , 460 ) ( 22 , 580 ) ( 24 , 715 ) ( 26 , 866 ) ( 28 , 1034 ) ( 30 , 1218 ) ( 32 , 1419 ) ( 34 , 1637 ) ( 36 , 1872 ) ( 38 , 2125 ) ( 40 , 2396 ) ( 42 , 2685 ) ( 44 , 2992 ) ( 46 , 3317 ) ( 48 , 3661 ) ( 50 , 4023 ) ( 52 , 4404 ) ( 54 , 4805 ) ( 56 , 5224 ) ( 58 , 5663 ) ( 60 , 6122 ) ( 62 , 6600 ) ( 64 , 7097 ) ( 66 , 7615 ) ( 68 , 8152 ) ( 70 , 8710 ) ( 72 , 9288 ) ( 74 , 9886 ) ( 76 , 10505 ) ( 78 , 11144 ) ( 80 , 11804 ) ( 82 , 12485 ) ( 84 , 13186 ) ( 86 , 13908 ) ( 88 , 14652 ) ( 90 , 15416 ) ( 92 , 16202 ) ( 94 , 17009 ) ( 96 , 17838 ) ( 98 , 18688 ) ( 100 , 19560 ) ( 102 , 20453 ) ( 104 , 21368 ) ( 106 , 22305 ) ( 108 , 23263 ) ( 110 , 24244 ) ( 112 , 25247 ) ( 114 , 26271 ) ( 116 , 27318 ) ( 118 , 28387 ) ( 120 , 29479 ) ( 122 , 30593 ) ( 124 , 31729 ) ( 126 , 32888 ) ( 128 , 34069 ) ( 130 , 35273 ) ( 132 , 36500 ) ( 134 , 37749 ) ( 136 , 39022 ) ( 138 , 40317 ) ( 140 , 41635 ) ( 142 , 42976 ) ( 144 , 44340 ) ( 146 , 45727 ) ( 148 , 47138 ) ( 150 , 48571 ) ( 152 , 
50028 ) ( 154 , 51508 ) ( 156 , 53012 ) ( 158 , 54539 ) ( 160 , 56089 ) ( 162 , 57663 ) ( 164 , 59261 ) ( 166 , 60882 ) ( 168 , 62527 ) ( 170 , 64196 ) ( 172 , 65888 ) ( 174 , 67604 ) ( 176 , 69344 ) ( 178 , 71108 ) ( 180 , 72896 ) ( 182 , 74708 ) ( 184 , 76544 ) ( 186 , 78404 ) ( 188 , 80289 ) ( 190 , 82197 ) ( 192 , 84130 ) ( 194 , 86086 ) ( 196 , 88068 ) ( 198 , 90073 ) ( 200 , 92103 ) }; \addlegendentry{$2m^2 \log(m)$} \addplot[smooth,mark=., blue] plot coordinates { ( 4 , 32 ) ( 6 , 72 ) ( 8 , 128 ) ( 10 , 200 ) ( 12 , 288 ) ( 14 , 392 ) ( 16 , 512 ) ( 18 , 648 ) ( 20 , 800 ) ( 22 , 968 ) ( 24 , 1152 ) ( 26 , 1352 ) ( 28 , 1568 ) ( 30 , 1800 ) ( 32 , 2048 ) ( 34 , 2312 ) ( 36 , 2592 ) ( 38 , 2888 ) ( 40 , 3200 ) ( 42 , 3528 ) ( 44 , 3872 ) ( 46 , 4232 ) ( 48 , 4608 ) ( 50 , 5000 ) ( 52 , 5408 ) ( 54 , 5832 ) ( 56 , 6272 ) ( 58 , 6728 ) ( 60 , 7200 ) ( 62 , 7688 ) ( 64 , 8192 ) ( 66 , 8712 ) ( 68 , 9248 ) ( 70 , 9800 ) ( 72 , 10368 ) ( 74 , 10952 ) ( 76 , 11552 ) ( 78 , 12168 ) ( 80 , 12800 ) ( 82 , 13448 ) ( 84 , 14112 ) ( 86 , 14792 ) ( 88 , 15488 ) ( 90 , 16200 ) ( 92 , 16928 ) ( 94 , 17672 ) ( 96 , 18432 ) ( 98 , 19208 ) ( 100 , 20000 ) ( 102 , 20808 ) ( 104 , 21632 ) ( 106 , 22472 ) ( 108 , 23328 ) ( 110 , 24200 ) ( 112 , 25088 ) ( 114 , 25992 ) ( 116 , 26912 ) ( 118 , 27848 ) ( 120 , 28800 ) ( 122 , 29768 ) ( 124 , 30752 ) ( 126 , 31752 ) ( 128 , 32768 ) ( 130 , 33800 ) ( 132 , 34848 ) ( 134 , 35912 ) ( 136 , 36992 ) ( 138 , 38088 ) ( 140 , 39200 ) ( 142 , 40328 ) ( 144 , 41472 ) ( 146 , 42632 ) ( 148 , 43808 ) ( 150 , 45000 ) ( 152 , 46208 ) ( 154 , 47432 ) ( 156 , 48672 ) ( 158 , 49928 ) ( 160 , 51200 ) ( 162 , 52488 ) ( 164 , 53792 ) ( 166 , 55112 ) ( 168 , 56448 ) ( 170 , 57800 ) ( 172 , 59168 ) ( 174 , 60552 ) ( 176 , 61952 ) ( 178 , 63368 ) ( 180 , 64800 ) ( 182 , 66248 ) ( 184 , 67712 ) ( 186 , 69192 ) ( 188 , 70688 ) ( 190 , 72200 ) ( 192 , 73728 ) ( 194 , 75272 ) ( 196 , 76832 ) ( 198 , 78408 ) ( 200 , 80000 ) }; \addlegendentry{$8m^2$} 
\addplot[smooth,mark=*, mark size = 0.5] plot coordinates { ( 4 , 62 ) ( 6 , 8 ) ( 8 , 458 ) ( 10 , 152 ) ( 12 , 62 ) ( 14 , 512 ) ( 16 , 1298 ) ( 18 , 52 ) ( 20 , 542 ) ( 22 , 1568 ) ( 24 , 458 ) ( 26 , 4688 ) ( 28 , 1598 ) ( 30 , 152 ) ( 32 , 5014 ) ( 34 , 5228 ) ( 36 , 478 ) ( 38 , 5032 ) ( 40 , 2282 ) ( 42 , 512 ) ( 44 , 6106 ) ( 46 , 8104 ) ( 48 , 1298 ) ( 50 , 3182 ) ( 52 , 8546 ) ( 54 , 1096 ) ( 56 , 8318 ) ( 58 , 10366 ) ( 60 , 542 ) ( 62 , 11416 ) ( 64 , 16126 ) ( 66 , 1568 ) ( 68 , 13718 ) ( 70 , 5002 ) ( 72 , 2834 ) ( 74 , 16046 ) ( 76 , 23426 ) ( 78 , 4688 ) ( 80 , 6200 ) ( 82 , 25616 ) ( 84 , 1598 ) ( 86 , 26782 ) ( 88 , 19274 ) ( 90 , 976 ) ( 92 , 21538 ) ( 94 , 30916 ) ( 96 , 5014 ) ( 98 , 17014 ) ( 100 , 12134 ) ( 102, 5228 ) ( 104, 22886) ( 106, 39842) ( 108, 7142) ( 110, 13316) ( 112, 23038) ( 114, 5032) ( 116, 36326) ( 118,53614 ) ( 120, 2282) ( 122, 39818) ( 124, 42778) ( 126, 7598) ( 128, 48346) ( 130, 19406) ( 132, 6106) ( 134, 52894) ( 136, 55714) ( 138, 8104) ( 140, 14198) ( 142, 54526) ( 144, 19858) ( 146, 54928) ( 148, 70222) ( 150, 3182) ( 152, 62554) ( 154, 29476) ( 156, 8546) ( 158, 51248) ( 160, 31942) ( 162, 9908) ( 164, 67546) ( 166, 65776) ( 168, 8318) ( 170, 42862) ( 172, 104494) ( 174, 10366) ( 176, 62002) ( 178, 88622) ( 180, 7562) ( 182, 33536) ( 184, 88106) ( 186, 11784) ( 188, 82594) ( 190, 34652) ( 192, 16126) ( 194, 96728) ( 196, 56738) ( 198, 13436) ( 200, 46922) }; \end{axis} \end{tikzpicture} \caption{Comparing $E_{\max}(m)$ with quadratic and quadratic times logarithmic growth} \label{fig2} \end{figure} \begin{table} \begin{tabular}{r||rrrrr rrrrr rrrrr} $m$ & 2 & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 & 22 & 24 & 26 & 28 & 30 \\ \hline $\#$ & 0 & 0 & 0 & 3 & 3 & 3 & 6 & 6 & 9 & 11 & 2 & 20 & 10 & 16 & 21 \\ $\%$ & 0.0 & 0.0 & 0.0 & 18.8 & 18.8 & 18.8 & 16.7 & 9.4 & 25.0 & 17.2 & 2.0 & 31.3 & 6.9 & 11.1 & 32.8 \\ \hline \hline $m$ & 32 & 34 & 36 & 38 & 40 & 42 & 44 & 46 & 48 & 50 & 52 & 54 & 56 & 58 & 60 \\ \hline $\#$ 
& 12 & 12 & 33 & 11 & 26 & 44 & 8 & 0 & 68 & 32 & 10 & 75 & 18 & 4 & 97 \\ $\%$ & 4.7 & 4.7 & 22.9 & 3.4 & 10.2 & 30.6 & 2.0 & 0.0 & 26.6 & 8.0 & 1.7 & 23.1 & 3.1 & 0.5 & 37.9 \\ \hline \hline $m$ & 62 & 64 & 66 & 68 & 70 & 72 & 74 & 76 & 78 & 80 & 82 & 84 & 86 & 88 & 90 \\ \hline $\#$ & 8 & 4 & 108 & 24 & 60 & 105 & 4 & 14 & 118 & 42 & 6 & 160 & 14 & 2 & 195 \\ $\%$ & 0.9 & 0.4 & 27.0 & 2.3 & 10.4 & 18.2 & 0.3 & 1.1 & 20.5 & 4.1 & 0.4 & 27.8 & 0.8 & 0.1 & 33.9 \\ \hline \hline $m$ & 92 & 94 & 96 & 98 & 100 & 102 & 104 & 106 & 108 & 110 & 112 & 114 & 116 & 118 & 120 \\ \hline $\#$ & 6 & 14 & 163 & 26 & 18 & 147 & 20 & 6 & 171 & 84 & 12 & 173 & 0 & 6 & 326 \\ $\%$ & 0.3 & 0.7 & 15.9 & 1.5 & 1.1 & 14.4 & 0.9 & 0.2 & 13.2 & 5.2 & 0.5 & 13.3 & 0.0 & 0.2 & 31.8 \\ \hline \hline $m$ & 122 & 124 & 126 & 128 & 130 & 132 & 134 & 136 & 138 & 140 & 142 & 144 & 146 & 148 & 150 \\ \hline $\#$ & 6 & 20 & 286 & 4 & 64 & 297 & 4 & 8 & 194 & 107 & 2 & 241 & 6 & 2 & 422 \\ $\%$ & 0.2 & 0.6 & 22.1 & 0.1 & 2.8 & 18.6 & 0.1 & 0.2 & 10.0 & 4.6 & 0.0 & 10.5 & 0.1 & 0.0 & 26.4 \\ \hline \hline $m$ & 152 & 154 & 156 & 158 & 160 & 162 & 164 & 166 & 168 & 170 & 172 & 174 & 176 & 178 & 180 \\ \hline $\#$ & 4 & 71 & 272 & 0 & 95 & 240 & 6 & 0 & 473 & 100 & 4 & 314 & 8 & 0 & 609 \\ $\%$ & 0.1 & 2.0 & 11.8 & 0.0 & 2.3 & 8.2 & 0.1 & 0.0 & 20.5 & 2.4 & 0.1 & 10.0 & 0.1 & 0.0 & 26.4 \\ \end{tabular} \caption{Counting the number $(a,b)$ for which $|E_{a,b,m}| = 0$} \label{tab2} \end{table} While our numerics are somewhat limited, they suggest that the growth of $E_{\max}(m)$ appears to be strictly slower than that of $m^2 (\log m)^2$ as stated in \cref{conj:asy}, and the true growth rate appears to be closer to $O(m^2)$ or $O(m^2 \log m)$---see \cref{fig2} for an overlay of the graphs of $E_{\max}(m)$ (black dots, with the scale on the left) and $\phi(m)$ ( gray x's, with the scale on the right). 
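The quantity $|E_{a,b,m}|$ can be recomputed directly by brute force. The following Python sketch is ours, not the code behind the paper's numerics, and the finite search bound $N$ is an assumption; it lists the $n \le N$ with $n \equiv a+b \mod m$ that are not of the form $p+q$ with $p \equiv a \mod m$ and $q \equiv b \mod m$. For $m=4$ and $(a,b)=(1,1)$ its largest exception is $62$, matching the value of $E_{\max}(4)$ plotted above.

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def exceptions(a, b, m, N):
    """n <= N with n = a+b (mod m) that are NOT p+q for primes
    p = a (mod m), q = b (mod m)."""
    ps = primes_upto(N)
    pa = [p for p in ps if p % m == a % m]
    qb = [q for q in ps if q % m == b % m]
    sums = {p + q for p in pa for q in qb if p + q <= N}
    return [n for n in range(1, N + 1)
            if n % m == (a + b) % m and n not in sums]

# Example: residues 1, 1 modulo 4, searching up to the (assumed) bound 2000.
exc = exceptions(1, 1, 4, 2000)
```

In this run the exceptional set stabilizes well below the bound, consistent with the tabulated data.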
Finally, we observe that it often happens (under our hypothesis) that, for fixed $m$, at least one admissible pair $(a, b)$ has no exceptions, i.e., every $n \equiv a + b \mod m$ is of the form $p+q$ with $p \equiv a \mod m$ and $q \equiv b \mod m$. Both the number of such pairs $(a, b)$ and the fraction of such pairs out of the total number $\phi(m)^2$ of admissible pairs are tabulated in \cref{tab2}. It appears that $z_m = \# \{ (a, b) \in ((\mathbb Z/m\mathbb Z)^\times)^2 : E_{a, b, m} = \emptyset \}$ tends to grow, at least on average, in several of the columns of \cref{tab2} (e.g., when $m$ is a multiple of 5 or 6). This suggests that $z_m$ is unbounded, and in particular supports \cref{conj:empty}. \end{document}
\section{Semantic Facts}\label{sec:analysis} In this section, we present the automated inference of control- and data-flow dependencies that \textsc{Securify}\xspace employs. The facts inferred in this process are called {\em semantic facts} and are later used for checking security properties. We begin with the background necessary for understanding this analysis: the EVM instruction set and stratified Datalog. We then introduce the semantic facts derived by \textsc{Securify}\xspace and the declarative inference rules, specified in stratified Datalog, used to derive them. \subsection{Background} In this section, we provide the necessary background. \subsubsection{Ethereum Virtual Machine (EVM)}\label{sec:background} Smart contracts are executed on a \emph{blockchain}. A contract executes when a user submits a \emph{transaction} that specifies the contract, the method to run, and the method's arguments (if any). When the transaction is processed, it is added to a new block, which is appended to the blockchain. Contracts can access a volatile memory and non-volatile storage. The EVM instruction set (over which contracts are written) supports a few dozen opcodes. \textsc{Securify}\xspace handles all EVM opcodes; we present the most relevant ones below. Note that many of the opcodes (such as \instruction{push}, \instruction{dup}, etc.) are eliminated when \textsc{Securify}\xspace decompiles the EVM bytecode to its stackless representation. The relevant instructions are: \begin{compactitem} \item Arithmetic operations and comparisons: e.g., \instruction{add}, \instruction{mul}, \instruction{lt}, \instruction{eq}. In the rest of the paper, we write \instruction{op} to denote any of these operations. \item Cryptographic hash functions: e.g., \instruction{sha3}. 
\item Environmental information: e.g., \instruction{balance} returns the balance of a contract, \instruction{caller} is the identity of the transaction sender, \instruction{callvalue} is the amount of ether specified to be transferred by the transaction. \item Block information: e.g., \instruction{number}, \instruction{timestamp}, \instruction{gaslimit}. \item Memory and storage operations: \instruction{mload}, \instruction{mstore}, \instruction{sstore}, \instruction{sload} load/store data from the memory/contract storage. \item System operations: e.g., \instruction{call}, which transfers ether, and takes two arguments: receiver address and amount of ether to transfer (in fact, \instruction{call} takes seven arguments; we consider here only those that are relevant for the rest of the paper). \item Control-flow instructions: e.g., \instruction{goto}, which encodes conditional jumps across instructions. \end{compactitem} For the complete set of instructions, along with their formal description, we refer the reader to~\cite{wood2014ethereum}. \input{datalog} \subsection{Facts and Inference Rules} \textsc{Securify}\xspace first extracts a set of base facts that hold for every instruction. These base facts constitute a Datalog input that is fed to a Datalog program to infer additional facts about the contract. We use the term \emph{semantic facts} to refer to the facts derived by the Datalog program. All program elements that appear in the contract, including instruction labels, variables, fields, string and integer constants, are represented as constants in the Datalog program. \para{Base Facts} The base facts of our inference engine describe the instructions in the contract's control-flow graph (CFG). 
The base facts take the form $\instruction{instr}(L, Y, X_1, \ldots, X_n)$, where \instruction{instr} is the instruction name, $L$ is the instruction's label, $Y$ is the variable storing the instruction's result (if any), and $X_1, \ldots, X_n$ are the variables given to the instruction as arguments (if any). For example, the instruction \labelc{l1: }\code{a = 4} (from Fig.~\ref{fig:overview}) is encoded as $\instruction{assign}(\text{\labelc{l1}}, \code{a}, \code{4})$. Further, the instruction \labelc{l6:} \instruction{sstore}\text{(\code{c}, \code{b})}, where the variable \code{c} is known to be equal to the constant \code{0} at compile time, is encoded as $\instruction{sstore}(\text{\labelc{l6}}, 0, \code{b})$; if the value of the variable \code{c} could not be determined at compile time, then the instruction would be encoded as $\instruction{sstore}(\text{\labelc{l6}}, \top, \code{b})$, where $\top$ is a Datalog constant encoding that the value of \code{c} is unknown. The base facts of consecutive instructions are expressed by a predicate over labels called \relation{Follow}. For every two labels, $L_1$ and $L_2$, whose instructions are consecutive in the CFG (either in the same basic block or in linked basic blocks), we have the base fact $\relation{Follow}(L_1,L_2)$. An example \relation{Follow}~fact derived for the contract shown in Fig.~\ref{fig:overview} is $\relation{Follow}(\text{\labelc{l1}}, \text{\labelc{l2}})$. The join of then/else branches is captured by a predicate $\relation{Join}(L_1,L_2)$, which encodes that the two branches that originate at an instruction $\instruction{goto}(L_1, X, L_3)$, located at label~$L_1$, are joined (i.e., merged into a single path) at label~$L_2$.
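As an illustrative (non-authoritative) sketch of this encoding, the following Python fragment represents base facts as tuples and derives \relation{Follow}~facts from a toy straight-line block; the instruction names and labels are hypothetical simplifications of the decompiler's actual output.

```python
# Hypothetical, simplified base-fact extraction: each instruction becomes a
# tuple (name, label, *operands), and Follow(L1, L2) is emitted for
# consecutive instructions of a single basic block.
TOP = "T"  # stands in for the Datalog constant "unknown value"

def encode(instrs):
    """instrs: list of (label, name, *operands) tuples."""
    facts = [(name, label) + tuple(operands)
             for label, name, *operands in instrs]
    follow = [("Follow", instrs[i][0], instrs[i + 1][0])
              for i in range(len(instrs) - 1)]
    return facts, follow

# Toy block mirroring the l1/l6 examples above.
block = [
    ("l1", "assign", "a", "4"),
    ("l6", "sstore", "0", "b"),   # store offset known to be the constant 0
    ("l7", "sstore", TOP, "x"),   # store offset unknown at compile time
]
facts, follow = encode(block)
```

Running `encode` on the toy block yields the `assign` and `sstore` facts plus the two `Follow` edges linking `l1`, `l6`, and `l7`.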
Using the base facts described above, \textsc{Securify}\xspace computes two kinds of semantic facts: {\em (i)}~flow-dependency predicates, which capture instruction dependencies according to the contract's CFG, and {\em (ii)}~data-dependency predicates; see Fig.~\ref{fig:flow-predicates}. \begin{figure} \small \setlength{\tabcolsep}{2pt} \begin{tabular}{lp{165pt}} \toprule {\bf Semantic fact} & {\bf Intuitive meaning}\\ \midrule \multicolumn{2}{c}{\textit{Flow Dependencies}}\\ \midrule $\relation{MayFollow}(L_1, L_2)$ & Instruction at label~$L_2$ may follow that at label~$L_1$.\\ $\relation{MustFollow}(L_1, L_2)$ & Instruction at label~$L_2$ must follow that at label~$L_1$.\\ \midrule \multicolumn{2}{c}{\textit{Data Dependencies}}\\ \midrule $\relation{MayDepOn}(Y, T)$ & The value of $Y$ may depend on tag $T$.\\ $\relation{Eq}(Y, T)$ & The values of $Y$ and $T$ are equal.\\ $\relation{DetBy}(Y, T)$ & Different values of $T$ yield different values of $Y$.\\ \bottomrule \end{tabular} \caption{The semantic facts: $L_1$ and $L_2$ are labels, $Y$ is a variable, and $T$ is a tag (a variable or a label).} \label{fig:flow-predicates} \end{figure} \para{Flow-Dependency Predicates} The flow predicates we consider are $\relation{MayFollow}$ and $\relation{MustFollow}$, both of which are defined over pairs of labels and inferred from the contract's CFG. Their intuitive meaning (also summarized in Fig.~\ref{fig:flow-predicates}) is: \begin{itemize} \item $\relation{MayFollow}(L_1,L_2)$ holds if $L_1$ and $L_2$ are in the same basic block and $L_2$ follows $L_1$, or if there is a path from the basic block of $L_1$ to the basic block of $L_2$. \item $\relation{MustFollow}(L_1,L_2)$ holds if both are in the same basic block and $L_2$ follows $L_1$, or if every path to the basic block of $L_2$ passes through the basic block of $L_1$.
\end{itemize} To infer the \relation{MayFollow} and \relation{MustFollow} predicates, we use the $\relation{Follow}(L_1,L_2)$ input fact, which holds if $L_2$ immediately follows $L_1$ in the CFG. Namely, the predicate $\relation{MayFollow}$ is defined with the following two Datalog rules: \[ \begin{array}{lll} \relation{MayFollow}(L_1, L_2) &\Leftarrow& \bodyf{\relation{Follow}}(L_1,L_2)\\ \relation{MayFollow}(L_1, L_2) &\Leftarrow& \bodyf{\relation{MayFollow}}(L_1,L_3),\bodyf{\relation{Follow}}(L_3,L_2)\\ \end{array} \] The first rule is interpreted as: if $\bodyf{\relation{Follow}}(L_1,L_2)$ holds (i.e., it is contained in the Datalog input), then the predicate $\relation{MayFollow}(L_1, L_2)$ is derived (i.e., it is added to the fixed point). The second rule is interpreted as: if both $\relation{MayFollow}(L_1, L_3)$ and $\bodyf{\relation{Follow}}(L_3,L_2)$ hold, then $\relation{MayFollow}(L_1, L_2)$ is derived. Note that if $\relation{MayFollow}(L_1, L_2)$ is not derived in the fixed point (at the end of the fixed-point computation), then the instruction at label $L_2$ \emph{does not} appear after the instruction at label $L_1$ \emph{in any execution} of the contract. The inference rules for $\relation{MustFollow}$ are defined similarly, with special attention to the join points in the CFG. \sloppy \para{Data-Dependency Predicates} The dependency predicates we consider are $\relation{MayDepOn}$, $\relation{Eq}$, and $\relation{DetBy}$. Their intuitive meaning (also summarized in Fig.~\ref{fig:flow-predicates}) is: \begin{itemize} \item $\relation{MayDepOn}(Y, T)$ is derived if the value of variable~$Y$ depends on the {\em tag}~$T$. Here, $T$ ranges over tags, which can be a contract variable (e.g., $x$) or an instruction (e.g., \instruction{timestamp}).
For example, $\relation{MayDepOn}(Y, X)$ means that the value of variable $Y$ may change if the value of~$X$ changes, while $\relation{MayDepOn}(Y, \instruction{timestamp})$ means that $Y$ may change if the instruction $\instruction{timestamp}$ returns a different value. \item $\relation{Eq}(Y, T)$ indicates that the values of $Y$ and $T$ are identical. For example, given the fact $\instruction{assign}(\labelc{l}, x, \instruction{caller})$, which stores the sender's address in variable $x$, we derive $\relation{Eq}(x, \instruction{caller})$. \item $\relation{DetBy}(Y, T)$ indicates that a different value of~$T$ guarantees a different value of~$Y$. For example, $\relation{DetBy}(x,\instruction{caller})$ is derived from the fact $\instruction{sha3}(\labelc{l}, \code{x}, \code{start}, \code{len})$, which returns the hash of the memory segment starting at offset \code{start} and ending at offset $\code{start} + \code{len}$, if any part of this memory segment is determined by \instruction{caller}. Note that $\relation{Eq}(Y,T)$ implies that $\relation{DetBy}(Y,T)$ also holds. \end{itemize} \input{infrules} The Datalog rules defining these data-dependency predicates are given in Fig.~\ref{fig:inference-rules}. To avoid clutter in the rules, we use the wildcard symbol (\code{_}) in place of variables that appear only once in a rule; for example, we write $\relation{MayDepOn}(Y, X)\Leftarrow \instruction{assign}(\_, Y, X)$ instead of $\relation{MayDepOn}(Y, X)\Leftarrow \instruction{assign}(L, Y, X)$. The rules rely on additional predicates: \relation{isConst}, \relation{MemTag} (and, similarly, \relation{StorageTag}, which is omitted from Fig.~\ref{fig:inference-rules}), and \relation{Taint}. We briefly explain the meaning of these predicates and how they are derived below. \begin{itemize} \item The predicate $\relation{isConst}(O)$ holds if $O$ is a constant that appears in the contract.
For example, the fact $\relation{isConst}(\code{0})$ is added to the Datalog input derived for the contract in Fig.~\ref{fig:overview}. \item The predicate $\relation{MemTag}(L, O, T)$ (and similarly \relation{StorageTag}) defines that, at label $L$, the value at offset $O$ in the memory (or storage) is assigned tag $T$. It is defined with three rules. The first rule encodes that writing a variable $X$ tagged with~$T$ to a constant (i.e., known) offset~$O$ at label~$L$ results in tagging the memory offset~$O$ at label~$L$ with tag~$T$. The second rule handles the case when the offset is unknown, in which case all possible offsets, captured via the constant~$\top$, are assigned tag~$T$. The third rule propagates the tags to the following instructions, until reaching an instruction that reassigns that memory location (captured by the predicate $\relation{ReassignMem}$). \item The predicate $\relation{Taint}(L_1,L_2,X)$ encodes that the execution of the instruction at label~$L_2$ depends on the value of~$X$, where~$X$ is the condition of a \instruction{goto} instruction at label~$L_1$. The first two rules defining $\relation{Taint}(L_1,L_2,X)$ taint the two branches that originate at a \instruction{goto} instruction at label~$L_1$ with the condition~$X$. Finally, the third rule propagates the tag~$X$ along the instructions of the two branches until they are merged. \end{itemize} $\relation{MayDepOn}(X,T)$ defines that variable $X$ may have tag $T$. The first rule defines that assigning a variable $X$ to $Y$ results in tagging $Y$ with $X$. The second rule propagates any tags of~$X$ to the assigned variable~$Y$. The third rule propagates tags over operations with tagged variables. The three rules with \instruction{mload} instructions propagate tags from memory to variables. The first \instruction{mload} rule defines that when loading data from a constant offset~$O$, the tags associated with that offset are propagated to the output variable~$Y$.
The second \instruction{mload} rule states that if the offset is unknown, then all tags of the memory are propagated to the output variable~$Y$. Finally, the third \instruction{mload} rule propagates tags that are written to unknown offsets (identified by~$\top$). The final rule defines that if the execution of an $\instruction{assign}(L,Y,\_)$ instruction depends on a variable~$X$ (i.e., the label~$L$ is tainted with the variable~$X$), then all tags assigned to~$X$ are propagated to the output variable~$Y$. We remark that the rules for inferring $\relation{Eq}$ and $\relation{DetBy}$ predicates are defined in a similar way and are therefore omitted.
\section{Conclusion}\label{sec:conclusion} We presented \textsc{Securify}\xspace, a new lightweight and scalable verifier for Ethereum smart contracts. \textsc{Securify}\xspace leverages the domain-specific insight that violations of many practical properties for smart contracts also violate simpler properties, which are significantly easier to check in a purely automated way. Based on this insight, we devised compliance and violation patterns that can effectively prove whether real-world contracts are safe/unsafe with respect to relevant properties. Overall, \textsc{Securify}\xspace enjoys several important benefits: {\em (i)}~it analyzes all contract behaviors to avoid undesirable false negatives; {\em (ii)}~it reduces the user effort in classifying warnings into true positives and false alarms by guaranteeing that certain behaviors are actual errors; {\em (iii)}~it supports a new domain-specific language that enables users to express new vulnerability patterns as they emerge; finally, {\em (iv)}~its analysis pipeline -- from bytecode decompilation and optimizations to the checking of patterns -- is fully automated using scalable, off-the-shelf Datalog solvers. \subsubsection{Stratified Datalog} Stratified Datalog is a declarative logic language in which one writes \emph{facts} (predicates) and \emph{rules} to infer further facts. We next briefly overview its syntax and semantics. \para{Syntax} We present Datalog's syntax in Fig.~\ref{fig:datalog}. A Datalog program consists of one or more rules, denoted $\overline{r}$. A rule $r$ consists of a head $a$ and a body $\overline{l}$, a comma-separated list of literals. The head, also called an atom, is a predicate over zero or more comma-separated terms, denoted~$\overline{t}$.
A literal $l$ is an atom or its negation. As a convention, we write Datalog variables in upper case and constants in lower case. A Datalog program is {\em well-formed} if for any rule $a\Leftarrow \overline{l}$, we have $\getvars{a}\subseteq \getvars{\overline{l}}$, where $\getvars{\overline{l}}$ returns the set of variables in $\overline{l}$. A Datalog program~$P$ is {\em stratified} if its rules can be partitioned into strata $P_1, \ldots, P_n$ such that if a predicate $p$ occurs in a positive (negative) literal in the body of a rule in $P_i$, then all rules with $p$ in their heads are in a stratum $P_j$ with $j \leq i$ ($j< i$). Stratification ensures that predicates appearing in negative literals are fully defined in lower strata. \begin{figure}[t] \centering \setlength{\tabcolsep}{2pt} \renewcommand{\arraystretch}{1} \begin{tabular}{rrclrrclrrcl} {\em (Program)} & $P$ & $::=$ & $\overline{r}$ \hspace{30pt} & {\em (Predicates)} &$p, q$ & $\in$ & $\mathcal{P}$\\ {\em (Rule)} & $r$ & $::=$ & $a \Leftarrow \overline{l}$ & {\em (Term)} & $t$ & $\in$ & $\mathcal{V} \cup \mathcal{C}$\\ {\em (Atom)} & $a$ & $::=$ & $p(\overline{t})$ & {\em (Datalog variables)} & $X, Y$ & $\in $ & $\mathcal{V}$\\ {\em (Literal)} & $l$ & $::=$ & $a\mid \neg a$ & {\em (Constants)} & $x, y$ & $\in$ & $\mathcal{C}$\\ \end{tabular}\\ \caption{Syntax of stratified Datalog.} \label{fig:datalog} \end{figure} \para{Semantics} Let $\mathcal{A} = \{ p(\overline{t})\mid \overline{t}\subseteq \mathcal{C} \}$ (where $\overline{t}$ is a comma-separated list of terms) denote the set of all ground (i.e., variable-free) atoms; we refer to these as {\em facts}. An interpretation $A\subseteq \mathcal{A}$ is a set of facts. The complete lattice $(\mathcal{P}(\mathcal{A}), \subseteq, \cap, \cup, \emptyset, \mathcal{A})$ partially orders the set of interpretations $\mathcal{P}(\mathcal{A})$.
Given a substitution $\sigma\in \mathcal{V}\to \mathcal{C}$, mapping variables to constants, and an atom~$a$, we write $\sigma(a)$ for the fact obtained by replacing the variables in $a$ according to $\sigma$. For example, $\sigma(p(X))$ returns the fact $p(\sigma(X))$. Given a program~$P$, its consequence operator $T_P\in \mathcal{P}(\mathcal{A})\to \mathcal{P}(\mathcal{A})$ is defined as: \[ T_P(A) = \{\sigma(a)\mid (a\Leftarrow l_1\ldots l_n) \in P, \forall l_i\in \overline{l}.\ A\vdash \sigma(l_i)\} \] where $A \vdash \sigma(a)$ if $\sigma(a)\in A$ and $A \vdash \sigma(\neg a)$ if $\sigma(a)\not\in A$. An input for $P$ is a set of facts constructed using $P$'s extensional predicates, i.e., those that appear only in the rule bodies. Let $P$ be a program with strata $P_1, \ldots, P_n$ and $I$ be an input for $P$. The model of $P$ for $I$, denoted by $\sem{P}_I$, is $M_n$, where $M_0 = I$ and $M_i=\bigcap \{A\in {\sf fp}\ T_{P_i}\mid M_{i-1}\subseteq A\}$ is the smallest fixed point of $T_{P_i}$ that is greater than or equal to the lower stratum's model $M_{i-1}$. 
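As a concrete (non-authoritative) illustration of these semantics, the following Python sketch implements naive bottom-up evaluation of a single positive stratum: it applies the consequence operator repeatedly until a fixed point is reached, here on the \relation{MayFollow} rules shown earlier. The tuple-based rule encoding is our own simplification, not the input format of an actual Datalog solver.

```python
# Naive bottom-up evaluation of one positive Datalog stratum: repeatedly apply
# the consequence operator T_P until no new facts are derived (the least fixed
# point above the input I). A fact/literal is a tuple (predicate, term, ...);
# terms starting with an uppercase letter are variables.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(literal, fact, subst):
    """Extend the substitution so that literal matches fact, or return None."""
    if literal[0] != fact[0] or len(literal) != len(fact):
        return None
    subst = dict(subst)
    for t, c in zip(literal[1:], fact[1:]):
        if is_var(t):
            if subst.setdefault(t, c) != c:
                return None
        elif t != c:
            return None
    return subst

def fixpoint(rules, facts):
    facts = set(facts)
    while True:
        derived = set()
        for head, body in rules:
            substs = [{}]
            for lit in body:  # join the body literals left to right
                substs = [s2 for s in substs for f in facts
                          for s2 in [unify(lit, f, s)] if s2 is not None]
            for s in substs:
                derived.add((head[0],) + tuple(s.get(t, t) for t in head[1:]))
        if derived <= facts:  # T_P added nothing new: fixed point reached
            return facts
        facts |= derived

# MayFollow as the transitive closure of Follow (cf. the two rules earlier).
rules = [
    (("MayFollow", "L1", "L2"), [("Follow", "L1", "L2")]),
    (("MayFollow", "L1", "L2"),
     [("MayFollow", "L1", "L3"), ("Follow", "L3", "L2")]),
]
I = {("Follow", "l1", "l2"), ("Follow", "l2", "l3")}
model = fixpoint(rules, I)
```

On the two-edge input, the evaluator derives the three expected \relation{MayFollow}~facts and then stabilizes, mirroring the fixed-point construction $M_i$ above for a stratum without negation.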
\section{Evaluation}\label{sec:evaluation} \begin{figure} \setlength{\tabcolsep}{8pt} \begin{tabular}{lrr} \toprule & {\bf EVM dataset} & {\bf Solidity dataset}\\ \midrule \# Contracts & $24,594$ & $100$\\ \# \instruction{call} instructions & $46,106$ & $67$\\ \# \instruction{sstore} instructions & $56,346$ & $297$\\ \bottomrule \end{tabular} \caption{Statistics of the two Ethereum datasets} \vspace{-10pt} \label{fig:datasets} \end{figure} To evaluate \textsc{Securify}\xspace, we conducted the following experiments: {\em (i)}~evaluated \textsc{Securify}\xspace's effectiveness in proving the correctness of and discovering violations in real-world contracts; {\em (ii)}~ manually inspected \textsc{Securify}\xspace's results (i.e., reported violations and warnings) on smart contracts whose source code had been uploaded to \textsc{Securify}\xspace's public interface; {\em (iii)}~ compared \textsc{Securify}\xspace to Oyente~\cite{luu2016making} and Mythril~\cite{mythril}, two smart contract checkers based on symbolic execution; {\em (iv)}~ measured the success of \textsc{Securify}\xspace's decompiler in resolving memory and storage offsets; {\em (v)}~ measured \textsc{Securify}\xspace's time and memory consumption. \para{Datasets} \label{sec:dataset} We used two datasets of smart contracts to evaluate \textsc{Securify}\xspace. Our first dataset, dubbed {\em EVM dataset}, consists of $24,594$ smart contracts obtained by parsing create transactions using the parity client~\cite{parityclient}. Using create transactions, we obtained the EVM bytecode of these smart contracts. Our second dataset, dubbed {\em Solidity dataset}, consists of $100$ smart contracts written in Solidity which were uploaded to \textsc{Securify}\xspace's public interface. To avoid bias, we selected the first $100$ contracts in alphabetical order uploaded in $2018$. To simplify manual inspection, we restricted our selection to contracts with up to $200$ lines of Solidity code. 
We give relevant statistics on the two datasets in Fig.~\ref{fig:datasets}. Note that the number of contracts defines the relevant checks for the ether liquidity (LQ) property, the number of \instruction{sstore} instructions defines the relevant instructions for the restricted writes (RW) and validated arguments (VA) properties, and the number of \instruction{call} instructions defines the relevant instructions for the remaining properties. \begin{figure} \includegraphics[width=1\columnwidth]{figures/all-contracts} \caption{\textsc{Securify}\xspace results on the EVM dataset. The violation and compliance segments indicate instructions that are proved to be violations/safe for each security property.} \label{fig:all-contracts} \end{figure} \para{Security Analysis of Real-World Smart Contracts} In this task, we evaluate \textsc{Securify}\xspace's effectiveness in proving security properties (i.e., matching a compliance pattern) and finding violations (i.e., matching a violation pattern) in real-world contracts. To this end, we ran \textsc{Securify}\xspace on all smart contracts contained in our EVM dataset and measured the fraction of violations, warnings, and compliances reported by \textsc{Securify}\xspace. Fig.~\ref{fig:all-contracts} summarizes the results. The figure shows one bar for each security property. Each bar has three segments: {\em (i) violations}, which shows the fraction of instructions that have matched a violation pattern of the given property, {\em (ii) warnings}, which shows the fraction of instructions that have matched neither a violation nor a compliance pattern of the given property, and {\em (iii) compliance}, which shows the fraction of instructions that have matched a compliance pattern of the given property. Note that the three segments add up to~$100\%$. For example, consider the no writes after calls (NW) property.
The data shows that $6.5\%$ of the \instruction{call} instructions violate the property, $90.9\%$ are proved to be compliant, and the remaining $2.6\%$ are reported as warnings. On average across all security properties, \textsc{Securify}\xspace successfully proves that $55.5\%$ of the relevant instructions are safe, $29.3\%$ are definite violations, and it reports warnings for the remaining $15.2\%$. Further, $65.9\%$ of all instructions that failed to match a compliance pattern (and hence may indicate an error) are successfully proved to be definite violations (using the violation patterns). This indicates a reduction of $65.9\%$ in the number of instructions that users must manually classify into true warnings and false warnings. We report on the precise breakdown between false and true warnings in our next experiment. Overall, our results indicate that \textsc{Securify}\xspace's compliance and violation patterns are expressive enough to prove and, respectively, disprove relevant security properties. Further, we note that since \textsc{Securify}\xspace is extensible, one can further refine \textsc{Securify}\xspace's results by extending it with additional patterns that would convert more warnings into violations and compliances. This would benefit some of the security properties that are harder to prove or disprove (such as restricted writes). \begin{figure} \includegraphics[width=1\columnwidth]{figures/securify-all-patterns} \caption{\textsc{Securify}\xspace results on the Solidity dataset. The warnings are classified into true and false warnings based on whether they indicate a security issue or not.} \label{fig:accuracy} \end{figure} \para{Manual Inspection of Results} In our second experiment, we manually inspected \textsc{Securify}\xspace's reports to gain a better understanding of its results. To this end, we ran \textsc{Securify}\xspace on all contracts contained in our Solidity dataset.
We then manually classified each reported warning as a {\em true warning} if it indicates a violation of the security property; otherwise, we classified it as a {\em false warning}. We also inspected and confirmed the correctness of all reported violations and compliances. Fig.~\ref{fig:accuracy} summarizes our results. As before, the figure shows one bar for each security property. In addition to the violation and compliance segments, we partition the segment with reported warnings into {\em true warnings} and {\em false warnings}. Consider the handled exception (HE) property. The data shows that \textsc{Securify}\xspace successfully proves that $29.9\%$ of the \instruction{call} instructions have return values that are {\em not} checked by the code (indicating a violation of the property). Further, \textsc{Securify}\xspace proves that these error values {\em are} checked for the remaining $70.1\%$ of \instruction{call} instructions. \textsc{Securify}\xspace does not issue any warnings for this property because it matched at least one of the patterns for each of the \instruction{call} instructions. We remark that the number of security issues discovered in the Solidity dataset is higher relative to those found in the EVM dataset. We believe this is because the two datasets come from different distributions: the Solidity dataset consists of recent contracts (uploaded in $2018$) that are still in the development stage, whereas the EVM dataset contains all contracts deployed on the blockchain. Further, users often deliberately uploaded vulnerable contracts to experiment with and evaluate \textsc{Securify}\xspace. An exception is the handled exception (HE) property, which has more violations in the EVM dataset than in the Solidity dataset.
We believe this is because developers now use the \code{transfer()} statement for ether transfers, which handles errors by default and was specifically introduced to avoid issues due to unhandled exceptions. We observe that the effectiveness of the patterns varies across properties, which is expected as some properties are more difficult to prove/disprove than others. For example, the restricted transfer property (RT) and the three transaction ordering dependence properties (TT, TR, and TA) are hard to prove correct and result in a relatively high number of false warnings (roughly half of the warnings are false warnings). However, for other security properties, such as no writes after calls (NW) and handled exception (HE), all warnings issued by \textsc{Securify}\xspace are true warnings, indicating that the corresponding compliance patterns precisely match contracts that satisfy these properties. \begin{figure} \includegraphics[width=1\columnwidth]{figures/comparison} \caption{Comparing \textsc{Securify}\xspace to Oyente and Mythril} \label{fig:compare} \end{figure} \para{Comparing \textsc{Securify}\xspace to Symbolic Security Checkers} We now compare \textsc{Securify}\xspace to two recent open-source security checkers based on symbolic execution -- Oyente~\cite{luu2016making} and Mythril~\cite{mythril}. To compare the three systems, we ran the latest versions of Oyente and Mythril against all contracts in our Solidity dataset, for which we had already manually classified all warnings into true and false warnings.
Oyente supports three of \textsc{Securify}\xspace's security properties: TOD, which checks the disjunction of the TOD receiver and TOD amount properties, reentrancy (called no writes after calls\footnote{We remark that to ensure the absence of storage writes after \instruction{call} instructions, Oyente checks that the user cannot re-enter and reach the same \instruction{call} instruction.} in \textsc{Securify}\xspace), and handled exceptions. Mythril also supports the reentrancy and handled exception properties, and in addition, implements a check of the restricted transfer property. Our results are summarized in Fig.~\ref{fig:compare}. For \textsc{Securify}\xspace, we report the fraction of reported violations, true warnings, and false warnings. Since both Oyente and Mythril may report false positives (Oyente has false positives because its checks do not imply a contract vulnerability, as shown in~\cite{GrishchenkoMS18}), we treat all bugs listed by them as {\em warnings}, as they must be classified by the user into true warnings and false warnings. Note that, unlike \textsc{Securify}\xspace, Oyente and Mythril do not report definite violations, i.e., results that are guaranteed to violate security properties. Since Oyente and Mythril explore a subset of all contract behaviors, they may fail to report certain vulnerabilities, and we report these as {\em unreported vulnerabilities} in the figure. We depict true warnings and violations above the $X$-axis (to indicate desirable results), and we plot false warnings and unreported vulnerabilities below the $X$-axis (to indicate undesirable results). We observe that for all properties except reentrancy, Oyente and Mythril fail to report some actual vulnerabilities. Oyente fails to report $72.9\%$ of TOD violations, and Mythril fails to report $65.6\%$ of the restricted transfer violations. Overall, the two symbolic tools fail to report vulnerabilities across the considered security properties.
\input{table-offsets} \para{Resolving Storage/Memory Offsets} We report on \textsc{Securify}\xspace's partial evaluation optimization for resolving memory and storage offsets. Fig.~\ref{tab:offsets} shows the total number of \instruction{mload}, \instruction{mstore}, \instruction{sload} and \instruction{sstore} instructions found in our EVM dataset, along with the number of resolved offsets. On average across all four instructions, partial evaluation correctly resolves $72.6\%$ of the offsets. This indicates that \textsc{Securify}\xspace can often infer the precise writes to storage/memory, thereby improving the precision of the subsequent analysis. Memory offsets are more often resolved than storage offsets, as the latter often depend on user-provided inputs. \para{Time and Memory Consumption} \textsc{Securify}\xspace terminates for all contracts and takes on average $30$ seconds per contract (to check all compliance and violation patterns). Oyente and Mythril have similar running times when used with default settings (which do not provide full coverage). To improve the coverage of these tools, users must increase the constraint solving timeouts and loop bounds, which in turn results in increased running times (especially for larger contracts). The memory consumption of \textsc{Securify}\xspace is determined by the size of the fixed-point analysis. In $95\%$ of cases, the consumption was below $10$MB, and in the rest it was below $1$GB. \para{Summary} Overall, our results indicate that \textsc{Securify}\xspace's patterns are effective in finding violations and establishing the correctness of contracts. Going further, we see two relevant items for future work. First, it would be interesting to integrate \textsc{Securify}\xspace with existing frameworks that provide formal EVM semantics, such as \cite{kevm,GrishchenkoMS18}, as a way to further validate \textsc{Securify}\xspace's analysis and patterns, and to formally prove the guarantees it provides.
Second, we can leverage \textsc{Securify}\xspace to improve existing symbolic checkers, such as Oyente and Mythril. For example, \textsc{Securify}\xspace's compliance patterns can be used to reduce the false positive rate of these tools. \section{Implementation}\label{sec:implementation} In this section, we detail the implementation of \textsc{Securify}\xspace. \para{Decompiler} The decompiler transforms the EVM bytecode provided as input into the corresponding assembly instructions, as defined in~\cite{wood2014ethereum}. Next, it converts the EVM instructions into an SSA form. The SSA instructions are identical to the EVM instruction set except that they exclude stack operations (e.g., \instruction{pop}, \instruction{push}, etc.). Our conversion method is similar to the one described in~\cite{Proebsting97krakatoa:decompilation,Vallee-Rai98jimple:simplifying}. The decompiler constructs the control flow graph (CFG) on top of the decompiled instructions. \para{Optimizations} \textsc{Securify}\xspace employs three optimizations over the CFG, which improve the precision of its analysis: \begin{enumerate} \item[\em (i)] {\em Unused instructions}, which eliminates any instructions whose results are not used. On average, this optimization reduces the contract's instructions by $44$\% and improves the scalability and precision of the subsequent analysis. \item[\em (ii)] {\em Partial evaluation}, which propagates constant values along computations~\cite{Futamura:1999:PEC:609149.609205}. This step improves the precision of storage and memory analysis (e.g., \relation{MemTag}). As we show in our evaluation, partial evaluation resolves over $70\%$ of the offsets that appear in storage/memory instructions. \item[\em (iii)] {\em Method inlining}, which improves the precision of the static analysis by making it context sensitive. 
\end{enumerate} \para{Inference of Semantic Facts} \textsc{Securify}\xspace derives semantic facts using inference rules specified in stratified Datalog, using the Souffle Datalog solver~\cite{souffle} to efficiently compute a fixed-point of all facts. We report on concrete numbers in Section~\ref{sec:evaluation}. \para{Evaluating Patterns} To check the security patterns, \textsc{Securify}\xspace iterates over the instructions to handle the \emph{\relation{all}\ } and \emph{\relation{some}\ } quantifiers in the patterns. Then, to check inferred facts, it directly queries the fixed-point computed by the Datalog solver. If a violation pattern is matched, \textsc{Securify}\xspace reports which instructions are identified as vulnerable, to provide error-localization for users. If no pattern is matched, \textsc{Securify}\xspace reports a warning, to indicate that an instruction may or may not be vulnerable. \section{Introduction} \label{sec:intro} Blockchain platforms, such as Nakamoto's Bitcoin~\cite{nakamoto2008bitcoin}, enable the trade of crypto-currencies between mutually mistrusting parties. To eliminate the need for trust, Nakamoto designed a peer-to-peer network that enables its peers to agree on the trading transactions. Buterin~\cite{whitepaper} identified the applicability of decentralized computation beyond trading, and designed the Ethereum blockchain which supports the execution of programs, called smart contracts, written in Turing-complete languages. Smart contracts have been shown to be applicable in many domains, including the financial industry~\cite{blockchaininsurance}, the public sector~\cite{recordkeeping}, and cross-industry applications~\cite{ethlance}. The increased adoption of smart contracts demands strong security guarantees. Unfortunately, it is challenging to create smart contracts that are free of security bugs.
As a consequence, critical vulnerabilities in smart contracts are discovered and exploited every few months~\cite{kingofether,thedao,etherdice,bylica17,paritybug,paritybug2}. In turn, these exploits have led to losses worth millions of USD in the past few years: $150$M were stolen from the popular DAO contract in June $2016$~\cite{thedao}, $30$M were stolen from the widely-used Parity multi-signature wallet in July $2017$~\cite{paritybug}, and a few months later $280$M were frozen due to a bug in the very same wallet~\cite{parity2}. It is apparent that effective security checkers for smart contracts are urgently needed. \para{Key Challenges} The main challenge in creating an effective security analyzer for smart contracts is the Turing-completeness of the programming language, which renders automated verification of arbitrary properties undecidable. To address this issue, current automated solutions tend to rely on fairly generic testing and symbolic execution methods (e.g., Oyente~\cite{luu2016making} and Mythril~\cite{mythril}). While useful in some settings, these approaches come with several drawbacks: {\em (i)}~they can miss critical violations (due to under-approximation), {\em (ii)}~they can also produce false positives (due to imprecise modeling of domain-specific elements~\cite{GrishchenkoMS18}), and {\em (iii)}~they can fail to achieve sufficient code coverage on realistic contracts (Oyente achieves only $20.2\%$ coverage on the popular Parity wallet~\cite{walletlibrary}). Overall, these drawbacks place a significant burden on their users, who must inspect all reports for false alarms and worry about unreported vulnerabilities. Indeed, many security properties for smart contracts are inherently difficult to reason about directly. A viable path to addressing these challenges is building an automated verifier that targets important domain-specific properties~\cite{best-practices}.
For example, recent work~\cite{GrossmanAGMRSZ18} focuses solely on identifying reentrancy issues in smart contracts~\cite{reentrancy-blog}. \para{Domain-Specific Insight} A key observation of this work is that it is often possible to devise precise patterns, expressed on the contract's data-flow graph, such that a match of the pattern implies either a violation or satisfaction of the original security property. For example, $90.9\%$ of all calls in Ethereum smart contracts can be proved free of the infamous DAO bug~\cite{thedao} by matching a pattern stating that calls are not followed by writes to storage. The reason why it is possible to establish such a correspondence is that real-world contracts that violate the original property tend to also violate a much simpler property (captured by the pattern). Indeed, in terms of verification, a key benefit of working with patterns, instead of their corresponding property, is that patterns are substantially more amenable to automated reasoning. \para{\textsc{Securify}\xspace: Domain-specific Verifier} Based on the above insight, we developed \textsc{Securify}\xspace, a lightweight and scalable security verifier for Ethereum smart contracts. The key technical idea is to define two kinds of patterns that mirror a given security property: {\em (i)} compliance patterns, which imply the satisfaction of the property, and {\em (ii)} violation patterns, which imply its negation. To check these patterns, \textsc{Securify}\xspace symbolically encodes the dependence graph of the contract in stratified Datalog~\cite{Ullman:1988} and leverages off-the-shelf scalable Datalog solvers to efficiently (typically within seconds) analyze the code. To ensure extensibility, all patterns are expressed in a designated domain-specific language (DSL).
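The interplay between the two kinds of patterns induces a three-way classification of each relevant instruction. The following Python sketch is purely illustrative (it is not Securify's implementation, and the pattern predicates and fact names are hypothetical stand-ins for the Datalog patterns):

```python
# Minimal sketch of the three-way classification induced by compliance
# and violation patterns. Each "pattern" is modeled as a predicate over
# an instruction's inferred semantic facts (names are illustrative).

def classify(instruction_facts, compliance, violation):
    """Classify one instruction given its inferred facts.

    A violation match proves the property is broken; a compliance match
    proves it holds; otherwise the instruction needs manual review.
    """
    if violation(instruction_facts):
        return "violation"   # definite violation: no manual inspection needed
    if compliance(instruction_facts):
        return "compliant"   # definitely safe
    return "warning"         # neither pattern matched

# Toy facts for the "no writes after calls" (NW) idea: a call is
# compliant if no sstore may follow it, and a violation if an sstore
# definitely follows it.
nw_compliance = lambda f: not f.get("may_follow_sstore", True)
nw_violation = lambda f: f.get("must_follow_sstore", False)
```

Note that the warning case is what keeps the analysis sound in both directions: an instruction is reported safe or unsafe only when a pattern proves it so.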
\begin{figure} \includegraphics[width=1\columnwidth]{figures/intro-figure2.pdf} \caption{\textsc{Securify}\xspace's approach is based on automatic inference of semantic program facts followed by checking of compliance and violation security patterns over these facts.} \label{fig:intro-flow} \end{figure} In Fig.~\ref{fig:intro-flow}, we illustrate the analysis flow of \textsc{Securify}\xspace. Starting with the contract's bytecode (or source code, which can be compiled to bytecode), \textsc{Securify}\xspace derives semantic facts inferred by analyzing the contract's dependency graph and uses these facts to check a set of compliance and violation patterns. Based on the outcome of these checks, \textsc{Securify}\xspace classifies all contract behaviors into violations~({\color{violation}$\Diamondblack$}\xspace), warnings~({\large\color{warning}$\blacktriangle$}\xspace), and compliant~({\small\color{compliance}$\blacksquare$}\xspace), as abstractly illustrated in Fig.~\ref{fig:intro-patterns}. Here, the large box depicts all contract behaviors, partitioned into safe (which satisfy the property) and unsafe ones (which violate it). \textsc{Securify}\xspace reports as violations ({\color{violation}$\Diamondblack$}\xspace) all behaviors matching the violation pattern, and as warnings ({\large\color{warning}$\blacktriangle$}\xspace) all remaining behaviors not matched by the compliance pattern. \para{Reduced Manual Effort} Compared to existing symbolic analyzers for smart contracts, \textsc{Securify}\xspace reduces the required effort to inspect reports in two ways. First, existing analyzers do not report definite violations (they conflate {\color{violation}$\Diamondblack$}\xspace and {\large\color{warning}$\blacktriangle$}\xspace), and thus require users to manually classify {\em all} reported vulnerabilities into true positives (found in the \colorbox{red!14}{red box}) or false positives (found in the \colorbox{green!12}{green box}). 
In contrast, \textsc{Securify}\xspace automatically classifies behaviors guaranteed to be violations (marked with {\color{violation}$\Diamondblack$}\xspace). Hence, the user only needs to manually classify the warnings ({\large\color{warning}$\blacktriangle$}\xspace) as true or false positives. As we show in our evaluation, the approach of using both violation and compliance patterns reduces the number of warnings a user needs to inspect manually by $65.9\%$, and even up to $99.4\%$ for some properties. Second, existing analyzers fail to report unsafe behaviors (sometimes up to $72.9\%$), meaning users may have to manually inspect portions of the code that are not covered by the analyzer. In contrast, \textsc{Securify}\xspace reports all unsafe behaviors. \para{Auditing Smart Contracts} \textsc{Securify}\xspace is publicly available at \url{https://securify.ch} and has analyzed $> 18K$ contracts submitted by its users. Over the last year, we have also extensively used \textsc{Securify}\xspace to perform 38 detailed commercial audits of smart contracts (other auditors have also used \textsc{Securify}\xspace), iteratively improving the approach and adding more patterns. Indeed, the design and implementation of \textsc{Securify}\xspace have greatly benefited from this experience. In terms of the actual audit process, our approach (and, we believe, that of other auditors) has been to run all available tools and then to manually inspect the reported vulnerabilities so as to assess their severity. For instance, while \textsc{Securify}\xspace covers a number of important properties (the full version supports $18$ properties), symbolic execution tools have better support for numerical properties (e.g., overflow). Our finding was that \textsc{Securify}\xspace was particularly helpful in auditing larger contracts, which are challenging to inspect with existing solutions for the reasons listed earlier.
Overall, we believe \textsc{Securify}\xspace is a pragmatic and valuable point in the space of analyzing smart contracts due to its careful balance of scalability, guarantees, and precision. \para{Main Contributions} To summarize, our main contributions are: \begin{itemize}[nosep,nolistsep] \item A decompiler that symbolically encodes the dependency graph of Ethereum contracts in Datalog (Section~\ref{sec:analysis}). \item A set of compliance and violation security patterns that capture sufficient conditions to prove and disprove practical security properties (Section~\ref{sec:patterns}). \item An end-to-end implementation, called \textsc{Securify}\xspace, which fully automates the analysis of contracts (Section~\ref{sec:implementation}). \item An extensive evaluation over existing Ethereum smart contracts showing that \textsc{Securify}\xspace can effectively prove the correctness of contracts and discover violations (Section~\ref{sec:evaluation}). \end{itemize} \begin{figure} \includegraphics{figures/onlypatterns.pdf} \caption{ \textsc{Securify}\xspace uses compliance and violation patterns to guarantee that certain behaviors are safe and, respectively, unsafe. The remaining behaviors are reported as warnings (to avoid missing errors).} \label{fig:intro-patterns} \end{figure} \section{Motivating Examples}\label{sec:motivation} In this section, we motivate the problem we address through two real-world security issues that affected $\approx 200$ million worth of USD in $2017$. We describe the underlying security properties and the challenges involved in proving whether a contract satisfies/violates them. We also describe how \textsc{Securify}\xspace discovers both vulnerabilities with appropriate violation patterns. \subsection{Stealing Ether} \label{sec:stealing-ownership} In Fig.~\ref{fig:bug1}, we show an implementation of a wallet. The code is written in Solidity~\cite{solidity-lang}, a popular high-level language for writing Ethereum smart contracts.
We remark that this wallet is a simplified version of Parity's multi-signature wallet, which allowed an attacker to steal $30$ million worth of USD in July 2017. The wallet has a field \code{owner}, which stores the address of the wallet's owner. Further, the contract has a function \code{initWallet}, which takes as argument an address \code{_owner} and initializes the field \code{owner} with it. This function is called by the constructor (not shown in Fig.~\ref{fig:bug1}), and was assumed not to be accessible otherwise~\cite{paritybug}. Finally, the contract has a function \code{withdraw}, which takes as argument an unsigned integer \code{_amount}. The function checks if the transaction sender's address (returned by \code{msg.sender}) equals that of the contract's owner (stored in the field \code{owner}). If this check succeeds, it transfers \code{_amount} ether to the owner with the statement \code{owner.transfer(_amount)}; otherwise, no ether is transferred. The \code{withdraw} function ensures that only the owner can withdraw ether from the wallet. \para{Attack} The wallet shown in Fig.~\ref{fig:bug1} has a critical security flaw: any user could actually call the \code{initWallet} function and store an arbitrary address in the field \code{owner}. An attacker can, therefore, steal all ether stored in the wallet in two steps. First, the attacker calls the function \code{initWallet}, passing her own address as argument. Second, the attacker calls the function \code{withdraw}, passing as argument the amount of ether stored in the wallet. We remark that in the attack on Parity's wallet, to perform the first step the attacker exploits a fallback mechanism to call the \code{initWallet} function; we omit these details for simplicity and refer the reader to~\cite{paritybug} for details on the actual attack. 
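To make the two-step attack concrete, the following is a minimal Python model of the vulnerable wallet (not Solidity code; the class, method, and account names are illustrative, and the fallback mechanism is omitted):

```python
# Toy model of the vulnerable wallet: init_wallet lacks any access
# control, so any sender can take over ownership and then withdraw.

class Wallet:
    def __init__(self, owner, balance):
        self.owner = owner
        self.balance = balance

    def init_wallet(self, sender, new_owner):
        # BUG: no check that `sender` is the constructor or current owner.
        self.owner = new_owner

    def withdraw(self, sender, amount):
        # Only the owner may withdraw -- but ownership can be hijacked.
        if sender == self.owner and amount <= self.balance:
            self.balance -= amount
            return amount
        return 0

wallet = Wallet(owner="alice", balance=100)
# Step 1: the attacker makes herself the owner.
wallet.init_wallet(sender="attacker", new_owner="attacker")
# Step 2: the attacker drains the wallet.
stolen = wallet.withdraw(sender="attacker", amount=100)
```

The owner check in `withdraw` is correct in isolation; the flaw is entirely in the unrestricted write performed by `init_wallet`.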
\begin{figure}[t] \centering \includegraphics[width=0.94\columnwidth]{figures/bug1} \caption{A vulnerable wallet that allows any user to withdraw all ether stored in it.} \label{fig:bug1} \end{figure} \para{Security Property} The underlying security problem that allowed the attacker to steal ether is that the security-critical field \code{owner} is writable by {\em any} Ethereum user. This security issue mirrors a more general property stipulating that the write to the \code{owner} field is restricted, in the sense that not all users can make a transaction that writes to this field. To show that this property is satisfied, we need to demonstrate that some user cannot send a transaction that modifies the \code{owner} field. Conversely, to show a violation, we need to prove that all users can send a transaction that modifies the \code{owner} field. Both proving satisfaction and demonstrating violation of this property are nontrivial due to the enormous space of possible users and the transactions they can make. \para{Detection} To discover this security issue, \textsc{Securify}\xspace provides a violation pattern that is matched if the execution of the assignment \code{owner = _owner}, highlighted in \colorbox{myred}{red} in Fig.~\ref{fig:bug1}, does not depend on the value returned by the \instruction{caller} instruction (which returns the address of the transaction sender). To check this pattern, \textsc{Securify}\xspace infers data- and control-flow dependencies by analyzing the contract's dependency graph; cf.~\cite{Johnson:2015:EES:2737924.2737957}. Here, \textsc{Securify}\xspace infers that the assignment \code{owner = _owner} does not depend on the \instruction{caller} instruction, which implies that the assignment is reachable by any user. In Section~\ref{sec:overview}, we provide more details on this violation pattern and on how \textsc{Securify}\xspace uses it to detect the vulnerability.
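The dependency check behind this violation pattern can be sketched as a reachability query over the contract's dependency graph. The following Python sketch is illustrative only (node names and edges are hypothetical, and the real analysis distinguishes data- from control-flow dependencies):

```python
# Sketch: the write to `owner` is flagged as unrestricted if it does
# not (transitively) depend on the value of the `caller` instruction.

def may_dep_on(deps, node, source):
    """Transitively check whether `node` may depend on `source`."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n == source:
            return True
        if n in seen:
            continue
        seen.add(n)
        stack.extend(deps.get(n, []))
    return False

# Toy dependency graph for initWallet: the sstore writing `owner`
# depends only on the loaded argument, never on `caller`.
deps = {
    "sstore_owner": ["dataload_arg"],  # owner = _owner
    "dataload_arg": [],
}
# Violation pattern matched: the write is unrestricted.
unrestricted = not may_dep_on(deps, "sstore_owner", "caller")
```

In a guarded wallet, the store would sit behind a branch conditioned on `caller`, adding an edge path from the store to `caller` and preventing the match.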
We remark that some symbolic checkers perform imprecise checks of similar properties, which result in both false positives and false negatives. For instance, as we show in Fig.~\ref{fig:compare} of our evaluation later, Mythril~\cite{mythril} has about $65\%$ false negatives when checking a similar property stipulating that not all users may trigger a particular ether transfer. \begin{figure}[t] \centering \includegraphics[width=0.94\columnwidth]{figures/bug2} \caption{A wallet that delegates functionality to a library contract \code{walletLibrary}.} \label{fig:bug2} \end{figure} \begin{figure*}[t!] \centering \includegraphics[width=0.9\textwidth]{figures/overview.pdf} \caption{ High-level flow illustrating how \textsc{Securify}\xspace finds the unrestricted write to the \code{owner} field of the contract from Fig.~\ref{fig:bug1}. The input (EVM bytecode and security patterns) is highlighted in \colorbox{mygreen}{green}, the output (in our example, a violated instruction) is highlighted in \colorbox{myred}{red}, and \colorbox{gray!20}{gray} boxes represent intermediate analysis artifacts. \textsc{Securify}\xspace proceeds in three steps: % (1) it decompiles the contract's EVM bytecode into a static-single assignment form, (2) it infers semantics facts about the contract, and (3) it matches the violation pattern of the restricted write property on the \code{sstore} instruction that writes to the \code{owner} field.} \label{fig:overview} \end{figure*} \subsection{Frozen Funds} \label{sec:frozen} In Fig.~\ref{fig:bug2}, we show a wallet implementation which suffers from a security issue that froze millions worth of USD in November $2017$. This wallet has a field, \code{walletLibrary}, which stores the address of a contract implementing common wallet functionality. Further, it has a function \code{deposit}, marked as \relation{payable}, which means users can send ether to the contract by calling this function. 
The function \code{deposit} logs the amount of ether (identified by \code{msg.value}) sent by the transaction sender (identified by \code{msg.sender}). Finally, the contract has a function \code{withdraw}, which delegates all calls to the wallet library. That is, the statement \code{walletLibrary.delegatecall(msg.data)} results in executing the \code{withdraw} method of the wallet library in the context of the current wallet. \para{Attack} Ethereum contracts can be removed from the blockchain using a designated \code{kill} instruction. If an attacker can remove the wallet library from the blockchain, then the funds stored in the wallet can no longer be extracted, because the wallet relies on the library contract to withdraw ether. In November $2017$, a popular wallet library was removed from the blockchain, effectively freezing $\approx 280$ million worth of USD~\cite{paritybug2}. \para{Security Property} The underlying security problem with this wallet is that it allows users to deposit ether, but it cannot guarantee that the ether can be transferred out of the contract, since the transfer depends on a library. To discover that the wallet has this problem, we must prove two facts: {\em (i)} users can deposit ether and {\em (ii)} the contract has no ether transfer instructions (i.e., \instruction{call}) with a non-zero amount of ether. Note that if the contract only transfers out ether through libraries, the second requirement is met. \para{Detection} To discover this vulnerability, \textsc{Securify}\xspace's violation pattern checks the conjunction of two facts. First, to prove that users can deposit ether, \textsc{Securify}\xspace checks whether there is a \instruction{stop} instruction whose execution does not depend on the ether transferred being zero.
Assuming that the \instruction{stop} instruction is reachable for some transaction, this implies that a user can reach it \emph{with a positive ether amount}, resulting in a deposit of ether to the contract. Second, \textsc{Securify}\xspace checks whether for all \instruction{call} instructions, the amount of ether extracted from the contract is zero. The conjunction of these two facts implies that ether can be locked in the contract. \begin{comment} \para{Hyperproperties} While there are many valuable properties that reveal security bugs, some security bugs (e.g., information-flow) can only be defined as \emph{hyperproperties}~\cite{clarkson2010hyperproperties}, that is, sets of \emph{sets of traces}. For example, the following is a hyperproperty: \begin{quote} \emph{\texttt{HP}: Not all users can trigger money transfer.} \end{quote} To illustrate the importance of this hyperproperty, consider the \texttt{SimpleICO} contract acting as an ICO (initial coin offering), where the first $100$ users can put money into the contract's balance via \texttt{sign_up}, and in return can ask for $1$ coin back via \texttt{get_reward}. \begin{lstlisting} contract SimpleICO { mapping(address => uint) users; uint cnt = 0; function sign_up(uint amount) { if (cnt < 100 && amount > cnt) { users[msg.sender] = amount; cnt++; } } function get_reward() { if (users[msg.sender] > 0) // { users[msg.sender] = 0; msg.sender.call.value(1)(); // } } } \end{lstlisting} The goal of this contract is to reward only the users participating in the ICO, however due to missing the curly brackets (lines 14 and 17), any user can get this reward, thereby emptying the contract's balance while stealing the investors' money. Checking whether \texttt{SimpleICO} meets \texttt{HP} involves examining \emph{all} possible traces of this contract and checking whether there is a user (i.e., \texttt{msg.sender}) who cannot invoke the money transfer (line 16) in any possible trace she can trigger. 
This simple example illustrates the challenge in checking hyperproperties compared to properties. In particular, techniques such as testing or symbolic execution fall short in examining all traces. While there is previous work checking certain classes of hyperproperties, e.g., Pidgin~\cite{pidgin-pldi15}, many important hyperproperties of EVM (e.g., \texttt{HP}) are not expressible in their setting. For example, the following violation hyperproperty over \dsl over-approximates \texttt{HP}: \begin{quote} \emph{\texttt{HP-V}: There is a money transfer whose execution is independent of the user.} \end{quote} While \texttt{HP-V} is weaker than \texttt{HP}, it suffices to infer that \texttt{SimpleICO} violates \texttt{HP} (since it satisfies \texttt{HP-V}). \end{comment} \section{The \textsc{Securify}\xspace System}\label{sec:overview} In the previous section, we illustrated that while security issues in smart contracts are complex, they can often be captured with semantic facts inferred from the code. In this section, we describe the \textsc{Securify}\xspace system, which builds on this idea to prove and disprove security properties of smart contracts. We accompany this section with an example of how \textsc{Securify}\xspace detects the unrestricted write to the \code{owner} field in the wallet contract (Fig.~\ref{fig:bug1}). Fig.~\ref{fig:overview} summarizes the main steps. \para{Inputs to \textsc{Securify}\xspace} The input to \textsc{Securify}\xspace is the EVM bytecode of a contract and a set of security patterns, specified in our designated domain-specific language (DSL). \textsc{Securify}\xspace can also take as input contracts written in Solidity (not shown in Fig.~\ref{fig:overview}), which are compiled to EVM bytecode before proceeding with the analysis. There are two kinds of security patterns: compliance and violation patterns, which capture sufficient conditions to ensure that a contract satisfies and, respectively, violates a given security property.
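Since each pattern is only a sufficient condition, checking a property yields one of three outcomes. This can be sketched as follows (a hypothetical Python illustration; the function and its arguments are our own, not part of \textsc{Securify}\xspace's implementation):

```python
# Hypothetical sketch (not Securify's actual code) of how the two kinds
# of patterns combine into a per-property verdict: a violation match
# disproves the property, a compliance match proves it, and otherwise
# the tool reports a warning.

def verdict(compliance_matched, violation_matched):
    """Combine pattern-matching outcomes for one security property."""
    if violation_matched:
        return "violation"   # sufficient condition for violating the property
    if compliance_matched:
        return "compliant"   # sufficient condition for satisfying the property
    return "warning"         # neither pattern matched: property is undecided
```

Because both patterns are sufficient conditions for contradictory conclusions, a sound analysis never matches both at once; the sketch checks the violation case first purely for illustration.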
Fig.~\ref{fig:overview} illustrates the input to \textsc{Securify}\xspace in \colorbox{mygreen}{green} boxes, which show part of the EVM bytecode of the wallet contract (only the part necessary to illustrate the vulnerability) and the violation pattern of the \emph{restricted write} property. Intuitively, the pattern is matched if there is a write that is \emph{not} restricted. To discover the unrestricted write in the contract, \textsc{Securify}\xspace proceeds with the following three steps. \para{Step 1: Decompiling EVM Bytecode} \textsc{Securify}\xspace first transforms the EVM bytecode provided as input into a stackless representation in static single-assignment form (SSA). For example, in Fig.~\ref{fig:overview}, for the stack instruction \code{push 0x04}, \textsc{Securify}\xspace introduces a local variable \code{a} and an assignment statement \code{a = 4}. In addition to removing the stack, \textsc{Securify}\xspace identifies methods. For example, the method \code{ABI_9DA8}, shown in Fig.~\ref{fig:overview}, corresponds to the \code{initOwner} method of the wallet contract, shown in Fig.~\ref{fig:bug1}. After decompilation, \textsc{Securify}\xspace performs partial evaluation to resolve memory and storage offsets as well as jump destinations, all of which are important for precisely analyzing the code statically. We describe these optimizations in Section~\ref{sec:implementation}. \para{Step 2: Inferring Semantic Facts} After decompilation, \textsc{Securify}\xspace analyzes the contract to infer semantic facts, including data- and control-flow dependencies, which hold over all behaviors of the contract. For example, the fact $\relation{MayDepOn}(\code{b}, \instruction{dataload})$, shown in Fig.~\ref{fig:overview}, captures that the value of variable \code{b} may depend on the value returned by the instruction \instruction{dataload}. Further, the fact $\relation{Eq}(\code{c}, 0)$ captures that variable \code{c} equals the constant~$0$.
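A minimal, hypothetical rendering of such fact inference over straight-line SSA assignments is sketched below; the relation names follow the paper, but the code is an illustrative stand-in, not \textsc{Securify}\xspace's actual Datalog rules:

```python
# Illustrative sketch: propagate Eq(x, const) and MayDepOn(x, tag)
# facts over straight-line SSA assignments. The right-hand side of an
# assignment is an int constant, a previously defined variable, or an
# instruction tag such as "dataload" (a hypothetical encoding).

def infer_facts(assignments):
    eq, may_dep = {}, {}
    for lhs, rhs in assignments:
        if isinstance(rhs, int):          # a = 4       =>  Eq(a, 4)
            eq[lhs] = rhs
            may_dep[lhs] = set()
        elif rhs in may_dep:              # copy: x = y inherits y's facts
            may_dep[lhs] = set(may_dep[rhs])
            if rhs in eq:
                eq[lhs] = eq[rhs]
        else:                             # b = dataload => MayDepOn(b, dataload)
            may_dep[lhs] = {rhs}
    return eq, may_dep

eq, dep = infer_facts([("a", 4), ("c", 0), ("b", "dataload")])
# derives Eq(a, 4), Eq(c, 0), and MayDepOn(b, dataload)
```

The real analysis also tracks control-flow dependencies and joins facts across branches; the sketch covers only the two facts used in the running example.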
\textsc{Securify}\xspace's derivation of semantic facts is specified declaratively in stratified Datalog and is fully automated using existing scalable engines~\cite{souffle}. Key benefits of the declarative approach are: \textit{(i)}~inference rules concisely capture abstract reasoning about different components (e.g., contract storage), \textit{(ii)}~more facts and inference rules can be easily added, and \textit{(iii)}~inference rules are specified in a modular way (e.g., memory analysis is specified independently of contract storage analysis). We list the semantic facts that \textsc{Securify}\xspace derives, along with the inference rules, in Section~\ref{sec:analysis}. \para{Step 3: Checking Security Patterns} After obtaining the semantic facts, \textsc{Securify}\xspace checks the set of compliance and violation security patterns, given as input. These patterns are written in a specialized domain-specific language (DSL), which enables security experts to extend our built-in set of patterns with their customized patterns. Our DSL is a fragment of logical formulas over the semantic facts inferred by \textsc{Securify}\xspace. To detect the vulnerability in the contract of Fig.~\ref{fig:bug1}, \textsc{Securify}\xspace matches the violation pattern (given as input) on the $\instruction{sstore}(\code{c},\code{b})$ instruction at label \labelc{l6} in Fig.~\ref{fig:overview}. In $\instruction{sstore}(\code{c},\code{b})$, \code{c} is the storage offset of the \code{owner} field, and \code{b} is the value to store. The violation pattern matches if there exists some \instruction{sstore} instruction for which both the storage offset, denoted~$X$, and the execution of this instruction, identified by its label $L$, do not depend on the result of the \instruction{caller} instruction in \emph{any possible execution} of the contract.
Since the instruction \instruction{caller} retrieves the address of the transaction sender, matching this pattern implies that any user can reach this \instruction{sstore} and change the value of \code{owner}. In our DSL, where negation is encoded by $\neg$ and conjunction by $\wedge$, this pattern is encoded as: \[\relation{some}\ \sstorepc{X}{\_}{L}.\ \neg \relation{MayDepOn}(X, \instruction{caller}) \wedge \neg \relation{MayDepOn}(L, \instruction{caller})\] \textsc{Securify}\xspace's DSL is important for extensibility: adding new security patterns amounts to specifying them in this DSL. To illustrate the expressiveness of the DSL, in Section~\ref{sec:patterns}, we present a range of security patterns for important properties, such as restricted writes, exception handling, ether liquidity, input validation, and others. We remark that contract-specific patterns are sometimes added by security experts while conducting security audits. For example, it is often required to check for the absence of undesirable dependencies, such as ensuring that only the owner can modify certain values in the storage, or that the result of a specific arithmetic expression does not depend on a division instruction (which may cause undesirable integer rounding effects). We illustrate how such contract-specific patterns are specified in the DSL in Section~\ref{sec:patterns}. \para{Output of \textsc{Securify}\xspace} For any match of a violation pattern, \textsc{Securify}\xspace outputs the instruction that caused the pattern to match. In our example, it highlights the instruction $\instruction{sstore}(\code{c},\code{b})$. We remark that the offset of this instruction can be easily mapped to its corresponding line in the Solidity code, if the source code is provided. Further, for any property for which neither the violation nor the compliance pattern is matched, \textsc{Securify}\xspace outputs a warning, indicating that it failed to prove or disprove the property.
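Once the facts are computed, checking this pattern amounts to a simple query over them. The following hypothetical Python sketch (the fact encoding and helper name are ours, not \textsc{Securify}\xspace's) illustrates the idea:

```python
# Hypothetical sketch of evaluating the unrestricted-write violation
# pattern: it matches if some sstore(L, X, _) has neither
# MayDepOn(X, caller) nor MayDepOn(L, caller) among the derived facts.

def unrestricted_writes(sstores, may_dep_on):
    """sstores: list of (label, offset_var) pairs;
    may_dep_on: set of (element, tag) facts from the Datalog fixed point.
    Returns the labels of matching sstore instructions."""
    return [label for label, offset in sstores
            if (offset, "caller") not in may_dep_on
            and (label, "caller") not in may_dep_on]

# In the running example, neither the offset c nor the label l6 depends
# on caller, so the sstore at l6 is reported as an unrestricted write.
facts = {("b", "dataload")}
assert unrestricted_writes([("l6", "c")], facts) == ["l6"]
```

Had the contract guarded the write by the sender's address, a fact such as `("l6", "caller")` would appear in the fixed point and the pattern would no longer match.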
\para{Limitations} We briefly summarize several limitations of \textsc{Securify}\xspace. First, the current version of \textsc{Securify}\xspace cannot reason about numerical properties, such as overflows. To address this limitation, we plan to extend \textsc{Securify}\xspace with numerical analysis (e.g., using ELINA~\cite{Singh:2017:FPA:3009837.3009885}), which would not only improve the precision of \textsc{Securify}\xspace but also enable the checking of numerical properties. Second, \textsc{Securify}\xspace does not reason about reachability, and assumes that all instructions in the contract are reachable. This assumption is necessary to establish a formal correspondence between the security properties supported by \textsc{Securify}\xspace and the patterns used to prove and disprove them. For instance, in our example, \textsc{Securify}\xspace assumes that the matched $\instruction{sstore}$ instruction is reachable by some execution (otherwise, there is no violation). Finally, the properties we consider capture violations that can often, but not always, be exploited by attackers. For example, some contracts have fields that are intentionally writable by all users. To address this, security experts can write contract-specific patterns in \textsc{Securify}\xspace's DSL (e.g., to specify which fields are sensitive). \section*{Acknowledgments} The research leading to these results was partially supported by an ERC Starting Grant~$680358$. We thank Hubert Ritzdorf and the ChainSecurity team for their valuable contributions to this project. \bibliographystyle{ACM-Reference-Format} \section{Security Patterns} \label{sec:patterns} In this section, we show how to express security patterns over semantic facts. We begin by defining the \textsc{Securify}\xspace language for expressing security patterns. Then, to define security properties formally, we provide background on the execution semantics of EVM contracts and formally define properties.
We continue by presenting a set of relevant security properties, and for each, we show compliance and violation patterns, which imply the property and, respectively, its negation. This construction enables us to determine whether a contract complies with or violates a given security property. Finally, we show how \textsc{Securify}\xspace leverages some patterns for error localization. \subsection{\textsc{Securify}\xspace Language} We first define the syntax of the language for writing patterns and then define how patterns are interpreted over the semantic facts derived for a given contract (described in Section~\ref{sec:analysis}). \para{Syntax} The syntax of the \textsc{Securify}\xspace language is given by the following BNF: \[ \begin{array}{lcl} \varphi& ::= & \instruction{instr}(L, Y, X, \ldots, X) \mid \relation{Eq}(X, T) \mid \relation{DetBy}(X, T) \\ & \mid & \relation{MayDepOn}(X, T) \mid \relation{MayFollow}(L, L) \mid \relation{MustFollow}(L, L) \\ & \mid & \relation{Follow}(L, L) \mid \exists X. \varphi \mid \exists L. \varphi \mid \exists T. \varphi \mid \neg \varphi \mid \varphi \wedge \varphi\\ \end{array} \] Here, $L$, $X$, and $T$ are variables that range over program elements such as labels, contract variables, and tags. Patterns can refer to instructions $\instruction{instr}(L, Y, X_1, \ldots, X_n)$, where \instruction{instr} is the instruction name, $L$ is the instruction's label, $Y$ is the variable storing the instruction result (if any), and $X_1, \ldots, X_n$ are variables given to the instruction as arguments (if any). Patterns can also refer to flow- and data-dependency semantic facts, which can be used to impose conditions on the labels and variables that appear in instructions. Finally, the patterns can quantify over labels, variables, and tags using the standard existential quantifier ($\exists$). More complex patterns can be written by composing simpler patterns with negation~($\neg$) and conjunction~($\wedge$).
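The grammar above maps directly onto a small abstract syntax tree. A hypothetical Python encoding (the constructor names are ours; \textsc{Securify}\xspace's internal representation may differ) could look as follows:

```python
# Hypothetical AST for the pattern language: each dataclass mirrors one
# production of the BNF grammar (instructions, dependency predicates,
# quantification, negation, and conjunction).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Instr:                 # instr(L, Y, X1, ..., Xn)
    name: str
    label: str
    result: str
    args: Tuple[str, ...]

@dataclass
class MayDepOn:              # MayDepOn(X, T)
    x: str
    tag: str

@dataclass
class Not:                   # negation of a sub-pattern
    phi: object

@dataclass
class And:                   # conjunction of two sub-patterns
    left: object
    right: object

@dataclass
class Exists:                # existential quantification over a variable
    var: str
    phi: object

# some sstore(L, _, X). not MayDepOn(X, caller) and not MayDepOn(L, caller)
pattern = Exists("L", Exists("X", And(
    Instr("sstore", "L", "_", ("X",)),
    And(Not(MayDepOn("X", "caller")), Not(MayDepOn("L", "caller"))))))
```

Encoding the `some`/`all` shorthands then reduces to expanding them into the quantifiers and connectives of this AST.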
We define several syntactic shorthands that simplify the specification of patterns. We use standard logical equivalences: we write $\forall X.\ \varphi(X)$ for $\neg (\exists X.\ \neg \varphi(X))$, $\varphi_1 \vee \varphi_2$ for $\neg (\neg \varphi_1 \wedge \neg \varphi_2)$, and $\varphi_1 \Rightarrow \varphi_2$ for $\neg \varphi_1 \vee \varphi_2$. We also write $X = T$ for $\relation{Eq}(X,T)$. For readability, we write: $\relation{some}\ \ \instruction{instr}(\overline{X}).\ \varphi(\overline{Y})$ for $\exists \overline{X}.\ \instruction{instr}(\overline{X}) \wedge \varphi(\overline{Y})$, which imposes that there is some instruction $\instruction{instr}(\overline{X})$ for which the logical condition $\varphi(\overline{Y})$ holds. Similarly, we write $\relation{all}\ \ \instruction{instr}(\overline{X}).\ \varphi(\overline{Y})$ for $\forall \overline{X}.\ \instruction{instr}(\overline{X}) \Rightarrow \varphi(\overline{Y})$, which imposes that for all instructions $\instruction{instr}(\overline{X})$ the condition $\varphi(\overline{Y})$ must hold. \para{Semantics} Patterns are interpreted by checking the inferred semantic facts: \begin{itemize} \item Quantifiers and connectors are interpreted as usual. \item Flow- and data-dependency predicates are interpreted as defined in Section~\ref{sec:analysis}; i.e., a semantic fact holds if and only if it is contained in the Datalog fixed-point. \end{itemize} For example, consider the pattern: \[ \relation{some}\ \sstorepc{X}{Y}{L}.\ \relation{DetBy}(X, \instruction{caller}) \] which is a shorthand for $\exists X.\ \sstorepc{X}{Y}{L} \wedge \relation{DetBy}(X, \instruction{caller})$. This pattern is matched if there is an instruction $\instruction{sstore}(L,X,Y)$ in the contract such that the offset $X$ is determined by the address returned by the \instruction{caller} instruction (captured by the predicate $\relation{DetBy}(X, \instruction{caller})$). 
For brevity, we omit variables that are not conditioned in the pattern: $\relation{some}\ \sstorepc{X}{\_}{\_}.\ \relation{DetBy}(X, \instruction{caller})$. In Fig.~\ref{fig:patterns}, we list security patterns that are built into \textsc{Securify}\xspace. In the following, we first give additional background on the EVM execution model and then present these patterns. \subsection{EVM Background and Properties} To understand the security properties defined in the next section, we extend the background on EVM (given in Section~\ref{sec:background}), which focused on the EVM syntax, with the semantics of EVM contracts. \para{EVM Semantics} A contract is a sequence of EVM instructions $C=(c_0,\ldots,c_m)$. The semantics of a contract $\sem{C}$ is the set of all \emph{traces} from an initial state. A trace of a contract $C$ is a sequence of state-instruction pairs $(\sigma_0,c_0) \to \ldots \to (\sigma_k,c_k)$, from an initial state $\sigma_0$, and such that the relation $(\sigma_j,c_j) \to (\sigma_{j+1},c_{j+1})$ is valid according to the EVM execution semantics~\cite{wood2014ethereum}. If a trace successfully terminates, then $c_k=\bot$. A state consists of the storage and memory state (mentioned in Section~\ref{sec:background}), stack state, transaction information, and block information. We denote by $\sigma_{\textit{S}[i]}/\sigma_{\textit{M}[i]}$ the value stored at offset $i$ in the storage/memory, by $\sigma_{\textit{S}}/\sigma_{\textit{M}}$ the state of the storage/memory, by $\sigma_{\textit{Bal}}$ the contract's balance, by $\sigma_{T}$ the transaction, and by $\sigma_{B}$ the block information. We denote by $t[i]$ the $i^\text{th}$ pair of the trace $t$, for a positive $i$. For a negative $i$, $t[i]$ refers to the $|i|^\text{th}$ pair of $t$ counting from the end of the sequence (so $t[-1]$ is the last pair).
We denote by $\sigma^{t[i]}$/$c^{t[i]}$ the state/instruction of the $i^\text{th}$ pair of $t$, and by $\sigma^{t[i]}_f$ the value of instruction $f$ (e.g., \instruction{caller}) in $\sigma^{t[i]}$. \para{Properties} A property is a relation over sets of traces. A contract satisfies a security property $\rho$ if $\sem{C}\in \rho$. If $\sem{C}\notin \rho$, we say that $C$ violates the property $\rho$. We define relations using first-order logic formulas. The formulas are interpreted over the traces and the bitstrings that comprise the user identifiers, offsets, and other arguments or return values of the EVM instructions. We denote by $t_1,t_2,\ldots$ variables that refer to traces. We denote by $i_1,i_2,\ldots$ variables that refer to the index of a pair in a trace. We use other letters for bitstring variables. For example, we use $a$ to refer to a bitstring which is used in the formula to refer to a user's identifier (her address), and we use $x$ to refer to an offset in the storage or as arguments to \instruction{call}. For simplicity's sake, although EVM is a stack-based language, we write instructions as $r\leftarrow \instruction{instr}(a_1,\ldots,a_k)$ and use the wildcard for arguments/return values that are not important to the formula. Note that $a_1,\ldots,a_k,r$ represent the concrete values at the moment of execution. \input{patterntable} \subsection{Security Properties and Patterns}\label{sec:patts} We now define seven security properties with respect to the EVM semantics~\cite{wood2014ethereum}. Checking these properties precisely is impossible since EVM is Turing-complete. Instead, for each property, we define compliance and violation patterns over our language, which over-approximate the property and, respectively, its negation. That is, a compliance pattern match implies that the property holds, and a violation pattern match implies that the property's negation holds. If neither pattern is matched, then the property may or may not hold.
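To make the trace notation concrete, the following hypothetical Python sketch models a trace as a list of (state, instruction) pairs, where Python's negative indexing coincides with the $t[-1]$ convention above; states carry only a balance field for brevity:

```python
# Illustrative sketch of the trace notation: a trace t is a list of
# (state, instruction) pairs; t[0] is the initial pair and t[-1] the
# final one. A property is a relation over *sets* of traces, so the
# check below quantifies over all traces of a (toy) semantics.

def balance_preserved(trace):
    """sigma_Bal at t[0] equals sigma_Bal at t[-1]."""
    return trace[0][0]["Bal"] == trace[-1][0]["Bal"]

def all_traces_preserve_balance(traces):
    """Universally quantified check over a set of traces."""
    return all(balance_preserved(t) for t in traces)

t1 = [({"Bal": 10}, "push"), ({"Bal": 10}, "stop")]
t2 = [({"Bal": 10}, "call"), ({"Bal": 7}, "stop")]
assert all_traces_preserve_balance([t1])
assert not all_traces_preserve_balance([t1, t2])
```

The properties defined next follow exactly this shape: quantifiers over traces and indices, with state components such as $\sigma_{\textit{Bal}}$ read off individual trace pairs.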
In the following, for each security property, we describe its relevance, present its formal definition, and then refine it into a set of compliance and violation patterns. The complete list of properties and patterns is given in Fig.~\ref{fig:patterns}. \para{Ether Liquidity (LQ)} In November 2017, a bug in a contract led to freezing $\$160$M~\cite{parity2}. The bug occurred because a contract relied on another smart contract (acting as a library) to transfer its ether to users. Unfortunately, a user accidentally removed the library contract, freezing the contract's ether. The combination of the contract being able to receive ether from users and the absence of an explicit transfer to the user led to this issue. Formally, we define this security property by requiring that {\em (i)} all traces $t$ do not change the contract's balance (which means that the contract has no ether and thus its ether is vacuously liquid), or {\em (ii)} there exists a trace $t$ that decreases the contract's balance (i.e., ether is liquid). \[ \begin{array}{l} \psi_{LQ}=(\forall t.\ \sigma_{\textit{Bal}}^{t[0]} = \sigma_{\textit{Bal}}^{t[-1]}) ~\vee~ (\exists t.\ \sigma_{\textit{Bal}}^{t[0]} > \sigma_{\textit{Bal}}^{t[-1]}) \end{array} \] To over-approximate $\psi_{LQ}$ with our language, we leverage the fact that if ether is transferred to the contract, then the amount of ether transferred is given by the $\val$ instruction. Thus, if, for all traces that complete successfully, this amount is zero, then the first part of $\psi_{LQ}$ is satisfied.
This is exactly the first liquidity compliance pattern in Fig.~\ref{fig:patterns}: it matches if all transactions that can complete successfully (reach a \instruction{stop} instruction) have to follow a branch of a condition (where the condition is identified by a \instruction{goto} instruction) that is reachable only if the ether transferred to this contract is zero (this branch is the one to which the \instruction{goto} instruction does not jump). The second liquidity compliance pattern over-approximates the second part of $\psi_{LQ}$. It leverages the fact that ether is liquid if there is a reachable \instruction{call} instruction which sends a non-zero amount of ether. Concretely, it is matched if there is a \instruction{call} instruction which transfers {\em (i)}~a positive amount of ether or {\em (ii)}~an amount of ether that depends only on the transaction data, and thus can be positive. Our violation pattern over-approximates $\neg\psi_{LQ}$ by checking that both conditions are false: the contract can receive ether, but cannot transfer ether. To guarantee that the contract can receive ether, it verifies that there is an execution that can complete successfully (i.e., reach \instruction{stop}) whose execution does not depend on \instruction{callvalue} -- this guarantees that some trace with positive \val\ can complete. To guarantee that ether cannot be transferred, it verifies that all \instruction{call} instructions transfer $0$ ether. \para{No Writes After Calls (NW)} In July 2016, a bug in the DAO contract enabled an attacker to steal $\$60$M~\cite{daocite}. The attacker exploited the combination of two factors. First, a \instruction{call} instruction which, upon its execution, enabled the recipient of that \instruction{call} to execute her own code before returning to the contract. Second, the amount transferred by this \instruction{call} depended on a storage value, which was updated \emph{after} this \instruction{call}.
This value was critical as it recorded the amount of ether that the \instruction{call}'s recipient had in the contract, and could thus request to receive. This allowed the attacker to call the function again \emph{before the storage was updated}, thus making the contract believe that the user still had ether in the contract. A property that captures when this attack cannot occur checks that there are no writes to the storage after any $\instruction{call}$ instruction. We formalize this vulnerability by requiring that, for all traces $t$, the storage does not change in the interval that starts just before any \instruction{call} instruction and ends when the trace completes: \[ \psi_{NW}=\forall t\, \forall i.\ (i < -1 \wedge c^{t[i]} = \_ \gets \instruction{call}(\_, \_, \_)) \Rightarrow \sigma_{\textit{S}}^{t[i]} = \sigma_{\textit{S}}^{t[-1]} \] Note that this property is different from reentrancy~\cite{luu2016making}, which stipulates that the callee must not be able to re-enter the same function and reach the \instruction{call} instruction. Our compliance rule over-approximates $\psi_{NW}$ by leveraging the fact that the storage can only be changed via $\instruction{sstore}$. It is thus matched if \instruction{call} instructions are not followed by $\instruction{sstore}$ instructions. Our violation pattern over-approximates $\neg \psi_{NW}$ by checking that there is a \instruction{call} instruction which must be followed by a write to the storage, in which case the implication of $\psi_{NW}$ is violated. \para{Restricted Writes (RW)} In July 2017, an attacker stole $\$30$M because of an unrestricted write to the storage~\cite{paritybug}. The attacker exploited the reliance of the contract on a library that allowed anyone to unconditionally set an \emph{owner} field to any address. This enabled the attacker to take ownership of the contract and steal its ether. We consider a security property that guarantees that writes to storage are restricted.
The property requires that, for every storage offset $x$ (e.g., a field in the contract), there is a user $a$ that cannot write at offset $x$ of the storage. \[ \begin{array}{l} \psi_{RW}=\forall x\, \exists a\, \forall t.\ (\sigma^{t[0]}_{\instruction{caller}} = a \Rightarrow c^{t[-1]} \neq \instruction{sstore}(\_, x,\_))\\ \end{array} \] Our compliance pattern over-approximates $\psi_{RW}$ by checking that offsets of \instruction{sstore} instructions, denoted $x$, are determined by the sender's identifier (i.e., users can only write to their designated slot). This ensures that for all $x$, there exists a user $a$ (in fact, all users but one) who cannot write to $x$. The violation pattern over-approximates $\neg \psi_{RW}$ by checking if there is an \instruction{sstore} instruction whose execution and offset are independent of \instruction{caller}. In this case, we can exhibit an offset $x$ to which all users can write -- hence violating the property. If this property is too restrictive (there are cases where it is safe to allow global writes to the storage), one can define it (and adapt the patterns) with respect to critical writes (e.g., writes that modify an \code{owner} field), identified by their label $l$. In the following, we skip the formal definition of properties, and only describe them informally. \para{Restricted Transfer (RT)} We define a property that guarantees that ether transfers (via \instruction{call}) cannot be invoked by every user $a$. Violation of this property can detect Ponzi schemes~\cite{BartolettiCCS17}. Our compliance pattern requires that, for all users, invocations of that \instruction{call} instruction do not transfer ether. Our first violation pattern checks if the \instruction{call} instruction transfers a non-zero amount of ether and its execution is independent of the sender.
For the second violation pattern, the amount of ether transferred depends on the transaction data (and thus can be set to a non-zero value), while the execution is independent of this data (and will thus take place). \para{Handled Exception (HE)} In February 2016, a contract by the name ``King of Ether'' had an issue due to mishandled exceptions, forcing its creator to publicly ask users not to send ether to it~\cite{koe2}. The issue was that the return value of a \instruction{call}, which indicated if the instruction completed successfully, was not checked. Our compliance pattern checks that \instruction{call} instructions are followed by a \instruction{goto} instruction whose condition is determined by the return code of \instruction{call}. This guarantees that depending on the return code, different execution paths are taken. Our violation pattern checks that the \instruction{call} instruction is not followed by a \instruction{goto} instruction which may depend on the return value. This guarantees that there is no different behavior depending on the result of the \instruction{call}. \para{Transaction Ordering Dependency (TOD)} An inherent issue in the blockchain model is that there is no guarantee on the execution order of transactions. While this has been known, it recently became critical in the context of Initial Coin Offerings, a popular means for start-ups to collect money by selling tokens. The initial tokens are sold at a low price while offering a high bonus, and as demand increases, the price increases and the bonus decreases. It has been observed that miners exploit this by placing their own transactions first so as to win the big bonus at a low price~\cite{ico}. Our compliance pattern requires that the amount of ether sent by a \instruction{call} instruction is independent of the state of the storage and contract's balance.
This means that reordering transactions (which can be affected by changing the storage or balance) does not affect the amount sent by the \instruction{call} execution. Our violation pattern checks that the amount of the \instruction{call} instruction is determined by a value read from the storage, whose offset in the storage is known (i.e., it is constant), and that this value can be updated. In Section~\ref{sec:evaluation}, we evaluate several versions of the TOD property: \begin{inparaenum}[\em (i)] \item{\em TOD Transfer (TT)} indicates that the execution of the ether transfer depends on transaction ordering (e.g., a condition guarding the transfer depends on the transaction ordering); \item{\em TOD Amount (TA)} marks that the amount of ether transferred depends on the transaction ordering (this variation is the one described above and in Fig.~\ref{fig:patterns}); \item{\em TOD Receiver (TR)} captures the vulnerability that the recipient of the ether transfer might change, depending on the transaction ordering. \end{inparaenum} \para{Validated Arguments (VA)} Method arguments should be validated before usage, because unexpected arguments may result in insecure contract behaviors. Contracts must check whether all transaction arguments meet their desired preconditions. Our compliance pattern checks that, before a variable that may depend on a method argument is written to persistent storage, there exists a check of the argument's value. Our violation pattern identifies \instruction{sstore} instructions that write a method argument to storage without previously checking its value. \para{Limitations} We next discuss a few limitations of checking properties through patterns. First, all our violation patterns assume that the violating instructions (which match the violation pattern) are part of some terminating execution.
For example, in the violation pattern of ether liquidity, the matching \instruction{stop} is assumed to be reachable, and in the violation pattern of no writes after calls, both the \instruction{call} and the write are assumed to be part of some terminating execution. We make this assumption since reachability is, in general, undecidable. Second, the security properties we consider are generic and do not capture contract-specific requirements (we illustrate the specification of contract-specific patterns in \textsc{Securify}\xspace's DSL below). Some vulnerabilities are, however, contract-specific, and therefore they are not captured by our compliance patterns (i.e., a contract can be exploitable even if a compliance pattern is matched). For example, our compliance pattern for handled exceptions matches if there is \emph{some} check over the \instruction{call}'s return value. However, the pattern cannot check that the exception was handled \emph{correctly}, as this is contract-specific. Similarly, the compliance pattern for validated arguments matches if there is \emph{some} check over the arguments. However, the check can still miss cases where inputs are not correctly validated, as the meaning of \emph{correctly validated} varies across contracts. Third, since our patterns do not capture precisely their corresponding properties, it can happen that a contract matches neither the compliance nor the violation pattern. In this case, \textsc{Securify}\xspace cannot infer whether the property holds, and thus shows a warning. \para{Contract-specific Patterns} Finally, we remark that \textsc{Securify}\xspace is not limited to checking the security properties described above. In fact, it is common that a security auditor would write custom patterns defined for a particular contract. Such custom patterns are specified by providing an expression in the \textsc{Securify}\xspace language.
To illustrate this, suppose an auditor wants to check whether the execution of a specific sensitive \instruction{call} instruction at label~$\labelc{l}$ depends on the address of the owner. To discover violations of this property, the auditor would write: $$ \begin{array}{lll} \relation{some}\ \instruction{call}(L, \_, \_, \_).\ \\ \qquad (L = \labelc{l}) \wedge \neg \big(\relation{some}\ \instruction{sload}(\_, Owner, X).\ \relation{MayDepOn}(L, X)\big) \end{array} $$ Here, $Owner$ is the identifier of the field storing the owner address, i.e., a constant offset in the contract's storage. \subsection{Error Localization via Violation Patterns} An important part of \textsc{Securify}\xspace is to pinpoint the instructions that lead to violations (or potential violations) of security properties, as this enables developers to fix the code. In this section, we characterize which patterns enable such error localization. We call such patterns \emph{instruction patterns} (as they pinpoint instructions), and we call the remaining patterns \emph{contract patterns} (as the violation is identified for the entire contract). \para{Instruction Patterns} An instruction pattern has the form $\relation{some}\ \instruction{instr}(\overline{X}).\ \varphi_v(\overline{X})$ for violation patterns, and $\relation{all}\ \instruction{instr}(\overline{X}).\ \varphi_c(\overline{X})$ for compliance patterns. That is, if a violation pattern is an instruction pattern and it is matched by some $\instruction{instr}(\overline{X})$, then \textsc{Securify}\xspace can highlight this instruction as a violation. Similarly, if a compliance pattern is an instruction pattern and it is \emph{not} matched because of some $\instruction{instr}(\overline{X})$, then \textsc{Securify}\xspace can highlight this instruction as a warning (assuming that the corresponding violation pattern has not matched).
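The localization behavior of instruction patterns can be sketched as follows (a hypothetical Python illustration; the helper and its encoding of instructions are ours, not \textsc{Securify}\xspace's):

```python
# Illustrative sketch of error localization for instruction patterns:
# a violation pattern "some instr(X...). phi" reports every matching
# instruction's label, so the developer sees exactly where to look.

def localize(instructions, phi):
    """instructions: list of (label, *args) tuples; phi: predicate over
    the instruction's arguments. Returns the labels to highlight."""
    return [label for (label, *args) in instructions if phi(*args)]

# Example: flag sstore instructions whose storage offset is the constant 0.
sstores = [("l3", 0, "v1"), ("l6", 5, "v2")]
assert localize(sstores, lambda offset, value: offset == 0) == ["l3"]
```

If no instruction satisfies the predicate, the list is empty and no violation is reported.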
Note that six of the violation patterns in Fig.~\ref{fig:patterns} (all except the violation pattern for ether liquidity) are instruction patterns. \para{Contract Patterns} Patterns which are not instruction patterns are called \emph{contract patterns}. For these, it is difficult to pinpoint a single instruction responsible for the violation. The ether liquidity violation pattern is an example of a contract pattern: it conjoins two different conditions pertaining to \instruction{stop} and \instruction{call} instructions. For contract patterns, \textsc{Securify}\xspace evaluates the compliance and violation patterns and flags the contract as vulnerable (if the violation pattern is matched) or issues a warning (if no pattern is matched) without pinpointing specific instructions. \section{Related Work} \label{sec:related} We discuss some of the works that are most closely related to ours. \para{Analysis of Smart Contracts} Smart contracts have been shown to be exposed to severe vulnerabilities~\cite{atzei2016survey,vitaly-security-blog}. Hirai~\cite{hirai2016formal} was among the first to formally verify smart contracts using the Isabelle proof assistant. In \cite{ITP-ethereum}, Hirai defines a formal model for the Ethereum Virtual Machine using the Lem language. This model can be used to prove safety properties of smart contracts with existing interactive theorem provers. Formal semantics of the EVM have been defined by Grishchenko~\emph{et al.} \cite{GrishchenkoMS18} using the F* framework and by Hildenbrandt~\emph{et al.} \cite{kevm} using the K framework~\cite{rosu-serbanuta-2010-jlap}. These semantics are executable and were validated against the official Ethereum test suite. Further, they enable the formal specification and verification of properties. The main benefit of these frameworks is that they provide strong formal verification guarantees and are precise (no false positives). They target arbitrary properties, but are, unfortunately, nontrivial to fully automate. 
In contrast, \textsc{Securify}\xspace targets properties that can be proved/disproved by checking simpler properties that can be verified in a fully automated way. In the space of automated security tools for smart contracts, there are several popular systems based on symbolic execution. Examples include Oyente~\cite{luu2016making}, Mythril~\cite{mythril}, and Maian~\cite{maian}. While symbolic execution is indeed a powerful generic technique for discovering bugs, it does not guarantee that all program paths are explored (resulting in false negatives). In contrast to these tools, \textsc{Securify}\xspace explores all contract behaviors. In the context of smart contracts, path constraints often involve hard-to-solve constraints, such as hash functions, resulting in low coverage or false positives. Further, to avoid false positives, symbolic tools must precisely explore the set of feasible contract blocks. Towards this, Maian already uses a concrete validation step to filter false positives. An interesting application of \textsc{Securify}\xspace would be to filter the false positives reported by symbolic tools using its compliance patterns. In contrast to the approaches based on symbolic execution, \textsc{Securify}\xspace is an abstract interpreter. As such, it can provide soundness guarantees over all possible executions. This is different from symbolic execution, which can only guarantee soundness if the number of paths can be bounded (in particular, this means that loops have to be unrolled). Even when the number of paths is bounded, an abstract interpreter often scales better than symbolic execution since it can join paths and does not have to explore different paths separately. On the other hand, symbolic execution can, in principle, handle more expressive predicates (within the logic of the underlying SMT solver), and, in theory, it has no false positives (in practice, as we show in Fig.~\ref{fig:compare}, it can have false positives). 
Bhargavan \emph{et al.}~\cite{Bhargavan:2016} present preliminary work on verifying Ethereum smart contracts by translating Solidity and EVM bytecode to an existing verification system. The paper does not report how their tool performs on real-world contracts. The work presented in~\cite{bigi2015validation} combines game theory and probabilistic model checking to validate a decentralized smart contract protocol. The Zeus system \cite{kalra2018zeus} is a sound analyzer that translates smart contracts to the LLVM framework. Zeus uses XACML as a language to write properties. In contrast, \textsc{Securify}\xspace's DSL supports the checking of data- and control-flow properties. Further, Zeus does not support violation patterns as a way to reduce false positives. We could not directly compare \textsc{Securify}\xspace with Zeus as neither Zeus nor its benchmarks are publicly available. Similarly to \textsc{Securify}\xspace, the work by Grossman~\emph{et al.}~\cite{GrossmanAGMRSZ18} also targets domain-specific properties. In more detail, they introduce a dynamic linearizability checker to identify reentrancy issues. In contrast, \textsc{Securify}\xspace supports a larger class of properties for smart contracts and supports a DSL to allow security experts to extend the system with more properties. \para{Security Factors} Delmolino \emph{et al.}~\cite{delmolino2016step} document the kinds of vulnerabilities students introduce while writing smart contracts and propose methods to avoid common pitfalls. Chen \emph{et al.}~\cite{chen2017under} show that the standard Solidity compiler does not properly optimize the EVM bytecode. Seijas \emph{et al.}~\cite{seijas2016scripting} overview the capabilities of different blockchains such as Bitcoin, Nxt, and Ethereum, and survey extensions, such as the one proposed by Kosba \emph{et al.}~\cite{kosba2016hawk}. \para{Language-Based Security} Programming language approaches enforce security at the program code level. 
PQL~\cite{PQL05} introduces a program query language for Java that allows developers to express patterns of interest and check Java programs against them. Both~\cite{PQL05} and our work have an underlying declarative solver for the static analysis. Pidgin~\cite{Johnson:2015:EES:2737924.2737957} is a custom query language for program dependence graphs that can also capture security properties on Java programs. In contrast, our work focuses on Ethereum smart contracts. \textsc{Securify}\xspace's analysis is tailored to the Ethereum setting, such as Ethereum-specific instructions (e.g., \instruction{balance}) and reasoning across memory and contract storage. Furthermore, \textsc{Securify}\xspace provides a DSL specific to security properties for smart contracts. \para{Declarative Program Analysis} Declarative approaches to program analysis are related to \textsc{Securify}\xspace's fact inference engine, as they also rely on Datalog to express analysis computations. The Doop framework~\cite{Smaragdakis:2010:UDF:2185923.2185939,Bravenboer:2009:SDS:1640089.1640108} presents a fast and scalable declarative points-to analysis for Java programs and is one of the first works to show the promise of declarative static analysis. Following these ideas, the authors of \cite{Zhang:2014:ARP:2666356.2594327} present a technique for automatic abstraction refinement for static analysis specified in Datalog, and in~\cite{Mangal:2015:UAP:2786805.2786851} the authors propose to involve the developer in the abstraction-refinement loop. Researchers have developed specific extensions to Datalog, such as Flix~\cite{Madsen:2016:DFD:2908080.2908096}, to improve the efficiency and scalability of Datalog-based program analysis. These works are orthogonal to \textsc{Securify}\xspace's inference engine. They develop general program analysis techniques, while \textsc{Securify}\xspace leverages these advances for reasoning about smart contracts. 
As such, \textsc{Securify}\xspace can benefit from any future advances in Datalog-based program analysis.
\section{Introduction} Private learning concerns the design of learning algorithms for problems in which the input sample contains sensitive data that needs to be protected. Such problems arise in various contexts, including those involving social network data, financial records, medical records, etc. The notion of differential privacy~\citep{Dwork06calib,Dwork06ourdata}, which is a standard mathematical formalism of privacy, enables a systematic study of algorithmic privacy in machine learning. The question \begin{center} ``Which problems can be learned by a private learning algorithm?'' \end{center} has attracted considerable attention~\citep{Rubinstein09large,Kasiv11learning,Chaudhuri11erm,Beimel13charac,Beimel14bounds,Chaudhuri14margin,Balcan15active,Bun15thresholds,Beimel15unlabeled,Feldman15communication,Wang16learning,Cummings16robust,Beimel16sanit,Bun16direct,Bassily16stability,Ligett17accuracy,Bassily18model,Feldman18prediction}. {\it Learning thresholds} is one of the most basic problems in machine learning. This problem consists of an unknown threshold function $c:{\mathbb R} \to\{\pm 1\}$, an unknown distribution~$D$ over ${\mathbb R}$, and the goal is to output an hypothesis $h:{\mathbb R}\to \{\pm 1\}$ that is close to $c$, given access to a limited number of input examples $(x_1,c(x_1)),\ldots,(x_m,c(x_m))$, where the~$x_i$'s are drawn independently from~$D$. The importance of thresholds stems from the fact that they appear as a subclass of many other well-studied classes. For example, thresholds form the one-dimensional version of the class of {\it Euclidean Half-Spaces}, which underlies popular learning algorithms such as kernel machines and neural networks (see e.g.\ \cite{Shalev14book}). Standard PAC learning of thresholds without privacy constraints is known to be easy and can be done using a constant number of examples. 
In contrast, whether thresholds can be learned privately turned out to be more challenging to decide, and there has been an extensive amount of work that addressed this task: the work by \cite{Kasiv11learning} implies a {\it pure differentially private} algorithm (see the next section for formal definitions) that learns thresholds over a finite $X\subseteq {\mathbb R}$ of size $n$ with $O(\log n)$ examples. \cite{Feldman15communication} showed a matching lower bound for any pure differentially private algorithm. \cite{Beimel16sanit} showed that by relaxing the privacy constraint to {\it approximate differential privacy}, one can significantly improve the upper bound to some $2^{O(\log^*(n))}$. \cite{Bun15thresholds} further improved the upper bound from \citep{Beimel16sanit} by polynomial factors and gave a lower bound of $\Omega(\log^*(n))$ that applies for any proper learning algorithm. They also explicitly asked whether the dependence on $n$ can be removed in the improper case. \new{\cite{Feldman15communication} asked more generally whether any class can be learned privately with sample complexity depending only on its VC dimension (ignoring standard dependencies on the privacy and accuracy parameters).} Our main result (\Cref{thm:main}) answers these questions by showing that a similar lower bound applies for any (possibly improper) learning algorithm. Despite the impossibility of privately learning thresholds, there are other natural learning problems that can be learned privately. In fact, even for the class of Half-spaces, private learning is possible if the target half-space satisfies a large {\it margin\footnote{The margin is a geometric measurement for the distance between the separating hyperplane and typical points that are drawn from the target distribution.}} assumption \citep{Blum05practical,Chaudhuri11erm}. 
Therefore, it will be interesting to find a natural invariant that characterizes which classes can be learned privately (like the way the VC dimension characterizes PAC learning~\citep{Blumer89learnability,Vapnik71uniform}). Such parameters exist in the case of pure differentially private learning; these include the {\it one-way communication complexity} characterization by~\cite{Feldman15communication} and the {\it representation dimension} by~\cite{Beimel13charac}. However, no such parameter is known for approximate differentially private learning. We next suggest a candidate invariant that arises naturally from this work. \subsection{Littlestone dimension vs. approximate private learning} The Littlestone dimension~\citep{Littlestone87online} is a combinatorial parameter that characterizes learnability of binary-labelled classes within {\it Online Learning} both in the realizable case~\citep{Littlestone87online} and in the agnostic case~\citep{Bendavid09agnostic}. It turns out that there is an intimate relationship between thresholds and the Littlestone dimension: a class $H$ has a finite Littlestone dimension if and only if it does not embed thresholds as a subclass (for a formal statement, see \Cref{thm:shelah}); this follows from a seminal result in model theory by \cite{Shelah78classification}. As explained below, Shelah's theorem is usually stated in terms of orders and ranks. \cite{chase2018model} noticed\footnote{Interestingly, though the Littlestone dimension is a basic parameter in Machine Learning (ML), this result has not appeared in the ML literature.} that the Littlestone dimension is the same as the model theoretic rank. Meanwhile, order translates naturally to thresholds. To make \Cref{thm:shelah} more accessible for readers with less background in model theory, we provide a combinatorial proof in the appendix. 
While it still remains open whether finite Littlestone dimension is indeed equivalent to private learnability, our main result (\Cref{thm:main}) combined with the above connection between Littlestone dimension and thresholds (\Cref{thm:shelah}) implies one direction: At least $\Omega(\log^* d)$ examples are required for privately learning any class with Littlestone dimension $d$ (see \Cref{thm:ADPimpliesLD}). It is worth noting that \cite{Feldman15communication} studied the Littlestone dimension in the context of pure differentially private learning: (i) they showed that $\Omega(d)$ examples are required for learning a class with Littlestone dimension $d$ in a pure differentially private manner, (ii) they exhibited classes with Littlestone dimension $2$ that can not be learned by pure differentially private algorithms, and (iii) they showed that these classes can be learned by approximate differentially private algorithms. \paragraph{Organization} The rest of this manuscript is organized as follows: \Cref{sec:mainresult} presents the two main results, \Cref{sec:pre} contains definitions and technical background from machine learning and differential privacy, and \Cref{sec:thresholds} and \Cref{sec:lsdim} contain the proofs. \ignore{ \begin{theorem}[Private learning implies finite Littlestone dimension]\label{thm:ADPimpliesLD} Let $H$ be an hypothesis class with Littlestone dimension $d\in{\mathbb N}\cup\{\infty\}$. Then any learning algorithm that learns $H$ and satisfies $(\epsilon,\delta)$-differential privacy with $\epsilon=1,\delta = O(\frac{1}{m^2\log m})$ requires at least some $\Omega(\log^*d)$ examples to achieve expected loss of at most $\frac{1}{4}$. In particular any class that is learnable under privacy guarantees has a bounded Littlestone dimension. \end{theorem}} \section{Main Results}\label{sec:mainresult} We next state the two main results of this paper. 
The statements use technical terms from differential privacy and machine learning whose definitions appear in \Cref{sec:pre}. We begin with the following statement, which resolves an open problem \new{in \cite{Feldman15communication} and \cite{Bun15thresholds}}: \begin{theorem}[Thresholds are not privately learnable]\label{thm:main} \new{Let $X\subseteq {\mathbb R}$ be of size $\lvert X\rvert = n$ and let $\cal A$ be a $(\frac{1}{16},\frac{1}{16})$-accurate learning algorithm for the class of thresholds over $X$ with sample complexity $m$ which satisfies $(\epsilon,\delta)$-differential privacy with $\epsilon=0.1$ and $\delta = O(\frac{1}{m^2\log m})$. Then, \[m\geq \Omega(\log^*n).\] In particular, the class of thresholds over an infinite $X$ can not be learned privately.} \end{theorem} \Cref{thm:main} and \Cref{thm:shelah} (which is stated in the next section) imply that any privately learnable class has a finite Littlestone dimension. As elaborated in the introduction, this extends a result by~\cite{Feldman15communication}. \begin{corollary}[Private learning implies finite Littlestone dimension]\label{thm:ADPimpliesLD} \new{Let $H$ be an hypothesis class with Littlestone dimension $d\in{\mathbb N}\cup\{\infty\}$ and let $\cal A$ be a $(\frac{1}{16},\frac{1}{16})$-accurate learning algorithm for $H$ with sample complexity $m$ which satisfies $(\epsilon,\delta)$-differential privacy with $\epsilon=0.1$ and $\delta = O(\frac{1}{m^2\log m})$. Then, \[m\geq \Omega(\log^*d).\] In particular any class that is privately learnable has a finite Littlestone dimension.} \end{corollary} \section{Preliminaries}\label{sec:pre} \subsection{PAC learning} We use standard notation from statistical learning, see e.g.\ \citep{Shalev14book}. Let $X$ be a set and let $Y=\{\pm 1\}$. An {\it hypothesis} is an $X\to Y$ function. An {\it example} is a pair in $X\times Y$. A {\it sample} $S$ is a finite sequence of examples. 
The {\it loss of $h$ with respect to $S$} is defined by \[L_S(h) = \frac{1}{\lvert S\rvert}\sum_{(x_i,y_i)\in S}1[h(x_i)\neq y_i].\] The {\it loss of $h$ with respect to a distribution} $D$ over $X\times Y$ is defined by \[L_D(h) = \Pr_{(x,y)\sim D} [h(x)\neq y].\] Let $\mathcal{H}\subseteq Y^X$ be an {\it hypothesis class}. $S$ is said to be {\it realizable by $\mathcal{H}$} if there is $h\in \mathcal{H}$ such that $L_S(h)=0$. $D$ is said to be {\it realizable by $\mathcal{H}$} if there is $h\in \mathcal{H}$ such that $L_D(h)=0$. A {\it learning algorithm} $A$ is a (possibly randomized) mapping taking input samples to output hypotheses. We denote by $A(S)$ the distribution over hypotheses induced by the algorithm when the input sample is $S$. We say that $A$ {\it learns\footnote{We focus on the realizable case.} a class $\mathcal{H}$} with $\alpha$-{\it error}, $(1-\beta)$-{\it confidence}, and {\it sample-complexity} $m$ if for every realizable distribution $D$: \[\Pr_{S\sim D^m,~h\sim A(S)}[L_D(h) > \alpha] \leq \beta.\] For brevity, if $A$ is a learning algorithm with $\alpha$-error and $(1-\beta)$-confidence we will say that~$A$ is an \emph{$(\alpha,\beta)$-accurate learner}. \paragraph{Littlestone Dimension} The Littlestone dimension is a combinatorial parameter that characterizes regret bounds in Online Learning \citep{Littlestone87online,Bendavid09agnostic}. The definition of this parameter uses the notion of {\it mistake-trees}: these are binary decision trees whose internal nodes are labelled by elements of $X$. Any root-to-leaf path in a mistake tree can be described as a sequence of examples $(x_1,y_1),...,(x_d,y_d)$, where $x_i$ is the label of the $i$'th internal node in the path, and $y_i=+1$ if the $(i+1)$'th node in the path is the right child of the $i$'th node, and otherwise $y_i = -1$. 
We say that a tree $T$ is {\it shattered }by $\mathcal{H}$ if for any root-to-leaf path $(x_1,y_1),...,(x_d,y_d)$ in $T$ there is $h\in \mathcal{H}$ such that $h(x_i)=y_i$, for all $i\leq d$. The Littlestone dimension of $\mathcal{H}$, denoted by $\ldim{\mathcal{H}}$, is the depth of the largest complete tree that is shattered by~$\mathcal{H}$. Recently, \cite{chase2018model} noticed that the Littlestone dimension coincides with a model-theoretic measure of complexity, Shelah's $2$-rank. A classical theorem of Shelah connects bounds on 2-rank (Littlestone dimension) to bounds on the so-called order property in model theory. The order property corresponds naturally to the concept of {\it thresholds}. Let $\mathcal{H}\subseteq \{\pm 1\}^X$ be an hypothesis class. We say that $\mathcal{H}$ {\it contains $k$ thresholds} if there are $x_1,\ldots,x_k\in X$ and $h_1,\ldots,h_k\in \mathcal{H}$ such that $h_i(x_j) = 1$ if and only if $i\leq j$ for all~$i,j\leq k$. Shelah's result (part of the so-called Unstable Formula Theorem\footnote{\cite{Shelah78classification} provides a qualitative statement; a quantitative one that is more similar to \Cref{thm:shelah} can be found in~\cite{Hodges97book}.}) \citep{Shelah78classification, Hodges97book}, which we use in the following translated form, provides a simple and elegant connection between Littlestone dimension and thresholds. \begin{theorem}\label{thm:shelah}(Littlestone dimension and thresholds \citep{Shelah78classification, Hodges97book})\ \\ Let $\mathcal{H}$ be an hypothesis class, then: \begin{enumerate} \item If $\ldim{\mathcal{H}}\geq d$, then $\mathcal{H}$ contains $\lfloor \log d\rfloor$ thresholds. \item If $\mathcal{H}$ contains $d$ thresholds, then $\ldim{\mathcal{H}}\geq \lfloor \log d\rfloor$. \end{enumerate} \end{theorem} For completeness, we provide a combinatorial proof of \Cref{thm:shelah} in \Cref{sec:shelah}. 
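To make the connection between thresholds and shattered mistake-trees concrete, the following brute-force sketch computes the Littlestone dimension of a tiny finite class via the standard recursion ($\ldim{\mathcal{H}}\geq d+1$ iff some point splits $\mathcal{H}$ into a positive and a negative subclass, each of Littlestone dimension at least $d$). The tuple encoding of hypotheses and the function names are our own; this is only feasible for very small classes.

```python
def ldim(H, X):
    """Littlestone dimension of a finite class H (hypotheses as +/-1 tuples
    indexed by the points of X), via the standard recursion: ldim(H) >= d+1
    iff some x in X splits H into nonempty subclasses {h : h(x)=+1} and
    {h : h(x)=-1}, each of Littlestone dimension >= d."""
    if len(H) <= 1:
        return 0 if H else -1  # a single hypothesis shatters only depth-0 trees
    best = 0
    for x in X:
        pos = frozenset(h for h in H if h[x] == +1)
        neg = frozenset(h for h in H if h[x] == -1)
        if pos and neg:
            best = max(best, 1 + min(ldim(pos, X), ldim(neg, X)))
    return best

def thresholds(k):
    """A class containing k thresholds over X = {0,...,k-1}: h_i(j)=+1 iff i<=j."""
    return frozenset(tuple(+1 if i <= j else -1 for j in range(k))
                     for i in range(k))

# d thresholds yield Littlestone dimension floor(log d), matching the theorem:
assert ldim(thresholds(2), range(2)) == 1
assert ldim(thresholds(4), range(4)) == 2
assert ldim(thresholds(8), range(8)) == 3
```

A chain of $2^d$ thresholds shatters a complete mistake-tree of depth $d$ by always branching on the midpoint, which is exactly what the recursion finds.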
In the context of model theory, \Cref{thm:shelah} is used to establish an equivalence between finite Littlestone dimension and \emph{stable theories}. It is interesting to note that an analogous connection between theories that are called \emph{NIP theories} and VC dimension has also been previously observed and was pointed out by \cite{laskowski1992vapnik}; this in turn led to results in learning theory: in particular within the context of compression schemes \citep{livni2013honest}, but also in some of the first polynomial bounds on the VC dimension of sigmoidal neural networks \citep{karpinski1997polynomial}. \subsection{Privacy} We use standard notation from differential privacy. For more background see e.g.\ the surveys~\citep{Dwork14survey,Vadhan17survey}. For $a,b\in {\mathbb R}$, let $a=_{\epsilon,\delta} b$ denote the statement \[a\leq e^{\epsilon}b + \delta ~\text{ and }~ b\leq e^\epsilon a + \delta.\] We say that two distributions $p,q$ are {\it $(\epsilon,\delta)$-indistinguishable} if $p(E) =_{\epsilon,\delta} q(E)$ for every event~$E$. Note that when~$\epsilon=0$ this specializes to the total variation metric. \begin{definition}[Private Learning Algorithm]\label{def:private} A randomized learning algorithm \[A: (X\times \{\pm 1\})^m \to \{\pm 1\}^X\] is $(\epsilon,\delta)$-differentially private if for every two samples $S,S'\in (X\times \{\pm 1\})^m$ that disagree on a single example, the output distributions $A(S)$ and $A(S')$ are $(\epsilon,\delta)$-indistinguishable. \end{definition} The parameters $\epsilon,\delta$ are usually treated as follows: $\epsilon$ is a small constant (say $0.1$), and $\delta$ is negligible, $\delta = m^{-\omega(1)}$, where $m$ is the input sample size. The case of $\delta=0$ is also referred to as {\it pure differential privacy}. 
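For intuition, $(\epsilon,\delta)$-indistinguishability of two distributions with small finite support can be checked directly by enumerating all events. The following sketch (our own illustration, exponential in the support size and therefore only for toy examples) does exactly that.

```python
from itertools import combinations
from math import exp

def indistinguishable(p, q, eps, delta):
    """Check p(E) <= e^eps * q(E) + delta and q(E) <= e^eps * p(E) + delta
    for EVERY event E, by brute force over all subsets of the joint support.
    p and q are dicts mapping outcomes to probabilities."""
    support = sorted(set(p) | set(q))
    for r in range(len(support) + 1):
        for event in combinations(support, r):
            pe = sum(p.get(x, 0.0) for x in event)
            qe = sum(q.get(x, 0.0) for x in event)
            if pe > exp(eps) * qe + delta or qe > exp(eps) * pe + delta:
                return False
    return True

p = {0: 0.5, 1: 0.5}
q = {0: 0.6, 1: 0.4}
# The worst event is {1}: the ratio 0.5/0.4 = 1.25 forces eps >= ln(1.25) ~ 0.223.
assert not indistinguishable(p, q, eps=0.2, delta=0.0)
assert indistinguishable(p, q, eps=0.3, delta=0.0)
```

Setting `eps=0` in the check recovers the total variation condition $\lvert p(E)-q(E)\rvert\leq\delta$ for all events $E$, matching the remark above.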
A common interpretation of a negligible~$\delta > 0$ is that there is a tiny chance of a catastrophic event (in which perhaps all the input data is leaked) but otherwise the algorithm satisfies pure differential privacy. Thus, a class $\mathcal{H}$ is privately learnable if it is PAC learnable by an algorithm $A$ that is $(\epsilon(m),\delta(m))$-differentially private with $\epsilon(m) \leq o(1)$, and $\delta(m) \leq m^{-\omega(1)} $. We will use the following corollary of the {\it Basic Composition Theorem} from differential privacy (see, e.g.\ Theorem 3.16 in \citep{Dwork14survey}). \begin{lemma}\label{lem:prod}\citep{Dwork06ourdata,Dwork09robust} If $p,q$ are $(\epsilon,\delta)$-indistinguishable then for all $k\in\mathbb{N}$, $p^k$ and $q^k$ are $(k\epsilon,k\delta)$-indistinguishable, where $p^k,q^k$ are the k-fold products of~$p,q$ (i.e.\ corresponding to $k$ independent samples). \end{lemma} For completeness, a proof of this statement appears in \Cref{app:prod}. \paragraph{Private Empirical Learners} It will be convenient to consider the following task of minimizing the empirical loss. \begin{definition}[Empirical Learner] Algorithm $A$ is an $(\alpha,\beta)$-accurate empirical learner for a hypothesis class $\mathcal{H}$ with sample complexity $m$ if for every $h\in \mathcal{H}$ and for every sample $S=((x_1,h(x_1)),\ldots, (x_m,h(x_m)))\in \left(X\times \{\pm 1\}\right)^m$ the algorithm $A$ outputs a function~$f$ satisfying \[\Pr_{f\sim A(S)}\bigl(L_{S}(f) \le \alpha\bigr)\ge 1-\beta.\] \end{definition} This task is simpler to handle than PAC learning, which is a distributional loss minimization task. Replacing PAC learning by this task does not lose generality; this is implied by the following result by \cite{Bun15thresholds}. 
\begin{lemma}[\cite{Bun15thresholds}, Lemma 5.9]\label{lem:bun} Suppose \new{$\epsilon<1$} and $A$ is an $(\epsilon,\delta)$-differentially private $(\alpha,\beta)$-accurate learning algorithm for a hypothesis class $\mathcal{H}$ with sample complexity~$m$. Then there exists an $(\epsilon,\delta)$-differentially private $(\alpha,\beta)$-accurate empirical learner for $\mathcal{H}$ with sample complexity $9m$. \end{lemma} \subsection{Additional notations} A sample $S$ of an even length is called \emph{balanced} if half of its labels are $+1$'s and half are $-1$'s. For a sample $S$, let $S_X$ denote the underlying set of unlabeled examples: $S_X = \bigl\{x \vert (\exists y): (x,y)\in S\bigr\}$. Let~$A$ be a randomized learning algorithm. It will be convenient to associate with~$A$ and $S$ the function $A_S:X\to[0,1]$ defined by \[ A_S(x) = \Pr_{h\sim A(S)}\bigl[h(x)=1\bigr]. \] Intuitively, this function represents the average hypothesis outputted by $A$ when the input sample is $S$. For the next definitions assume that the domain $X$ is linearly ordered. Let $S=((x_i,y_i))_{i=1}^{m}$ be a sample. We say that $S$ is {\it increasing} if $x_1< x_2< \ldots< x_m$. For $x\in X$ define ${\text{ord}}_S(x)$ by $\lvert\{ i \vert x_i \leq x\}\rvert$. Note that the set of points $x\in X$ with the same ${\text{ord}}_S(x)$ forms an interval whose endpoints are two consecutive examples in $S$ (consecutive with respect to the order on $X$, i.e.\ there is no example $x_i$ between them). The {\it tower function} $\mathsf{twr}^{(k)}(x)$ is defined by the recursion \[ \mathsf{twr}^{(i)}(x) = \begin{cases} x &i = 1,\\ 2^{\mathsf{twr}^{(i-1)}(x)} &i> 1. \end{cases} \] The iterated logarithm, $\log^{(k)}(x)$, is defined by the recursion \[ \log^{(i)} x = \begin{cases} \log x &i = 1,\\ \log^{(i-1)}\log x &i> 1. \end{cases} \] The function $\log^*x$ equals the number of times the logarithm must be iterated before the result becomes less than or equal to $1$. 
It is defined by the recursion \[ \log^* x = \begin{cases} 0 &x\leq 1,\\ 1 + \log^*\log x &x>1. \end{cases} \] \section{A lower bound for privately learning thresholds}\label{sec:thresholds} In this section we prove \Cref{thm:main}. \subsection{Proof overview} We begin by considering an arbitrary differentially private algorithm $A$ that learns the class of thresholds over an ordered domain $X$ of size $n$. Our goal is to show a lower bound of~$\Omega(\log^* n)$ on the sample complexity of $A$. A central challenge in the proof arises because $A$ may be improper and output arbitrary hypotheses (this is in contrast with proving impossibility results for proper algorithms, where the structure of the learned class can be exploited). The proof consists of two parts: (i) the first part handles the above challenge by showing that for any algorithm (in fact, for any mapping that takes input samples to output hypotheses) there is a large subset of the domain that is {\it homogeneous with respect to the algorithm}. This notion of homogeneity places useful restrictions on the algorithm when restricting it to the homogeneous set. (ii) the second part of the argument utilizes such a large homogeneous set $X'\subseteq X$ to derive a lower bound on the sample complexity of the algorithm in terms of~$\lvert X' \rvert$. \new{We note that the Ramsey argument in the first part is quite general: it does not use the definition of differential privacy and could perhaps be useful in other sample complexity lower bounds. Also, a similar argument was used by~\cite{bun16thesis} in a weaker lower bound for privately learning thresholds in the proper case. However, the second and more technical part of the proof is tailored specifically to the definition of differential privacy.} We next outline each of these two parts. 
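As a quick sanity check on the growth rates just defined, the tower function and $\log^*$ can be implemented directly from their recursions. This is only an illustrative sketch (exact for powers of two; the function names are ours):

```python
from math import log2

def twr(i, x):
    """Tower function: twr(1, x) = x and twr(i, x) = 2 ** twr(i-1, x)."""
    return x if i == 1 else 2 ** twr(i - 1, x)

def log_star(x):
    """log*(x) = 0 if x <= 1, else 1 + log*(log2(x))."""
    return 0 if x <= 1 else 1 + log_star(log2(x))

# twr(., 1) gives 1, 2, 4, 16, 65536, ... and log* inverts the tower up to a shift:
assert [twr(i, 1) for i in range(1, 6)] == [1, 2, 4, 16, 65536]
assert [log_star(twr(i, 1)) for i in range(1, 6)] == [0, 1, 2, 3, 4]
```

The extremely slow growth of $\log^*$ is what makes the $\Omega(\log^* n)$ lower bound so weak quantitatively, yet still sufficient to rule out private learnability over infinite domains.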
\paragraph{Reduction to an algorithm over an homogeneous set} As discussed above, the first step in the proof is about identifying a large homogeneous subset of the input domain $X$ on which we can control the output of $A$: a subset $X'\subseteq X$ is called {\it homogeneous with respect to~$A$} if there is a list of numbers $p_0,p_1,\ldots,p_m$ such that for every increasing balanced sample~$S$ of points from $X'$ and for every $x'$ from $X'$ with ${\text{ord}}_S(x') = i$: \[\lvert A_S(x') - p_i\rvert \leq \gamma,\] where $\gamma$ is sufficiently small. For simplicity, in this proof-overview we will assume that~$\gamma=0$ (in the formal proof $\gamma$ is some $O(1/m)$; see \Cref{def:homog}). So, for example, if $A$ is deterministic then $h=A(S)$ is constant over each of the intervals defined by consecutive examples from $S$. See \Cref{fig:homogenic} for an illustration of homogeneity. The derivation of a large homogeneous set follows by a standard application of Ramsey's theorem for hypergraphs using an appropriate coloring (\Cref{lem:ramsey}). \begin{figure} \includegraphics[scale=0.5]{homo.pdf} \caption{\small{Depiction of two possible outputs of an algorithm over an homogeneous set, given two input samples from the set (marked in red). The numbers $p_i$ denote, for a given point $x$, the probability that $h(x)=1$, where $h\sim A(S)$ is the hypothesis $h$ outputted by the algorithm on input sample $S$. These probabilities depend (up to a small additive error) only on the interval that $x$ belongs to. In the figure above we changed the fourth example in the input; this only affects the interval and not the values of the $p_i$'s (again, up to a small additive error).}}\label{fig:homogenic} \end{figure} \paragraph{Lower bound for an algorithm defined on large homogeneous sets} We next assume that $X'=\{1,\ldots,k\}$ is a large homogeneous set with respect to $A$ (with $\gamma=0$). 
We will obtain a lower bound on the sample complexity of $A$, denoted by $m$, by constructing a family $\P$ of distributions such that: (i) on the one hand $\lvert \P \rvert \leq 2^{\tilde O(m^2)}$, and (ii) on the other hand $\lvert \P \rvert\geq \Omega(k)$. Combining these inequalities yields a lower bound on $m$ and concludes the proof. The construction of $\P$ proceeds as follows and is depicted in \Cref{fig:AtoP}: let $S$ be an increasing balanced sample of points from $X'$. Using the fact that $A$ learns thresholds, it is shown that for some $i_1<i_2$ we have that $p_{i_1}\leq 1/3$ and $p_{i_2} \geq 2/3$. Thus, by a simple averaging argument there is some $i_1\leq i \leq i_2$ such that $p_{i} - p_{i-1} \geq \Omega(1/m)$. The last step in the construction is done by picking an increasing sample $S$ such that the interval $(x_{i-1}, x_{i+1})$ has size $n=\Omega(k)$. For $x\in (x_{i-1}, x_{i+1})$, let $S_x$ denote the sample obtained by replacing $x_i$ with $x$ in $S$. Each output distribution $A(S_x)$ can be seen as a distribution over the cube $\{\pm 1\}^n$ (by restricting the output hypothesis to the interval $(x_{i-1}, x_{i+1})$, which is of size $n$). This is the family of distributions $\P=\{P_j : j\leq n\}$. Since $A$ is private, and by the choice of the interval $(x_{i-1}, x_{i+1})$, we obtain that $\P$ has the following two properties: \begin{itemize} \item $P_{j'}, P_{j''}$ are $(\epsilon,\delta)$-indistinguishable for all $j',j''$, and \item Put $r=\frac{p_{i-1} + p_{i}}{2}$, then for all $P_j$ \[(\forall x\leq n): \Pr_{v\sim P_j}\bigl[v(x)=1\bigr] = \begin{cases} r-\Omega(1/m) &x<j,\\ r+\Omega(1/m) &x>j. \end{cases} \] \end{itemize} It remains to show that $\Omega(k) \leq \lvert \P\rvert \leq 2^{\tilde O(m^2)}$. The lower bound follows directly from the definition of $\P$. The upper bound requires a more subtle argument: it exploits the assumption that $\delta$ is small and \Cref{lem:prod} via a binary-search argument and concentration bounds. 
This argument appears in \Cref{lem:binary}, whose proof is self-contained. \begin{figure}[h] \includegraphics[scale=0.45]{AtoP.pdf} \caption{\small{An illustration of the definition of the family $\P$. Given a homogeneous set, consider two consecutive intervals with a gap of at least $\Omega(1/m)$ between $p_{i-1}$ and $p_i$ (here $i=4$). The distributions in $\P$ correspond to the different positions of the $i$'th example, which separates the $(i-1)$'th interval from the $i$'th interval.}}\label{fig:AtoP} \end{figure} \subsection{Proof of \Cref{thm:main}} The proof uses the following definition of homogeneous sets. Recall the definitions of a balanced sample and of an increasing sample. In particular, a sample $S=((x_1,y_1),\ldots,(x_m,y_m))$ of even size is realizable (by thresholds), balanced, and increasing if and only if $x_1<x_2<\ldots<x_m$, the first half of the $y_i$'s are $-1$, and the second half are $+1$. \begin{definition}[$m$-homogeneous set]\label{def:homog} A set $X'\subseteq X$ is {\it $m$-homogeneous} with respect to a learning algorithm $A$ if there are numbers $p_i\in [0,1]$, for $0\leq i\leq m$, such that for every increasing balanced realizable sample $S\in \bigl(X' \times \{\pm 1\}\bigr)^m$ and for every $x\in X'\setminus S_X$: \[\bigl\lvert A_S(x) - p_i\bigr\rvert \leq \frac{1}{10^2 m},\] where $i = {\text{ord}}_S(x)$. The list $(p_i)_{i=0}^m$ is called the probabilities-list of $X'$ with respect to $A$. \end{definition} \begin{proof}[Proof of \Cref{thm:main}] Let $A$ be a $(1/16,1/16)$-accurate learning algorithm that learns the class of thresholds over $X$ with $m$ examples and is $(\epsilon,\delta)$-differentially private with $\epsilon=0.1$ and $\delta = \frac{1}{10^3m^2\log m}$. By \Cref{lem:bun} we may assume without loss of generality that $A$ is an empirical learner with the same privacy and accuracy parameters and a sample size that is at most 9 times larger.
\Cref{thm:main} follows from the following two lemmas: \begin{lemma}[Every algorithm has large homogeneous sets]\label{lem:finiteramsey}\label{lem:ramsey} Let $A$ be a (possibly randomized) algorithm that is defined over input samples of size $m$ over a domain $X\subseteq \mathbb{R}$ with $\lvert X\rvert = n$. Then, there is a set $X'\subseteq X$ that is $m$-homogeneous with respect to~$A$ of size \[ \lvert X'\rvert \geq \frac{\log^{(m)}(n)}{2^{O(m\log m)}}.\] \end{lemma} \Cref{lem:ramsey} allows us to focus on a large homogeneous set with respect to $A$. The next lemma implies a lower bound in terms of the size of a homogeneous set. For simplicity and without loss of generality, assume that the homogeneous set is $\{1,\ldots,k\}$. \begin{lemma}[Large homogeneous sets imply lower bounds for private learning]\label{lem:lbhomog} Let $A$ be a $(0.1,\delta)$-differentially private algorithm with sample complexity $m$ and $\delta \leq \frac{1}{10^3m^2\log m}$. Let~$X=\{1,\ldots, k\}$ be $m$-homogeneous with respect to~$A$. If $A$ empirically learns the class of thresholds over $X$ with $(1/16,1/16)$-accuracy, then \[k \leq 2^{O(m^2\log^2m)}\] (i.e.~$m \geq \Omega\Bigl(\frac{\sqrt{\log k}}{\log\log k}\Bigr)$). \end{lemma} We prove \Cref{lem:ramsey} and \Cref{lem:lbhomog} in the following two subsections. With these lemmas in hand, \Cref{thm:main} follows by a short calculation: indeed, \Cref{lem:ramsey} implies the existence of a homogeneous set $X'$ with respect to $A$ of size $k\geq {\log^{(m)}(n)}/{2^{O(m\log m)}}$. We then restrict $A$ to input samples from the set $X'$, and by relabeling the elements of $X'$ assume that $X'=\{1,\ldots,k\}$. \Cref{lem:lbhomog} then implies that $k \leq 2^{O(m^2\log^2m)}$. Together we obtain that \[ \log^{(m)}(n) \leq 2^{c\cdot m^2\log m} \] for some constant $c > 0$.
Applying the iterated logarithm $t=\log^*(2^{c\cdot m^2\log m}) = \log^{*}(m)+O(1)$ times to the inequality yields that \[\log^{(m+t)}(n)=\log^{(m + \log^*(m) + O(1))}(n) \leq 1,\] and therefore $\log^*(n) \leq \log^*(m) + m +O(1)$, which implies that $m \geq \Omega(\log^* n)$ as required. \end{proof} \subsection{Proof of \Cref{lem:ramsey}} We next prove that every learning algorithm has a large homogeneous set. We will use the following quantitative version of Ramsey's theorem due to \cite{erdos52combinatorial} (see also the book \citep{graham90ramsey}, or Theorem 10.1 in the survey by~\cite{mubayi17survey}): \begin{theorem}\label{thm:ramsey}\citep{erdos52combinatorial} Let $s>t\geq 2$ and $q$ be integers, and let \[N\geq \mathsf{twr}_t(3sq\log q).\] Then for every coloring of the subsets of size $t$ of a universe of size $N$ using $q$ colors there is a homogeneous subset\footnote{A subset of the universe is homogeneous if all of its $t$-subsets have the same color.} of size $s$. \end{theorem} \begin{proof}[Proof of \Cref{lem:ramsey}] Define a coloring on the $(m+1)$-subsets of $X$ as follows. Let $D=\{x_1<x_2<\ldots<x_{m+1}\}$ be an $(m+1)$-subset of~$X$. For each $i\leq m+1$ let $D^{-i} = D\setminus\{x_i\}$, and let $S^{-i}$ denote the balanced increasing sample on $D^{-i}$. Set $p_{i}$ to be the fraction of the form~$\frac{j}{10^2m}$ that is closest to $A_{S^{-i}}(x_{i})$ (in case of ties pick the smallest such fraction). The color assigned to $D$ is the list $(p_1,p_2,\ldots,p_{m+1})$. Thus, the total number of colors is~$(10^2m+1)^{(m+1)}$. By applying \Cref{thm:ramsey} with $t:=m+1, q:=(10^2m+1)^{(m+1)}$, and $N:=n$, there is a set $X'\subseteq X$ of size \[\lvert X'\rvert \geq \frac{\log^{(m)}(n)}{{3(10^2m+1)^{m+1} (m+1)\log(10^2m+1)}} = \frac{\log^{(m)}(n)}{2^{O(m\log m)}}\] such that all $(m+1)$-subsets of $X'$ have the same color. One can verify that $X'$ is indeed $m$-homogeneous with respect to $A$.
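The tower function $\mathsf{twr}_t$ and the iterated logarithm $\log^{(k)}$ appearing in these bounds can be sanity-checked numerically. The following toy Python sketch (the function names are ours, not from the paper) illustrates that iterating the logarithm inverts the tower, which is exactly the bookkeeping used above to extract $m\geq \Omega(\log^* n)$:

```python
from math import log2

def twr(t: int, x: float) -> float:
    """twr_t(x): tower of 2s of height t, as in the Ramsey theorem above."""
    return x if t == 1 else 2 ** twr(t - 1, x)

def ilog(k: int, x: float) -> float:
    """log^{(k)}(x): the k-times iterated (base-2) logarithm."""
    for _ in range(k):
        x = log2(x)
    return x

def log_star(x: float) -> int:
    """log^*(x): number of log2 applications needed to drive x down to <= 1."""
    s = 0
    while x > 1:
        x = log2(x)
        s += 1
    return s

# Iterating the logarithm inverts the tower: log^{(t-1)}(twr_t(x)) = x.
assert ilog(2, twr(3, 2)) == 2.0
# A few extra applications of log2 absorb any fixed tower height,
# which is the step used to conclude log^*(n) <= log^*(m) + m + O(1).
assert log_star(twr(4, 2)) == log_star(2) + 3
```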
\end{proof} \subsection{Proof of \Cref{lem:lbhomog}} The lower bound is proven by using the algorithm $A$ to construct a family of distributions $\P$ with certain properties, and using these properties to derive that $\Omega(k) \leq \lvert\P\rvert \leq 2^{O(m^2\log^2 m)}$, which implies the desired lower bound. \begin{lemma}\label{lem:AtoP} Let $A,X',m,k$ be as in \Cref{lem:lbhomog}, and set $n=k-m$. Then there exists a family $\P=\{P_i : i\le n\}$ of distributions over $\{\pm 1\}^n$ with the following properties: \begin{enumerate} \item Every $P_i,P_j\in \P$ are $(0.1,\delta)$-indistinguishable. \item There exists $r\in [0,1]$ such that for all $i,j\le n$: \[\Pr_{v\sim P_i}\bigl[v(j)=1\bigr] \begin{cases} \leq r-\frac{1}{10m} &j < i,\\ \geq r+\frac{1}{10m} &j>i. \end{cases}\] \end{enumerate} \end{lemma} \begin{lemma}\label{lem:binary} Let $\P,n,m,r$ be as in \Cref{lem:AtoP}. Then $n \leq 2^{10^3 m^2\log^2m}$. \end{lemma} By the above lemmas, $k-m = \lvert \P\rvert \leq 2^{10^3m^2\log^2 m}$, which implies that $k=2^{O(m^2\log^2 m)}$ as required. Thus, it remains to prove these lemmas, which we do next. \subsubsection{Proof of \Cref{lem:AtoP}} For the proof of \cref{lem:AtoP} we will need the following claim: \begin{claim}\label{lem:reduction} Let $(p_i)_{i=0}^m$ denote the probabilities-list of $X'$ with respect to $A$. Then for some $0 < i \leq m$: \[p_{i} - p_{i-1} \geq \frac{1}{4m}.\] \end{claim} \begin{proof} The proof of this claim uses the assumption that $A$ empirically learns thresholds. Let $S$ be a balanced increasing realizable sample such that $S_X=\{x_1<\ldots <x_m\}\subseteq X'$ are evenly spaced points in $X'$ (so, $S=(x_i,y_i)_{i=1}^m$, where $y_i = -1$ for $i\leq m/2$ and $y_i=+1$ for $i> m/2$).
$A$ is an $(\alpha=1/16,\beta=1/16)$-empirical learner and therefore its expected empirical loss on~$S$ is at most $(1-\beta)\cdot\alpha + \beta\cdot 1\leq \alpha + \beta = 1/8$, and so: \begin{align*} \frac{7}{8}& \le \mathop\mathbb{E}_{h\sim A(S)} (1-L_S(h))\\&= \frac{1}{m}\sum_{i=1}^{m/2} \left[1-A_S(x_i) \right]+\frac{1}{m}\sum_{i=m/2+1}^m \left[ A_S(x_i) \right]. \tag{since $S$ is balanced} \end{align*} This implies that there is $m/2\le m_1\le m$ such that $A_S(x_{m_1})\ge 3/4$. Next, by privacy, if we consider the sample $S'$ in which $x_{m_1}$ is replaced by $x_{m_1}+1$ (with the same label), we have that \[A_{S'}(x_{m_1}) \ge \Bigl(\frac{3}{4} -\delta\Bigr) e^{-0.1} \ge \frac{2}{3}.\] Note that ${\text{ord}}_{S'}(x_{m_1}) = m_1-1$, hence by homogeneity: $p_{m_1-1}\ge\frac{2}{3}- \frac{1}{10^2m}$. Similarly we can show that for some $1 \le m_2\le \frac{m}{2}$ we have $p_{m_2-1}\le\frac{1}{3} + \frac{1}{10^2m}$. This implies that for some $m_2 \leq i \leq m_1 - 1$: \[p_{i} - p_{i-1} \geq \frac{1/3}{m} - \frac{1}{50m^2} \geq \frac{1}{4m}, \] as required. \end{proof} \begin{proof}[Proof of \Cref{lem:AtoP}] Let $i$ be the index guaranteed by \Cref{lem:reduction} such that $p_{i} - p_{i-1}\geq \frac{1}{4m}$. Pick an increasing realizable sample $S\in \bigl(X'\times\{\pm 1\}\bigr)^m$ so that the interval $J\subseteq X'$ between $x_{i-1}$ and $x_{i+1}$, \[J = \bigl\{x\in \{1,\ldots, k\} : x_{i-1} < x < x_{i+1}\bigr\},\] is of size $k-m$. For every $x\in J$ let $S_x$ be the neighboring sample of $S$ that is obtained by replacing $x_i$ with $x$. This yields a family of neighboring samples $\bigl\{S_x: x\in (x_{i-1},x_{i+1})\bigr\}$ such that \begin{itemize} \item every two output-distributions $A(S_{x'})$, $A(S_{x''})$ are $(\epsilon,\delta)$-indistinguishable (because $A$ satisfies $(\epsilon,\delta)$-differential privacy). \item Set $r = \frac{p_{i-1} + p_i}{2}$.
Then for all $x,x'\in J$: \[\Pr_{h\sim A(S_x) }\bigl[h(x')=1\bigr] \begin{cases} \leq r-\frac{1}{10m} & x' < x,\\ \geq r+\frac{1}{10m} & x' >x. \end{cases}\] The proof is concluded by restricting the output of $A$ to $J$, and identifying $J$ with $[n]$ and each output-distribution $A(S_x)$ with a distribution over $\{\pm 1 \}^n$. \end{itemize} \end{proof} \subsubsection{Proof of \Cref{lem:binary}} \begin{proof} Set $T=10^3 m^2\log^2 m - 1$, and $D = 10^2m^2\log T$. We want to show that $n\leq 2^{T+1}$. Assume towards contradiction that~$n > 2^{T+1}$. Consider the family of distributions $Q_i=P_i^D$ for $i=1,\ldots,n$. By Lemma~\ref{lem:prod}, each $Q_i,Q_j$ are $(0.1D,\delta D)$-indistinguishable. We next define a set of mutually disjoint events $E_i$ for $i\leq 2^T$ that are measurable with respect to each of the~$Q_i$'s. For a sequence of vectors $\mathbf{v}=(v_1,\ldots,v_D)$ in $\{\pm 1\}^n$ we let $\bar{\mathbf{v}}\in \{\pm 1\}^n$ be the threshold vector defined by \[\bar{\mathbf{v}}(j) = \begin{cases} -1 & \frac{1}{D}\sum_{i=1}^D v_i(j) \le r, \\ +1 & \frac{1}{D}\sum_{i=1}^D v_i(j) > r. \end{cases} \] Given a point in the support of any of the $Q_i$'s, namely a sequence $\mathbf{v} = (v_1,\ldots, v_{D})$ of $D$ vectors in $\{\pm 1\}^n$, define a mapping $B$ according to the outcome of $T$ steps of binary search on $\bar{\mathbf{v}}$ as follows: probe the $\frac{n}{2}$'th entry of $\bar{\mathbf{v}}$; if it is $+1$ then continue recursively with the first half of $\bar{\mathbf{v}}$; otherwise, continue recursively with the second half of $\bar{\mathbf{v}}$. Define the mapping $B=B(\mathbf{v})$ to be the entry that was probed at the $T$'th step. The events $E_j$ correspond to the $2^T$ different outcomes of $B$. These events are mutually disjoint by the assumption that $n > 2^{T+1}$.
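As a concrete illustration of the mapping $B$, the following informal Python sketch renders the threshold vector and the $T$-step binary search (function names and the tie-breaking at $r$ are ours; this is not part of the formal argument):

```python
def threshold_vector(vs, r):
    """Coordinate-wise majority of D vectors in {-1,+1}^n against level r."""
    D, n = len(vs), len(vs[0])
    return [(-1 if sum(v[j] for v in vs) / D <= r else +1) for j in range(n)]

def B(vbar, T):
    """Probe T entries of vbar by binary search; return the last probed index.

    A +1 entry sends the search to the first half (the hidden switch point
    lies at or before the probe); a -1 entry sends it to the second half.
    """
    lo, hi = 0, len(vbar)
    probe = None
    for _ in range(T):
        probe = (lo + hi) // 2
        if vbar[probe] == +1:
            hi = probe
        else:
            lo = probe + 1
    return probe

# Under P_i, coordinates j < i are -1 and coordinates j > i are +1 with high
# probability, so vbar concentrates on a step vector and B recovers i:
n, i = 16, 5
step = [-1] * i + [+1] * (n - i)   # idealized vbar for a draw from Q_i
assert B(step, n.bit_length() + 1) == i
```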
Notice that for any possible $i$ in the image of $B$, applying the binary search on a sufficiently large i.i.d.\ sample $\mathbf{v}$ from $P_i$ would yield $B(\mathbf{v}) = i$ with high probability. Quantitatively, a standard application of the Chernoff inequality and a union bound implies that for $\mathbf{v} \sim Q_i$ the event $E_i= \{\mathbf{v} : B(\bar{\mathbf{v}})=i\}$ has probability at least \[1 - T\exp\Bigl(-2\frac{1}{10^2m^2}D\Bigr) = 1-T\exp(-2\log T) \geq \frac{2}{3}.\] We claim that for all $j\leq n$, and $i$ in the image of $B$: \begin{equation}\label{eq:crucial} Q_j(E_i) \geq \frac{1}{2}\exp(-0.1D). \end{equation} This will finish the proof since the $2^T$ events are mutually disjoint, and therefore \begin{align*} 1 &\geq Q_j(\cup_i E_i)\\ &= \sum_i Q_j(E_i) \\ &\geq 2^T \cdot \frac{1}{2}e^{-0.1 D}\\ & = 2^{T-1} e^{-0.1 D}, \end{align*} however, $2^{T-1} e^{-0.1 D} > 1$ by the choice of $T,D$, which is a contradiction. Thus it remains to prove \Cref{eq:crucial}. This follows since $Q_i,Q_j$ are $(0.1D,D\delta)$-indistinguishable, \[\frac{2}{3} \leq Q_i(E_i) \leq \exp(0.1 D) Q_j(E_i) + D\delta,\] together with the choice of $\delta$, which guarantees that $\frac{2}{3}-D\delta \geq \frac{1}{2}$. \end{proof} \section{Privately learnable classes have finite Littlestone dimension}\label{sec:lsdim} We conclude the paper by deriving \Cref{thm:ADPimpliesLD}, which gives a lower bound of $\Omega(\log^* d)$ on the sample complexity of privately learning a class with Littlestone dimension $d$. \begin{proof}[Proof of \Cref{thm:ADPimpliesLD}] The proof is a direct corollary of \Cref{thm:shelah} and \Cref{thm:main}. Indeed, let $H$ be a class with Littlestone dimension $d$, and let $c= \lfloor \log d\rfloor$. By Item 1 of \Cref{thm:shelah}, there are $x_1,\ldots, x_c$ and $h_1,\ldots, h_c\in H$ such that $h_i(x_j) = +1$ if and only if $j\geq i$.
\Cref{thm:main} implies a lower bound of $m\geq \Omega(\log^* c) = \Omega(\log^* d)$ for any algorithm that learns $\{h_i : i\leq c\}$ with accuracy $(1/16,1/16)$ and privacy $(0.1,O(1/(m^2\log m)))$. \end{proof} \section{Conclusion} The main result of this paper is a lower bound on the sample complexity of private learning in terms of the Littlestone dimension. We conclude with an open problem. There are many mathematically interesting classes with finite Littlestone dimension, see e.g.\ \citep{chase2018model}. It is natural to ask whether the converse to our main result holds, i.e.\ whether every class with finite Littlestone dimension may be learned privately. \section*{Acknowledgements} We thank Raef Bassily and Mark Bun for many insightful discussions and suggestions regarding an earlier draft of this manuscript. We also thank the anonymous reviewers for helping to improve the presentation of this paper. \bibliographystyle{plainnat}
{ "timestamp": "2019-03-11T01:15:32", "yymm": "1806", "arxiv_id": "1806.00949", "language": "en", "url": "https://arxiv.org/abs/1806.00949" }
\section{Introduction} The Internet of Things (IoT) has been connecting an extraordinarily large number of smart devices to the Internet and driving the digital transformation of industry. Unfortunately, existing cloud-centric IoT systems have a number of significant disadvantages, such as high system maintenance costs, slow response time, and security and privacy concerns. Blockchain, a form of distributed, immutable and time-stamped ledger technology, has been perceived as a promising solution to address the aforementioned problems and to securely unlock the business and operational values of IoT. The combination of blockchain and IoT facilitates the sharing of services and resources, creates audit trails and enables automation of time-consuming workflows in various applications. While combining these two technologies creates new levels of trust, the decentralized network and public verifiability of blockchain transactions often do not provide the strong security and privacy properties required by users. During the past few years, quite a few cryptographic techniques such as ring signatures \cite{Riv:Sha:Tau}, stealth addresses \cite{bytecoin}, and zero-knowledge proofs \cite{Gol:Mic:Rac} have been employed to ensure transaction privacy for senders, receivers and transaction amounts in blockchains \cite{monero,ryn:tec,cryptonote,zcash}. This work focuses on the stealth address, a privacy protection technique for receivers of cryptocurrencies. A stealth address scheme requires the sender to create random one-time addresses for every transaction on behalf of the recipient so that different payments made to the same payee are unlinkable. The most basic stealth address scheme \cite{bytecoin} was first sketched by a Bitcoin Forum member named `ByteCoin' in 2011, and was then improved in \cite{cryptonote,Todd} by introducing a random ephemeral key pair and fixing the issue that the sender might change their mind and reverse the payment.
Later on, a dual-key enhancement \cite{shadow} to the previous stealth address schemes was implemented in 2014, which utilized two pairs of cryptographic keys so that designated third parties (e.g., auditors, proxy servers, read-only wallets, etc.) can remove the unlinkability of the stealth addresses without simultaneously being able to spend the payments. The dual-key stealth address protocol (\textsf{DKSAP}) provides strong anonymity for transaction receivers and enables them to receive unlinkable payments in practice. However, this approach does require blockchain nodes to constantly compute the purported destination addresses and find the corresponding matches in the blockchain. While this process works well for full-fledged computers, it poses new challenges for resource-constrained IoT devices. Considering the limited energy budget of smart devices, we propose a lightweight variant of \textsf{DKSAP}, namely \textsf{DKSAP-IoT}, which is based on a similar idea to TLS session resumption \cite{Die08,Sal08} and requires both the sender and receiver to keep track of the continuously updated pairwise keys for each payment session. \textsf{DKSAP-IoT} is able to improve the performance of \textsf{DKSAP} by at least 50\% and reduce the transaction size simultaneously, thereby providing an efficient solution to protecting the privacy of recipients in blockchain-based IoT systems. The rest of the paper is organized as follows: Section~\ref{sec:pre} gives a brief overview of elliptic curve cryptography, followed by the description of the dual-key stealth address protocol (\textsf{DKSAP}) in Section~\ref{sec:dksap}. In Section~\ref{sec:dksapiot}, we present \textsf{DKSAP-IoT}, a faster dual-key stealth address protocol for blockchain-based IoT systems. Section~\ref{sec:performance} analyzes the security and performance of the proposed scheme. Finally, Section~\ref{sec:conclusion} concludes this contribution.
\section{Preliminaries} \label{sec:pre} An elliptic curve $E$ over a field $\mathbb{F}$ is defined by the Weierstrass equation: \begin{displaymath} E(\mathbb{F}): y^2 + a_1xy + a_3y = x^3 + a_2x^2 + a_4x + a_6, \end{displaymath} where $a_1, a_2, a_3, a_4, a_6 \in \mathbb{F}$ and the curve discriminant $\Delta \neq 0$. The set of solutions $(x, y) \in \mathbb{F} \times \mathbb{F}$ satisfying the above equation, along with the identity element $\mathcal {O}$, or point-at-infinity, forms an abelian group under the addition operation $+$ (i.e., the chord-and-tangent group law). It is this abelian group that is used in the construction of elliptic curve cryptosystems. Given an elliptic curve point $G \in E(\mathbb{F})$ and an integer $k$, the scalar multiplication $kG$ is defined by the addition of the point $G$ to itself $k - 1$ times, i.e., \begin{displaymath} kG = \underbrace{G + G + \cdots + G}_\text{$k - 1$ additions}. \end{displaymath} The scalar multiplication is the fundamental operation in elliptic curve based cryptographic protocols such as the Elliptic Curve Diffie-Hellman (ECDH) key agreement \cite{SEC1} and the Elliptic Curve Digital Signature Algorithm (ECDSA) \cite{SEC1}. The security of elliptic curve cryptosystems is based on the difficulty of solving the Elliptic Curve Discrete Logarithm Problem (ECDLP) \cite{Kob87,Mil86}: finding the integer $k$ ($0 < k < n$) given a point $kG$, where $n$ is the group order of $E(\mathbb{F})$. Fifteen elliptic curves have been recommended by NIST in the FIPS 186-2 standard for the U.S. federal government \cite{FIPS}; these curves are also contained in the specification defined by the Standards for Efficient Cryptography Group (SECG) \cite{SEC2}. For example, the elliptic curve used in Bitcoin is called \textsf{secp256k1}, with parameters specified by SECG \cite{SEC2}. For more details about elliptic curve cryptography, the interested reader is referred to \cite{Han03}.
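To make the group law and the scalar multiplication concrete, the following Python sketch implements both over a small prime field. The curve parameters are toy values chosen only so the arithmetic is easy to follow; they are NOT the binary curve \textsf{sect283k1} used later in the paper, and production code uses standardized curves with constant-time algorithms.

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_p (illustrative only).
p, a, b = 97, 2, 3
O = None  # the point at infinity, i.e. the group identity

def add(P, Q):
    """Chord-and-tangent group law on the curve."""
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                    # P + (-P) = O
    if P == Q:                                      # tangent line: doubling
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                           # chord through P and Q
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    """kP via left-to-right double-and-add, instead of k - 1 additions."""
    R = O
    for bit in bin(k)[2:]:
        R = add(R, R)           # double for every bit of k
        if bit == "1":
            R = add(R, P)       # add when the bit is set
    return R
```

For example, $G = (3, 6)$ lies on this curve, and `scalar_mult(5, G)` agrees with adding $G$ to itself four times; double-and-add needs only $O(\log k)$ group operations rather than $k - 1$.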
\section{Dual-Key Stealth Address Protocol (\textsf{DKSAP})} \label{sec:dksap} The first full working implementation of \textsf{DKSAP} was announced by a developer known as rynomster/sdcoin in 2014 for \textbf{ShadowSend} \cite{shadow}, a capable, efficient and decentralized anonymous wallet solution. \textsf{DKSAP} has since been realized in a number of cryptocurrency systems, including \textbf{Monero} \cite{monero}, \textbf{Samourai Wallet} \cite{samourai} and \textbf{TokenPay} \cite{tokenpay}, just to name a few. The protocol takes advantage of two pairs of cryptographic keys, namely a `scan key' pair and a `spend key' pair, and computes a one-time payment address per transaction, as illustrated in Fig. 1. \begin{figure}[ht] \centering \includegraphics[width=0.97\textwidth]{Stealth.png} \caption{The Dual-Key Stealth Address Protocol (\textsf{DKSAP})} \label{fig:stealth} \end{figure} When a sender $A$ would like to send a transaction to a receiver $B$ in stealth mode \cite{shadow}, \textsf{DKSAP} works as follows: \begin{enumerate} \item The receiver $B$ has two pairs of private/public keys, $(v_B, V_B)$ and $(s_B, S_B)$, where $v_B$ and $s_B$ are called $B$'s `scan private key' and `spend private key', respectively, whereas $V_B = v_BG$ and $S_B = s_BG$ are the corresponding public keys. Note that neither $V_B$ nor $S_B$ ever appears in the blockchain; only the sender $A$ and the receiver $B$ know those keys. \item The sender $A$ generates an ephemeral key pair $(r_A, R_A)$ with $R_A = r_AG$ and $0 < r_A < n$, and sends $R_A$ to the receiver $B$. \item Both the sender $A$ and the receiver $B$ can perform the ECDH protocol to compute a shared secret: \begin{displaymath} c_{AB} = H(r_Av_BG) = H(r_AV_B) = H(v_BR_A), \end{displaymath} where $H(\cdot)$ is a cryptographic hash function. \item The sender $A$ can now generate the destination address of the receiver $B$ to which $A$ should send the payment: \begin{displaymath} T_A = c_{AB}G + S_B.
\end{displaymath} Note that the one-time destination address $T_A$ is publicly visible and appears on the blockchain. \item Depending on whether the wallet is encrypted, the receiver $B$ can compute the same destination address in two different ways: \begin{displaymath} T_A' = c_{AB}G + S_B = (c_{AB} + s_B)G. \end{displaymath} The corresponding ephemeral private key is \begin{displaymath} t_A' = c_{AB} + s_B, \end{displaymath} which can only be computed by the receiver $B$, thereby enabling $B$ to spend the payment received from $A$ later on. \end{enumerate} In \textsf{DKSAP}, the receiver $B$ needs to actively scan the blockchain transactions, calculate the purported destination address and compare it with the one in each block until a match is found. In the case that an auditor or a proxy server exists in the system, the receiver $B$ can share the `scan private key' $v_B$ and the `spend public key' $S_B$ with the auditor/proxy server so that those entities can scan the blockchain transactions on behalf of the receiver $B$. However, they are not able to compute the ephemeral private key $t_A'$ and spend the payment received from the sender $A$. \section{Faster Dual-Key Stealth Address Protocol for Internet of Things ($\textsf{DKSAP-IoT}$)} \label{sec:dksapiot} In this section, we describe a faster dual-key stealth address protocol called $\textsf{DKSAP-IoT}$, which is specifically designed for blockchain-based IoT systems. \subsection{Design Rationale} In $\textsf{DKSAP}$, the receiver $B$ scans the blockchain and calculates the purported destination address for each transaction, which requires the computation of two scalar multiplications: one random-point scalar multiplication with the ephemeral public key $R_A$ and one fixed-point scalar multiplication with the base point $G$.
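The \textsf{DKSAP} algebra just described can be replayed in a few lines. The sketch below is not the paper's C implementation: the elliptic-curve group is modeled multiplicatively, so $kG$ becomes $g^k \bmod p$ over a toy Mersenne-prime group, and \texttt{blake2s} stands in for the hash $H$; only the protocol structure is being illustrated.

```python
import hashlib
import secrets

# Toy multiplicative group standing in for the elliptic-curve group:
# the scalar multiplication kG is modeled as pow(g, k, p).
# Illustrative parameters only; the paper uses the curve sect283k1.
p = 2**127 - 1          # a Mersenne prime
g = 3

def H(x: int) -> int:
    """Hash a group element to a scalar (the paper uses BLAKE-256)."""
    digest = hashlib.blake2s(x.to_bytes(16, "big")).digest()
    return int.from_bytes(digest, "big") % p

# Receiver B's `scan' and `spend' key pairs.
v_B = secrets.randbelow(p - 2) + 1; V_B = pow(g, v_B, p)
s_B = secrets.randbelow(p - 2) + 1; S_B = pow(g, s_B, p)

# Sender A: fresh ephemeral key, shared secret, one-time address.
r_A = secrets.randbelow(p - 2) + 1; R_A = pow(g, r_A, p)
c_AB = H(pow(V_B, r_A, p))              # c_AB = H(r_A * V_B)
T_A = pow(g, c_AB, p) * S_B % p         # T_A  = c_AB*G + S_B

# Receiver B, while scanning: recompute the address from R_A alone.
T_A_check = pow(g, H(pow(R_A, v_B, p)), p) * S_B % p   # uses H(v_B * R_A)
assert T_A_check == T_A                 # match found: the payment is B's
t_A = c_AB + s_B                        # one-time spending key, known only to B
assert pow(g, t_A, p) == T_A
```

The receiver's per-transaction work is visible here: one "random-point" exponentiation with $R_A$ and one "fixed-base" exponentiation with $g$, exactly the two scalar multiplications that \textsf{DKSAP-IoT} sets out to avoid.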
For resource-constrained IoT devices, continuously computing two scalar multiplications for each blockchain transaction will dramatically drain the battery power of smart devices. Furthermore, including an ephemeral public key in each stealth payment increases the transaction size and incurs additional communication overhead for IoT devices. Motivated by the TLS session resumption techniques \cite{Die08,Sal08}, we aim to accelerate the process by which receivers find the matched destination address by extending the lifetime of the shared secret between senders and receivers. While both the session ID \cite{Die08} and the session ticket \cite{Sal08} are fixed in TLS for a given period of time between the client and server, in our case the sender still needs to generate a one-time destination address for each payment sent to the same recipient. To this end, both the sender and receiver apply a cryptographic hash function to their shared secret for the subsequent $N$ transactions before the sender initiates a shared secret update with a fresh ephemeral public key. This key evolving process is shown in Fig. 2 and leads to the design of $\textsf{DKSAP-IoT}$, a faster dual-key stealth address protocol for blockchain-based IoT systems, as detailed in the next subsection. \begin{figure}[ht] \centering \includegraphics[width=\textwidth,height=3.7cm]{Key_Evolution.png} \caption{The Key Evolving Process Between the Sender and Receiver in \textsf{DKSAP-IoT}} \label{fig:keyevolve} \vspace{-0.7cm} \end{figure} \subsection{$\textsf{DKSAP-IoT}$ Specification} $\textsf{DKSAP-IoT}$ is similar to $\textsf{DKSAP}$ except that, once the sender and receiver establish a shared secret using ECDH, the secret is continuously and pseudorandomly updated with a cryptographic hash function and used in their subsequent $N$ stealth transactions. Both the sender and receiver maintain the transaction state (i.e., shared secret, counter, etc.)
locally and update it after each stealth transaction. A high-level description of $\textsf{DKSAP-IoT}$ is depicted in Fig. 3. \begin{figure}[ht] \centering \includegraphics[width=\textwidth,height=9.5cm]{Stealth_IoT.png} \caption{The Dual-Key Stealth Address Protocol for IoT (\textsf{DKSAP-IoT})} \label{fig:stealthiot} \end{figure} In a blockchain-based IoT system, two smart devices $A$ and $B$ can process a transaction in stealth mode using $\textsf{DKSAP-IoT}$ as described below: \begin{enumerate} \item The receiver device $B$ is pre-installed with a `scan key' pair $(v_B, V_B)$ and a `spend key' pair $(s_B, S_B)$ as in $\textsf{DKSAP}$, where $V_B = v_BG$ and $S_B = s_BG$. \item For sending a transaction to $B$, the sender device $A$ first checks whether $B$ is in $A$'s receiver list. If $B$ is in the list and the counter value $cnt_B$ is less than $N$ (i.e., $A$ has communicated with $B$ before), $A$ retrieves the shared secret $h_{cnt_B}$ from the table and computes the destination public key: \begin{displaymath} T_A = h_{cnt_B}G + S_B. \end{displaymath} The stealth transaction that only contains the destination public key $T_A$ as well as the payment amount is then added into the blockchain. In the case that $B$ is not in the list or the counter value has reached $N$, $A$ generates a fresh ephemeral public key $R_A = r_AG$ and calculates the shared secret $h_0 = H(r_AV_B)$ as well as the destination public key as in $\textsf{DKSAP}$: \begin{displaymath} T_A = h_0G + S_B. \end{displaymath} Here the stealth transaction is composed of the ephemeral public key $R_A$, the destination public key $T_A$ and the payment amount. After putting the transaction on the blockchain, the sender $A$ initializes the ephemeral public key counter $cnt_B = 0$. In both cases, the counter $cnt_B$ and the shared secret $h_{cnt_B}$ are updated as well: \begin{displaymath} cnt_B \leftarrow cnt_B + 1,\;\;h_{cnt_B} \leftarrow H(h_{cnt_B - 1}).
\end{displaymath} Note that only the counter $cnt_B$ is updated when it reaches $N$. \item Upon receiving a stealth transaction, the receiver $B$ first checks whether the transaction contains an ephemeral public key $R_A$. If so, $B$ computes the purported shared secret and destination public key: \begin{displaymath} h_0 = H(v_BR_A),\;\;T_A' = h_0G + S_B. \end{displaymath} If the purported destination public key $T_A'$ matches the received one (i.e., $T_A' = T_A$), $B$ accepts the payment from $A$ and computes the corresponding private key for spending: \begin{displaymath} t_A' = h_0 + s_B. \end{displaymath} $B$ also sets the ephemeral public key counter $cnt_A$ to $0$, updates the counter and shared secret, and precomputes the expected destination key pair for the next stealth transaction from $A$: \begin{eqnarray} cnt_A \leftarrow cnt_A + 1,\;\;h_{cnt_A} \leftarrow H(h_{cnt_A - 1}),\\ T_A' = h_{cnt_A}G + S_B,\;\;t_A' = h_{cnt_A} + s_B. \end{eqnarray} When $B$ receives a stealth transaction without an ephemeral public key, $B$ checks whether the received destination public key $T_A$ is contained in its list of senders. If a match is found and the value of the counter $cnt_A$ is less than or equal to $N$, $B$ retrieves the corresponding destination private key $t_A'$ as the spending key and updates the transaction state information according to equations (1) and (2). Again, only the counter $cnt_A$ is updated when it reaches $N$. \end{enumerate} In \textsf{DKSAP-IoT}, stealth transactions are divided into two categories depending on whether ephemeral public keys are included in the blocks. For the first stealth transaction between two blockchain nodes, the receiver needs to conduct the same operations as in \textsf{DKSAP}, followed by a more efficient preparation process for the next transaction. For the subsequent $N$ stealth transactions between the same peers, generating a fresh ephemeral key is no longer needed on the sender side.
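The lockstep secret evolution that both peers perform can be sketched as follows. This is a simplification of the protocol state machine: \texttt{blake2s} stands in for BLAKE-256, and the ECDH-derived $h_0$ is replaced by a placeholder byte string.

```python
import hashlib

N = 10  # lifetime of a shared secret, measured in stealth transactions

def H(data: bytes) -> bytes:
    return hashlib.blake2s(data).digest()   # stand-in for BLAKE-256

class PeerState:
    """Per-peer state (shared secret + counter) kept by sender and receiver."""
    def __init__(self, h0: bytes):
        self.h = h0       # current shared secret h_cnt
        self.cnt = 0      # stealth transactions done under this ECDH secret

    def next_secret(self) -> bytes:
        """Return the secret for the next stealth transaction; evolve state."""
        if self.cnt >= N:
            raise RuntimeError("lifetime exhausted: run ECDH for a fresh h_0")
        secret = self.h
        self.cnt += 1
        if self.cnt < N:              # at cnt == N only the counter advances
            self.h = H(self.h)
        return secret

# Both sides start from the same ECDH-derived h_0 and stay in lockstep.
h0 = H(b"placeholder for the ECDH shared point r_A * v_B * G")
sender, receiver = PeerState(h0), PeerState(h0)
for _ in range(N):
    assert sender.next_secret() == receiver.next_secret()
```

Because each $h_i$ is derived deterministically from $h_0$, the two tables never need to be synchronized over the network; a message loss would desynchronize the counters, which is why the sender falls back to a fresh ephemeral key when no match is found.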
Meanwhile, the receiver only performs a fast table look-up as well as the transaction state updates, which enables the receiver to quickly identify the transactions destined for it. Given the `scan private key' $v_B$ and the `spend public key' $S_B$, the auditor/proxy server is able to calculate all the destination addresses for the receiver $B$, thereby tracking or forwarding all the transactions to $B$. However, neither the auditor nor the proxy server can derive the corresponding ephemeral private keys and spend the funds. \section{Security Analysis and Performance Evaluation} \label{sec:performance} In this section, we analyze the security and performance of \textsf{DKSAP-IoT} and report its implementation on a Raspberry Pi 3 Model B, a good representative of moderately resource-constrained embedded devices. \subsection{Security Analysis} \textsf{DKSAP-IoT} follows the same threat model as \textsf{DKSAP}, in which the adversary aims to determine the corresponding recipients by observing the transactions on the blockchain. \textsf{DKSAP-IoT} provides the following security properties: \begin{itemize} \item \textbf{Receiver Anonymity}: \textsf{DKSAP-IoT} offers strong anonymity for receivers and ensures the unlinkability of payments received by the same payee. For each payment to a stealth address, the sender computes a new normal address $T_A$ on which the funds ought to be received. Given two destination addresses $T_A^{(i)} = h_iG + S_B$ and $T_A^{(j)} = h_jG + S_B$ ($0 \leq i, j \leq N$) for the same receiver $B$, the adversary is not able to link them thanks to the difficulty of the ECDLP. \item \textbf{Forward Privacy}: \textsf{DKSAP-IoT} provides forward secrecy due to the usage of a cryptographic hash function for updating the shared secret continuously for $N$ stealth transactions.
If the adversary compromises the device and obtains $h_l$ for the $l^{\textrm{th}}$ ($0 < l < N$) stealth transaction, he/she is still not able to link previous transactions because of the one-wayness of the hash function. \item \textbf{Stealth Transaction Hiding}: In \textsf{DKSAP}, transactions in the stealth mode can be easily distinguished from regular ones in the blockchain due to the presence of ephemeral public keys, thereby resulting in some loss of privacy. However, the ephemeral public key only needs to be updated every $N$ stealth transactions for two communication peers in \textsf{DKSAP-IoT}, and the stealth transactions in between are not distinguishable from regular ones. \end{itemize} Since both the sender and receiver need to locally maintain the state information for their peers in \textsf{DKSAP-IoT} (see Fig. 3), these tables, together with the device private keys, should be stored in encrypted form to mitigate the risk that IoT devices might get compromised. Considering that a hardware AES engine is widely available on many IoT devices, the computational overhead for encrypting/decrypting this sensitive information is quite small. \subsection{Performance Evaluation} \subsubsection{Computational and Communication Overhead.} We assume that a sender is going to send $N$ stealth transactions to a receiver over the blockchain. Let \textsf{RP}, \textsf{FP} and \textsf{H} denote the computation of a random-point scalar multiplication, a fixed-point scalar multiplication and a cryptographic hash function, respectively. Table 1 gives a comparison between \textsf{DKSAP} and \textsf{DKSAP-IoT} in terms of their computational overhead.
\vspace{-0.5cm} \begin{table}[h] \centering \caption{Computational Overhead of \textsf{DKSAP} and \textsf{DKSAP-IoT} for Sending $N$ Stealth Transactions between Two Blockchain Nodes} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Scheme} & \multicolumn{3}{c|}{Sender} & \multicolumn{3}{c|}{Receiver} \\ \cline{2-7} & \#\textsf{RP} & \#\textsf{FP} & \#\textsf{H} & \#\textsf{RP} & \#\textsf{FP} & \#\textsf{H}\\ \hline \textsf{DKSAP} & $N$ & $2N$ & $N$ & $N$ & $N$ & $N$ \\ \hline \textsf{DKSAP-IoT} & $1$ & $N + 1$ & $N$ & $1$ & $N$ & $N$ \\ \hline \end{tabular} \end{table} From Table 1, one can see that \textsf{DKSAP-IoT} reduces the number of \textsf{RP} and \textsf{FP} by $N - 1$ each on the sender side when compared to \textsf{DKSAP}. Moreover, \textsf{DKSAP-IoT} also saves $N - 1$ \textsf{RP} on the receiver side. With respect to the communication overhead, the sender in \textsf{DKSAP-IoT} only needs to include a fresh ephemeral public key in the first stealth transaction, thereby saving the transmission of $N - 1$ elliptic curve points. \subsubsection{Software Implementation.} To validate the performance improvements of \textsf{DKSAP-IoT}, we implemented an optimized elliptic curve cryptography library, namely \textsf{libsect283k1}\footnote{\textsf{libsect283k1} will be integrated into the IoTeX testnet and mainnet as part of the \textsf{iotex-core} (see \url{https://github.com/iotexproject/iotex-core}).}, using the $283$-bit binary Koblitz curve specified in~\cite{SEC2}: \begin{displaymath} E(\mathbb{F}_{2^{283}}): y^2 + xy = x^3 + 1, \end{displaymath} where the binary field $\mathbb{F}_{2^{283}}$ is defined by $f(x) = x^{283} + x^{12} + x^7 + x^5 + 1$. The library was written in C and compiled using the GNU C Compiler (GCC).
A number of efficient techniques, such as the lambda coordinates~\cite{Oli:Lop:Ara}, the window $\tau$NAF method~\cite{Han03} and pre-computation~\cite{Yu:Mus:Xu}, have been utilized to optimize the performance of the \textsf{libsect283k1} library. Moreover, \textsf{BLAKE-256}~\cite{Saa:Aum} is chosen as the hash function in our library due to its high performance across multiple computing platforms. When running our library on a Raspberry Pi 3 Model B, the timings for the computation of \textsf{RP}, \textsf{FP} and \textsf{H} are as shown in Table 2. \vspace{-0.5cm} \begin{table} \centering \caption{Timings for Computing \textsf{RP}, \textsf{FP} and \textsf{H} on a Raspberry Pi 3 Model B} \begin{tabular}{|c|c|c|} \hline \textsf{RP} & \textsf{FP} & \textsf{H} \\ \hline $3.67$ ms & $3.12$ ms & $5.26$ $\mu$s \\ \hline \end{tabular} \end{table} Note that the computation of the hash function is about three orders of magnitude faster than that of the scalar multiplication over an elliptic curve. Therefore, using the hash function to update the shared secret and extend its lifetime brings significant performance benefits for IoT devices. Fig. 4 compares the performance of \textsf{DKSAP} and \textsf{DKSAP-IoT} on both the sender and receiver sides for sending $N = 10, 20$ and $30$ stealth transactions, respectively. \begin{figure}[ht] \centering \includegraphics[width=9cm, height=12cm]{Performance.png} \caption{Performance Comparison of \textsf{DKSAP} and \textsf{DKSAP-IoT} for Sending $N = 10, 20$ and $30$ Stealth Transactions} \label{fig:performance} \end{figure} From Fig. 4, one notices that the overall cost of \textsf{DKSAP-IoT} is less than 50\% of that of \textsf{DKSAP}, mainly because extending the lifetime of the shared secret with a cryptographic hash function enables both the sender and receiver to reduce the number of \textsf{RP} from $N$ to $1$.
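As a back-of-the-envelope check of this claim (our own estimate, not the paper's measurement), the operation counts of Table 1 can be combined with the Table 2 timings; table look-ups and state I/O are ignored.

```python
# Raspberry Pi 3 timings from Table 2, all converted to milliseconds.
RP, FP, H = 3.67, 3.12, 5.26e-3

def cost(n_rp, n_fp, n_h):
    """Total time in ms for the given operation counts."""
    return n_rp * RP + n_fp * FP + n_h * H

for N in (10, 20, 30):
    dksap = cost(N, 2 * N, N) + cost(N, N, N)          # sender + receiver
    dksap_iot = cost(1, N + 1, N) + cost(1, N, N)
    # prints roughly 44%, 41% and 39%, consistent with the < 50% claim
    print(f"N={N}: DKSAP-IoT / DKSAP = {dksap_iot / dksap:.0%}")
```

The ratio improves with $N$, since the single remaining \textsf{RP} and the extra \textsf{FP} are amortized over more transactions.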
Moreover, the computation of the hash function is almost negligible compared to the scalar multiplication over the elliptic curve. In addition, \textsf{DKSAP-IoT} can save the transmission of $72\cdot(N - 1)$ bytes for $N$ stealth transactions. For resource-constrained IoT devices, the improved performance and reduced transaction size of \textsf{DKSAP-IoT} lead to significant power savings and extended battery life. \section{Conclusion} \label{sec:conclusion} In this paper, we propose an efficient dual-key stealth address protocol, \textsf{DKSAP-IoT}, for blockchain-based IoT systems. Motivated by the TLS session resumption techniques, we apply a cryptographic hash function to continuously update a shared secret between two communication peers and extend the lifetime of this shared secret for $N$ additional transactions. Both the sender and receiver need to maintain the state information locally in order to keep track of the pairwise session keys. The security analysis shows that \textsf{DKSAP-IoT} provides receiver anonymity and forward privacy. By implementing \textsf{DKSAP-IoT} on a Raspberry Pi 3 Model B, we demonstrate that \textsf{DKSAP-IoT} achieves at least a 50\% performance improvement over the original \textsf{DKSAP}, in addition to a significant reduction of the transaction size in the block. Our work is another logical step towards providing strong privacy protection for blockchain-based IoT systems. \section*{Acknowledgement} The author would like to thank the IoTeX team for the great support during the course of writing this paper and the anonymous reviewers for their insightful comments.
{ "timestamp": "2018-06-05T02:14:08", "yymm": "1806", "arxiv_id": "1806.00951", "language": "en", "url": "https://arxiv.org/abs/1806.00951" }
\section{Introduction} Crowdsourcing platforms such as Mechanical Turk (MTurk) provide a new avenue for scalable, low-cost collection of relevance judgments for constructing Information Retrieval (IR) test collections \cite{alonso2009can}. However, the quality of crowd judgments appears more variable than that obtained with the traditional practice of using in-house, trusted personnel. Consequently, researchers have explored various techniques for quality assurance with crowdsourcing, including task designs that discourage cheating \cite{kittur2008}, predicting the correctness of future contributions~\cite{jung2014predicting}, and requesting rationales behind judgments~\cite{mcdonnell2016relevant}. Prior work has also studied whether crowd-workers are suitable for particular IR tasks, such as domain-specific search~\cite{clough2013examining} and e-discovery~\cite{MEET:MEET14504503126}. In this work, we hypothesize that crowd judges may be better suited to judge some documents than others for relevance. If we could effectively distinguish such documents, then we could route only the appropriate documents to the crowd for judging, reserving our more limited and expensive trusted judges for the remaining documents \cite{Nguyen15-hcomp}. Since our goal is to utilize trusted judges only for the documents that we believe the crowd is likely to label incorrectly, we investigate three broad factors which may correlate with agreement in judging: relevance category, document rankings, and topical variance. This builds on a long history of research on disagreement in relevance judging \cite{saracevic2008effects,lesk1969interactive,funk1983indexing,burgin1992variations,Voorhees2000697,sormunen2002liberal,bailey2008relevance,carterette2010effect,kazai2012analysis,chouldechova2013differences,al2014qualitative}. Following this, we evaluate two collaborative judging approaches.
The first {\em oracle} approach uses knowledge of the disagreement for each topic to prioritize high-disagreement topics for trusted judging. The second, practical approach focuses on document importance rather than expected disagreement, using expensive trusted judges to judge those highly ranked documents which most greatly impact rank-based evaluation metrics. In particular, we use the weighting function of the statAP method~\cite{pavlu2007practical} to prioritize highly-ranked documents for trusted judging. We compare both approaches to a random document ordering baseline. We report experiments using two TREC test collections for which both crowd and NIST (i.e., trusted) judgments are available. Results show that collaborative judging offers a promising method to leverage the crowd in combination with trusted judges for accurate and affordable building of IR test collections. \section{Disagreement in Judgments} \label{sec_disagreement} To better understand on which topic-document pairs we might expect to see judging disagreement, we investigate three broad factors which may correlate with such disagreement: relevance category, document rankings, and topical variance. \subsection{Test Collections} We use two test collections to investigate judging disagreement between crowd-workers and NIST assessors, and to conduct rank correlation experiments using collaborative judging. \begin{table*}[!htb] \centering \caption{Confusion Matrices for Crowd (Cr) vs. NIST Judgments in MQ'09 and WT'14 Test Collections. `R' represents relevant judgments and `NR' represents not-relevant judgments. Bold indicates agreement.
} \label{tab_confusion_matrices} \begin{tabularx}{0.9\textwidth}{lYY|YY||Y@{\extracolsep{12pt}}YY|YY||Y@{\extracolsep{12pt}}} \hline\noalign{\smallskip} & \multicolumn{5}{c}{\bf \textit{MQ'09}} & \multicolumn{5}{c}{\bf \textit{WT'14} } \\ \cline{2-6}\cline{7-11}\noalign{\smallskip} & \multicolumn{2}{c}{\bf \textit{ Majority Voting}} & \multicolumn{2}{c}{\bf \textit{Dawid-Skene}} & & \multicolumn{2}{c}{\bf \textit{ Majority Voting}} & \multicolumn{2}{c}{\bf \textit{Dawid-Skene}} \\ & \multicolumn{2}{c}{\bf 65\%} & \multicolumn{2}{c}{\bf 70\%} & & \multicolumn{2}{c}{\bf 80\%} & \multicolumn{2}{c}{\bf 81\%} \\ \cline{2-3}\cline{4-5}\cline{7-10}\noalign{\smallskip} & Cr-R&Cr-NR &Cr-R & Cr-NR& \bf Total & Cr-R&Cr-NR&Cr-R&Cr-NR & \bf Total \\ \hline NIST-R & \textbf{44\%} &10\% &\textbf{41\%} &13\% &54\% &\textbf{39\%} &6\% & \textbf{37\%} &8\% &45\% \\ NIST-NR & 25\% &\textbf{21\%} &17\% &\textbf{29\%} &46\% &14\% &\textbf{41\%} &11\% & \textbf{44\%} &55\% \\ \hline \bf Total & 69\% &31\% &58\% &42\% &100\% &53\% &47\% &48\% &52\% &100\% \\ \noalign{\smallskip}\hline \end{tabularx} \end{table*} \textbf{Million Query Track 2009 (MQ'09)} \cite{carterette2009million}. 100 MQ'09 topics, as well as the ClueWeb09 collection\footnote{\url{http://lemurproject.org/clueweb09/}}, were reused in the TREC Relevance Feedback (RF'09) Track~\cite{Buckley10-notebook}. Because RF'09 participating systems retrieved additional documents not judged for MQ'09, additional relevance judgments were collected for the track via MTurk. These judgments were also used for the subsequent TREC Crowdsourcing Track\footnote{\url{https://sites.google.com/site/treccrowd/}} and made freely available. Beyond judging new documents, 3,277 documents already judged by NIST were also re-judged as part of quality assurance during data collection. As a result, we have 20,535 crowd judgments for which we can measure agreement with NIST. We also evaluate 35 system runs submitted to MQ'09 using these crowd judgments vs.
NIST judgments, measuring rank correlation. \textbf{Web Track 2014 (WT'14)} \cite{collins2015trec}. Recently, a new crowd-judgment collection has been released~\cite{tanyahcomp2018,sigir18}. 100 NIST-judged documents for each of 50 WT'14 topics were selected using the statAP~\cite{pavlu2007practical} sampling method. MTurk judgments for these documents were collected using the rationale method of \cite{mcdonnell2016relevant,mcdonnell2017ijcai}. In total, 25,099 MTurk judgments for 5,000 documents were collected (i.e., roughly 5 judgments per document). We evaluate 29 system runs submitted to WT'14 using these crowd judgments vs.\ NIST judgments, measuring rank correlation. For both test collections, we reduce graded relevance judgments to binary labels and report two different methods for aggregating them: majority voting (MV) and Dawid-Skene (DS)~\cite{dawid1979maximum}. Whereas MV performs unweighted voting, DS performs weighted voting based on unsupervised estimates of individual reliability. Agreement statistics in {\bf Table \ref{tab_confusion_matrices}} show the WT'14 crowd is 10-15\% more accurate than the MQ'09 crowd, with respect to NIST judgments as the ``gold standard'' (80\% vs.\ 65\% in MV and 81\% vs.\ 70\% in DS). We also see that DS performs much better on MQ'09 (where crowd assessor reliability is more variable) than on WT'14, whose crowd demonstrates less variability. \subsection{Agreement vs.\ Relevance Category} Is there more disagreement on documents judged by NIST to be relevant or non-relevant? We might expect higher agreement for clear-cut cases of relevance/non-relevance, and higher disagreement for boundary cases \cite{lesk1969interactive,Voorhees2000697}. {\bf Table~\ref{tab_confusion_matrices}} shows confusion matrices for both test collections and aggregation methods.
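For concreteness, the two aggregation schemes compared above can be sketched as follows. This is a minimal illustration of our own (function names are ours, and the Dawid-Skene variant is simplified to a single per-worker accuracy rather than a full confusion matrix), not the implementation used in the experiments:

```python
from collections import defaultdict

def majority_vote(labels):
    """labels: doc -> list of binary worker labels (1 = relevant).
    Ties count as relevant."""
    return {d: int(sum(v) * 2 >= len(v)) for d, v in labels.items()}

def dawid_skene(judgments, iters=20):
    """judgments: list of (worker, doc, label) triples with binary labels.
    Returns doc -> P(relevant). Simplified EM that estimates one accuracy
    per worker instead of a full per-worker confusion matrix."""
    docs = {d for _, d, _ in judgments}
    # Initialize soft labels with the fraction of 'relevant' votes per doc.
    p = {d: sum(l for _, dd, l in judgments if dd == d) /
            sum(1 for _, dd, _ in judgments if dd == d) for d in docs}
    for _ in range(iters):
        # M-step: worker accuracy = expected fraction of agreements.
        totals = defaultdict(lambda: [0.0, 0])
        for w, d, l in judgments:
            totals[w][0] += p[d] if l == 1 else 1 - p[d]
            totals[w][1] += 1
        acc = {w: s / n for w, (s, n) in totals.items()}
        # E-step: re-estimate each doc's label from reliability-weighted votes.
        for d in docs:
            pr_rel = pr_not = 1.0
            for w, dd, l in judgments:
                if dd != d:
                    continue
            # clamp to avoid degenerate 0/1 accuracies
                a = min(max(acc[w], 1e-6), 1 - 1e-6)
                pr_rel *= a if l == 1 else 1 - a
                pr_not *= 1 - a if l == 1 else a
            p[d] = pr_rel / (pr_rel + pr_not)
    return p
```

The full Dawid-Skene model additionally estimates class priors and per-class error rates; this stripped-down variant only conveys the idea of reliability-weighted voting that distinguishes DS from MV.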
For all settings, we observe that crowd judgments show higher agreement with NIST assessors on NIST-judged relevant documents than on non-relevant ones. Assuming DS aggregation, crowd judgments agree with NIST on $\frac{41\%}{54\%}=76\%$ (MQ'09) and $\frac{37\%}{45\%}=82\%$ (WT'14) of NIST-judged relevant documents. On non-relevant documents, crowd judgments agree with NIST to a lesser degree: $\frac{29\%}{46\%}=63\%$ (MQ'09) and $\frac{44\%}{55\%}=80\%$ (WT'14). Higher agreement on NIST-judged relevant documents suggests that, when in doubt which way to lean, non-expert judges may be more liberal in judging documents as relevant \cite{sormunen2002liberal}. \begin{figure*}[!htb] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=0.9\textwidth]{rank_m09_2.png} \caption{MQ'09} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=0.9\textwidth]{rank_w14_2.png} \caption{WT'14} \end{subfigure} \caption{Accuracy of crowd judgments vs.\ average document rank. Bars show accuracy and are shaded: lower, darker regions represent the ratio of true positives, while higher, lighter regions represent the ratio of true negatives. The black line shows the number of judgments per bin.} \label{fig_average_rank} \end{figure*} \subsection{Agreement vs.\ Document Rank} We next explore how disagreement correlates with document rank. Following the same logic discussed above regarding document relevance, we might expect clearly relevant documents to be retrieved at very high ranks, clearly non-relevant documents to be retrieved at low ranks, and borderline documents (leading to judging disagreement) to be retrieved at more middling ranks \cite{lesk1969interactive,Voorhees2000697}. To inspect this, we compute the average rank of each judged document across all submitted runs for each track.
We treat all documents retrieved at rank $\geq 1000$ as having rank 1000, then bin documents into 10 groups based on their average ranks, using a fixed interval size of 100 ranks. Finally, we compute agreement statistics for each bin for NIST vs.\ crowd workers. {\bf Figure~\ref{fig_average_rank}} shows the accuracy of the 10 groups over the two collections. In MQ'09, under both aggregation methods, the accuracy for the first group (i.e., average document rank $\leq 100$) is higher than the accuracy when the average document rank is between 200 and 500. For WT'14, we observe a clearer pattern: accuracy is noticeably higher in the first group, lowest in the second group, and then increases gradually as the average document rank increases. \subsection{Agreement Across Topics} \label{sec_topic} Because an individual with a given information need knows best what they are and are not looking for \cite{chouldechova2013differences}, NIST typically utilizes the same individual to both develop a topic (description) and perform judging for that topic.
While written topic descriptions are useful, they are never complete, and so secondary assessors (be they NIST \cite{Voorhees2000697} or crowd \cite{alonso2009can}) have less information to go on when judging relevance of someone else's topic. This naturally leads to disagreement. While even NIST assessors are known to often disagree~\cite{Voorhees2000697}, crowd judging introduces further variability. For example, former intelligence analysts working as NIST assessors may share a common geographic, cultural, and knowledge background, suggesting a consistent bias. Crowd workers may be far more diverse. {\bf Figure~\ref{fig_accuracy_per_topic}} shows the distribution of judging agreement across topics for MQ'09 and WT'14. With MV aggregation, the standard deviation across topics is 0.13 (MQ'09) and 0.17 (WT'14). However, with DS aggregation, the standard deviation slightly tightens to 0.11 (MQ'09) and 0.15 (WT'14). Interestingly, the standard deviation is actually higher in WT'14, despite its crowd judgments having higher accuracy. Regardless, we clearly do observe large variability in judging agreement across different topics in both collections, suggesting the importance of modeling topical factors in order to accurately predict assessor disagreement \cite{Webber:2012:AAD:2396761.2396781,Chandar:2013:DFP:2484028.2484161,Sheshadri-thesis14}. \begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=0.9\textwidth]{qid_m09_2.png} \caption{MQ'09} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=0.9\textwidth]{qid_w14_2.png} \caption{WT'14} \end{subfigure} \caption{Distribution of crowd judging agreement with NIST across topics for MQ'09 and WT'14. 
Topics are ordered left-to-right by increasing Majority Voting aggregation accuracy.} \label{fig_accuracy_per_topic} \end{figure*} \section{Collaborative Judging} \label{sec_collaborative_judging} Thus far we have seen (1) greater agreement on NIST-judged relevant documents, potentially due to layman crowd judges being permissive in judging relevance when in doubt \cite{sormunen2002liberal}; (2) greater agreement on documents ranked very high or low \cite{lesk1969interactive,Voorhees2000697}; and (3) high variance in agreement across topics. Based on this, we evaluate two simple methods for collaborative judging: a practical method that prioritizes highly-ranked documents for trusted judging because of their significant impact on ranking metrics, and an oracle method that prioritizes documents based on knowledge of disagreement for each topic. \subsection{Descending Rank Based Ordering (DRBO)} Documents at higher ranks more significantly impact rank-based evaluation metrics of IR system performance (e.g., MAP). Therefore, an intuitive method for collaborative judging is to assign these more important documents to trusted judges. Specifically, we calculate the weight of each document-topic pair using the statAP~\cite{pavlu2007practical} weighting function, which assigns higher scores to documents at higher ranks. We then rank the documents of each topic by weight in descending order. The first $K$ documents of each topic are assigned to the trusted judges, while the rest are judged by crowd workers. \subsection{Oracle Topic-Based Scheduling (Oracle TBS)} As discussed in Section~\ref{sec_topic}, judging disagreement exhibits high variance across topics. Therefore, the quality of judgments might be improved by assigning to trusted judges those topics for which the most disagreement is expected.
However, it is challenging to predict which topics are easier for crowd workers to judge \cite{Webber:2012:AAD:2396761.2396781,Chandar:2013:DFP:2484028.2484161,Sheshadri-thesis14}. We leave this for future work and instead pursue a preliminary oracle model as a proof-of-concept, which perfectly predicts judging agreement for each topic. Using this oracle, we prioritize documents for expert judging starting with the lowest-agreement topics first. Documents for the same topic are ordered randomly. \begin{figure*}[ht] \centering \begin{subfigure}{7.0cm} \includegraphics[width=7.0cm]{mv_m09_2.png} \caption{MQ'09 - Majority Voting} \end{subfigure} \begin{subfigure}{7.0cm} \includegraphics[width=7.0cm]{ds_m09_2.png} \caption{MQ'09 - Dawid-Skene} \end{subfigure} \begin{subfigure}{7.0cm} \includegraphics[width=7.0cm]{mv_w14_2.png} \caption{WT'14 - Majority Voting} \end{subfigure} \begin{subfigure}{7.0cm} \includegraphics[width=7.0cm]{ds_w14_2.png} \caption{WT'14 - Dawid-Skene} \end{subfigure} \caption{Comparing cost vs.\ correlation of Oracle TBS, DRBO, and Random methods for collaborative NIST and crowd judging.} \label{fig_collaborative_judging} \vspace{-15pt} \end{figure*} \section{Experiments}\label{sec_experiments} In this section, we report experiments comparing the proposed collaborative judging methods on our test collections. As a baseline, we also report a \emph{random} method, which assigns $K$ randomly-selected documents per topic to the trusted personnel. Because the number of judged documents per topic varies greatly, we vary the ratio of NIST judgments used per topic instead of using a fixed constant.
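The per-topic split underlying DRBO can be sketched as below. This is our own minimal illustration (function name and data layout are hypothetical), assuming statAP-style weights have already been computed for each document-topic pair:

```python
def split_judging(doc_weights, trusted_ratio):
    """doc_weights: dict topic -> {doc: weight}; higher weight = higher rank.
    Returns (trusted, crowd) document assignments per topic (DRBO-style)."""
    trusted, crowd = {}, {}
    for topic, weights in doc_weights.items():
        # Rank documents of this topic by weight, descending.
        ranked = sorted(weights, key=weights.get, reverse=True)
        # Top fraction goes to trusted judges, the rest to crowd workers.
        k = round(len(ranked) * trusted_ratio)
        trusted[topic] = ranked[:k]
        crowd[topic] = ranked[k:]
    return trusted, crowd
```

The random baseline corresponds to shuffling `ranked` instead of sorting it, while the oracle ordering would sort topics (rather than documents) by known disagreement.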
Given our incomplete judgments, we evaluate IR systems using \emph{bpref} \cite{buckley2004retrieval}, which ignores documents for which no judgment is available. We adopt Soboroff's corrected bpref formulation \cite{soboroff2006dynamic}, as implemented in {\tt trec\_eval}\footnote{\url{http://trec.nist.gov/trec_eval/}}. We assume that the ground-truth ranking of systems comes from ranking systems by their bpref scores using NIST judgments. We calculate Kendall's $\tau$ to measure correlation between the ground-truth ranking and the ranking induced by collaborative judging. We run the random method 50 times and the Oracle TBS method 10 times, and report average Kendall's $\tau$. By convention, $\tau=0.9$ is assumed to constitute an acceptable correlation level for reliable IR evaluation \cite{Voorhees2000697}. Results are shown in \textbf{Figure~\ref{fig_collaborative_judging}}. We report the area under the curve (AUC) in the figure as a simple summary of the results. The results suggest several observations. Firstly, the Oracle TBS method achieves the best overall results, reaching $\tau=0.9$ by assigning 55\% (MQ'09) and 15-20\% (WT'14) of the judgments to the trusted personnel. This suggests that if we could predict which topics are more likely to exhibit judging disagreement, we might maintain NIST-quality judging at lower cost through collaborative judging. Secondly, DRBO consistently outperforms the random baseline in WT'14, but not in MQ'09. This may be due to lower-quality crowd judgments in MQ'09 (see Table~\ref{tab_confusion_matrices}). With higher-quality crowd judgments, however, DRBO appears to be a simple and effective method. Overall, our results suggest that collaborative judging is a promising method for efficiently building high-quality test collections.
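The rank correlation used above, Kendall's $\tau$, compares two system rankings by pairwise agreement. A minimal sketch (the tie-free $\tau_a$ form, which suffices when bpref scores are distinct; the function name is ours):

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau between two score dicts over the same systems
    (no tie correction: tied pairs count as neither concordant nor
    discordant but still appear in the denominator)."""
    systems = list(scores_a)
    concordant = discordant = 0
    for s, t in combinations(systems, 2):
        da = scores_a[s] - scores_a[t]
        db = scores_b[s] - scores_b[t]
        if da * db > 0:       # pair ordered the same way in both rankings
            concordant += 1
        elif da * db < 0:     # pair ordered oppositely
            discordant += 1
    n_pairs = len(systems) * (len(systems) - 1) / 2
    return (concordant - discordant) / n_pairs
```

Identical rankings yield $\tau=1$, fully reversed rankings $\tau=-1$, which is why $\tau=0.9$ is a demanding agreement threshold.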
\section{Conclusion and Future Work} In this work, we investigated when crowd workers disagree with NIST assessors and proposed one oracle and one practical collaborative judging approach. Based on our experiments on two different test collections, we offer several observations. First, higher agreement with NIST on documents NIST judges to be relevant suggests that, when in doubt, layman crowd judges may err on the side of judging documents as relevant~\cite{sormunen2002liberal}. Second, we reaffirm prior work's finding of greater judging agreement at very high and low ranks, i.e., on documents whose relevance is not borderline \cite{lesk1969interactive,Voorhees2000697}. Third, we see high variance in agreement across topics, further confirming judging differences between primary and secondary assessors~\cite{Voorhees2000697,chouldechova2013differences,al2014qualitative}. Regarding collaborative judging, the oracle predicting assessor disagreement achieved the highest rank correlation, suggesting that a model which could effectively predict judging disagreement \cite{Chandar:2013:DFP:2484028.2484161,Webber:2012:AAD:2396761.2396781,Sheshadri-thesis14} could be usefully applied toward collaborative judging. Alternatively, the DRBO approach proposed here based on statAP \cite{pavlu2007practical} often outperformed the random baseline across collections and aggregation algorithms, especially with higher-quality crowd judgments, suggesting its promise particularly when crowd judgments are collected carefully. In the future, we plan to use and extend disagreement prediction modeling \cite{Chandar:2013:DFP:2484028.2484161,Webber:2012:AAD:2396761.2396781,Sheshadri-thesis14} to further improve collaborative judging.
Since our current DRBO model prioritizes highly-ranked documents without considering expected disagreement, it would be of further interest to integrate both aspects into a single, coherent strategy for collaborative judging.
\section{Introduction} Fake news is written in an intentional and unverifiable language to mislead readers. It has a long history, dating back to the 19th century. In 1835, the New York Sun published a series of articles about ``the discovery of life on the moon''. Soon the fake stories were reprinted in newspapers across Europe. Fake news similarly pervades our daily life and has become more widespread with the development of the Internet. Immersed in a fast-food culture, people nowadays can easily believe something without checking whether the information is correct, such as the ``FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide''. Such fake news stories appeared frequently during the 2016 United States presidential election campaign. This phenomenon attracted widespread attention and had a significant impact on the election. Fake news dissemination is very common in social networks \cite{allcott2017social}. Due to the extensive social connections among users, fake news on certain topics, e.g., politics, celebrities, and product promotions, can propagate rapidly in online social networks and lead to a large number of nodes reporting the same (incorrect) observations. According to statistics reported by researchers at Stanford University, 72.3\% of fake news actually originates from official news media and online social networks \cite{allcott2017social,conroy2015automatic}. The potential reasons are as follows. Firstly, the emergence of social media greatly lowers the barriers to entry into the media industry. Various online blogs, ``we media'', and virtual communities have become more and more popular in recent years, in which everyone can post news articles online. Secondly, the large number of social media users provides a breeding ground for fake news. Fake news involving conspiracies and scandals can easily attract our attention.
People like to share this kind of information with their friends. Thirdly, trust and confidence in the mass media has dropped sharply in recent years. More and more people tend to trust fake news after browsing only the headlines, without reading the content at all. Identifying fake news from online social media is extremely challenging for several reasons. Firstly, it is difficult to collect fake news data, and it is also hard to label fake news manually \cite{wang2017liar}. News that appears in Facebook and Twitter news feeds is private data. Consequently, few large-scale public datasets for fake news detection exist so far. Some news datasets available online contain only a small number of instances, which is not sufficient to train a generalizable model. Secondly, fake news is written by humans. Most liars tend to use their language strategically to avoid being caught. In spite of the attempt to control what they are saying, language ``leakage'' occurs through certain verbal aspects that are hard to monitor, such as frequencies and patterns of pronoun, conjunction, and negative emotion word usage \cite{feng2013detecting}. Thirdly, the limited data representation of texts is a bottleneck for fake news identification. In the bag-of-words approach, individual word or ``n-gram'' (multiword) frequencies are aggregated and analyzed to reveal cues of deception. Further tagging of words into lexical cues, for example parts of speech or ``shallow syntax'' \cite{markowitz2016linguistic}, affective dimensions \cite{vrij2006empirical}, and location-based words \cite{ott2013negative}, can provide frequency sets that reveal linguistic cues of deception \cite{newman2003lying,hancock2007lying}. The simplicity of this representation is also its biggest shortcoming: besides relying exclusively on language, the method relies on isolated n-grams, often divorced from useful context information.
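As a rough illustration of the bag-of-words approach described above, n-gram frequencies can be aggregated in a few lines (a sketch of our own; the function name is hypothetical and tokenization is deliberately naive):

```python
from collections import Counter

def ngram_counts(text, n=2):
    """Aggregate n-gram frequencies over whitespace tokens,
    the kind of isolated frequency cues used in bag-of-words models."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
```

Each n-gram is counted independently of its position, which is exactly the loss of context the paragraph above points out.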
Word embedding techniques provide a useful way to represent the meaning of words. In some circumstances, sentences of different lengths are represented as tensors of different dimensions. Traditional models cannot handle such sparse and high-order features very well. \begin{figure}[ht] \centering \subfigure[b][Cartoon in fake news.]{\label{fig:cartoon} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/hillary.png}} \subfigure[b][Altered low-resolution image.]{\label{fig:obama} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/obama.jpg}} \subfigure[b][Irrelevant image in fake news.]{\label{fig:amish} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/amish.jpg}} \subfigure[b][Low-resolution image.]{\label{fig:hillary} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/hillary.jpg}} \caption{The images in fake news: (a) `FBI Finds Previously Unseen Hillary Clinton Emails On Weiner's Laptop', (b) `BREAKING: Leaked Picture Of Obama Being Dragged Before A Judge In Handcuffs For Wiretapping Trump', (c) `The Amish Brotherhood have endorsed Donald Trump for president', (d) `Wikileaks Gives Hillary An Ultimatum: QUIT, Or We Dump Something Life-Destroying'. The news texts of images (c) and (d) are presented in Section \ref{ssec:case_study}.} \label{fig:fake_image} \end{figure} Though deceivers make great efforts to polish fake news and avoid being caught, our analysis reveals some leakage in both the text and the images. For instance, the lexical diversity and cognition of deceivers differ markedly from those of truth tellers. Beyond the text information, images in fake news also differ from those in real news. As shown in Fig. \ref{fig:fake_image}, cartoons, irrelevant images (mismatch of text and image, no face in political news), and altered low-resolution images are frequently observed in fake news.
In this paper, we propose a TI-CNN model that considers both text and image information in fake news detection. Beyond the explicit features extracted from the data, we employ convolutional neural networks, following advances in representation learning, to learn latent features that explicit features cannot capture. Finally, TI-CNN combines the explicit and latent features of text and image information into a unified feature space, and uses the learned features to identify fake news. The contributions of this paper are summarized as follows: \begin{itemize} \item We collect a high-quality dataset and conduct an in-depth analysis of the text from multiple perspectives. \item Image information is shown to provide effective features for identifying fake news. \item A unified model is proposed to analyze text and image information using convolutional neural networks. \item The proposed model is an effective way to recognize fake news among large amounts of online information. \end{itemize} In the rest of the paper, we first define the problem of fake news identification. Then we introduce the analysis of the fake news data. A unified model is proposed to illustrate how to model the explicit and latent features of text and image information. The details of the experimental setup are given in the experiments section. Finally, we compare our model with several popular methods to show its effectiveness. \section{Related Work} Deception detection has been a hot topic in the past few years. Deceptive information includes scientific fraud, fake news, false tweets, etc. Fake news detection is a subtopic in this area. Researchers have approached the deception detection problem from two directions: (1) linguistic approaches and (2) network-based approaches. \subsection{Linguistic approaches} Mihalcea and Strapparava \cite{mihalcea2009lie} started to use natural language processing techniques to solve this problem. Liu et al.
\cite{hu2004mining} analyzed fake reviews on Amazon based on sentiment analysis, lexical features, content similarity, style similarity, and semantic inconsistency. Hai et al. \cite{hai2016deceptive} proposed a semi-supervised learning method to detect deceptive text on crowdsourced datasets in 2016. Methods based on word-level analysis alone are not sufficient to identify deception, so many researchers focus on deeper language structures, such as the syntax tree. In this case, sentences are represented as parse trees describing syntactic structure, for example noun and verb phrases, which are in turn rewritten by their syntactic constituent parts \cite{feng2012syntactic}. \subsection{Network-based approaches} Another way to identify deception is to analyze network structure and behaviors, which provide important complementary features. With the development of knowledge graphs, fact checking based on the relationships among entities becomes feasible. Ciampaglia et al. \cite{ciampaglia2015computational} proposed new `network effect' variables to derive the probability that news is true. Methods based on knowledge graph analysis can achieve 61\% to 95\% accuracy. Another promising research direction exploits social network behavior to identify deception. \subsection{Neural Network based approaches} Deep learning models are widely used in both the academic community and industry. In computer vision \cite{krizhevsky2012imagenet} and speech recognition \cite{graves2013speech}, the state-of-the-art methods are almost all deep neural networks. In the natural language processing (NLP) area, deep learning models are used to represent words as vectors. Researchers have since proposed many deep learning models based on word vectors for question answering \cite{chen2015abc}, summarization \cite{kaikhah2004automatic}, etc.
Convolutional neural networks (CNNs) utilize filters to capture local structures of an image and perform very well on computer vision tasks. Researchers have also found CNNs effective on many NLP tasks, for instance semantic parsing \cite{yih2014semantic}, sentence modeling \cite{kalchbrenner2014convolutional}, and other traditional NLP tasks \cite{collobert2011natural}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.5\textwidth]{figures/word_frequency1.png} \end{center} \caption{Word frequency in titles of real and fake news. If the news has no title, we set the title as `notitle'. The words on the top-left are frequently used in fake news, while the words on the bottom-right are frequently used in real news. The `Top Fake' words are capital characters and some meaningless numbers that represent special characters, while the `Top Real' words contain many names and motion verbs, reflecting `who' and `what' --- two important factors among the five elements of news: when, where, what, why, and who.} \label{fig:word_frequency} \end{figure} \section{Problem Definition} Given a set of $m$ news articles containing text and image information, we represent the data as a set of text-image tuples $\mathcal{A} = \{(A_i^T, A_i^I)\}_{i=1}^m$. In the fake news detection problem, we want to predict whether the news articles in $\mathcal{A}$ are fake or not. We represent the label set as $\mathcal{Y} = \{[1,0], [0,1]\}$, where $[1,0]$ denotes real news and $[0,1]$ denotes fake news. Meanwhile, from each news article $(A_i^T, A_i^I) \in \mathcal{A}$, a set of features (including both explicit and latent features, introduced later in the Model section) can be extracted from the text and image information, represented as $\mathbf{X}_i^T$ and $\mathbf{X}_i^I$ respectively.
The objective of the \textit{fake news detection} problem is to build a model $f: \mathbb{X} \to \mathcal{Y}$ that maps the extracted features $\{(\mathbf{X}_i^T, \mathbf{X}_i^I)\}_{i=1}^m \in \mathbb{X}$ to the potential labels of the news articles in $\mathcal{A}$. \section{Data Analysis} To examine findings from the raw data, we carried out a thorough investigation of the text and image information in news articles. There are notable differences between real and fake news about the 2016 American presidential election. We investigate the text and image information from various perspectives, such as computational linguistics, sentiment analysis, psychological analysis, and other image-related features. We present the quantitative properties of the data in this section, which provide important clues for identifying fake news within a large amount of data. \subsection{Dataset} The dataset in this paper contains 20,015 news articles: 11,941 fake and 8,074 real. It is available online\footnote{https://drive.google.com/open?id=0B3e3qZpPtccsMFo5bk9Ib3VCc2c}. The fake news portion contains text and metadata scraped from more than 240 websites by Megan Risdal on Kaggle\footnote{https://www.kaggle.com/mrisdal/fake-news}. The real news was crawled from well-known authoritative news websites, e.g., the New York Times, the Washington Post, etc. The dataset contains multiple fields, such as the title, text, image, author, and website. To reveal the intrinsic differences between real and fake news, we use only the title, text, and image information.
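Many of the explicit computational-linguistic statistics investigated in this section (word counts, sentence counts, punctuation, and capitalization) can be computed directly from the article text. The following is a minimal sketch of our own (the function name is hypothetical, and sentence splitting is deliberately simplistic), not the paper's exact feature extractor:

```python
import re

def text_stats(text):
    """Explicit linguistic features of the kind analyzed in this section."""
    words = text.split()
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    return {
        'n_words': len(words),
        'n_sentences': len(sentences),
        'avg_sentence_len': len(words) / max(len(sentences), 1),
        'n_question': text.count('?'),
        'n_exclam': text.count('!'),
        'n_caps': sum(c.isupper() for c in text),
    }
```

Such per-article counts are what the box plots in the following analysis aggregate over the real and fake classes.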
\begin{figure*}[ht] \centering \subfigure[b][The number of words in news.]{\label{fig:words} \includegraphics[width=0.23\textwidth]{figures/Num_of_words_of_news-eps-converted-to.pdf}} \subfigure[b][The average number of words in a sentence.]{\label{fig:avg_words}\includegraphics[width=0.23\textwidth]{figures/Avg_of_sentences_length_of_news-eps-converted-to.pdf}} \subfigure[b][Question marks in news.]{\label{fig:question_mark} \includegraphics[width=0.23\textwidth]{figures/question_mark-eps-converted-to.pdf}} \subfigure[b][Exclamation marks in news.]{\label{fig:exclamation} \includegraphics[width=0.23\textwidth]{figures/exclamation-eps-converted-to.pdf}} \subfigure[b][Exclusive words in news.]{\label{fig:exclusive} \includegraphics[width=0.23\textwidth]{figures/exclusive_words-eps-converted-to.pdf}} \subfigure[b][Negations in news.]{\label{fig:negations} \includegraphics[width=0.23\textwidth]{figures/negations-eps-converted-to.pdf}} \subfigure[b][FPP: First-person pronouns.]{\label{fig:1st} \includegraphics[width=0.23\textwidth]{figures/1_person_pronouns-eps-converted-to.pdf}} \subfigure[b][Motion verbs in news.]{\label{fig:motion_verbs} \includegraphics[width=0.23\textwidth]{figures/motion_words-eps-converted-to.pdf}} \caption{Analysis of the news text.} \end{figure*} \subsection{Text Analysis} Let us take the word frequency \cite{kessler2017scattertext} in titles as an example to demonstrate the differences between real and fake news in Fig. \ref{fig:word_frequency}. If the news has no title, we set the title to `notitle'. The frequently observed words in fake news titles are \emph{notitle, IN, THE, CLINTON} and many meaningless numbers that represent special characters. We can draw some interesting findings from the figure. Firstly, many fake news articles have no titles. They are widely spread on social networks as tweets with a few keywords and a hyperlink to the news. Secondly, there are more capital characters in fake news.
The purpose is to draw readers' attention, while real news contains fewer capital letters and is written in a standard format. Thirdly, real news contains more detailed descriptions, for example names (\emph{Jeb Bush, Mitch McConnell}, etc.) and motion verbs (\emph{left, claim, debate and poll}, etc.). \subsubsection{Computational Linguistics} \paragraph{Number of words and sentences} Although liars have some control over the content of their stories, their underlying state of mind may leak out through the style of language used to tell the story. The same is true for people who write fake news. The data presented below provides some insight into the linguistic manifestations of this state of mind \cite{hancock2007lying}. As shown in Fig. \ref{fig:words}, fake news has fewer words than real news on average: 4,360 words on average for real news vs.\ 3,943 for fake news. Moreover, the number of words in fake news is distributed over a wide range, indicating that some fake news articles have very few words while others have plenty. The number of words is only one simple view for analyzing fake news. In addition, real news has more sentences than fake news on average: 84 sentences vs.\ 69. From the above, we can derive the average number of words per sentence for real and fake news. As shown in Fig. \ref{fig:avg_words}, sentences in real news are shorter than those in fake news: 51.9 words per sentence on average for real news vs.\ 57.1 for fake news. According to the box plot, the variance of real news is much smaller than that of fake news, a phenomenon that appears in almost all the box plots. The reason is that editors of real news must write articles under certain press rules, covering length, word selection, grammatical correctness, etc.
It indicates that most real news is written in a standard and consistent way, whereas most people who write fake news do not have to follow these rules. \paragraph{Question mark, exclamation and capital letters} According to the statistics on the news text, real news has fewer question marks than fake news, as shown in Fig. \ref{fig:question_mark}. The reason may lie in the many rhetorical questions in fake news, which are used to consciously emphasize ideas and intensify sentiment. According to our analysis of the data, both real and fake news have very few exclamation marks. However, the inner fence of the fake news box plot is much larger than that of real news, as shown in Fig. \ref{fig:exclamation}. An exclamation can turn a simple indicative or declarative sentence into a strong command or reflect an emotional outburst; hence, fake news is inclined to use exclamations to fan specific emotions among readers. Capital letters are also analyzed in the real and fake news. Capitalization in news serves to draw readers' attention or to emphasize the idea expressed by the writer. According to the statistics, fake news has many more capital letters than real news, indicating that fake news deceivers are adept at using capital letters to attract readers' attention, draw them in, and make them believe the story. \paragraph{Cognitive perspective} From the cognitive perspective, we investigate the exclusive words (e.g., `but', `without', `however') and negations (e.g., `no', `not') used in the news. Truth tellers use negations more frequently, as shown in Fig. \ref{fig:exclusive} and \ref{fig:negations}. Exclusive words exhibit a phenomenon similar to negations: the median number of negations in fake news is much smaller than that of real news.
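Marker counts like these (question marks, exclamations, capital letters, exclusive words and negations) can be gathered with straightforward string processing. A minimal sketch, assuming illustrative word lists rather than the exact lexicons used in this paper:

```python
import re

# Illustrative lexicons; the paper's exact word lists are not specified.
EXCLUSIVE = {"but", "without", "however", "except"}
NEGATIONS = {"no", "not", "never", "n't"}

def linguistic_counts(text):
    """Count simple deception-related markers in a news article."""
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "question_marks": text.count("?"),
        "exclamations": text.count("!"),
        "capital_letters": sum(ch.isupper() for ch in text),
        "exclusive_words": sum(w in EXCLUSIVE for w in words),
        "negations": sum(w in NEGATIONS for w in words),
    }

counts = linguistic_counts("He did NOT win. But who checked? Nobody!")
```

Aggregating such counts per article is enough to reproduce box plots of the kind shown above.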
Deceivers must be more specific and precise when they use exclusive words and negations, to lower the likelihood of being caught in a contradiction; hence, they use fewer of them in writing. Truth tellers, by contrast, can discuss exactly what happened and what did not happen, because the writer of real news witnessed the event and knows its details. Notably, individuals who use a higher number of ``exclusive'' words are generally healthier than those who do not use these words \cite{pennebaker1999linguistic}. \subsubsection{Psychology Perspective} From the psychology perspective, we also investigate the use of first-person pronouns (e.g., I, we, my) in real and fake news. Deceptive people often use language that minimizes references to themselves. A person who is lying tends to avoid ``we'' and ``I'', and personal pronouns in general. Instead of saying ``I didn't take your book,'' a liar might say ``That's not the kind of thing that anyone with integrity would do'' \cite{newman2003lying}. As shown in Fig. \ref{fig:1st}, our results agree with this psychological view: on average, fake news has fewer first-person pronouns. The second-person pronouns (e.g., you, yours) and third-person pronouns (e.g., he, she, it) are also tallied. We find that deceptive information can be characterized by the use of fewer first-person, fewer second-person and more third-person pronouns. Given space limitations, we only show the first-person pronoun figure. In addition, deceivers avoid discussing the details of the news event; hence, they use few motion verbs, as shown in Fig. \ref{fig:motion_verbs}. \subsubsection{Lexical Diversity} Lexical diversity measures how many different words are used in a text, while lexical density measures the proportion of lexical items (i.e., nouns, verbs, adjectives and some adverbs) in the text. Richer news exhibits more diversity.
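One simple way to compute such a diversity score is a type-token style ratio; the sketch below is illustrative only, and the normalization behind the very small values reported in our experiments is an assumption not detailed here:

```python
def lexical_diversity(text):
    """Ratio of distinct words to total words (type-token ratio)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

# A richer vocabulary yields a higher score.
score = lexical_diversity("the cat sat on the mat")  # 5 distinct words out of 6
```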
According to the experimental results, the lexical diversity of real news is $2.2e\text{-}06$, which is larger than the $1.76e\text{-}06$ of fake news. \subsubsection{Sentiment Analysis} The sentiment \cite{liu2012sentiment} in real and fake news differs markedly: real news is more positive than negative. The reason is that deceivers may feel guilty or lack confidence in the topic; under this tension and guilt, the deceivers express more negative emotion \cite{markowitz2016linguistic,pennebaker1999linguistic}. The experimental results in Fig. \ref{fig:sentiment} agree with this analysis. The standard deviation of fake news on negative sentiment is also larger than that of real news, which indicates that some fake news has very strong negative sentiment. \begin{figure}[ht] \centering \subfigure[b][The median sentiment values: positive and negative.]{\label{fig:median_pos_neg} \includegraphics[width=0.23\textwidth]{figures/median_pos_neg-eps-converted-to.pdf}} \subfigure[b][The standard deviation of sentiment values: positive and negative.]{\label{fig:sd_pos_neg} \includegraphics[width=0.23\textwidth]{figures/sd_pos_neg-eps-converted-to.pdf}} \caption{Sentiment analysis on real and fake news.} \label{fig:sentiment} \end{figure} \subsection{Image Analysis} We also analyze the properties of images in the political news. Based on observations of the images, we find that there are more faces in real news, while some fake news carries irrelevant images, such as animals and scenes. The experimental results are consistent with this observation: there are 0.366 faces on average in real news, versus 0.299 in fake news. In addition, real news has higher-resolution images than fake news: $457 \times 277$ pixels on average for real news, versus $355\times 228$ for fake news. \section{Model -- the architecture} In this section, we introduce the architecture of the TI-CNN model in detail.
Besides the explicit features, we utilize two parallel CNNs to extract latent features from textual and visual information, respectively. Explicit and latent features are then projected into the same feature space to form new representations of texts and images. Finally, we fuse the textual and visual representations for fake news detection. As shown in Fig. \ref{fig:architecture}, the overall model contains two major branches, i.e., the text branch and the image branch. Each branch takes textual or visual data as input and extracts explicit and latent features for the final prediction. To explain the construction of TI-CNN, we introduce the model by answering the following questions: 1) How to extract latent features from the text? 2) How to combine the explicit and latent features? 3) How to handle the text and image features together? 4) How to design the model with fewer parameters? 5) How to train the model and accelerate the training process? \renewcommand{\arraystretch}{0.7} \begin{table}[htbp] \centering \caption{Symbols in this paper.} \begin{tabular}{lll} \toprule \multicolumn{1}{c}{Parameter} & \multicolumn{1}{l}{Parameter Name } & \multicolumn{1}{c}{Dimension} \\ \midrule $\mathbf{X}^{Tl}_{i,j}$ & latent word vector $j$ in sample $i$ & $\mathbb{R}^k$ \\ $\mathbf{X}^{Tl}_{i,1:n}$ & sentence for sample $i$ & $\mathbb{R}^{n\times k}$ \\ $\mathbf{X}_i^{Te}$ & explicit text feature for sample $i$ & $\mathbb{R}^k$ \\ $\mathbf{X}_i^{Ie}$ & explicit image feature for sample $i$ & $\mathbb{R}^k$ \\ $\mathbf{X}_i^{Il}$ & latent image feature for sample $i$ & $\mathbb{R}^k$ \\ $\theta$ & weight for the word & $\mathbb{R}^{h \times k}$ \\ $\mathbb{Y}$ & label of news & $\mathbb{R}^{n\times 2}$ \\ $w$ & filter for texts & $\mathbb{R}^{h \times k}$ \\ $b$ & bias & $\mathbb{R}$ \\ $\mathbf{c}$ & feature map & $\mathbb{R}^{n-h+1}$ \\ $\hat{c}$ & the maximum value in feature map & $\mathbb{R}$ \\ $M$ & number of maps & $\mathbb{R}$ \\ $M_i$
& the $i$-th filter for images & $\mathbb{R}^{K_\alpha \times K_\beta}$ \\ $\tau$ & the scores in tags of label & $\mathbb{R}$ \\ $T$ & the number of tags in label & $\mathbb{R}$ \\ $s_{w}(\mathbb{X})_\tau$ & the predicted probability & $[0,1]$ \\ \bottomrule \end{tabular} \label{tab:symbols}% \end{table}% \subsection{Text Branch} For the text branch, we utilize two types of features: textual explicit features $\mathbf{X}^{Te}$ and textual latent features $\mathbf{X}^{Tl}$. The textual explicit features are derived from the statistics of the news text described in the data analysis section, such as the length of the news, the number of sentences, question marks, exclamations and capital letters, etc. The statistics of a single news article can be organized as a fixed-size vector, which is then transformed by a fully connected layer to form the textual explicit features. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{figures/framework33-eps-converted-to.pdf} \end{center} \caption{The architecture of the model. The rectangles in the last 5 layers represent the hidden dense layers. The dropout, batch normalization and flatten layers are not drawn for brevity. The details of the structure are shown in Table \ref{tab:hyperparameter}.} \label{fig:architecture} \end{figure*} The textual latent features in the model are based on a variant of the CNN. Although CNNs are mainly used in computer vision tasks, such as image classification \cite{krizhevsky2012imagenet} and object recognition \cite{ren2015faster}, they also show notable performance in many natural language processing (NLP) tasks \cite{kim2014convolutional,zeng2014relation}. With the convolutional approach, the network produces local features around each word from its adjacent words and then combines them using a max operation to create a fixed-sized word-level embedding, as shown in Fig. \ref{fig:architecture}.
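This convolve-and-max pipeline over word embeddings can be sketched in plain NumPy with toy dimensions (an illustration, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k, h = 8, 4, 3             # words per news, embedding dim, filter height
X = rng.normal(size=(n, k))   # one news article: n stacked word vectors
w = rng.normal(size=(h, k))   # one convolutional filter
b = 0.1                       # bias

# Slide the filter over every window of h consecutive words.
c = np.array([np.tanh(np.sum(w * X[j:j + h]) + b) for j in range(n - h + 1)])

c_hat = c.max()               # max-pooling keeps the strongest response
```

With many such filters, the pooled values form the fixed-size latent feature vector fed to the dense layer.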
Therefore, we employ a CNN to model textual latent features for fake news detection. Let the $j$-th word in news $i$ be denoted by $\mathbf{x}_{i,j} \in \mathbb{R}^{k}$, a $k$-dimensional word embedding vector. Suppose the maximum length of the news is $n$, such that news with fewer than $n$ words is padded to a sequence of length $n$. The overall news can then be written as \begin{equation} \mathbf{X}_{i,1:n}^{Tl}=\mathbf{x}_{i,1} \oplus \mathbf{x}_{i,2} \oplus ... \oplus \mathbf{x}_{i,n}. \end{equation} That is, the news $\mathbf{X}_{i,1:n}^{Tl}$ is the concatenation of its word vectors, so each news article can be represented as a matrix. Then we use convolutional filters $w \in \mathbb{R}^{h\times k}$ to construct new features. For instance, a window of words $\mathbf{X}_{i,j:j+h-1}^{Tl}$ produces a feature $c_j$ as follows: \begin{equation} c_j=f(\mathbf{w} \cdot \mathbf{X}_{i,j:j+h-1}^{Tl}+b), \end{equation} where $b\in \mathbb{R}$ is the bias, $\cdot$ is the convolutional operation, and $f$ is a non-linear transformation such as the sigmoid or hyperbolic tangent function. A feature map is generated by applying the filter to every possible window of words in the news: \begin{equation} \mathbf{c}=[c_1,c_2,...,c_{n-h+1}], \end{equation} where $\mathbf{c}\in \mathbb{R}^{n-h+1}$. A max-pooling layer \cite{nagi2011max} then takes the maximum value of the feature map $\mathbf{c}$, denoted $\hat{c}=\max\{\mathbf{c}\}$. Max-pooling greatly improves the robustness of the model by retaining the most important convolutional results for fake news detection. The pooling results are fed into a fully connected layer to obtain the final textual latent features for predicting news labels. \subsection{Image Branch} Similar to the text branch, we use two types of features: visual explicit features $\mathbf{X}^{Ie}$ and visual latent features $\mathbf{X}^{Il}$. As shown in Fig.
\ref{fig:architecture}, in order to obtain the visual explicit features, we first extract the resolution of an image and the number of faces in the image to form a feature vector, and then transform the vector into our visual explicit features with a fully connected layer. Although the visual explicit features convey information about the images contained in the news, they are hand-crafted rather than data-driven. To derive more powerful features directly from the raw images contained in the news, we employ another CNN to learn from the images. \subsubsection{Convolutional layer} In the convolutional layer, filters are replicated across the entire visual field and share the same parameterization, forming a feature map; this gives the network the desirable property of translation invariance. Suppose the convolutional layer has $M$ maps of size $(M_\alpha,M_\beta)$, and a filter of size $(K_\alpha,K_\beta)$ is shifted over all regions of the image. The size of the output map is then: \begin{equation} M_{\alpha}^{n}=M_{\alpha}^{n-1} - K_{\alpha}^{n}+1 \end{equation} \begin{equation} M_{\beta}^{n}=M_{\beta}^{n-1} - K_{\beta}^{n}+1 \end{equation} \subsubsection{Max-pooling layer} A max-pooling layer \cite{nagi2011max} is connected to the convolutional layer, taking the maximum activation over rectangular regions of size $(K_\alpha,K_\beta)$. Max-pooling enables position invariance over larger local regions and downsamples the input image by a factor of $K_\alpha$ and $K_\beta$ along each direction, which helps the model select invariant features, converge faster and generalize significantly better. A theoretical analysis of feature pooling in general and max-pooling in particular is given by \cite{boureau2010theoretical}. \begin{table*}[ht] \centering \caption{Two fake news articles corresponding to Fig.
\ref{fig:amish} and \ref{fig:hillary}.} \label{tab:case_study} \resizebox{\textwidth}{!}{% \begin{tabular}{|>{\arraybackslash}M{2.5cm}|>{\arraybackslash}m{14cm}|>{\centering\arraybackslash}M{0.5cm}|} \hline \multicolumn{1}{|>{\centering\arraybackslash}p{23mm}|}{\thead{ \textbf{Title}}} & \multicolumn{1}{>{\centering\arraybackslash}p{8cm}|}{\thead{ \textbf{News text}}} & \multicolumn{1}{>{\centering\arraybackslash}p{0.2cm}|}{\textbf{Type}} \\ \hline The Amish Brotherhood have endorsed Donald Trump for president. & The Amish, who are direct descendants of the protestant reformation sect known as the Anabaptists, have typically stayed out of politics in the past. As a general rule, they don't vote, serve in the military, or engage in any other displays of patriotism. This year, however, the AAB has said that it is imperative that they get involved in the democratic process. & Fake \\ \hline Wikileaks Gives Hillary An Ultimatum: QUIT, Or We Dump Something Life-Destroying & On Sunday, Wikileaks gave Hillary Clinton less than a 24-hour window to drop out of the race or they will dump something that will destroy her ``completely''. Recently, Julian Assange confirmed that WikiLeaks was not working with the Russian government, but in their pursuit of justice, they are obligated to release anything that they can to bring light to a corrupt system -- and who could possibly be more corrupt than Crooked Hillary? & Fake \\ \hline \end{tabular}% } \end{table*} \subsection{Rectified Linear Neuron} The sigmoid and tanh activation functions may cause exploding or vanishing gradients \cite{pascanu2013difficulty} in convolutional neural networks. Hence, we add ReLU activations to the image branch to remedy the vanishing gradient problem: \begin{equation} y = \max(0,\sum_{i=1}^{k}x_{i}\theta_{i}+b) \end{equation} ReLUs can also improve neural networks by speeding up training.
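A minimal NumPy illustration of this activation and its piecewise-constant gradient:

```python
import numpy as np

def relu(z):
    """Elementwise max(0, z)."""
    return np.maximum(0.0, z)

def relu_grad(z):
    """Subgradient of ReLU: 1 where z > 0, else 0."""
    return (z > 0).astype(float)

z = np.array([-2.0, -0.5, 0.0, 1.5])
y = relu(z)        # [0., 0., 0., 1.5]
g = relu_grad(z)   # [0., 0., 0., 1.]
```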
The gradient computation is very simple (either 0 or 1, depending on the sign of $x$), and the forward computation of a ReLU is cheap: negative elements are simply set to 0, with no exponentials, multiplications or divisions. Logistic and hyperbolic tangent networks suffer from the vanishing gradient problem, where the gradient essentially becomes 0 after a certain amount of training (because of the two horizontal asymptotes) and stops all learning in that section of the network. ReLU units have zero gradient only on one side, which is empirically superior. \subsection{Regularization} As shown in Table \ref{tab:hyperparameter}, we employ dropout \cite{srivastava2014dropout} as well as $l_2$-norms to prevent overfitting. Dropout sets elements of the weight vectors to zero with probability $p$ for the hidden units during forward and backward propagation. For instance, consider a dense layer $z=[z_1,...,z_m]$ and a masking vector $r$ whose elements are initially zero. During training, dropout sets elements of $r$ to 1 with probability $p$. Letting $y$ denote the output of the dense layer, the dropout operation can be formulated as \begin{equation} y=\theta \cdot (z\circ r)+b, \end{equation} where $\theta$ is the weight vector and $\circ$ is the element-wise multiplication operator. At test time, the dropped neurons are restored and the weights are scaled by $p$, i.e., $\hat{\theta} =p\theta$; the scaled $\hat{\theta}$ is used to predict the test samples. This procedure is applied at every training iteration and greatly improves the generalization ability of the model. We also use early stopping \cite{prechelt1998automatic} to avoid overfitting; it can be considered another type of regularization (like $l_1$/$l_2$ weight decay and dropout). \subsection{Network Training} We train our neural network by minimizing the negative log likelihood on the training dataset $D$.
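As a numeric sketch of this objective, assuming a softmax over the two label scores:

```python
import numpy as np

def softmax(scores):
    """Convert raw tag scores into a probability distribution."""
    e = np.exp(scores - scores.max())  # shift for numerical stability
    return e / e.sum()

def nll(scores, true_tag):
    """Negative log likelihood of the true tag under the softmax."""
    return -np.log(softmax(scores)[true_tag])

scores = np.array([2.0, -1.0])  # illustrative scores for (real, fake)
loss = nll(scores, 0)           # small loss: the correct tag scores highest
```

The optimizer then adjusts the network parameters to drive this loss down over the training set.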
To identify the label of a news article $\mathbb{X}$, the network with parameters $\theta$ computes a score $s_{\theta}(\mathbb{X})_{\tau}$ for each tag. A softmax function over the scores of all tags $\tau \in T$ then gives the conditional probability distribution of the labels: \begin{equation} p(\tau|\mathbb{X},\theta)=\frac{e^{s_{\theta}(\mathbb{X})_{\tau}}}{\sum_{\forall i\in T}{e^{s_{\theta}(\mathbb{X})_i}}} \label{equ:equation} \end{equation} The negative log likelihood of Equation \ref{equ:equation} is \begin{equation} E(\theta)=-\ln p(\tau|\mathbb{X},\theta)=-s_{\theta}(\mathbb{X})_{\tau}+\log\left (\sum_{\forall i\in T}e^{s_{\theta}(\mathbb{X})_{i}}\right) \end{equation} We use RMSprop \cite{hinton2012neural} to minimize this loss with respect to the parameters $\theta$: \begin{equation} \theta^{*} = \arg\min_{\theta} \sum_{(\mathbb{X},\mathbb{Y})\in D}{-\log\ p(\mathbb{Y}|\mathbb{X},\theta)} \end{equation} where $\mathbb{X}$ is the input data and $\mathbb{Y}$ is the label of the news. We use the back-propagation algorithm \cite{hecht1988theory} to compute the gradients of the network. With fine-tuned hyperparameters, the loss converges to a good local minimum within a few epochs. \section{Experiments} \subsection{Case study} \label{ssec:case_study} A case study of the fake news is given in this section. The two fake news articles in Table \ref{tab:case_study} correspond to Fig. \ref{fig:amish} and \ref{fig:hillary}. The first is an article reporting that `the American Amish Brotherhood endorsed Donald Trump for President'. However, the website is a fake CNN page, and the image in the article can easily be found online and is not very relevant to the news text\footnote{http://cnn.com.de/news/amish-commit-vote-donald-trump-now-lock-presidency/}. The second fake news article -- `Wikileaks gave Hillary Clinton less than a 24-hour window to drop out of the race' -- is actually not from Wikileaks.
Besides, the composite image\footnote{http://thelastlineofdefense.org/wikileaks-gives-hillary-an-ultimatum-quit-or-we-dump-something-life-destroying/} in the news is of low quality. \subsection{Experimental Setup} We use 80\% of the data for training, 10\% for validation and 10\% for testing. All experiments are run at least 10 times separately. The textual explicit subbranch and the visual explicit subbranch are connected with dense layers, whose parameters can be learned easily by the back-propagation algorithm. Thus, most of the parameters that need to be tuned lie in the textual latent subbranch and the visual latent subbranch. The parameters are set as follows. \subsubsection{Text branch} For the textual latent subbranch, the embedding dimension of word2vec is set to 100; how the parameters are selected is demonstrated in the sensitivity analysis section. The context window of word2vec is set to 10 words. The filter size in the convolutional neural network is $(3,3)$, and there are 10 filters in all. Two dropout layers are adopted to improve the model's generalization ability. For the textual explicit subbranch, we first add a dense layer with 100 neurons, and then a batch normalization layer that normalizes the activations of the previous layer at each batch, i.e., applies a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1. The outputs of the textual explicit subbranch and the textual latent subbranch are combined by summing them up. \renewcommand{\arraystretch}{0.55} \begin{table}[] \centering \caption{Model specifications. BN: Batch Normalization, ReLU: Rectified Linear Activation Function, Conv: Convolutional Layer on 2D data, Conv1D: Convolutional Layer on 1D data, Dense: Dense Layer, Emb: Embedding layer, MaxPo: Max-Pooling on 2D data, MaxPo1D: Max-Pooling on 1D data.
There are two kinds of dropout layers, i.e., $D=(D_\alpha,D_\beta)$, where $D_\alpha = 0.5$ and $D_\beta = 0.8$.} \label{tab:hyperparameter} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{Text Branch} & \multicolumn{2}{c|}{Image Branch} \\ \hline \begin{tabular}[c]{@{}c@{}}Textual \\ Explicit\end{tabular} & \begin{tabular}[c]{@{}c@{}}Textual \\ Latent\end{tabular} & \begin{tabular}[c]{@{}c@{}}Visual \\ Latent\end{tabular} & \begin{tabular}[c]{@{}c@{}}Visual \\ Explicit\end{tabular} \\ \hline \multirow{3}{*}{Input 31$\times$1} & Emb 1000$\times$100 & Input 50$\times$50$\times$3 & \multirow{3}{*}{Input 4$\times$1} \\ \cline{2-3} & \multirow{2}{*}{Dropout $D_\alpha$} & (2$\times$2) Conv(32) & \\ \cline{3-3} & & ReLU & \\ \hline \multirow{4}{*}{Dense 128} & Emb 1000$\times$100 & Dropout $D_\beta$ & \multirow{4}{*}{Dense 128} \\ \cline{2-3} & (3,3) Conv1D(10) & (2$\times$2) MaxPo & \\ \cline{2-3} & 2 MaxPo1D & (2$\times$2) Conv(32) & \\ \cline{2-3} & Flatten & ReLU & \\ \hline \multirow{6}{*}{BN} & \multirow{2}{*}{Dense 128} &Dropout $D_\beta$ & \multirow{6}{*}{BN} \\ \cline{3-3} & & (2$\times$2) MaxPo & \\ \cline{2-3} & \multirow{2}{*}{BN} & (2$\times$2) Conv(32) & \\ \cline{3-3} & & ReLU & \\ \cline{2-3} & \multirow{2}{*}{ReLU} &Dropout $D_\beta$ & \\ \cline{3-3} & & (2$\times$2) MaxPo & \\ \hline \multirow{4}{*}{ReLU} & \multirow{4}{*}{Dropout $D_\beta$ } & Flatten & \multirow{4}{*}{ReLU} \\ \cline{3-3} & & Dense 128 & \\ \cline{3-3} & & BN & \\ \cline{3-3} & & ReLU & \\ \hline \multicolumn{2}{|c|}{Merge} & \multicolumn{2}{c|}{Merge} \\ \hline \multicolumn{4}{|c|}{Merge} \\ \hline \multicolumn{4}{|c|}{ReLU} \\ \hline \multicolumn{4}{|c|}{Dense 128} \\ \hline \multicolumn{4}{|c|}{BN} \\ \hline \multicolumn{4}{|c|}{Sigmoid} \\ \hline \end{tabular} \end{table} \subsubsection{Image branch} For the visual latent subbranch, all the images are reshaped to size $(50\times 50)$. Three convolutional layers are added to the network hierarchically.
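Using the output-map size formula from the convolutional layer subsection, the feature-map sizes through a stack of this shape can be checked quickly (a sketch with illustrative filter and pooling sizes; padding and strides are assumptions):

```python
def conv_out(m, k):
    """Valid convolution: M^n = M^{n-1} - K^n + 1."""
    return m - k + 1

def pool_out(m, k):
    """Non-overlapping max-pooling downsamples by a factor of k."""
    return m // k

# Illustrative 50x50 input through three conv(3x3) + maxpool(2x2) stages:
# 50 -> 48 -> 24 -> 22 -> 11 -> 9 -> 4
size = 50
for _ in range(3):
    size = pool_out(conv_out(size, 3), 2)
```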
The filter size is set to $(3,3)$, and there are 32 filters for each convolutional layer, each followed by a ReLU activation layer. A max-pooling layer with pool size $(2,2)$ is connected to each convolutional layer to reduce overfitting. Finally, flatten, batch normalization and activation layers are added to the model to extract the latent features from the images. For the visual explicit subbranch, the input explicit features are connected to a dense layer with 100 neurons, followed by batch normalization and activation layers. The outputs of the image convolutional neural network and the visual explicit subbranch are combined by summing them up. We then concatenate the outputs of the text and image branches; an activation layer and a dense layer transform the output into two dimensions, and the labels of the news are given by the last sigmoid activation layer. In Table \ref{tab:hyperparameter}, we show the parameter settings of the TI-CNN model. The total number of parameters is 7,509,980, of which 7,509,176 are trainable. \subsection{Experimental Results} We compare our model with several competitive baseline methods in Table \ref{tab:baseline}. With image information only, the model cannot identify fake news well, which indicates that image information alone is insufficient. With text information, a traditional machine learning method, logistic regression \cite{hosmer2013applied}, is employed to detect fake news; however, it fails because its decision boundary is a linear hyperplane while the raw data is linearly inseparable. GRU \cite{chung2014empirical} and long short-term memory (LSTM) \cite{hochreiter1997long} models on text are inefficient with very long sequences, and perform worse with an input length of 1,000; hence, we use an input length of 400 for these baselines.
With text and image information, TI-CNN outperforms all the baseline methods significantly. \renewcommand{\arraystretch}{1} \begin{table}[h!tbp] \centering \caption{The experimental results of the baseline methods and TI-CNN. The number after the name of a model is the maximum input length of the textual information. News texts with fewer than 1,000 words are padded with $0$.} \begin{tabular}{cccc} \toprule \thead{ \textbf{Method}} & \thead{ \textbf{Precision}} & \thead{ \textbf{Recall}} & \thead{ \textbf{F1-measure}} \\ \hline \textbf{CNN-image} & 0.5387 & 0.4215 & 0.4729 \\ \textbf{LR-text-1000} & 0.5703 & 0.4114 & 0.4780 \\ \textbf{CNN-text-1000} & 0.8722 & 0.9079 & 0.8897 \\ \textbf{LSTM-text-400} & 0.9146 & 0.8704 & 0.8920 \\ \textbf{GRU-text-400} & 0.8875 & 0.8643 & 0.8758 \\ \textbf{TI-CNN-1000} & \textbf{\ 0.9220 } & \textbf{\ 0.9277 } & \textbf{\ 0.9210 } \\ \bottomrule \end{tabular}% \label{tab:baseline}% \end{table}% \subsection{Sensitivity Analysis} In this section, we study the effect of several parameters in the proposed model: the word embedding dimension, the batch size, the hidden layer dimension, the dropout probabilities and the filter size. \begin{figure*}[ht] \subfigure[b][Word embedding dimension and F1-measure.]{\label{fig:word_embedding} \includegraphics[width=0.3\textwidth]{figures/word_embedding.jpeg}} \subfigure[b][Batch size and F1-measure.]{\label{fig:batch_size} \includegraphics[width=0.3\textwidth]{figures/batch_size.jpeg}} \subfigure[b][Hidden layer dimension and F1-measure.]{\label{fig:hidden_dimension} \includegraphics[width=0.3\textwidth]{figures/hidden_dimension.jpeg}} \caption{Word embedding dimension, batch size and the performance of the model.} \label{fig:word_embedding_and_batch_size} \end{figure*} \paragraph{word embedding dimensions} In the text branch, we exploit a three-layer neural network to learn the word embeddings. The learned word vectors can have different dimensions, ranging from 50 to 350.
In Fig. \ref{fig:word_embedding}, we plot the relation between the word embedding dimension and the performance of the model. We find that the precision, recall and F1-measure increase as the word embedding dimension goes up from 50 to 100. From 100 to 350, however, the precision decreases, while the recall keeps growing with the embedding dimension. We select 100 as the word embedding dimension because the precision, recall and F1-measure are balanced there. For fake news detection in real-world applications, a high-recall model is also a good choice: publishers can first use it to collect all suspected fake news, and then identify the fake news by manual inspection. \begin{figure}[h!t] \subfigure[b][Dropout probabilities ($D_\alpha, D_\beta$) and the performance of the model.]{\label{fig:drop_out} \includegraphics[width=0.4\textwidth]{figures/drop_out.png}} \subfigure[b][Filter size and the performance of the model.]{\label{fig:filter_size} \includegraphics[width=0.4\textwidth]{figures/filter_size.png}} \caption{Dropout probabilities ($D_\alpha, D_\beta$), filter size and the performance of the model.} \label{fig:dropout_and_filter_size} \end{figure} \paragraph{batch size} Batch size defines the number of samples that are propagated through the network in one pass. The higher the batch size, the more memory space the program needs; the lower the batch size, the less time the training process takes. The relation between batch size and the performance of the model is shown in Fig. \ref{fig:batch_size}. The best choices for batch size are 32 and 64: the F1-measure goes up as the batch size increases from 8 to 32, and then drops as the batch size increases from 32 to 128. For batch size 8, it takes 32 seconds to train the model for each epoch.
For batch size 128, it takes more than 10 minutes to train the model for each epoch. \paragraph{hidden layer dimension} As shown in Fig. \ref{fig:architecture}, there are many hidden dense layers in the model. Deciding the number of neurons in the hidden layers is an important part of deciding the overall neural network architecture. Though these layers do not directly interact with the external environment, they have a tremendous influence on the final output. Too few neurons in the hidden layers results in underfitting, while too many can also cause several problems, so a compromise must be reached. As shown in Fig. \ref{fig:hidden_dimension}, we find that 128 is the best hidden layer dimension: the performance first improves as the hidden layer dimension increases from 8 to 128, but when the dimension reaches 256, the performance drops due to overfitting. \paragraph{Dropout probability and filter size} We analyze the dropout probabilities shown in Table \ref{tab:hyperparameter}. $D_\alpha$ in Fig. \ref{fig:drop_out} is the dropout layer connected to the text embedding layer, while $D_\beta$ is used in both the text and image branches. We use grid search to choose the dropout probabilities. The model performs well when $D_\alpha$ is in the range $[0.1,0.5]$ and $D_\beta$ is in the range $[0.1,0.8]$. In this paper, we set the dropout probabilities to $(0.5,0.8)$, which improves the model's generalization ability and accelerates the training process. The filter size of the 1-dimensional convolutional layer in the textual latent subbranch is also a key factor in the performance of the model. According to \cite{kim2014convolutional}, such models prefer small filter sizes for text information, which is consistent with the experimental results in Fig. \ref{fig:filter_size}.
When the filter size is set to $(3,3)$, the F1-measure of the model is 0.92--0.93. \section{Conclusions and Future Work} The spread of fake news has raised concerns all over the world recently, and fake political news may have severe consequences, so identifying fake news grows in importance. In this paper, we propose a unified model, TI-CNN, which combines text and image information with the corresponding explicit and latent features. The proposed model is easily extensible and can absorb other features of the news. Moreover, the convolutional architecture allows the model to see the entire input at once and can be trained much faster than LSTMs and many other RNN models. We perform experiments on a dataset collected before the presidential election. The experimental results show that TI-CNN can successfully identify fake news based on the explicit features and the latent features learned by the convolutional neurons. The dataset in this paper focuses on news about the American presidential election. We will crawl more data about the French national elections to further investigate the differences between real and fake news in other languages. It is also a promising direction to identify fake news with additional social network information, such as social network structures and users' behaviors. In addition, the relevance between the headline and the news text is a very interesting research topic that is useful for identifying fake news. With the development of Generative Adversarial Networks (GANs) \cite{goodfellow2014generative,radford2015unsupervised}, captions can be generated for images, which provides a novel way to evaluate the relevance between an image and the news text. \bibliographystyle{plain} \section{Introduction} Fake news is written in an intentional and unverifiable language to mislead readers, and it has a long history dating back to the 19th century.
In 1835, the New York Sun published a series of articles about ``the discovery of life on the moon''. Soon the fake stories were printed in newspapers in Europe. Similarly, fake news widely exists in our daily life and is becoming more widespread with the development of the Internet. Exposed to a fast-food culture, people nowadays can easily believe something without even checking whether the information is correct or not, such as the ``FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide''. Such fake news frequently appeared during the United States presidential election campaign in 2016. This phenomenon aroused public attention and had a significant impact on the election. Fake news dissemination is very common in social networks \cite{allcott2017social}. Due to the extensive social connections among users, fake news on certain topics, e.g., politics, celebrities and product promotions, can propagate and rapidly lead to a large number of nodes reporting the same (incorrect) observations in online social networks. According to statistical results reported by researchers at Stanford University, 72.3\% of fake news actually originates from official news media and online social networks \cite{allcott2017social,conroy2015automatic}. The potential reasons are as follows. Firstly, the emergence of social media greatly lowers the barriers to entering the media industry. Various online blogs, ``we media'', and virtual communities have become more and more popular in recent years, in which everyone can post news articles online. Secondly, the large number of social media users provides a breeding ground for fake news. Fake news involving conspiracies and pitfalls can always attract our attention, and people like to share this kind of information with their friends. Thirdly, trust and confidence in the mass media have dropped greatly in recent years.
More and more people tend to trust fake news by browsing the headlines only, without reading the content at all. Fake news identification from online social media is extremely challenging for various reasons. Firstly, it is difficult to collect fake news data, and it is also hard to label fake news manually \cite{wang2017liar}. News that appears on Facebook and Twitter news feeds is private data. To date, few large-scale public datasets for fake news detection exist. Some news datasets available online contain only a small number of instances, which is not sufficient to train a generalized model for application. Secondly, fake news is written by humans. Most liars tend to use their language strategically to avoid being caught. In spite of their attempts to control what they are saying, language ``leakage'' occurs with certain verbal aspects that are hard to monitor, such as the frequencies and patterns of pronoun, conjunction, and negative emotion word usage \cite{feng2013detecting}. Thirdly, the limited data representation of texts is a bottleneck of fake news identification. In the bag-of-words approach, individual word or ``n-gram'' (multiword) frequencies are aggregated and analyzed to reveal cues of deception. Further tagging of words into respective lexical cues, for example, parts of speech or ``shallow syntax'' \cite{markowitz2016linguistic}, affective dimensions \cite{vrij2006empirical}, and location-based words \cite{ott2013negative}, can all provide frequency sets to reveal linguistic cues of deception \cite{newman2003lying,hancock2007lying}. The simplicity of this representation also leads to its biggest shortcoming. In addition to relying exclusively on language, the method relies on isolated n-grams, often divorced from useful context information. Word embedding techniques provide a useful way to represent the meaning of a word.
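The bag-of-n-grams counting described above can be sketched with the standard library. The function name and the toy sentence below are ours and purely illustrative; in the deception-cue literature such counts would be aggregated over many known real and fake articles.

```python
from collections import Counter

def ngram_counts(text, n=2):
    """Aggregate n-gram frequencies of a text (bag-of-n-grams view)."""
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Illustrative toy input: bigram frequencies of a single sentence.
counts = ngram_counts("the fake news spreads and the fake news misleads", n=2)
```

As the surrounding text notes, the weakness of this representation is that each n-gram is counted in isolation, divorced from its context.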
In some circumstances, sentences of different lengths are represented as tensors of different dimensions, and traditional models cannot handle such sparse and high-order features very well. \begin{figure}[ht] \centering \subfigure[b][Cartoon in fake news.]{\label{fig:cartoon} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/hillary.png}} \subfigure[b][Altered low-resolution image.]{\label{fig:obama} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/obama.jpg}} \subfigure[b][Irrelevant image in fake news.]{\label{fig:amish} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/amish.jpg}} \subfigure[b][Low-resolution image.]{\label{fig:hillary} \includegraphics[width=0.22\textwidth,height=2.5cm]{figures/hillary.jpg}} \caption{The images in fake news: (a) `FBI Finds Previously Unseen Hillary Clinton Emails On Weiner's Laptop', (b) `BREAKING: Leaked Picture Of Obama Being Dragged Before A Judge In Handcuffs For Wiretapping Trump', (c) `The Amish Brotherhood have endorsed Donald Trump for president', (d) `Wikileaks Gives Hillary An Ultimatum: QUIT, Or We Dump Something Life-Destroying'. The news texts of images (c) and (d) are presented in Section \ref{ssec:case_study}.} \label{fig:fake_image} \end{figure} Though the deceivers make great efforts to polish fake news to avoid being caught, our analysis reveals leakages in both the text and the images. For instance, the lexical diversity and cognition of deceivers are totally different from those of truth tellers. Beyond the text information, images in fake news also differ from those in real news. As shown in Fig. \ref{fig:fake_image}, cartoons, irrelevant images (mismatch of text and image, no face in political news) and altered low-resolution images are frequently observed in fake news. In this paper, we propose the TI-CNN model to consider both text and image information in fake news detection.
Beyond the explicit features extracted from the data, with the development of representation learning, convolutional neural networks are employed to learn the latent features that cannot be captured by the explicit features. Finally, we utilize TI-CNN to combine the explicit and latent features of the text and image information in a unified feature space, and then use the learned features to identify the fake news. Hence, the contributions of this paper are summarized as follows: \begin{itemize} \item We collect a high quality dataset and conduct an in-depth analysis of the text from multiple perspectives. \item Image information is shown to provide effective features for identifying fake news. \item A unified model is proposed to analyze the text and image information using convolutional neural networks. \item The model proposed in this paper is an effective way to recognize fake news from large volumes of online information. \end{itemize} In the rest of the paper, we first define the problem of fake news identification. Then we introduce the analysis of the fake news data. A unified model is proposed to illustrate how to model the explicit and latent features of the text and image information. The details of the experimental setup are described in the experiment part. At last, we compare our model with several popular methods to show its effectiveness. \section{Related Work} Deception detection has been a hot topic in the past few years. Deceptive information includes scientific fraud, fake news, false tweets, etc. Fake news detection is a subtopic in this area. Researchers approach the deception detection problem from two aspects: 1) linguistic approaches and 2) network-based approaches. \subsection{Linguistic approaches} Mihalcea and Strapparava \cite{mihalcea2009lie} started to use natural language processing techniques to solve this problem. Bing Liu et al.
\cite{hu2004mining} analyzed fake reviews on Amazon based on sentiment analysis, lexical features, content similarity, style similarity and semantic inconsistency. Hai et al. \cite{hai2016deceptive} proposed a semi-supervised learning method to detect deceptive text on crowdsourced datasets in 2016. Methods based on word analysis alone are not sufficient to identify deception, so many researchers focus on deeper language structures, such as the syntax tree. In this case, sentences are represented as a parse tree describing the syntactic structure, for example noun and verb phrases, which are in turn rewritten by their syntactic constituent parts \cite{feng2012syntactic}. \subsection{Network-based approaches} Another way to identify deception is to analyze the network structure and behaviors, which are important complementary features. With the development of knowledge graphs, fact checking based on the relationships among entities becomes very helpful. Ciampaglia et al. \cite{ciampaglia2015computational} proposed new `network effect' variables to derive the probabilities of news. Methods based on knowledge graph analysis can achieve 61\% to 95\% accuracy. Another promising research direction is exploiting social network behavior to identify deception. \subsection{Neural Network based approaches} Deep learning models are widely used in both the academic community and industry. In computer vision \cite{krizhevsky2012imagenet} and speech recognition \cite{graves2013speech}, the state-of-the-art methods are almost all deep neural networks. In the natural language processing (NLP) area, deep learning models are used to represent words as vectors. Researchers have then proposed many deep learning models based on word vectors for QA \cite{chen2015abc} and summarization \cite{kaikhah2004automatic}, etc.
Convolutional neural networks (CNNs) utilize filters to capture the local structures of the image, and they perform very well on computer vision tasks. Researchers have also found that CNNs are effective on many NLP tasks, for instance, semantic parsing \cite{yih2014semantic}, sentence modeling \cite{kalchbrenner2014convolutional}, and other traditional NLP tasks \cite{collobert2011natural}. \begin{figure}[ht] \begin{center} \includegraphics[width=0.5\textwidth]{figures/word_frequency1.png} \end{center} \caption{Word frequency in the titles of real and fake news. If the news has no title, we set the title as `notitle'. The words on the top-left are frequently used in fake news, while the words on the bottom-right are frequently used in real news. The `Top Fake' words are capital characters and some meaningless numbers that represent special characters, while the `Top Real' words contain many names and motion verbs, i.e., `who' and `what' --- two important factors in the five elements of news: when, where, what, why and who.} \label{fig:word_frequency} \end{figure} \section{Problem Definition} Given a set of $m$ news articles containing text and image information, we can represent the data as a set of text-image tuples $\mathcal{A} = \{(A_i^T, A_i^I)\}_{i=1}^m$. In the fake news detection problem, we want to predict whether the news articles in $\mathcal{A}$ are fake or not. We represent the label set as $\mathcal{Y} = \{[1,0], [0,1]\}$, where $[1,0]$ denotes real news while $[0,1]$ represents fake news. Meanwhile, based on each news article $(A_i^T, A_i^I) \in \mathcal{A}$, a set of features (including both explicit and latent features, to be introduced later in the Model section) can be extracted from the text and image information available in the article, represented as $\mathbf{X}_i^T$ and $\mathbf{X}_i^I$ respectively.
The objective of the \textit{fake news detection} problem is to build a model $f: \{(\mathbf{X}_i^T, \mathbf{X}_i^I)\}_{i=1}^m \to \mathcal{Y}$ to infer the potential labels of the news articles in $\mathcal{A}$. \section{Data Analysis} To extract findings from the raw data, a thorough investigation has been carried out to study the text and image information in news articles. There are some differences between real and fake news on the 2016 American presidential election. We investigate the text and image information from various perspectives, such as computational linguistics, sentiment analysis, psychological analysis and other image-related features. We show the quantitative information of the data in this section, which provides important clues for identifying fake news from a large amount of data. \subsection{Dataset} The dataset in this paper contains 20,015 news articles, i.e., 11,941 fake news and 8,074 real news articles. It is available online\footnote{https://drive.google.com/open?id=0B3e3qZpPtccsMFo5bk9Ib3VCc2c}. For fake news, it contains text and metadata scraped from more than 240 websites by Megan Risdal on Kaggle\footnote{https://www.kaggle.com/mrisdal/fake-news}. The real news is crawled from well-known authoritative news websites, i.e., the New York Times, Washington Post, etc. The dataset contains multiple types of information, such as the title, text, image, author and website. To reveal the intrinsic differences between real and fake news, we solely use the title, text and image information.
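The label convention defined above can be sketched as a small helper. The function names are ours; only the encoding itself ($[1,0]$ for real news, $[0,1]$ for fake news) comes from the problem definition.

```python
# Minimal sketch of the label encoding defined in the problem statement:
# [1, 0] denotes real news and [0, 1] denotes fake news.
def encode_label(is_fake):
    return [0, 1] if is_fake else [1, 0]

def decode_label(y):
    return "fake" if y == [0, 1] else "real"

labels = [encode_label(False), encode_label(True)]
```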
\begin{figure*}[ht] \centering \subfigure[b][The number of words in news.]{\label{fig:words} \includegraphics[width=0.23\textwidth]{figures/Num_of_words_of_news-eps-converted-to.pdf}} \subfigure[b][The average number of words in a sentence.]{\label{fig:avg_words}\includegraphics[width=0.23\textwidth]{figures/Avg_of_sentences_length_of_news-eps-converted-to.pdf}} \subfigure[b][Question mark in news.]{\label{fig:question_mark} \includegraphics[width=0.23\textwidth]{figures/question_mark-eps-converted-to.pdf}} \subfigure[b][Exclamation mark in news.]{\label{fig:exclamation} \includegraphics[width=0.23\textwidth]{figures/exclamation-eps-converted-to.pdf}} \subfigure[b][The exclusive words in news.]{\label{fig:exclusive} \includegraphics[width=0.23\textwidth]{figures/exclusive_words-eps-converted-to.pdf}} \subfigure[b][The negations in news.]{\label{fig:negations} \includegraphics[width=0.23\textwidth]{figures/negations-eps-converted-to.pdf}} \subfigure[b][FPP: First-person pronoun.]{\label{fig:1st} \includegraphics[width=0.23\textwidth]{figures/1_person_pronouns-eps-converted-to.pdf}} \subfigure[b][Motion verbs in news.]{\label{fig:motion_verbs} \includegraphics[width=0.23\textwidth]{figures/motion_words-eps-converted-to.pdf}} \caption{Analysis of the news text.} \end{figure*} \subsection{Text Analysis} Let us take the word frequency \cite{kessler2017scattertext} in the titles as an example to demonstrate the differences between real and fake news in Fig. \ref{fig:word_frequency}. If the news has no title, we set the title as `notitle'. The frequently observed words in the titles of fake news are \emph{notitle, IN, THE, CLINTON} and many meaningless numbers that represent special characters. We can draw some interesting findings from the figure. Firstly, many fake news articles have no titles; such fake news is widely spread as tweets with a few keywords and a hyperlink to the news on social networks. Secondly, there are more capital characters in fake news.
The purpose is to draw the readers' attention, while real news contains fewer capital letters and is written in a standard format. Thirdly, real news contains more detailed descriptions, for example, names (\emph{Jeb Bush, Mitch McConnell}, etc.) and motion verbs (\emph{left, claim, debate and poll}, etc.). \subsubsection{Computational Linguistics} \paragraph{Number of words and sentences} Although liars have some control over the content of their stories, their underlying state of mind may leak out through the style of language used to tell the story. The same is true for the people who write fake news. The data presented in the following paragraphs provide some insight into the linguistic manifestations of this state of mind \cite{hancock2007lying}. As shown in Fig. \ref{fig:words}, fake news has fewer words than real news on average: 4,360 words on average for real news versus 3,943 for fake news. Besides, the number of words in fake news is distributed over a wide range, which indicates that some fake news articles have very few words and some have plenty. The number of words is just one simple view for analyzing fake news. In addition, real news has more sentences than fake news on average: 84 sentences versus 69 for fake news. Based on the above analysis, we can derive the average number of words in a sentence for real and fake news, respectively. As shown in Fig. \ref{fig:avg_words}, the sentences of real news are shorter than those of fake news: real news has 51.9 words on average in a sentence, while the number is 57.1 for fake news. According to the box plot, the variance of real news is much smaller than that of fake news, and this phenomenon appears in almost all the box plots. The reason is that the editors of real news must write articles under certain rules of the press, including length, word selection, absence of grammatical errors, etc.
It indicates that most real news is written in a more standard and consistent way, whereas most people who write fake news do not have to follow these rules. \paragraph{Question mark, exclamation and capital letters} According to the statistics on the news text, real news has fewer question marks than fake news, as shown in Fig. \ref{fig:question_mark}. The reason may be that there are many rhetorical questions in fake news, which are used to consciously emphasize ideas and intensify sentiment. According to the analysis of the data, we find that both real and fake news have very few exclamation marks. However, the inner fence of the fake news box plot is much larger than that of real news, as shown in Fig. \ref{fig:exclamation}. An exclamation can turn a simple indicative or declarative sentence into a strong command or reflect an emotional outburst. Hence, fake news is inclined to use exclamations to fan specific emotions among the readers. Capital letters are also analyzed in real and fake news. The reason for capitalization in news is to draw readers' attention or to emphasize the idea expressed by the writers. According to the statistics, fake news has many more capital letters than real news. It indicates that fake news deceivers are good at using capital letters to attract readers' attention, draw them in to read the news, and make them believe it. \paragraph{Cognitive perspective} From the cognitive perspective, we investigate the exclusive words (e.g., `but', `without', `however') and negations (e.g., `no', `not') used in the news. Truth tellers use negations more frequently, as shown in Fig. \ref{fig:exclusive} and \ref{fig:negations}. The exclusive words in news show a similar pattern to the negations. The median number of negations in fake news is much smaller than that of real news.
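The explicit text statistics discussed above (word and sentence counts, question marks, exclamations, capital letters, and the lexical diversity measure introduced later) can be sketched as a single feature extractor. The paper does not specify its exact tokenization rules, so the regex-based splitting and the toy input below are our simplified assumptions.

```python
import re

# Simplified, illustrative sketch of the explicit text features described
# in the data analysis; the exact tokenization used in the paper is unknown.
def explicit_text_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "avg_words_per_sentence": len(words) / max(len(sentences), 1),
        "n_question_marks": text.count("?"),
        "n_exclamations": text.count("!"),
        "n_capitals": sum(c.isupper() for c in text),
        # type-token ratio, one common way to measure lexical diversity
        "lexical_diversity": len({w.lower() for w in words}) / max(len(words), 1),
    }

feats = explicit_text_features("BREAKING! Did FBI hide the emails? Nobody knows.")
```

A vector of such statistics per article is what the model later consumes as its explicit textual input.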
Deceivers must be more specific and precise when they use exclusive words and negations, to lower the likelihood of being caught in a contradiction. Hence, they use fewer exclusive words and negations in their writing. Truth tellers, in contrast, can exactly discuss what happened and what did not happen, because the writer of real news witnessed the event and knew all its details. Notably, individuals who use a higher number of ``exclusive'' words are generally healthier than those who do not use these words \cite{pennebaker1999linguistic}. \subsubsection{Psychology Perspective} From the psychology perspective, we also investigate the use of first-person pronouns (e.g., I, we, my) in real and fake news. Deceptive people often use language that minimizes references to themselves. A person who is lying tends not to use ``we'' and ``I'', that is, tends to avoid first-person pronouns. Instead of saying ``I didn't take your book,'' a liar might say ``That's not the kind of thing that anyone with integrity would do'' \cite{newman2003lying}. Similarly, as shown in Fig. \ref{fig:1st}, our result agrees with this psychological point of view: on average, fake news has fewer first-person pronouns. The second-person pronouns (e.g., you, yours) and third-person pronouns (e.g., he, she, it) are also tallied. We find that deceptive information can be characterized by the use of fewer first-person, fewer second-person and more third-person pronouns. Given space limitations, we only show the first-person pronoun figure. In addition, deceivers avoid discussing the details of the news event; hence, they use fewer motion verbs, as shown in Fig. \ref{fig:motion_verbs}. \subsubsection{Lexical Diversity} Lexical diversity is a measure of how many different words are used in a text, while lexical density provides a measure of the proportion of lexical items (i.e., nouns, verbs, adjectives and some adverbs) in the text. Richer news has higher lexical diversity.
According to the experimental results, the lexical diversity of real news is $2.2e\text{-}06$, which is larger than the $1.76e\text{-}06$ of fake news. \subsubsection{Sentiment Analysis} The sentiment \cite{liu2012sentiment} in real and fake news is quite different: real news is more positive than fake news. The reason is that deceivers may feel guilty or may not be confident about the topic. Under this tension and guilt, deceivers may exhibit more negative emotion \cite{markowitz2016linguistic,pennebaker1999linguistic}. The experimental results in Fig. \ref{fig:sentiment} agree with the above analysis. The standard deviation of fake news on negative sentiment is also larger than that of real news, which indicates that some fake news has very strong negative sentiment. \begin{figure}[ht] \centering \subfigure[b][The median sentiment values: positive and negative.]{\label{fig:median_pos_neg} \includegraphics[width=0.23\textwidth]{figures/median_pos_neg-eps-converted-to.pdf}} \subfigure[b][The standard deviation of sentiment values: positive and negative.]{\label{fig:sd_pos_neg} \includegraphics[width=0.23\textwidth]{figures/sd_pos_neg-eps-converted-to.pdf}} \caption{Sentiment analysis on real and fake news.} \label{fig:sentiment} \end{figure} \subsection{Image Analysis} We also analyze the properties of the images in the political news. According to our observations of the images in fake news, there are more faces in real news, while some fake news has irrelevant images, such as animals and scenes. The experimental results are consistent with this analysis: there are 0.366 faces on average in real news, while the number is 0.299 in fake news. In addition, real news has better-resolution images than fake news: real news images have $457 \times 277$ pixels on average, while fake news images have a resolution of $355\times 228$. \section{Model -- the architecture} In this section, we introduce the architecture of the TI-CNN model in detail.
Besides the explicit features, we innovatively utilize two parallel CNNs to extract latent features from both the textual and visual information. Explicit and latent features are then projected into the same feature space to form new representations of texts and images. Finally, we fuse the textual and visual representations together for fake news detection. As shown in Fig. \ref{fig:architecture}, the overall model contains two major branches, i.e., the text branch and the image branch. For each branch, taking textual or visual data as input, explicit and latent features are extracted for the final predictions. To explain the construction of TI-CNN, we introduce the model by answering the following questions: 1) How to extract the latent features from text? 2) How to combine the explicit and latent features? 3) How to deal with the text and image features together? 4) How to design the model with fewer parameters? 5) How to train and accelerate the training process? \renewcommand{\arraystretch}{0.7} \begin{table}[htbp] \centering \caption{Symbols in this paper.} \begin{tabular}{lll} \toprule \multicolumn{1}{c}{Parameter} & \multicolumn{1}{l}{Parameter Name} & \multicolumn{1}{c}{Dimension} \\ \midrule $\mathbf{X}^{Tl}_{i,j}$ & latent word vector $j$ in sample $i$ & $\mathbb{R}^k$ \\ $\mathbf{X}^{Tl}_{i,1:n}$ & sentence for sample $i$ & $\mathbb{R}^{n\times k}$ \\ $\mathbf{X}_i^{Te}$ & explicit text feature for sample $i$ & $\mathbb{R}^k$ \\ $\mathbf{X}_i^{Ie}$ & explicit image feature for sample $i$ & $\mathbb{R}^k$ \\ $\mathbf{X}_i^{Il}$ & latent image feature for sample $i$ & $\mathbb{R}^k$ \\ $\theta$ & weight for the word & $\mathbb{R}^{h \times k}$ \\ $\mathbb{Y}$ & label of news & $\mathbb{R}^{n\times 2}$ \\ $w$ & filter for texts & $\mathbb{R}^{h \times k}$ \\ $b$ & bias & $\mathbb{R}$ \\ $\mathbf{c}$ & feature map & $\mathbb{R}^{n-h+1}$ \\ $\hat{c}$ & the maximum value in feature map & $\mathbb{R}$ \\ $M$ & number of maps & $\mathbb{R}$ \\ $M_i$
& the $i$-th filter for images & $\mathbb{R}^{K_\alpha \times K_\beta}$ \\ $\tau$ & a tag of the label & $\mathbb{R}$ \\ $T$ & the set of tags of the label & $\mathbb{R}$ \\ $s_{\theta}(\mathbb{X})_\tau$ & the predicted probability & $[0,1]$ \\ \bottomrule \end{tabular} \label{tab:symbols}% \end{table}% \subsection{Text Branch} For the text branch, we utilize two types of features: textual explicit features $\mathbf{X}^{Te}$ and textual latent features $\mathbf{X}^{Tl}$. The textual explicit features are derived from the statistics of the news text, as mentioned in the data analysis part, such as the length of the news, the number of sentences, question marks, exclamations and capital letters, etc. The statistics of a single news article can be organized as a vector of fixed size. The vector is then transformed by a fully connected layer to form the textual explicit features. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.9\textwidth]{figures/framework33-eps-converted-to.pdf} \end{center} \caption{The architecture of the model. The rectangles in the last 5 layers represent the hidden dense layers. The dropout, batch normalization and flatten layers are not drawn for brevity. The details of the structure are shown in Table \ref{tab:hyperparameter}.} \label{fig:architecture} \end{figure*} The textual latent features in the model are based on a variant of CNN. Although CNNs are mainly used in computer vision tasks, such as image classification \cite{krizhevsky2012imagenet} and object recognition \cite{ren2015faster}, they also show notable performance on many Natural Language Processing (NLP) tasks \cite{kim2014convolutional,zeng2014relation}. With the convolutional approach, the neural network produces local features around each word of the sentence and then combines them using a max operation to create a fixed-size word-level representation, as shown in Fig. \ref{fig:architecture}.
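The convolve-then-max-pool idea just described can be illustrated with a toy sketch: slide a filter of height $h$ over a sequence of $k$-dimensional word vectors, apply a nonlinearity, and max-pool the resulting feature map. The embeddings, filter weights, and function names below are ours and purely illustrative.

```python
import math

# Toy sketch of the text-branch latent features: one filter of height h
# slides over n word vectors of dimension k, producing a feature map of
# length n - h + 1, which is then max-pooled to a single value.
def conv_feature_map(words, filt, bias, h):
    """words: list of n k-dim vectors; filt: flattened h*k filter weights."""
    n = len(words)
    feature_map = []
    for j in range(n - h + 1):
        window = [v for word in words[j:j + h] for v in word]
        z = sum(w * x for w, x in zip(filt, window)) + bias
        feature_map.append(math.tanh(z))  # nonlinear transformation f
    return feature_map

def max_pool(feature_map):
    return max(feature_map)  # keep only the strongest response

words = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]  # n=4, k=2
fmap = conv_feature_map(words, filt=[0.5, -0.5, 0.5, 0.5], bias=0.0, h=2)
c_hat = max_pool(fmap)
```

A real model applies many such filters in parallel, yielding one max-pooled value per filter; the pooled vector then feeds a fully connected layer.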
Therefore, we employ a CNN to model the textual latent features for fake news detection. Let the $j$-th word in news $i$ be denoted as $\mathbf{x}_{i,j} \in \mathbb{R}^{k}$, a $k$-dimensional word embedding vector. Suppose the maximum length of the news is $n$; news with fewer than $n$ words are padded to sequences of length $n$. Hence, the overall news can be written as \begin{equation} \mathbf{X}_{i,1:n}^{Tl}=\mathbf{x}_{i,1} \oplus \mathbf{x}_{i,2} \oplus ... \oplus \mathbf{x}_{i,n}. \end{equation} That is, the news $\mathbf{X}_{i,1:n}^{Tl}$ is the concatenation of its words, so each news article can be represented as a matrix. Then we use convolutional filters $\mathbf{w} \in \mathbb{R}^{h\times k}$ to construct new features. For instance, a window of words $\mathbf{X}_{i,j:j+h-1}^{Tl}$ produces a feature $c_j$ as follows: \begin{equation} c_j=f(\mathbf{w} \cdot \mathbf{X}_{i,j:j+h-1}^{Tl}+b), \end{equation} where $b\in \mathbb{R}$ is the bias, $\cdot$ is the convolutional operation, and $f$ is a non-linear transformation, such as the sigmoid or hyperbolic tangent function. A feature map is generated by applying the filter to every possible window of words in the news: \begin{equation} \mathbf{c}=[c_1,c_2,...,c_{n-h+1}], \end{equation} where $\mathbf{c}\in \mathbb{R}^{n-h+1}$. A max-pooling layer \cite{nagi2011max} is applied to take the maximum of the feature map $\mathbf{c}$, denoted as $\hat{c}=\max\{\mathbf{c}\}$. The max-pooling layer greatly improves the robustness of the model by retaining only the most important convolutional results for fake news detection. The pooling results are fed into a fully connected layer to obtain the final textual latent features for predicting news labels. \subsection{Image Branch} Similar to the text branch, we use two types of features: visual explicit features $\mathbf{X}^{Ie}$ and visual latent features $\mathbf{X}^{Il}$. As shown in Fig.
\ref{fig:architecture}, to obtain the visual explicit features, we first extract the resolution of the image and the number of faces in the image to form a feature vector, and then transform the vector into our visual explicit features with a fully connected layer. Although the visual explicit features convey information about the images contained in the news, they are hand-crafted and not data-driven. To learn more powerful features directly from the raw images contained in the news, we employ another CNN. \subsubsection{Convolutional layer} In the convolutional layer, filters are replicated across the entire visual field and share the same parameterization, forming a feature map. In this way, the network has the nice property of translation invariance. Suppose the convolutional layer has $M$ maps of size $(M_\alpha,M_\beta)$. A filter of size $(K_\alpha,K_\beta)$ is shifted over all regions of the image. Hence, the size of the output map is as follows: \begin{equation} M_{\alpha}^{n}=M_{\alpha}^{n-1} - K_{\alpha}^{n}+1 \end{equation} \begin{equation} M_{\beta}^{n}=M_{\beta}^{n-1} - K_{\beta}^{n}+1 \end{equation} \subsubsection{Max-pooling layer} A max-pooling layer \cite{nagi2011max} is connected to the convolutional layer; it takes the maximum activation over rectangular regions of size $(K_\alpha,K_\beta)$ of the convolutional output. Max-pooling enables position invariance over larger local regions and downsamples the input image by a factor of $K_\alpha$ and $K_\beta$ along each direction, which helps the model select invariant features, converge faster and generalize significantly better. A theoretical analysis of feature pooling in general and max-pooling in particular is given by \cite{boureau2010theoretical}. \begin{table*}[ht] \centering \caption{Two fake news articles corresponding to Fig.
\ref{fig:amish} and \ref{fig:hillary}.} \label{tab:case_study} \resizebox{\textwidth}{!}{% \begin{tabular}{|>{\arraybackslash}M{2.5cm}|>{\arraybackslash}m{14cm}|>{\centering\arraybackslash}M{0.5cm}|} \hline \multicolumn{1}{|>{\centering\arraybackslash}p{23mm}|}{\thead{\textbf{Title}}} & \multicolumn{1}{>{\centering\arraybackslash}p{8cm}|}{\thead{\textbf{News text}}} & \multicolumn{1}{>{\centering\arraybackslash}p{0.2cm}|}{\textbf{Type}} \\ \hline The Amish Brotherhood have endorsed Donald Trump for president. & The Amish, who are direct descendants of the protestant reformation sect known as the Anabaptists, have typically stayed out of politics in the past. As a general rule, they don't vote, serve in the military, or engage in any other displays of patriotism. This year, however, the AAB has said that it is imperative that they get involved in the democratic process. & Fake \\ \hline Wikileaks Gives Hillary An Ultimatum: QUIT, Or We Dump Something Life-Destroying & On Sunday, Wikileaks gave Hillary Clinton less than a 24-hour window to drop out of the race or they will dump something that will destroy her ``completely''. Recently, Julian Assange confirmed that WikiLeaks was not working with the Russian government, but in their pursuit of justice, they are obligated to release anything that they can to bring light to a corrupt system -- and who could possibly be more corrupt than Crooked Hillary? & Fake \\ \hline \end{tabular}% } \end{table*} \subsection{Rectified Linear Neuron} The sigmoid and tanh activation functions may cause gradient explosion or vanishing problems \cite{pascanu2013difficulty} in convolutional neural networks. Hence, we add the ReLU activation to the image branch to remedy the vanishing gradient problem. \begin{equation} y = \max\left(0,\sum_{i=1}^{k}x_{i}\theta_{i}+b\right) \end{equation} ReLUs can also improve neural networks by speeding up training.
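The valid-convolution output-size formulas and the ReLU activation above can be checked with a short sketch; the layer sizes used here are toy values, not the paper's configuration.

```python
# Check of the output-map size formulas for a valid convolution,
# M_out = M_in - K + 1 along each axis, plus the ReLU and its gradient.
# The 32x32 input and 5x5 filter are toy values, not TI-CNN's settings.
def conv_output_size(m_alpha, m_beta, k_alpha, k_beta):
    return (m_alpha - k_alpha + 1, m_beta - k_beta + 1)

def relu(x):
    return max(0.0, x)  # negative inputs are clamped to zero

def relu_grad(x):
    return 0.0 if x < 0 else 1.0  # gradient is simply 0 or 1

out = conv_output_size(32, 32, 5, 5)
```

The trivial 0-or-1 gradient is exactly why ReLU training is cheap compared with sigmoid or tanh units.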
The gradient computation is very simple (either 0 or 1 depending on the sign of $x$). Also, the computational step of a ReLU is easy: any negative elements are set to 0.0 -- no exponentials, no multiplication or division operations. Logistic and hyperbolic tangent networks suffer from the vanishing gradient problem, where the gradient essentially becomes 0 after a certain amount of training (because of the two horizontal asymptotes) and stops all learning in that section of the network. ReLU units have a zero gradient only on one side, which is empirically superior. \subsection{Regularization} As shown in Table \ref{tab:hyperparameter}, we employ dropout \cite{srivastava2014dropout} as well as $l_2$-norm regularization to prevent overfitting. Dropout sets some of the elements in the weight vectors of the hidden units to zero with a probability $p$ during the forward and backward propagation. For instance, suppose we have a dense layer $z=[z_1,...,z_m]$ and a masking vector $r$ whose elements are initially all zero. During training, dropout sets each element of $r$ to 1 with probability $p$. Let the output of the dense layer be $y$. Then the dropout operation can be formulated as \begin{equation} y=\theta \cdot (z\circ r)+b, \end{equation} where $\theta$ is the weight vector and $\circ$ is the element-wise multiplication operator. At test time, the deleted neurons are restored, and the weights are scaled by $p$ such that $\hat{\theta} =p\theta$. The scaled weights $\hat{\theta}$ are used to predict the test samples. This procedure is applied at every training iteration, which greatly improves the generalization ability of the model. We also use early stopping \cite{prechelt1998automatic} to avoid overfitting. It can be considered another type of regularization method (like $l_1$/$l_2$ weight decay and dropout). \subsection{Network Training} We train our neural network by minimizing the negative log likelihood on the training dataset $D$.
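The train/test behavior of dropout described in the regularization subsection can be sketched as follows (NumPy; a fixed mask $r$ is used here instead of Bernoulli sampling so that the output is deterministic, and all weights are set to 1 for illustration):

```python
import numpy as np

def dropout_forward(z, theta, b, r):
    # Training-time dropout: mask the dense-layer output z with r (kept units
    # have r_i = 1), then apply the weights: y = theta . (z o r) + b
    return float(np.dot(theta, z * r) + b)

def dropout_test_forward(z, theta, b, p):
    # Test time: all units are kept and the weights are rescaled by the keep
    # probability p, i.e. theta_hat = p * theta
    return float(np.dot(p * theta, z) + b)

z = np.array([1.0, 2.0, 3.0, 4.0])
theta = np.ones(4)
r = np.array([1.0, 0.0, 1.0, 0.0])   # fixed mask; in practice r ~ Bernoulli(p)
print(dropout_forward(z, theta, 0.0, r))         # 4.0 (only z_1 and z_3 survive)
print(dropout_test_forward(z, theta, 0.0, 0.5))  # 5.0
```

The test-time rescaling by $p$ keeps the expected pre-activation consistent between training and inference.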
To identify the label of a news article $\mathbb{X}$, the network with parameter $\theta$ computes a score $s_{\theta}(\mathbb{X})_{\tau}$ for each tag $\tau \in T$. A softmax function over the scores of all tags $\tau \in T$ then transforms these values into the conditional probability distribution of the labels: \begin{equation} p(\tau|\mathbb{X},\theta)=\frac{e^{s_{\theta}(\mathbb{X})_{\tau}}}{\sum_{\forall i\in T}{e^{s_{\theta}(\mathbb{X})_i}}} \label{equ:equation} \end{equation} The negative log likelihood of Equation \ref{equ:equation} is \begin{equation} E(\theta)=-\ln p(\tau|\mathbb{X},\theta)=-s_{\theta}(\mathbb{X})_{\tau}+\log\left (\sum_{\forall i\in T}e^{s_{\theta}(\mathbb{X})_{i}}\right) \end{equation} We use RMSprop \cite{hinton2012neural} to minimize the loss function with respect to the parameter $\theta$: \begin{equation} \hat{\theta} = \arg\min_{\theta}\sum_{(\mathbb{X},\mathbb{Y})\in D}{-\log\ p(\mathbb{Y}|\mathbb{X},\theta)} \end{equation} where $\mathbb{X}$ is the input data and $\mathbb{Y}$ is the label of the news. We use the back-propagation algorithm \cite{hecht1988theory} to compute the gradients of the network. With fine-tuned parameters, the loss converges to a good local minimum within a few epochs. \section{Experiments} \subsection{Case study} \label{ssec:case_study} A case study of the fake news is given in this section. The two fake news articles in Table \ref{tab:case_study} correspond to Fig. \ref{fig:amish} and \ref{fig:hillary}. The first is an article reporting that `the American Amish Brotherhood endorsed Donald Trump for President'. However, the website is a fake CNN page. The image in this fake news article can be easily found online, and it is not very relevant to the news text\footnote{http://cnn.com.de/news/amish-commit-vote-donald-trump-now-lock-presidency/}. The second fake news article -- `Wikileaks gave Hillary Clinton less than a 24-hour window to drop out of the race' -- is actually not from Wikileaks.
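The softmax and negative log-likelihood used for training can be illustrated with a minimal sketch (plain Python; the two-tag scores are hypothetical values, and the usual max-subtraction is added for numerical stability):

```python
import math

def softmax(scores):
    # p(tau | X) = exp(s_tau) / sum_i exp(s_i)
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def nll(scores, tau):
    # E = -ln p(tau | X) = -s_tau + ln(sum_i exp(s_i))
    return -math.log(softmax(scores)[tau])

scores = [2.0, 2.0]       # two tags (e.g. real / fake) with equal scores
print(softmax(scores))    # [0.5, 0.5]
print(nll(scores, 1))     # ln 2, about 0.6931
```

Summing this per-sample loss over the labeled training set gives the objective that RMSprop minimizes.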
Besides, the composite image\footnote{http://thelastlineofdefense.org/wikileaks-gives-hillary-an-ultimatum-quit-or-we-dump-something-life-destroying/} in the news is of low quality. \subsection{Experimental Setup} We use 80\% of the data for training, 10\% for validation and 10\% for testing. All the experiments are run at least 10 times separately. The textual explicit subbranch and visual explicit subbranch are connected with a dense layer. The parameters in these subbranches can be learned easily by the back-propagation algorithm. Thus, most of the parameters that need to be tuned lie in the textual latent subbranch and the visual latent subbranch. The parameters are set as follows. \subsubsection{Text branch} For the textual latent subbranch, the embedding dimension of word2vec is set to 100. The details of how the parameters are selected are presented in the sensitivity analysis section. The context window of word2vec is set to 10 words. The filter size in the convolutional neural network is $(3,3)$, and there are 10 filters in total. Two dropout layers are adopted to improve the model's generalization ability. For the textual explicit subbranch, we first add a dense layer with 100 neurons, and then a batch normalization layer to normalize the activations of the previous layer at each batch, i.e., it applies a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1. The outputs of the textual explicit subbranch and the textual latent subbranch are combined by summation. \renewcommand{\arraystretch}{0.55} \begin{table}[] \centering \caption{Model specifications. BN: Batch Normalization, ReLU: Rectified Linear Activation Function, Conv: Convolutional Layer on 2D data, Conv1D: Convolutional Layer on 1D data, Dense: Dense Layer, Emb: Embedding layer, MaxPo: Max-Pooling on 2D data, MaxPo1D: Max-Pooling on 1D data.
There are two kinds of dropout layers, i.e., $D=(D_\alpha,D_\beta)$, where $D_\alpha = 0.5$ and $D_\beta = 0.8$.} \label{tab:hyperparameter} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{Text Branch} & \multicolumn{2}{c|}{Image Branch} \\ \hline \begin{tabular}[c]{@{}c@{}}Textual \\ Explicit\end{tabular} & \begin{tabular}[c]{@{}c@{}}Textual \\ Latent\end{tabular} & \begin{tabular}[c]{@{}c@{}}Visual \\ Latent\end{tabular} & \begin{tabular}[c]{@{}c@{}}Visual \\ Explicit\end{tabular} \\ \hline \multirow{3}{*}{Input 31$\times$1} & Emb 1000$\times$100 & Input 50$\times$50$\times$3 & \multirow{3}{*}{Input 4$\times$1} \\ \cline{2-3} & \multirow{2}{*}{Dropout $D_\alpha$} & (2$\times$2) Conv(32) & \\ \cline{3-3} & & ReLU & \\ \hline \multirow{4}{*}{Dense 128} & Emb 1000$\times$100 & Dropout $D_\beta$ & \multirow{4}{*}{Dense 128} \\ \cline{2-3} & (3,3) Conv1D(10) & (2$\times$2) MaxPo & \\ \cline{2-3} & 2 MaxPo1D & (2$\times$2) Conv(32) & \\ \cline{2-3} & Flatten & ReLU & \\ \hline \multirow{6}{*}{BN} & \multirow{2}{*}{Dense 128} & Dropout $D_\beta$ & \multirow{6}{*}{BN} \\ \cline{3-3} & & (2$\times$2) MaxPo & \\ \cline{2-3} & \multirow{2}{*}{BN} & (2$\times$2) Conv(32) & \\ \cline{3-3} & & ReLU & \\ \cline{2-3} & \multirow{2}{*}{ReLU} & Dropout $D_\beta$ & \\ \cline{3-3} & & (2$\times$2) MaxPo & \\ \hline \multirow{4}{*}{ReLU} & \multirow{4}{*}{Dropout $D_\beta$ } & Flatten & \multirow{4}{*}{ReLU} \\ \cline{3-3} & & Dense 128 & \\ \cline{3-3} & & BN & \\ \cline{3-3} & & ReLU & \\ \hline \multicolumn{2}{|c|}{Merge} & \multicolumn{2}{c|}{Merge} \\ \hline \multicolumn{4}{|c|}{Merge} \\ \hline \multicolumn{4}{|c|}{ReLU} \\ \hline \multicolumn{4}{|c|}{Dense 128} \\ \hline \multicolumn{4}{|c|}{BN} \\ \hline \multicolumn{4}{|c|}{Sigmoid} \\ \hline \end{tabular} \end{table} \subsubsection{Image branch} For the visual latent subbranch, all the images are reshaped to size $(50\times 50)$. Three convolutional layers are added to the network hierarchically.
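The spatial dimensions through these stacked layers can be traced with the convolution and pooling size formulas given earlier. The sketch below assumes "valid" $(3,3)$ convolutions and $(2,2)$ max-pooling with floor division; the exact sizes depend on the padding convention, which the text does not state:

```python
def conv_out(m, k):
    # Valid convolution: M^n = M^{n-1} - K^n + 1
    return m - k + 1

def pool_out(m, k):
    # Max-pooling downsamples by a factor of k (floor division)
    return m // k

# Trace one spatial dimension of a 50x50 input through three
# conv (3x3) + max-pool (2x2) stages of the visual latent subbranch.
size = 50
trace = []
for _ in range(3):
    size = conv_out(size, 3)
    size = pool_out(size, 2)
    trace.append(size)
print(trace)  # [24, 11, 4]
```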
The filter size is set to $(3,3)$, and there are 32 filters for each convolutional layer, followed by a ReLU activation layer. A max-pooling layer with pool size $(2,2)$ is connected to each convolutional layer to reduce the risk of overfitting. Finally, a flatten, batch normalization and activation layer are added to the model to extract the latent features from the images. For the visual explicit subbranch, the input of the explicit features is connected to a dense layer with 100 neurons, and then a batch normalization and activation layer are added. The outputs of the image convolutional neural network and the visual explicit subbranch are combined by summation. We concatenate the outputs of the text and image branches. An activation layer and a dense layer transform the output into two dimensions. The labels of the news are given by the last sigmoid activation layer. In Table \ref{tab:hyperparameter}, we show the parameter settings of the TI-CNN model. The total number of parameters is 7,509,980, and the number of trainable parameters is 7,509,176. \subsection{Experimental Results} We compare our model with several competitive baseline methods in Table \ref{tab:baseline}. With image information only, the model cannot identify the fake news well, which indicates that image information alone is insufficient to identify fake news. With text information, a traditional machine learning method --- logistic regression \cite{hosmer2013applied} --- is employed to detect the fake news. However, logistic regression fails to identify the fake news using the text information. The reason is that its decision boundary is linear, while the raw data is not linearly separable. GRU \cite{chung2014empirical} and long short-term memory \cite{hochreiter1997long} models with text information are inefficient with very long sequences, and the models with input length 1000 perform worse. Hence, we take input length 400 as the baseline setting.
With text and image information, TI-CNN outperforms all the baseline methods significantly. \renewcommand{\arraystretch}{1} \begin{table}[h!tbp] \centering \caption{Experimental results for the baseline methods and TI-CNN. The number after the name of a model is the maximum input length for textual information. For news texts shorter than 1,000 words, the sequence is padded with $0$s.} \begin{tabular}{cccc} \toprule \thead{ \textbf{Method}} & \thead{ \textbf{Precision}} & \thead{ \textbf{Recall}} & \thead{ \textbf{F1-measure}} \\ \hline \textbf{CNN-image} & 0.5387 & 0.4215 & 0.4729 \\ \textbf{LR-text-1000} & 0.5703 & 0.4114 & 0.4780 \\ \textbf{CNN-text-1000} & 0.8722 & 0.9079 & 0.8897 \\ \textbf{LSTM-text-400} & 0.9146 & 0.8704 & 0.8920 \\ \textbf{GRU-text-400} & 0.8875 & 0.8643 & 0.8758 \\ \textbf{TI-CNN-1000} & \textbf{\ 0.9220 } & \textbf{\ 0.9277 } & \textbf{\ 0.9210 } \\ \bottomrule \end{tabular}% \label{tab:baseline}% \end{table}% \subsection{Sensitivity Analysis} In this section, we study the effect of several parameters of the proposed model: the word embedding dimension, the batch size, the hidden layer dimension, the dropout probabilities and the filter size. \begin{figure*}[ht] \subfigure[b][Word embedding dimension and F1-measure.]{\label{fig:word_embedding} \includegraphics[width=0.3\textwidth]{figures/word_embedding.jpeg}} \subfigure[b][Batch size and F1-measure.]{\label{fig:batch_size} \includegraphics[width=0.3\textwidth]{figures/batch_size.jpeg}} \subfigure[b][Hidden layer dimension and F1-measure.]{\label{fig:hidden_dimension} \includegraphics[width=0.3\textwidth]{figures/hidden_dimension.jpeg}} \caption{Word embedding dimension, batch size and the performance of the model.} \label{fig:word_embedding_and_batch_size} \end{figure*} \paragraph{Word embedding dimension} In the text branch, we exploit a 3-layer neural network to learn the word embeddings. The learned word vectors can have different dimensions, ranging from 50 to 350.
In Fig. \ref{fig:word_embedding}, we plot the relation between the word embedding dimension and the performance of the model. As shown in Fig. \ref{fig:word_embedding}, we find that the precision, recall and F1-measure increase as the word embedding dimension goes up from 50 to 100. However, the precision and F1-measure decrease from 100 to 350, while the recall of the model keeps growing with the increase of the word embedding dimension. We select 100 as the word embedding dimension because the precision, recall and F1-measure are balanced there. For fake news detection in real-world applications, a model with high recall is also a good choice. The reason is that publishers can use a high-recall model to collect all the suspected fake news at the beginning, and the fake news can then be identified by manual inspection. \begin{figure}[h!t] \subfigure[b][Dropout probabilities ($D_\alpha, D_\beta$) and the performance of the model.]{\label{fig:drop_out} \includegraphics[width=0.4\textwidth]{figures/drop_out.png}} \subfigure[b][Filter size and the performance of the model.]{\label{fig:filter_size} \includegraphics[width=0.4\textwidth]{figures/filter_size.png}} \caption{Dropout probabilities ($D_\alpha, D_\beta$), filter size and the performance of the model.} \label{fig:dropout_and_filter_size} \end{figure} \paragraph{Batch size} The batch size defines the number of samples that will be propagated through the network at a time. The higher the batch size, the more memory space the program will need. The lower the batch size, the less time the training process will take. The relation between batch size and the performance of the model is shown in Fig. \ref{fig:batch_size}. The best choices for batch size are 32 and 64. The F1-measure first goes up as the batch size increases from 8 to 32, and then drops as the batch size increases from 32 to 128. For batch size 8, it takes 32 seconds to train the model on each epoch.
For batch size 128, it costs more than 10 minutes to train the model on each epoch. \paragraph{Hidden layer dimension} As shown in Fig. \ref{fig:architecture}, there are many hidden dense layers in the model. Deciding the number of neurons in the hidden layers is a very important part of deciding the overall neural network architecture. Though these layers do not directly interact with the external environment, they have a tremendous influence on the final output. Using too few neurons in the hidden layers will result in underfitting, while using too many neurons can also cause several problems, so a compromise must be reached. As shown in Fig. \ref{fig:hidden_dimension}, we find that 128 is the best choice for the hidden layer dimension. The performance first goes up as the hidden layer dimension increases from 8 to 128. However, when the dimension of the hidden layer reaches 256, the performance of the model drops due to overfitting. \paragraph{Dropout probability and filter size} We analyze the dropout probabilities shown in Table \ref{tab:hyperparameter}. $D_\alpha$ in Fig. \ref{fig:drop_out} is the dropout layer connected to the text embedding layer, while $D_\beta$ is used in both the text and image branches. We use grid search to choose the dropout probabilities. The model performs well when $D_\alpha$ is in the range $[0.1,0.5]$ and $D_\beta$ is in the range $[0.1,0.8]$. In this paper, we set the dropout probabilities to $(0.5,0.8)$, which improves the model's generalization ability and accelerates the training process. The filter size of the 1-dimensional convolutional layer in the textual latent subbranch is also a key factor in determining the performance of the model. According to \cite{kim2014convolutional}, the model prefers small filter sizes for text information. This is consistent with the experimental results in Fig. \ref{fig:filter_size}.
When the filter size is set to $(3,3)$, the F1-measure of the model is 0.92--0.93. \section{Conclusions and Future Work} The spread of fake news has raised concerns all over the world recently. Fake political news may have severe consequences, so identifying fake news is of growing importance. In this paper, we propose a unified model, TI-CNN, which combines the text and image information with the corresponding explicit and latent features. The proposed model is easily extensible and can readily absorb other features of the news. Besides, the convolutional architecture allows the model to see the entire input at once, and it can be trained much faster than LSTMs and many other RNN models. We conduct experiments on a dataset collected before the presidential election. The experimental results show that TI-CNN can successfully identify fake news based on the explicit features and the latent features learned by the convolutional neurons. The dataset in this paper focuses on news about the American presidential election. We will crawl more data about the French national elections to further investigate the differences between real and fake news in other languages. It is also a promising direction to identify fake news using social network information, such as the social network structure and the users' behaviors. In addition, the relevance between the headline and the news text is a very interesting research topic, which is useful for identifying fake news. With the development of Generative Adversarial Networks (GANs) \cite{goodfellow2014generative,radford2015unsupervised}, captions can be generated for images, which provides a novel way to evaluate the relevance between the image and the news text. \bibliographystyle{plain}
\section{Introduction} \IEEEPARstart{M}{assive} multiple-input multiple-output (MIMO) is the term used to describe the practice of deploying a large number of antennas at the base station (BS), which is considered a key technology for 5G \cite{5g2017_JSAC}. Although the benefits of massive MIMO are by now well understood \cite{mMIMO book}, the fundamental bottleneck for massive MIMO deployment in a multi-cell scenario is pilot contamination, i.e., degradation of the uplink channel state information (CSI) due to multiple user equipments (UEs) transmitting non-orthogonal training signals on the same set of resources \cite{Marzetta mMIMO}. In addition, with the emergence of massive machine type communications (MTCs) with typically small data bursts, there is a need to decrease the signaling overhead associated with the CSI acquisition \cite{mMTC mag}, thus resulting in pilot contamination issues also in a single-cell scenario. It is therefore of critical importance to come up with designs that balance the conflicting requirements of accurate CSI and low training overhead for systems with a massive number of antennas, UEs, and bandwidth. \subsection{Related Work} The topic of (optimal) training design and channel estimation for the single-UE case has been extensively studied for multi-antenna and/or wideband (OFDM) channels, both from an estimation mean squared error (MSE) as well as a capacity perspective (see, e.g., \cite{Giannakis optimal training OFDM,Leus optimal training MIMO OFDM,Viswanathan optimal training,Hassibi how much training}). This line of work on pilot-aided system design was based on the assumption of a rich scattering propagation environment, effectively treating the channel responses among different antennas as independent. This results in pilot designs having a training overhead that is proportional to the product of the number of antenna elements and the system bandwidth.
Application of these approaches in the multiuser setting may be unacceptable due to limited resources that cannot allow for orthogonal pilot transmissions, resulting in pilot contamination effects \cite{Marzetta mMIMO,pilot contamination survey paper}. The key towards reducing the training overhead is the observation that the wireless channel is fundamentally \emph{sparse}, i.e., a signal arrives at the receiver via a limited number of distinct (resolvable) paths \cite{Sayeed MIMO channel model}. This propagation behavior has been experimentally observed to hold true with large carrier frequencies (beyond 2 GHz) and/or with large antenna arrays (i.e., massive MIMO) \cite{cost2100}. Therefore, posing the channel estimation problem as that of identifying the channel path properties (gain, delay, angle) immediately implies an improvement of the CSI acquisition procedure over the conventional approaches, either in terms of performance (MSE) or training overhead, as the number of unknowns to be estimated (significantly) decreases. Earlier works exploiting this channel sparsity for estimation purposes (e.g., \cite{Letaief parametric channel est}) utilized traditional tools from the fields of array processing and harmonic retrieval \cite{Stoica book}; however, the focus was only on algorithmic and performance aspects, and the issue of training overhead minimization was ignored. The recent advent of the field of compressive sensing (CS) \cite{CS signal process. mag}, which considers the problem of solving an under-determined linear system under the assumption that the vector to be estimated is sparse, has provided a new set of tools towards low-overhead sparse channel estimation \cite{CS for channel est Mag}.
Considering the problem of wideband massive MIMO channel estimation, by reformulating it in a format compatible with the one considered in CS, a few recent publications have proposed CS-inspired channel estimation algorithms, demonstrating that excellent performance is indeed possible with low training overhead \cite{Swindlehurst2016_TSP,Heath JSAC 2017}. However, these performance results are only provided by means of numerical simulations, with very limited (if any) analytical insights on the \emph{training overhead required to achieve a certain performance}. A different approach is considered in \cite{Haghighatshoar2017_TWC}, where it is the (sparse) covariance matrix of the channel that is estimated by a CS approach, with the resulting estimate used to perform linear minimum mean squared error (LMMSE) channel estimation. However, this approach requires the observation of multiple, independent channel snapshots, which might not be possible under certain scenarios (e.g., short-length MTCs). Intuitively, one expects that the utilization of multiple antennas at the BS can aid in reducing the bandwidth dedicated to training signals. However, to the best of our knowledge, there are no analytical results available that confirm this intuition, even though CS theory provides numerous rigorous answers regarding the number of measurements required to achieve good estimation performance \cite{MathIntroToCS}. {This lack of analysis in the considered setup is mainly due to the so-called sensing matrix of the corresponding CS problem formulation having a Kronecker-product structure \cite{Heath how many measurements}. Even though Kronecker-product sensing matrices have been explicitly investigated in the CS literature \cite{Jokar Kronecker, Duarte Kronecker}, the available results suggest a required training overhead that is overly pessimistic (cf.
discussion of Theorem 9).} \subsection{Contributions} In this paper, a setup with a single BS equipped with a uniform linear array (ULA) serving multiple single-antenna uplink UEs is considered, with the goal of proposing efficient channel estimation algorithms as well as providing rigorous analytical insights on the overhead requirements. {Under the assumption of so-called on-grid channel parameters, which is reasonable for an asymptotically large number of antennas and bandwidth, a key observation is that the channel estimation problem can be formulated as the CS estimation of a vector that is not simply sparse but \emph{hierarchically sparse} \cite{C-HiLasso, Bockelmann, Roth2016_TSP,RothEtAl2018_ISIT}. In particular, the positions of its non-zero elements cannot be arbitrary but are subject to constraints implied by the physical channel properties. This is a critical property that is exploited in the following.} The main contributions of the paper are summarized as follows. \begin{itemize} \item Two novel, low-complexity channel estimation algorithms are proposed that explicitly take into account the hierarchical sparsity property, which was ignored in the previous literature. The algorithm description is provided for an arbitrary pilot sequence design, where multiple UEs utilize the same subcarriers of a single OFDM symbol for training purposes. \item The notion of the hierarchical restricted isometry property (HiRIP) is introduced, which can be considered a specialization of the standard RIP notion \cite{MathIntroToCS} to the setting of hierarchically sparse vectors. Rigorous guarantees for reliable, i.e., bounded-error, channel estimation by the proposed algorithms are provided based on the so-called HiRIP constant of the Kronecker-product type sensing matrix of the corresponding CS estimation problem.
\item The above characterization provides a concrete design goal for the pilot sequence design, namely, it should be such that the HiRIP constant of the sensing matrix is sufficiently small. Towards this, a design based on phase-shifted UE pilot sequences is proposed, which allows for a rigorous characterization of the scaling of the number of pilot subcarriers and the number of observed antennas required to achieve reliable channel estimation. The analysis highlights the benefit of using multiple antennas in the sense of allowing for a reduced pilot overhead compared to the single-antenna case. Even more importantly, for a sufficiently large number of antennas, the required pilot overhead is independent of the number of (active) UEs and the number of channel paths per UE. These conclusions are verified by numerical simulations, which also demonstrate the superior performance of the proposed algorithms compared to standard CS algorithms of comparable complexity that ignore hierarchical sparsity. The latter require a significantly larger minimum pilot overhead to achieve reasonable performance, which also increases with the number of channel paths per UE. \item The cases of jointly processing multiple training OFDM symbols as well as channels with off-grid parameters are also discussed, as both can be naturally accommodated by the proposed framework. Simulations show that in the latter case, although the mismatch of assuming on-grid channel parameters by the algorithm results in a performance degradation, performance remains significantly better in terms of required overhead compared to standard CS algorithms of comparable complexity as well as the conventional linear minimum mean squared error (LMMSE) estimator that ignores channel sparsity altogether. \end{itemize} \subsection{Notation} Vectors and matrices will be denoted by lower and upper case bold letters, respectively. All vectors are column vectors.
The $(n,m)$ element of $\mathbf{X}\in\mathbb{C}^{N\times M}$ is denoted by $[\mathbf{X}]_{n,m},n\in[N],m\in[M],$ with $[N]\triangleq\{0,1,\ldots,N-1\}$. $(\cdot)^{*},(\cdot)^{T},(\cdot)^{H}$ denote complex conjugate, transpose, and Hermitian operation, respectively. $\lVert\mathbf{X}\rVert\triangleq\sqrt{\text{tr}\{\mathbf{X}^{H}\mathbf{X}\}}$ is the Frobenius norm (Euclidean norm if $\mathbf{X}$ is a vector). The cardinality of a set $\mathcal{A}$ is denoted by $|\mathcal{A}|$. $\mathbf{X}_{\mathcal{A}}$ ($\mathbf{x}_{\mathcal{A}}$) denotes the matrix (vector) obtained either by extracting the rows (elements) of $\mathbf{X}$ ($\mathbf{x}$) enumerated by $\mathcal{A}\subseteq[N]$ or by setting the rows (elements) of $\mathbf{X}$ ($\mathbf{x}$) that do not belong to $\mathcal{A}$ equal to zero (the case will be clear from the context). The $N\times N$ identity matrix is denoted by $\mathbf{I}_{N}$ and $\text{diag}(\mathbf{x})$ denotes the diagonal matrix with $\mathbf{x}$ on its diagonal. $\mathbf{F}_{N,M}$ denotes the matrix obtained by the first $M\leq N$ columns of the $N\times N$ DFT matrix, i.e., $[\mathbf{F}_{N,M}]_{n,m}\triangleq e^{-j2\pi mn/N},n\in[N],m\in[M]$. The vector resulting from stacking the columns of a matrix $\mathbf{X}$ is denoted by $\text{vec}(\mathbf{X})$. $\text{supp}(\mathbf{x})\subseteq[N]$ denotes the set of non-zero elements (support) of $\mathbf{x}\in\mathbb{C}^{N}$. $\mathbb{C}^{N_{1}\cdot N_{2}\cdots N_{\ell}}$ denotes the space of complex-valued, multilevel block vectors consisting of $N_{1}$ blocks, each containing $N_{2}$ blocks, $\ldots$, each containing $N_{\ell-1}$ blocks of $N_{\ell}$ elements (for a total of $N_{1}N_{2}\cdots N_{\ell}$ elements). A vector $\mathbf{x}$ is called $s$-sparse if $|\text{supp}(\mathbf{x})|=s$. For reference, the following standard definition from CS theory \cite{MathIntroToCS} is recalled below.
\begin{defn}[RIP constant] \label{def: standard RIP definition}The restricted isometry constant $\delta_{s}(\mathbf{A})$ of a (deterministic) matrix $\mathbf{A}\in\mathbb{C}^{N\times M}$ is the smallest $\delta\geq0$ such that \begin{equation} (1-\delta)\Vert\mathbf{x}\Vert^{2}\leq\Vert\mathbf{A}\mathbf{x}\Vert^{2}\leq(1+\delta)\Vert\mathbf{x}\Vert^{2},\label{eq:standard RIP definition} \end{equation} for all $s$-sparse vectors $\mathbf{x}\in\mathbb{C}^{M}$ ($s\leq M$). We say that $\mathbf{A}$ satisfies the ($s$-th) restricted isometry property ($s$-RIP) if $\delta_{s}(\mathbf{A})<\bar{\delta}$ where $\bar{\delta}<1$ is a pre-specified constant. \end{defn} \section{Wideband Massive MIMO Channel Model and Delay-Angular Representation} We consider the uplink of a single cell with a BS equipped with $M\gg1$ antenna elements serving multiple single-antenna UEs. For a ULA, the array manifold $\mathbf{a}\left(\cdot\right):\left[-\pi/2,\pi/2\right]\rightarrow\mathbb{C}^{M}$, which maps the angular to the spatial domain, is given by $\mathbf{a}\left(\phi\right)\triangleq[1,e^{-j2\pi d\sin\phi},\ldots,e^{-j2\pi d\left(M-1\right)\sin\phi}]^{T}$ \cite{ChenYang206_TWC}. Here, $d$ is the normalized spatial separation of the ULA (with respect to the carrier wavelength), which, without loss of generality (w.l.o.g.), is assumed to be equal to $1/2$ in the following. As is routinely done, we perform the change of variable $\theta=d\sin(\phi)\in[-1/2,1/2]$ and, with a slight abuse of notation, we write the array manifold as a function of $\theta$, i.e., $\mathbf{a}\left(\theta\right)=[1,e^{-j2\pi\theta},\ldots,e^{-j2\pi\left(M-1\right)\theta}]^{T}$. Noting that $\mathbf{a}(\theta)=\mathbf{a}(1-\theta)$ for $\theta<0$, it is convenient to treat $\theta$ as taking values in $[0,1]$.
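For intuition on the RIP definition, the constant $\delta_{s}(\mathbf{A})$ can be computed by brute force for small matrices, using the equivalent characterization $\delta_{s}(\mathbf{A})=\max_{|\mathcal{S}|=s}\Vert\mathbf{A}_{\mathcal{S}}^{H}\mathbf{A}_{\mathcal{S}}-\mathbf{I}\Vert_{2}$ (spectral norm of the Gram-matrix deviation over all column supports of size $s$). The sketch below is purely illustrative and not an algorithm from this paper:

```python
import itertools
import numpy as np

def rip_constant(A, s):
    """Brute-force RIP constant: max over column supports S with |S| = s of
    the spectral norm of A_S^H A_S - I (largest eigenvalue deviation)."""
    _, M = A.shape
    delta = 0.0
    for S in itertools.combinations(range(M), s):
        G = A[:, S].conj().T @ A[:, S]            # Gram matrix of selected columns
        delta = max(delta, np.linalg.norm(G - np.eye(s), 2))
    return delta

# A matrix with orthonormal columns (here a normalized DFT) has delta_s = 0,
# while two identical unit-norm columns give delta_2 = 1.
N = 8
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
print(round(rip_constant(F[:, :4], 2), 6))  # 0.0
```

The cost grows combinatorially in $s$, which is why CS theory works with probabilistic RIP bounds rather than direct computation.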
Considering a sampled version of this interval by the $M$ points $\{k/M\}_{k=0}^{M-1}$ yields the steering (dictionary) matrix $\mathbf{A}_{\theta}\triangleq\left[\mathbf{a}(0),\mathbf{a}(1/M),\ldots,\mathbf{a}((M-1)/M)\right]=\mathbf{F}_{M,M}\in\mathbb{C}^{M\times M}$. Transmissions are performed via wideband OFDM signals with $N\gg1$ subcarriers centered at the baseband frequencies $\{2\pi k/T_{s}\}_{k=0}^{N-1}$, with $T_{s}>0$ being the useful (without the cyclic prefix) OFDM symbol duration. Assuming that the maximum delay spread of all UE channels is not longer than $\alpha T_{s},\alpha\leq1$, which is the case in any reasonable OFDM design, the delay manifold $\mathbf{b}\left(\cdot\right):\left[0,\alpha T_{s}\right]\rightarrow\mathbb{C}^{N}$, which maps the delay to the frequency domain, is defined as $\mathbf{b}\left(\tau\right)\triangleq[1,e^{-j2\pi\tau/T_{s}},\ldots,e^{-j2\pi(N-1)\tau/T_{s}}]^{T}$ \cite{ChenYang206_TWC}. Considering a sampled version of $[0,T_{s}]$ by the $N$ points $\{kT_{s}/N\}_{k=0}^{N-1}$ yields the steering (dictionary) matrix $\mathbf{A}_{\tau}\triangleq\left[\mathbf{b}(0),\mathbf{b}(T_{s}/N),\ldots,\mathbf{b}((D-1)T_{s}/N)\right]=\mathbf{F}_{N,D}\in\mathbb{C}^{N\times D}$, where $D\triangleq\lfloor\alpha N\rfloor$ is the channel delay spread in samples.\footnote{In general, a denser sampling of the angle and delay domains could be employed. We leave investigations of this case to future work.} The channel of an arbitrary UE is a superposition of a small number $L$ of impinging wavefronts (paths) characterized by their delay/angle pairs $\{(\tau_{p},\theta_{p})\}_{p=0}^{L-1}$, with $\tau_{p}\in[0,\alpha T_{s}]$, $\theta_{p}\in[0,1]$.
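A minimal NumPy sketch confirms that sampling the array manifold on the grid $\{k/M\}$ stacks the steering vectors into the DFT matrix $\mathbf{F}_{M,M}$ defined in the notation (the value $M=8$ is arbitrary):

```python
import numpy as np

def F(N, M):
    # First M columns of the N x N (unnormalized) DFT matrix:
    # [F_{N,M}]_{n,m} = exp(-j 2 pi m n / N)
    n, m = np.arange(N)[:, None], np.arange(M)[None, :]
    return np.exp(-2j * np.pi * m * n / N)

def a(theta, M):
    # ULA array manifold a(theta) = [1, e^{-j2pi theta}, ..., e^{-j2pi(M-1)theta}]^T
    return np.exp(-2j * np.pi * theta * np.arange(M))

M = 8
# Sampling theta on the grid {k/M} stacks the steering vectors into A_theta = F_{M,M}.
A_theta = np.stack([a(k / M, M) for k in range(M)], axis=1)
print(np.allclose(A_theta, F(M, M)))  # True
```

The delay dictionary $\mathbf{A}_{\tau}=\mathbf{F}_{N,D}$ follows in exactly the same way from sampling $\mathbf{b}(\cdot)$ on the grid $\{kT_{s}/N\}$.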
This is reflected in the channel transfer matrix $\mathbf{H}\in\mathbb{C}^{N\times M}$, whose $(n,m)$-th element corresponds to the complex channel gain at subcarrier $n$ and antenna $m$ and can be written as \cite{Haghighatshoar2017_TWC,ChenYang206_TWC} \begin{equation} \mathbf{H}=\sum_{p=0}^{L-1}\rho_{p}\mathbf{b}\left(\tau_{p}\right)\mathbf{a}^{H}\left(\theta_{p}\right),\label{eq:channelmatrix} \end{equation} where $\rho_{p}\in\mathbb{C}$ is the complex gain of the $p$-th path. It is noted that $L$ is treated here as a given parameter that depends only on the physical propagation properties and is independent of the system parameters $M$ and $N$. Targeting low-complexity channel estimation, it is beneficial to consider an alternative representation of $\mathbf{H}$, which translates the physical sparsity to sparsity of an appropriately defined matrix that is to be identified by the estimator. Towards this end, we will first consider the case of \emph{on-grid} channel parameters, when every delay/angle pair lies exactly on the delay/angle grid corresponding to the steering matrices $\mathbf{A}_{\theta}$ and $\mathbf{A}_{\tau}$, i.e., $(\tau_{p},\theta_{p})=(k_{p}T_{s}/N,l_{p}/M)$ for some $k_{p}\in[D]$ and $l_{p}\in[M]$, for all $p\in[L]$. Although this assumption does not hold in general, it is a reasonable approximation for asymptotically large $N$ and $M$, and it is convenient for algorithm design and (asymptotic) performance analysis. The more general, \emph{off-grid} channel parameters case will be treated in Sec. V. Note that, apart from the on-grid/off-grid delay/angle pairs characterization, no assumptions on the (joint) statistics of path delays, angles, and gains are considered in the following treatment.
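As an implementation-oriented aside, the structure described above is straightforward to check numerically. The following NumPy sketch (all sizes are illustrative, not a system recommendation) builds $\mathbf{A}_{\theta}$ and $\mathbf{A}_{\tau}$ by sampling the two manifolds, confirms their DFT structure, and verifies that collecting the path gains of an on-grid channel on the grid turns the superposition (\ref{eq:channelmatrix}) into a two-sided factorization of $\mathbf{H}$ with a sparse middle factor:

```python
import numpy as np

# Illustrative sizes (the sketch only checks structure, not performance).
M, N, alpha, L = 8, 16, 0.5, 3
D = int(np.floor(alpha * N))
Ts = 1.0                                              # normalized symbol duration

a = lambda th: np.exp(-2j * np.pi * th * np.arange(M))           # array manifold
b = lambda tau: np.exp(-2j * np.pi * (tau / Ts) * np.arange(N))  # delay manifold

# Sampling the manifolds on the uniform grids recovers DFT (sub-)matrices.
A_theta = np.stack([a(k / M) for k in range(M)], axis=1)
A_tau = np.stack([b(k * Ts / N) for k in range(D)], axis=1)
assert np.allclose(A_theta, np.fft.fft(np.eye(M)))               # F_{M,M}
assert np.allclose(A_tau, np.fft.fft(np.eye(N))[:, :D])          # F_{N,D}

# On-grid channel: superposition of L rank-one delay/angle terms.
rng = np.random.default_rng(0)
ks, ls = rng.integers(0, D, L), rng.integers(0, M, L)
rhos = rng.standard_normal(L) + 1j * rng.standard_normal(L)
H = sum(r * np.outer(b(k * Ts / N), a(l / M).conj())
        for r, k, l in zip(rhos, ks, ls))

# Collecting the gains on the grid gives a sparse D x M matrix X with
# H = A_tau X A_theta^H.
X = np.zeros((D, M), dtype=complex)
for r, k, l in zip(rhos, ks, ls):
    X[k, l] += r
assert np.allclose(H, A_tau @ X @ A_theta.conj().T)
```

The factorization verified in the last line is precisely the delay-angular representation discussed next.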
With on-grid parameters, $\mathbf{H}$ can then be written as \begin{equation} \mathbf{H}=\mathbf{A}_{\tau}\mathbf{X}\mathbf{A}_{\theta}^{H},\label{eq:channel_delay_angle_decomposition} \end{equation} where \begin{equation} \mathbf{X}\triangleq\sum_{p=0}^{L-1}\rho_{p}\mathbf{e}_{k_{p},D}\mathbf{e}_{l_{p},M}^{T}\in\mathbb{C}^{D\times M},\label{eq:Wongrid} \end{equation} with $\mathbf{e}_{n,N}\in\mathbb{C}^{N\times1}$ denoting the canonical basis vector with the $n$-th element equal to $1$. Matrix $\mathbf{X}$ is the \emph{delay-angular representation} of the channel, which is a sparse matrix with $L$ nonzero elements out of a total of $DM$. An example of $\mathbf{X}$ with on-grid channel parameters is shown in Figure \ref{fig grid and off-grid W} (left panel). This sparsity of $\mathbf{X}$ (or of its corresponding covariance matrix) has been exploited in the literature for obtaining efficient channel estimators \cite{Swindlehurst2016_TSP,Heath JSAC 2017,Haghighatshoar2017_TWC} by direct application of algorithms from the field of CS. However, as will be argued in the following, the sparsity pattern (support) of $\mathbf{X}$ is not completely random but follows a hierarchical pattern, a property that will be exploited for algorithm design and rigorous analysis in terms of performance and the overhead required to achieve it. \begin{figure}[ptb] \noindent \centering{}\includegraphics[width=1\columnwidth]{W_matrix_plot_off_grid}\caption{Example heatmap (modulus values) for the delay-angular representation $\mathbf{X}$ of a channel with $L=3$ and $\rho_{p}=1$ for all $p\in[L]$, and $N=M=D=16$. Left: on-grid case; Right: off-grid case, obtained by a slight perturbation of the angle/delay pair values of the on-grid case.} \label{fig grid and off-grid W} \end{figure} \section{Multiuser Channel Estimation Problem Statement} Towards reducing the pilot overhead, the BS partitions the uplink UEs into groups of $U$ UEs.
Each group is assigned an exclusive set of pilot subcarriers and all $V\leq U$ \emph{active} UEs within a group transmit their pilots on these subcarriers and on the same OFDM symbol. For analysis and design purposes, we consider an arbitrary UE group and discuss later the joint assignment of subcarriers to multiple groups. Let $\mathcal{N}_{p}\subseteq[N]$ denote the set of $N_{p}\triangleq|\mathcal{N}_{p}|$ pilot subcarriers dedicated to this group. Towards reducing implementation complexity, only the received signals from a set $\mathcal{M}_{p}\subseteq[M]$ of $M_{p}\triangleq|\mathcal{M}_{p}|$ antennas are considered at the BS for channel estimation purposes. Let $\mathbf{P}_{\mathcal{N}_{p}}\triangleq\mathbf{I}_{N,\mathcal{N}_{p}}\in\{0,1\}^{N_{p}\times N}$ and $\mathbf{P}_{\mathcal{M}_{p}}\triangleq\mathbf{I}_{M,\mathcal{M}_{p}}\in\{0,1\}^{M_{p}\times M}$ denote the \emph{sampling} matrices in frequency and space, respectively. The task of the BS is to identify all the UE channels from the observation \begin{equation} \mathbf{Y}=\sum_{u=0}^{U-1}\text{diag}(\mathbf{c}_{u})\mathbf{P}_{\mathcal{N}_{p}}\mathbf{H}_{u}\mathbf{P}_{\mathcal{M}_{p}}^{T}+\mathbf{Z}\in\mathbb{C}^{N_{p}\times M_{p}},\label{eq:observed_signal} \end{equation} where $\mathbf{c}_{u}\in\mathbb{C}^{N_{p}}$ and $\mathbf{H}_{u}\in\mathbb{C}^{N\times M}$ are the pilot \emph{signature} and channel transfer matrix of the $u$-th UE, respectively, and $\mathbf{Z}\in\mathbb{C}^{N_{p}\times M_{p}}$ is a noise matrix of arbitrary distribution, apart from the mild assumption that $\|\mathbf{Z}\|$ is finite with probability $1$. The elements of $\mathbf{c}_{u}$ are known to the BS and assumed, w.l.o.g., to be of unit modulus for all $u\in[U]$. For the $U-V$ UEs that are not active, the channel transfer matrix is equal to an all-zeros matrix. The receiver is not aware which UEs are inactive but does know $V$ as well as the number of channel paths $L$, assumed to be the same for all UEs.
It follows from the discussion of Sec. II that the problem of estimating the transfer matrices $\{\mathbf{H}_{u}\}_{u\in[U]}$ can be equivalently posed as the problem of estimating the delay-angular channel representations $\{\mathbf{X}_{u}\}_{u\in[U]}$. Setting $\mathbf{H}_{u}=\mathbf{A}_{\tau}\mathbf{X}_{u}\mathbf{A}_{\theta}^{H}$ in (\ref{eq:observed_signal}) and normalizing for technical reasons by $1/\sqrt{N_{p}M_{p}}$ results in the system equation \begin{equation} \mathbf{Y}=\bar{\mathbf{A}}_{\tau}\bar{\mathbf{X}}\bar{\mathbf{A}}_{\theta}^{H}+\mathbf{Z},\label{eq:system model w.r.t. =00005Cbar(X)} \end{equation} \noindent where \begin{equation} \bar{\mathbf{A}}_{\tau}\triangleq\frac{1}{\sqrt{N_{p}}}\left[\text{diag}(\mathbf{c}_{0})\mathbf{P}_{\mathcal{N}_{p}}\mathbf{A}_{\tau},\ldots,\text{diag}(\mathbf{c}_{U-1})\mathbf{P}_{\mathcal{N}_{p}}\mathbf{A}_{\tau}\right],\label{eq:A_tau structure} \end{equation} \begin{equation} \mathbf{\bar{\mathbf{A}}_{\theta}\triangleq}\frac{1}{\sqrt{M_{p}}}\mathbf{P}_{\mathcal{M}_{p}}\mathbf{A}_{\theta},\label{eq:A_theta structure} \end{equation} \[ \bar{\mathbf{X}}\triangleq\left[\mathbf{X}_{0}^{T},\mathbf{X}_{1}^{T},\ldots,\mathbf{X}_{U-1}^{T}\right]^{T}, \] \noindent and, with a slight abuse of notation, we denote also by $\mathbf{Y}$ and $\mathbf{Z}$ the normalized observation and noise matrix, respectively. Note that $\bar{\mathbf{X}}$ is a sparse matrix with $VL$ non-zero elements out of a total $UDM$. Towards expressing the linear model of (\ref{eq:system model w.r.t. =00005Cbar(X)}) in standard form (w.r.t. the unknown elements of $\bar{\mathbf{X}}$), the matrix observation should be vectorized. Note that there are two options to do this: Either consider $\text{vec}(\mathbf{Y})$ or $\text{vec}(\mathbf{Y}^{T})$, which will be referred to as the \emph{frequency-space} (F-S) and \emph{space-frequency} (S-F) option, respectively. 
These two options are, of course, mathematically equivalent when $\bar{\mathbf{X}}$ is treated as an arbitrary matrix. However, as the support of $\bar{\mathbf{X}}$ reflects physical channel properties, these two options suggest different (additional) channel modeling assumptions, which can be algorithmically exploited and result in different overhead requirements, as will be discussed in the next section. By straightforward algebra, the channel estimation problem can be stated as follows. \begin{problem} Find a computationally efficient estimator of $\mathbf{x}\in\mathbb{C}^{UDM}$ given the measurement \begin{equation} \mathbf{y}=\mathbf{A}\mathbf{x}+\mathbf{z}\in\mathbb{C}^{N_{p}M_{p}},\label{eq:linear model} \end{equation} \noindent where $\mathbf{z}\in\mathbb{C}^{N_{p}M_{p}}$ is a noise vector, and, under the F-S option, \begin{equation} \left\{ \begin{array}{c} \mathbf{y}\triangleq\text{vec}(\mathbf{Y})\\ \mathbf{A}\triangleq\bar{\mathbf{A}}_{\theta}^{*}\otimes\bar{\mathbf{A}}_{\tau}\\ \mathbf{x}\triangleq\text{vec}(\bar{\mathbf{X}}) \end{array}\right\} ,\label{eq:option 1 system model} \end{equation} \noindent or, under the S-F option, \begin{equation} \left\{ \begin{array}{c} \mathbf{y}\triangleq\text{vec}(\mathbf{Y}^{T})\\ \mathbf{A}\triangleq\bar{\mathbf{A}}_{\tau}\otimes\bar{\mathbf{A}}_{\theta}^{*}\\ \mathbf{x}\triangleq\text{vec}(\bar{\mathbf{X}}^{T}) \end{array}\right\} .\label{eq: option 2 system model} \end{equation} We also ask for the design (selection) of $\mathcal{N}_{p}$, $\mathcal{M}_{p}$, and $\{\mathbf{c}_{u}\}_{u\in[U]}$, towards minimizing the number of pilot subcarriers $N_{p}$ and of observed antenna signals $M_{p}$ required for reliable (in a specific sense that we will make precise later) estimation. \end{problem} With $N_{p}M_{p}<UDM$, which is the case of interest in a wideband, massive MIMO setting, the linear estimation problem of (\ref{eq:linear model}) becomes under-determined.
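The F-S and S-F constructions are both instances of the column-stacking identity $\text{vec}(\mathbf{A}\mathbf{X}\mathbf{B})=(\mathbf{B}^{T}\otimes\mathbf{A})\,\text{vec}(\mathbf{X})$. A minimal NumPy check, using random stand-ins for $\bar{\mathbf{A}}_{\tau}$, $\bar{\mathbf{A}}_{\theta}$, and $\bar{\mathbf{X}}$ of illustrative sizes:

```python
import numpy as np

# Random stand-ins; only the algebra of the two vectorizations is checked.
rng = np.random.default_rng(1)
cx = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
A_tau, A_theta, X = cx(6, 4), cx(5, 3), cx(4, 3)

vec = lambda Z: Z.flatten(order='F')       # column-stacking vec(.)
Y = A_tau @ X @ A_theta.conj().T           # noiseless system equation

# F-S option: y = vec(Y), A = conj(A_theta) (x) A_tau, x = vec(X).
assert np.allclose(vec(Y), np.kron(A_theta.conj(), A_tau) @ vec(X))
# S-F option: y = vec(Y^T), A = A_tau (x) conj(A_theta), x = vec(X^T).
assert np.allclose(vec(Y.T), np.kron(A_tau, A_theta.conj()) @ vec(X.T))
```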
However, as $\mathbf{x}$ is $VL$-sparse, one can utilize tools from CS theory \cite{MathIntroToCS} for its estimation. In particular, it is known that, in the absence of noise, a necessary requirement for perfect recovery of $\mathbf{x}$ from $\mathbf{y}$ (by means of any algorithm) is \cite[Theorem 11.6]{MathIntroToCS} \begin{equation} N_{p}M_{p}=\mathcal{O}\left(VL\log(UDM)\right)\text{ for }UDM\rightarrow\infty.\label{eq:universal overhead bound of CS recovery} \end{equation} \noindent We will refer to the product $N_{p}M_{p}$ as \emph{overhead}. Equation (\ref{eq:universal overhead bound of CS recovery}) reveals that the necessary overhead scales much more slowly than the overhead corresponding to a naive consideration of all $N$ subcarriers and $M$ antennas for channel estimation. However, achievability of the universal bound of (\ref{eq:universal overhead bound of CS recovery}) depends crucially on the \emph{sensing matrix} $\mathbf{A}$ that appears in (\ref{eq:linear model}). In particular, a typical sufficient condition is for $\mathbf{A}$ to satisfy the restricted isometry property (RIP) (see Definition \ref{def: standard RIP definition}). This would indeed be the case (with high probability) if the elements of $\mathbf{A}$ were, e.g., Gaussian distributed \cite{MathIntroToCS}, allowing the use of standard algorithms from CS theory for the recovery of $\mathbf{x}$ with the overhead of (\ref{eq:universal overhead bound of CS recovery}). Unfortunately, there is very limited flexibility in designing the sensing matrix $\mathbf{A}$, as the latter has by default the Kronecker product structure shown in (\ref{eq:option 1 system model}) and (\ref{eq: option 2 system model}), and the design of the UE signatures only affects the constituent matrix $\bar{\mathbf{A}}_{\tau}$ under the specific block structure of (\ref{eq:A_tau structure}).
Works towards a characterization of the RIP constant of Kronecker-product sensing matrices are available \cite{Jokar Kronecker, Duarte Kronecker}, with the main result being a lower bound in terms of the RIP constants of the individual constituent matrices. This bound can be used to obtain insights on the necessary (but not sufficient) scaling of the training overhead. However, as will be shown later (cf. the discussion after Theorem \ref{thm:required overhead}), this scaling is overly pessimistic. This is due to the fact that $\mathbf{x}$ is not simply sparse, as treated by the standard CS approach, but \emph{hierarchically sparse}, a notion we define next, which effectively implies a reduction of the solution space for the channel estimation problem. This solution space reduction implies that the estimation problem is ``easier'' than the one implied by the standard CS treatment, and hence a smaller training overhead is expected. In the following section, a family of recovery algorithms (in the presence of noise) exploiting the hierarchical sparsity is presented, for which rigorous scaling laws for the required training overhead are obtained based on the concept of the \emph{Hierarchical RIP} (HiRIP). \section{Algorithm Design and Analysis Exploiting Hierarchical Sparsity} This section identifies important structural properties of $\mathbf{x}$ (under both the F-S and S-F options), which are taken into account for the design of efficient channel estimation algorithms as well as for providing performance guarantees. The latter will in turn provide design criteria for the pilot signatures. For a specific pilot signature design, a rigorous identification of the overhead scaling sufficient to guarantee channel identification with bounded error is provided, which also suggests that the F-S option is preferable towards a minimum number of pilots $N_{p}$.
\subsection{Hierarchical Sparsity Under F-S and S-F Options} The fundamental observation towards an efficient channel estimation algorithm and a rigorous performance analysis is that $\mathbf{x}$ is not only sparse, but its support possesses a certain structure, called \emph{hierarchical sparsity} \cite{C-HiLasso, Bockelmann, Roth2016_TSP,RothEtAl2018_ISIT}. \begin{defn}[Hierarchical sparsity] Let $\mathbf{s}=(s_{1},\dots,s_{\ell})$ be an $\ell$-tuple of natural numbers and consider an $\ell$-level block vector $\tilde{\mathbf{x}}\in\mathbb{C}^{N_{1}\cdot N_{2}\cdots N_{\ell}}$, with $N_{i}\geq s_{i},i\in\{1,2,\ldots,\ell\}$. We say that $\tilde{\mathbf{x}}$ is \emph{$\mathbf{s}$-hierarchically-sparse} (written as $\mathbf{s}$-Hi-sparse) if it has the property of hierarchical $\mathbf{s}$-sparsity defined inductively as follows: For $\ell=1$, $\tilde{\mathbf{x}}$ is $\mathbf{s}$-Hi-sparse if at most $s_{1}$ of its $N_{1}$ elements are non-zero (this is the standard notion of sparsity). For $\ell>1$, $\tilde{\mathbf{x}}$ is called $\mathbf{s}$-Hi-sparse if it consists of $N_{1}$ blocks, at most $s_{1}$ of these are non-zero, and each non-zero block is $(s_{2},\dots,s_{\ell})$-Hi-sparse. The lower part of Fig. \ref{fig:T_operator} demonstrates an example of a vector in $\mathbb{C}^{2\cdot3\cdot5}$ that is $(1,2,2)$-Hi-sparse. \end{defn} \begin{figure}[t] \centering \includegraphics[scale=0.8]{sparsification_example_multiuser}\caption{\label{fig:T_operator}{Illustration of the sequence of actions of the $\mathcal{T}_{(1,2,2)}(\cdot)$ operator (as described in Algorithm \ref{alg:T_operator_alg}) on a three-level block vector in $\mathbb{C}^{2\cdot 3 \cdot 5}$ ($2$ blocks of $3$ blocks of $5$ elements each). The support of the best $(1,2,2)$-Hi-sparse approximation of the vector is $\{1,4,10,13\}$, with index $0$ corresponding to the leftmost element in the vector.
Note that this support is different from the support $\{1,9,19,24\}$ of the best $1\times2\times2=4$-sparse approximation, when the vector is treated as an arbitrary vector (no block structure) in $\mathbb{C}^{30}$. }} \end{figure} It is noted that the notion of hierarchical sparsity is more general than that of common block sparsity, where a vector of length $N$ is partitioned into $N/d$ blocks of $d$ elements each, with $s<N/d$ of the blocks (and their elements) being non-zero \cite{block sparse}. Note that in this case the vector can be treated as a two-level block vector in $\mathbb{C}^{\frac{N}{d} \cdot d}$ that is $(s,d)$-Hi-sparse. It is easy to see that the unknown vector $\mathbf{x}$ in (\ref{eq:linear model}) is actually a hierarchically sparse, $3$-level block vector under both the F-S and S-F options. In particular, under the F-S option and the assumption $LV\leq M$ (reasonable for massive MIMO and sparse channels), $\mathbf{x}\in\mathbb{C}^{M\cdot U\cdot D}$ and is $(LV,V,L)$-Hi-sparse. Note that the first (outer) hierarchy level corresponds to angles (up to $LV$ angle values can be present, equal to the number of total paths from all active UEs), the second hierarchy level corresponds to UEs (up to $V$ active UEs can have a path with the same angle), and the third hierarchy level corresponds to delays (up to $L$ delays per UE per angle can be present, equal to the total paths per UE). However, for (asymptotically) large $M$, one may reasonably assume that (a) for each angle value there can be no more than $K_{V}<V$ UEs with a channel path having this angle and (b) for each angle there can be no more than $K_{L}<L$ paths per UE with this angle, rendering $\mathbf{x}$ $(LV,K_{V},K_{L})$-Hi-sparse.
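The inductive definition above translates directly into a short recursive check; the helper name and the test vector below (which reuses the example support $\{1,4,10,13\}$ of Fig. \ref{fig:T_operator}) are illustrative only:

```python
import numpy as np

def is_hi_sparse(x, dims, s):
    """Recursive check of hierarchical sparsity: x has block structure
    dims = (N_1, ..., N_l) and is tested for (s_1, ..., s_l)-Hi-sparsity."""
    if len(dims) == 1:
        return np.count_nonzero(x) <= s[0]            # plain sparsity
    blocks = np.reshape(x, (dims[0], -1))
    active = [blk for blk in blocks if np.count_nonzero(blk) > 0]
    return (len(active) <= s[0]
            and all(is_hi_sparse(blk, dims[1:], s[1:]) for blk in active))

# Example: 2 blocks of 3 blocks of 5 elements; support {1, 4, 10, 13}.
x = np.zeros(2 * 3 * 5)
x[[1, 4, 10, 13]] = 1.0
assert is_hi_sparse(x, (2, 3, 5), (1, 2, 2))
assert not is_hi_sparse(x, (2, 3, 5), (1, 1, 2))      # too many active sub-blocks
```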
Under the S-F option, $\mathbf{x}\in\mathbb{C}^{U\cdot D\cdot M}$ and is $(V,L,L)$-Hi-sparse, with the first (outer) hierarchy level corresponding to UEs ($V$ out of $U$ UEs active), the second corresponding to delays (up to $L$ paths present per UE), and the third corresponding to angles (up to $L$ paths with the same delay). Similar to the F-S option, the hierarchical sparsity characterization under the S-F option can be refined in the (asymptotically) large $N$ regime, where up to $K_{L}$ paths can be assumed to have the same delay per UE, rendering $\mathbf{x}$ $(V,L,K_{L})$-Hi-sparse. The F-S and S-F options result in a different ordering of levels for $\mathbf{x}$ and suggest different (but reasonable) assumptions for the delay/angular distribution of UE channels. However, at this point, it is not clear which of the two options is preferable. Note also that, for asymptotically large $M$ or $N$, it is reasonable to assume that $K_{V}=K_{L}=1$, although we do not explicitly set them as such, for generality of presentation. \subsection{Algorithm Design} Clearly, the hierarchical sparsity of $\mathbf{x}$ \emph{should be exploited} in algorithm design and analysis, as it provides significant restrictions on its support compared to the standard notion of sparsity (which would characterize $\mathbf{x}$ simply as $VL$-sparse). Towards this end, the low-complexity iterative hard thresholding (IHT) and hard thresholding pursuit (HTP) algorithms \cite{MathIntroToCS} are modified as shown in Algorithm \ref{alg:HiIHT/HiHTP-Channel-Estimation} to take into account the hierarchical sparsity of $\mathbf{x}$\textcolor{black}{{} and are referred to in the following as hierarchical IHT (HiIHT) and hierarchical HTP (HiHTP), respectively.
The algorithms can be applied equally well under either the F-S or S-F option and are independent of the noise statistics.} \begin{algorithm}[tbh] \caption{HiIHT/HiHTP Channel Estimation\label{alg:HiIHT/HiHTP-Channel-Estimation}} \begin{algorithmic}[1] \REQUIRE $\mathbf{y}$, $\mathbf{A}$, $V$, $L$, $K_L$, $K_V$ (the latter only under F-S option). \STATE $i=0$, $\hat{\mathbf{x}}^{\left( 0\right) }=\mathbf{0} \in \mathbb{C}^{UDM}$ \REPEAT \STATE $i = i + 1$, \STATE$\hat{\mathbf{x}}_\text{temp} = \hat{\mathbf{x}}^{\left( i-1\right) }+\mathbf{A}^{H}\left( {\mathbf{y}}-\mathbf{A} \hat{\mathbf{x}}^{\left( i-1\right) }\right) $ \STATE$\hat{\mathcal{S}}^{\left( i\right) }= \begin{cases} \mathcal{T}_{(VL,K_V,K_L)}(\hat{\mathbf{x}}_\text{temp}), &\text{F-S option,}\\ \mathcal{T}_{(V,L,K_L)}(\hat{\mathbf{x}}_\text{temp}), &\text{S-F option} \end{cases}$ \IF{HiIHT} \STATE $\hat{\mathbf{x}}^{\left( i\right) }=\mathbf{0} \in \mathbb{C}^{UDM}$ \STATE$\hat{\mathbf{x}}^{\left( i\right) }_{\hat{\mathcal{S}}^{\left( i\right) }}=\hat{\mathbf{x}}_{\text{temp},\hat{\mathcal{S}}^{\left( i\right) }} $ \ELSIF{HiHTP} \STATE$\hat{\mathbf{x}}^{\left( i\right) }=\arg\min_{\beta\in \mathbb{C}^{UDM}, \text{supp}({\beta})\subseteq \hat{\mathcal{S}}^{\left( i\right) }} \left\{ \Vert\mathbf{y}-\mathbf{A} \beta \Vert\right\} $ \ENDIF \UNTIL stopping criterion is met at $i=i^{\ast}$ \RETURN$\begin{cases} (VL, K_V, K_L)\text{-Hi-sparse }\hat{\mathbf{x}}^{\left( i^{\ast}\right) }, &\text{F-S option,}\\ (V, L, K_L)\text{-Hi-sparse }\hat{\mathbf{x}}^{\left( i^{\ast}\right) }, &\text{S-F option} \end{cases}$ \end{algorithmic} \end{algorithm} In iteration $i$, the estimate of iteration $i-1$ is first updated by a standard gradient-descent step to obtain $\hat{\mathbf{x}}_{\text{temp}}$. 
From $\hat{\mathbf{x}}_{\text{temp}}$, the hierarchically sparse support $\mathcal{S}\subseteq[UDM]$ of $\mathbf{x}$ is estimated by application of the \emph{thresholding} operator $\mathcal{T}_{(\cdot,\cdot,\cdot)}(\cdot)$, to be defined next. For HiIHT, the current iteration estimate of $\mathbf{x}$ is set equal to $\hat{\mathbf{x}}_{\text{temp}}$, except for the elements that do not belong to the estimated support, which are set equal to zero. For HiHTP, the iteration $i$ estimate is obtained as the hierarchically sparse vector whose non-zero element values are obtained by minimizing a standard least squares cost function. Utilization of the operator $\mathcal{T}_{(\cdot,\cdot,\cdot)}(\cdot)$ is the only, yet critical, difference between HiIHT/HiHTP and their ``standard'' IHT/HTP counterparts \cite{MathIntroToCS}. In particular, for any multi-level block vector $\tilde{\mathbf{x}}\in\mathbb{C}^{N_{1}\cdot N_{2}\cdots N_{\ell}}$ and any $\mathbf{s}=(s_{1},s_{2},\ldots,s_{\ell})$, $\mathcal{T}_{\mathbf{s}}(\tilde{\mathbf{x}})$ is defined as the support of the multi-level block vector $\tilde{\mathbf{z}}\in\mathbb{C}^{N_{1}\cdot N_{2}\cdots N_{\ell}}$ that is $\mathbf{s}$-Hi-sparse and minimizes $\|\tilde{\mathbf{x}}-\tilde{\mathbf{z}}\|$. Its action can be computed recursively with minimal complexity as described in Algorithm \ref{alg:T_operator_alg}, with an example of this computation shown in Fig. \ref{fig:T_operator}. For the channel estimation problem, $\ell=3$ in the description of the algorithm, with $(N_1,N_2,N_3)=(M,U,D)$ and $(N_1,N_2,N_3)=(U,D,M)$ under the F-S and S-F option, respectively. \begin{algorithm}[tbh] \caption{Action of operator $\mathcal{T}_{\mathbf{s}}(\cdot)$\label{alg:T_operator_alg}} \begin{algorithmic} [1] \REQUIRE $\tilde{\mathbf{x}} \in \mathbb{C}^{N_1 \cdot N_2 \cdots N_\ell}$, $\mathbf{s}=(s_1,s_2,\ldots, s_\ell)$, $\ell \geq 2$ \STATE $\tilde{\mathbf{z}} = \tilde{\mathbf{x}}$.
\STATE For each of the $N_1 N_2 \cdots N_{\ell-1}$ blocks at level $\ell-1$ of $\tilde{\mathbf{z}}$, identify the $s_\ell$ (out of a total $N_\ell$) largest-modulus elements and set the remaining elements equal to zero. Ties are resolved arbitrarily. \STATE $k=\ell-2$ \WHILE{$k\geq 0$} \STATE For each of the $N_1 N_2 \cdots N_k$ blocks at level $k$ of $\tilde{\mathbf{z}}$ (for $k=0$, the whole vector is the single block at level $0$), identify the $s_{k+1}$ (out of a total $N_{k+1}$) blocks with the largest Euclidean norm and set the elements of the remaining blocks equal to zero. Ties are resolved arbitrarily. \STATE $k=k-1$ \ENDWHILE \RETURN $\text{supp}(\tilde{\mathbf{z}})$ \end{algorithmic} \end{algorithm} \subsection{Performance Analysis and Overhead Requirements} Towards characterizing the performance of HiIHT/HiHTP, which, in turn, will provide insights on pilot signature design and overhead requirements, the concept of the \emph{hierarchical RIP} (HiRIP) constant, first introduced in \cite{Roth2016_TSP}, is essential. \begin{defn}[HiRIP constant] Let $\mathbf{s}=(s_{1},s_{2},\ldots,s_{\ell})$ be an $\ell$-tuple of natural numbers. The $\mathbf{s}$-HiRIP constant $\delta_{\mathbf{s}}(\tilde{\mathbf{A}})$ of a (deterministic) matrix $\tilde{\mathbf{A}}\in\mathbb{C}^{N_{0}\times(N_{1}N_{2}\cdots N_{\ell})}$ is the smallest $\delta\geq0$ such that \begin{equation} (1-\delta)\Vert\tilde{\mathbf{x}}\Vert^{2}\leq\Vert\tilde{\mathbf{A}}\tilde{\mathbf{x}}\Vert^{2}\leq(1+\delta)\Vert\tilde{\mathbf{x}}\Vert^{2},\label{eq:RIPequation} \end{equation} for all $\mathbf{s}$-Hi-sparse $\ell$-level block vectors $\tilde{\mathbf{x}}\in\mathbb{C}^{N_{1}\cdot N_{2}\cdots N_{\ell}}$. We say that $\tilde{\mathbf{A}}$ satisfies the $\mathbf{s}$-HiRIP if $\delta_{\mathbf{s}}(\tilde{\mathbf{A}})<\bar{\delta}$ where $\bar{\delta}<1$ is a pre-specified constant.\footnote{Note the difference in the notation $\delta_s(\cdot)$ and $\delta_\mathbf{s}(\cdot)$ for the RIP and HiRIP constants, respectively.
A scalar $s$ is used as a subscript for RIP, whereas a vector $\mathbf{s}$, sometimes with its elements explicitly indicated, is used for HiRIP.} \end{defn} \begin{rem} \label{rem:HiRIP stricter than RIP}The definition of the $\mathbf{s}$-HiRIP constant closely follows Def. \ref{def: standard RIP definition} of the (standard) $s$-RIP constant, and the two actually coincide when $\mathbf{s}$ contains only a single element, i.e., when it is a scalar. However, when $\mathbf{s}$ is a vector of two or more elements, the notion of the HiRIP constant is not directly comparable to that of the RIP constant, as the former applies to hierarchically sparse vectors whereas the latter applies to more general, sparse vectors (that may or may not be hierarchically sparse). However, a link between the two notions exists by noting that, for any matrix $\tilde{\mathbf{A}}$, it must hold (see Appendix \ref{sec:proof of Eq 14}) \begin{equation} \delta_{(s_{1},s_{2},\ldots,s_{\ell})}(\tilde{\mathbf{A}})\leq\delta_{s_{1}s_{2}\cdots s_{\ell}}(\tilde{\mathbf{A}}),\label{eq:HIRIP<=00003DRIP} \end{equation} for any $s_{1},s_{2},\ldots,s_{\ell}$, a result that will be utilized in the following. \end{rem} The HiRIP framework allows us to obtain the following rigorous guarantees for the performance of HiIHT/HiHTP.
\begin{thm}[Recovery guarantee of HiIHT/HiHTP] \label{thm:HiHTP_performance}Assume that $M,N,D\gg L$ and suppose that the sensing matrix $\mathbf{A}$ in (\ref{eq:linear model}) has a HiRIP constant \[ \delta\triangleq\begin{cases} \delta_{(3LV,3K_{V},3K_{L})}(\mathbf{A}), & \mathcal{\text{\emph{under F-S option}}},\\ \delta_{(3V,3L,3K_{L})}(\mathbf{A}), & \mathcal{\text{\emph{under S-F option,}}} \end{cases} \] \noindent with \begin{equation} \delta<1/\sqrt{3}.\label{eq:HiRIP requirement} \end{equation} \noindent Then, the sequence of estimates $\{\hat{\mathbf{x}}^{\left(i\right)}\}$ generated by the HiIHT and HiHTP algorithms satisfies \[ \Vert\mathbf{x}-\hat{\mathbf{x}}^{\left(i\right)}\Vert\leq\kappa^{i}\Vert\mathbf{x}\Vert+\tau\Vert\mathbf{z}\Vert, \] \noindent for all $i\geq0$, with \[ \kappa\triangleq\begin{cases} \sqrt{3}\delta, & \text{\emph{for HiIHT},}\\ \sqrt{2\delta/(1-\delta^{2})}, & \text{\emph{for HiHTP},} \end{cases} \] \noindent and \[ \tau\triangleq\begin{cases} 2.18/(1-\kappa), & \text{\emph{for HiIHT}},\\ 5.15/(1-\kappa), & \emph{for HiHTP}. \end{cases} \] \end{thm} \begin{IEEEproof} The result for the HiHTP follows directly from application of \cite[Theorem 4]{Roth2016_TSP}. The proof for the HiIHT follows the HiHTP proof with the same modifications as the ones considered in the recovery guarantee proofs of the HTP/IHT algorithms given in \cite[Theorem 6.18]{MathIntroToCS}. \end{IEEEproof} It follows that in order to ensure \emph{reliable} channel estimation in the sense of perfect and bounded-error recovery of $\mathbf{x}$ via the HiHTP/HiIHT algorithms in the noiseless ($\|\mathbf{z}\|=0$) and noisy ($\|\mathbf{z}\|>0$) case, respectively, we need to design $\mathcal{N}_{p}$, $\mathcal{M}_{p}$, and $\{\mathbf{c}_{u}\}_{u=0}^{U-1}$ such that the HiRIP constant of $\mathbf{A}$ satisfies (\ref{eq:HiRIP requirement}). 
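As a concrete, self-contained illustration of the noiseless case ($\Vert\mathbf{z}\Vert=0$) of the theorem, the sketch below runs the HiHTP iteration on a toy two-level hierarchy (so the thresholding operator reduces to two pruning passes) with a Gaussian sensing matrix; sizes, seed, and helper names are illustrative only and do not correspond to the channel estimation dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
N1, N2, s1, s2 = 8, 8, 1, 2            # two-level, (1, 2)-Hi-sparse target
n, m = N1 * N2, 48                     # modest compression, for illustration

def hi_support(v):
    """Support of the best (s1, s2)-Hi-sparse approximation of v:
    the thresholding operator specialized to two levels."""
    z = v.reshape(N1, N2).copy()
    for blk in z:                                     # leaf pruning
        blk[np.argsort(np.abs(blk))[:N2 - s2]] = 0.0
    drop = np.argsort(np.linalg.norm(z, axis=1))[:N1 - s1]
    z[drop] = 0.0                                     # block pruning
    return np.flatnonzero(z.reshape(-1))

# Ground truth: one active block holding two well-separated nonzeros.
x = np.zeros(n)
b_idx = rng.integers(N1)
x[b_idx * N2 + rng.choice(N2, s2, replace=False)] = rng.uniform(1.0, 2.0, s2)
A = rng.standard_normal((m, n)) / np.sqrt(m)          # Gaussian sensing matrix
y = A @ x                                             # noiseless measurements

# HiHTP: hierarchical thresholding, then least squares on the support.
xh = np.zeros(n)
for _ in range(10):
    S = hi_support(xh + A.T @ (y - A @ xh))
    xh = np.zeros(n)
    xh[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
```

With these sizes the hierarchical support is typically identified within the first few iterations, after which the least-squares step reproduces $\mathbf{x}$ exactly.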
Similar to RIP, the explicit computation of HiRIP constants is a very difficult problem (even numerically) \cite{RothEtAl2018_ISIT}. However, the following bound on the HiRIP constant of a Kronecker-product sensing matrix in terms of the RIP constants of its factor matrices is available, which can be used to obtain a rigorous description of the overhead required to achieve (\ref{eq:HiRIP requirement}). \begin{lem} \label{thm:HiRIP bound}Consider a matrix $\tilde{\mathbf{A}}\triangleq\tilde{\mathbf{A}}_{1}\otimes\tilde{\mathbf{A}}_{2}$, with $\tilde{\mathbf{A}}_{k}\in\mathbb{C}^{M_{k}\times N_{k}},k=1,2$, acting on $3$-level block vectors $\tilde{\mathbf{x}}\in\mathbb{C}^{N'_{1}\cdot N_{2}'\cdot N_{3}'}$ with $N_{1}'N_{2}'N_{3}'=N_{1}N_{2}$, and let $\delta_{\mathbf{s}}(\tilde{\mathbf{A}})$ denote its HiRIP constant for a given $\mathbf{s}\triangleq(s_{1},s_{2},s_{3})$. If $N_{1}=N_{1}'$ and $N_{2}=N_{2}'N_{3}'$, it holds \begin{equation} \delta_{\mathbf{s}}(\tilde{\mathbf{A}})\leq\left(1+\delta_{s_{1}}(\tilde{\mathbf{A}}_{1})\right)\left(1+\delta_{s_{2}s_{3}}(\tilde{\mathbf{A}}_{2})\right)-1,\label{eq:HiRIP bound case 1} \end{equation} \noindent whereas, if $N_{1}=N_{1}'N_{2}'$ and $N_{2}=N_{3}'$, it holds \begin{equation} \delta_{\mathbf{s}}(\tilde{\mathbf{A}})\leq\left(1+\delta_{s_{1}s_{2}}(\tilde{\mathbf{A}}_{1})\right)\left(1+\delta_{s_{3}}(\tilde{\mathbf{A}}_{2})\right)-1.\label{eq:HiRIP bound case 2} \end{equation} \end{lem} \begin{IEEEproof} The bound of (\ref{eq:HiRIP bound case 1}) follows from the inequality \cite[Theorem 4]{RothEtAl2018_ISIT} \[ \delta_{\mathbf{s}}(\tilde{\mathbf{A}})\leq\left(1+\delta_{s_{1}}(\tilde{\mathbf{A}}_{1})\right)\left(1+\delta_{(s_{2},s_{3})}(\tilde{\mathbf{A}}_{2})\right)-1, \] \noindent and (\ref{eq:HIRIP<=00003DRIP}).
The bound of (\ref{eq:HiRIP bound case 2}) follows by the inequality \[ \delta_{\mathbf{s}}(\tilde{\mathbf{A}})\leq\left(1+\delta_{(s_{1},s_{2})}(\tilde{\mathbf{A}}_{1})\right)\left(1+\delta_{s_{3}}(\tilde{\mathbf{A}}_{2})\right)-1, \] \noindent which can be shown to hold by a straightforward extension of the proof of \cite[Theorem 4]{RothEtAl2018_ISIT}, again applying (\ref{eq:HIRIP<=00003DRIP}). \end{IEEEproof} The importance of this lemma is that it bounds the HiRIP constant of $\mathbf{A}$ in terms of the RIP constants of its constituent matrices $\bar{\mathbf{A}}_{\tau}$ and $\bar{\mathbf{A}}_{\theta}^{*}$. Any design for which this bound on the HiRIP constant of $\mathbf{A}$ is less than $1/\sqrt{3}$ is therefore sufficient to achieve the performance guarantees of Theorem \ref{thm:HiHTP_performance}. To this end, we propose the following design. \begin{defn}[System Design] Set $U\leq N/D$ and let $\mathbf{c}\in\mathbb{C}^{N}$ be an arbitrary sequence of unit modulus elements. For an arbitrary group of $U$ UEs, the set of its dedicated pilot subcarriers $\mathcal{N}_{p}$ is a randomly and uniformly selected subset of $[N]$ with cardinality $N_{p}$, whereas the set of observed antennas $\mathcal{M}_{p}$ is a randomly and uniformly selected subset of $[M]$ with cardinality $M_{p}$ (the same for all UE groups). The UE signature sequences are \begin{equation} \mathbf{c}_{u}=\mathbf{P}_{\mathcal{N}_{p}}\text{diag}\left(\left[1,e^{-j\frac{2\pi}{N}uD},\ldots,e^{-j\frac{2\pi}{N}uD(N-1)}\right]\right)\mathbf{c},\label{eq: frequency_shifted_signatures} \end{equation} for all $u\in[U]$. Note that under this design the joint assignment of pilot subcarriers to multiple UE groups is simplified to a random partition of the subcarriers. This design is motivated by the availability of rigorous RIP constant characterizations for matrices obtained by random sampling of rows of orthogonal matrices.
Indeed, it immediately follows from (\ref{eq:A_theta structure}) and the random selection of antennas that $\bar{\mathbf{A}}_{\theta}^{*}$ is a random sampling of the rows of the orthogonal matrix $(1/\sqrt{M_{p}})\mathbf{F}_{M,M}^{*}$, whereas direct substitution of (\ref{eq: frequency_shifted_signatures}) into (\ref{eq:A_tau structure}) results in \begin{equation} \mathbf{\bar{A}}_{\tau}=(1/\sqrt{N_{p}})\mathbf{P}_{\mathcal{N}_{p}}\text{diag}(\mathbf{c})\mathbf{F}_{N,UD},\label{eq:A_tau_with_proposed_design} \end{equation} \noindent i.e., $\mathbf{\bar{A}}_{\tau}$ is a random sampling of the rows from the first $UD$ columns of the orthogonal matrix $(1/\sqrt{N_{p}})\text{diag}(\mathbf{c})\mathbf{F}_{N,N}$. Equally important, this form of $\bar{\mathbf{A}}_{\theta}^{*}$ and $\bar{\mathbf{A}}_{\tau}$ allows for the efficient computation of the gradient-descent step in Algorithm \ref{alg:HiIHT/HiHTP-Channel-Estimation} by means of the fast Fourier transform (FFT). This makes HiIHT in particular especially attractive for application in systems with (very) large $M$ and/or $N$. We note that phase-shifted pilot sequence designs similar to (\ref{eq: frequency_shifted_signatures}) were also proposed in \cite{Leus optimal training MIMO OFDM,Swindlehurst2016_TSP,Al-Dhahir}, albeit under different contexts in terms of system model and/or assuming regularly-spaced pilot subcarriers. In addition, the well-known Zadoff-Chu sequences employed in cellular standards \cite{ZC_LTE} are compatible with the design of (\ref{eq: frequency_shifted_signatures}). The proposed design cannot be claimed to be optimal, in the sense that it is not obtained as the explicit solution of an optimization problem. However, as will be shown in Sec. VI, it achieves very good performance and, equally important, results in a sensing matrix $\mathbf{A}$ whose HiRIP constant can be analytically characterized as a function of $N_p$ and $M_p$.
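The algebra behind (\ref{eq:A_tau_with_proposed_design}) can also be confirmed numerically: stacking the per-UE blocks of (\ref{eq:A_tau structure}) with the signatures of (\ref{eq: frequency_shifted_signatures}) reproduces the row-sampled, $\mathbf{c}$-modulated DFT. All sizes in the sketch below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, D, U, Np = 24, 4, 6, 10                  # requires U <= N/D
Fn = np.fft.fft(np.eye(N))
A_tau = Fn[:, :D]                           # F_{N,D}
sel = np.sort(rng.choice(N, Np, replace=False))
P = np.eye(N)[sel]                          # sampling matrix P_{N_p}
c = np.exp(2j * np.pi * rng.random(N))      # unit-modulus base sequence

blocks = []
for u in range(U):
    shift = np.exp(-2j * np.pi * u * D * np.arange(N) / N)
    c_u = P @ (shift * c)                   # phase-shifted signature of UE u
    blocks.append(np.diag(c_u) @ P @ A_tau) # u-th block of A_tau-bar
A_bar = np.hstack(blocks) / np.sqrt(Np)

# Row-sampled, c-modulated DFT with N x UD first columns.
target = (P @ np.diag(c) @ Fn[:, :U * D]) / np.sqrt(Np)
assert np.allclose(A_bar, target)
```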
\end{defn} This characterization, in combination with the performance guarantees of Theorem \ref{thm:HiHTP_performance}, allows for rigorous analytical insights on the overhead requirements for reliable channel estimation, as stated in the following result. \begin{thm} \label{thm:required overhead} Let $\delta_{\tau}>0$, $\delta_{\theta}>0$ be two arbitrary numbers that satisfy $\delta_{\tau}+\delta_{\theta}+\delta_{\tau}\delta_{\theta}<1/\sqrt{3}$. With the proposed design and with a probability greater than $1-M^{-\log^{3}(M)}-N^{-\log^{3}(N)}$, the HiIHT/HiHTP algorithm performance is as described in Theorem \ref{thm:HiHTP_performance} provided that \begin{align} N_{p} & \geq\min\left\{ 3C\delta_{\tau}^{-2}K_{V}K_{L}\log^{4}(N),N\right\} ,\label{eq:O_tau min}\\ M_{p} & \geq\min\left\{ 9C\delta_{\theta}^{-2}VL\log^{4}(M),M\right\} ,\label{eq:O_theta min} \end{align} under the F-S option, or \begin{align} N_{p} & \geq\min\left\{ 9C\delta_{\tau}^{-2}VL\log^{4}(N),N\right\} ,\label{eq:O_tau min-1}\\ M_{p} & \geq\min\left\{ 3C\delta_{\theta}^{-2}K_{L}\log^{4}(M),M\right\} ,\label{eq:O_theta min-1} \end{align} \noindent under the S-F option, where $C>0$ is a universal constant. \end{thm} \begin{IEEEproof} Please see Appendix \ref{sec:proof of required overhead}. \end{IEEEproof} The following remarks are in order: \begin{itemize} \item Both F-S and S-F options require an overhead $N_{p}M_{p}$ that is proportional to $VL$ (assuming $M,N\gg L$ and $K_{V},K_{L}$ independent of $V$ and $L$), similarly to the universal bound of (\ref{eq:universal overhead bound of CS recovery}). Of course, using $N_{p}=N$ and $M_{p}=M$ will result in the best performance in the presence of noise; however, this would be achieved with an overly large overhead cost. \item There is flexibility in distributing the overhead over the frequency and space dimensions by changing the values of $\delta_{\tau}$ and $\delta_{\theta}$ in Theorem \ref{thm:required overhead}.
The minimum pilot overhead ($N_p$) is achieved with $\delta_{\tau}=1/\sqrt{3}-\epsilon$, for some arbitrarily small $\epsilon>0$, and $\delta_{\theta}$ sufficiently small, resulting in $M_{p}=M$, i.e., all antennas are utilized, whereas the minimum number of observed antennas is achieved with $\delta_{\theta}=1/\sqrt{3}-\epsilon$ and $\delta_{\tau}$ sufficiently small, with all subcarriers utilized for training. \item The scaling laws for $N_{p}$ and $M_{p}$ are different between the F-S and S-F option due to the different hierarchical sparsity properties for $\mathbf{x}$ corresponding to each of these (see discussion in Sec. IV-A). Interestingly, under the F-S option and assuming that $K_{V}$ and $K_{L}$ are independent of $V$ and $L$,\emph{ $N_{p}$ is independent of the number of channel paths $L$ and active UEs $V$}, which is particularly appealing as it implies a robust pilot design without a need for pilot reconfiguration with changing $L$ and/or $V$. Of course, one expects that performance will degrade with increasing $L$ and/or $V$ with a fixed $N_{p}$; however, as long as (\ref{eq:O_theta min}) holds, this degradation is expected to be graceful in the sense of achieving a bounded estimation error, as also verified in the numerical results of Sec. VI. Similar conclusions hold for the S-F option, this time with $M_{p}$ being independent of $L$ and $V$. \item {The result of Theorem \ref{thm:required overhead}, even though only sufficient, provides a much better indication of the minimum possible overhead requirements than the one provided by conventional (unstructured) CS theory. Indeed, under the conventional CS treatment, a sufficient condition to achieve reliable channel estimation is $\delta_{cVL}(\mathbf{A})<\delta_{\text{RIP}}$, where the values of $c>0$ and $\delta_{\text{RIP}}>0$ depend on the considered estimation algorithm \cite{MathIntroToCS}.
For Kronecker-type sensing matrices $\tilde{\mathbf{A}}=\tilde{\mathbf{A}}_1 \otimes \tilde{\mathbf{A}}_2$, it is known that $\delta_{s}(\tilde{\mathbf{A}}) \geq \max\{\delta_{s}(\tilde{\mathbf{A}}_1), \delta_{s}(\tilde{\mathbf{A}}_2) \}$, for any $s$ \cite{Jokar Kronecker, Duarte Kronecker}, which for the massive MIMO channel estimation problem implies that the pilot design should be such that $\delta_{cVL}(\mathbf{A}_\tau)<\delta_{\text{RIP}}$ \emph{ and } $\delta_{cVL}(\mathbf{A}_\theta)<\delta_{\text{RIP}}$. For the training sequence design considered above, it is easy to show that in order to achieve this condition \emph{both} $N_p$ and $M_p$ should scale proportionally to $VL$ (up to logarithmic factors), irrespective of $c$ and $\delta_{\text{RIP}}$. In contrast, Theorem \ref{thm:required overhead} reveals that \emph{only one} of $N_p$ and $M_p$ needs to scale proportionally to $VL$ (up to logarithmic factors). } \item The overhead requirement of Theorem \ref{thm:required overhead} is a sufficient condition for the application of the HiIHT/HiHTP algorithms. Therefore, this value may be greater than the necessary and sufficient overhead requirement when a more sophisticated and more complex algorithm, such as maximum likelihood estimation, is employed. \item The HiIHT/HiHTP algorithm description, performance analysis, and overhead requirements described in this section are \emph{independent} of the statistics of UE channels as well as noise. The only assumption considered is that each UE channel consists of $L$ paths with on-grid values for angles and delays. \end{itemize} As the pilot overhead reduction is critical towards increasing the system capacity, i.e., accommodating more UEs and/or increasing per-UE rates, it is clear that the F-S option is preferable as $N_{p}$ does not scale with $V$ and $L$ (assuming that $K_V$ and $K_L$ are also independent of $V$, $L$).
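As an illustration of the flexibility in splitting the $(\delta_{\tau},\delta_{\theta})$ budget, the sufficient bounds (\ref{eq:O_tau min})--(\ref{eq:O_theta min}) can be evaluated numerically. In the Python sketch below, the universal constant $C$ of the theorem is unspecified, so the value used is an arbitrary placeholder chosen only to keep the numbers below the clamping values $N$ and $M$; the function name and all parameter choices are illustrative.

```python
import numpy as np

def sufficient_overhead(delta_tau, delta_theta, N, M, K_V, K_L, V, L, C=1e-4):
    """Evaluate the sufficient (N_p, M_p) pair of the theorem under the F-S
    option. C is the unspecified universal constant of the theorem; the
    default here is a placeholder for illustration only, not its true value."""
    assert delta_tau + delta_theta + delta_tau * delta_theta < 1 / np.sqrt(3)
    N_p = 3 * C * delta_tau ** -2 * K_V * K_L * np.log(N) ** 4
    M_p = 9 * C * delta_theta ** -2 * V * L * np.log(M) ** 4
    return min(N_p, N), min(M_p, M)       # the theorem clamps at N and M

N, M, K_V, K_L, V, L = 1024, 256, 1, 1, 4, 3
budget = 1 / np.sqrt(3)
for delta_tau in (0.1, 0.3, 0.5):
    # spend (almost) all of the remaining budget on delta_theta
    delta_theta = (budget - delta_tau) / (1 + delta_tau) - 0.02
    N_p, M_p = sufficient_overhead(delta_tau, delta_theta, N, M, K_V, K_L, V, L)
    print(f"delta_tau={delta_tau:.2f}: N_p={N_p:.1f}, M_p={M_p:.1f}")
```

The printed pairs only illustrate the $\delta_{\tau}^{-2}$ vs.\ $\delta_{\theta}^{-2}$ trade-off (pilot subcarriers traded against observed antennas); the absolute values are meaningless without the true constant $C$.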
In particular, we have the following sufficient pilot overhead requirement obtained by setting $\delta_{\tau}=1/\sqrt{3}-\epsilon$ and $\delta_{\theta}$ sufficiently small, $\epsilon \rightarrow 0$, in Theorem \ref{thm:required overhead}. \begin{cor} Towards achieving reliable channel estimation with minimum pilot overhead, the F-S option should be selected with $M_{p}=M$ (full antenna array utilization) and \[ N_{p}\geq CK_{V}K_{L}\log^{4}(N), \] \noindent where $C$ is a universal constant. \end{cor} It is noted that the independence of the scaling behavior of the pilot overhead from $L$ and $V$ is only possible by the utilization of a massive number of antennas. In a loose sense, under the F-S option, we shift the estimation burden to the spatial domain and corresponding measurements, thus allowing for a minimum overhead in the frequency domain. It is easy to see that when $M=1$, only the S-F option is available, which results in a pilot overhead that scales with $L$ and $V$. \section{Extension to Off-Grid Channel Parameters} The previous sections considered on-grid channel parameters, which can be assumed to be a good approximation in the regime of asymptotically large $M$ and $N$. The fundamental benefit offered by this assumption is that it naturally introduces the delay-angular channel representation according to (\ref{eq:channel_delay_angle_decomposition}) and (\ref{eq:Wongrid}) that is exploited for algorithm development and system design. Considering an arbitrary UE transfer matrix $\mathbf{H}\in\mathbb{C}^{N\times M}$ corresponding to a channel with off-grid parameters, a unique delay-angular channel representation as in (\ref{eq:channel_delay_angle_decomposition}) exists only by treating the delay spread as equal to the OFDM symbol duration $T_{s}$, i.e., $D=N$, even if the actual spread is smaller than this value.
Under this assumption, it follows from (\ref{eq:channelmatrix}) and (\ref{eq:channel_delay_angle_decomposition}) that the delay-angular representation equals \begin{align} \mathbf{X} & =\mathbf{F}_{N,N}^{-1}\mathbf{H}(\mathbf{F}_{M,M}^H)^{-1}.\label{eq:delay-angular_representation_off_grid} \end{align} An example of $\mathbf{X}$ for a channel with off-grid parameters is shown in Fig. \ref{fig grid and off-grid W} (right panel). It can be seen that, in contrast to the on-grid case, the energy of each path is leaked over \emph{all} elements of $\mathbf{X}$, rendering it non-sparse. However, most of the energy of each path is concentrated on a few elements of $\mathbf{X}$, suggesting that the latter can be approximated by a sparse matrix, which, as in the on-grid case, can be exploited in the channel estimation procedure. This approximate sparsity of $\mathbf{X}$ in the off-grid case is confirmed by the following result. \begin{thm} \label{thm:sparsification_in_off_grid_case}Let $L_{1}\leq\frac{N-1}{2}$, $L_{2}\leq\frac{M-1}{2}$ be strictly positive integers. Setting $D=N$, the delay-angle representation $\mathbf{X}\in\mathbb{C}^{N\times M}$ of any channel with $L$ paths of arbitrary (off-grid) delay and angle values can be approximated by a sparse matrix $\mathbf{X}_{\text{\emph{sp}}}\in\mathbb{C}^{N\times M}$ that consists of at most $L(2L_{1}+1)(2L_{2}+1)$ non-zero elements with an error \begin{equation} \left\Vert \mathbf{X}-\mathbf{X}_{\text{\emph{sp}}}\right\Vert \leq\left(\frac{1}{\sqrt{L_{1}}}+\frac{1}{\sqrt{L_{2}}}\right)\sum_{p=0}^{L-1}\left|\rho_{p}\right|,\label{eq:off_grid_sparsification_error_bound} \end{equation} \noindent where $\rho_{p}$ is the complex gain of the $p$-th channel path. \end{thm} \begin{IEEEproof} Please see Appendix \ref{sec:Proof_of_sparsification_if_off_grid_case}.
\end{IEEEproof} The result implies that by choosing the parameters $L_{1}$ and $L_{2}$ sufficiently large, the delay-angular representation of any channel with $L$ off-grid paths can be approximated with small error by the delay-angular representation of a channel with $L(2L_{1}+1)(2L_{2}+1)$ on-grid paths. This increase of on-grid equivalent paths is due to the so-called basis mismatch error \cite{chi2011sensitivity} and can be viewed as the cost of representing the channel on the fixed basis corresponding to the dictionary matrices $\mathbf{A}_{\tau}$, $\mathbf{A}_{\theta}$. Since an accurate on-grid representation is available, the HiIHT/HiHTP algorithms operating under the on-grid assumption can be employed to identify the $L(2L_{1}+1)(2L_{2}+1)$ equivalent on-grid paths per UE. In particular, the proof of Theorem \ref{thm:sparsification_in_off_grid_case} considers a delay-angular representation where most of each path's energy spills over $2L_{1}+1$ consecutive on-grid delay values and $2L_{2}+1$ consecutive on-grid angle values. The example of Fig. \ref{fig grid and off-grid W} identifies these energy regions for each path assuming $L_{1}=L_{2}=1$. By the same arguments discussed in the on-grid case and considering the F-S option, the sparse vector $\mathbf{x}$ to be estimated according to the model (\ref{eq:linear model}) by HiIHT/HiHTP is now $(VL(2L_{2}+1),K_{V},K_{L}(2L_{1}+1))$-Hi-sparse in $\mathbb{C}^{M\cdot U\cdot D}$.
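The sparse approximation underlying Theorem \ref{thm:sparsification_in_off_grid_case} can also be probed numerically: keeping the $2J+1$ largest-modulus entries of the Dirichlet-kernel vector $\mathbf{u}_{K}(\omega)$ defined in Appendix \ref{sec:Proof_of_sparsification_if_off_grid_case} leaves a squared error of at most $1/J$. The Python sketch below checks this for a toy $K$ and random off-grid parameters (all names are illustrative).

```python
import numpy as np

def u(K, w):
    """Dirichlet-kernel vector u_K(w), handling the removable singularity x=0."""
    x = w - np.arange(K) / K
    out = np.ones(K, dtype=complex)               # limiting value 1 at x = 0
    nz = ~np.isclose(np.sin(np.pi * x), 0.0)
    out[nz] = np.sin(np.pi * K * x[nz]) / (K * np.sin(np.pi * x[nz]))
    return out * np.exp(-1j * np.pi * (K - 1) * x)

def sparsify(v, J):
    """Keep the 2J+1 largest-modulus entries of v, zero out the rest."""
    v_sp = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-(2 * J + 1):]
    v_sp[keep] = v[keep]
    return v_sp

rng = np.random.default_rng(1)
K = 256
for J in (1, 2, 4, 8):
    errs = [np.linalg.norm(u(K, w) - sparsify(u(K, w), J)) ** 2
            for w in rng.random(200)]             # random off-grid parameters
    assert max(errs) <= 1.0 / J                   # per-vector bound of the proof
    print(f"J={J}: worst squared error {max(errs):.4f} <= {1.0 / J:.4f}")
```

In practice the observed worst-case error is well below the $1/J$ bound, which is consistent with the bound being sufficient rather than tight.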
Note that this approach will introduce the following four errors compared to the on-grid case discussed in the previous sections: (a) channel representation error due to the consideration of $\mathbf{X}_{\text{sp}}\in\mathbb{C}^{N\times M}$ instead of $\mathbf{X}\in\mathbb{C}^{N\times M}$, as described above, for any UE channel, (b) channel representation error due to the algorithms estimating a $D\times M$ (instead of $N\times M$) delay-angular matrix representation for each UE (implying the ``missing'' $N-D$ columns are estimated as zeros), (c) channel estimation error due to the channel representation error, which is treated as an additional noise term by the algorithm, and (d) channel estimation error due to the increase of unknown parameters to be estimated. Note that these error terms are controlled to a large extent by the design parameters $L_{1}$ and $L_{2}$. These should be selected to balance two conflicting requirements: reducing the sparse channel representation error (large values for $L_{1}$ and $L_{2}$) and reducing the number of parameters to be estimated (small values for $L_{1}$ and $L_{2}$). We numerically investigate the performance in the off-grid case and the selection of $L_{1}$, $L_{2}$ in Sec. VI. \section{Numerical Results} Numerical examples are presented in this section to demonstrate the merits of the proposed hierarchical-sparsity-based framework for channel estimation, in particular its effectiveness in achieving good estimation accuracy with limited pilot overhead $N_{p}$. In all cases, an OFDM system with $N=1024$ subcarriers and a BS with a ULA equipped with $M=256$ antennas are considered.
{Note that, for these system parameters, the channel transfer matrix of each UE consists of $MN = 262144$ elements, a huge number that imposes insurmountable computational challenges to conventional estimation approaches in addition to performance and overhead issues.} For all UEs, the channel consists of $L$ paths and has a maximum delay spread equal to $1/4$ of the useful OFDM symbol period. The channel path gains for each UE were generated as i.i.d. zero mean, complex Gaussian variables with a total power $\sum_{p=0}^{L-1}\mathbb{E}(|\rho_{p}|^{2})=1$, resulting in an average {received} power per subcarrier also equal to $1$ for all UEs (note that the pilot signatures of the proposed design consist of unit-modulus symbols). The elements of the noise matrix $\mathbf{Z}$ in (\ref{eq:observed_signal}) were generated as i.i.d. zero mean, complex Gaussian random variables of variance $1/\mathsf{SNR}$, where $\mathsf{SNR}$ denotes the average {received} signal-to-noise ratio per subcarrier. Towards minimizing the pilot overhead, {the F-S option will be considered throughout with $M_{p}=M$, i.e., all antenna signals are used, unless stated otherwise}. In most examples, the HiIHT algorithm is employed due to its simple implementation. The iterations of HiIHT and HiHTP terminate when the estimated support between two consecutive iterations remains the same or when ten iterations have been performed. \subsection{The On-Grid Case} \emph{Single User Case}: The single-UE case is considered first (i.e., $U=V=1$). The path angles $\{\theta_{p}\}_{p=0}^{L-1}$ are generated independently and uniformly over the angle sampling grid determined by $\mathbf{A}_{\theta}$, however, no two paths are allowed to have the same angle, which is a reasonable assumption for the asymptotic $M$ case. The path delays $\{\tau_{p}\}_{p=0}^{L-1}$ are generated independently and uniformly over the delay sampling grid determined by $\mathbf{A}_{\tau}$ with $D=N/4=256$. 
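The single-UE channel generation just described can be sketched as follows (Python, toy sizes; the exact normalization of the dictionaries $\mathbf{A}_{\tau}$, $\mathbf{A}_{\theta}$ is an assumption of this sketch, not taken from the paper's definitions).

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, D, L = 64, 16, 16, 3        # toy sizes; the paper uses N=1024, M=256, D=256

# assumed dictionary convention: DFT columns for delays, unitary DFT for angles
A_tau = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(D)) / N)
A_theta = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M) / np.sqrt(M)

angles = rng.choice(M, L, replace=False)     # distinct on-grid path angles
delays = rng.choice(D, L, replace=True)      # on-grid path delays (may repeat)
# i.i.d. complex Gaussian gains with total expected power 1
gains = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)

X = np.zeros((D, M), dtype=complex)          # L-sparse delay-angular representation
X[delays, angles] = gains
H = A_tau @ X @ A_theta.conj().T             # N x M channel transfer matrix

assert np.count_nonzero(X) == L
assert H.shape == (N, M)
```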
Note that this channel model corresponds to an $(L,1,1)$-Hi-sparse vector $\mathbf{x}\in \mathbb{C}^{M\cdot U \cdot D}$ in (\ref{eq:linear model}). Figure \ref{fig:MSE_vs_O_tau} depicts the per-element mean squared error (MSE) $\frac{1}{NM}\mathbb{E}(\|\mathbf{H}-\hat{\mathbf{H}}\|^{2})$ of the channel matrix estimate $\hat{\mathbf{H}}\triangleq\mathbf{A}_{\tau}\mathbf{\hat{X}}\mathbf{A}_{\theta}^{H}$, where $\hat{\mathbf{X}}$ is the estimate of the delay-angular channel representation provided by HiIHT, as a function of the normalized pilot overhead $N_{p}/N$ and for various values of $L$ (assumed known at the BS). The $\mathsf{SNR}$ was set equal to 10 dB. \begin{figure}[ptb] \centering\includegraphics[width=1\columnwidth]{MSE_vs_training_overhead_HiIHT_vs_IHT_SNR10dB_ongrid_various_L}\caption{Single UE MSE of HiIHT and IHT estimators as a function of pilot overhead (on-grid case, $N=1024,M=256,D=256,\mathrm{\mathsf{SNR}}=10\text{ dB}$).} \label{fig:MSE_vs_O_tau} \end{figure} It can be seen that HiIHT offers excellent estimation accuracy with a very small pilot overhead. For example, a normalized pilot overhead of around $10^{-2}$ is sufficient to achieve an MSE that is at least one order of magnitude less than the noise variance level $1/\mathsf{SNR}=10^{-1}$, which corresponds to the MSE achieved with $N_{p}=N$ and the naive channel estimate $\hat{\mathbf{H}}=\mathbf{Y}$. This pilot overhead should be compared with conventional (non-sparsity-exploiting) estimation approaches, which would require a normalized pilot overhead of approximately $D/N=0.25$ \cite{OFDM_chan_est_by_SVD} (see also discussion of Fig. \ref{fig:MSE_vs_O_tau_offgrid}). As expected, the performance of HiIHT degrades with increasing $L$ as the number of unknown parameters increases.
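The structural prior enters HiIHT only through its thresholding step, which replaces the plain top-$k$ projection of IHT by a level-wise selection. A minimal Python sketch of such an $(s_{1},s_{2},s_{3})$-Hi-sparse projection is given below; it is an illustration consistent with the hierarchy used here, not the authors' reference implementation.

```python
import numpy as np

def hi_threshold(x, dims, s):
    """Project x onto the (s1,s2,s3)-Hi-sparse vectors: keep the s3 largest-
    modulus entries inside every sub-block, then the s2 most energetic
    sub-blocks of every block, then the s1 most energetic blocks."""
    (N1, N2, N3), (s1, s2, s3) = dims, s
    X = x.reshape(N1, N2, N3).copy()
    for i in range(N1):
        for j in range(N2):                       # level 3: entries
            X[i, j, np.argsort(np.abs(X[i, j]))[:N3 - s3]] = 0
        sub_energy = np.sum(np.abs(X[i]) ** 2, axis=1)
        X[i, np.argsort(sub_energy)[:N2 - s2]] = 0  # level 2: sub-blocks
    block_energy = np.sum(np.abs(X) ** 2, axis=(1, 2))
    X[np.argsort(block_energy)[:N1 - s1]] = 0       # level 1: blocks
    return X.ravel()

rng = np.random.default_rng(3)
x = rng.standard_normal(4 * 5 * 6)
y = hi_threshold(x, (4, 5, 6), (2, 2, 3))
assert np.count_nonzero(y) <= 2 * 2 * 3
assert np.all((y == 0) | (y == x))     # surviving entries are unchanged
```

Replacing this operator by a plain top-$k$ projection over all entries recovers the thresholding step of standard IHT.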
{However, note that (a) this degradation is rather graceful, i.e., the MSE remains bounded, and (b) the minimum required overhead to achieve a bounded MSE is independent of $L$, in line with the remarks made in the discussion of Theorem \ref{thm:required overhead}. Note also that a reasonable MSE is achieved even with $L>N_{p}$. This reflects the advantage of observing multiple antennas and is in line with the flexibility in distributing overhead indicated by Theorem \ref{thm:required overhead}. Of course, in the single antenna case ($M=1$), reliable channel estimation can only be achieved with $N_p\geq L$.} As a comparison, the performance of the standard IHT algorithm is depicted in Fig. \ref{fig:MSE_vs_O_tau}. A ``phase transition'' phenomenon is clearly seen: IHT requires a minimum pilot overhead, at least $6$ times greater than that needed by HiIHT, in order to achieve an MSE less than $10^{-2}$. Also, this minimum overhead increases with $L$. This clearly demonstrates the advantage of exploiting the hierarchical channel sparsity in the channel estimation procedure, which allows for reliable and robust performance in the small pilot overhead regime. For sufficiently large training overhead, the performances of IHT and HiIHT are the same, implying that knowledge of the sparsity structure plays no role in this regime. This is in line with the well-known fact from estimation theory that \emph{a priori} information (in this case, the hierarchical structure of sparsity) becomes irrelevant once sufficiently many observations have been obtained. \emph{Multiuser Case}: The multiuser case is considered next. Note that for the scenario with $D=N/4$ considered here, up to $U=4$ UEs can be supported per UE group by the pilot sequence assignment scheme of Sec. IV. With $\mathsf{SNR}=10$ dB and UE channels with $L=3$ paths generated independently as described in the single UE case example, Fig.
\ref{fig:MSE_vs_O_tau_MC} demonstrates the total MSE, defined as $\frac{1}{MN}\sum_{u=0}^{U-1}\mathbb{E}(\|\mathbf{H}_{u}-\hat{\mathbf{H}}_{u}\|^{2})$, for various numbers of randomly and uniformly selected active UEs $V\leq U$. Note that, in the MSE formula, $\mathbf{H}_{u}$ is equal to zero if the UE is not active. \begin{figure}[ptb] \centering\includegraphics[width=1\columnwidth]{MSE_vs_training_overhead_HiIHT_multi_user_1_slot_SNR10dB_per_UE_MSE_new_model_FS_SF}\caption{Multiuser MSE of HiIHT estimator as a function of pilot overhead (on-grid case, $N=1024,M=256,D=256,L=3,\mathrm{\mathsf{SNR}}=10\text{ dB}$).} \label{fig:MSE_vs_O_tau_MC} \end{figure} When only one UE is active, i.e., $V=1$, a slightly larger pilot overhead compared to the single UE case ($U=V=1$) is required to achieve the same MSE performance. This overhead cost can be attributed to the uncertainty at the BS as to which UE is actually active. By increasing $V$, a degradation of MSE performance is observed that is proportional to $V$ due to the corresponding increase of unknown parameters to be estimated. {However, the minimum required overhead to achieve a bounded channel estimation error is independent of $V$, as guaranteed by Theorem \ref{thm:required overhead}}. {Figure \ref{fig:MSE_vs_O_tau_MC} also depicts the MSE performance under the S-F option. For this case, the channel vector $\mathbf{x}$ in (\ref{eq:linear model}) was generated as a $(V,L,1)$-Hi-sparse vector in $\mathbb{C}^{U\cdot D \cdot M}$ with path gains having the same statistics as in the channel model considered under the F-S option. It can be seen that this approach (a) requires an increased training overhead to achieve reliable channel estimation and (b) exhibits a minimum overhead that increases with $V$. Both these observations are consistent with Theorem \ref{thm:required overhead}.} {\emph{Unknown $L$}: Both the analysis and the previous results assume knowledge of the number of channel paths $L$.
Figure \ref{fig:mismatched_L} shows the performance of HiIHT assuming $\hat{L}$ paths instead of $L$. A case with $U=4, V=2$, and $N_p=15$ pilot subcarriers is considered, with the rest of the system parameters the same as above. As expected, there is degradation in MSE when $\hat{L}\neq L$. This degradation is much more prominent when $\hat{L}< L$, whereas $\hat{L}> L$ results in a moderate degradation. This suggests that setting $\hat{L}$ to an upper bound (worst case) value could be a practical approach when $L$ is unknown. Another approach is to modify HiIHT/HiHTP so that it also provides an estimate of $L$, as described in, e.g., \cite{unknown_L1, unknown_L2}.} \begin{figure}[ptb] \centering\includegraphics[width=1\columnwidth]{HiIHT_mismatched_L_v2}\caption{ {Performance of HiIHT under mismatched $L$ ($N=1024,M=256,D=256, N_p=15, U=4,V=2,\mathsf{SNR}=10\text{ dB}$).}} \label{fig:mismatched_L} \end{figure} {\emph{Comparison with Orthogonal Matching Pursuit (OMP)}: In this example, we compare the proposed HiIHT/HiHTP algorithms with the commonly employed OMP algorithm \cite{MathIntroToCS}, which forms the basis for many previously proposed massive MIMO channel estimation schemes \cite{Heath JSAC 2017, Heath how many measurements}. OMP is a greedy, iterative algorithm of roughly the same complexity as HiHTP. It ignores any structural properties of sparsity, i.e., it treats the unknown vector in (\ref{eq:linear model}) as $VL$-sparse, instead of $(VL,1,1)$-Hi-sparse (the F-S option is considered). Simulations (not shown here) with $M_p=M$ showed that OMP achieved the exact same performance as HiIHT and HiHTP. Towards identifying performance differences, we considered a case with $M_p=M/4=64$, with the remaining system parameters the same as above; the results are shown in Fig. \ref{fig:OMP}. It can be seen that, in this scenario, OMP is a competitive alternative to HiHTP, whereas HiIHT performs slightly worse but is significantly less complex.
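For reference, a textbook variant of the OMP baseline used in this comparison is sketched below in Python; the toy problem sizes are chosen so that noiseless exact recovery succeeds and are unrelated to the system parameters above.

```python
import numpy as np

def omp(A, y, k):
    """Textbook OMP: greedily add the column most correlated with the current
    residual, then re-fit on the active set by least squares. Treats the
    unknown as plain k-sparse, ignoring any hierarchical structure."""
    support = []
    residual = y.astype(complex)
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(4)
m, n, k = 60, 100, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                   # unit-norm columns
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k) * rng.choice([-1, 1], k)
x_hat = omp(A, A @ x0, k)
assert np.allclose(x_hat, x0, atol=1e-6)         # exact recovery (noiseless case)
```

Swapping the correlation-based atom selection for a hierarchy-aware selection is the conceptual bridge from OMP to the structured algorithms considered here.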
The good performance of OMP in estimating hierarchically sparse vectors, even though it does not explicitly take this property into account, was previously identified and analyzed in \cite{OMP_hisparse}. This close correspondence of OMP with HiHTP/HiIHT suggests that the analytical results in this paper may have broader applicability than the HiHTP/HiIHT algorithms. We leave this topic for future investigation.} \begin{figure}[ptb] \centering\includegraphics[width=1\columnwidth]{OMP_vs_HiHTP_vs_HiIHT}\caption{ {Performance comparison of HiHTP, HiIHT, and OMP ($N=1024,M=256,D=256, L=3, M_p=64, U=4,V=2,\mathsf{SNR}=10\text{ dB}$).}} \label{fig:OMP} \end{figure} \subsection{The Off-Grid Case} Figure \ref{fig:MSE_vs_O_tau_offgrid} demonstrates the single UE MSE performance for the off-grid case, where the channel is generated as described in the on-grid case with $L=3$, however, with the path angles and delays uniformly and independently distributed over the continuous domains $[0,1)$ and $[0,T_{S}/4)$, respectively. The $\mathsf{SNR}$ was set to $10$ dB. As can be seen, the performance of HiIHT depends strongly on the choice of $L_1$ and $L_2$. Small values of these parameters result in the estimation of a small number of unknown parameters by the channel estimator, however, at the cost of a large sparse channel approximation error. Moreover, the optimal values of $L_1$, $L_2$ are proportional to the pilot overhead, which is expected as increasing the latter allows for the reliable estimation of more parameters. In any case, the basis mismatch effect results in a significant performance degradation compared to the idealized, on-grid examples presented above. However, an MSE almost an order of magnitude below the noise level is achievable, rendering the effect of the channel estimation error negligible at the decoding stage.
For comparison, the performance of the conventional linear minimum mean squared error (LMMSE) estimator with equally-spaced pilot subcarriers is depicted in Fig. \ref{fig:MSE_vs_O_tau_offgrid}. The LMMSE estimator utilizes only information about the correlation function $\mathbb{E}([\mathbf{H}]_{n,m}[\mathbf{H}]_{n^{\prime},m^{\prime}}^{*})$ for $n,n^{\prime}\in[N]$, $m,m^{\prime}\in[M]$. The latter can be obtained by a straightforward generalization of the approach shown in \cite{OFDM_chan_est_by_SVD} for the single receive antenna case. It can be seen that the LMMSE estimator performs very poorly, requiring at least $N_{p}/N\approx0.25$ in order to achieve an MSE that is equal to the noise level. This is due to the correlation function not capturing the sparsity properties of the channel. Figure \ref{fig:MSE_vs_O_tau_offgrid} also shows the performance of the standard IHT algorithm operating under the assumption of $LK_{\tau}K_{\theta}$ on-grid paths, i.e., the same number of on-grid paths considered by HiIHT. It can be seen that IHT provides a reasonable performance only for large pilot overhead (greater than $0.4$). In that regime, it actually provides a better MSE than HiIHT, suggesting that the hierarchical sparsity structure assumed by HiIHT is not accurate, resulting in an additional error term introduced in the estimate due to this mismatch. However, even though not accurate, the assumption of hierarchical sparsity is beneficial in the small pilot overhead regime. \begin{figure}[ptb] \centering\includegraphics[width=1\columnwidth]{MSE_vs_training_overhead_HiIHT_vs_IHT_SNR10dB_offgrid}\caption{Single UE MSE of HiIHT, IHT and LMMSE estimators as a function of pilot overhead (off-grid case, $N=1024,M=256,D=256,\mathrm{SNR}=10\text{ dB}$).} \label{fig:MSE_vs_O_tau_offgrid} \end{figure} \section{Conclusion} The problem of channel estimation for multiuser wideband massive MIMO via a compressive sensing approach was investigated.
Under the assumption of on-grid channel parameters, a problem reformulation that highlights the hierarchical sparsity property of the wireless channel was considered. This property was taken into account for the design of low-complexity channel estimation algorithms. Using the HiRIP analysis framework, rigorous performance guarantees for these algorithms were obtained that, in turn, provide rules for UE pilot signature design and the selection of pilot subcarriers. A characterization of the sufficient pilot overhead required to achieve reliable channel estimation was provided, revealing that, in the massive MIMO regime, the number of pilot subcarriers is independent of the number of active UEs and channel paths per UE. These observations were also verified numerically, with the proposed algorithm showing significant performance gain over conventional CS approaches of similar complexity. Application of the algorithms in multiple-measurement and off-grid channel parameter settings was discussed. For the latter case, which is valid in the finite antenna and bandwidth regime, even though there exists an error due to model mismatch, the performance of the proposed algorithms is still significantly better than that of conventional CS as well as the standard LMMSE approach. \appendices{} \section{\label{sec:proof of Eq 14} Proof of (\ref{eq:HIRIP<=00003DRIP})} {Let $\{(N_i,s_i)\}_{i=1}^{\ell}$ be a set of $\ell$ tuples of integers such that $N_i \geq s_i\geq 1$ for all $i$. Denote by $\mathcal{S}_{(s_1,s_2,\ldots,s_\ell)}\subseteq \mathbb{C}^{N_1\cdot N_2 \cdots N_\ell}$ the set of all $(s_1,s_2,\ldots,s_\ell)$-Hi-sparse vectors in $\mathbb{C}^{N_1\cdot N_2 \cdots N_\ell}$, and by $\mathcal{S}_{s_1s_2\cdots s_\ell}\subseteq \mathbb{C}^{N_1 N_2 \cdots N_\ell}$ the set of all $s_1s_2\cdots s_\ell$-sparse vectors in $\mathbb{C}^{N_1 N_2 \cdots N_\ell}$. Note that $\mathcal{S}_{(s_1,s_2,\ldots,s_\ell)} \subseteq \mathcal{S}_{s_1 s_2 \cdots s_\ell}$.
For an arbitrary matrix $\tilde{\mathbf{A}} \in \mathbb{C}^{N_0 \times (N_1 N_2 \cdots N_\ell)}$, $N_0\geq 1$, it follows from the definition of the HiRIP and RIP constants that \begin{align*} \delta_{(s_1, s_2, \ldots, s_\ell)}(\tilde{\mathbf{A}}) &= \underset{\mathbf{x} \in \mathcal{S}_{(s_1,s_2,\ldots,s_\ell)}}{\max} \frac{\left| \| \tilde{\mathbf{A}} \mathbf{x} \|^2 - \|\mathbf{x}\|^2 \right|}{\|\mathbf{x}\|^2}\\ &\leq \underset{\mathbf{x} \in \mathcal{S}_{s_1 s_2 \cdots s_\ell}}{\max} \frac{\left| \| \tilde{\mathbf{A}} \mathbf{x} \|^2 - \|\mathbf{x}\|^2 \right|}{\|\mathbf{x}\|^2}\\ &= \delta_{s_1 s_2 \cdots s_\ell}(\tilde{\mathbf{A}}). \end{align*}} \section{\label{sec:proof of required overhead}Proof of Theorem \ref{thm:required overhead}} \noindent Let $\tilde{\mathbf{x}}\neq\mathbf{0}\in\mathbb{C}^{UD}$ denote the $s$-sparse vector for which $\left|\|\bar{\mathbf{A}}_{\tau}\tilde{\mathbf{x}}\|^{2}-\|\tilde{\mathbf{x}}\|^{2}\right|=\delta_{s}(\bar{\mathbf{A}}_{\tau})\|\tilde{\mathbf{x}}\|^{2}$, where $\delta_{s}(\bar{\mathbf{A}}_{\tau})$ is the $s$-RIP constant of matrix $\bar{\mathbf{A}}_{\tau}$ given in (\ref{eq:A_tau_with_proposed_design}). Let $\tilde{\mathbf{x}}_{\text{ext}}\triangleq[\tilde{\mathbf{x}}^{T},\mathbf{0}^{T}]^{T}\in\mathbb{C}^{N}$ denote its zero-padded extension that is also $s$-sparse. Consider $\bar{\mathbf{A}}_{\tau,\text{ext}}\triangleq(1/\sqrt{N_{p}})\mathbf{P}_{\mathcal{N}_{p}}\text{diag}(\mathbf{c})\mathbf{F}_{N,N}$.
It holds \begin{align*} \delta_{s}(\bar{\mathbf{A}}_{\tau})\|\tilde{\mathbf{x}}\|^{2} &=\left|\|\bar{\mathbf{A}}_{\tau}\tilde{\mathbf{x}}\|^{2}-\|\tilde{\mathbf{x}}\|^{2}\right|\\ &=\left|\|\bar{\mathbf{A}}_{\tau,\text{ext}}\tilde{\mathbf{x}}_{\text{ext}}\|^{2}-\|\tilde{\mathbf{x}}_{\text{ext}}\|^{2}\right|\\ & \leq\underset{s\text{-sparse }\mathbf{p}\in\mathbb{C}^{N},\|\mathbf{p}\|=\|\tilde{\mathbf{x}}\|}{\max}\left|\|\bar{\mathbf{A}}_{\tau,\text{ext}}\mathbf{p}\|^{2}-\|\mathbf{p}\|^{2}\right|\\ &\leq\delta_{s}(\bar{\mathbf{A}}_{\tau,\text{ext}})\|\tilde{\mathbf{x}}\|^{2}, \end{align*} where $\delta_{s}(\bar{\mathbf{A}}_{\tau,\text{ext}})$ is the $s$-RIP constant of matrix $\bar{\mathbf{A}}_{\tau,\text{ext}}$, resulting in \begin{equation} \delta_{s}(\bar{\mathbf{A}}_{\tau})\leq\delta_{s}(\bar{\mathbf{A}}_{\tau,\text{ext}}),\text{ for all }s\leq UD.\label{eq:bound of RIP by extended matrix RIP} \end{equation} \noindent Now consider the F-S option, i.e., with $\mathbf{A}=\bar{\mathbf{A}}_{\theta}^{*}\otimes\bar{\mathbf{A}}_{\tau}$ acting on $\mathbf{x}\in\mathbb{C}^{M\cdot U\cdot D}$. It holds \begin{align*} &\delta_{(3VL,3K_{V},3K_{L})}(\mathbf{A})\\ \overset{(a)}{\leq} & \left(1+\delta_{3VL}\left(\bar{\mathbf{A}}_{\theta}^{*}\right)\right)\left(1+\delta_{9K_{V}K_{L}}\left(\bar{\mathbf{A}}_{\tau}\right)\right)-1\\ \overset{(b)}{\leq} & \left(1+\delta_{3VL}\left(\bar{\mathbf{A}}_{\theta}^{*}\right)\right)\left(1+\delta_{9K_{V}K_{L}}\left(\bar{\mathbf{A}}_{\tau,\text{ext}}\right)\right)-1, \end{align*} \noindent where $(a)$ follows from Theorem \ref{thm:HiRIP bound} and ($b)$ from (\ref{eq:bound of RIP by extended matrix RIP}). 
Therefore, a sufficient condition for (\ref{eq:HiRIP requirement}) to hold is \begin{align} 1/\sqrt{3} & >\left(1+\delta_{3VL}\left(\bar{\mathbf{A}}_{\theta}^{*}\right)\right)\left(1+\delta_{9K_{V}K_{L}}\left(\bar{\mathbf{A}}_{\tau,\text{ext}}\right)\right)-1.\label{eq:app_eq_1} \end{align} \noindent For any $\delta_{\tau}\in(0,1)$ and $\delta_{\theta}\in(0,1)$, set $N_{p}$ and $M_{p}$ as in (\ref{eq:O_tau min}) and (\ref{eq:O_theta min}), respectively. Noting that $\bar{\mathbf{A}}_{\theta}^{*}$ and $\bar{\mathbf{A}}_{\tau,\text{ext}}$ are obtained by random sampling of the rows of orthonormal matrices, it follows from \cite[Theorem 12.31]{MathIntroToCS} that $\delta_{9K_{V}K_{L}}\left(\bar{\mathbf{A}}_{\tau,\text{ext}}\right)<\delta_{\tau}$ and $\delta_{3VL}\left(\bar{\mathbf{A}}_{\theta}^{*}\right)<\delta_{\theta}$ with probability larger than $1-N^{-\log^{3}(N)}$ and $1-M^{-\log^{3}(M)}$, respectively, which results in an upper bound for the right-hand side expression of (\ref{eq:app_eq_1}) equal to $\delta_{\tau}+\delta_{\theta}+\delta_{\tau}\delta_{\theta}$. Selecting values for $\delta_{\theta}$ and $\delta_{\tau}$ such that this upper bound is less than $1/\sqrt{3}$ immediately implies (\ref{eq:HiRIP requirement}). The proof for the S-F case follows the exact same steps.
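For tiny problem sizes, RIP and HiRIP constants can be computed exactly by enumerating all supports, which allows a direct numerical check of both (\ref{eq:HIRIP<=00003DRIP}) and the Kronecker bound of Theorem \ref{thm:HiRIP bound}. The Python sketch below uses arbitrary near-orthonormal factor matrices chosen only for illustration.

```python
import numpy as np
from itertools import combinations, product

def rip(A, s):
    """Exact s-RIP constant via exhaustive support enumeration (tiny sizes)."""
    return max(np.linalg.norm(A[:, S].conj().T @ A[:, S] - np.eye(s), 2)
               for S in map(list, combinations(range(A.shape[1]), s)))

def hirip(A, N1, N2, s1, s2):
    """Exact (s1,s2)-HiRIP constant: s1 active blocks, s2 entries per block."""
    best = 0.0
    for blocks in combinations(range(N1), s1):
        for subs in product(combinations(range(N2), s2), repeat=s1):
            S = [b * N2 + i for b, sub in zip(blocks, subs) for i in sub]
            G = A[:, S].conj().T @ A[:, S]
            best = max(best, np.linalg.norm(G - np.eye(len(S)), 2))
    return best

rng = np.random.default_rng(5)
N1, N2, s1, s2 = 3, 4, 2, 2
# near-orthonormal factors keep the RIP constants below 1
A1 = np.linalg.qr(rng.standard_normal((6, N1)))[0] + 0.1 * rng.standard_normal((6, N1))
A2 = np.linalg.qr(rng.standard_normal((8, N2)))[0] + 0.1 * rng.standard_normal((8, N2))
A = np.kron(A1, A2)

lhs = hirip(A, N1, N2, s1, s2)
rhs = (1 + rip(A1, s1)) * (1 + rip(A2, s2)) - 1
assert lhs <= rhs + 1e-12                 # Kronecker HiRIP bound
assert lhs <= rip(A, s1 * s2) + 1e-12     # HiRIP never exceeds plain RIP
```

Both assertions follow from eigenvalue interlacing of principal Gram submatrices, so the enumeration over exact-cardinality supports suffices.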
\section{\label{sec:Proof_of_sparsification_if_off_grid_case}Proof of Theorem \ref{thm:sparsification_in_off_grid_case}} For an arbitrary channel transfer matrix $\mathbf{H}\in\mathbb{C}^{N\times M}$, assuming $D=N$, it follows from (\ref{eq:channelmatrix}) and (\ref{eq:delay-angular_representation_off_grid}) that its delay-angular representation equals $\mathbf{X} =\sum_{p=0}^{L-1}\rho_{p}\mathbf{u}_{N}(\tilde{\tau}_{p})\mathbf{u}_{M}^{H}(\theta_{p})$, where $\tilde{\tau}_{p}\triangleq\tau_{p}/T_{s}\in[0,1]$ is the normalized delay of the $p$-th path and $\mathbf{u}_{K}:[0,1]\rightarrow\mathbb{C}^{K}$ with \cite{ChenYang206_TWC} \[ \left[\mathbf{u}_{K}(\omega)\right]_{k}\triangleq\frac{\sin\left(\pi K(\omega-k/K)\right)}{K\sin\left(\pi(\omega-k/K)\right)}e^{-j\pi\left(K-1\right)(\omega-k/K)},\quad k\in[K]. \] We consider a sparse approximation of $\mathbf{X}$ given by $\mathbf{X}_{\text{sp}}=\sum_{p=0}^{L-1}\rho_{p}\mathbf{u}_{N,\text{sp}}(\tilde{\tau}_{p};L_{1})\mathbf{u}_{M,\text{sp}}^{H}(\theta_{p};L_{2})$, where $\mathbf{u}_{K,\text{sp}}(\omega;J)\in\mathbb{C}^{K}$ is the $(2J+1)$-sparse vector obtained by retaining the $(2J+1)$ largest-modulus elements of $\mathbf{u}_{K}(\omega)$ and setting the remaining elements to zero. Note that with this construction, $\mathbf{X}_{\text{sp}}$ can have at most $L(2L_{1}+1)(2L_{2}+1)$ non-zero elements. In order to investigate the sparse approximation error, we first focus on quantifying the error $\|\mathbf{u}_{M,\text{sp}}(\theta;L_{2})-\mathbf{u}_{M}(\theta)\|$ for any $\theta\in[0,1]$. It is easy to see that the non-zero elements of $\mathbf{u}_{M,\text{sp}}(\theta;L_{2})$ are consecutive in a wrap-around sense (i.e., the element indices $0$ and $M-1$ are considered consecutive). By symmetry, it is sufficient to consider the error for $\theta\in[0,\frac{1}{2M}]$.
In this case, the set of non-zero elements of $\mathbf{u}_{M,\text{sp}}(\theta;L_{2})$ is $\mathcal{A}=\{0,1,\ldots L_{2}\}\cup\{M-1-L_{2},M-L_{2},\ldots,M-1\}$ and it holds \begin{align} &\|\mathbf{u}_{M,\text{sp}}(\theta;L_{2})-\mathbf{u}_{M}(\theta)\|^{2}\\ = &\sum_{m\in[M]\setminus\mathcal{A}}\left|\left[\mathbf{u}_{M}(\theta)\right]_{m}\right|^{2}\nonumber \\ \leq &\sum_{m\in[M]\setminus[L_{2}+1]}\left|\left[\mathbf{u}_{M}(\theta)\right]_{m}\right|^{2}\nonumber \\ = & \frac{1}{M^{2}}\sum_{m\in[M]\setminus[L_{2}+1]}\frac{\sin^{2}(\pi M(\theta-m/M))}{\sin^{2}(\pi(\theta-m/M))}\label{eq:ap2_eq}\\ \overset{(a)}{\leq} &\frac{1}{M^{2}}\sum_{m\in[M]\setminus[L_{2}+1]}\frac{(M+1)^{2}}{(M-1)^{2}4(\theta-m/M)^{2}}\label{eq:ap2_eq2}\\ \overset{(b)}{\leq} &\frac{(M+1)^{2}}{(M-1)^{2}}\sum_{m\in[M]\setminus[L_{2}+1]}\frac{1}{(2m-1)^{2}}\nonumber \\ \leq &\frac{(M+1)^{2}}{(M-1)^{2}}\int_{L_{2}+1}^{M-1}\frac{1}{(2x-1)^{2}}dx\nonumber \\ = &\frac{(M+1)^{2}}{(M-1)^{2}}\frac{1}{2}\left(\frac{1}{2L_{2}+1}-\frac{1}{2M-3}\right)\nonumber \\ \leq &\frac{1}{L_{2}},\nonumber \end{align} \noindent where $(a)$ follows by trivially upper bounding the numerator of the summand in (\ref{eq:ap2_eq}) by $1$ and by lower bounding the denominator according to the inequality $\sin^{2}(\pi x)\geq4x^{2}(M-1)/(M+1),$ which holds for all $|x|\leq\pi/2+1/(2M)$, $(b)$ follows by minimizing the term $(\theta-m/M)^{2}$ in the summand of (\ref{eq:ap2_eq2}) w.r.t. $\theta\in[0,1/(2M)]$ and the last inequality holds for $M\geq3$, which can be safely assumed to hold in massive MIMO applications. Note that the obtained bound holds for any $\theta\in[0,1]$. In the exact same fashion, it can be proved that $\|\mathbf{u}_{N,\text{sp}}(\tilde{\tau};L_{1})-\mathbf{u}_{N}(\tilde{\tau})\|^{2}\leq1/L_{1}$ for any $\tilde{\tau}\in[0,1]$. 
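The tail-energy bound $\|\mathbf{u}_{M,\text{sp}}(\theta;L_{2})-\mathbf{u}_{M}(\theta)\|^{2}\leq1/L_{2}$ can be checked numerically. The sketch below exploits the fact that $\mathbf{u}_{M}(\theta)$ is, up to a phase convention, the unit-norm DFT of a complex exponential of frequency $\theta$, so only the entry magnitudes matter; the value $M=64$ and the tested values of $L_{2}$ are arbitrary choices for illustration:

```python
import numpy as np

def u_K(K, omega):
    """DFT of a unit-modulus complex exponential of frequency omega, scaled
    so that ||u_K(omega)|| = 1; entrywise magnitudes match the paper's kernel."""
    return np.fft.fft(np.exp(1j * 2 * np.pi * omega * np.arange(K))) / K

M = 64
rng = np.random.default_rng(1)
for L2 in (1, 2, 4, 8):
    for theta in rng.uniform(0.0, 1.0, 200):
        energy = np.sort(np.abs(u_K(M, theta)) ** 2)[::-1]
        tail = energy[2 * L2 + 1:].sum()  # energy outside the 2*L2 + 1 largest entries
        assert tail <= 1.0 / L2 + 1e-12, (L2, theta, tail)
```

Keeping the $(2L_{2}+1)$ largest-modulus entries can only give a smaller tail than the consecutive index set used in the proof, so the numerical margin is typically much better than $1/L_{2}$.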
Now, for any $\tilde{\tau}$, $\theta$, and dropping, for simplicity, the arguments from the notation of $\mathbf{u}_{N}(\tilde{\tau})$, $\mathbf{u}_{M}(\theta)$, $\mathbf{u}_{N,\text{sp}}(\tilde{\tau};L_{1})$, $\mathbf{u}_{M,\text{sp}}(\theta;L_{2})$, it holds \begin{align} &\left\Vert \mathbf{u}_{N}\mathbf{u}_{M}^{H}-\mathbf{u}_{N,\text{sp}}\mathbf{u}_{M,\text{sp}}^{H}\right\Vert\\ \overset{(a)}{\leq} &\left\Vert \mathbf{u}_{N}\mathbf{u}_{M}^{H}-\mathbf{u}_{N}\mathbf{u}_{M,\text{sp}}^{H}\right\Vert +\left\Vert \mathbf{u}_{N}\mathbf{u}_{M,\text{sp}}^{H}-\mathbf{u}_{N,\text{sp}}\mathbf{u}_{M,\text{sp}}^{H}\right\Vert \nonumber \\ \overset{(b)}{\leq} &\|\mathbf{u}_{N}\|\left\Vert \mathbf{u}_{M}^{H}-\mathbf{u}_{M,\text{sp}}^{H}\right\Vert +\|\mathbf{u}_{M,\text{sp}}\|\left\Vert \mathbf{u}_{N}-\mathbf{u}_{N,\text{sp}}\right\Vert \nonumber \\ \overset{(c)}{\leq} &\|\mathbf{u}_{N}\|\left\Vert \mathbf{u}_{M}^{H}-\mathbf{u}_{M,\text{sp}}^{H}\right\Vert +\|\mathbf{u}_{M}\|\left\Vert \mathbf{u}_{N}-\mathbf{u}_{N,\text{sp}}\right\Vert\\ \overset{(d)}{\leq} &\frac{1}{\sqrt{L_{1}}}+\frac{1}{\sqrt{L_{2}}},\label{eq:app.eq} \end{align} \noindent where $(a)$ follows from the triangle inequality, $(b)$ from the Cauchy-Schwarz inequality, $(c)$ by noting that $\|\mathbf{u}_{M,\text{sp}}\|\le\|\mathbf{u}_{M}\|$ and $(d)$ by noting that $\|\mathbf{u}_{M}\|=\|\mathbf{u}_{N}\|=1$ and using the bounds for $\left\Vert \mathbf{u}_{N}-\mathbf{u}_{N,\text{sp}}\right\Vert $ and $\left\Vert \mathbf{u}_{M}-\mathbf{u}_{M,\text{sp}}\right\Vert $ obtained above.
The sparse approximation error of $\mathbf{X}_{\text{sp}}$ can now be obtained as \begin{align*} &\|\mathbf{X}_{\text{sp}}-\mathbf{X}\|\\ = &\left\Vert \sum_{p=0}^{L-1}\rho_{p}\left[\mathbf{u}_{N}\left(\tilde{\tau}\right)\mathbf{u}_{M}^{H}(\theta)-\mathbf{u}_{N,\text{sp}}\left(\tilde{\tau};L_{1}\right)\mathbf{u}_{M,\text{sp}}^{H}(\theta;L_{2})\right]\right\Vert \\ \leq &\sum_{p=0}^{L-1}\left|\rho_{p}\right|\left\Vert \mathbf{u}_{N}\left(\tilde{\tau}\right)\mathbf{u}_{M}^{H}(\theta)-\mathbf{u}_{N,\text{sp}}\left(\tilde{\tau};L_{1}\right)\mathbf{u}_{M,\text{sp}}^{H}(\theta;L_{2})\right\Vert . \end{align*} \noindent Applying (\ref{eq:app.eq}) results in (\ref{eq:off_grid_sparsification_error_bound}).
\section{Introduction} The sheer amount of data conveyed over the wireless medium requires novel concepts to be integrated into 5G (and beyond) transmission standards. A promising technique for more efficient spectrum utilization is RF (radio frequency) in-band full-duplex communication, where transmission and reception of signals are run simultaneously on the same frequency band~\cite{src:sexton20175g}. Theoretically, such an approach supports an improved capacity of the wireless link~\cite{src:sabharwal2014band,src:aijaz2017simultaneous}. However, considerable signal power from the transmitter inevitably leaks as self-interference (SI) into the receiver chain, and therefore any signal-of-interest (SoI) from a distant communication node cannot be reliably recovered at the receiver. Cancellation of the strong SI is a challenging task. Several landmark papers have successfully demonstrated the feasibility of such an approach~\cite{src:jain2011practical,src:riihonen2011hybrid,src:duarte2012experiment}. Common state-of-the-art SI cancellation is generally implemented as a multi-stage operation. Full-duplex prototypes have combined passive SI suppression, active analog cancellation, and digital cancellation~\cite{src:khandani2013two,src:duarte2014design,src:debaillie2014analog,src:ahmed2015all,src:askar2016agile}. The availability of SI cancellation allows for a wide range of full-duplex applications. It can increase the communication rate in a wireless network by using full-duplex relays~\cite{src:avestimehr2008approximate,src:nunn2017antenna}, reduce end-to-end delay~\cite{src:kariminezhad2017fullduplex}, provide enhanced security at the physical layer by friendly jamming~\cite{src:zheng2013improving} or key agreement~\cite{src:vogt2016practical}, and it implies the potential of energy-harvesting~\cite{src:zeng2015full, src:bi2016accumulate}.
To remove the impact of SI in the digital domain, a variety of digital SI channel estimation metrics have been studied, such as least-squares (LS)~\cite{src:bharadia2013full}, minimum mean-squared error (MMSE)~\cite{src:day2012fullduplex} or maximum-likelihood (ML) estimation~\cite{src:masmoudi2016maximum}. Non-uniform sampling can avoid undesired SI in Orthogonal Frequency-Division Multiplexing (OFDM) systems~\cite{src:bernhardt2018self}. Under time-variant conditions, adaptive algorithms can generally adjust an SI estimate to gradual changes in the SI channel~\cite{src:heino2015recent}. The linear SI channel model has been extended to nonlinear forms, where the Hammerstein polynomial model is the most popular~\cite{src:bharadia2014full}. It takes the form of a parallel, multi-channel representation with monolithic structure. Alternatively, the nonlinear SI channel can be characterized by an artificial neural network~\cite{src:stimming2018nonlinear}. Beyond the nonlinear SI model, previous work established enhancements like signal orthogonalization, least-mean-square (LMS) adaptation~\cite{src:korpi2015adaptive,src:ferrand2017multi} or recursive least-squares (RLS) algorithms~\cite{src:lemos2015fullduplex,src:emara2017nonlinear}. Adaptive approaches can be used in hybrid digital/analog cancellation designs~\cite{src:kiayani2018adaptive}. Many prototypes use an analog cancellation stage to remove the line-of-sight component, which is the dominant and, in most cases, the time-invariant part of the SI. However, moving objects close to the antenna can significantly change the multipath characteristics of the SI channel due to power backscattering~\cite{src:everett2014passive} and therefore introduce a significant time-variant contribution to the SI. The general problem of adaptation to time-variant SI channels requires further research in the digital domain. The SI is a challenge not only in wireless communication.
Consider hands-free voice communication, where microphone and loudspeaker signals require full-duplex operations. Here, similar to the SI in wireless full-duplex applications, the unacceptable acoustic echo signal needs to be effectively removed. However, a practical echo cancellation system involves a number of challenges, such as fast adaptation to time-variant loudspeaker-room-microphone systems~\cite{src:benesty2001advances,Haensler97,Haensler2006}. The standard for acoustic echo control is the LMS and normalized least-mean-square (NLMS) type of adaptive-filter algorithms in single- and multi-channel configurations~\cite{Haykin2002,Enzner2014}. Similarly, RLS-based approaches of the multi-channel type have been introduced~\cite{src:benesty2001advances}. As a compromise between NLMS and RLS in terms of fast adaptation and computational complexity, approaches based on affine projection algorithms (APA)~\cite{src:gay1995the} have been proposed and applied to nonlinear echo cancellation~\cite{src:glicacho2012nonlinear}. State-space modeling of the time- and frequency-domain echo channel~\cite{Enzner06,Enzner2010} has been introduced. Variants of the Kalman filter algorithm were provided in multichannel~\cite{Malik2011a} and nonlinear configurations \cite{MalikEnzner2012b,src:malik2013variational}. Both the domains of acoustic echo control and wireless SI cancellation share many aspects of the system modeling. Thus, we can transfer concepts and insights from acoustic echo cancellation. Most prominently, the nonlinear echo/SI models of both the acoustic loudspeaker and the RF power amplifiers are very similar. In the acoustic echo channel, however, impulse responses can have up to a few thousand significant taps. In the wireless domain, the delay spread is much shorter, but the accurate depiction of fractional signal delays still requires a significant number of taps~\cite{src:laakso96splitting}.
There is a notable difference between acoustic echo control and wireless SI cancellation: in the handling of the SoI. In acoustics, the SoI is usually modeled as a random process and therefore contributes to the observation noise during adaptation. In wireless communications, however, the SoI is deliberately encoded and thus can be removed before adaptation. In this work, we set our focus on the digital part of SI cancellation by an adaptive algorithm. Consider the system overview of~\figref{fig:overview}. \def\antenna{% -- +(0mm,4.0mm) -- +(2.625mm,7.5mm) -- +(-2.625mm,7.5mm) -- +(0mm,4.0mm) } \begin{figure} \centering \begin{tikzpicture}[ sysBlock/.style={draw, fill=white, rectangle, minimum height=2em, minimum width=3.5em}, nlBlock/.style={draw, fill=white, rectangle, minimum height=2em, minimum width=4em}, shadowBlock/.style={draw, drop shadow, fill=white, rectangle}, triangle/.style = {draw, regular polygon, regular polygon sides=3 }, border rotated/.style = {shape border rotate=90} ] \node (inTx) {$x_{k}$}; \node (outRx) at ($(inTx)+(0,-3)$) {}; \node[shadowBlock, minimum height=4em](dec) at ($(inTx)!0.5!(outRx)+(0.75,0.25)$) {Decoder}; \myadd{add2}{$(dec)+(1.5,0)$} \node[shadowBlock, minimum height=4em, minimum width=4em, align=center, fill=red!10] at ($(add2)+(1.5,0)$) (adapt) {Adaptive\\algorithm}; \myadd{add1}{$(adapt|-outRx)+(0,0)$} \node[dot] (dot1) at ($(adapt|-inTx)$) {}; \node[dot] (dot2) at ($(add2|-outRx)+(0,0)$) {}; \node at ($(add1.east)+(0.1,0.2)$) {$-$}; \node at ($(add2.west)+(-0.1,0.2)$) {$-$}; \node[shadowBlock, align=center, minimum height=10em] (analog) at ($(adapt)+(1.6,-0.25)$) {Tx/Rx\\path}; \node (ant) at ($(analog)+(1.1,0)$) {}; \draw[very thick] (ant.center) \antenna; \draw[very thick] (analog.east) -- (ant.center); \node[shadowBlock, align=center, minimum height=2em] (dist) at ($(analog)+(2.3,-1)$) {Distant\\node}; \node (antDist) at ($(dist)+(0,0.5)$) {}; \draw[very thick] (antDist.center) \antenna; \draw[very thick] (dist.north) 
-- (antDist.center); \draw[thick,->] (inTx) -- (dot1); \draw[thick,->] (dot1) -- (analog.west|-dot1); \draw[thick,->] (dot1) -- (adapt); \draw[thick,->] (analog.west|-add1) -- node[below] {$y_{k}$} (add1); \draw[thick,->] (adapt.south-|add1) -- node[right] {$\hat{x}_{\text{si},k}$} (add1); \draw[thick,->] (dot2) -- (add2); \draw[thick,->] (add1) -- node[above] {$e_{k}$} (dot2); \draw[thick,->] (dot2) -| (dec); \draw[thick,->] (dec) -- node[below] {$\hat{d}_{k}^h$} (add2); \draw[thick,->] (add2) -- node[above] {$\tilde{e}_{k}$} (adapt); \draw[dashed,->] ($(antDist)+(-0.2,0.2)$) -- node[below,pos=0.75] {$d_{k}^h$} ($(ant)+(0.2,0.2)$); \end{tikzpicture} \caption{Proposed full-duplex SI estimation and cancellation.} \label{fig:overview} \end{figure} A distant node sends an SoI to the receiver, which arrives as $d^h_k$ at time $k$, while the local transmitter conveys $x_k$. At the local receiver, an adaptive algorithm reconstructs an estimated SI~$\hat{x}_{\text{si},k}$ and subtracts its contribution from the received signal $y_k$, which leaves the error signal $e_k$. Next, this signal $e_k$ is fed into a decoder which retrieves the decoded SoI~$\hat{d}^h_k$. By removing $\hat{d}^h_k$ from the error signal $e_k$, the residual signal $\tilde{e}_k$ is given to the algorithm for adaptation. Inspired by recent progress in the area of acoustic echo control, we intend to tailor acoustic echo cancellation methods to wireless SI cancellation by employing a nonlinear, composite state-space system model in cascade structure. The state-space representation allows more dedicated treatment of time-variant SI channels, since a-priori knowledge of the channel dynamics is embedded directly into the system model. This leads to better control over the adaptation process and an improved performance of the solution. Unlike the parallel nonlinear SI channel model, the cascade structure reduces the number of variables to be estimated and, at the same time, the susceptibility to over-fitting~\cite{src:malik2013variational}.
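As a deliberately simple illustration of the adaptation loop in the system overview, the sketch below runs a purely linear, time-domain NLMS canceller; this is not the DFT-domain Kalman algorithm developed in this paper, the decoder stage is omitted, and the channel length, step size and signal powers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 20000, 8
# Local transmit signal x_k, unknown linear SI channel, and a weak non-SI component s_k.
x = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
w = 0.5 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
x_si = np.convolve(x, w)[:K]
s = 0.01 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
y = x_si + s

w_hat = np.zeros(L, dtype=complex)
e = np.zeros(K, dtype=complex)
mu = 0.5
for k in range(L - 1, K):
    xk = x[k - L + 1:k + 1][::-1]       # regression vector of the last L Tx samples
    e[k] = y[k] - w_hat.conj() @ xk     # residual e_k after SI cancellation
    w_hat += mu * np.conj(e[k]) * xk / (np.real(xk.conj() @ xk) + 1e-6)  # NLMS update

# After convergence, the residual power approaches the power of s_k alone.
assert np.mean(np.abs(e[-200:]) ** 2) < np.mean(np.abs(y[:200]) ** 2) / 100
```

Even this toy loop removes the bulk of the SI power; the paper's contribution lies in handling the nonlinear cascade, the channel time variation, and the decoded SoI, none of which this sketch models.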
We linearize the model by decoupling the linear SI path and the nonlinear coefficients, which are then estimated separately and iteratively by linear algorithms in each adaptation cycle. The natural algorithmic solution to these estimation problems under the MMSE criterion turns out to be the Kalman filter. This Kalman filter algorithm is derived in the DFT domain. Frequency-domain approaches, while often explicitly tailored to OFDM systems~\cite{src:sohaib2017alow}, have the potential to reduce computational complexity while maintaining estimation accuracy~\cite{src:komatsu2017freq}. In previous work, adaptive algorithms with input orthogonalization have been proposed~\cite{src:korpi2015adaptive,src:emara2017nonlinear} to meet the aforementioned assumptions. We maintain this feature to decouple the nonlinear basis functions for improved speed of convergence. We pursue a systematic treatment of that design feature and explore its impact on the performance. Furthermore, we integrate the decoding of the SoI conveyed by a distant node. This approach is beneficial, since in wireless full-duplex communication, the SoI contains structure which is a-priori known and therefore can be exploited. In the development of adaptive SI cancellation algorithms, the a-priori knowledge of SoI signal statistics has been rather neglected so far. We study the performance by considering certain metrics like the signal-to-residual-interference-and-noise ratio, system identification accuracy and the communication rate. Both the time convergence behavior and the global performance with respect to input signal-to-interference-and-noise ratio are considered. We provide comparisons of the proposed algorithm in exact and approximated form to other approaches like Kalman-based, NLMS and RLS algorithms from the literature.
This leads to a comprehensive analysis of the impact of certain design options (such as input orthogonalization, complexity or model structure) on the performance of both the novel and the state-of-the-art adaptive algorithms. We show that temporal variations impose a fundamental performance limitation, regardless of whether input orthogonalization is applied or not. The simulation results indicate that the decoding of the SoI is generally beneficial to both speed of convergence and cancellation performance, especially under time-variant conditions. Throughout the paper, we print column vectors and matrices as bold lower-case and upper-case letters, respectively. The operators $\mathbb{E}\left[\cdot\right]$, $\text{tr}\left[\cdot\right]$, $\left|\cdot\right|$, $\left\lVert\cdot\right\rVert_2$, $\left(\cdot\right)^T$, $\left(\cdot\right)^H$ denote expectation, trace of a matrix, absolute value, Euclidean norm, matrix transpose and Hermitian transpose, respectively. The operator $\text{diag}\left[ \mybold{x}\right]$ represents a diagonal matrix with the elements of vector $\mybold{x}$ on its main diagonal. Signal vectors in the DFT domain are marked by an underline $\underline{\mybold{x}}$. The term $\mybold{x}\circ\mybold{y}$ denotes the Hadamard product, i.e., the element-wise multiplication of $\mybold{x}$ and $\mybold{y}$. The identity matrix of size $M\times M$ is given by $\eye{M}$. The paper is organized as follows. Section~\ref{sec:systemmodel} introduces the system model with more details on the Tx/Rx path of~\figref{fig:overview}, its DFT domain representation and the state-space model. We derive the nonlinear adaptive algorithm in Section~\ref{sec:algorithm} and discuss useful approximations in Section~\ref{sec:approximations}. In Section~\ref{sec:results}, we define suitable metrics and evaluate the performance by considering computational complexity, time convergence behavior, global performance and various use cases. Finally, Section~\ref{sec:conclusion} concludes the paper.
\section{System model} \label{sec:systemmodel} Consider the system model of~\figref{fig:systemmodel}, which depicts a nonlinear cascaded SI channel. \begin{figure} \centering \begin{tikzpicture}[ sysBlock/.style={draw, drop shadow, fill=white, rectangle, minimum height=2em, minimum width=3em}, nlBlock/.style={draw, drop shadow, fill=white, rectangle, minimum height=2em, minimum width=4.5em}, chanBlock/.style={draw, drop shadow, fill=white, rectangle, minimum height=3em, minimum width=2em} ] \node (in1) {$x_{k}$}; \node[dot] (dot2) at ($(in1)+(1,0)$) {}; \node[nlBlock] at ($(dot2)+(1.35,0.8)$) (nlfunc0) {$\phi_0(\cdot)$}; \node[nlBlock] at ($(dot2)+(1.35,0)$) (nlfunc1) {$\phi_1(\cdot)$}; \node at ($(dot2)+(1.35,-0.6125)+(0,0)$) (colon1) {$\colon$}; \node[nlBlock] at ($(dot2)+(1.35,-1.25)$) (nlfuncN) {$\phi_{N-1}(\cdot)$}; \mymod{mult1}{$(nlfunc0)+(2.5,0)$} \mymod{mult2}{$(nlfunc1)+(2.5,0)$} \node at ($(colon1)+(2.5,0)+(0,0)$) (colon1) {$\colon$}; \mymod{mult3}{$(nlfuncN)+(2.5,0)$} \node at ($(mult1)+(-0.75,-0.6)$) (a0) {$a_{0,k}$}; \node at ($(mult2)+(-0.75,-0.6)$) (a1) {$a_{1,k}$}; \node at ($(mult3)+(-0.75,-0.6)$) (aN) {$a_{N-1,k}$}; \myadd{add1}{$(mult2)+(1.5,0)$} \node[chanBlock] at ($(add1)+(1,-1)$) (sichan) {$\mybold{w}_{k}$}; \draw[thick,->] (in1) to (dot2); \draw[thick,->] (dot2) |- (nlfunc0); \draw[thick,->] (dot2) -- (nlfunc1); \draw[thick,->] (dot2) |- (nlfuncN); \draw[thick,->] (a0) -- (mult1); \draw[thick,->] (a1) -- (mult2); \draw[thick,->] (aN) -- (mult3); \draw[thick,->] (nlfunc0) -- (mult1); \draw[thick,->] (nlfunc1) -- (mult2); \draw[thick,->] (nlfuncN) -- (mult3); \draw[thick,->] (mult1) -| (add1); \draw[thick,->] (mult2) -- (add1); \draw[thick,->] (mult3) -| (add1); \myadd{add3}{$(sichan)+(0,-2)$} \myadd{add4}{$(add3)+(-1.5,0)$} \node (noise) at ($(add4)+(0,-0.75)$) {$n_{k}$}; \node[sysBlock] at ($(add4)+(-3,0)$) (comchan) {$\mybold{h}_{k}$}; \node (distant) at (in1 |- add4) {$d_{k}$}; \node (out) at ($(add3)+(0,-1)$) {$y_{k}$}; \node (labelSi) 
at ($(dot2) + (2,1.5)$) {Nonlinear cascade SI channel}; \begin{pgfonlayer}{background} \node [fill=red!10,fit={($(labelSi.west) + (0,0.1)$) ($(sichan) + (0.55,-0.9)$)}] {}; \end{pgfonlayer} \node (labelWc) at ($(comchan) + (-0.8,0.6)$) {Wireless channel}; \draw[dashed] ($(labelWc.west) + (0,0.25)$) rectangle ($(comchan) + (1,-1)$); \draw[thick,->] (add1) -| (sichan); \draw[thick,->] (sichan) to node[right,pos=0.8] {$x_{\text{si},k}$} (add3); \draw[thick,->] (noise) to (add4); \draw[thick,->] (add4) to node[above] {$s_{k}$} (add3); \draw[thick,->] (distant) to (comchan); \draw[thick,->] (comchan) to node[above] {$d^{h}_{k}$} (add4); \draw[thick,->] (add3) to (out); \end{tikzpicture} \caption{Cascade model of the nonlinear SI channel with input $x_{k}$ and the signal-of-interest $d_{k}$.} \label{fig:systemmodel} \end{figure} The local transmitter creates an input signal $x_{k}$ to be conveyed to a distant receiver. Due to hardware impairments like I/Q imbalance in the quadrature mixer or high-order harmonics of the power amplifier, the signal is distorted by nonlinear components. This effect is especially severe at high transmitter output powers~\cite{src:korpi2014full}. We model the nonlinear contribution as $N^{\text{th}}$-order memoryless expansion, where the $i^{\text{th}}$ component consists of a nonlinear basis function $\phi_i\left(\cdot\right)$ and the coefficient $a_{i,k}$. Now, the linear SI channel is modeled by a linear, time-variant finite-impulse response (FIR) filter. Let the aforementioned FIR filter have $L$ leading coefficients given by \begin{align} \label{eq:siChanDef} \mybold{w}_{k}=\left[ w_{k,0}, w_{k,1}, \ldots, w_{k,L-1}\right]^T. \end{align} Alternatively, the nonlinear SI channel can be modeled in parallel structure, which is shown in~\figref{fig:parallelmodel}. 
\begin{figure} \centering \begin{tikzpicture}[ nlBlock/.style={draw, drop shadow, fill=white, rectangle, minimum height=1.75em, minimum width=4.5em}, chanBlock/.style={draw, drop shadow, fill=white, rectangle, minimum height=1.75em, minimum width=4.5em} ] \node (in1) {$x_{k}$}; \node[dot] (dot2) at ($(in1)+(1,0)$) {}; \node[nlBlock] at ($(dot2)+(1.35,0.8)$) (nlfunc0) {$\phi_0(\cdot)$}; \node[nlBlock] at ($(dot2)+(1.35,0)$) (nlfunc1) {$\phi_1(\cdot)$}; \node at ($(dot2)+(1.35,-0.6125)+(0,0)$) (colon1) {$\colon$}; \node[nlBlock] at ($(dot2)+(1.35,-1.25)$) (nlfuncN) {$\phi_{N-1}(\cdot)$}; \node[chanBlock] at ($(nlfunc0)+(2.5,0)$) (sichan0) {$\mybold{w}_{0,k}$}; \node[chanBlock] at ($(nlfunc1)+(2.5,0)$) (sichan1) {$\mybold{w}_{1,k}$}; \node at ($(colon1)+(2.5,0)+(0,0)$) (colon1) {$\colon$}; \node[chanBlock] at ($(nlfuncN)+(2.5,0)$) (sichanN) {$\mybold{w}_{N-1,k}$}; \myadd{add1}{$(sichan1)+(1.5,0)$} \draw[thick,->] (in1) to (dot2); \draw[thick,->] (dot2) |- (nlfunc0); \draw[thick,->] (dot2) -- (nlfunc1); \draw[thick,->] (dot2) |- (nlfuncN); \draw[thick,->] (nlfunc0) -- (sichan0); \draw[thick,->] (nlfunc1) -- (sichan1); \draw[thick,->] (nlfuncN) -- (sichanN); \draw[thick,->] (sichan0) -| (add1); \draw[thick,->] (sichan1) -- (add1); \draw[thick,->] (sichanN) -| (add1); \node (out) at ($(add1)+(1,0)$) {$x_{\text{si},k}$}; \draw[thick,->] (add1) to (out); \end{tikzpicture} \caption{Parallel model of the nonlinear SI channel.} \label{fig:parallelmodel} \end{figure} In that case, each of the nonlinear basis functions is followed by a different FIR filter with coefficients $\mybold{w}_{i,k}$ for $0\leq i\leq N-1$. However, the parallel approach significantly increases the degrees of freedom to be estimated. Conventional SI channel models require a large number of filter taps, which has been identified as an obstacle for practical systems~\cite{src:nadh2017ataylor}. Specifically, such an architecture might lead to ambiguous solutions. 
Thus, we focus on the cascade modeling of~\figref{fig:systemmodel} in our work. Then, the SI signal of~\figref{fig:systemmodel} can be written as a generalized linear model~\cite{src:bishop2006pattern} \begin{align} x_{\text{si},k} &= \sum^{L-1}_{l=0} w_{k,l} \sum_{i=0}^{N-1}a_{i,k}\phi_i\left(x_{k-l}\right) \nonumber\\ \label{eq:nldef} &= \sum_{i=0}^{N-1}a_{i,k}\sum^{L-1}_{l=0} w_{k,l} \phi_i\left(x_{k-l}\right), \end{align} where $w_{k,l}$ is taken from~\eqref{eq:siChanDef}. At a distant node, the transmitter broadcasts an SoI, denoted by $d_{k}$, with zero mean and power $\mathbb{E}\left[|d_{k}|^2\right]=P_d$ over a wireless channel. Similar to the SI channel, the wireless channel is modeled as a linear FIR filter with coefficients \begin{align} \label{eq:wChanDef} \mybold{h}_{k}=\left[ h_{k,0}, h_{k,1}, \ldots, h_{k,L-1}\right]^T, \end{align} where, for convenience, the number of coefficients $L$ is the same as for the linear SI channel. Hence, at the receiver, the SoI is given by \begin{align} \label{eq:rxSoi} d^{h}_{k} = \sum_{l=0}^{L-1}h_{k,l}d_{k-l}. \end{align} The signal $s_{k}$ contains all components at the receiver that are not related to the SI, namely the SoI~\eqref{eq:rxSoi} and the additive noise $n_{k}$, i.e., \begin{align} s_{k} &= d^{h}_{k} + n_{k} \notag \\ \label{eq:notSIrelatedTime} &= \sum_{l=0}^{L-1}h_{k,l}d_{k-l} + n_{k}, \end{align} and hence the overall observed signal at the receiver is \begin{align} y_{k} = x_{\text{si},k} + s_{k}. \end{align} Next, we introduce a vector notation for all signals in the time domain. We group consecutive samples in time into frames, as depicted in \figref{fig:frames}. Suppose that the frames consist of $M$ samples each, where each frame is indexed by $\kappa$. We form a new frame on every $R^{\text{th}}$ sample, such that neighboring frames overlap by $L=M-R$ samples.
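The reordering between the two lines of~\eqref{eq:nldef} is a plain consequence of linearity and can be verified numerically. In the sketch below the channel is frozen over the block, and the basis functions are a typical odd-order polynomial choice that we assume purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
K, L = 256, 4
x = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # input x_k
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # channel taps w_{k,l} (frozen)
a = rng.standard_normal(3)                                  # nonlinear coefficients a_i
# Assumed odd-order basis functions phi_i (illustrative choice only).
basis = [lambda u: u, lambda u: u * np.abs(u) ** 2, lambda u: u * np.abs(u) ** 4]

# First line of the equation: channel applied after the memoryless nonlinearity.
pa_out = sum(a[i] * basis[i](x) for i in range(3))
x_si_1 = np.convolve(pa_out, w)[:K]

# Second line: coefficients pulled in front of the convolutions.
x_si_2 = sum(a[i] * np.convolve(basis[i](x), w)[:K] for i in range(3))

assert np.allclose(x_si_1, x_si_2)
```

The second form is the one exploited later: with the convolutions precomputed, the SI signal is linear in the coefficients $a_{i,k}$, and, with the nonlinearity fixed, linear in the channel taps.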
\begin{figure} \centering \newcommand*{\timeBlockLength}{1.75}% \begin{tikzpicture} [ timeBlock/.style={draw, drop shadow, fill=white, rectangle, minimum height=2em, minimum width=\timeBlockLength cm}, dimen/.style={<->,>=latex,thin,every rectangle node/.style={fill=white,midway}}, dimenright/.style={->,>=latex,thin,every rectangle node/.style={fill=white,right}} ] \pgfmathsetmacro{\dotsGap}{1.25} \node[timeBlock] (t1) at (0,0) {\scriptsize $\kappa R-M+1$}; \node[] (c1) at ($(t1)+(\dotsGap,0)$) {$\dots$}; \node[timeBlock] (t2) at ($(c1)+(\dotsGap,0)$) {\scriptsize $(\kappa-1)R$}; \node[timeBlock] (t3) at ($(t2)+(\timeBlockLength,0)$) {\scriptsize $(\kappa-1)R+1$}; \node[] (c2) at ($(t3)+(\dotsGap,0)$) {$\dots$}; \node[timeBlock] (t4) at ($(c2)+(\dotsGap,0)$) {\scriptsize $\kappa R$}; \draw (t3.south west) -- ++(0,-0.5) coordinate (D1) -- +(0,-5pt); \draw (t4.south east) -- ++(0,-0.5) coordinate (D2) -- +(0,-5pt); \draw [dimen] (D1) -- (D2) node {$R$}; \draw (t1.south west) -- ++(0,-1) coordinate (D3) -- +(0,-5pt); \draw (t4.south east) -- ++(0,-1) coordinate (D4) -- +(0,-5pt); \draw [dimen] (D3) -- (D4) node {$M$}; \draw (t1.north west) -- ++(0,0.25) coordinate (time1) -- +(0,5pt); \draw [dimenright] (time1) -- ($(time1)+(1,0)$) node {$k$}; \end{tikzpicture} \caption{Frame of $M$ samples with frame shift $R$ at index $\kappa$.} \label{fig:frames} \end{figure} On this basis, we can express the convolution of the transmitted signal and the SI channel by the overlap-save method. Let \begin{align} \label{eq:nlBasisVec} \mybold{\phi}_{i,\kappa} = \left[ \phi_{i}\left(x_{\kappa R-M+1}\right),\phi_{i}\left(x_{\kappa R-M+2}\right),\ldots,\phi_{i}\left(x_{\kappa R}\right) \right]^T \end{align} be the $\kappa^{\text{th}}$ frame vector for the $i^{\text{th}}$ nonlinear basis function, applied to the input $x_{k}$. 
Furthermore, let \begin{align} \label{eq:nlBasisMatrix} \mybold{\Phi}_{\kappa}=\left[ \mybold{\phi}_{0,\kappa}, \mybold{\phi}_{1,\kappa},\ldots, \mybold{\phi}_{N-1,\kappa} \right] \end{align} be an $M\times N$ matrix with the signals of all nonlinear basis functions from~\eqref{eq:nlBasisVec} stacked as column vectors. We assume the SI channel to be slowly varying and therefore constant within a single frame, similar to block fading. Then, we form an FIR channel vector of size $M\times 1$ by appending $R$ zeros to~\eqref{eq:siChanDef}, and get \begin{align} \label{eq:siChanVec} \mybold{w}_{\kappa} = \left[ \mybold{w}^T_{\kappa R}, \mybold{0}^T_{R\times 1} \right]^T. \end{align} Similarly, we define an $M\times 1$ frame vector for the SoI $d_{k}$ \begin{align} \mybold{d}_{\kappa} &= \left[ d_{\kappa R-M+1},d_{\kappa R-M+2},\ldots,d_{\kappa R} \right]^T \end{align} and an $R\times 1$ frame vector for the receiver noise \begin{align} \mybold{n}_{\kappa}=\left[n_{(\kappa-1)R+1},n_{(\kappa-1)R+2},\ldots,n_{\kappa R} \right]^T, \end{align} and we define $\mybold{y}_{\kappa}$, $\mybold{s}_{\kappa}$ and $\mybold{x}_{\text{si},\kappa}$ analogously to $\mybold{n}_{\kappa}$. \subsection{DFT-domain representation} \label{sec:dft} Next, we transform selected signals into the DFT domain and express the convolutions of~\eqref{eq:nldef} and~\eqref{eq:rxSoi} as multiplications. Let \begin{align} \underline{\mybold{\Phi}}_{\kappa} &=\mybold{F}_M\mybold{\Phi}_{\kappa} = \left[ \underline{\mybold{\phi}}_{0,\kappa}, \label{eq:basisFrameMatrix} \underline{\mybold{\phi}}_{1,\kappa},\ldots, \underline{\mybold{\phi}}_{N-1,\kappa} \right] \end{align} denote the $M\times N$ DFT-domain representation of the basis frames, where $\mybold{F}_M$ is the $M\times M$ DFT matrix. Similarly, let \begin{align} \label{eq:siChanVecDft} \underline{\mybold{w}}_{\kappa} &=\mybold{F}_M\mybold{w}_{\kappa} \end{align} be the DFT-domain transform of the linear SI channel coefficients from~\eqref{eq:siChanVec}.
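The zero-padding in~\eqref{eq:siChanVec} is what makes the DFT-domain products consistent with linear convolution: a circular convolution of an $M$-sample frame with the zero-padded channel agrees with the linear convolution on its last $R$ samples, while the first $L$ samples suffer circular aliasing and are discarded by the overlap-save method. A toy numerical check (dimensions invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
M, L = 16, 6           # frame length and channel length (toy values)
R = M - L              # frame shift
phi = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # one time-domain frame
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)    # channel taps

w_pad = np.concatenate([w, np.zeros(R)])                 # append R zeros as in the text
circ = np.fft.ifft(np.fft.fft(phi) * np.fft.fft(w_pad))  # circular convolution via DFT
valid = circ[L:]       # drop the first L samples, which are circularly aliased

lin = np.convolve(phi, w)[:M][L:]  # corresponding samples of the linear convolution
assert np.allclose(valid, lin)
```

In the paper's notation, the discarding of the first $L$ samples is performed by the selection matrix applied after the inverse DFT.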
It is likewise assumed that the nonlinear coefficients $a_{i,k}$ vary only slowly over time, so that they are approximately constant within a frame. We write \begin{align} \label{eq:nlCoeffsVecEqual} \underline{a}_{i,\kappa} = a_{i,(\kappa-1)R} = a_{i,(\kappa-1)R+1} = \ldots = a_{i,\kappa R} \end{align} and group the nonlinear coefficients~\eqref{eq:nlCoeffsVecEqual} in vector form \begin{align} \label{eq:nlCoeffsVec} \underline{\mybold{a}}_{\kappa}= \left[\underline{a}_{0,\kappa},\underline{a}_{1,\kappa},\ldots,\underline{a}_{N-1,\kappa}\right]^T. \end{align} Now we are equipped to express the SI signal of~\eqref{eq:nldef} in the DFT domain. The weighted sum of the nonlinear basis frames is given by $\underline{\mybold{\Phi}}_{\kappa}\underline{\mybold{a}}_{\kappa}$ based on~\Cref{eq:basisFrameMatrix,eq:nlCoeffsVec}. It is multiplied with the linear SI channel coefficients~\eqref{eq:siChanVecDft}, since the convolution is transformed into a multiplication in the DFT domain. Thus, we construct \begin{align} \label{eq:obsCompact} \underline{\mybold{x}}_{\text{si},\kappa} &= \text{diag}\left[\underline{\mybold{\Phi}}_{\kappa}\underline{\mybold{a}}_{\kappa}\right] \underline{\mybold{w}}_{\kappa} \\ \label{eq:obsDetail} &\hintedrel[eq:xsiDFT1]{=} \left(\sum_{i=0}^{N-1}\underline{a}_{i,\kappa}\text{diag}\left[ \underline{\mybold{\phi}}_{i,\kappa}\right]\right) \underline{\mybold{w}}_{\kappa}, \end{align} where we introduce~$(\hintref{eq:xsiDFT1})$ to prepare a form that later allows the coefficients $\underline{a}_{i,\kappa}$ to be inferred efficiently by a linear algorithm.
The corresponding $R\times 1$ time-domain signal of~\eqref{eq:obsDetail}, in line with~\eqref{eq:nldef}, is \begin{align} \mybold{x}_{\text{si},\kappa} &= \mybold{\Upsilon}^T\mybold{F}^{-1}_M\underline{\mybold{x}}_{\text{si},\kappa} \\ \label{eq:siTime} &= \left(\sum_{i=0}^{N-1}\underline{a}_{i,\kappa}\mybold{\Upsilon}^T\mybold{F}^{-1}_M\text{diag}\left[ \underline{\mybold{\phi}}_{i,\kappa}\right]\right) \underline{\mybold{w}}_{\kappa}, \end{align} where the matrix \begin{align} \mybold{\Upsilon}^T = \begin{bmatrix} \zeros{R}{L} & \eye{R} \end{bmatrix} \end{align} removes the first $L=M-R$ entries from a vector since these entries are contaminated by aliasing. Next, we address the SoI. In DFT domain, we have \begin{align} \label{eq:soiDFT} \underline{\mybold{d}}_{\kappa}&=\mybold{F}_M\mybold{d}_{\kappa} \\ \label{eq:wChanDFT} \underline{\mybold{h}}_{\kappa}&=\mybold{F}_M\mybold{h}_{\kappa} \end{align} for the SoI and the coefficients of the wireless channel. Similar to~\eqref{eq:siTime}, we express the convolution of the SoI and the wireless channel as a multiplication of~\eqref{eq:soiDFT} and~\eqref{eq:wChanDFT} in DFT domain, and then transform it back into time domain \begin{align} \label{eq:soiPlusNoiseTime} \mybold{s}_{\kappa} &= \mybold{\Upsilon}^T\mybold{F}^{-1}_M\text{diag}\left[ \underline{\mybold{d}}_{\kappa}\right] \underline{\mybold{h}}_{\kappa}+\mybold{n}_{\kappa}. \end{align} Next, we transform the overall received signal $\mybold{y}_{\kappa} = \mybold{x}_{\text{si},\kappa} + \mybold{s}_{\kappa}$ back into DFT domain by first prepending $L$~zeros and then applying the DFT matrix.
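The role of $\mybold{\Upsilon}^T$ is the classical overlap-save constraint: multiplying length-$M$ DFTs corresponds to a circular convolution, and only the last $R$ output samples are free of time-domain aliasing. A small numerical sketch (hypothetical sizes $M=8$, $R=5$, real-valued signals for readability) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(1)
M, R = 8, 5
L = M - R  # number of aliased samples removed by Upsilon^T

w_taps = rng.standard_normal(L)   # L = M - R channel taps, as in the zero-padded w_kappa
w = np.r_[w_taps, np.zeros(R)]    # append R zeros to reach frame length M
x = rng.standard_normal(M)        # current input frame

# Multiplication in DFT domain = circular convolution in time domain
y_circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(w)).real

# The first L samples wrap around (aliasing); the remaining R samples
# match the linear convolution output for this block
y_lin = np.convolve(x, w_taps)[L:M]
assert np.allclose(y_circ[L:], y_lin)
```

Discarding the first $L$ samples thus yields exactly the valid $R$-sample block of the linear convolution.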
Thus we have \begin{align} \underline{\mybold{y}}_{\kappa}&= \mybold{F}_M\mybold{\Upsilon}\mybold{y}_{\kappa} \nonumber\\ &= \mybold{F}_M\mybold{\Upsilon}\mybold{x}_{\text{si},\kappa} + \mybold{F}_M\mybold{\Upsilon}\mybold{s}_{\kappa} \\ \label{eq:obsDFT} &\hintedrel[eq:obsDFT1]{=} \left(\sum_{i=0}^{N-1} \underline{a}_{i,\kappa}\underline{\mybold{C}}_{i,\kappa}\right) \underline{\mybold{w}}_{\kappa} + \underline{\mybold{s}}_{\kappa}, \end{align} where~$(\hintref{eq:obsDFT1})$ uses~\eqref{eq:siTime} together with the definition \begin{align} \label{eq:CDef} \underline{\mybold{C}}_{i,\kappa} = \mybold{F}_M\mybold{\Upsilon}\mybold{\Upsilon}^T\mybold{F}^{-1}_M\text{diag}\left[ \underline{\mybold{\phi}}_{i,\kappa}\right] \end{align} and the DFT domain representation \begin{align} \label{eq:soiPlusNoiseDft} \underline{\mybold{s}}_{\kappa}&=\mybold{F}_M\mybold{\Upsilon}\mybold{s}_{\kappa}. \end{align} The received signal without SI contribution~\eqref{eq:soiPlusNoiseDft} can alternatively be expressed, using~\eqref{eq:soiPlusNoiseTime}, as \begin{align} \underline{\mybold{s}}_{\kappa}&= \underline{\mybold{d}}^{h}_{\kappa} + \underline{\mybold{n}}_{\kappa}, \end{align} where the term $\underline{\mybold{n}}_{\kappa}$ denotes the DFT domain representation of the additive receiver noise and $\underline{\mybold{d}}^{h}_{\kappa}$ is the DFT domain equivalent of the SoI at the receiver. \subsection{State-space model} We propose state-space models for both the linear SI channel~\eqref{eq:siChanVecDft} and the nonlinear coefficients~\eqref{eq:nlCoeffsVec}.
The time-variant nature of the nonlinear SI channel is specifically modeled by first-order Markov models of the linear SI channel and the nonlinear coefficients, i.e., \begin{align} \label{eq:siChanMarkov} \underline{\mybold{w}}_{\kappa} &= \underline{A}^{w}\underline{\mybold{w}}_{\kappa-1}+\underline{\mybold{\Delta}}^{w}_{\kappa}, \\ \label{eq:nlCoeffsMarkov} \underline{\mybold{a}}_{\kappa} &= \underline{\mybold{A}}^{a}\underline{\mybold{a}}_{\kappa-1}+\underline{\mybold{\Delta}}^{a}_{\kappa}, \end{align} where the parameter $\underline{A}^{w}$ with $0\leq|\underline{A}^{w}|\leq 1$ and the diagonal matrix $\underline{\mybold{A}}^{a}$ denote the transition factor between consecutive channel realizations and nonlinear coefficients over time, respectively. We assume a scalar $\underline{A}^{w}$, since the linear SI channel coefficients are assumed to be similar in transient behavior. On the other hand, the matrix $\underline{\mybold{A}}^{a}$ reflects different degrees of temporal variations among the nonlinear coefficients. The Gaussian system noise variables $\underline{\mybold{\Delta}}^{w}_{\kappa}\sim\cgauss{\mybold{0},\underline{\psi}^{\Delta w}_{\kappa}\eye{M}}$ and $\underline{\mybold{\Delta}}^{a}_{\kappa}\sim\cgauss{\mybold{0},\underline{\mybold{\Psi}}^{\Delta a}_{\kappa}}$ are independent with variance $\underline{\psi}^{\Delta w}_{\kappa}$ and covariance matrix $\underline{\mybold{\Psi}}^{\Delta a}_{\kappa}$. By definition, we assume the top path from~\figref{fig:systemmodel} with index $i=0$ represents the linear component of the SI, thus, without loss of generality, we have $\phi_{0}(x_{k})=x_{k}$ and fix $\underline{a}_{0,\kappa}=1$, $\underline{\Delta}^{a}_{0,\kappa}=0$ in the following. 
Let $\underline{\mybold{R}}^{w}_{\kappa}=\mathbb{E}\left[ \underline{\mybold{w}}_{\kappa}\underline{\mybold{w}}^{H}_{\kappa} \right]$ be the autocorrelation matrix of the random SI channel coefficients, thus, from~\eqref{eq:siChanMarkov}, we have \begin{align} \label{eq:siChanCorr} \text{tr}\left[\underline{\mybold{R}}^{w}_{\kappa}\right]=\left|\underline{A}^{w}\right|^{2}\text{tr}\left[\underline{\mybold{R}}^{w}_{\kappa-1}\right]+M\underline{\psi}^{\Delta w}_{\kappa}. \end{align} Evidently, the system model is characterized by a significant number of parameters. Since these parameters are difficult to determine, we reduce their number by exploiting certain relations between them. For instance, from a practical perspective, it is reasonable to assume that the statistical properties of the linear SI channel remain similar over time. Thus, we assume \begin{align} \label{eq:siChanCorrTrEqual} \text{tr}\left[\underline{\mybold{R}}^{w}_{\kappa}\right] &= \text{tr}\left[\underline{\mybold{R}}^{w}_{\kappa+1}\right] = \ldots = \text{tr}\left[\underline{\mybold{R}}^{w}_{\infty}\right] \end{align} and thus find the variance of the system noise as \begin{align} \label{eq:siChanSystemNoiseCovLongTerm} \underline{\psi}^{\Delta w} = \frac{1}{M}\text{tr}\left[\underline{\mybold{R}}^{w}_{\infty}\right] \left(1-\left|\underline{A}^{w}\right|^{2}\right). \end{align} The steady-state parameter $\text{tr}\left[\underline{\mybold{R}}^{w}_{\infty}\right]$ determines the power of the linear SI channel components. In practice, it can be obtained from a-priori knowledge, since the relation of the SI power level to the SoI is at least approximately known. Furthermore, the \textit{coherence time} reflects the variability of the communication channel in time, and is therefore regarded as the main characteristic of temporal variations. To the best of our knowledge, the coherence time has not been systematically studied for SI channels so far.
However, we believe it to be of the same order as that of the wireless channel, as indicated in previous work~\cite{src:jain2011practical}. To integrate the notion of a channel coherence time into our framework, we introduce a coherence frame index $\kappa^{w}_{\text{coh}}$, which denotes the time after which the correlation of subsequent channel variables has dropped by half, thus \begin{align} \label{eq:channelCoherence} \left| \underline{A}^{w} \right|^{\kappa^{w}_{\text{coh}}} = \frac{1}{2}. \end{align} For the zero-mean nonlinear coefficients $\underline{\mybold{a}}_{\kappa}$, we define the diagonal covariance matrix $\underline{\mybold{R}}^{a}_{\kappa}= \mathbb{E}\left[ \underline{\mybold{a}}_{\kappa}\underline{\mybold{a}}^{H}_{\kappa} \right]$ with the $i^{\text{th}}$ diagonal element $\underline{p}^{a}_{i,\kappa}=\mathbb{E}\left[ \left|\underline{a}_{i,\kappa}\right|^2 \right]$. From~\eqref{eq:nlCoeffsMarkov}, we can derive \begin{align} \label{eq:nlCorr} \underline{\mybold{R}}^{a}_{\kappa}=\underline{\mybold{A}}^{a}\underline{\mybold{R}}^{a}_{\kappa-1}\underline{\mybold{A}}^{a^H}+\underline{\mybold{\Psi}}^{\Delta a}_{\kappa}. \end{align} With the elements of $\underline{\mybold{R}}^{a}_{\kappa}$ approximately constant over time, \begin{align} \underline{\mybold{R}}^{a}_{\kappa}=\underline{\mybold{R}}^{a}_{\kappa+1}=\ldots=\underline{\mybold{R}}^{a}_{\infty}, \end{align} the system noise covariance matrix is expressed by \begin{align} \underline{\mybold{\Psi}}^{\Delta a}_{\kappa} = \underline{\mybold{R}}^{a}_{\infty} \left(\mybold{I}- \underline{\mybold{A}}^{a}\underline{\mybold{A}}^{a^H}\right), \end{align} since $\underline{\mybold{A}}^{a}$ is diagonal.
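The relations above can be turned into a small parameter-design routine: given a coherence frame index, \eqref{eq:channelCoherence} fixes the transition factor, and \eqref{eq:siChanSystemNoiseCovLongTerm} then fixes the system-noise variance. A sketch with hypothetical numbers ($M=64$ taps, coherence of $100$ frames, unit steady-state power), including a Monte-Carlo check of the stationary channel power:

```python
import numpy as np

M = 64           # hypothetical number of channel taps per frame
kappa_coh = 100  # coherence frame index: correlation halves after 100 frames
tr_R_inf = 1.0   # hypothetical steady-state SI channel power tr[R^w_inf]

A_w = 0.5 ** (1.0 / kappa_coh)            # from |A^w|^kappa_coh = 1/2
psi_dw = tr_R_inf / M * (1.0 - A_w ** 2)  # system-noise variance per tap

# Simulate the first-order Markov model and check that the channel power
# indeed fluctuates around the prescribed steady-state value
rng = np.random.default_rng(2)
w = np.sqrt(tr_R_inf / (2 * M)) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
powers = []
for _ in range(20000):
    noise = np.sqrt(psi_dw / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    w = A_w * w + noise
    powers.append(np.vdot(w, w).real)
avg_power = np.mean(powers)  # close to tr_R_inf
```

The same construction applies per coefficient to \eqref{eq:nlCoherence} with the diagonal entries of $\underline{\mybold{A}}^{a}$.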
Similar to~\eqref{eq:channelCoherence}, we define a coherence time $\kappa^{a}_{i,\text{coh}}$ for the $i^{\text{th}}$ nonlinear coefficient by \begin{align} \label{eq:nlCoherence} \left| A_i^{a} \right|^{\kappa^{a}_{i,\text{coh}}} = \frac{1}{2}, \end{align} where $A_i^{a}$ denotes the $i^{\text{th}}$ element on the main diagonal of $\underline{\mybold{A}}^{a}$. \section{State-space nonlinear adaptive algorithm} \label{sec:algorithm} Consider the proposed SI estimation and cancellation algorithm depicted in~\figref{fig:algorithm}. \begin{figure} \begin{tikzpicture}[ sysBlock/.style={draw, fill=white, rectangle, minimum height=2em, minimum width=3.5em}, nlBlock/.style={draw, fill=white, rectangle, minimum height=2em, minimum width=4em}, shadowBlock/.style={draw, drop shadow, fill=white, rectangle}, triangle/.style = {draw, regular polygon, regular polygon sides=3 }, border rotated/.style = {shape border rotate=90} ] \node (inTx) {$\underline{\mybold{x}}_{\kappa}$}; \node[shadowBlock, align=center, minimum height=7em] (kalpred) at ($(inTx) +(1,-1.8)$) {Kalman\\prediction\\$\underline{\hat{\mybold{w}}}_{\kappa|\kappa-1}$,\\$\underline{\hat{\mybold{a}}}_{\kappa|\kappa-1}$}; \node (inRx) at ($(kalpred)+(-1,-2)$) {$\underline{\mybold{y}}_{\kappa}$}; \node[shadowBlock, align=center, text width=1.3cm, minimum height=7em] (sirec) at ($(kalpred) + (2,0)$) {SI\\recon\-struc\-tion}; \node[shadowBlock, align=center, text width=2cm] (ortho) at ($(inTx)+(1.8,0)$) {Ortho\-go\-n\-ali\-za\-tion}; \myadd{add1}{$(inRx-|sirec)$} \node at ($(add1.east)+(0.1,0.2)$) {$-$}; \node[shadowBlock, align=center, minimum height=7em] (dec) at ($(sirec)+(1.75,0)$) {Decoder}; \myadd{add2}{$(dec)+(1.5,0)$} \node at ($(add2.west)+(-0.1,0.2)$) {$-$}; \node[shadowBlock, align=center, minimum height=7em] (kalupd) at ($(add2)+(1.5,0)$) {Kalman\\update\\$\underline{\hat{\mybold{w}}}_{\kappa|\kappa}$,\\$\underline{\hat{\mybold{a}}}_{\kappa|\kappa}$}; \node[dot] (dot1) at ($(sirec|-ortho)+(0.25,0)$) {}; 
\node[dot] (dot2) at ($(dec|-inRx)$) {}; \draw[thick,->] (inTx) to (ortho); \draw[thick,->] (ortho) -- (dot1); \draw[thick,->] (dot1) -- (sirec.north-|dot1); \draw[thick,->] (dot1) -| (kalupd); \draw[thick,->] (kalpred) -- (sirec); \draw[thick,->] (inRx) -- (add1); \draw[thick,->] (sirec) -- node[left] {$\underline{\hat{\mybold{x}}}_{\text{si},\kappa}$} (add1); \draw[thick,->] (add1) -- node[above] {$\underline{\mybold{e}}_{\kappa}$} (dot2); \draw[thick,->] (dot2) -- (dec); \draw[thick,->] (dot2) -| (add2); \draw[thick,->] (dec) -- node[below] {$\underline{\hat{\mybold{d}}}^h_{\kappa}$} (add2); \draw[thick,->] (add2) -- node[above] {$\underline{\tilde{\mybold{e}}}_{\kappa}$} (kalupd); \end{tikzpicture} \caption{Proposed nonlinear SI estimation and cancellation.} \label{fig:algorithm} \end{figure} All signals are represented here in DFT domain. The adaptive algorithm has access to the transmitted signal $\underline{\mybold{x}}_{\kappa}$. Initially, the input is subject to an orthogonalization process, where the nonlinear contributions are transformed to uncorrelated equivalents. Then, the adaptation is performed in an iterative process over time. At each step $\kappa$, we first acquire a predicted Kalman estimate of the linear SI channel and the nonlinear coefficients. We intend to cancel as much SI as possible, thus the algorithm produces a reconstruction $\underline{\hat{\mybold{x}}}_{\text{si},\kappa}$ of the SI signal. After subtracting~$\underline{\hat{\mybold{x}}}_{\text{si},\kappa}$ from the received signal $\underline{\mybold{y}}_{\kappa}$, the algorithm outputs an error signal $\underline{\mybold{e}}_{\kappa}$. This error signal is then provided to the decoder which produces a reconstructed SoI $\underline{\hat{\mybold{d}}}^h_{\kappa}$. 
Finally, after $\underline{\hat{\mybold{d}}}^h_{\kappa}$ is subtracted from the error signal $\underline{\mybold{e}}_{\kappa}$, the residual error $\underline{\tilde{\mybold{e}}}_{\kappa}$ supports the Kalman updates of both the linear SI channel and the nonlinear coefficients. \subsection{Orthogonalization} \label{sec:ortho} The nonlinear basis functions $\phi_i(\cdot)$ do not necessarily represent an orthogonal basis for the SI signal. Therefore, we intend to change the basis by a matrix transformation. There are many approaches available that achieve this orthogonalization, such as eigenvalue decomposition~\cite{src:korpi2015adaptive}, Cholesky factorization~\cite{src:emara2017nonlinear} or the Gram-Schmidt method~\cite{src:proakis2007digital}. We describe the statistical connection between different basis functions by a time-invariant $N\times N$ autocorrelation matrix. Using~\eqref{eq:basisFrameMatrix}, we have \begin{align*} \mybold{R}^{\Phi} &= \mathbb{E}\Big[ \underline{\mybold{\Phi}}^T_{\kappa} \underline{\mybold{\Phi}}^*_{\kappa}\Big]. \end{align*} Next, we choose an $N\times N$ transform matrix $\mybold{G}$, which is not necessarily unitary, but full-rank, and such that $\mybold{G}\mybold{R}^{\Phi}\mybold{G}^H$ is diagonal. We apply the transformation to each basis frame by \begin{align} \underline{\tilde{\mybold{\Phi}}}_{\kappa} &= \underline{\mybold{\Phi}}_{\kappa} \mybold{G}^T \nonumber\\ \label{eq:orthoPhi} &= \left[ \underline{\tilde{\mybold{\phi}}}_{0,\kappa}, \underline{\tilde{\mybold{\phi}}}_{1,\kappa},\ldots, \underline{\tilde{\mybold{\phi}}}_{N-1,\kappa} \right].
\end{align} Next, starting from~\eqref{eq:obsCompact}, we have \begin{align} \underline{\mybold{x}}_{\text{si},\kappa} &= \text{diag}\left[\underline{\mybold{\Phi}}_{\kappa}\mybold{G}^{T}\left(\mybold{G}^T\right)^{-1}\underline{\mybold{a}}_{\kappa}\right] \underline{\mybold{w}}_{\kappa} \nonumber\\ &\hintedrel[eq:obsOrtho1]{=} \text{diag}\left[\underline{\tilde{\mybold{\Phi}}}_{\kappa}\underline{\tilde{\mybold{a}}}_{\kappa}\right]\underline{\mybold{w}}_{\kappa} \nonumber\\ &= \text{diag}\left[\underline{\tilde{\mybold{\Phi}}}_{\kappa}\underline{\tilde{\mybold{a}}}_{\kappa}\frac{1}{\underline{\tilde{a}}_{0,\kappa}}\right]\underline{\tilde{a}}_{0,\kappa}\underline{\mybold{w}}_{\kappa} \nonumber\\ &\hintedrel[eq:obsOrtho2]{=} \left(\sum_{i=0}^{N-1}\underline{\check{a}}_{i,\kappa}\text{diag}\left[ \underline{\tilde{\mybold{\phi}}}_{i,\kappa}\right]\right) \underline{\tilde{\mybold{w}}}_{\kappa}, \end{align} where~$(\hintref{eq:obsOrtho1})$ uses~\eqref{eq:orthoPhi} and $\underline{\tilde{\mybold{a}}}_{\kappa}=\left(\mybold{G}^T\right)^{-1}\underline{\mybold{a}}_{\kappa}$, and~$(\hintref{eq:obsOrtho2})$ applies normalization with $\underline{\check{a}}_{i,\kappa}=\underline{\tilde{a}}_{i,\kappa}/\underline{\tilde{a}}_{0,\kappa}$ and $\underline{\tilde{\mybold{w}}}_{\kappa}=\underline{\tilde{a}}_{0,\kappa}\underline{\mybold{w}}_{\kappa}$. Note that the change of basis affects the outcome of both the estimated linear SI channel and the nonlinear coefficients, as they represent the inner state of the system; the overall SI reconstruction, however, is left unchanged. In the following, for the sake of simplicity, we drop the ``tilde'', but orthogonalization is always applied unless noted otherwise. \subsection{Predictions} \label{sec:predictions} Consider the state equations for the linear SI channel~\eqref{eq:siChanMarkov} and the nonlinear coefficients~\eqref{eq:nlCoeffsMarkov}.
We intend to derive estimators for $\underline{\mybold{w}}_{\kappa}$ and $\underline{\mybold{a}}_{\kappa}$. Let \begin{align} \label{eq:siChanEstimatorPred} \underline{\mybold{w}}_{\kappa} &= \hat{\underline{\mybold{w}}}_{\kappa|\kappa-1} + \underline{\mybold{n}}^w_{\kappa|\kappa-1}\\ \label{eq:nlCoeffsEstimatorPred} \underline{\mybold{a}}_{\kappa} &= \hat{\underline{\mybold{a}}}_{\kappa|\kappa-1} + \underline{\mybold{n}}^a_{\kappa|\kappa-1}, \end{align} where $\hat{\underline{\mybold{w}}}_{\kappa|\kappa-1}$ and $\hat{\underline{\mybold{a}}}_{\kappa|\kappa-1}$ are estimators for the linear SI channel and the nonlinear coefficients given all past observations $\left(\underline{\mybold{y}}_{0},\ldots, \underline{\mybold{y}}_{\kappa-2}, \underline{\mybold{y}}_{\kappa-1}\right)$, respectively. The zero-mean a-priori errors $\underline{\mybold{n}}^w_{\kappa|\kappa-1}$ and $\underline{\mybold{n}}^a_{\kappa|\kappa-1}$ have the covariances $\underline{\mybold{P}}^{w}_{\kappa|\kappa-1}$ and $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$, respectively. Similarly, we define $\hat{\underline{\mybold{w}}}_{\kappa|\kappa}$ and $\hat{\underline{\mybold{a}}}_{\kappa|\kappa}$ to be estimators for the linear SI channel and the nonlinear coefficients given the present and all past observations $\left(\underline{\mybold{y}}_{0},\ldots, \underline{\mybold{y}}_{\kappa-1}, \underline{\mybold{y}}_{\kappa}\right)$, respectively. The zero-mean a-posteriori errors have the covariances $\underline{\mybold{P}}^{w}_{\kappa|\kappa}$ and $\underline{\mybold{P}}^{a}_{\kappa|\kappa}$, respectively. Furthermore, consider the following assumptions: \begin{enumerate} \item The linear SI channel $\underline{\mybold{w}}_{\kappa}$ and the nonlinear coefficients $\underline{\mybold{a}}_{\kappa}$ are independent. 
\item The joint probability density functions of both the estimators $\hat{\underline{\mybold{w}}}_{\kappa|\kappa-1}$, $\hat{\underline{\mybold{a}}}_{\kappa|\kappa-1}$, $\hat{\underline{\mybold{w}}}_{\kappa|\kappa}$, $\hat{\underline{\mybold{a}}}_{\kappa|\kappa}$ and the a-priori and a-posteriori errors are circularly-symmetric complex Gaussian distributions. \item The a-priori and a-posteriori errors are independent. \end{enumerate} The adaptive algorithms for the estimation of the linear SI channel and of the nonlinear coefficients are derived in the following. We show the overall adaptive algorithm in~\algref{alg:overall}. Following the lines of~\cite{src:scharf1991statistical}, the predictions of the estimates $\hat{\underline{\mybold{w}}}_{\kappa|\kappa-1}$, $\hat{\underline{\mybold{a}}}_{\kappa|\kappa-1}$ and the estimation error covariances $\underline{\mybold{P}}^{w}_{\kappa|\kappa-1}$, $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$ are computed first. Next, the information encoded in the SoI is retrieved and its contribution is removed from the received signal. This is explained in more detail in Subsection~\ref{sec:decoding}. Finally, the update steps of the linear SI channel and the nonlinear coefficients are computed. This is presented in Subsections~\ref{sec:estLin} and~\ref{sec:estNl}, respectively.
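The prediction steps follow directly from the state equations~\eqref{eq:siChanMarkov} and~\eqref{eq:nlCoeffsMarkov}. The following sketch (hypothetical toy dimensions, numpy-based) propagates the posterior quantities of frame $\kappa-1$ to the priors of frame $\kappa$:

```python
import numpy as np

def kalman_predict(w_hat, P_w, a_hat, P_a, A_w, A_a, Psi_dw, Psi_da):
    # Propagate the posterior estimates of frame kappa-1 through the
    # first-order Markov models to obtain the priors for frame kappa
    w_pred = A_w * w_hat                           # scalar transition factor A^w
    P_w_pred = abs(A_w) ** 2 * P_w + Psi_dw
    a_pred = A_a @ a_hat                           # diagonal transition matrix A^a
    P_a_pred = A_a @ P_a @ A_a.conj().T + Psi_da
    return w_pred, P_w_pred, a_pred, P_a_pred

# Hypothetical toy dimensions: M = 4 DFT bins, N = 2 nonlinear coefficients
M, N = 4, 2
w_hat = np.zeros(M, dtype=complex)
P_w = np.eye(M)
a_hat = np.array([1.0, 0.0], dtype=complex)   # a_0 is fixed to 1 by convention
P_a = np.eye(N)
A_w = 0.99
A_a = np.diag([1.0, 0.95])
Psi_dw = (1 - A_w ** 2) * np.eye(M)           # steady-state choice, cf. the variance relation
Psi_da = np.eye(N) - A_a @ A_a.conj().T

w_p, Pw_p, a_p, Pa_p = kalman_predict(w_hat, P_w, a_hat, P_a, A_w, A_a, Psi_dw, Psi_da)
# With these steady-state noise choices the predicted covariances remain identity
assert np.allclose(Pw_p, np.eye(M)) and np.allclose(Pa_p, np.eye(N))
```

This is only the prediction half; the update half additionally requires the decoded residual error and the Kalman gains derived below.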
\begin{algorithm} \caption{Iterative estimation of linear SI channel~$\underline{{\mybold{w}}}_{\kappa}$ and nonlinear coefficients~$\underline{\mybold{a}}_{\kappa}$} \label{alg:overall} \begin{algorithmic} \REQUIRE Initialization of $\underline{\hat{\mybold{w}}}_{0|0}$, $\underline{\mybold{P}}^{w}_{0|0}$, $\underline{\hat{\mybold{a}}}_{0|0}$ and $\underline{\mybold{P}}^{a}_{0|0}$ \REPEAT \STATE $\kappa \leftarrow \kappa + 1$ \PREDICT{linear SI channel estimate} \STATE $\underline{\hat{\mybold{w}}}_{\kappa|\kappa-1}=\underline{A}^{w} \underline{\hat{\mybold{w}}}_{\kappa-1|\kappa-1}$ \ENDBLOCK \PREDICT{linear SI channel estimation error covariance} \STATE $\underline{\mybold{P}}^{w}_{\kappa|\kappa-1} = \left|\underline{A}^{w}\right|^{2}\underline{\mybold{P}}^{w}_{\kappa-1|\kappa-1} +\underline{\mybold{\Psi}}^{\Delta w}_{\kappa}$ \ENDBLOCK \PREDICT{coefficients estimate} \STATE $\underline{\hat{\mybold{a}}}_{\kappa|\kappa-1}=\underline{\mybold{A}}^{a} \underline{\hat{\mybold{a}}}_{\kappa-1|\kappa-1}$ \ENDBLOCK \PREDICT{coefficients estimation error covariance} \STATE $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1} = \underline{\mybold{A}}^{a}\underline{\mybold{P}}^{a}_{\kappa-1|\kappa-1}\underline{\mybold{A}}^{a^H} +\underline{\mybold{\Psi}}^{\Delta a}_{\kappa}$ \ENDBLOCK \STATE{\textbf{Decode}~the information from the SoI and subtract its contribution from the received signal, see Section~\ref{sec:decoding}} \item[\algorithmicupdate]{linear SI channel estimate from prediction $\underline{\hat{\mybold{w}}}_{\kappa|\kappa-1}$, see~\algref{alg:linearChannel} in Subsection~\ref{sec:estLin}} \STATE Requires all predictions and provides $\underline{\hat{\mybold{w}}}_{\kappa|\kappa}$ and $\underline{\mybold{P}}^{w}_{\kappa|\kappa}$ \ENDBLOCK \item[\algorithmicupdate]{nonlinear coefficients estimate from prediction $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$, see~\algref{alg:nonlinearCoeffs} in Subsection~\ref{sec:estNl}} \STATE Requires all predictions, the updated 
$\underline{\hat{\mybold{w}}}_{\kappa|\kappa}$ and $\underline{\mybold{P}}^{w}_{\kappa|\kappa}$, and provides $\underline{\hat{\mybold{a}}}_{\kappa|\kappa}$ and $\underline{\mybold{P}}^{a}_{\kappa|\kappa}$ \ENDBLOCK \UNTIL{forever} \end{algorithmic} \end{algorithm} \subsection{Decoding} \label{sec:decoding} After predicting the estimates of both the linear SI channel and the nonlinear coefficients, the SI contribution can be reconstructed based on~\eqref{eq:obsDFT} for the current time frame $\kappa$ by \begin{align} \label{eq:reconstructedSI} \underline{\hat{\mybold{x}}}_{\text{si},\kappa}= \sum_{i=0}^{N-1}\underline{\hat{a}}_{i,\kappa|\kappa-1}\underline{\mybold{C}}_{i,\kappa}\underline{\hat{\mybold{w}}}_{\kappa|\kappa-1}. \end{align} We define the error signal \begin{align} \label{eq:errorSignal} \underline{\mybold{e}}_{\kappa} &= \underline{\mybold{y}}_{\kappa} - \underline{\hat{\mybold{x}}}_{\text{si},\kappa}. \end{align} At this point the error signal~\eqref{eq:errorSignal} comprises many sources of observation noise, such as the residual SI, the SoI and independent additive noise. Large observation noise generally reduces the convergence speed or increases the misalignment of adaptive algorithms~\cite{Haykin2002}, but here we can reduce its contribution. Unlike in acoustic echo cancellation, where the acoustic signal is modeled as a random process, the SoI in wireless communication contains structure from encoded information. By decoding the information from the SoI and subtracting the corresponding SoI signal from~\eqref{eq:errorSignal}, we have better conditioning for subsequent updates of the SI path. Let $\underline{\hat{\mybold{d}}}^{h}_{\kappa}$ be the SoI generated from decoded information in DFT domain. Then, we have the residual error \begin{align} \label{eq:residualError} \underline{\tilde{\mybold{e}}}_{\kappa} = \underline{\mybold{y}}_{\kappa} - \underline{\hat{\mybold{x}}}_{\text{si},\kappa} - \underline{\hat{\mybold{d}}}^{h}_{\kappa}. 
\end{align} It comprises a potential decoding error, the residual SI and independent noise. \subsection{Update estimation of linear SI channel} \label{sec:estLin} During the $\kappa^{\text{th}}$~step of adaptation, we need to know the estimate of the nonlinear coefficients from the previous step. Therefore, in the following, we assume that the estimates $\underline{\hat{a}}_{i,\kappa|\kappa-1}$ are given. Recall the observation equation~\eqref{eq:obsDFT}. After the decoding, we subtract the reconstructed SoI and have \begin{align} \underline{\tilde{\mybold{y}}}_{\kappa} &= \underline{\mybold{y}}_{\kappa} - \underline{\hat{\mybold{d}}}^{h}_{\kappa} \notag \\ \label{eq:obsDec} &\hintedrel[eq:y1]{=} \left(\sum_{i=0}^{N-1} \underline{a}_{i,\kappa}\underline{\mybold{C}}_{i,\kappa}\right) \underline{\mybold{w}}_{\kappa} + \underline{\tilde{\mybold{s}}}_{\kappa} \\ &\hintedrel[eq:y2]{=} \left(\sum_{i=0}^{N-1} \underline{\hat{a}}_{i,\kappa|\kappa-1}\underline{\mybold{C}}_{i,\kappa}\right) \underline{\mybold{w}}_{\kappa} + \underline{\tilde{\mybold{s}}}^{w1}_{\kappa}, \label{eq:obsLinear} \end{align} where~$(\hintref{eq:y1})$ introduces the noise term $\underline{\tilde{\mybold{s}}}_{\kappa} = \underline{\mybold{s}}_{\kappa} - \underline{\hat{\mybold{d}}}^{h}_{\kappa}$ comprising the residual SoI and additive noise with covariance matrix $\underline{\mybold{\Psi}}^{\tilde{s}}_{\kappa}$. We assume that $\underline{\mybold{\Psi}}^{\tilde{s}}_{\kappa}$ can be characterized from a-priori information on the noise level. Step~$(\hintref{eq:y2})$ follows from~\eqref{eq:nlCoeffsEstimatorPred} and introduces the zero-mean augmented noise term \begin{align} \label{eq:augNoiseLinear} \underline{\tilde{\mybold{s}}}^{w1}_{\kappa} = \left(\sum_{i=0}^{N-1} \underline{n}^{a}_{i,\kappa|\kappa-1}\underline{\mybold{C}}_{i,\kappa}\right) \underline{\mybold{w}}_{\kappa} + \underline{\tilde{\mybold{s}}}_{\kappa}.
\end{align} The covariance matrix of the augmented noise $\underline{\tilde{\mybold{s}}}^{w1}_{\kappa}$ is \begin{align} \underline{\mybold{\Psi}}^{\tilde{s}^{w1}}_{\kappa} &= \mathbb{E}\left[ \underline{\tilde{\mybold{s}}}^{w1}_{\kappa}\underline{\tilde{\mybold{s}}}^{w1^H}_{\kappa} \right] \nonumber \\ &= \sum_{i=0}^{N-1}\underline{p}^{a}_{i,\kappa|\kappa-1} \underline{\mybold{C}}_{i,\kappa} \underline{\mybold{R}}^{w}_{\kappa} \underline{\mybold{C}}^{H}_{i,\kappa} +\underline{\mybold{\Psi}}^{\tilde{s}}_{\kappa}, \label{eq:covAugNoiseLinear} \end{align} where we have $\underline{p}^{a}_{i,\kappa|\kappa-1}=\mathbb{E}\left[ |\underline{n}^{a}_{i,\kappa|\kappa-1}|^2 \right]$ and $\underline{\mybold{R}}^{w}_{\kappa}=\mathbb{E}\left[\underline{\mybold{w}}_{\kappa}\underline{\mybold{w}}^H_{\kappa}\right]$. Due to the product term of the nonlinear estimation error and the channel vector, the overall term~\eqref{eq:augNoiseLinear} is non-Gaussian. As a consequence, the derivation of exact Kalman filter equations is not possible, since it requires jointly Gaussian distributions within the state-space model. Therefore, in the following, we approximate~\eqref{eq:augNoiseLinear} by an independent Gaussian random vector $\underline{\tilde{\mybold{s}}}^{w2}_{\kappa}$, which has the same second-order moment as $\underline{\tilde{\mybold{s}}}^{w1}_{\kappa}$. Thus, the modified observation vector is \begin{align} \underline{\tilde{\mybold{y}}}_{\kappa} &\approx \left(\sum_{i=0}^{N-1} \underline{\hat{a}}_{i,\kappa|\kappa-1}\underline{\mybold{C}}_{i,\kappa}\right) \underline{\mybold{w}}_{\kappa} + \underline{\tilde{\mybold{s}}}^{w2}_{\kappa} \nonumber\\ &= \underline{\mybold{C}}^{\hat{a}}_{\kappa|\kappa-1} \underline{\mybold{w}}_{\kappa} + \underline{\tilde{\mybold{s}}}^{w2}_{\kappa} \label{eq:obsLinearGauss} \end{align} with \begin{align} \underline{\mybold{C}}^{\hat{a}}_{\kappa|\kappa-1} = \sum_{i=0}^{N-1} \underline{\hat{a}}_{i,\kappa|\kappa-1}\underline{\mybold{C}}_{i,\kappa}. 
\label{eq:bwTau} \end{align} We are now prepared to provide the Kalman gain and update equations~\cite{src:scharf1991statistical}, based on the observation equation~\eqref{eq:obsLinearGauss}, and by using~\eqref{eq:covAugNoiseLinear} and~\eqref{eq:bwTau}, and the definitions of error covariance matrices from Section~\ref{sec:predictions}. The computations for the $\kappa^{\text{th}}$ step are depicted in~\algref{alg:linearChannel}. \begin{algorithm} \caption{Obtain linear SI channel estimate at step $\kappa$} \label{alg:linearChannel} \begin{algorithmic} \REQUIRE known $\underline{\hat{\mybold{w}}}_{\kappa|\kappa-1}$, $\underline{\mybold{P}}^{w}_{\kappa|\kappa-1}$, $\underline{\hat{\mybold{a}}}_{\kappa|\kappa-1}$ and $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$ \GAIN{} \STATE $\underline{\mybold{K}}^{w}_{\kappa} = \underline{\mybold{P}}^{w}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{a}^H}_{\kappa|\kappa-1}\cdot$\newline $\phantom{\underline{\mybold{K}}^{w}_{\kappa} = \underline{\mybold{P}}^{w}_{\kappa|\kappa-1} } \left( \underline{\mybold{C}}^{\hat{a}}_{\kappa|\kappa-1} \underline{\mybold{P}}^{w}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{a}^H}_{\kappa|\kappa-1}+ \underline{\mybold{\Psi}}^{\tilde{s}^{w2}}_{\kappa} \right)^{-1} $ \ENDBLOCK \item[\algorithmicupdate]{linear SI channel estimation} \STATE $\underline{\hat{\mybold{w}}}_{\kappa|\kappa}= \underline{\hat{\mybold{w}}}_{\kappa|\kappa-1}+ \underline{\mybold{K}}^{w}_{\kappa} \left( \underline{\tilde{\mybold{y}}}_{\kappa} - \underline{\mybold{C}}^{\hat{a}}_{\kappa|\kappa-1}\underline{\hat{\mybold{w}}}_{\kappa|\kappa-1} \right)$ \ENDBLOCK \item[\algorithmicupdate]{linear SI channel estimation error covariance} \STATE $\underline{\mybold{P}}^{w}_{\kappa|\kappa} = \left( \eye{M}-\underline{\mybold{K}}^{w}_{\kappa}\underline{\mybold{C}}^{\hat{a}}_{\kappa|\kappa-1} \right) \underline{\mybold{P}}^{w}_{\kappa|\kappa-1}$ \ENDBLOCK \end{algorithmic} \end{algorithm} \subsection{Update estimation of nonlinear 
coefficients} \label{sec:estNl} We first provide an alternative expression for the observation~\eqref{eq:obsDec} at the receiver. The adaptation with respect to the nonlinear coefficients~$\underline{\mybold{a}}_{\kappa}$ requires another linear form of the observation. Thus, we express~\eqref{eq:obsDec} alternatively by using~\eqref{eq:nlCoeffsVec} \begin{align} \label{eq:obsAlt} \underline{\tilde{\mybold{y}}}_{\kappa}&= \underline{\mybold{C}}^{w}_{\kappa} \underline{\mybold{a}}_{\kappa} + \underline{\tilde{\mybold{s}}}_{\kappa}, \end{align} where we define \begin{align} \underline{\mybold{C}}^{w}_{\kappa} = \left[ \underline{\mybold{C}}_{0,\kappa}\underline{\mybold{w}}_{\kappa}, \ldots, \underline{\mybold{C}}_{N-1,\kappa}\underline{\mybold{w}}_{\kappa} \right]. \end{align} Compared to the original form~\eqref{eq:obsDec}, we have essentially swapped the roles of~$\underline{\mybold{a}}_{\kappa}$ and the linear SI channel~$\underline{\mybold{w}}_{\kappa}$. Let \begin{align} \label{eq:y2dft} \underline{\tilde{\mybold{y}}}_{\kappa}= \underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} \underline{\mybold{a}}_{\kappa} + \underline{\tilde{\mybold{s}}}^{a1}_{\kappa} \end{align} where we use~\eqref{eq:obsAlt} and the definition \begin{align} \label{eq:cwtau} \underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} &= \left[ \underline{\mybold{C}}_{0,\kappa}\underline{\hat{\mybold{w}}}_{\kappa|\kappa}, \underline{\mybold{C}}_{1,\kappa}\underline{\hat{\mybold{w}}}_{\kappa|\kappa}, \ldots, \underline{\mybold{C}}_{N-1,\kappa}\underline{\hat{\mybold{w}}}_{\kappa|\kappa} \right] \\ &= \begin{bmatrix} \underline{\mybold{c}}^{\hat{w}}_{0,\kappa|\kappa} & \underline{\mybold{c}}^{\hat{w}}_{1,\kappa|\kappa} & \ldots & \underline{\mybold{c}}^{\hat{w}}_{N-1,\kappa|\kappa} \end{bmatrix}. 
\label{eq:frameColsDef} \end{align} Furthermore, the zero-mean augmented noise here is \begin{align} \label{eq:SignalNoiseA} \underline{\tilde{\mybold{s}}}^{a1}_{\kappa} &= \left[ \begin{matrix} \underline{\mybold{C}}_{0,\kappa}\underline{\mybold{n}}^{w}_{\kappa|\kappa} & \underline{\mybold{C}}_{1,\kappa}\underline{\mybold{n}}^{w}_{\kappa|\kappa} & \ldots \end{matrix} \right.\nonumber\\ &\qquad\left. \begin{matrix} & \underline{\mybold{C}}_{N-1,\kappa}\underline{\mybold{n}}^{w}_{\kappa|\kappa} \end{matrix} \right] \underline{\mybold{a}}_{\kappa} + \underline{\tilde{\mybold{s}}}_{\kappa}. \end{align} The covariance matrix of the augmented noise $\underline{\tilde{\mybold{s}}}^{a1}_{\kappa}$ is \begin{align} \underline{\mybold{\Psi}}^{\tilde{s}^{a1}}_{\kappa} &= \sum_{i=0}^{N-1} \underline{p}^{a}_{i,\kappa} \underline{\mybold{C}}_{i,\kappa} \underline{\mybold{P}}^{w}_{\kappa|\kappa} \underline{\mybold{C}}^{H}_{i,\kappa} +\underline{\mybold{\Psi}}^{\tilde{s}}_{\kappa} \label{eq:SignalNoiseACov} \end{align} with $\underline{p}^{a}_{i,\kappa}=\mathbb{E}\left[ |\underline{a}_{i,\kappa}|^2 \right]$ and $\underline{\mybold{P}}^{w}_{\kappa|\kappa}=\mathbb{E}\left[ \underline{\mybold{n}}^{w}_{\kappa|\kappa}\underline{\mybold{n}}^{w^{H}}_{\kappa|\kappa} \right]$. Note that the estimation of the linear SI channel provided in Section~\ref{sec:estLin} depends only on the predictions of $\underline{\mybold{w}}_{\kappa}$ and $\underline{\mybold{a}}_{\kappa}$, whereas here the updated $\underline{\hat{\mybold{w}}}_{\kappa|\kappa}$ is available. Similar to~\eqref{eq:augNoiseLinear}, we approximate~\eqref{eq:SignalNoiseA} by an independent Gaussian random vector $\underline{\tilde{\mybold{s}}}^{a2}_{\kappa}$ with the same second-order moment as $\underline{\tilde{\mybold{s}}}^{a1}_{\kappa}$.
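The two observation forms~\eqref{eq:obsLinear} and~\eqref{eq:obsAlt} rest on the identity $\big(\sum_i \underline{a}_{i}\underline{\mybold{C}}_{i}\big)\underline{\mybold{w}} = \underline{\mybold{C}}^{w}\underline{\mybold{a}}$, which swaps the roles of $\underline{\mybold{a}}_{\kappa}$ and $\underline{\mybold{w}}_{\kappa}$. A quick numerical check with random matrices (hypothetical dimensions):

```python
import numpy as np

rng = np.random.default_rng(5)
M, N = 8, 3
C = rng.standard_normal((N, M, M)) + 1j * rng.standard_normal((N, M, M))
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)

# Observation form linear in w (used for the linear SI channel update) ...
y_w = sum(a[i] * C[i] for i in range(N)) @ w
# ... equals the form linear in a (used for the coefficient update),
# whose columns are the vectors C_i w
C_w = np.stack([C[i] @ w for i in range(N)], axis=1)
y_a = C_w @ a
assert np.allclose(y_w, y_a)
```

The same bilinearity is what makes the alternating Kalman updates for $\underline{\mybold{w}}_{\kappa}$ and $\underline{\mybold{a}}_{\kappa}$ each linear in their respective state.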
Thus, we have \begin{align} \label{eq:yaDef} \underline{\tilde{\mybold{y}}}_{\kappa} &\approx \underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} \underline{\mybold{a}}_{\kappa} + \underline{\tilde{\mybold{s}}}^{a2}_{\kappa}. \end{align} On this basis, we provide the Kalman filter equations, based on the system equation~\eqref{eq:nlCoeffsMarkov} and the observation equation~\eqref{eq:yaDef}, and by considering~\eqref{eq:SignalNoiseACov}. The computations for the $\kappa^{\text{th}}$ step are depicted in~\algref{alg:nonlinearCoeffs}. \begin{algorithm} \caption{Obtain nonlinear coefficients estimate at step $\kappa$} \label{alg:nonlinearCoeffs} \begin{algorithmic} \REQUIRE known $\underline{\hat{\mybold{w}}}_{\kappa|\kappa}$, $\underline{\mybold{P}}^{w}_{\kappa|\kappa}$, $\underline{\hat{\mybold{a}}}_{\kappa|\kappa-1}$ and $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$ \GAIN{} \STATE $\underline{\mybold{K}}^{a}_{\kappa} = \underline{\mybold{P}}^{a}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa} \left( \underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} \underline{\mybold{P}}^{a}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa}+ \underline{\mybold{\Psi}}^{\tilde{s}^{a2}}_{\kappa} \right)^{-1}$ \ENDBLOCK \item[\algorithmicupdate]{coefficients estimate} \STATE $\underline{\hat{\mybold{a}}}_{\kappa|\kappa}= \underline{\hat{\mybold{a}}}_{\kappa|\kappa-1}+ \underline{\mybold{K}}^{a}_{\kappa} \left( \underline{\tilde{\mybold{y}}}_{\kappa} - \underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa}\underline{\hat{\mybold{a}}}_{\kappa|\kappa-1} \right)$ \ENDBLOCK \item[\algorithmicupdate]{coefficients estimation error covariance} \STATE $\underline{\mybold{P}}^{a}_{\kappa|\kappa} = \left( \eye{M}-\underline{\mybold{K}}^{a}_{\kappa}\underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} \right) \underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$ \ENDBLOCK \end{algorithmic} \end{algorithm} \section{Approximations} \label{sec:approximations} In this section, we 
discuss three approximations of the algorithm derived in Section~\ref{sec:algorithm}, namely an intra-channel approximation (Section~\ref{sec:intraChannelApprox}), an overlap-save diagonalization (Section~\ref{sec:overlapSaveDiag}) and a nonlinear diagonalization (Section~\ref{sec:nonlinearDiag}). \subsection{Intra-channel approximation} \label{sec:intraChannelApprox} The computation of the noise covariances~\eqref{eq:covAugNoiseLinear} and~\eqref{eq:SignalNoiseACov} during the Kalman update step requires a-priori knowledge of the second moments $\underline{p}^{a}_{i,\kappa}$, $\underline{\mybold{R}}^{w}_{\kappa}$ of the nonlinear coefficients and the linear channel, respectively. However, this information might not be available, and therefore we approximate the second moments by their estimated counterparts: \begin{align} \underline{\mybold{R}}^{w}_{\kappa} &= \mathbb{E}\left[ \underline{\mybold{w}}_{\kappa}\underline{\mybold{w}}^H_{\kappa} \right] \nonumber \\ &\approx \text{diag}\left[ \underline{\hat{\mybold{w}}}_{\kappa|\kappa-1} \circ \underline{\hat{\mybold{w}}}^{*}_{\kappa|\kappa-1} \right] + \underline{\mybold{P}}^{w}_{\kappa|\kappa-1}, \\ \underline{p}^{a}_{i,\kappa} &= \mathbb{E}\left[ |\underline{a}_{i,\kappa}|^2 \right] \nonumber \\ &\approx |\underline{\hat{a}}_{i,\kappa|\kappa-1}|^2+ \underline{p}^{a}_{i,\kappa|\kappa-1}, \end{align} where we approximate the linear channel and nonlinear coefficient covariances by the instantaneous representations of the predicted variables~\eqref{eq:siChanEstimatorPred} and~\eqref{eq:nlCoeffsEstimatorPred} instead of the average. Furthermore, we assume the error covariance matrices $\underline{\mybold{P}}^{w}_{\kappa|\kappa-1}$, $\underline{\mybold{P}}^{w}_{\kappa|\kappa}$ and the augmented noise covariance matrix of~\eqref{eq:covAugNoiseLinear} to be diagonal.
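This substitution is exact when the predicted state is Gaussian with the stated mean and covariance, since $\mathbb{E}\left[|w_i|^2\right]=|\hat{w}_i|^2+P_{ii}$ for $w_i=\hat{w}_i+n_i$ with zero-mean $n_i$. A minimal Monte-Carlo sketch of this identity (all dimensions and values are purely illustrative, not taken from the simulations):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4                                                          # illustrative state dimension
w_hat = rng.standard_normal(L) + 1j * rng.standard_normal(L)   # predicted mean
p_diag = np.array([0.5, 0.2, 0.1, 0.05])                       # diagonal error covariance

# draw samples w = w_hat + n with n ~ CN(0, diag(p_diag))
n_samples = 200_000
noise = np.sqrt(p_diag / 2) * (rng.standard_normal((n_samples, L))
                               + 1j * rng.standard_normal((n_samples, L)))
w = w_hat + noise

# empirical second moment vs. the approximation |w_hat|^2 + diag(P)
emp = np.mean(np.abs(w) ** 2, axis=0)
approx = np.abs(w_hat) ** 2 + p_diag
assert np.allclose(emp, approx, rtol=2e-2)
```

The residual gap between `emp` and `approx` stems from the finite sample size only; with the true predictive mean and covariance the identity holds exactly.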
\subsection{Overlap-save diagonalization} \label{sec:overlapSaveDiag} The computation of the Kalman gain in \algref{alg:linearChannel} requires the inversion of an $M\times M$ matrix, which is both computationally intensive and numerically unstable. To simplify~\eqref{eq:CDef}, we apply the intra-channel diagonalization from Section~\ref{sec:intraChannelApprox} and use the approximations~\cite{src:benesty2001advances,Enzner06} for each nonlinear contribution \begin{align} \label{eq:approx1} \underline{\mybold{C}}_{i,\kappa} &\approx \frac{R}{M}\text{diag}\left[ \underline{\mybold{\phi}}_{i,\kappa}\right] \\ \label{eq:approx2} \underline{\mybold{C}}_{i,\kappa}\underline{\mybold{P}}^{w}_{\kappa|\kappa-1}\underline{\mybold{C}}^{H}_{j,\kappa} &\approx \frac{R}{M}\text{diag}\left[ \underline{\mybold{\phi}}_{i,\kappa}\right]\underline{\mybold{P}}^{w}_{\kappa|\kappa-1}\text{diag}\left[ \underline{\mybold{\phi}}^{*}_{j,\kappa}\right]. \end{align} The approximation errors of~\eqref{eq:approx1} and~\eqref{eq:approx2} vanish for $R\to M$~\cite{src:malik2013variational}. The diagonalization step essentially ignores the overlap-save constraint during fast convolution; however, this effect is small if the difference $M-R$ is small. With these changes, the prediction, Kalman gain and update steps of~\algref{alg:linearChannel} are diagonalized. \subsection{Nonlinear diagonalization} \label{sec:nonlinearDiag} The Kalman gain of \algref{alg:nonlinearCoeffs} requires the inversion of an $M\times M$ matrix. We reduce the computational complexity of this operation by diagonalizing the matrix to be inverted. Since $\underline{\mybold{P}}^{a}_{\kappa|\kappa-1}$ and $\underline{\mybold{\Psi}}^{\tilde{s}^{a2}}_{\kappa}$ are already diagonal by the approximations of Sections~\ref{sec:intraChannelApprox} and~\ref{sec:overlapSaveDiag}, only the $M\times N$ frame matrix $\underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa}$ has to be considered. Recall the definition~\eqref{eq:frameColsDef}.
Each of the columns represents a nonlinear basis frame after convolution with the linear SI channel. Thus, we can write \begin{align} \label{eq:nldiagInv} &\underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} \underline{\mybold{P}}^{a}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa}+ \underline{\mybold{\Psi}}^{\tilde{s}^{a2}}_{\kappa} \\ &\qquad= \sum_{i=0}^{N-1} \underline{p}^{a}_{i,\kappa|\kappa-1} \underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa} \underline{\mybold{c}}^{\hat{w}^H}_{i,\kappa|\kappa}+ \underline{\mybold{\Psi}}^{\tilde{s}^{a2}}_{\kappa}, \notag \\ \label{eq:nlDiagInvApprox} &\qquad \hintedrel[eq:nldiagInv1]{\approx} \sum_{i=0}^{N-1} \underline{p}^{a}_{i,\kappa|\kappa-1} \underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa} \underline{\mybold{c}}^{\hat{w}^H}_{i,\kappa|\kappa}+ \underline{\sigma}^{2}_{\kappa}\eye{M}, \\ &\qquad \hintedrel[eq:nldiagInv2]{=} \mybold{U} \text{diag}\left[ \lambda_0, \lambda_1, \ldots, \lambda_{M-1} \right] \mybold{U}^H \end{align} where in~$(\hintref{eq:nldiagInv1})$, $\underline{\mybold{\Psi}}^{\tilde{s}^{a2}}_{\kappa}$ is approximated by $\underline{\sigma}^{2}_{\kappa}\eye{M}$ and $\underline{\sigma}^{2}_{\kappa}$ denotes the largest element of $\underline{\mybold{\Psi}}^{\tilde{s}^{a2}}_{\kappa}$. Furthermore, in~$(\hintref{eq:nldiagInv2})$, $\mybold{U}$ denotes the matrix of eigenvectors and $\lambda_i$ are the eigenvalues. The basis frames $\underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa}$ are not necessarily orthogonal.
However, if we assume they are approximately orthogonal, then the basis frames are eigenvectors of $ \underline{\mybold{C}}^{\hat{w}}_{\kappa|\kappa} \underline{\mybold{P}}^{a}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa} $, and, since $\underline{\sigma}^{2}_{\kappa}\eye{M}$ is just a scaled identity matrix, they are also eigenvectors of~\eqref{eq:nlDiagInvApprox} with eigenvalues \begin{align*} \lambda_i = \begin{cases} \underline{p}^{a}_{i,\kappa|\kappa-1} \underline{\mybold{c}}^{\hat{w}^H}_{i,\kappa|\kappa} \underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa}+ \underline{\sigma}^{2}_{\kappa} & 0\leq i\leq N-1 \\ \underline{\sigma}^{2}_{\kappa} & N\leq i\leq M-1. \end{cases} \end{align*} Furthermore, the first $N$ columns of $\mybold{U}$ are normalized versions of the $\underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa}$. Next, for the Kalman gain, we have \begin{align*} \underline{\mybold{K}}^{a}_{\kappa} &= \underline{\mybold{P}}^{a}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa} \left( \sum_{i=0}^{N-1} \underline{p}^{a}_{i,\kappa|\kappa-1} \underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa} \underline{\mybold{c}}^{\hat{w}^H}_{i,\kappa|\kappa}+ \underline{\sigma}^{2}_{\kappa}\eye{M} \right)^{-1} \\ &= \underline{\mybold{P}}^{a}_{\kappa|\kappa-1} \underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa} \left( \mybold{U} \text{diag}\left[ \frac{1}{\lambda_0}, \frac{1}{\lambda_1}, \ldots, \frac{1}{\lambda_{M-1}} \right] \mybold{U}^H \right). \end{align*} Let $\underline{\mybold{k}}^{a^{T}}_{i,\kappa}$ be the $i$th row of $\underline{\mybold{K}}^{a}_{\kappa}$; then we have \begin{align} \label{eq:kalmanApprox} \underline{\mybold{k}}^{a^{T}}_{i,\kappa}= \frac{ \underline{p}^{a}_{i,\kappa|\kappa-1} }{ \underline{p}^{a}_{i,\kappa|\kappa-1} \underline{\mybold{c}}^{\hat{w}^H}_{i,\kappa|\kappa} \underline{\mybold{c}}^{\hat{w}}_{i,\kappa|\kappa}+ \underline{\sigma}^{2}_{\kappa}
} \underline{\mybold{c}}^{\hat{w}^H}_{i,\kappa|\kappa}, \end{align} since the $i$th row of $\underline{\mybold{C}}^{\hat{w}^H}_{\kappa|\kappa}$ is orthogonal to all columns of $\mybold{U}$ except the $i$th one. The computation of~\eqref{eq:kalmanApprox} is far less computationally expensive. Finally, we apply the overlap-save diagonalization of~\eqref{eq:approx1} to~\eqref{eq:kalmanApprox}, and get \begin{align*} \underline{\mybold{k}}^{a^{T}}_{i,\kappa}\approx \frac{ \frac{R}{M} \underline{p}^{a}_{i,\kappa|\kappa-1} \left( \underline{\mybold{\phi}}_{i,\kappa} \circ \underline{\hat{\mybold{w}}}_{\kappa|\kappa} \right)^H }{ \frac{R}{M} \underline{p}^{a}_{i,\kappa|\kappa-1} \left( \underline{\mybold{\phi}}_{i,\kappa} \circ \underline{\hat{\mybold{w}}}_{\kappa|\kappa} \right)^H \left( \underline{\mybold{\phi}}_{i,\kappa} \circ \underline{\hat{\mybold{w}}}_{\kappa|\kappa} \right) + \underline{\sigma}^{2}_{\kappa} }. \end{align*} \section{Results} \label{sec:results} In this section, we evaluate the performance of the algorithms derived in this work by comparing them to other approaches in the literature. We refer to~\algref{alg:linearChannel} and~\algref{alg:nonlinearCoeffs} from Section~\ref{sec:algorithm} as the \emph{exact} Kalman algorithm in cascade structure. In addition, the \emph{approximated} Kalman algorithm is taken from Section~\ref{sec:approximations}. From the literature, we employ the Kalman algorithm in parallel structure~\cite{MalikEnzner2012b} as well as the time-domain nonlinear NLMS and RLS, both in parallel structure. The section is organized as follows. First, we define the performance metrics in Section~\ref{sec:metrics}. Then, the computational complexity is analyzed in Section~\ref{sec:complexity}. By performing simulations, we examine the time convergence behavior in Section~\ref{sec:convergence}, the global performance of the proposed algorithms in Section~\ref{sec:global}, and use cases in Section~\ref{sec:usecases}.
\subsection{Performance metrics} \label{sec:metrics} We define the key metrics that serve as performance indicators for the adaptive SI cancellation algorithms. First, we introduce definitions according to the mean-squared error (MSE) principle. The average signal-to-residual-interference-and-noise ratio (SRINR) is given as follows \begin{align} \label{eq:srinrDef} \text{SRINR}_{\kappa}= \frac{ \mathbb{E}\left[\left| \mybold{d}^{h}_{\kappa} \right|^2\right] }{ \mathbb{E}\left[\left| \mybold{e}_\kappa-\mybold{d}^{h}_{\kappa} \right|^2\right] } , \end{align} where $\mybold{d}^{h}_{\kappa}$ denotes the desired signal at the receiver, and $\mybold{e}_\kappa$ is the time-domain error signal before decoding~\eqref{eq:errorSignal}. The wireless channel~\eqref{eq:wChanDef} is assumed to be random; thus, the process~\eqref{eq:srinrDef} is non-ergodic, and we therefore define the ``global'' SRINR to be the average in time for $\kappa\to\infty$ over many realizations of the wireless channel. The capability of system identification can be measured by the system distance, i.e., the power of the estimation error relative to the power of the system variables. In the domain of Kalman filtering, the internal system state is the unknown quantity to be estimated. To evaluate the quality of state estimation, we define the system distance for the linear SI channel coefficients as follows: \begin{align} \label{eq:sysDistWDef} \text{SysDist}^{w}_{\kappa} &= \frac{ \mathbb{E}\left[ \left( \mybold{w}_{\kappa}-\hat{\mybold{w}}_{\kappa|\kappa} \right)^H \left( \mybold{w}_{\kappa}-\hat{\mybold{w}}_{\kappa|\kappa} \right) \right] }{ \mathbb{E}\left[ \mybold{w}^H_{\kappa} \mybold{w}_{\kappa}\right] } , \end{align} where $\mybold{w}_{\kappa}$ is the true SI channel vector per frame, as given by~\eqref{eq:siChanVec}, and $\hat{\mybold{w}}_{\kappa|\kappa}$ is the time-domain counterpart of the estimate $\underline{\hat{\mybold{w}}}_{\kappa|\kappa}$ from~\algref{alg:linearChannel}.
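Both metrics translate directly into code; the sketch below evaluates~\eqref{eq:srinrDef} and~\eqref{eq:sysDistWDef} on synthetic placeholder signals (the signal names and noise levels are illustrative and unrelated to the simulation setup used later):

```python
import numpy as np

def srinr_db(d, e):
    # SRINR: desired-signal power over residual-interference-and-noise power, in dB
    return 10 * np.log10(np.mean(np.abs(d) ** 2) / np.mean(np.abs(e - d) ** 2))

def sys_dist_db(w_true, w_est):
    # system distance: estimation-error power over true state power, in dB
    err = w_true - w_est
    return 10 * np.log10(np.real(err.conj() @ err) / np.real(w_true.conj() @ w_true))

rng = np.random.default_rng(2)
K, L = 10_000, 16
d = rng.standard_normal(K) + 1j * rng.standard_normal(K)          # desired signal
resid = 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
e = d + resid                                                      # error signal with small residual SI
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)           # true SI channel
w_hat = w + 0.01 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

assert abs(srinr_db(d, e) - 20) < 1    # residual is 20 dB below the desired signal
assert sys_dist_db(w, w_hat) < -30
```

In a simulation, the expectations in both definitions are replaced by such sample averages over frames and channel realizations.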
Similarly, the system distance for the $i$th nonlinear coefficient is defined as follows: \begin{align} \label{eq:sysDistADef} \text{SysDist}^{a}_{i,\kappa} &= \frac{ \mathbb{E}\left[ \left| \underline{a}_{i,\kappa}-\underline{\hat{a}}_{i,\kappa|\kappa} \right|^2 \right] }{ \mathbb{E}\left[ \left| \underline{a}_{i,\kappa} \right|^2 \right] } , \end{align} where $\underline{a}_{i,\kappa}$ and $\underline{\hat{a}}_{i,\kappa|\kappa}$ are the nonlinear coefficient from~\eqref{eq:nlCoeffsVecEqual} and its updated estimate at frame $\kappa$, respectively. Furthermore, we need a fair comparison of the algorithms based on either the cascaded (see~\figref{fig:systemmodel}) or the parallel (see~\figref{fig:parallelmodel}) channel structure. The coefficients $\hat{\mybold{w}}_{0,\kappa|\kappa}$ of the parallel structure correspond to the linear SI channel of the cascade model. The nonlinear coefficients $\underline{\hat{a}}_{i,\kappa}$ of the parallel structure are hence obtained by the least-squares solution \begin{align} \label{eq:nlLSEst} \underline{\hat{a}}_{i,\kappa|\kappa}=\frac{\hat{\mybold{w}}^H_{0,\kappa|\kappa}\hat{\mybold{w}}_{i,\kappa|\kappa}}{\hat{\mybold{w}}^H_{0,\kappa|\kappa}\hat{\mybold{w}}_{0,\kappa|\kappa}}. \end{align} Next, we turn to a metric of digital communications. To simplify the analysis, we assume the wireless channel to be constant over time. Furthermore, since after SI cancellation the residual SI can be approximated by white noise~\cite{src:day2012fullduplex}, we assume that the residual SI process is independent and ergodic.
Thus, the information capacity with power constraint $P_d$ at the distant node is given by~\cite{src:cover2006elements} \begin{align} C &=\max_{f(d):\mathbb{E}\left[|d|^2\right]<P_d}\:I\left(d;y\right) \nonumber\\ &\hintedrel[eq:ratelower1]{\geq}\max_{f(d):\mathbb{E}\left[|d|^2\right]<P_d}\:I\left(d;y_G\right) \nonumber\\ \label{eq:ratelower} &\hintedrel[eq:ratelower2]{=} \log\left( 1+\frac{P_d\left\lVert \mybold{h} \right\rVert^2_2}{\sigma^2_{\tilde{e}}} \right) =: R \end{align} where in~$(\hintref{eq:ratelower1})$ we approximate the received signal $y$ by a circularly-symmetric complex Gaussian counterpart $y_G$, since Gaussian noise serves as the worst-case noise~\cite{src:shomorony2012Is}. We arrive at~$(\hintref{eq:ratelower2})$, since in a Gaussian point-to-point channel, a Gaussian input is known to be optimal. Here, $\sigma^2_{\tilde{e}}$ denotes the variance of the residual SI plus independent noise. \subsection{Complexity} \label{sec:complexity} The cascade model of the nonlinear SI channel (recall~\figref{fig:systemmodel}) is essentially motivated by reducing the computational complexity of the adaptive cancellation algorithm, which is primarily determined by the number of multiplications and divisions. In our analysis, we focus on the principal complexity only and do not evaluate the cost of each calculation step in detail. Thus, we identify those parts of the algorithms that exhibit the most significant impact on the complexity, depending on the (potentially large) frame shift $R$, the FIR filter length $L$ and on the order of the nonlinear expansion $N$. \tabref{tab:complexity} shows the results. In order to compare time-domain and DFT-domain SI estimation and cancellation algorithms, we state the complexity per sample. In all cases, DFT-domain approaches are normalized to the frame shift $R=M-L$. This reduces the complexity by an order of magnitude as $R$ grows while $L$ is kept fixed. Now, consider the frame shift $R$.
The exact Kalman algorithm in cascade structure, derived in Section~\ref{sec:algorithm}, has at least quadratic complexity, since it requires the inversion and multiplication of $M\times M$ matrices. On the other hand, with the approximations of Section~\ref{sec:approximations}, the DFT operation becomes dominant, and therefore the Kalman algorithm with nonlinear diagonalization has logarithmic complexity. Similar reasoning holds for the Kalman algorithm in parallel structure after submatrix or full diagonalization~\cite{MalikEnzner2012b}. The complexity of the time-domain algorithms is determined by the FIR filter length $L$. The classic NLMS algorithm in time domain exhibits linear complexity in $L$, while the parallel RLS algorithm in time domain has quadratic complexity due to its update step. However, a direct comparison of time-domain and DFT-domain complexities is more involved and depends on the priorities of Kalman filter design. Certain designs might increase $L$ for finer channel estimation (and thus keep $R$ essentially constant), while others might use larger frame shifts $R$ to diminish the impact of the overlap-save approximation. The number of nonlinear basis functions $N$ affects the computational complexity of the various algorithms in different ways. In cascade structure, the complexity generally scales linearly with $N$. On the other hand, in parallel structures, Kalman or RLS algorithms take the correlation between all sub-channels into account, and therefore require cubic and quadratic complexity in $N$, respectively. Especially the cubic term might be greater than $\log_2M$ even for small $N$ and can thus have a significant impact. The Kalman algorithm with full diagonalization~\cite{MalikEnzner2012b} and the NLMS reduce the complexity to linear scale in $N$, since the correlation between parallel channels is neglected.
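To make the orders in~\tabref{tab:complexity} concrete, the sketch below evaluates the leading terms for a sample parameter set (the values mirror the simulation setup used later; constant factors are ignored, so the numbers are indicative only, not exact operation counts):

```python
import numpy as np

R, L, N = 56, 8, 3          # frame shift, FIR filter length, expansion order (M = R + L = 64)
M = R + L

per_sample_order = {
    "kalman_cascade_exact":     N * M**3 / R,
    "kalman_cascade_approx":    N * M * np.log2(M) / R,
    "kalman_parallel_subdiag":  N**3 * M / R + N * M * np.log2(M) / R,
    "kalman_parallel_fulldiag": N * M * np.log2(M) / R,
    "nlms_time_domain":         N * L,
    "rls_time_domain":          N**2 * L**2,
}

# the nonlinear diagonalization reduces the cascade cost by orders of magnitude
assert per_sample_order["kalman_cascade_approx"] < per_sample_order["kalman_cascade_exact"] / 100
# the RLS is the most expensive time-domain scheme here
assert per_sample_order["rls_time_domain"] > per_sample_order["nlms_time_domain"]
```

Varying $R$, $L$ and $N$ in this sketch reproduces the trade-offs discussed above, e.g., the cubic $N^3$ term of the parallel submatrix-diagonalized Kalman algorithm quickly dominating for larger expansion orders.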
\begin{table} \begin{center} \begin{tabular}{ | l | l |} \hline Kalman, Cascade, Exact~\ref{sec:algorithm} & $\mathcal{O}\left(N\frac{(R+L)^3}{R}\right)$ \\ \hline Kalman, Cascade, Approx.~\ref{sec:nonlinearDiag} & $\mathcal{O}\left(N\frac{(R+L)\log_2(R+L)}{R}\right)$ \\ \hline Kalman, Parallel, Sub. diag.~\cite{MalikEnzner2012b}& $\mathcal{O}\left(N^3\frac{R+L}{R}\right) + \mathcal{O}\left(N\frac{(R+L)\log_2(R+L)}{R}\right)$ \\ \hline Kalman, Parallel, Full diag.~\cite{MalikEnzner2012b} & $\mathcal{O}\left(N\frac{(R+L)\log_2(R+L)}{R}\right)$ \\ \hline NLMS, Parallel, Time domain & $\mathcal{O}(NL)$ \\ \hline RLS, Parallel, Time domain & $\mathcal{O}(N^2L^2)$ \\ \hline \end{tabular} \end{center} \caption{Comparison of computational complexity with respect to frame shift $R$, FIR filter length $L$ and expansion order $N$ for different SI estimation and cancellation algorithms.} \label{tab:complexity} \end{table} \subsection{Time convergence behavior} \label{sec:convergence} In this section, we analyze the performance of the adaptive estimators over the course of time by means of simulations. We choose the parameters $M=64$, $R=56$ and select $N=3$ basis functions. The basis functions are chosen as follows: (i) the linear part $\phi_0(x_{k})=x_{k}$ of the SI, (ii) the widely-linear component $\phi_1(x_{k})=x^*_{k}$~\cite{src:korpi2014widely} and, finally, (iii) the $3^{\text{rd}}$ order nonlinear component $\phi_2(x_{k})=x^2_{k}x^*_{k}$ with initial nonlinear coefficients $\underline{a}_{1,0}=\underline{a}_{2,0}=-10$~dB. The additive noise level is set to $-50$~dB. For the SI signal and the SoI, we generate zero-mean, standard normal i.i.d. samples. The Kalman filter parameters are perfectly matched to the simulated time-variant SI channel conditions. The NLMS step size is $10^{-2}$, while the RLS forgetting factor is matched to the Markov model of~\eqref{eq:siChanMarkov}.
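A minimal sketch of the three basis signals, assuming circularly-symmetric complex Gaussian transmit samples; it also shows numerically that the linear branch $\phi_0$ and the $3^{\text{rd}}$-order branch $\phi_2$ are strongly correlated (their normalized correlation is $2/\sqrt{6}\approx 0.82$ for such inputs), which is why algorithms with a decorrelation feature are at an advantage here:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 100_000
# circularly-symmetric complex Gaussian transmit samples with unit power
x = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

phi0 = x                    # (i)   linear component
phi1 = np.conj(x)           # (ii)  widely-linear component
phi2 = x ** 2 * np.conj(x)  # (iii) 3rd-order nonlinear component x|x|^2

def ncorr(u, v):
    # magnitude of the normalized cross-correlation E[u v*]
    return abs(np.mean(u * np.conj(v))) / np.sqrt(np.mean(np.abs(u) ** 2) * np.mean(np.abs(v) ** 2))

# for circular Gaussian x: E[x^2] = 0, but E[x (x|x|^2)*] = E[|x|^4] != 0
assert ncorr(phi0, phi1) < 0.05   # linear vs. widely-linear: uncorrelated
assert ncorr(phi0, phi2) > 0.7    # linear vs. 3rd order: strongly correlated (~0.82)
```
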
Let the \emph{input SINR} be the ratio of the SoI power to the SI-and-noise power at the receiver before any signal processing occurs. The input SINR is fixed to $-15$~dB, thus, the SI is much stronger than the SoI. Consider~\figref{fig:convergence}, which shows the time convergence behavior of SI cancellation algorithms over the frame index $\kappa$. \begin{figure*} \ifshowTikzPictures \begin{tikzpicture} \begin{groupplot}[group style={ group name=framePlots, horizontal sep=3cm, vertical sep=2cm, group size= 2 by 2}, height=5cm, width=8cm, xmin=0, xmax=400, grid=major] \pgfmathtruncatemacro{\markRepeatFaster}{\markRepeat/2} \nextgroupplot[ylabel={SRINR [dB]}, xlabel={$\kappa$}] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=\markRepeatFaster,mark phase=\markPOne] table[]{./resSrinrVsSinr NonOrtho-Static-1.tsv};\label{plots:plot1a} \addplot [color=cyan, mark=star, dashdotted, mark repeat=\markRepeat,mark phase=\markPFour] table[]{./resSrinrVsSinr NonOrtho-Static-4.tsv};\label{plots:plot4d} \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=\markRepeat,mark phase=\markPFive] table[]{./resSrinrVsSinr NonOrtho-Static-5.tsv};\label{plots:plot5e} \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=\markRepeat,mark phase=\markPSeven] table[]{./resSrinrVsSinr NonOrtho-Static-7.tsv};\label{plots:plot7g} \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=\markRepeat,mark phase=\markPEight] table[]{./resSrinrVsSinr NonOrtho-Static-8.tsv};\label{plots:plot8h} \nextgroupplot[ylabel={System distance [dB]}, xlabel={$\kappa$}, ymin=-45, ymax=20] \addplot [color=blue, mark=asterisk, mark options={solid, blue}, mark repeat=\markRepeat,mark phase=\markPThree] table[]{./resSysDistWVsSinr NonOrtho-Static-3.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=\markRepeat,mark phase=\markPFour] table[]{./resSysDistWVsSinr NonOrtho-Static-4.tsv}; \addplot [color=magenta, mark=x, 
mark options={solid, magenta}, mark repeat=\markRepeat,mark phase=\markPFive] table[]{./resSysDistWVsSinr NonOrtho-Static-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=\markRepeat,mark phase=\markPSeven] table[]{./resSysDistWVsSinr NonOrtho-Static-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=\markRepeat,mark phase=\markPEight] table[]{./resSysDistWVsSinr NonOrtho-Static-8.tsv}; \nextgroupplot[ylabel={System distance [dB]}, xlabel={$\kappa$}, ymin=-45, ymax=20] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=\markRepeatFaster,mark phase=\markPOne] table[]{./resSysDistA1VsSinr NonOrtho-Static-1.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=\markRepeat,mark phase=\markPFour] table[]{./resSysDistA1VsSinr NonOrtho-Static-4.tsv}; \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=\markRepeat,mark phase=\markPFive] table[]{./resSysDistA1VsSinr NonOrtho-Static-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=\markRepeat,mark phase=\markPSeven] table[]{./resSysDistA1VsSinr NonOrtho-Static-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=\markRepeat,mark phase=\markPEight] table[]{./resSysDistA1VsSinr NonOrtho-Static-8.tsv}; \nextgroupplot[xlabel={Rounds}, ylabel={System distance [dB]}, xlabel={$\kappa$}, ymin=-45, ymax=20] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=\markRepeatFaster,mark phase=\markPOne] table[]{./resSysDistA2VsSinr NonOrtho-Static-1.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=\markRepeat,mark phase=\markPFour] table[]{./resSysDistA2VsSinr NonOrtho-Static-4.tsv}; \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=\markRepeat,mark phase=\markPFive] table[]{./resSysDistA2VsSinr NonOrtho-Static-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark 
repeat=\markRepeat,mark phase=\markPSeven] table[]{./resSysDistA2VsSinr NonOrtho-Static-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=\markRepeat,mark phase=\markPEight] table[]{./resSysDistA2VsSinr NonOrtho-Static-8.tsv}; \end{groupplot} \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]framePlots c1r1.south) {\subcaption{Signal-to-residual-interference-and-noise ratios~\eqref{eq:srinrDef}.\label{fig:srinrNonOrtho}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]framePlots c2r1.south) {\subcaption{System distance~\eqref{eq:sysDistWDef} to the linear SI channel $\underline{\mybold{w}}_{\kappa}$\label{fig:sysdistWNonOrtho}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]framePlots c1r2.south) {\subcaption{System distance~\eqref{eq:sysDistADef} to the nonlinear coefficient $\underline{a}_{1,\kappa}$\label{fig:sysdistA1NonOrtho}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]framePlots c2r2.south) {\subcaption{System distance~\eqref{eq:sysDistADef} to the nonlinear coefficient $\underline{a}_{2,\kappa}$\label{fig:sysdistA2NonOrtho}}}; \path (framePlots c1r1.north west|-current bounding box.north)-- coordinate(legendposFramePlots) (framePlots c2r1.north east|-current bounding box.north); \matrix[ matrix of nodes, anchor=south, draw, inner sep=0.2em, nodes={anchor=west} ]at([yshift=1ex]legendposFramePlots) { \ref*{plots:plot1a}& Kalman, Cascade, Exact~\ref{sec:algorithm} &[5pt] \ref*{plots:plot4d}& Kalman, Cascade, Approximated~\ref{sec:nonlinearDiag} & [5pt] \ref*{plots:plot5e}& Kalman, Parallel, Sub. 
diag.~\cite{MalikEnzner2012b}& \\ \ref*{plots:plot7g}& NLMS, Parallel, time domain & [5pt] \ref*{plots:plot8h}& RLS, Parallel, time domain \\ }; \end{tikzpicture} \fi \vspace{-0.25cm} \caption{Time convergence performance of SI cancellation algorithms over the frame index $\kappa$ with non-orthogonalized inputs and static SI path.} \label{fig:convergence} \end{figure*} The inputs are not orthogonalized. Both the linear SI channel and the nonlinear coefficients remain constant over time. The SRINR results of~\figref{fig:srinrNonOrtho} demonstrate that without prior orthogonalization, the algorithms with a strong inherent decorrelation feature, such as the Kalman in parallel structure with submatrix diagonalization or the RLS, are advantageous. Conversely, the NLMS converges much more slowly. The Kalman approaches in cascade structure are bounded from above by the RLS and from below by the NLMS. Interestingly, the performance difference between the exact Kalman approach in cascade structure and its approximated counterpart is almost negligible. Sometimes the exact calculation even performs slightly worse. This has been identified as a particularity of the iterative design of~\algref{alg:linearChannel} and~\algref{alg:nonlinearCoeffs}, which does not jointly estimate the linear SI channel and the nonlinear coefficients. The results of the system distance~(\Cref{fig:sysdistWNonOrtho,fig:sysdistA1NonOrtho,fig:sysdistA2NonOrtho}) essentially support the aforementioned findings. It can be observed that the system distances of the linear SI channel coefficients $\underline{\mybold{w}}_{\kappa}$ in~\figref{fig:sysdistWNonOrtho} and of the first nonlinear coefficient $\underline{a}_{1,\kappa}$ in~\figref{fig:sysdistA1NonOrtho} are ranked consistently. In contrast, the distance of the second nonlinear coefficient $\underline{a}_{2,\kappa}$ in~\figref{fig:sysdistA2NonOrtho} is much larger, especially for the NLMS.
This is due to the fact that the input signal and its $3$rd order component are correlated, and thus the performance deteriorates if the algorithms lack a decorrelation feature. The system distances in~\Cref{fig:sysdistWNonOrtho,fig:sysdistA1NonOrtho,fig:sysdistA2NonOrtho} of the Kalman approach in parallel structure with submatrix diagonalization and of the RLS are almost identical. The SRINR result (\figref{fig:srinrNonOrtho}), however, indicates that the Kalman algorithm does not quite reach the level of the RLS. This observation can be explained as follows. Both algorithms are devised in parallel structure; thus, the nonlinear coefficients are computed by~\eqref{eq:nlLSEst}, which essentially averages over the channel coefficients and thus removes potential estimation errors. The parallel Kalman algorithm is slightly more susceptible to errors than the RLS, since it is formulated in the DFT domain with an overlap-save approximation similar to the method described in Section~\ref{sec:overlapSaveDiag}. These error phenomena are compensated by reducing the number of estimated variables, e.g., by using~\eqref{eq:nlLSEst}. \subsection{Global performance} \label{sec:global} For the global performance, we evaluate the SRINR (after cancellation) and the system distances as functions of the input SINR (before cancellation), as illustrated in~\figref{fig:global}.
\begin{figure*} \ifshowTikzPictures \begin{tikzpicture} \begin{groupplot}[group style={ group name=globalPlots, horizontal sep=3cm, vertical sep=2.5cm, group size= 2 by 1}, height=5cm, width=8cm, xmin=-20, xmax=20, grid=major] \nextgroupplot[ylabel={SRINR [dB]}, xlabel={Input SINR [dB]}, ymin=-5, ymax=25] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=2,mark phase=1] table[]{./resSrinrConvVsSinr NonOrtho-Static-1.tsv};\label{plots:global1} \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=2] table[]{./resSrinrConvVsSinr NonOrtho-Static-4.tsv};\label{plots:global4} \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=2,mark phase=1] table[]{./resSrinrConvVsSinr NonOrtho-Static-5.tsv};\label{plots:global5} \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=2,mark phase=1] table[]{./resSrinrConvVsSinr NonOrtho-Static-7.tsv};\label{plots:global7} \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=2] table[]{./resSrinrConvVsSinr NonOrtho-Static-8.tsv};\label{plots:global8} \addplot [color=black, mark options={none}, thick] coordinates {(0,0) (20,20)}; \node[text width=2cm,align=center] at ($(axis cs:14,2.5)+(0,0)$) {Treating-interference-as-noise threshold}; \addplot [color=black, mark options={none}, thick] coordinates {(-20,20) (20,20)}; \node[text width=2cm,align=center] at ($(axis cs:-10,23)+(0,0)$) {SNR}; \addplot [color=black, mark options={none}, thick] coordinates {(-20,0) (0,0)}; \node[text width=2cm,align=center] at ($(axis cs:-10,3)+(0,0)$) {min. 
SRINR}; \nextgroupplot[ylabel={System distance [dB]}, xlabel={Input SINR [dB]}, ymin=-50,ymax=20] \addplot [color=red, mark=+, mark options={solid, red},mark repeat=2, mark phase=1] table[]{./resSysDistWConvVsSinr NonOrtho-Static-1.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=2, mark phase=2] table[]{./resSysDistWConvVsSinr NonOrtho-Static-4.tsv}; \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=2,mark phase=1] table[]{./resSysDistWConvVsSinr NonOrtho-Static-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=2,mark phase=1] table[]{./resSysDistWConvVsSinr NonOrtho-Static-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=2] table[]{./resSysDistWConvVsSinr NonOrtho-Static-8.tsv}; \end{groupplot} \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]globalPlots c1r1.south) {\subcaption{Signal-to-residual-interference-and-noise ratios~\eqref{eq:srinrDef}\label{fig:srinrConvNonOrtho}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]globalPlots c2r1.south) {\subcaption{System distance~\eqref{eq:sysDistWDef} to the linear SI channel $\underline{\mybold{w}}_{\kappa}$\label{fig:sysdistWConvNonOrtho}}}; \path (globalPlots c1r1.north west|-current bounding box.north)-- coordinate(legendposglobalPlots) (globalPlots c2r1.north east|-current bounding box.north); \matrix[ matrix of nodes, anchor=south, draw, inner sep=0.2em, nodes={anchor=west} ]at([yshift=1ex]legendposglobalPlots) { \ref*{plots:global1}& Kalman, Cascade, Exact~\ref{sec:algorithm} &[5pt] \ref*{plots:global4}& Kalman, Cascade, Approximated~\ref{sec:nonlinearDiag} & [5pt] \ref*{plots:global5}& Kalman, Parallel, Sub.
diag.~\cite{MalikEnzner2012b}&\\ \ref*{plots:global7}& NLMS, Parallel, time domain & \ref*{plots:global8}& RLS, Parallel, time domain \\ }; \end{tikzpicture} \fi \vspace{-0.25cm} \caption{Global performance of SI cancellation algorithms over input SINR with non-orthogonalized inputs and static SI path.} \label{fig:global} \end{figure*} Again, the input signals are not orthogonalized and both the linear SI channel and the nonlinear coefficients remain static. Consider~\figref{fig:srinrConvNonOrtho} first, which depicts the SRINR. The solid black line labeled ``SNR'' designates the maximum possible SRINR, since the noise floor is fixed at $-20$~dB with respect to the SoI. If the SRINR vs. input SINR curve crosses the ``treating-interference-as-noise threshold'', it is no longer reasonable to use the algorithm; it is better to simply accept the SI as noise. The ``min. SRINR'' line is attained when the algorithm selects $\underline{\hat{\mybold{x}}}_{\text{si},\kappa}=\underline{\mybold{y}}_{\kappa}$ as output, such that the received signal is completely suppressed. The results show that all algorithms can be used over the whole range of SINR scenarios. However, as previously indicated, algorithms with an inherent decorrelation property (the Kalman filter in parallel structure with submatrix diagonalization, or the RLS) have an advantage. This is especially prominent in the low SINR regime, where a large amount of cancellation is required. In the high SINR regime, on the other hand, the performance differences between the algorithms diminish significantly. The system distance results for the linear SI channel coefficients $\underline{\mybold{w}}_{\kappa}$ are shown in~\figref{fig:sysdistWConvNonOrtho}. Comparing the various algorithms, the graphs essentially mirror the SRINR figure ``upside down'', as expected, since a more negative system distance naturally pays off in terms of SRINR.
Here, however, the overall system distance is much better in the low SINR regime. This is intuitive: at low SINR, the SI power is much larger than the SoI power, and thus the SI system components can be identified more accurately. Conversely, in the high SINR regime, the SI is buried in the SoI and the noise floor, and therefore the coefficient estimates are less reliable. While detailed system identification is thus more difficult in the high SINR regime, the SRINR performance (which reflects the overall SI estimate) is not degraded. \subsection{Use cases} \label{sec:usecases} In this section, we investigate the impact of time-variant channels on the performance of the SI estimation and cancellation algorithms. Static and time-variant channels represent fundamentally different use cases that significantly influence the proper choice of algorithm. As a performance metric, we apply the communication rate~\eqref{eq:ratelower}. The time-variant case is characterized by the (normalized) coherence time $\kappa^{w}_{\text{coh}}=10^3$ for the linear SI channel and $\kappa^{a}_{\text{coh}}=10^4$ for the nonlinear coefficients. We assume that the algorithm parameters are perfectly matched to the simulated channel conditions. Now consider~\figref{fig:rates}, where we illustrate the rate over the input SINR for different cases of temporal variations, with or without orthogonalization.
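As a rough numerical illustration of how such a rate metric behaves, consider the following sketch. The exact expression is given in~\eqref{eq:ratelower}; the Gaussian-signaling form $\log_2(1+\mathrm{SRINR})$ used below is our own simplifying assumption, not the paper's definition:

```python
import math

def rate_lower_bound(srinr_db):
    """Hypothetical rate lower bound in bits/sample, driven by the
    SRINR achieved after SI cancellation (Gaussian-signaling form)."""
    srinr_lin = 10 ** (srinr_db / 10)
    return math.log2(1 + srinr_lin)

# With the simulated noise floor at -20 dB relative to the SoI, the SRINR
# (and hence the achievable rate) is capped by the 20 dB SNR:
print(rate_lower_bound(20.0))   # about 6.66 bits/sample
```

This makes the shape of the rate figures plausible: every additional dB of residual-interference suppression translates into roughly a third of a bit per sample near the cap.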
\begin{figure*} \ifshowTikzPictures \begin{tikzpicture}[spy using outlines= {rectangle, magnification=4, connect spies} ] \begin{groupplot}[group style={ group name=ratePlots, horizontal sep=3cm, vertical sep=2cm, group size= 2 by 2}, height=5cm, width=8cm, xmin=-20, xmax=20, grid=major ] \nextgroupplot[ylabel={Rate [bits/sample]}, xlabel={Input SINR [dB]}, ymin=0] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Static-1.tsv};\label{plots:rate1} \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=2] table[]{./resRateVsSinr NonOrtho-Static-4.tsv};\label{plots:rate4} \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Static-5.tsv};\label{plots:rate5} \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Static-7.tsv};\label{plots:rate7} \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr NonOrtho-Static-8.tsv};\label{plots:rate8} \addplot [color=blue, mark=none, thick, mark options={solid, blue}] table[]{./resRateVsSinr NonOrtho-Static-9.tsv};\label{plots:rate9} \nextgroupplot[ylabel={Rate [bits/sample]}, xlabel={Input SINR [dB]}, ymin=0] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr Orthogonal-Static-1.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=2] table[]{./resRateVsSinr Orthogonal-Static-4.tsv}; \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr Orthogonal-Static-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr Orthogonal-Static-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=2] 
table[]{./resRateVsSinr Orthogonal-Static-8.tsv}; \addplot [color=blue, mark=none, thick, mark options={solid, blue}] table[]{./resRateVsSinr Orthogonal-Static-9.tsv}; \nextgroupplot[ylabel={Rate [bits/sample]}, xlabel={Input SINR [dB]}, ymin=0] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Varying-1.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=2] table[]{./resRateVsSinr NonOrtho-Varying-4.tsv}; \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Varying-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Varying-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr NonOrtho-Varying-8.tsv}; \addplot [color=blue, mark=none, thick, mark options={solid, blue}] table[]{./resRateVsSinr NonOrtho-Varying-9.tsv}; \coordinate (spypoint1) at (axis cs:-15,2); \coordinate (magnifyglass1) at (axis cs:12,2.25); \nextgroupplot[ylabel={Rate [bits/sample]}, xlabel={Input SINR [dB]}, ymin=0] \addplot [color=red, mark=+, mark options={solid, red}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr Orthogonal-Varying-1.tsv}; \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=2] table[]{./resRateVsSinr Orthogonal-Varying-4.tsv}; \addplot [color=magenta, mark=x, mark options={solid, magenta}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr Orthogonal-Varying-5.tsv}; \addplot [color=black, mark=diamond, mark options={solid, black}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr Orthogonal-Varying-7.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr Orthogonal-Varying-8.tsv}; \addplot [color=blue, mark=none, thick, mark options={solid, blue}] table[]{./resRateVsSinr 
Orthogonal-Varying-9.tsv}; \coordinate (spypoint2) at (axis cs:-15,2); \coordinate (magnifyglass2) at (axis cs:12,2.25); \end{groupplot} \spy [black, width=2.25cm, height=2.25cm] on (spypoint2) in node at (magnifyglass2); \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]ratePlots c1r1.south) {\subcaption{Non-orthogonalized inputs, static SI path.\label{fig:rateNonOrthoStatic}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]ratePlots c2r1.south) {\subcaption{Orthogonalized inputs, static SI path.\label{fig:rateOrthoStatic}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]ratePlots c1r2.south) {\subcaption{Non-orthogonalized inputs, time-variant SI path.\label{fig:rateNonOrthoVarying}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]ratePlots c2r2.south) {\subcaption{Orthogonalized inputs, time-variant SI path.\label{fig:rateOrthoVarying}}}; \path (ratePlots c1r1.north west|-current bounding box.north)-- coordinate(legendposratePlots) (ratePlots c2r1.north east|-current bounding box.north); \matrix[ matrix of nodes, anchor=south, draw, inner sep=0.2em, nodes={anchor=west} ]at([yshift=1ex]legendposratePlots) { \ref*{plots:rate1}& Kalman, Cascade, Exact~\ref{sec:algorithm} &[5pt] \ref*{plots:rate4}& Kalman, Cascade, Approximated~\ref{sec:nonlinearDiag} & [5pt] \ref*{plots:rate5}& Kalman, Parallel, Sub. diag.~\cite{MalikEnzner2012b}& \\ \ref*{plots:rate7}& NLMS, Parallel, time domain & [5pt] \ref*{plots:rate8}& RLS, Parallel, time domain & [5pt] \ref*{plots:rate9}& Capacity &[5pt] \\ }; \end{tikzpicture} \fi \vspace{-0.25cm} \caption{Communication rate~\eqref{eq:ratelower} over the input SINR for different cases of input processing and temporal variations.} \label{fig:rates} \end{figure*} In each figure, we have provided the capacity as the upper bound for all rates. 
In~\figref{fig:rateNonOrthoStatic}, we depict the rate for static conditions of both the linear SI channel and the nonlinear coefficients, with non-orthogonalized input signals. This is essentially the same situation as in~\figref{fig:srinrConvNonOrtho}, but now the performance gaps in the low SINR regime are even more pronounced. Algorithms with inherent decorrelation, such as the Kalman filter in parallel structure with submatrix diagonalization, perform best regardless of the input SINR, while the RLS comes close to achieving the capacity. The cascaded and computationally efficient Kalman algorithms proposed in this work occupy a middle position, while the simple NLMS exhibits the worst performance. However, if orthogonalization of the input signal is applied, the situation changes fundamentally, as shown in~\figref{fig:rateOrthoStatic}. Now the performance of all algorithms is completely independent of the input SINR. Furthermore, the performance gaps become almost insignificant, so a system designer can base the choice of algorithm on a different metric, such as computational complexity. The same holds for the time-convergence behavior, where we observe the same evolution for all algorithms except the NLMS; we omit the figure due to space constraints. In addition, the system distance results (not shown here) are very similar for all studied algorithms. \figref{fig:rateNonOrthoVarying} illustrates the communication rate in the time-variant scenario, without input orthogonalization. Compared to the result of~\figref{fig:srinrConvNonOrtho}, it can clearly be seen that the input SINR now has a significant impact on the performance of all algorithms. \figref{fig:rateOrthoVarying} shows the results under time-variant conditions for orthogonalized input signals. In this case, the orthogonalization does not help either, since the temporal variations represent the overall performance limitation.
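For reference, the time-domain NLMS baseline that appears in the figure legends can be sketched in a few lines. This is a minimal illustration under our own simplifying assumptions (a purely linear SI channel, filter length $L=8$, step size $\mu=0.5$, and no SoI or noise), not the authors' implementation:

```python
import numpy as np

def nlms_canceller(x, y, L=8, mu=0.5, eps=1e-8):
    """Time-domain NLMS sketch: adapt w so that the filtered transmit
    signal x tracks the SI in y; return the residual e and the weights."""
    w = np.zeros(L)
    e = np.zeros(len(y))
    for n in range(L - 1, len(y)):
        u = x[n - L + 1:n + 1][::-1]          # regressor: x[n], ..., x[n-L+1]
        e[n] = y[n] - w @ u                   # residual after cancellation
        w += mu * e[n] * u / (u @ u + eps)    # normalized LMS update
    return e, w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                 # known transmit signal
h = np.array([0.5, -0.3, 0.2, 0.1])           # toy linear SI channel
y = np.convolve(x, h)[:len(x)]                # received SI only
e, w = nlms_canceller(x, y)
```

After convergence the residual power lies orders of magnitude below the SI power, mirroring the large cancellation demanded in the low SINR regime.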
\subsection{Decoding} \label{sec:decodingRes} The decoding step was introduced in Section~\ref{sec:decoding}. It takes place after the Kalman prediction, right before the update step is performed. We now evaluate the benefits of information decoding for SI estimation and cancellation. We consider two relevant cases. First, the algorithm does not decode the desired information at all. As a consequence, the SoI appears as unknown noise to the adaptive algorithm, and the estimation performance equals the results of the previous sections. Second, in the case of perfect decoding, the desired information is obtained without any errors, so the remaining residual error signal contains only the residual SI and some independent noise with $\text{SNR}=20$~dB. Consider~\figref{fig:DecRateNonOrthoStatic}. \begin{figure*} \ifshowTikzPictures \begin{tikzpicture} \begin{groupplot}[group style={ group name=decPlots, horizontal sep=3cm, vertical sep=2.5cm, group size= 2 by 1}, height=5cm, width=8cm, xmin=-20, xmax=20, grid=major] \nextgroupplot[ylabel={Rate [bits/sample]}, xlabel={Input SINR [dB]}, ymin=0] \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Static-4.tsv};\label{plots:dec4} \addplot [color=orange, mark=square, mark options={solid, orange}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr PerfectDecNonOrtho-Static-4.tsv};\label{plots:dec6} \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr PerfectDecNonOrtho-Static-8.tsv};\label{plots:dec8} \addplot [color=green, mark=diamond, mark options={solid, green}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr PerfectDecNonOrtho-Static-10.tsv};\label{plots:dec10} \addplot [color=blue, mark=none, thick, mark options={solid, blue}] table[]{./resRateVsSinr Orthogonal-Static-11.tsv};\label{plots:dec11} \nextgroupplot[ylabel={Rate
[bits/sample]}, xlabel={Input SINR [dB]}, ymin=0] \addplot [color=cyan, mark=star, dashdotted, mark repeat=2,mark phase=1] table[]{./resRateVsSinr NonOrtho-Varying-4.tsv}; \addplot [color=orange, mark=square, mark options={solid, orange}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr PerfectDecNonOrtho-Varying-4.tsv}; \addplot [color=blue, mark=triangle, mark options={solid, blue}, mark repeat=2,mark phase=1] table[]{./resRateVsSinr PerfectDecNonOrtho-Varying-8.tsv}; \addplot [color=green, mark=diamond, mark options={solid, green}, mark repeat=2,mark phase=2] table[]{./resRateVsSinr PerfectDecNonOrtho-Varying-10.tsv}; \addplot [color=blue, mark=none, thick, mark options={solid, blue}] table[]{./resRateVsSinr Orthogonal-Varying-11.tsv}; \end{groupplot} \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]decPlots c1r1.south) {\subcaption{Communication rate~\eqref{eq:ratelower} for static SI path.\label{fig:DecRateNonOrthoStatic}}}; \node[text width=8cm,align=center,anchor=north] at ([yshift=-0.75 cm]decPlots c2r1.south) {\subcaption{Communication rate~\eqref{eq:ratelower} for time-variant SI path.\label{fig:DecRateNonOrthoVarying}}}; \path (decPlots c1r1.north west|-current bounding box.north)-- coordinate(legendposdecPlots) (decPlots c2r1.north east|-current bounding box.north); \matrix[ matrix of nodes, anchor=south, draw, inner sep=0.2em, nodes={anchor=west} ]at([yshift=1ex]legendposdecPlots) { \ref*{plots:dec4}& Kalman, Cascade, Approximated~\ref{sec:nonlinearDiag}, No decoding & \ref*{plots:dec6}& Kalman, Cascade, Approximated~\ref{sec:nonlinearDiag}, Perfect decoding & \\ \ref*{plots:dec8}& RLS, Parallel, time domain, No decoding & [5pt] \ref*{plots:dec10}& RLS, Parallel, time domain, Perfect decoding & [5pt] \\ \ref*{plots:dec11}& Capacity &[5pt] \\ }; \end{tikzpicture} \fi \vspace{-0.25cm} \caption{Impact of decoding strategies on the SI estimation and cancellation performance.} \label{fig:decoding} \end{figure*} It shows the 
communication rate~\eqref{eq:ratelower} over the input SINR in static~(\figref{fig:DecRateNonOrthoStatic}) and time-variant~(\figref{fig:DecRateNonOrthoVarying}) environments without input orthogonalization. We focus on the Kalman algorithms in cascade structure with approximations~(Section~\ref{sec:nonlinearDiag}) and the time-domain RLS in parallel configuration. At first glance, one might presume that with perfect decoding the performance no longer depends on the input SINR. However, this is only the case for the RLS algorithm in static environments. In the low SINR regime, the SI power is strong, and the approximated algorithm is limited by estimation errors, or favors certain levels of SI power. Moreover, the probability of decoding errors is much higher in that regime, so the potential rate gain is in fact even lower. This leads to the conclusion that in the low SINR regime of static environments, the decoding step is of little relevance. The picture is different, however, if the environment is time-variant, as illustrated in~\figref{fig:DecRateNonOrthoVarying}. Here the primary limiting factor is the temporal variation. The Kalman algorithm benefits twofold from perfect decoding: first, the SoI is removed from the received signal before the SI channel update, and second, the signal covariance is adjusted to the noise floor. This improves the adaptation to the time-variant context and makes the Kalman algorithm uniformly superior to the RLS in both the low and high input SINR regimes. The actual rate performance of decoding with partial errors is expected to lie between the two extreme cases of perfect decoding and no decoding at all. Thus,~\figref{fig:DecRateNonOrthoVarying} provides insight into the lower and upper performance bounds. Decoding increases the rate performance especially in the moderate-to-high input SINR regimes.
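The first of these two benefits, removing the decoded SoI before the coefficient update, can be illustrated with a toy experiment. This is a hedged sketch: we use an NLMS-style update instead of the paper's Kalman filter, and ``perfect decoding'' simply means subtracting the known SoI from the residual before adapting:

```python
import numpy as np

def adapt(x, y, soi=None, L=8, mu=0.5, eps=1e-8):
    """Adapt SI coefficients w from transmit signal x and received y.
    If the SoI is perfectly decoded (soi given), it is removed from the
    residual before the coefficient update, as in the decoding step."""
    w = np.zeros(L)
    for n in range(L - 1, len(y)):
        u = x[n - L + 1:n + 1][::-1]
        err = y[n] - w @ u
        if soi is not None:
            err -= soi[n]          # perfect decoding: SoI no longer acts as noise
        w += mu * err * u / (u @ u + eps)
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
h = np.array([0.5, -0.3, 0.2, 0.1])        # toy linear SI channel
soi = rng.standard_normal(4000)            # strong SoI (low input SINR)
y = np.convolve(x, h)[:len(x)] + soi
w_no_dec = adapt(x, y)
w_dec = adapt(x, y, soi)
```

With the SoI removed, the adaptive update sees a far cleaner error signal and identifies the SI channel much more accurately, which is the mechanism behind the rate gains in the time-variant case.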
\section{Conclusion} \label{sec:conclusion} In this work, we have proposed an adaptive SI cancellation algorithm for full-duplex communication in the digital domain. It is based on a state-space model of the nonlinear SI channel in cascade structure, with the signal model derived in the DFT domain. The linear and nonlinear components, although statistically coupled by the system model, can be linearized and then estimated separately in a two-step iterative approach, each by a Kalman filter. The algorithm comprises input orthogonalization, prediction, decoding of the desired signal, and updating of the estimates. The state-space representation introduces a more fine-grained handling of the time-variant nature of the SI channel. It underlines the significance of a priori knowledge of the scale of the temporal variations, which is represented by the state-space system model. Our simulation results indicate the following conclusions. While the choice of the underlying algorithmic principle (NLMS, RLS or Kalman) does influence the performance of SI cancellation, the structural elements, such as orthogonalization, the parallel or cascade model, and the decoding, have a much more significant impact. In particular, orthogonalization equalizes the performance across all algorithms. This work also demonstrates that temporal variations determine the fundamental performance limit.
\def\section{\@startsection{section}{1}%
\z@{.7\linespacing\@plus\linespacing}{.5\linespacing}%
{\normalfont\bfseries\scshape\centering}}
\def\subsection{\@startsection{subsection}{2}%
\z@{.5\linespacing\@plus\linespacing}{.5\linespacing}%
{\normalfont\bfseries\scshape}}
\def\subsubsection{\@startsection{subsubsection}{3}%
\z@{.5\linespacing\@plus\linespacing}{-.5em}%
{\normalfont\bfseries}}
\catcode`\@=12
\addtolength{\textheight}{-1mm} \topmargin5mm \addtolength{\textwidth}{20mm} \hoffset -6mm
\newcommand\mps[1]{\marginpar{\small\sf#1}}
\newtheorem{Theorem}{Theorem}
\newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Property}[Theorem]{Property} \newtheorem{Notation}[Theorem]{Notation} \newtheorem{Conjecture}[Theorem]{Conjecture} \newtheorem{Algorithme}{Algorithme}
\newcommand{{\rm sym} }{{\rm sym} }
\newtheorem{Question}{Question}
\def$\hfill{\vrule height 3pt width 5pt depth 2pt}${$\hfill{\vrule height 3pt width 5pt depth 2pt}$}
\def$\hfill{\Box}${$\hfill{\Box}$}
\newcommand\marginal[1]{\marginpar{\raggedright\parindent=0pt\tiny #1}}
\newfont{\bbold}{msbm10 scaled \magstep1}
\newfont{\bbolds}{msbm7 scaled \magstep1}
\newcommand{\mathbb{N}}%{\mbox{\bbold N}}{\mathbb{N}
\newcommand{\mbox{\bbolds N}}{\mbox{\bbolds N}}
\newcommand{\mathbb{Z}}%{\mbox{\bbold Z}}{\mathbb{Z}
\newcommand{\mbox{\bbolds Z}}{\mbox{\bbolds Z}}
\newcommand{\mathbb{Q}}%{\mbox{\bbold Q}}{\mathbb{Q}
\newcommand{\mbox{\bbolds Q}}{\mbox{\bbolds Q}}
\newcommand{\mathbb{R}}%{\mbox{\bbold R}}{\mathbb{R}
\newcommand{\mathbb{C}}%{\mbox{\bbold C}}{\mathbb{C}
\newcommand{formal power series}{formal power series}
\newcommand{\bm}[1]{\mbox{\boldmath \ensuremath{#1}}}
\newcommand{\bs}[1]{\mbox{\boldmath \ensuremath{\scriptstyle #1}}}
\newcommand{\bar x_1}%{\overline {x}_1}{\bar x_1
\newcommand{\bar x_2}%{\overline {x}_2}{\bar x_2
\newcommand{\bar x}%{\overline x}{\bar x
\newcommand{\bar u}{\bar u} \newcommand{\bar v}{\bar v} \newcommand{\bar X}{\bar X} \newcommand{\bar U}{\bar U} \newcommand{\bar y}%{\overline y}{\bar y \newcommand{\bar z}{\bar z} \newcommand{\Bx}{\mathbf{x}} \newcommand{\Bi}{\mathbf{i}} \newcommand{\Bu}{\mathbf{u}} \newcommand{\Bv}{\mathbf{v}} \newcommand{\Bw}{\mathbf{w}} \newcommand{\mathbf{s}}{\mathbf{s}} \newcommand{\mathbf{a}}{\mathbf{a}} \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\overline m}{\overline m} \newcommand{\omega}{\omega} \newcommand{\mathbb{L}}%\mbox{\rm I$\!$L}}{\mathbb{L} \newcommand{\mathbb{M}}{\mathbb{M}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\overline\GK}{\overline\mathbb{K}} \newcommand{\widehat\GK}{\widehat\mathbb{K}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{D}}{\mathbb{D}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{A}}{\mathbb{A}} \newcommand{\mathbb{S}}{\mathbb{S}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{B}}{\mathbb{B}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{U}}{\mathbb{U}} \newcommand{\mathbb{W}}{\mathbb{W}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{V}}{\mathbb{V}} \newcommand{\mathcal O}{\mathcal O} \newcommand{\mathcal D}{\mathcal D} \newcommand{\mathcal I}{\mathcal I} \newcommand{\mathcal L}{\mathcal L} \newcommand{\mathcal V}{\mathcal V} \newcommand{\mathcal W}{\mathcal W} \newcommand{\mathcal M}{\mathcal M} \newcommand{\mathcal G}{\mathcal G} \newcommand{\mathcal K}{\mathcal K} \newcommand{\mathcal N}{\mathcal N} \newcommand{\mathcal S}{\mathcal S} \newcommand{\overline{\mathcal S}}{\overline{\mathcal S}} \newcommand{\mathcal W}{\mathcal W} \newcommand{\mathcal P}{\mathcal P} \newcommand{\mathcal X}{\mathcal X} \newcommand{\mathcal Q}{\mathcal Q} \newcommand{\mathcal U}{\mathcal U} \newcommand{\mathcal V}{\mathcal V} \newcommand{\mathcal T}{\mathcal T} \newcommand{\tilde U}{\tilde U} 
\newcommand{\tilde D}{\tilde D} \newcommand{\mathcal C}{\mathcal C} \newcommand{\mathcal Z}{\mathcal Z} \newcommand{\mathfrak S}{\mathfrak S} \newcommand\atopfix[2]{\genfrac{}{}{0pt}{}{#1}{#2}} \DeclareMathOperator{\id}{id} \newcommand{permutations}{permutations} \newcommand{permutation}{permutation} \newcommand{generating function}{generating function} \newcommand{generating functions}{generating functions} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\cdot}{\cdot} \newcommand{x_1}{x_1} \newcommand{x_2}{x_2} \newcommand{\bxun}{\bar x_1}%{\overline {x}_1} \newcommand{\bxde}{\bar x_2}%{\overline {x}_2} \newcommand{\bone} {\bar 1} \newcommand{\bar 2}{\bar 2} \def\emm#1,{{\em #1}} \newcommand{\sigma}{\sigma} \newcommand{\lambda}{\lambda} \newcommand{\varepsilon}{\varepsilon} \def\par\nopagebreak\rightline{\vrule height 3pt width 5pt depth 2pt{\par\nopagebreak\rightline{\vrule height 3pt width 5pt depth 2pt} \medbreak} \def\twoFone#1#2#3#4{{_2F_1}\biggl(\begin{matrix} {#1}\kern.707em {#2}\\{#3} \end{matrix}\,\bigg|\,#4\biggr)} \def\clap#1{\hbox to0pt{\hss#1\hss}} \begin{document} \title[Counting walks with large steps in an orthant]{Counting walks with large steps in an orthant} \author[A. Bostan]{Alin Bostan} \author[M. Bousquet-M\'elou]{Mireille Bousquet-M\'elou} \author[S. Melczer]{Stephen Melczer} \thanks{S.M. was supported by the University of Waterloo, an Eiffel Fellowship, an NSERC Graduate Scholarship and Postdoctoral Fellowship, and NSF grant DMS-1612674.} \address{AB: INRIA Saclay, 1 rue Honor{\'e} d'Estienne d'Orves, F-91120 Palaiseau, France} \email{Alin.Bostan@inria.fr} \address{MBM: CNRS, LaBRI, Universit\'e de Bordeaux, 351 cours de la Lib\'eration, F-33405 Talence Cedex, France} \email{bousquet@labri.fr} \address{SM: Department of Mathematics, University of Pennsylvania, 209 S. 
33rd Street, Philadelphia, PA 19104, USA.} \email{smelczer@sas.upenn.edu} \keywords{Enumerative combinatorics; Lattice paths; Discrete partial differential equations; D-finite generating functions} \subjclass[2010]{Primary 05A15, 05A10, 05A16; Secondary 33C05, 33F10} \begin{abstract} In the past fifteen years, the enumeration of lattice walks with steps taken in a prescribed set $\mathcal S$ and confined to a given cone, especially the first quadrant of the plane, has been intensely studied. As a result, the generating functions of quadrant walks are now well-understood, provided the allowed steps are \emph{small}, that is $\mathcal S \subset \{-1, 0,1\}^2$. In particular, having small steps is crucial for the definition of a certain group of bi-rational transformations of the plane. It has been proved that this group is finite if and only if the corresponding generating function is D-finite (that is, it satisfies a linear differential equation with polynomial coefficients). This group is also the key to the uniform solution of 19 of the 23 small step models possessing a finite group. In contrast, almost nothing is known for walks with arbitrary steps. In this paper, we extend the definition of the group, or rather of the associated orbit, to this general case, and generalize the above uniform solution of small step models. When this approach works, it invariably yields a D-finite generating function. We apply it to many quadrant problems, including some infinite families. After developing the general theory, we consider the $13\ 110$ two-dimensional models with steps in $\{-2,-1,0,1\}^2$ having at least one $-2$ coordinate. We prove that only 240 of them have a finite orbit, and solve 231 of them with our method. The 9 remaining models are the counterparts of the 4 models of the small step case that resist the uniform solution method (and which are known to have an algebraic generating function).
We conjecture D-finiteness for their generating functions, but only two of them are likely to be algebraic. We also prove non-D-finiteness for the $12\ 870$ models with an infinite orbit, except for 16 of them. \end{abstract} \date{\today} \maketitle \section{Introduction} The enumeration of planar lattice walks confined to the quadrant has received a lot of attention over the past fifteen years. The basic question reads as follows: given a finite step set $\mathcal S\subset \mathbb{Z}^2$ and a starting point $P\in \mathbb{N}^2$, what is the number $q_n$ of $n$-step walks, starting from $P$ and taking their steps in $\mathcal S$, that remain in the non-negative quadrant $\mathbb{N}^2$? This is a versatile question, since such walks encode in a natural fashion many discrete objects (systems of queues, Young tableaux and their generalizations, among others). More generally, the study of these walks fits in the larger framework of walks confined to cones. These walks are also much studied in probability theory, both in a discrete~\cite{duraj,FaIaMa99} and in a continuous~\cite{deblassie,garbit-raschel} setting. From a technical point of view, counting walks in the quadrant is part of a general program aiming at solving functional equations {that involve} \emm divided differences with respect to several variables, (or \emm discrete, partial differential equations): see {Equation~\eqref{dd} below} for a typical example, and~\cite[Sec.~2]{mbm-gessel} for a general discussion on these equations. On the combinatorics side, much attention has focused on the \emm nature, of the associated generating function $Q(t)=\sum_n q_n t^n$. Is it rational in $t$, as for unconstrained walks? Is it algebraic over $\mathbb{Q}(t)$, as for walks confined to a (rational) half-space?
More generally, is $Q(t)$ the solution of a linear differential equation with polynomial coefficients in $\mathbb{Q}[t]$? (in short: is it \emph{D-finite}?) The answer depends on the step set and, to a lesser extent, on the starting point. A systematic study was initiated in~\cite{mishna-jcta,BoMi10} for walks starting at the origin $(0,0)$ and taking only \emm small, steps (that is, $\mathcal S\subset \{-1, 0, 1\}^2$). For these walks, a complete classification is now available (Figure~\ref{fig:class-2D}). In particular, the {trivariate} generating function $Q(x,y;t)$ that also records the coordinates of the endpoint of the walk is D-finite if and only if a certain group $G$ of bi-rational transformations is finite. The proof involves an attractive variety of tools, ranging from {elementary} power series algebra~\cite{Bous05,mishna-jcta,BoMi10,mbm-gessel} to complex analysis~\cite{KuRa12,raschel-unified}, computer algebra~\cite{BoKa10,KaKoZe08}, probability theory~\cite{denisov-wachtel,duraj} and number theory~\cite{BoRaSa14}. The most recent results on this topic discriminate, among non-D-finite models, those that are still \emm D-algebraic, (that is, satisfy polynomial differential equations) from those that are not~\cite{BeBMRa-FPSAC-16,BeBMRa-17,DHRS-17,DHRS-sing}. Remarkably, a new tool then comes into play: differential Galois theory. \begin{figure} \centering \begin{center}\edgeheight=5pt\nodeskip=1em\leavevmode \tree{quadrant models with small steps: 79 }{ \tree{ $|G|{<}\infty$: 23 }{ \tree{ OS${\neq}0$: {{19 }}} {\tree{\textbf{{D-finite}}}{}} \tree{ OS${=}0$: {{4 }}}{\tree{\textbf{{algebraic}}}{}} } \tree{ $|G|{=}\infty$: {56}}{\tree{\textbf{{non-D-finite}}}{}} } \end{center} \caption{Classification of quadrant walks with small steps. The group of the walk is denoted by $G$, and OS stands for the \emm orbit sum,, a rational function which vanishes precisely for algebraic models.
The 4 algebraic models are those of Figure~\ref{fig:alg}.} \label{fig:class-2D} \end{figure} \begin{figure}[htb] \centering \begin{tabular}{cccc} $\diagr{NE,S,W}$ & $ \diagr{SW,E,N}$ & $\diagr{NE,S,W,SW,E,N} $& $\diagr{E,W,NE,SW}$\\ Kreweras& Reverse Kreweras & Double Kreweras & Gessel \end{tabular} \caption{The four algebraic small step models in the quadrant, with their usual names.} \label{fig:alg} \end{figure} \medskip Contrasting with the precision of this classification is the case of quadrant walks with \emph{arbitrary steps}, for which it is fair to say that almost nothing is known. Indeed, the small step assumption is crucial in all methods used in the small step case, aside from two of them: the computer algebra approach of~\cite{BoKa10,KaKoZe08} can in principle be adapted to any steps, provided one is able to \emm guess, differential or algebraic equations for the solution; and the asymptotic estimates of~\cite{denisov-wachtel} do not require assumptions on the size of the steps. But even the definition of the group that is central in the classification requires small steps. The complex analytic approach of~\cite{KuRa12} that has proved very powerful for small steps seems difficult to extend, and the first attempts have not yet led to any explicit solution, nor indications on the nature of the generating functions~\cite{fayolle-raschel-big}. {The classical reflection principle~\cite{gessel-zeilberger} requires that no walk crosses the $x$- or $y$-axis without actually touching it, which is equivalent to a small step condition.} The study of quadrant walks with arbitrary steps is not only a natural mathematical challenge. It is also motivated by ``real life'' examples. For instance, certain orientations of planar maps were recently shown by Kenyon et al.~\cite{kenyon-bip} to be in bijection with quadrant walks taking their steps in $\{(-p,0), (-p+1,1), \ldots, (0,p), (1,-1)\}$. 
In the forthcoming paper~\cite{BoFuRa17} it is shown that the method of the current article solves all these models. Other examples can be found in queuing theory, where several clients may arrive, or be served, at the same time (think of ski-lifts in a ski resort!). Also, a problem as innocuous as counting walks on the square lattice confined to the cone bounded by the $x$-axis (for $x$ positive) and the line $y=2x$ becomes, after a linear transformation, a quadrant problem with large steps (Figure~\ref{fig:slope2}). Moreover, our study raises intriguing combinatorial questions, which can be seen as an \emm a posteriori, motivation of this work. For instance, some walks with large steps turn out to be counted by simple hypergeometric numbers, for reasons that remain combinatorially mysterious (see for instance Propositions~\ref{prop:12-a} and~\ref{prop:hard-18}). Furthermore, our study gives rise to attractive conjectures involving nine large step analogues of the four algebraic models of Figure~\ref{fig:alg} (Section~\ref{sec:interesting}). We hope that this paper will have a progeny as rich as its small step counterpart~\cite{BoMi10}. \begin{figure}[htb] \centering \includegraphics[height=23mm]{slope2} \caption{A square lattice walk confined to a wedge becomes a quadrant walk with large steps.} \label{fig:slope2} \end{figure} Our aim here is primarily to extend to arbitrary steps (and arbitrary dimension, for walks confined to the orthant~$\mathbb{N}^d$) a power series approach that was introduced in~\cite{BoMi10} to solve the 19 easiest small step models, namely those of the leftmost branch of Figure~\ref{fig:class-2D}. The group is lost, but the associated orbit survives. When the method works, it yields an expression of the generating function as the non-negative part of an algebraic series --- a form which implies D-finiteness.
On the negative side, we give a criterion that simultaneously implies that the orbit of a 2-dimensional model is infinite and that its generating function is not D-finite. We provide evidence that in 2D, the finiteness of the orbit may still be related to the D-finiteness of the solution. This is based in particular on the systematic exploration of quadrant walks with steps in $\{-2,-1,0,1\}^2$. Before we give more details on our results, let us examine the solution of a simple small step model, as presented in~\cite{BoMi10}. \subsection{A basic example: \texorpdfstring{$\mathcal S= \{\searrow,\leftarrow,\uparrow\}$}{S = \{SE,W,N\}}} \label{sec:basic} We denote by $q(i,j;n)$ the number of walks with steps in $\mathcal S$ that start at $(0,0)$, end at $(i,j)$ and remain in the non-negative quadrant $\mathbb{N}^2$. The associated generating function is \[ Q(x,y;t) :=\sum_{i,j,n\ge 0} q(i,j;n) x^i y^j t^n. \] We will find an explicit expression for this power series using a four-step approach, sometimes called the \emm algebraic kernel method, and borrowed from~\cite{BoMi10}, which we then generalize in the rest of the paper. \smallskip\noindent {\bf A functional equation.} A step-by-step construction of quadrant walks with steps in $\{\searrow,\leftarrow,\uparrow\}$ yields the functional equation \begin{equation} \label{eqfunc-base} Q(x,y)=1+t(x\bar y + \bar x +y) Q(x,y) -tx\bar y Q(x,0) -t\bar x Q(0,y), \end{equation} where we write $\bar x:=1/x$, $\bar y:=1/y$ and replace $Q(x,y;t)$ by $Q(x,y)$ to lighten notation. In this equation the constant term 1 stands for the empty walk. The next term counts quadrant walks extended by one of our three steps.
The final two terms remove the contributions of the two ``forbidden moves'': either we have extended a walk ending on the $x$-axis by a $\searrow$ step (term $-tx\bar y Q(x,0)$) or we have extended a walk ending on the $y$-axis by a $\leftarrow$ step (term $-t\bar x Q(0,y)$). Observe that the above equation can also be written in a form that involves two divided differences, one in $x$ and the other in $y$: \begin{equation}\label{dd} Q(x,y)=1+ty Q(x,y) +tx\ \frac{Q(x,y)-Q(x,0)}{y} +t\ \frac{Q(x,y)- Q(0,y)}x. \end{equation} We refer to~\cite[Sec.~2]{mbm-gessel} for a general discussion on equations involving divided differences with respect to two variables (those that involve divided differences with respect to one variable only are known to have algebraic solutions~\cite{mbm-jehanne}). We rewrite~\eqref{eqfunc-base} as \begin{equation} \label{eqfunc-base-RS} K(x,y) xy Q(x,y)=xy -R(x) -S(y) , \end{equation} where $R(x)= tx^2Q(x,0)$, $S(y)=ty Q(0,y)$, and $K(x,y)=1-t(x\bar y + \bar x +y)$ is the \emm kernel, of the equation. Observe the decoupling of the $x$ and $y$ variables in the right-hand side. We call the bivariate series $R(x)$ and $S(y)$ \emm sections,. \smallskip\noindent{\bf The group of the walk.} We now define two bi-rational transformations $\Phi$ and $\Psi$, acting on pairs $(u,v)$ of coordinates (which will be, typically, algebraic functions of $x$ and $y$): \[ \Phi : (u,v) \mapsto ( \bar u v, v) \qquad \hbox{and} \qquad \Psi: (u,v) \mapsto (u, u\bar v). \] Each transformation fixes one coordinate, and transforms the other \emm so as to leave the step polynomial $u\bar v + \bar u +v$ unchanged,.
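Both claims are easy to verify mechanically. The following sketch (plain Python; the exact rational point $(7/3,5/2)$ merely plays the role of a generic pair $(x,y)$) checks that the step polynomial $S(u,v)=u\bar v+\bar u+v$ is constant under alternating applications of $\Phi$ and $\Psi$, and that they generate an orbit of 6 elements:

```python
from fractions import Fraction

def S(p):
    """Step polynomial of S = {SE, W, N}: S(u, v) = u/v + 1/u + v."""
    u, v = p
    return u / v + 1 / u + v

Phi = lambda p: (p[1] / p[0], p[1])   # (u, v) -> (v/u, v)
Psi = lambda p: (p[0], p[0] / p[1])   # (u, v) -> (u, u/v)

# a generic-looking rational point, standing in for independent variables (x, y)
pt = (Fraction(7, 3), Fraction(5, 2))

assert Phi(Phi(pt)) == pt and Psi(Psi(pt)) == pt   # both are involutions

orbit = [pt]
for k in range(5):                     # alternate Phi, Psi, Phi, Psi, Phi
    orbit.append(Phi(orbit[-1]) if k % 2 == 0 else Psi(orbit[-1]))

assert all(S(q) == S(pt) for q in orbit)   # S is constant on the orbit
assert Psi(orbit[-1]) == pt                # the orbit closes after 6 elements
assert len(set(orbit)) == 6
```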
Both transformations are involutions, and the orbit of $(x,y)$ under the action of $\Phi$ and $\Psi$ consists of 6 elements: \[ (x,y) {\overset{\Phi}{\longleftrightarrow}} (\bar x y,y) {\overset{\Psi}{\longleftrightarrow}} (\bar x y,\bar x) {\overset{\Phi}{\longleftrightarrow}} (\bar y,\bar x) {\overset{\Psi}{\longleftrightarrow}} (\bar y,x\bar y) {\overset{\Phi}{\longleftrightarrow}} (x,x\bar y) {\overset{\Psi}{\longleftrightarrow}} (x,y). \] The group generated by $\Phi$ and $\Psi$ is thus the dihedral group of order 6. \smallskip\noindent{\bf A section-free equation.} We now write, for each element $(x',y')$ of the orbit, the functional equation~\eqref{eqfunc-base-RS} with $(x,y)$ replaced by $(x',y')$: \begin{align} K(x,y)\ xy Q(x,y) &= xy -R(x) -S(y), \nonumber\\ K(x,y)\ \bar x y^2 Q(\bar x y,y) &= \bar x y^2 -R(\bar x y )- S(y),\nonumber \\ K(x,y)\ \bar x^2 y Q(\bar x y,\bar x) &= \bar x^2 y -R(\bar x y )- S(\bar x), \label{6-eqs} \\ \vdots \qquad\qquad &= \qquad\qquad\vdots \nonumber\\ K(x,y)\ x^2\bar y Q(x,x\bar y) &= x^2\bar y - R(x) - S(x\bar y).\nonumber \end{align} Due to the definition of $\Phi$ and $\Psi$, two consecutive equations have one section $R( \cdot)$ or $S(\cdot)$ in common.
Thus, the alternating sum of our 6 equations has a right-hand side \emph{free from sections}: \begin{multline} K(x,y) \Big( xyQ(x,y) -\bar x y^2 Q(\bar x y,y)+ \bar x ^2 y Q(\bar x y,\bar x) -\bar x \bar y Q(\bar y,\bar x)+x\bar y^2Q(\bar y,x\bar y)-x^2\bar y Q(x,x\bar y)\Big) \\= xy-\bar x y^2+ \bar x ^2 y -\bar x \bar y+x\bar y^2-x^2\bar y.\label{alt-tandem} \end{multline} The right-hand side of this equation is the \emm orbit sum, occurring in the classification of Figure~\ref{fig:class-2D}. Equivalently, \begin{multline*} xyQ(x,y) -\bar x y^2 Q(\bar x y,y)+ \bar x ^2 y Q(\bar x y,\bar x) -\bar x \bar y Q(\bar y,\bar x)+x\bar y^2Q(\bar y,x\bar y)-x^2\bar y Q(x,x\bar y) \\=\frac {xy \left( 1-\bar x\bar y \right) \left( 1-\bar x^2y \right) \left( 1-x\bar y ^2\right)} {1-t(y+ \bar x+ x\bar y)}. \end{multline*} \smallskip\noindent{\bf Extracting $\boldsymbol{Q(x,y)}$.} The last equation, combined with the fact that $Q(x,y)$ is a power series in~$t$ with polynomial coefficients in $x$ and $y$, characterizes $Q(x,y)$ uniquely: indeed, the series $xy Q(x,y)$ has coefficients in $xy \mathbb{Q}[x,y]$, and thus involves only positive powers of $x$ and $y$. But the monomials occurring in each of the five other terms of the left-hand side involve either a negative power of~$x$, or a negative power of $y$ (or both).
Hence the series $xy Q(x,y)$ is obtained by expanding the right-hand side as a series in $t$ with coefficients in $\mathbb{Q}[x,\bar x, y, \bar y]$, and then collecting terms with positive powers of $x$ and $y$. We denote this extraction by: \[ xy Q(x,y) = [x^{>} y^{>}] \frac {xy \left( 1-\bar x\bar y \right) \left( 1-\bar x^2y \right) \left( 1-x\bar y ^2\right)} {1-t(y+ \bar x+ x\bar y)}. \] Equivalently, upon dividing by $xy$, the series $Q(x,y)$ is obtained by collecting the \emm non-negative part, in $x$ and $y$ of a rational function: \[ Q(x,y)= [x^{\ge } y^{\ge }]\frac{ \left( 1-\bar x\bar y \right) \left( 1-\bar x^2y \right) \left( 1-x\bar y ^2\right)}{1-t(\bar x+y+x\bar y)}. \] This explicit expression has strong consequences. First, it guarantees that $Q(x,y)$ is D-finite~\cite{lipshitz-diag}. Second, expanding $(\bar x+y+x\bar y)^n$ in powers of $x$ and $y$, it delivers a hypergeometric expression for the number of walks of length $n=3m+2i+j$ ending at $(i,j)$: \[ q(i,j;n)=\frac{(i+1)(j+1)(i+j+2) (3m+2i+j)!}{m!(m+i+1)!(m+i+j+2)!}. \] We conclude this example with a remark for the combinatorially inclined readers: since walks with steps in $\mathcal S= \{\searrow,\leftarrow,\uparrow\}$ give a simple encoding of Young tableaux of height at most 3, the above formula is just the translation in terms of walks of the classical hook formula~\cite[\S3.10]{sagan-book}. \subsection{Outline of the paper} Based on the above example, we can now describe our results more precisely. The next four sections present the extension to arbitrary steps (and dimension) of the four stages involved in the above solution.
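Before moving on, the hypergeometric formula obtained for the basic example is easy to test against a direct dynamic-programming count of quadrant walks with steps $\{\searrow,\leftarrow,\uparrow\}$; a short sketch in plain Python:

```python
from math import factorial

STEPS = [(1, -1), (-1, 0), (0, 1)]     # SE, W, N

def walk_counts(N):
    """layers[n] maps (i, j) to the number of n-step quadrant walks (0,0) -> (i,j)."""
    layers = [{(0, 0): 1}]
    for _ in range(N):
        nxt = {}
        for (i, j), c in layers[-1].items():
            for di, dj in STEPS:
                if i + di >= 0 and j + dj >= 0:       # stay in the quadrant
                    key = (i + di, j + dj)
                    nxt[key] = nxt.get(key, 0) + c
        layers.append(nxt)
    return layers

def q_formula(i, j, n):
    """Hypergeometric formula, with n = 3m + 2i + j; the count is 0 otherwise."""
    if (n - 2 * i - j) % 3 or n < 2 * i + j:
        return 0
    m = (n - 2 * i - j) // 3
    return ((i + 1) * (j + 1) * (i + j + 2) * factorial(n)
            // (factorial(m) * factorial(m + i + 1) * factorial(m + i + j + 2)))

layers = walk_counts(12)
for n, layer in enumerate(layers):
    for (i, j), c in layer.items():
        assert c == q_formula(i, j, n)
```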
The principles of our approach are robust enough to be applicable to the enumeration of \emm weighted, walks, which can be especially interesting in a probabilistic context. We give many examples to illustrate these stages, but also to show how they can fail: indeed, since our method only solves 19 of the 79 small step models in the quadrant, we know in advance that it \emm has, to fail for some models. Two obstacles can already be seen in the classification of Figure~\ref{fig:class-2D}: the group (or what is left of it, namely its orbit) can be infinite, and the orbit sum can vanish. Interestingly, we provide in Section~\ref{sec:infinite-orbit} a criterion that implies simultaneously the infiniteness of the orbit and the non-D-finiteness of the generating function. In Sections~\ref{sec:1D} and~\ref{sec:hadamard}, we show that our approach applies systematically in dimension 1 (walks on a half-line) and for the so-called \emm Hadamard models, in dimension 2. Working in dimension 1 is the least one can ask for, as walks on a half-line are very well understood~\cite{banderier-flajolet,bousquet-petkovsek-recurrences,gessel-factorization}. It is worth noting that the form of our solution is not exactly the standard form obtained by earlier approaches. The second result, dealing with Hadamard models, is more interesting as it seems that many models with finite orbit are Hadamard. In the small step case for instance, 16 of the 19 models solvable by our approach (that is, 16 of the 23 D-finite models) are of Hadamard type. In Section~\ref{sec:m21} we apply these principles to the classification of models with steps in $\{-2,-1,0,1\}^2$. Several results are still conjectural, but in a sense we obtain a perfect analogue of the small step classification shown in Figure~\ref{fig:class-2D}: our approach solves all 231 models with a finite orbit and a non-vanishing orbit sum (Figure~\ref{classificationm2}). 
For each of them, we express $Q(x,y;t)$ as the non-negative part in $x$ and $y$ of an explicit rational function. Exactly 227 of these 231 solved models are in fact Hadamard. This leaves out 9 models with a finite group but orbit sum zero, for which we state several attractive conjectures. Finally, we establish non-D-finiteness for the $12\ 870$ models with an infinite orbit, except for 16 of them, which we still conjecture to be non-D-finite. In Section~\ref{sec:asympt} we show that the form of the solutions that we obtain is well-suited to the asymptotic analysis of their coefficients, and we work out explicitly the analysis for the 4 non-Hadamard models with a finite orbit solved in Section~\ref{sec:m21}. We conclude in Section~\ref{sec:final} with a number of remarks and open questions. \\ \noindent {\bf Notation and definitions.} For the sake of compactness we often encode a step into a word consisting of its coordinates, with a bar above negative coordinates: for example, the step $(-2,3,-5) \in \mathbb{Z}^3$ will be denoted $\bar 23\bar 5$. Similarly, as used above, we use a bar over variables to denote their reciprocals, so that $\bar x=1/x$. A \emm small forward step, has its coordinates in $\{1, 0, -1, -2, \ldots\}$ while a \emm large forward step, has at least one coordinate larger than 1. We define similarly small and large backward steps. A \emm small step, has only coordinates in $\{-1, 0, 1\}$. In two dimensions, small steps can be identified by the compass directions, and we sometimes draw them pictorially with arrows: for instance, $(1,1)$ can be denoted $\nearrow$. For a ring $R$, we denote by $R[x]$ (resp.~$R[[x]]$) the ring of polynomials (resp. formal power series) in $x$ with coefficients in $R$.
If $R$ is a field, then $R(x)$ stands for the field of rational functions in $x$, and $R((x))$ is the field of Laurent series in $x$ (that is, series of the form $\sum_{n\ge n_0} a_n x^n$, with $n_0\in \mathbb{Z}$). This notation is generalized to several variables in the usual way. For instance, the generating function $Q(x,y;t)$ of walks restricted to the first quadrant is a series in $\mathbb{Q}[x,y][[t]]$. We shall also consider \emm fractional power series,, namely power series in a (positive) fractional power of $x$, and finally Puiseux series, which are Laurent series in a fractional power of the variable. We recall that if $R$ is an algebraically closed field, then Puiseux series in $x$ with coefficients in~$R$ form an algebraically closed field~(see~\cite{abhyankar-book} or~\cite[Chap.~6]{stanley-vol2}). If $F(u;t)$ is a power series in $t$ whose coefficients are Laurent series in $u$, \[ F(u;t)=\sum_{n\ge0} t^n \left(\sum_{i\ge i(n)} u^i f(i;n)\right), \] we denote by $[u^{>}]F(u;t)$ the \emm positive part of $F$ in $u$,: \[ [u^{>}]F(u;t)= \sum_{n\ge 0} t^n \left(\sum_{i>0 } u^i f(i;n)\right). \] We define the \emm non-negative part, $[u^{\ge}]F(u;t)$ in a similar fashion, by retaining as well the constant term in $u$. We recall that a series $Q(x,y;t)$ is \emm algebraic, if there exists a non-zero polynomial $P\in \mathbb{Q}[x,y,t,s]$ such that $P(x,y,t,Q(x,y;t))=0$. It is \emm D-finite, (with respect to the variable $t$) if the vector space over $\mathbb{Q}(x,y,t)$ spanned by the iterated derivatives $\partial_t^m Q(x,y;t)$, for $m \ge 0$, has finite dimension (here $\partial_t$ denotes differentiation with respect to $t$). The latter definition can be adapted to D-finiteness in several variables, for instance $x$, $y$ and $t$: in this case we require D-finiteness with respect to \emm each, variable separately~\cite{lipshitz-df}.
Every algebraic series is D-finite~\cite[Prop.~2.3]{lipshitz-df}. If $Q(x,y;t)$ is D-finite in its three variables, then so are $Q(0,0;t)$ and $Q(1,1;t)$. For a one-variable series $F(t)=\sum f_n t^n$, D-finiteness is equivalent to the existence of a linear recurrence relation with polynomial coefficients in $n$ satisfied by the coefficient sequence $(f_n)$. We often denote by $F_t$ the derivative ${\partial_t F}$ of a series $F(t)$. This notation is generalized to several variables. For instance, $F_{t,u}$ stands for ${\partial_t\partial _u F}$. \section{A functional equation} \label{sec:eqfunc} Let $d\ge 1$ and let $\mathcal S$ be a finite subset of $\mathbb{Z}^d$. We would like to count walks that take their steps in $\mathcal S$, start from the origin and are confined to the orthant~$\mathbb{N}^d$. We denote by $q(i_1, \ldots , i_d;n)$ the number of such walks consisting of $n$ steps and ending at $(i_1, \ldots, i_d)$, and by $Q(x_1,\ldots, x_d;t)$ the associated generating function: \[ Q(x_1, \ldots, x_d;t)\equiv Q(x_1, \ldots, x_d):= \sum_{(i_1, \ldots, i_d , n ) \in \mathbb{N}^{d+1}} q(i_1, \ldots, i_d;n) x_1^{i_1} \cdots x_d^{i_d}t^n. \] Note that we often omit the dependence of $Q$ on $t$. The notation $Q$ refers to the two-dimensional case (walks in a \emm quadrant,), from which we will borrow most of our examples. In that case, we use the variables $x$ and $y$ instead of $x_1$ and $x_2$. We use bold notation for multivariate quantities, so that $\Bx=(x_1, \ldots, x_d)$, and for a $d$-tuple $\Bi=(i_1, \ldots, i_d) \in \mathbb{Z}^d$ we use the abbreviation $\Bx^{\Bi} = x_1^{i_1} \cdots x_d^{i_d}$. The \emm step polynomial, of a model (also called the \emm characteristic polynomial,) is \begin{equation} \label{S-def} S(x_1, \ldots, x_d)=S(\Bx) = \sum_{\mathbf{s} \in\mathcal S} {\Bx}^{\mathbf{s}}.
\end{equation} The step polynomial is a Laurent polynomial in the variables $x_i$; here every step has weight 1, but our approach can be adapted to the enumeration of weighted walks with weights in some field $\mathbb{F}$ (for instance~$\mathbb{F}=\mathbb{R}$ in a probabilistic context). One can always write for the generating function $Q(x_1, \ldots, x_d;t)$ a functional equation defining this series, based on a step-by-step construction of walks confined to $\mathbb{N}^d$, as was done in~\eqref{eqfunc-base}. This functional equation is linear in the main series $Q(x_1, \ldots, x_d;t)$ and, when the terms are grouped on one side of the equation, the coefficient in front of $Q(x_1,\dots,x_d;t)$ is the \emm kernel, \[ K(x_1,\ldots, x_d)= 1- t S(x_1, \ldots, x_d). \] The equation also involves unknown series that only depend on \emm some, of the variables $x_1, \ldots, x_d$ (and on $t$), such as, for instance, the series $Q(x,0)$ and $Q(0,y)$ in~\eqref{eqfunc-base}. These series are called \emm sections, (of $Q$). Let us consider a few examples. \medskip \noindent{\bf Example A.} Take $d=1$, and $\mathcal S=\{\bar 1, 2\}$. The equation satisfied by the series $Q(x;t)\equiv Q(x)$ reads \[ Q(x)= 1+t(\bar x +x^2) Q(x) -t \bar x Q(0), \] where the term $-t\bar x Q(0)$ removes forbidden moves from position 0 to position $-1$. Equivalently, with $K(x)=1-t(\bar x +x^2)$, the previous equation reads \begin{equation} \label{eqf-1D-bar} K(x) Q(x) = 1- t\bar x Q(0). \end{equation}$\hfill{\Box}$ \medskip \noindent{\bf Example B.} Still with $d=1$, we now reverse the steps of the previous example so as to have a long backward step, and study $\mathcal S=\{\bar 2, 1\}$. Extending a walk $w$ by the step $-2$ is now forbidden as soon as $w$ ends at position $0$ or $1$.
Hence, denoting by $Q_i\equiv Q_i(t)$ the length generating function of walks ending at position $i$, the equation satisfied by $Q(x)$ reads \[ Q(x)= 1+t(\bar x^2 +x) Q(x) -t \bar x^2 Q_0 -t \bar x Q_1, \] or equivalently, with $K(x)=1-t(\bar x^2 +x)$, \begin{equation}\label{eqf-1D} K(x) Q(x) = 1- t \bar x^2 Q_0 -t \bar x Q_1. \end{equation} Observe that $Q_0=Q(0)$ and $Q_1= \partial_xQ(0)$. The occurrence of a large backward step results in one more section on the right-hand side. $\hfill{\Box}$ \medskip \noindent{\bf Example C: Gessel's walks.} We return to two-dimensional models, now with the step set $\mathcal S=\{\rightarrow, \nearrow, \leftarrow, \swarrow\}$. Appending a south-west step is forbidden as soon as the walk ends at abscissa or ordinate zero. The functional equation thus reads: \[ Q(x,y)=1+ t ( x+xy+\bar x + \bar x\bar y) Q(x,y) -t\bar x Q(0,y) - t\bar x \bar y (Q(x,0)+Q(0,y)-Q(0,0)). \] The term in $Q(0,0)$ avoids removing twice walks that end at $(0,0)$. Equivalently, with $K(x,y)=1-t( x+xy+\bar x + \bar x\bar y)$, \[ K(x,y) Q(x,y) = 1 -t \bar x(1+\bar y) Q(0,y) - t\bar x\bar y (Q(x,0)-Q(0,0)). \] $\hfill{\Box}$ \medskip \noindent{\bf Example D: A model with a large forward step and a large backward step.} We now take $\mathcal S=\{ \bar 20, \bar 11, 02, 1\bar 1\}$. Quadrant walks formed of these steps, starting and ending at the origin, are known to be in bijection with \emm bipolar orientations of quadrangulations,~\cite{kenyon-bip,BoFuRa17}.
The functional equation reads \[ Q(x,y)=1+t(\bar x^2+\bar x y+y^2+x\bar y ) Q(x,y) -t\bar x^2 \left(Q_{0,-}(y)+xQ_{1,-}(y)\right) -t\bar x y Q_{0,-}(y)- tx\bar y Q(x,0), \] where $x^iQ_{i, -}(y)$ counts quadrant walks ending at $x$-coordinate~$i$. Note that $Q_{0,-}(y)=Q(0,y)$. We can rewrite the functional equation, using $K(x,y)=1-t( \bar x^2+\bar x y+y^2+x\bar y )$, as \begin{equation} \label{eqfunc:quadrangulations} K(x,y) Q(x,y)= 1 -t \bar x(\bar x+y)Q_{0,-}(y) -t\bar x Q_{1,-}(y)- tx\bar y Q(x,0). \end{equation}$\hfill{\Box}$ \medskip \noindent{\bf Example E: A model in three dimensions.} We now take $\mathcal S=\{\bar 1 \bar 1 \bar 1 ,\bar 1 \bar 1 1,\bar 1 10,100\}$. As for Gessel's walks (Example C), the functional equation involves inclusion-exclusion so as to avoid excluding several times the same move, and one obtains: \begin{multline} K(x,y,z) Q(x,y,z)= 1 -t\bar x(\bar y \bar z+\bar y z+ y) Q( 0,y,z) -t\bar x\bar y (\bar z+z) Q(x, 0,z) -t\bar x\bar y\bar z Q(x,y, 0)\\ \label{eqfunc-3D} +t\bar x\bar y (\bar z+z) Q( 0, 0,z) + t\bar x\bar y\bar z Q( 0, y, 0) +t\bar x\bar y\bar z Q(x, 0, 0) -t\bar x\bar y\bar z Q( 0, 0, 0), \end{multline} where the kernel is \[ K(x,y,z)=1-t( \bar x\bar y\bar z +\bar x\bar y z+\bar x y+x). \]$\hfill{\Box}$ \medskip After seeing all these examples, the reader should be convinced that a functional equation can be written for any model $\mathcal S$. We only give its general form in two cases: first in dimension two, and then for models with small backward steps.
In dimension two, the equation reads: \begin{equation}\label{eq:quadrant} K(x,y)Q(x,y)=1 - t \sum_{(k, \ell) \in \mathcal S} x^k y^\ell \left( \sum_{0\le i <-k} x^i Q_{i,-}(y) + \sum_{0\le j<-\ell} y^j Q_{-,j}(x) - \sum_{\substack{0\le i<-k\\ 0\le j<-\ell}} x^i y^j Q_{i,j}\right), \end{equation} where $K(x,y)=1-tS(x,y)$ is the kernel, $x^iQ_{i, -}(y)$ (resp. $y^jQ_{-,j}(x)$) counts quadrant walks ending at abscissa $i$ (resp. at ordinate $j$), and $Q_{i,j}$ is the length generating function of walks ending at $(i,j)$. For a model of walks with \emph{small backward steps} confined to the orthant $\mathbb{N}^d$ in arbitrary dimension~$d$, the functional equation reads: \begin{equation}\label{eqfunc:small} K(\Bx)Q(\Bx)=1+t \sum_{\emptyset \not = I \subset \llbracket 1, d\rrbracket} \left((-1)^{|I|} Q_I(\Bx) \sum_{\substack{\mathbf{s} \in \mathcal S: \\ s_i =-1 \,\, \forall i \in I}} \Bx^{\mathbf{s}}\right), \end{equation} where $\Bx=(x_1, \ldots, x_d)$, $K(\Bx)=1-tS(\Bx)$, and $Q_I(\Bx)$ is the specialization of $Q(\Bx)$ where each $x_i, i\in I$, is set to $0$ (for instance, if $I=\{2,3\}$ then $Q_I(x)=Q(x_1, 0,0, x_4, \ldots, x_d)$). The proof is an inclusion-exclusion argument generalizing the proof of~\eqref{eqfunc-3D}. \section{The orbit of \texorpdfstring{$\boldsymbol{(x_1, \ldots, x_d)}$}{(x1,...,xd)}} \label{sec:orbit} In Section~\ref{sec:basic}, we showed on an example how to associate a group to a 2D model with small steps. We now describe, for a general step set $\mathcal S$ in arbitrary dimension $d$, how to define the counterpart of this group, or more precisely of its orbit. To avoid trivial cases, we only consider models that have both positive and negative steps in each direction. \subsection{Definition and first examples} We denote by $\mathbb{K}$ the field $\mathbb{C}(x_1, \ldots, x_d)$, and by $\overline{\mathbb{K}}$ an algebraic closure of $\mathbb{K}$.
We first define two relations $\approx$ and $\sim$ on elements of $\left(\overline{\mathbb{K}}\setminus \{0\}\right)^d$; recall that $S(\Bx)$ denotes the step polynomial of~$\mathcal S$, defined by~\eqref{S-def}. \begin{Definition} \label{def:equiv} Let $\Bu$ and $\Bv$ be two distinct $d$-tuples in $\left(\overline{\mathbb{K}}\setminus \{0\}\right)^d$, and let $1\le i \le d$. Then $\Bu$ and $\Bv$ are \emm $i$-adjacent,, denoted $\Bu \stackrel i \approx \Bv$, if $S(\Bu)=S(\Bv)$ and $\Bu$ and $\Bv$ differ only by their $i$th coordinate. They are \emm adjacent,, denoted $\Bu \approx \Bv$, if they are $i$-adjacent for some $i$. Clearly, the relation $\approx$ is symmetric. We denote by $\sim$ its reflexive and transitive closure. The \emm orbit, of $\Bu$ is its equivalence class for this relation. The \emm $\Bu$-length, of an element $\Bv$ in the orbit of $\Bu$ is the smallest $\ell$ such that there exists $\Bu^{(0)}=\Bu, \Bu^{(1)}, \ldots, \Bu^{(\ell)}= \Bv$ with $\Bu^{(0)}\approx \Bu^{(1)} \approx \cdots \approx \Bu^{(\ell)}$. \end{Definition} Note that the value of $S$ is constant over the orbit of $\Bu$. We will often refer to the orbit of $\Bx=(x_1, \ldots,x_d)$ (or $(x,y)$ in two dimensions) as \emph{the} orbit of the model $\mathcal S$, and to the \emm length, of an element of this orbit as its $\Bx$-length. We use the word \emm orbit, even though we have not defined any underlying group: this terminology comes from the case of small steps, as justified by Proposition~\ref{prop:small-group} below. Before we proceed, let us check that the structure of the orbit does not depend on the choice of the algebraic closure of $\mathbb{K}$. \begin{Lemma}\label{lem:isom} Let $\overline{\mathbb{K}}$ and $\widehat{\mathbb{K}}$ be two algebraic closures of $\mathbb{K}$ and let $\tau : \overline{\mathbb{K}} \rightarrow \widehat{\mathbb{K}}$ be a field automorphism fixing $\mathbb{K}$.
For any $\Bu=(u_1, \ldots, u_d)\in \left(\overline{\mathbb{K}}\setminus\{0\}\right)^d$, we denote by $\tau(\Bu)$ the element of $\left(\widehat{\mathbb{K}}\setminus\{0\}\right)^d$ obtained by applying $\tau$ to $\Bu$ component-wise. Then $\tau$ preserves adjacencies, and sends the orbit of $\Bu$ onto the orbit of $\tau(\Bu)$. \end{Lemma} \begin{proof} (sketch) Clearly, if $S(\Bv)=S(\Bw)$ then $S(\tau(\Bv))=S(\tau(\Bw))$, because $S$ has rational coefficients. And if $\Bv$ and $\Bv'$ differ by their $i$th coordinate, the same holds for their images by~$\tau$. This shows that adjacencies are preserved. The isomorphism of orbits then follows by induction on the length. \end{proof} The next proposition states that two models that are equivalent up to a symmetry of the hypercube have isomorphic orbits. Since these symmetries are generated by a reflection and adjacent transpositions, it suffices to examine these two cases. \begin{Proposition}\label{prop:sym} Let $\mathcal S \subset \mathbb{Z}^d$ be a model with step polynomial $S(x_1, \ldots, x_d)$, and let $\tilde{\mathcal S}$ be the model obtained by swapping the first two coordinates, with step polynomial $S(x_2, x_1, x_3 ,\ldots, x_d)$. Then the orbits of $\mathcal S$ and $\tilde{\mathcal S}$ are \emm isomorphic, (there is a bijection from one to the other that preserves adjacencies). The same holds if $\tilde{\mathcal S}$ is obtained from $\mathcal S$ by a reflection in the hyperplane $x_1=0$; that is, if its step polynomial is $S(1/x_1, x_2, \ldots, x_d)$. \end{Proposition} \begin{proof} To lighten notation, we prove this result in two dimensions. The proof is similar in higher dimensions. In the first case, let us construct the orbit of $\mathcal S$ in the field $\overline{\mathbb{K}}$ of iterated Puiseux series in $x$ and $y$ (Puiseux series in $x$ whose coefficients are Puiseux series in $y$).
We shall construct the orbit of $\tilde{\mathcal S}$ in the field $\widehat{\mathbb{K}}$ of iterated Puiseux series in $y$ and $x$ (note the inversion). If $u\in \overline{\mathbb{K}}$, let $\delta(u) \in \widehat{\mathbb{K}}$ be obtained from $u$ by swapping $x$ and $y$. We claim that, if $(u,v)$ is in the orbit of~$\mathcal S$, then the pair $(\delta(v), \delta(u))$ is in the orbit of $\tilde{\mathcal S}$, and vice-versa. First, if $(u,v)=(x,y)$, then $(\delta(v), \delta(u))=(x,y)$. Then, if $(u,v)$ is 2-adjacent to $(u,w)$ in the orbit of $\mathcal S$, then $(\delta(v), \delta(u))$ is 1-adjacent to $(\delta(w), \delta(u))$ in the orbit of $\tilde{\mathcal S}$, because \[ \tilde S(\delta(w),\delta(u))= S(\delta(u), \delta(w))= S(\delta(u), \delta(v))=\tilde S(\delta(v),\delta(u)). \] (The second equality comes from the 2-adjacency of $(u,v)$ and $(u,w)$ for $\mathcal S$.) One proves similarly that 1-adjacencies for $\mathcal S$ become 2-adjacencies for $\tilde{\mathcal S}$. The isomorphism between the orbits of $\mathcal S$ and $\tilde{\mathcal S}$ then follows by induction on the length. The proof is similar in the second case, upon constructing the orbit of $\tilde{\mathcal S}$ in the field $\widehat{\mathbb{K}}$ of iterated Puiseux series in $\bar x$ and $y$. Denoting by $\delta$ the transformation from $\overline{\mathbb{K}}$ to $\widehat{\mathbb{K}}$ that sends $x$ to $\bar x$, a pair $(u,v)$ is in the orbit of $\mathcal S$ if and only if $(1/\delta(u), \delta(v))$ is in the orbit of $\tilde{\mathcal S}$. \end{proof} We will now examine examples. One important observation is the following. \begin{Lemma}\label{lem:ind} If the coordinates of $\Bu$ are algebraically independent over $\mathbb{Q}$, then the same holds for any $\Bv$ in the orbit of $\Bu$. Moreover, the number of elements $\Bv$ that are $i$-adjacent to $\Bu$ is $M_i+m_i-1$, where $M_i$ (resp. $-m_i$) is the largest (resp. smallest) move in the $i$th direction among the steps of~$\mathcal S$.
In particular, for small step models ($M_i=m_i=1$ for all $i$) there is one adjacent element in every direction. \end{Lemma} \begin{proof} Let $\Bv=(u_1, \ldots, u_{i-1},v, u_{i+1}, \ldots, u_d)$ be $i$-adjacent to $\Bu=(u_1, \ldots, u_d)$, and let us prove that the coordinates of $\Bv$ are independent over~$\mathbb{Q}$. Assume that there exists a non-trivial polynomial $P(\mathbf{a})$ with rational coefficients such that $P(\Bv)=0$. Since the $u_i$'s are algebraically independent, $P(\mathbf{a})$ must depend on $a_i$. Hence $v$ is algebraic over $\mathbb{Q}(u_1, \ldots, u_{i-1}, u_{i+1}, \ldots, u_d)$. The same holds for $S(\Bv)$, and hence for $S(\Bu)$. Since $S(\mathbf{a})$ actually depends on $a_i$, this means that~$u_i$ is algebraic over $\mathbb{Q}(u_1, \ldots, u_{i-1}, u_{i+1}, \ldots, u_d)$, which contradicts the algebraic independence of the $u_j$'s. The first statement of the lemma follows, by induction on the length. Then, by expanding $S(\Bv)$ in powers of $v$, we have \[ S(\Bv)= P_{M_i}(u_1, \ldots, u_{i-1},u_{i+1}, \ldots,u_d) v^{M_i} +\cdots +P_{-m_i}(u_1, \ldots, u_{i-1},u_{i+1}, \ldots,u_d) v^{-m_i}= S(\Bu). \] As the coordinates of $\Bu$ are algebraically independent, this equation has $M_i+m_i$ solutions in~$v$. One of them is the trivial solution $v=u_i$. Each root $v$ gives rise to an element $\Bv$ whose coordinates are algebraically independent. In particular, $S_{a_i}(\Bv)\not = 0$, which means that $v$ is not a multiple root of $S(\Bv)-S(\Bu)$. Hence this polynomial (in $v$) has distinct roots. Removing the trivial root $v=u_i$ gives $m_i+M_i-1$ distinct elements $\Bv$ that are $i$-adjacent to $\Bu$. \end{proof} \medskip \noindent{\bf Example $\boldsymbol {(d=1)}$.} In dimension 1 the orbit of $x$ consists of all solutions $x'$ of the equation $S(x)=S(x')$.
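For instance, for Example A, where $S(x)=\bar x + x^2$, these solutions are the three roots of $v^3-S(x)\,v+1=0$, which factors as $(v-x)\big(v^2+xv-\bar x\big)$. A quick numeric sketch (plain Python, at the arbitrarily chosen generic point $x=0.7$) checking that $S$ is indeed constant on these three elements:

```python
import math

x = 0.7                      # a generic point
S = 1 / x + x * x            # step polynomial of Example A: S = {-1, 2}

# S(v) = S(x)  <=>  v^3 - S v + 1 = 0, which factors as (v - x)(v^2 + x v - 1/x),
# so the orbit of x consists of the three elements below.
disc = math.sqrt(x * x + 4 / x)
orbit = [x, (-x + disc) / 2, (-x - disc) / 2]

for v in orbit:
    assert abs(1 / v + v * v - S) < 1e-9   # S is constant on the orbit
assert len(set(orbit)) == 3                # the orbit is finite, with M+m = 3 elements
```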
It is thus finite.$\hfill{\Box}$ \medskip \noindent{\bf Example: small steps with $\boldsymbol {d=2}$.} Let us get back to the example of Section~\ref{sec:basic}. Then it can be checked that two elements are adjacent if and only if one is obtained from the other by applying $\Phi$ or $\Psi$. This will be generalized to all small step models {(in arbitrary dimension)} in the proposition below. Note however that in dimension 2, and beyond, the orbit may be infinite. This happens for 56 of the 79 small step quadrant models~\cite{BoMi10}, for instance when $\mathcal S=\{\uparrow, \rightarrow, \swarrow, \leftarrow\}$, {in which case $S(x,y) = y + x + \bar x \bar y + \bar x$.} {For models with small steps, the orbit of $\Bx$ is indeed its orbit under the action of a certain group, as in the example of Section~\ref{sec:basic}.} \begin{Proposition}\label{prop:small-group} Assume that $\mathcal S$ consists of small steps, that is, $\mathcal S\subset \{-1, 0,1\}^d$. Define $d$ bi-rational transformations $\Phi_1, \ldots, \Phi_d$ by: \[ \Phi_i(a_1, \ldots, a_d)= \left(a_1, \ldots, a_{i-1}, \frac 1 {a_i} \ \frac{ S_{i}^-(\mathbf{a})}{S_i^+(\mathbf{a})}, a_{i+1}, \ldots, a_d\right), \] where $\mathbf{a}=(a_1, \ldots, a_d)$ and $S_{i}^-(\mathbf{a})$ (resp. $S_i^+(\mathbf{a})$) is the coefficient of $1/a_i$ (resp. $a_i$) in $S(\mathbf{a})$. Then the $\Phi_i$'s are involutions. If the $a_j$'s are algebraically independent over {$\mathbb{Q}$}, then $\mathbf{a}$ and $\Phi_i(\mathbf{a})$ are $i$-adjacent. {Conversely,} let $\Bx=(x_1, \ldots, x_d)$ and {let} $\Bu=(u_1, \ldots, u_d)$ be in the orbit of $\Bx$. An element $\Bv$ of $\left( {\overline\GK}\setminus\{0\}\right)^d$ is {$i$-adjacent} to $\Bu$ if and only if $\Bv=\Phi_i(\Bu)$. Consequently, the orbit of $\Bx$ is indeed its orbit under the action of a group, namely the group generated by the involutions $\Phi_i$.
{Finally, the lengths of two adjacent elements in the orbit of $\Bx$ differ by $\pm1$.} \end{Proposition} \begin{proof} To prove that $\Phi_i$ is an involution, we first observe that $S_i^+(\mathbf{a})$ and $S_i^-(\mathbf{a})$ are independent of~$a_i$. Hence, denoting $\mathbf{a}'=\Phi_i(\mathbf{a})$, the $i$th coordinate of $\Phi_i(\mathbf{a}')$ is \[ \frac 1 {a'_i} \frac{ S_{i}^-(\mathbf{a}')}{S_i^+(\mathbf{a}')} = a_i \frac { S_{i}^+(\mathbf{a})}{S_i^-(\mathbf{a})}\frac{ S_{i}^-(\mathbf{a})}{S_i^+(\mathbf{a})}=a_i. \] If the $a_j$'s are algebraically independent over {$\mathbb{Q}$}, then $\mathbf{a}$ and $\mathbf{a}'$ {are distinct}, differ in their {$i$th} coordinate only, and, upon writing \[ S(\Bx)= \frac 1 {x_i} S_i^-(\Bx) +S_i^0(\Bx)+ x_i S_i^+(\Bx), \] we can check that $S(\mathbf{a})= S(\mathbf{a}')$. {Hence $\mathbf{a}$ and $\Phi_i(\mathbf{a})$ are $i$-adjacent.} Now {let $\Bu$ be in} the orbit of $\Bx$. Write $S(\Bx)$ as above. Note that $S_i^-(\Bx)$, $S_i^0(\Bx)$ and $ S_i^+(\Bx)$ are unchanged if we only modify the $i$th coordinate of~$\Bx$. So if {$\Bv\stackrel i \approx\Bu$}, {the fact that $S(\Bu)=S(\Bv)$ gives} \[ \frac 1 {u_i} S_i^-(\Bu) + u_i S_i^+(\Bu) = \frac 1 {v_i} S_i^-(\Bu) + v_i S_i^+(\Bu), \quad \hbox{that is, } \quad S_i^-(\Bu) = u_i v_i S_i^+(\Bu). \] {By the above lemma,} the coordinates of $\Bu$ are algebraically independent, hence $S_i^+(\Bu)\not = 0$ and $\Bv$ must be $\Phi_i(\Bu)$. Conversely, we have proved above that $\Phi_i(\Bu)$ is $i$-adjacent to $\Bu$. This concludes the {description of the orbit of $\Bx$}. The proof of the final result was communicated to us by Andrew Elvey Price and Michael Wallner, whom we thank for their great help. Clearly, if $\Bu$ and $\Bv$ are two adjacent elements in the orbit of $\Bx$, their lengths differ by $0, +1$ or $-1$.
We want to exclude the value $0$, which amounts to saying that in the graph whose vertices are the elements of the orbit, with edges between adjacent elements, there is no odd cycle. Equivalently, this graph is bipartite. In order to prove this, we define a sign $\varepsilon(\Bu) \in \{-1, +1\}$ on elements $\Bu$ of the orbit of $\Bx$, which changes when an involution $\Phi_i$ is applied. {The sign is defined by \[ \varepsilon(\Bu)= \left(\prod_{i=1}^d x_i\right) \det M(\Bu), \qquad \hbox{where} \qquad M(\Bu)=\left( \frac 1 {u_i} \frac{\partial u_i}{\partial x_j}\right)_{1\le i,j \le d}. \] It is readily checked that $\varepsilon(\Bx)=1$, and this implies $\varepsilon(\Bu)=(-1)^{\hbox {\scriptsize{length}}(\Bu)}$.} Let us then take $\Bv=\Phi_i(\Bu)$, and prove that $\varepsilon(\Bv)=-\varepsilon(\Bu)$. The matrix $M(\Bv)$ only differs from $M(\Bu)$ in the $i$th row. Let us denote \[ \Phi_i(\mathbf{a})= \left(a_1, \ldots, a_{i-1},\frac 1 {a_i} R_i(\mathbf{a}), a_{i+1}, \ldots, a_d\right), \] where $R_i(\mathbf{a})=S_i^-(\mathbf{a})/S_i^+(\mathbf{a})$ only depends on the variables $a_1, \ldots, a_{i-1}, a_{i+1}, \ldots, a_d$. Then for $1\le j \le d$, the $(i,j)$ entry of $M(\Bv)$ is \begin{align*} \frac 1{v_i} \frac{\partial v_i}{\partial x_j} &= \frac{u_i}{R_i(\Bu)} \left( -\frac 1{u_i^2} R_i(\Bu) \frac{\partial u_i}{\partial x_j} + \sum_{k\not = i} \frac{\partial R_i}{\partial a_k}(\Bu) \frac{\partial u_k}{\partial x_j}\right) \\ &= -\frac 1 {u_i} \frac{\partial u_i}{\partial x_j} + \frac{u_i}{R_i(\Bu)} \sum_{k\not = i} \frac{\partial R_i}{\partial a_k}(\Bu) \frac{\partial u_k}{\partial x_j}. \end{align*} Upon subtracting from the $i$th row of $M(\Bv)$ its $k$th row, multiplied by ${u_i u_k} \partial R_i/\partial a_k(\Bu)/R_i(\Bu)$, for $1\le k \not =i \le d$, we see that $\det M(\Bv)$ is also the determinant of the matrix obtained from $M(\Bu)$ by changing the sign of all elements of the $i$th row, which concludes the proof.
\end{proof} \medskip \noindent{\bf Example D (continued): large steps with $\boldsymbol{d=2}$.} Let us take $\mathcal S=\{\bar 20,\bar 11, 02,1\bar 1\}$, so that \[ S(x,y)= \bar x^2+\bar x y +y^2+x\bar y. \] We will incrementally construct the orbit of $(x,y)$. This example should provide the intuition for the algorithm given in the next subsection. We start from $(x,y)$ and want to determine which elements $(X,y)$ are 1-adjacent to it; that is, to find the solutions to $S(X,y)=S(x,y)$ with $X\not = x$. We have \[ S(X,y)-S(x,y)=\frac {(X-x) \left(x^2X^2-y(1+xy)X-xy\right)}{x^2 y X^2}. \] Hence the two elements that are 1-adjacent to $(x,y)$ are $(x_1,y)$ and $(x_2,y)$, where $x_1$ and $x_2$ are the two roots of $P_1(X,x,y):=x^2X^2-y(1+xy)X-xy$ {(when solved for $X$)}. The $x_i$'s can be taken as Laurent series in $x$ with coefficients in $\mathbb{Q}[y,\bar y]$: \[ x_1= y{x}^{-2}+{y}^{2}{x}^{-1} -x_2\qquad \hbox{and} \qquad x_2=-x+y{x}^{2}-{y}^{2}{x}^{3}+\left( {{y}^{3}+\bar y}\right){x}^{4}+ O({x}^5). \] Similarly, we find that the two elements that are 2-adjacent to $(x,y)$ are $(x,y_1)$ and $(x,y_2)$, where $y_1$ and $y_2$ are the roots of $Q_1(Y,x,y):=xyY^2+y(1+xy)Y-x^2$. But $Q_1(Y,x,y)$ {coincides with $P_1(1/Y,x,y)$} (up to a factor of $Y^2$), thus we take $y_1=\bar x_1 := 1/x_1$ and $y_2=\bar x_2 := 1/x_2$. We have now obtained five elements in the orbit of $(x,y)$ (one can follow the construction on Figure~\ref{fig:orbit-quadrangulations}). Now we want to find the elements $(x_1,Y)$ that are 2-adjacent to $(x_1,y)$. In principle, we should thus solve $S(x_1,Y)=S(x_1,y)(=S(x,y))$, but we prefer not to handle equations with algebraic coefficients (like $x_1$). So instead, we consider the \emm {polynomial} system, \[ P_1(X,x,y)=0, \qquad S(X,Y)=S(x,y), \] whose solutions $(X,Y)$ are the pairs $(x_i,Y)$ belonging to the orbit.
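These eliminations are routine to verify with a computer algebra system. The short SymPy sketch below is an illustration of ours (the variable names are ad hoc and it is not part of the original derivation); it checks the displayed factorisation of $S(X,y)-S(x,y)$, the relation between $Q_1$ and $P_1$, and the fact that the resultant eliminating $X$ from the above system vanishes at $Y=y$ and $Y=\bar x$:

```python
import sympy as sp

x, y, X, Y = sp.symbols('x y X Y')

# Step polynomial of Example D: steps {(-2,0), (-1,1), (0,2), (1,-1)}
S = lambda u, v: u**-2 + v/u + v**2 + u/v

# The polynomials P1 and Q1 introduced in the text
P1 = x**2*X**2 - y*(1 + x*y)*X - x*y
Q1 = x*y*Y**2 + y*(1 + x*y)*Y - x**2

# The displayed factorisation of S(X,y) - S(x,y)
assert sp.simplify(S(X, y) - S(x, y) - (X - x)*P1/(x**2*y*X**2)) == 0

# Q1(Y) coincides with P1(1/Y), up to the factor Y^2 (and a sign)
assert sp.expand(Y**2*P1.subs(X, 1/Y) + Q1) == 0

# Eliminate X between P1(X) = 0 and S(X,Y) = S(x,y): the resultant of P1
# with the numerator of S(x,y) - S(X,Y) vanishes at Y = y and at Y = 1/x
num = sp.expand((S(x, y) - S(X, Y)) * x**2*y * X**2*Y)
res = sp.resultant(num, P1, X)
assert sp.simplify(res.subs(Y, y)) == 0
assert sp.simplify(res.subs(Y, 1/x)) == 0
```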
Upon eliminating $X$ between these two equations, we find that $Y$ is necessarily either $y$, or $\bar x$, or one of the series $\bar x_i$. Upon checking that $S(x_1, \bar x_1)\not = S(x,y)$, we conclude that the two elements that are 2-adjacent to $(x_1,y)$ are $(x_1,\bar x)$ and $(x_1, \bar x_2)$. Symmetrically, $(x_2,\bar x)$ and $(x_2, \bar x_1)$ are 2-adjacent to $(x_2,y)$. We now have 9 elements in the orbit. In order to find the elements that are 1-adjacent to $(x, \bar x_i)$, for $i=1,2$, we study similarly the {polynomial} system \[ Q_1(Y,x,y)=0, \qquad S(X,Y)=S(x,y) \] and conclude that $(\bar y, \bar x_1)$ and $(x_2, \bar x_1)$ are 1-adjacent to $(x, \bar x_1)$ while $(\bar y, \bar x_2)$ and $(x_1, \bar x_2)$ are 1-adjacent to $(x, \bar x_2)$. We have reached 11 elements. At this stage, we still need one element that would be 1-adjacent to $(x_1, \bar x)$ and $(x_2, \bar x)$, and one element that would be 2-adjacent to $(\bar y, \bar x_1)$ and $(\bar y, \bar x_2)$. We address the first problem by solving $S(X,\bar x)=S(x,y)$, and find that $(\bar y,\bar x)$ in fact solves both problems. The orbit is now complete, and contains 12 elements. \begin{figure}[htb] \centering \scalebox{0.9}{\input{orbit-quadrangulations.pdf_t}} \caption{The orbit of $\mathcal S=\{\bar 20,\bar 11, 02,1\bar 1\}$. The values $x_1$ and $x_2$ are the roots of $P_1(X,x,y)=x^2X^2-y(1+xy)X-xy$. The values $\bar x_1$ and $\bar x_2$ are their reciprocals. The dashed (resp. {solid}) edges join 1-adjacent (resp.
2-adjacent) elements.} \label{fig:orbit-quadrangulations} \end{figure} \subsection{An algorithm that detects finite orbits (case $\boldsymbol{d=2}$)} \label{sec:algo} Given a model in dimension $d=2$, we now describe a (semi-)algorithm that stops if and only if the orbit is finite. This algorithm constructs incrementally two sets $\mathcal P$ and $\mathcal Q$ of irreducible polynomials in $X$ and $Y$, respectively, with coefficients in $\mathbb{Q}(x,y)$. It starts with $\mathcal P=\{X-x\}$ and $\mathcal Q=\{Y-y\}$, and both polynomials are declared non-treated. At each stage, the algorithm chooses a non-treated polynomial in $\mathcal P \cup \mathcal Q$, say $Q\in \mathcal Q$, and constructs a new polynomial $P'(X,x,y)$, which is the resultant in $Y$ of $ Q(Y,x,y) $ and { the numerator of the Laurent polynomial $S(x,y)-S(X,Y)$ (namely $(xX)^{m_1}(yY)^{m_2}(S(x,y)-S(X,Y))$, where $-m_1$ is the smallest move in the $x$-direction and similarly for $m_2$)}. Then the algorithm adds to $\mathcal P$ every irreducible factor of $P'$, and the {new} factors are declared non-treated. The algorithm treats symmetrically polynomials of $\mathcal P$. These stages are repeated as long as there are non-treated polynomials. We recall that $\overline{\mathbb{K}}$ denotes an algebraic closure of $\mathbb{K}:=\mathbb{Q}(x,y)$.
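As an illustration, the stages above can be prototyped in a few lines of SymPy. The sketch below is ours (the function name and the cap on the number of stages are ad hoc, and factorisation is performed over $\mathbb{Z}[x,y]$ rather than $\mathbb{Q}(x,y)$); run on the step set of Example D, it stops with three polynomials in each set:

```python
import sympy as sp

x, y, X, Y = sp.symbols('x y X Y')

def orbit_polynomials(steps, max_stages=50):
    """Sketch of the semi-algorithm: grow the sets P (polynomials in X)
    and Q (polynomials in Y) by taking resultants with the numerator of
    S(x,y) - S(X,Y) and keeping the irreducible factors of positive degree.
    Returns (P, Q) once no non-treated polynomial remains (finite orbit),
    or None if the cap max_stages is reached."""
    m1 = max(0, -min(i for i, j in steps))
    m2 = max(0, -min(j for i, j in steps))
    S = lambda u, v: sum(u**i * v**j for i, j in steps)
    num = sp.expand((S(x, y) - S(X, Y)) * (x*X)**m1 * (y*Y)**m2)
    P, Q = {X - x}, {Y - y}
    non_treated = [(X, X - x), (Y, Y - y)]
    for _ in range(max_stages):
        if not non_treated:
            return P, Q
        var, poly = non_treated.pop()
        # treating a polynomial in `var` produces factors in the other variable
        other, target = (Y, Q) if var == X else (X, P)
        res = sp.resultant(poly, num, var)
        for fac, _mult in sp.factor_list(res, other)[1]:
            if sp.degree(fac, other) > 0 and fac not in target:
                target.add(fac)
                non_treated.append((other, fac))
    return None

# Example D, steps {(-2,0), (-1,1), (0,2), (1,-1)}: the orbit is finite
PQ = orbit_polynomials([(-2, 0), (-1, 1), (0, 2), (1, -1)])
```

The degree test discards content factors involving $x$ and $y$ alone; factoring over $\mathbb{Z}[x,y]$ is harmless here, since by Gauss's lemma a factor of positive degree in the main variable that is irreducible over $\mathbb{Z}[x,y]$ remains irreducible over $\mathbb{Q}(x,y)$.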
\begin{Proposition}\label{prop:algo} The following two properties hold at each stage of the algorithm: \begin{enumerate} \item[$(i)$] the set $\mathcal P$ contains no element of $\mathbb{Q}[X]$; moreover, for $P \in \mathcal P$ and $x' \in \overline{\mathbb{K}}$ such that $P(x',x,y)=0$, there exists $Q\in \mathcal Q$ and $y'\in \overline\GK$ such that $Q(y',x,y)=0$ and $(x',y')$ is in the orbit of $(x,y)$, \item [$(ii)$] symmetrically, the set $\mathcal Q$ contains no element of $\mathbb{Q}[Y]$; moreover, for $Q \in \mathcal Q$ and $y' \in \overline\GK$ such that $Q(y',x,y)=0$, there exists $P\in \mathcal P$ and $x'\in \overline\GK$ such that $P(x',x,y)=0$ and $(x',y')$ is in the orbit of $(x,y)$. \end{enumerate} The algorithm stops if and only if the orbit of $(x,y)$ is finite. In this case, the converse of $(i)$ and $(ii)$ holds, that is: \begin{enumerate} \item [$(iii)$] for every $(x',y')$ in the orbit of $(x,y)$, the minimal polynomials of $x'$ and $y'$ over $\mathbb{Q}(x,y)$ belong respectively to $\mathcal P$ and $\mathcal Q$. \end{enumerate} \end{Proposition} Note that the sets $\mathcal P$ and $\mathcal Q$ do not determine completely the orbit: one still has to decide, for each $x'$ that solves a polynomial of $\mathcal P$, which $y'$ (taken from the roots of the polynomials of~$\mathcal Q$) go with it in the orbit, as was done in Example D above. \begin{proof} Let us first prove $(i)$ and $(ii)$, by induction on the number of stages performed by the algorithm. Both properties obviously hold at the initialization step, where $\mathcal P= \{X-x\}$ and $\mathcal Q=\{Y-y\}$. Now assume that they hold at some stage, and that we treat a polynomial $Q\in \mathcal Q$ as described at the beginning of Section~\ref{sec:algo}. Let us prove that the extended collections of polynomials still satisfy $(i)$ and $(ii)$. Clearly $(ii)$ still holds, since we have not extended $\mathcal Q$. So let us check~$(i)$.
It suffices to check it for the factors of $P'$ that we have added to $\mathcal P$. So let us take one of these factors, and let $x'$ be one of its roots. Then $x'$ is a root of $P'(X,x,y)$. The properties of the resultant imply that there exists $y'$ such that $Q(y',x,y)=0$ and \begin{equation}\label{S-equal} x'^{m_1} y'^{m_2} \left(S(x,y)-S(x',y')\right)=0, \end{equation} where $-m_1$ (resp. $-m_2$) is the smallest move along the $x$-axis (resp. the $y$-axis). By Property~$(ii)$ applied to $Q$ and $y'$, there exists an element $x'' \in \overline{\mathbb{K}}$ such that $(x'',y')$ is in the orbit. By Lemma~\ref{lem:ind}, $x''$ and $y'$ are algebraically independent over $\mathbb{Q}$, and in particular $y'$ is not an algebraic {number}. If $x'=0$, then~\eqref{S-equal} tells us that the coefficient of $x^{-m_1}$ in $S(x,y)$, evaluated at $y=y'$, vanishes, which would make $y'$ algebraic, a contradiction. Thus $x'\not =0$, {$y'\not = 0$,} and $S(x,y)=S(x',y')$. Hence $S(x',y')=S(x'',y')$, which shows that $(x',y')$ is adjacent to $(x'',y')$, and thus is in the orbit of $(x,y)$. In particular, $x'$ and $y'$ are algebraically independent {over $\mathbb{Q}$}, thus $x'\not \in \overline{\mathbb{Q}}$, which means that its minimal polynomial $P$ is not in $\mathbb{Q}[X]$. \medskip Now assume that the algorithm stops; that is, that there are no more non-treated polynomials. Let us prove $(iii)$ by induction on the length $\ell$ of $(x',y')$. If $\ell=0$, then $(x',y')=(x,y)$ and we have precisely initialized $\mathcal P$ and $\mathcal Q$ with the minimal polynomials of $x$ and $y$. Now assume that $(iii)$ holds for length $\ell-1$, and that $(x',y')$ has length $\ell$. Without loss of generality, we can assume that $(x',y')\approx (x'',y')$, where $(x'',y')$ has length $\ell-1$. By the induction hypothesis, the minimal polynomial $Q$ of $y'$ belongs to $\mathcal Q$, so we only need to consider~$x'$.
The polynomials (in $Y$) $Q(Y,x,y)$ and $X^{m_1} Y^{m_2}\left(S(x,y)-S(X,Y)\right)$ have a common root (namely $y'$) when $X=x'$. Hence their resultant $P'(X,x,y)$ must have $x'$ as a root. This implies that one of the factors of $P'$ is the minimal polynomial of $x'$, and this factor is added to $\mathcal P$ when the algorithm treats the polynomial $Q$ (unless it was already in $\mathcal P$). We have thus established $(iii)$, assuming the algorithm stops. In this case $\mathcal P$ and $\mathcal Q$ are finite, so $(iii)$ implies that the orbit is finite. Conversely, assume that the orbit is finite. By $(i)$, every $P\in\mathcal P$ must be the minimal polynomial of some $x' \in \overline\GK$ such that $(x',y')$ is in the orbit for some $y'$. Hence $\mathcal P$ cannot grow indefinitely. A similar argument applies to $\mathcal Q$, and the algorithm has to stop. \end{proof} \subsection{Infinite orbits and the excursion exponent} \label{sec:infinite-orbit} We now describe an approach, of wide applicability, to prove that a model has an infinite orbit. It generalizes a fixed point argument applied to quadrant walks with small steps in~\cite[Thm.~3]{BoMi10} {(see also~\cite{Du} for an application to 3D walks with small steps)}. It also constructs a group of transformations which generates part of the orbit of $\Bx$. In the 2-dimensional case, it establishes a connection with the asymptotic proof of non-D-finiteness developed in~\cite{BoRaSa14}. One outcome will be the following convenient criterion for 2-dimensional models. \begin{Theorem} \label{thm:theta} Let $\mathcal S\subset \mathbb{Z}^2$ be a {step set} that is not contained in a half-plane, and {contains an element of}~$\mathbb{N}^2$. Then the step polynomial $S(x,y)$ has a unique \emm critical point, $(a,b)$ in $\mathbb{R}_{>0}^2$ (that is, a solution of $S_x(a,b)=S_y(a,b)=0$), which satisfies $S_{xx}(a,b)>0$ and $S_{yy}(a,b)>0$.
Define \[ c= \frac{ S_{xy}(a,b)}{\sqrt{S_{xx}(a,b)S_{yy}(a,b)}}. \] Then $c\in [-1,1]$ can be written as $\cos \theta$. If $\theta$ is not a rational multiple of $\pi$, then the orbit of $\mathcal S$ is infinite, and the series $Q(x,y;t)$ is not D-finite. \end{Theorem} Note that this result is algorithmic: the quantities $a,b, c$ are algebraic over $\mathbb{Q}$ and one can compute their minimal polynomials. Saying that $\theta$ is a rational multiple of $\pi$ amounts to saying that the solutions of $z+1/z=2c$ are roots of unity, so that their minimal polynomials are cyclotomic. This can be checked algorithmically. In Section~\ref{sec:m21} we apply this theorem systematically to the 13 110 models having steps in $\{-2, -1, 0, 1\}^2$ and at least one large step. Combined with the algorithm that detects finite orbits, it determines the size of the orbit for all but 16 models. (These 16 models turn out to have an infinite orbit, see Section~\ref{sec:embarassing}). The above theorem also shows that the calculations performed in~\cite{BoMi10} to prove that 51 small step models have an infinite group are equivalent to those performed in~\cite{BoRaSa14} to prove that these~51 models have a non-D-finite generating function. \subsubsection{A group acting on the orbit} \label{sec:group} We begin with the part of the above theorem that deals with the size of the orbit. In fact, we have a more general result that holds for models in $d$ dimensions. So let $\mathcal S \subset \mathbb{Z}^d$, and assume that there exists a point $\mathbf{a}:=(a_1,\ldots, a_d)$ such that $S_{x_1}(\mathbf{a}) = \partial S/\partial {x_1}(a_1, \ldots, a_d)=0$. {If $I(X,\Bx)$ denotes the Laurent polynomial \[ I(X,\Bx) = \frac{S(X,x_2,\dots,x_d)-S(x_1,\dots,x_d)}{X-x_1} \] (after {normalizing the rational function}) then $I(a_1,\mathbf{a})= S_{x_1}(\mathbf{a})=0$.
Assume now that $S_{x_1 x_1}(\mathbf{a})\not =0$, so that $I_X(a_1,\mathbf{a})=S_{x_1x_1}(\mathbf{a})/2 \neq 0$.} By the implicit function theorem (in its analytic form), there exists a unique analytic function $\mathcal X_1(x_1, \ldots, x_d)$ defined in a neighborhood of $\mathbf{a}$, satisfying $\mathcal X_1(\mathbf{a})={a_1}$ and \begin{equation}\label{X1-char} I(\mathcal X_1(\Bx), \Bx)=\frac{S(\mathcal X_1(\Bx), x_2, \ldots, x_d)-S(\Bx)}{\mathcal X_1(\Bx)-x_1}=0. \end{equation} The expansion of $\mathcal X_1(\Bx)$ around $\mathbf{a}$ can be computed inductively. Writing $\Bx=\mathbf{a}+\Bu$, we have \begin{equation}\label{X1-exp} \mathcal X_1(\Bx)= a_1 - u_1 - \frac 2 {S_{x_1x_1}(\mathbf{a})} \sum_{i=2}^d S_{x_1x_i}(\mathbf{a}) u_i + \cdots, \end{equation} the missing terms being of degree at least 2 in the $u_i$'s. We define the transformation $\Phi_1$ by $\Phi_1(\Bx)= (\mathcal X_1(\Bx), x_2, \ldots, x_d)$. Clearly, {$\Phi_1(\Bx)$ is 1-adjacent to $\Bx$} and thus lies in the orbit of $\Bx$ (which we construct in an algebraic closure of $\mathbb{Q}(\Bx)$ containing power series in the $u_i$'s). {Since $\Phi_1(\mathbf{a}) = \mathbf{a}$}, we can iterate $\Phi_1$. In particular, \[ \Phi_1\circ \Phi_1(\Bx)= ( \mathcal X_1(\mathcal X_1(\Bx), x_2, \ldots,x_d), x_2, \ldots, x_d) \] satisfies \[ S( \Phi_1\circ \Phi_1(\Bx))= S(\Phi_1(\Bx))=S(\Bx), \] by~\eqref{X1-char}. Hence either $\Phi_1\circ \Phi_1$ is the identity, or \[ I(\Phi_1\circ \Phi_1(\Bx)) = \frac{S(\Phi_1\circ \Phi_1(\Bx))-S(\Bx)}{\mathcal X_1(\mathcal X_1(\Bx), x_2, \ldots,x_d)-x_1}=0, \] which means that the function $\tilde \mathcal X_1:\Bx\mapsto \mathcal X_1(\mathcal X_1(\Bx), x_2, \ldots,x_d)$ satisfies the same conditions as~$\mathcal X_1$.
By uniqueness of $\mathcal X_1$, this would imply that $\tilde \mathcal X_1=\mathcal X_1$: but this is impossible as $\mathcal X_1(\Bx)$ has linear part $a_1-u_1+ \cdots$ while $\tilde \mathcal X_1$ has linear part $a_1+u_1+ \cdots$ (by~\eqref{X1-exp}). Hence $\Phi_1$ is an involution. Assume now that $\mathbf{a}$ is a critical point of $S$, that is, $S_{x_i}(\mathbf{a})=0$ for $i=1, \ldots, d$. {Assume moreover that $S_{x_ix_i}(\mathbf{a})\not = 0$ for all $i$.} We then define similarly the transformations $\Phi_i$ for $i=1, \ldots, d$. Still writing $\Bx=\mathbf{a}+\Bu$, each $\Phi_i$ leaves the constant term of $\Bx$ unchanged, so we can compose them and they form a group $G$. For any $\Theta$ in this group, $\Theta(\Bx)$ lies in the orbit of $\Bx$. If the orbit of $\Bx$ is finite, $G$ is finite as well, and every $\Theta\in G$ has finite order. The expansion of $\Theta$ around $\mathbf{a}$ reads: \[ \Theta(\mathbf{a}+\Bu)= \mathbf{a}+ \Bu\, J(\mathbf{a}) + \hbox{quadratic terms in the } u_i, \] hence the Jacobian matrix $J(\mathbf{a})$ must have finite order. This means that its eigenvalues are roots of unity, which, once again, can be checked algorithmically. We now restrict the discussion to the 2-dimensional case, in order to lighten notation. We denote $\Phi:=\Phi_1$ and $\Psi:=\Phi_2$, $\Bx=(x,y)$, $\mathbf{a}=(a,b)$ and $\Bu=(u,v)$. For $\Theta:=\Psi\circ \Phi$, we have \[ J(a,b):=\left( \begin{array}{cc} -1& -\eta \\ \nu & \eta \nu-1 \end{array} \right)\] where \[ \eta= \frac{2 S_{xy}(a,b)}{S_{xx}(a,b)} \qquad \hbox{and } \qquad \nu= \frac{2 S_{xy}(a,b)}{S_{yy}(a,b)}. \] The eigenvalues of $J$ are the roots of \[ \lambda^2-(\eta\nu-2)\lambda+1 \] and, as the orbit is finite, they must equal $e^{\pm 2i \theta}$ for $\theta$ a rational multiple of $\pi$. That is, \[ \lambda^2-(\eta\nu-2)\lambda+1= (\lambda-e^{2i\theta})(\lambda-e^{-2i\theta}). \] Extracting the coefficient of $\lambda$ gives the following proposition. 
\begin{Proposition}\label{prop:ab} Consider a two-dimensional model $\mathcal S$, and a critical point $(a,b)$ of $S(x,y)$ such that $S_{xx}(a,b)S_{yy}(a,b) \not = 0$. Then one can define involutions $\Phi$ and $\Psi$ as described above. If the orbit is finite, then $\Theta:=\Psi\circ \Phi$ has finite order. In particular, there exists a rational multiple of~$\pi$, denoted $\theta$, such that \[ \frac{ S_{xy}(a,b)^2}{S_{xx}(a,b)S_{yy}(a,b)} =\cos^2 \theta . \] \end{Proposition} We can now prove the part of Theorem~\ref{thm:theta} that deals with the orbit size. Since $\mathcal S$ is not contained in a half-plane, there exists a unique \emm positive, critical point $(a,b)$ (an argument is given in the proof of~\cite[Thm.~4]{BoRaSa14}). The derivatives $S_{xx}$ and $S_{yy}$ are positive at this point (because every monomial $x^i y^j$ gives a non-negative contribution, and one of them at least gives a positive contribution), and thus the above proposition applies. $\hfill{\vrule height 3pt width 5pt depth 2pt}$ \subsubsection{The excursion exponent} We will now show that in the 2-dimensional case, the above criterion is closely related to an asymptotic result that has been used as a criterion for the non-D-finiteness of $Q(x,y;t)$ in~\cite{BoRaSa14}. This result originally applies to \emm strongly aperiodic models, only, and it will take us a bit of work to obtain a version that is valid for periodic models as well. Given a model $\mathcal S$, we denote by $\Lambda$ the lattice of $\mathbb{Z}^2$ spanned by its steps. Then $\mathcal S$ is \emm strongly aperiodic, if for any point $x\in\Lambda$, the lattice $\Lambda_x$ spanned by the points $x+s$ for $s\in \mathcal S$, which is clearly a sublattice of $\Lambda$, coincides with $\Lambda$.
For instance, Kreweras' model $\{\nearrow, \leftarrow, \downarrow\}$ is \emm not, strongly aperiodic: one has $\Lambda=\mathbb{Z}^2$, but for $x=(1,0)$, the lattice $\Lambda_x$ only contains points $(i,j)$ such that $i+j$ is a multiple of $3$. Given a model $\mathcal S$, and a point $(i,j)$ in $\mathbb{Z}^2$, we denote by $w(i,j;n)$ the number of $n$-step walks going from $(0,0)$ to $(i,j)$ consisting of steps taken in $\mathcal S$ \emm without the quadrant condition,. We call any walk starting and ending at the same point an \emm excursion,. \begin{Proposition}\label{prop:lattice} Let $\mathcal S\subset \mathbb{Z}^2$ be a model that is not contained in a half-plane, and denote by $\Lambda$ the lattice of $\mathbb{Z}^2$ generated by $\mathcal S$. Then there exists an integer $p$, called the \emm period, of $\mathcal S$, such that for any $(i,j)\in \Lambda$, there exists $r\in \llbracket0, p-1\rrbracket$ with $ w(i,j;n) =0$ if $n \not \equiv r \!\mod p$ and $w(i,j;n) >0$ if $n=mp+r$ and $m$ is large enough. The model $\mathcal S$ is strongly aperiodic if and only if $p=1$. \end{Proposition} \begin{proof} Several ingredients of the proof are borrowed from Spitzer~\cite[Sec.~I.5]{spitzer}, who deals with recurrent random walks and only considers the case $\Lambda=\mathbb{Z}^2$. The fact that all points {$(i,j)$} of $\Lambda$ can be reached {from $(0,0)$} is closely related to Farkas' Lemma~\cite[Sec.~7.3]{schrijver}. Let $\mathcal N= \{n \ge 0: w(0,0;n)\not = 0\}$. Since one can concatenate two walks starting and ending at the origin, $\mathcal N$ is an additive semi-group of $\mathbb{N}$. Our first objective is to prove that it is not reduced to $\{0\}$, that is, that there exist non-empty excursions. Let $s$ be a non-zero vector of $\mathcal S$.
Since $\mathcal S$ is not contained in a half-plane, there exists another non-zero vector of $\mathcal S$, say $s'$, such that the wedge formed by the pair $s,s'$ has angle $\phi\in (0, \pi)$. Let us choose $s'$ so as to maximize $\phi$ in this interval (Figure~\ref{fig:wedge}). Since $s$ and $s'$ form a basis of~$\mathbb{R}^2$, any other vector $s''$ of $\mathcal S$ can be written as $s''=\alpha s+\beta s'$ for a unique pair $(\alpha,\beta) \in \mathbb{Q}^2$. Since $\mathcal S$ is not contained in a half-plane, there must exist a vector $s''$ in~$\mathcal S$ such that $\alpha$ is negative. By maximality of $\phi$, this vector is such that $\beta \le 0$. Writing $\alpha=-a/d$ and $\beta=-b/d$ with $a, d$ positive integers and $b$ a non-negative integer, we conclude that $as+bs'+ds''=0$, which shows that the walk starting at the origin and formed of $a$ copies of $s$, $b$ copies of $s'$ and $d$ copies of $s''$ ends at the origin as well. Thus there exist non-empty excursions. Moreover, we have \[ -s= (a-1)s+bs'+ds''. \] Since $a$ is a positive integer, {and $s$ is an arbitrary element of $\mathcal S$,} this proves that the set of endpoints of walks starting at the origin is not only a semi-group of $\mathbb{Z}^2$ (again, by a concatenation argument), but in fact the entire lattice~$\Lambda$. \begin{figure} \centering \scalebox{0.9}{\input{wedge.pdf_t}} \caption{On the existence of excursions.} \label{fig:wedge} \end{figure} We have established that $\mathcal N\not = \{0\}$. Let $p$ be the greatest common divisor of the elements of~$\mathcal N$. The structure of semi-groups of $\mathbb{N}$ is well understood: $\mathcal N \subset p \mathbb{N}$, and $pm \in \mathcal N$ for any large enough $m$. We have thus proved the first statement of the proposition for $(i,j)=(0,0)$, {with $r=0$}. Now let $(i,j)\in \Lambda$.
We have proved above that there exists a walk going from $(0,0)$ to $(i,j)$. Assume that there are two such walks $w$ and $w'$, and choose a walk $w''$ from $(i,j)$ to $(0,0)$. Then both $ww''$ and $w'w''$ are excursions, hence they must have length $0$ modulo~$p$. Consequently, $w$ and $w'$ must have the same length modulo $p$, say $r$. Finally, by concatenating a large excursion to a walk ending at $(i,j)$, we see that for $m$ large enough, there is a walk of length $pm+r$ from the origin to $(i,j)$. The equivalence between strong aperiodicity and $p=1$ can be proved by mimicking the corresponding part of the proof of Proposition P1 in~\cite[Sec.~I.5]{spitzer}. \end{proof} In the following theorem, we assume that each step $s$ of $\mathcal S$ is weighted by a positive weight~$\omega_s$. This means that the ``number'' $q(i,j;n)$ is actually the sum of the weights of all quadrant walks from $(0,0)$ to $(i,j)$, the weight of a walk being the product of the weights of its steps. In this context, the step polynomial is \[ S(x,y)=\sum_{s=(s_1, s_2)\in \mathcal S} \omega_s x^{s_1} y^{s_2}. \] \begin{Definition} Given a model $\mathcal S\subset \mathbb{Z}^2$, a point $(i,j)\in \mathbb{N}^2$ is \emm reachable from infinity, if there exists a quadrant walk that starts from a point $(k, \ell) \in (i,j)+ \mathbb{Z}_{>0}^2$ and ends at $(i,j)$. \end{Definition} Note that in this case, $(k, \ell)$ itself is reachable from infinity. Moreover, upon concatenating several copies of the walk, we can find a starting point $(k', \ell')$ with arbitrarily large coordinates, and a quadrant walk from this point to $(i,j)$. Finally, Proposition~\ref{prop:lattice} implies that if $\mathcal S$ is not contained in a half-plane, then any point with large enough coordinates is reachable from infinity. We can now complete the asymptotic result of Denisov and Wachtel~\cite{denisov-wachtel} with a statement that holds in the periodic case.
\begin{Theorem}\label{thm:exponent} Let $\mathcal S\subset \mathbb{Z}^2$ be a model that is not contained in a half-plane and {contains an element of}~$\mathbb{N}^2$. Then the step polynomial $S(x,y)$ has a unique critical point $(a,b)$ in $\mathbb{R}_{>0}^2$, which satisfies $S_{xx}(a,b)>0$ and $S_{yy}(a,b)>0$. Define \[ \mu= S(a,b), \qquad \qquad c= \frac{ S_{xy}(a,b)}{\sqrt{S_{xx}(a,b)S_{yy}(a,b)}}\qquad \hbox{and } \qquad \alpha=-1 -\pi/\arccos(-c).\] Assume first that $\mathcal S$ is strongly aperiodic. Then if $(i,j)$ is reachable from infinity, there exists a positive constant $\kappa $ such that, as $n$ goes to infinity, \[ q(i,j;n)\sim \kappa\, \mu^n n^{\alpha}. \] If $\mathcal S$ is not strongly aperiodic and has period $p>1$, define \[ \overline{\mathcal S}= \{ s_1+\cdots + s_p, \ (s_1, \ldots, s_p)\in \mathcal S ^p \}, \] and let $\overline \Lambda$ be the lattice spanned by the vectors of $\overline{\mathcal S}$. Then if $(i,j)\in \overline \Lambda$ is reachable from infinity {for $\mathcal S$}, there exist positive constants $\kappa _1$ and $\kappa _2$ such that for $n=pm$ and $m$ large enough, \begin{equation}\label{bounds} \kappa _1 \, \mu^n n^{\alpha} \le q({i,j};n)\le \kappa_2\, \mu^n n^{\alpha}. \end{equation} We call $\alpha$ the \emm excursion exponent,. \end{Theorem} \noindent{\bf Remarks}\\ {\bf 1.} It is very likely that an asymptotic estimate holds as well in the periodic case (see~\cite[p.~3/4]{duraj-wachtel}), but the proof does not seem to be written down, and we will content ourselves with the above bounds. \\ {\bf 2.} The reachability condition, which is somewhat implicit in~\cite{denisov-wachtel}, is important. Consider for instance the (strongly aperiodic) model $\mathcal S=\{10,01, 1 \bone, \bone 1, \bar 3 2, 2 \bar 3\}$. Then for $n>0$, \[ q(0,0;n)= 0 \quad \hbox{and} \quad q(1,0;n)=1, \] while \[ q(1,1;n) \sim \kappa \, 6^n n^\alpha \] with $\alpha =-1-\pi/\arccos(7/8)$.
The reason for these different asymptotic behaviours is that the points $(0,0)$ and $(1,0)$ are not reachable from infinity, while $(1,1)$ is. Similarly, any asymptotic result for quadrant walks starting from a given point $(k,\ell)$ should require that there exists a quadrant walk that starts from $(k, \ell)$ and ends in $(k, \ell) +\mathbb{Z}_{>0}^2$ (we say that $(k, \ell)$ \emm reaches infinity,). Given that we have assumed that $\mathcal S$ contains a point of $\mathbb{N}^2$ and is not included in a half-plane, this condition holds here for any $(k, \ell)$. \begin{proof} [{Proof of Theorem~\ref{thm:exponent}}] In the aperiodic case, the proof can be copied verbatim from the proof of Theorem~4 in~\cite{BoRaSa14}. One considers an underlying random walk and normalizes it into a walk whose projections on the $x$- and $y$-axes are centered, reduced, and of covariance $0$. The key result is then a local limit theorem of Denisov and Wachtel that applies to such walks~\cite[Thm.~6]{denisov-wachtel} (note that one should assume in that theorem that $V(x)>0$ and $V'(y)>0$, which holds if $x$ reaches infinity and $y$ is reached from infinity). We thus focus on the periodic case. The idea is to consider $p$ consecutive steps of a walk as a single generalized step to obtain a strongly aperiodic walk. More precisely, let us define $\overline{\mathcal S}$ as above, and define the weight of a step $s$ of $\overline{\mathcal S}$ to be \[ \bar \omega_s= \sum_{\atopfix{(s_1, \ldots, s_p)\in \mathcal S^p}{ s_1+\cdots + s_p=s}} \omega_{s_1} \cdots \omega_{s_p}. \] We denote with bars all quantities that deal with the model $\overline{\mathcal S}$. For instance, $\bar w(i,j;n)$ is the (weighted) number of walks going from $(0,0)$ to $(i,j)$ in $n$ steps taken from $\overline{\mathcal S}$.
By the definition of~$p$, we have $w(0,0;pn)=\bar w(0,0;n) >0$ for $n$ large enough, hence $\overline{\mathcal S}$ is strongly aperiodic (on the lattice $\overline \Lambda$ that it generates). Observe that \[ \bar S(x,y)=S(x,y)^p, \qquad (\bar a, \bar b)=(a,b), \qquad \bar \mu= \mu^p, \qquad \bar c=c \qquad \hbox{and} \qquad {\bar \alpha= \alpha}. \] Note that if $(i,j) \in \overline \Lambda$ is reachable from infinity in the model $\mathcal S$, then it is also reachable from infinity in the model $\overline{\mathcal S}$. Since we will consider both models $\mathcal S$ and $\overline{\mathcal S}$ at the same time, we will often refer to a walk with steps in $\mathcal S$ as an \emm $\mathcal S$-walk,. \medskip \noindent {\bf Upper bound.} A quadrant walk from $(0,0)$ to $(i,j)$ consisting of $n=pm$ steps of $\mathcal S$ can be seen as a quadrant walk from $(0,0)$ to $(i,j)$ consisting of $m$ steps of $\overline{\mathcal S}$ (the converse is not true in general: for instance, taking a step $(1,0)$ in $\overline{\mathcal S}$ may correspond to a sequence $(-1, 0),( 2,0)$ of steps of $\mathcal S$ and involve crossing the $y$-axis). Hence \[ q({i,j};pm) \le \bar q ({i,j};m). \] Since $\overline{\mathcal S}$ is strongly aperiodic, and $(i,j)$ reachable from infinity in $\overline{\mathcal S}$, the right-hand side is asymptotic to $\kappa\, {(\mu^p)^m} m^\alpha$ for some positive $\kappa$, which gives the desired upper bound on $q({i,j};n)$. \medskip \noindent{\bf Lower bound.} Since $(0,0)$ reaches infinity, and $(i,j)$ is reachable from infinity, we can pick two quadrant walks $w_1$ and $w_2$ satisfying the following conditions: \begin{itemize} \item $w_1$ goes from $(0,0)$ to a point $x=(i_1,j_1)$, whose coordinates are larger than $pM$, where $M$ is the maximal norm of a step of $\mathcal S$.
Moreover, $w_1$ has length $pm_1$; \item $w_2$ goes from some point $y=(i_2,j_2)$ to $(i,j)$, and the coordinates of $y$ are large enough for $y-x$ to be reachable from infinity in the model $\overline{\mathcal S}$ {(in particular, $i_2\ge i_1$ and $j_2\ge j_1$)}. Moreover, $w_2$ has length $pm_2$. \end{itemize} Now take a quadrant walk $w$ from $(0,0)$ to $y-x$ consisting of $m$ elements of $\overline{\mathcal S}$: if we replace every step $\sigma=s_1+\cdots +s_p$ (with each $s_k \in \mathcal S$) by the sequence $s_1, \ldots, s_p$, the resulting walk $\tilde w$ may exit the quadrant. But it will remain in the translated quadrant $[-pM, \infty)^2$. Thus, if we translate $\tilde w$ so that it starts at $x$, it will remain in the quadrant $\mathbb{N}^2$, and end at $y$. Adding $w_1$ as a prefix and $w_2$ as a suffix gives a quadrant walk of length $n=p(m_1+m_2+m)$ ending at $(i,j)$. Consequently, \[ q(i,j;n) \ge c\, \bar q (i_2-i_1, j_2-j_1;m) \] for some positive constant $c$ that depends on the weights of $w_1$ and $w_2$. Since $\overline{\mathcal S}$ is strongly aperiodic, and $y-x$ is reachable from infinity in this model, the right-hand side is asymptotic to some $\kappa\, {(\mu^{p})^m} m^\alpha$, which gives the desired lower bound on $q(i,j;n)$. \end{proof} We can now conclude the proof of Theorem~\ref{thm:theta}. \begin{proof}[Proof of Theorem~\ref{thm:theta}] We have already established the part that deals with the orbit size, so we focus on the nature of the series $Q(x,y;t)$. We assign weight $\omega_s=1$ to every step of $\mathcal S$. Let {$(i,j) \in \overline \Lambda$, with $i$ and~$j$ large enough} for $(i,j)$ to be reachable from infinity. Then the bounds~\eqref{bounds} on $q(i,j;n)$ hold (whether the model is periodic or not) with $\alpha$ irrational. The generating function\ $\sum_{n} q(i,j;n) t^n$ is the coefficient of $x^i y^j$ in $Q(x,y;t)$, and it is D-finite if $Q(x,y;t)$ is D-finite.
In this case it must be a G-function~\cite[Sec.~2]{BoKa09}. But the properties of these functions are incompatible with the existence of such bounds~\cite[Thm.~2]{BoKa09}, and thus $Q(x,y;t)$ cannot be D-finite. Indeed, it follows from the Katz-Chudnovsky-Andr\'e theorem~\cite{Andre00,FiRi14} on the local structure of G-functions, combined with classical transfer theorems, that $q(i,j;n)$ needs to be asymptotically equivalent to a sum of terms of the form $\kappa \rho^n n^a (\log n)^b$ with only \emph{rational} exponents~$a$, and our exponent $\alpha$ must be one of these $a$'s. \end{proof} \subsubsection{Examples} \label{sec:ex} We now illustrate the above results with {five} examples. \smallskip \noindent{\bf Example D (continued): a model with rational exponent and finite orbit.} Let us take $S(x,y)= \bar x^2 +\bar x y+y^2+x\bar y$. The unique positive critical pair is $(a,b)= (3^{1/4}, 3^{-1/4})$. We have seen that the orbit of $\mathcal S$ is finite {(Figure~\ref{fig:orbit-quadrangulations})}, and indeed, \[c:= \frac{ S_{12}(a,b)}{\sqrt{S_{11}(a,b)S_{22}(a,b)}}= -\frac 1 2 = \cos \frac {2\pi} 3. \] {With the notation of Theorem~\ref{thm:theta}}, we have $\theta=2\pi/3$, and by Theorem~\ref{thm:exponent} the excursion exponent is $\alpha=-4$. The involutions $\Phi$ and $\Psi$ defined in Section~\ref{sec:group} satisfy \[ \Phi(a+u,b+v)= (a-u+\sqrt 3 v + \cdots, b+v)\qquad \Psi(a+u,b+v)= (a+u,b+u/\sqrt 3 -v + \cdots), \qquad \] so that \[ \Theta(a+u,b+v)= (a-u+\sqrt 3 v+\cdots ,b-u/\sqrt 3 +\cdots). \] The matrix $J$ is \[ J=\left( \begin{array}{cc} -1 & \sqrt 3\\ -1/\sqrt 3 & 0 \end{array}\right), \] its eigenvalues are $e^{\pm 2 i \pi/3}$, and $J^3$ is the identity matrix. In fact, it can be checked that $\Theta^3=\id$. This is reflected in Figure~\ref{fig:orbit-quadrangulations} by the existence of bicoloured hexagons.
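As a quick numerical sanity check of this example (not part of the argument; the closed-form partial derivatives below were computed by hand from $S(x,y)=\bar x^2+\bar x y+y^2+x\bar y$), the critical point, $c$ and $\alpha$ can be verified in a few lines of Python:

```python
import math

# Example D: S(x,y) = x^-2 + y/x + y^2 + x/y, with hand-computed partials
Sx  = lambda x, y: -2*x**-3 - y*x**-2 + 1/y
Sy  = lambda x, y: 1/x + 2*y - x*y**-2
Sxx = lambda x, y: 6*x**-4 + 2*y*x**-3
Syy = lambda x, y: 2 + 2*x*y**-3
Sxy = lambda x, y: -x**-2 - y**-2

a, b = 3**0.25, 3**-0.25                 # claimed positive critical point
assert abs(Sx(a, b)) < 1e-12 and abs(Sy(a, b)) < 1e-12

c = Sxy(a, b) / math.sqrt(Sxx(a, b) * Syy(a, b))
alpha = -1 - math.pi / math.acos(-c)     # excursion exponent of the theorem
print(round(c, 12), round(alpha, 12))    # agrees with c = -1/2, alpha = -4
```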
$\hfill{\Box}$ \medskip \noindent{\bf Example: a model with irrational exponent and infinite orbit.} Now take $S(x,y)=\bar x^2+y+x\bar y$. The unique positive critical pair is $(a,b)=(2^{2/5}, 2^{1/5})$. We have \[c:= \frac{ S_{12}(a,b)}{\sqrt{S_{11}(a,b)S_{22}(a,b)}}= -\frac 1 {\sqrt 6}. \] Let us prove that this is not the cosine of a rational multiple $\theta$ of $\pi$. With $z=e^{i\theta}$, this would mean that $z+1/z=-\sqrt{2/3}$, so that the minimal polynomial of $z$ (and $1/z$) would be $z^4+4z^2/3+1$. This is not a cyclotomic polynomial, hence $c$ is not of the requested form. We conclude from Theorem~\ref{thm:theta} that the orbit is infinite, and the series $Q(x,y;t)$ not D-finite. The excursion exponent is $\alpha=-1-\pi/\arccos(1/\sqrt6)\approx -3.73\ldots$, and it is an irrational number. The involutions $\Phi$ and $\Psi$ satisfy \[ \Phi(a+u,b+v)= (a-u+ 2^{6/5} v/3 + \cdots,b+v) \qquad \Psi(a+u,b+v)= (a+u,b+u/2^{1/5} -v + \cdots), \qquad \] so that \[ \Theta(a+u,b+v)= (a-u +2^{6/5} v/3+\cdots ,b-u/2^{1/5}-v/3 +\cdots). \] The matrix $J$ is \[ J=\left( \begin{array}{cc} -1 &2^{6/5} /3\\ -1/2^{1/5} & -1/3 \end{array}\right). \] Its eigenvalues are the roots of $\lambda^2+4\lambda/3+1$, and thus are not roots of unity. In particular, the group generated by $\Phi$ and $\Psi$ is infinite. $\hfill{\Box}$ \medskip {The same argument proves that the walks of Figure~\ref{fig:slope2} have an irrational excursion exponent $\alpha=-1-\pi/\arccos(1/\sqrt 5)$, and thus a non-D-finite generating function.} \medskip We will now consider {three} models that have a rational excursion exponent, but still an infinite orbit. We will prove this using the approach of Section~\ref{sec:group}, either by taking for $(a,b)$ the positive critical point and pushing further the expansion of $\Theta$, or by considering another critical point.
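The cyclotomy test used in the irrational-exponent example above (is $c$ the cosine of a rational multiple of $\pi$?) is easy to automate: $e^{i\theta}$ is a root of unity exactly when its minimal polynomial is cyclotomic, and only finitely many cyclotomic polynomials share a given degree. A sketch using the sympy library (our tooling choice, not used in the paper):

```python
from sympy import I, sqrt, symbols, minimal_polynomial, cyclotomic_poly, totient

z = symbols('z')
c = -1/sqrt(6)                    # value of c for S = x^-2 + y + x/y^-1... i.e. xbar^2 + y + x*ybar
w = c + I*sqrt(1 - c**2)          # w = e^{i*theta} with cos(theta) = c
p = minimal_polynomial(w, z)      # integer-primitive form of z^4 + 4 z^2/3 + 1

# w is a root of unity iff p is a cyclotomic polynomial Phi_n; since
# phi(n) >= sqrt(n/2), only finitely many n can give degree(p).
deg = p.as_poly(z).degree()
candidates = [n for n in range(1, 2 * deg * deg + 1) if totient(n) == deg]
is_root_of_unity = any(p == cyclotomic_poly(n, z) for n in candidates)
print(deg, is_root_of_unity)      # here: degree 4, and not a root of unity
```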
\medskip \noindent{\bf Example: a model with rational exponent but infinite orbit.} Take $S(x,y)=x+y+\bar x +\bar y+x\bar y^2+\bar x^2y$. This is model \#13 in Table~\ref{tab:embarassing} {(Section~\ref{sec:embarassing})}. The unique positive critical pair is $(a,b)=(\sqrt 2, \sqrt 2)$. We have \[c:= \frac{ S_{12}(a,b)}{\sqrt{S_{11}(a,b)S_{22}(a,b)}}= -\frac 1 2 = \cos \frac {2\pi} 3. \] {With the notation of Theorem~\ref{thm:theta}}, we have $\theta=2\pi/3$. The excursion exponent is $\alpha=-4$. If we start from the positive critical point $(a,b)=(\sqrt 2, \sqrt 2)$ to define the involutions $\Phi$ and $\Psi$, we find \[ \Phi(a+u,b+v)=( a-u+ v+\cdots,b+v) \qquad \Psi(a+u,b+v)= (a+u,b +u-v + \cdots), \qquad \] so that \[ \Theta(a+u,b+v)= (a-u+v +\cdots ,b-u+\cdots). \] The matrix $J$ is \[ J=\left( \begin{array}{cc} -1 &1\\ -1&0 \end{array}\right). \] Its eigenvalues are $e^{\pm2i\pi/3}$, and $J^3=\id$, so we cannot use the criterion of Theorem~\ref{thm:theta} to prove that the orbit is infinite. But let us push further the expansion of $\Theta$. We have \[ \Phi(a+u,b+v)=\left(a-u+v+\frac5 8\,a{u}^{2}-\frac 5 8\,a uv-\frac 1 8\,a{v} ^{2}-{\frac {25\,}{32}}{u}^{3}+{\frac {7\,}{8}}{u}^{2}v+\frac 1{16}\,u{v}^{2} +\frac 1{32}\,{v}^{3}+ \cdots, b+v\right), \] where the missing terms are of order 4 or more. A symmetric formula holds for $\Psi$. Hence \begin{multline*} \Theta(a+u,b+v) = \left( a-u+v+\frac5 8\,a{u}^{2}-\frac 5 8\,a uv-\frac 1 8\,a{v} ^{2}-{\frac {25\,}{32}}{u}^{3}+{\frac {7\,}{8}}{u}^{2}v+\frac 1{16}\,u{v}^{2} +\frac 1{32}\,{v}^{3}+ \cdots, \right. \\ \left. b -u+\frac 1 2\,a{u}^{2}+\frac 1 4\,auv-\frac 1 4\,a{v}^ {2}-\frac 1 2\,{u}^{3}-\frac 3 8\,{u}^{2}v+{\frac {7\,}{16}}{v}^{3}+\cdots \right). \end{multline*} We have already seen that $\Theta^3$ comes close to being the identity -- at least, it is the identity at first order.
But in fact, \[ \Theta^3(a+u,b+v) = \left(a+u + \frac 1 8 (u-2v)(u^2-uv+v^2)+\cdots, b+v {-} \frac 1 8 (v-2u)(u^2-uv+v^2)+\cdots \right), \] so that \[ \Theta^{3k}(a+u,b+v) = \left(a+u + \frac k 8 (u-2v)(u^2-uv+v^2)+\cdots, b+v {-} \frac k 8 (v-2u)(u^2-uv+v^2)+\cdots \right). \] Thus $\Theta$ has infinite order, and by Proposition~\ref{prop:ab} the orbit is infinite. The nature of $Q(x,y;t)$ remains unknown. An alternative way to prove infiniteness of the orbit for this model is to start from another critical point and use a first order argument rather than the above longer expansion. Let us take $(a,b)=(e^{5i\pi/6}, e^{i\pi/6})$. Then the involutions $\Phi$ and $\Psi$ satisfy \begin{multline*} {\Phi(a+u,b+v)= \left(a-u - \frac {2-6i \sqrt 3}7v+\cdots, b+v\right),} \\ {\Psi(a+u,b+v)=\left(a+u, b-v- \frac {2+6i\sqrt 3}7 u+\cdots\right), } \end{multline*} so that \[ {\Theta(a+u,b+v)=\left( a-u -\frac {2-6i\sqrt 3}7 v+\cdots, b +\frac {2+6i\sqrt 3}7 u+\frac 9 7 v+\cdots\right). }\] The characteristic polynomial of the corresponding matrix $J$ is {$\lambda^2-2\lambda/7+1$}, and its roots are not roots of unity. By Proposition~\ref{prop:ab}, the orbit is infinite. $\hfill{\Box}$ \medskip In the next example, the excursion exponent is again rational, and only the first method above (expanding $\Theta$ to higher order) works to prove infiniteness of the orbit. \medskip \noindent{\bf Example: one more model with rational exponent but infinite orbit.} Take $S(x,y)=xy+x\bar y^2+\bar x^2y$. This is model \#2 in Table~\ref{tab:embarassing}. The positive critical point is $(a,b)=(1,1)$, and $c= -1/2= \cos(2\pi/3)$. The excursion exponent is again $-4$. Note that there is no quadrant excursion from $(0,0)$ to $(0,0)$, because this point is not reachable from infinity. But the asymptotic bounds~\eqref{bounds} apply for instance for $(i,j)=(1,1)$ (with period $p=3$).
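Both claims, the absence of excursions and the positivity of $q(1,1;n)$ for suitable $n$, are easy to confirm by brute-force enumeration; here is a small dynamic-programming sketch (our illustration; the cutoff $N=15$ is arbitrary):

```python
from collections import defaultdict

steps = [(1, 1), (1, -2), (-2, 1)]   # model #2: S(x,y) = xy + x/y^2 + y/x^2
N = 15
counts = {(0, 0): 1}                 # counts[(i,j)] = q(i,j;n) at stage n
q00, q11 = [1], [0]
for n in range(1, N + 1):
    new = defaultdict(int)
    for (i, j), m in counts.items():
        for di, dj in steps:
            if i + di >= 0 and j + dj >= 0:   # stay in the quadrant
                new[(i + di, j + dj)] += m
    counts = new
    q00.append(counts[(0, 0)])
    q11.append(counts[(1, 1)])

assert all(v == 0 for v in q00[1:])  # no excursion ever returns to (0,0)
print([n for n, v in enumerate(q11) if v > 0])   # n with q(1,1;n) > 0
```

Only lengths $n\equiv 1\pmod 3$ appear in the printed list, as forced by the congruence $i+j\equiv 2n \pmod 3$ along any walk.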
We can prove that the orbit is infinite by expanding $\Theta:=\Psi\circ \Phi$ up to order 3: \begin{multline*} \Theta(1+u,1+v)= \left( 1-u+v+\frac 4 3\,{u}^{2}-\frac 4 3\,uv-\frac 2 3\,{v}^{2}-{\frac {16\,}{9}}{u}^{3}+2\,{u }^{2}v+\frac 2 3\,u{v}^{2}-\frac 1 9\,{v}^{3}+\cdots,\right. \\ \left. 1-u+\frac 2 3\,{u}^{2}+\frac 4 3\,uv-\frac 4 3\,{v}^{2} +\frac 1 9\,{u}^{3}-3\,{u}^{2}v+\frac 1 3\,u{v}^{2}+{\frac {22\,}{9}}{v}^{3}+\cdots \right) \end{multline*} which gives \[ \Theta^3(1+u,1+v)= \left( 1+u + \frac 2 3 (u-2v)(u^2-uv+v^2)+\cdots, 1+v {-} \frac 2 3 (v-2u)(u^2-uv+v^2)+\cdots \right). \] We conclude as above that all series $\Theta^{3k}(1+u,1+v)$ are distinct, and that the orbit is infinite. Starting from another critical pair $(a,b)$ does not make the argument shorter: for all possible choices, the transformation $\Theta^3$ is the identity at linear order. $\hfill{\Box}$ \medskip We conclude with a third model with a rational exponent but an infinite orbit. This one is symmetric in both coordinate axes. Recall that highly symmetric models \emm with small steps, behave nicely in any dimension: they have a finite orbit, a D-finite generating function, and explicit asymptotic enumeration is known~\cite{MelczerMishna2016}. But the large step, highly symmetric model of the next example has an infinite orbit. This cannot be proved starting from the positive critical point, because the corresponding involutions $\Phi$ and $\Psi$ \emm do, generate a finite group. But taking another critical point works. \medskip \noindent{\bf Example: a highly symmetric model with an infinite orbit.} Take \[ S(x,y)=(x+\bar x)(y+\bar y)+(x^2+\bar x^2)(y^2+\bar y^2). \] The positive critical point is $(a,b)=(1,1)$, and $c=0$. The excursion exponent is $-3$.
The transformations $\Phi$ and $\Psi$ {defined from $a=b=1$} are respectively $(x,y)\mapsto (\bar x,y)$ and $(x,y)\mapsto (x, \bar y)$ and they \emm do, generate a finite group, {of order $4$}. But let us consider instead the critical point $(a,b)=(i,i)$. Then \[ \Theta(a+u,b+v)=(a-u+v/2+ \cdots, b+u/2-v+\cdots), \] and the Jacobian matrix $J$ has characteristic polynomial $\lambda^2+7\lambda/4+1$, which is not cyclotomic. Hence the orbit is infinite. We do not know about the nature of the associated generating function, but {the first 70\,000 terms of the series $Q(0,0)$ (modulo a prime)} did not allow us to guess any recurrence relation for its coefficients. $\hfill{\Box}$ \section{Section-free functional equations} \label{sec:free} In this section we consider step sets $\mathcal S \subset \mathbb{Z}^d$ such that the orbit of $\Bx=(x_1, \ldots, x_d)$ is finite. For every element $\Bx'$ of this orbit we can replace $\Bx$ by $\Bx'$ in the main functional equation defining $Q(\Bx)$, as {we did} in~\eqref{6-eqs}. The resulting equation will be called an \emm orbit equation,\footnote{In other papers, like~\cite{BoBoKaMe16}, the \emm orbit equation, is what we call here the \emm section-free, equation. We hope that this change in the terminology will not cause any trouble.}. As the left-hand side of the original functional equation is $K(\Bx)Q(\Bx)$, where $K(\Bx)=1-tS(\Bx)$ is the kernel, the orbit equation associated with $\Bx'$ has left-hand side $K(\Bx)Q(\Bx')$, because the kernel takes the same value for all elements in the orbit. On the right-hand side of the orbit equations are several specializations of the generating function\ $Q$, which we call \emm sections,. Due to the construction of the orbit, every section occurs in at least two orbit equations.
The next step in our approach is to form a linear combination of the orbit equations that is free from sections, if one exists, as was the case for~\eqref{alt-tandem}. Once the main functional equation is written, and the (finite) orbit determined, section-free equations can be found by solving a linear system with coefficients in the algebraic closure of $\mathbb{Q}(x_1, \ldots, x_d)$. {In all cases that we have examined, we find that a section-free equation exists (and sometimes several). However,} we have not been able to find a generic form for section-free equations. Let us examine two simple examples; the first one shows that there can be multiple section-free combinations. \medskip \noindent{\bf Example A (continued).} We return to the one-dimensional step set $\mathcal S=\{\bar 1,2\}$. The step polynomial is $S(x)= \bar x+x^2$, and the elements $x'$ of the orbit of $x$ are the solutions of $S(x)=S(x')$. Hence the orbit is $\{x, x_1, x_2\}$, with \[ x_{1,2}=\frac {-{x}^{2}\pm\sqrt {x \left( {x}^{3}+4 \right) }}{2x}. \] Substituting the three orbit elements into {the functional equation}~\eqref{eqf-1D-bar} gives three orbit equations, each involving only one section (namely, $Q(0)$). There are several section-free linear combinations of the orbit equations. One of them is \begin{equation}\label{exA:eq1} K(x)\left( x Q(x)-x_1 Q(x_1 )\right)= x-x_1 , \end{equation} another one is \begin{equation}\label{exA:eq2} K(x)\left( x Q(x)-x_2 Q(x_2 )\right)= x-x_2 , \end{equation} and in fact any section-free equation is a linear combination of these two. $\hfill{\Box}$ \medskip \noindent{\bf Example B (continued).} We now reverse the steps of the previous example and consider $\mathcal S=\{\bar 2,1\}$. The orbit of $x$ consists of $x$, $x_1$ and $x_2$ with \[ x_{1,2}= {\frac {1\pm\sqrt {4\,{x}^{3}+1}}{2{x}^{2}}}. \] Substituting the orbit elements into~\eqref{eqf-1D} gives three orbit equations containing two sections, $Q_0$ and $Q_1$.
There is, up to a multiplicative factor, a \emm unique, section-free linear combination of these three equations: \begin{multline*} K(x) \left( \frac{x^2}{(x-x_1 )(x-x_2 )}Q(x) +\frac{x_1 ^2}{(x_1 -x)(x_1 -x_2 )}Q(x_1 ) +\frac{x_2 ^2}{(x_2 -x)(x_2 -x_1 )}Q(x_2 )\right) \\= \frac{x^2}{(x-x_1 )(x-x_2 )} +\frac{x_1 ^2}{(x_1 -x)(x_1 -x_2 )} +\frac{x_2 ^2}{(x_2 -x)(x_2 -x_1 )}= 1. \end{multline*} $\hfill{\Box}$ \medskip The above two examples are instances of a more general result that applies to any 1-dimensional model. \begin{Proposition}\label{prop:section-free1D} Assume $d=1$. Let $-m$ (resp. $M$) be the smallest (resp. largest) element in~$\mathcal S$, and assume that $m\ge 0$ and $M> 0$. Then the orbit of $x$ has cardinality $m+M$. The vector space of section-free equations consists of all linear combinations of \[ K(x) \sum_{i=0}^m \frac{u_i^m}{\prod_{j\not = i} (u_i-u_j)}Q(u_i) =1, \] where $u_0, u_1, \ldots, u_m$ are any distinct elements in the orbit of $x$. \end{Proposition} This proposition will be proved in Section~\ref{sec:1D}. The number of ways of choosing the $u_i$'s is $\binom{m+M}{m+1}$. These section-free equations are not always linearly independent (in Example A, the third equation of this type, which involves $x_1$ and $x_2$, is the difference of~\eqref{exA:eq1} and~\eqref{exA:eq2}). However, if the largest step is~$1$ (that is, $M=1$), then Proposition~\ref{prop:section-free1D} tells us that there is a unique section-free equation (up to a multiplicative factor). This was observed in Example B, and seems to generalize to dimension 2. \begin{Conjecture}\label{conj:section-free} When $d=2$ and the orbit is finite, there always exist non-trivial section-free linear combinations of the orbit equations. Moreover, if there is no large forward step, then there is a unique section-free combination, up to a multiplicative factor.
\end{Conjecture} \noindent{\bf Example.} In some $x/y$-symmetric quadrant models, like Kreweras' model $\mathcal S=\{\nearrow, \leftarrow, \downarrow\}$, the orbit of $(x,y)$ contains $(y,x)$, and we want to clarify what we mean by the uniqueness of the section-free equation. The functional equation reads \[ K(x,y)Q(x,y)= 1- t\bar x Q(0,y) -t\bar y Q(x,0). \] The orbit of $(x,y)$ consists of 6 pairs: \[ (x,y), \quad (\bar x \bar y, y), \quad (\bar x\bar y, x), \quad (y,x), \quad (y, \bar x\bar y), \quad (x, \bar x\bar y). \] A linear combination of the 6 orbit equations, with indeterminate weights $\alpha_1, \ldots, \alpha_{6}$, involves 6 sections: three specializations of $Q(x,0)$, and three of $Q(0,y)$. If we require the contribution of each to vanish, we find (up to a multiplicative factor) a unique solution for the $\alpha_i$'s, and thus a unique section-free equation: \begin{equation}\label{sec-free-K} K(x,y) \big(xyQ(x,y) -\bar x Q(\bar x \bar y, y) +\bar y Q(\bar x\bar y, x) -xyQ(y,x)+ \bar x Q(y, \bar x\bar y)-\bar y Q(x, \bar x\bar y)\big)=0. \end{equation} Note that the right-hand side (the so-called \emm orbit sum,) vanishes. The $x/y$-symmetry makes this equation trivial. However, it makes sense to exploit the symmetry of the model in the functional equation, and to write: \[ K(x,y)Q(x,y)= 1- t\bar x Q(y,0) -t\bar y Q(x,0). \] Now a linear combination of the 6 orbit equations involves only 3 sections. If we want the contribution of each to vanish, we find a vector space of dimension 3 of solutions, generated by all equations of the form \[ K(x,y) \left( Q(x',y')-Q(y',x')\right)=0, \] for $(x',y')$ in the orbit.
Again, these equations are trivial. $\hfill{\Box}$ \medskip We now prove that {Conjecture~\ref{conj:section-free}} holds in the case of small steps --- and in fact, in arbitrary dimension. \begin{Proposition}\label{prop:sec-free-small} If $\mathcal S\subset \{-1,0,1\}^d$ has positive and negative steps in every direction, and the associated orbit is finite, then there is a unique section-free linear combination of orbit equations, up to a multiplicative factor. {It reads \begin{equation}\label{sec-free} \sum_{\Bu} (-1)^{\ell(\Bu)} K(\Bu) Q(\Bu) \prod _{i=1}^d u_i = \sum_{\Bu} (-1)^{\ell(\Bu)} \prod _{i=1}^d u_i, \end{equation} where the sum runs over all elements $\Bu=(u_1, \ldots, u_d)$ of the orbit and $\ell(\Bu)$ is the length of $\Bu$.} \end{Proposition} \begin{proof} We consider the result of multiplying the functional equation~\eqref{eqfunc:small} by the product of all variables $\prod_i x_i$: \begin{equation} \label{eqfunc:small-mult} K(\Bx)Q(\Bx) \prod_i x_i = \prod_i x_i+t \sum_{\emptyset \not = I \subset \llbracket 1, d\rrbracket} \left( (-1)^{|I|} Q_I(\Bx) \sum_{\mathbf{s} \in \mathcal S: s_i =-1 \forall i \in I} \Bx^{\mathbf{s}}\prod_i x_i\right). \end{equation} Note that, since the last sum is over all $\mathbf{s}$ such that $s_i=-1$ for $i\in I$, the monomial $\Bx^{\mathbf{s}}\prod_i x_i$ does not involve any of the $x_i$'s for $i\in I$. The same holds for $Q_I(\Bx)$. We now call any version of~\eqref{eqfunc:small-mult} instantiated at an orbit element an \emm orbit equation,. Take $I=\{i\}$, with $1\le i\le d$. For $\Bu$ in the orbit of $\Bx$, the section $Q_I(\Bu)$ occurs in exactly two orbit equations: the equation obtained from $\Bu$, and the one obtained from $\Bv:=\Phi_i(\Bu)$, with~$\Phi_i$ defined as in Proposition~\ref{prop:small-group}. Moreover, the coefficient of $Q_I(\Bu)$ is the same in both equations (it does not depend on the $i$th coordinate of $\Bu$). 
Hence in a section-free linear combination of orbit equations, the weights of the equations associated with $\Bu$ and $\Phi_i(\Bu)$ must be opposite. {By transitivity, there cannot be more than one section-free equation. Moreover, in the small step case, the lengths of two adjacent elements differ by $\pm 1$ (Proposition~\ref{prop:small-group}), and thus the only possible section-free equation is~\eqref{sec-free}.} {So let us form the linear combination of orbit equations having the same left-hand side as~\eqref{sec-free}.} For $\Bu$ in the orbit and $I\subset \llbracket 1, d\rrbracket$, the section $Q_I(\Bu)$ occurs (with the same weight) in all orbit equations obtained from elements $\Bv$ that only differ from $\Bu$ at positions of $I$. We can define on these elements an involution that changes the parity of the length (for instance $\Phi_{\min (I)}$). This implies that the coefficient of $Q_I(\Bu)$ in the signed sum vanishes, and that we have indeed constructed a section-free equation. \end{proof} Of course, all the examples of this paper support Conjecture~\ref{conj:section-free}. The next example shows that the number of sections occurring in the orbit equations can be larger than the number of orbit equations, which makes the existence of section-free equations more surprising. \medskip \noindent{\bf Example F: a model with small forward steps.} Take $\mathcal S=\{10, \bar 1 0, \bar 2 1, 0 \bar 1\}$. 
Then the orbit of $(x,y)$ consists of the following pairs: \begin{equation}\label{orbit:hard1} \begin{array}{ccc} (x, y) &(x_1 , y) & (x_2 , y) \\ (x, x^2\bar y) &(-\bar x_1 , x^2\bar y) &(-\bar x_2 , x^2\bar y) \\ (x_1 , x_1 ^2\bar y) &(-\bar x, x_1 ^2\bar y) &(-\bar x_2 , x_1 ^2\bar y) \\ (x_2 , x_2 ^2\bar y) &(-\bar x, x_2 ^2\bar y) &(-\bar x_1 , x_2 ^2\bar y) \end{array} \end{equation} where \[ x_{1,2}= \frac{x+y\pm\sqrt{(x+y)^2+4x^3y}}{2x^2} \] and $\bar x_i=1/x_i$. The structure of this orbit is the first shown in Figure~\ref{fig:orbits} {(Section~\ref{sec:m21})}. The functional equation reads \begin{equation}\label{eq:hard1} K(x,y)Q(x,y)=1-t\bar x (1+\bar x y)Q_{0,-}(y)-t\bar x y Q_{1,-}(y)-t\bar y Q(x,0). \end{equation} The 12 orbit equations involve in total $6+4+4=14$ distinct sections: 6 specializations of $Q(x,0)$, 4 specializations of $Q_{0,-}(y)$ and 4 specializations of $Q_{1,-}(y)$. Hence in order to find a section-free equation, we need to solve a linear system with 14 equations but only 12 unknowns. Still, we find a solution (and only one, up to a multiplicative factor). The weight of the orbit equation associated with the pair $(x',y')$ is \[ \pm x'^2(x'_1 -x'_2 )\sqrt {yy'}, \] where $(x'_i,y')\approx (x',y')$ for $i=1,2$, and $x'_1\not = x'_2$.
More precisely, the weights associated with the above 12 orbit elements are \begin{equation}\label{weights:hard1} \begin{array}{rrr} x^2(x_1-x_2)y &x_1 ^2(x_2-x) y & -x_2^2 (x_1-x) y \\ x^2( \bar x_2 -\bar x_1) x &-\bar x_1^2(x+\bar x_2 )x &\bar x_2 ^2(x+\bar x_1)x \\ x_1^2(\bar x-\bar x_2 )x_1 &\bar x^2(x_1+\bar x_2 ) x_1 &-\bar x_2 ^2(x_1+\bar x) x_1 \\ -x_2 ^2(\bar x-\bar x_1) x_2 &-\bar x^2(x_2+\bar x_1)x_2 &\bar x_1^2(x_2+\bar x) x_2 . \end{array} \end{equation}$\hfill{\Box}$ \medskip \noindent{\bf Example D (continued): a model with a large forward and a large backward step.} Let us take $\mathcal S=\{ \bar 20,\bar 11, 02,1\bar 1\}$. Recall that the orbit of $(x,y)$ is shown in Figure~\ref{fig:orbit-quadrangulations}, with \[x_{1,2}= {\frac {x{y}^{2}+y\pm\sqrt {y \left( {x}^{2}{y}^{3}+4\,{x}^{3}+2\,x{y}^{2}+y \right) }}{2{x}^{2}}}. \] The functional equation for this model is given by~\eqref{eqfunc:quadrangulations}, and the 12 orbit equations involve $4+4+4=12$ sections.
The vector space of section-free linear combinations has dimension 2; it is generated by two linear combinations of 9 orbit equations: \begin{multline*} x^2y\,Q ( x,y ) -{\frac {{x_1}^{2}y \left( x-x_2 \right) Q ( x_1,y ) }{ x_1-x_2 }} +{\frac {{x_2}^{2}y \left( x-x_1 \right) Q ( x_2,y ) }{ x_1-x_2 }} -{{x^2\bar x_1 \, Q ( x,\bar x_1 ) }} \\ +{\frac {{x_2}^{2} \left( xy-1 \right) Q ( x_2,\bar x_1 ) }{x_1 \left( x_2 \,y-1 \right) }} -{\frac { \left( x-x_2 \right) Q ( \bar y,\bar x_1 ) }{yx_1 \left( x_2\,y-1 \right) }} +{\frac {{x_1}^{2} \left( x-x_2 \right) Q ( x_1,\bar x ) }{x \left( x_1-x_2 \right) }} \\ -{\frac {{x_2}^{2} \left( x_1\,y-1 \right) \left( x-x_2 \right) Q ( x_2,\bar x ) }{{x} \left( x_1-x_2 \right) \left( x_2\,y-1 \right) }} +{\frac { \left( x-x_2 \right) Q ( \bar y, \bar x ) }{xy \left( x_2\,y-1 \right) }} ={\frac { \left(1- xy\right) \left( 1-x_1y \right) \left( x-x_1 \right) \left( x-x_2 \right) }{x{y}x_1K(x,y)}}, \end{multline*} and the same equation with $x_1$ and $x_2$ exchanged. We refer to~\cite{BoFuRa17} for the solution of a family of models with arbitrarily large steps which generalizes this one. \section{Extracting the main generating function} \label{sec:extract} We now assume that, for a step set $\mathcal S$ with a finite orbit, we have obtained one (or several) section-free functional equations. Can we extract from these equations the main generating function\ $Q(x_1, \ldots, x_d)$, as we did in Section~\ref{sec:basic}? Not systematically, as we already learnt from some small step models. \medskip \noindent{\bf Example C: Gessel's walks (continued).} The orbit of $(x,y)$ consists of 8 elements.
The steps are small, hence the unique section-free equation is the {alternating sum~\eqref{sec-free}.} Remarkably, its right-hand side vanishes: \begin{multline*} xyQ(x, y)-\bar x Q (\bar x \bar y, y) +xQ (\bar x \bar y, x^2y)-xyQ (\bar x, x^2y)\\ +\bar x\bar y Q (\bar x, \bar y)-xQ (xy, \bar y)+\bar x Q(xy, {\bar x}^2 \bar y)-\bar x\bar y Q (x, \bar y {\bar x}^2)=0. \end{multline*} This homogeneous equation does not characterize $Q(x,y)$. For instance, 1, $x$, $xy$, and $y-x^2$ are solutions. The space of solutions is actually infinite-dimensional, as it clearly contains all monomials $x^iy^i$. $\hfill{\Box}$ \medskip Among the 23 quadrant models with small steps that have a finite orbit, exactly 4 have a section-free equation that does not characterize $Q(x,y)$: Gessel's model, as just shown, and the three Kreweras-like models: $\mathcal S=\{\nearrow, \leftarrow, \downarrow\}$, its reverse $\overline{\mathcal S}=\{\swarrow, \rightarrow, \uparrow\}$ and the union $\mathcal S \cup \overline{\mathcal S}$~\cite{BoMi10}. For those three, the orbit of $(x,y)$ contains $(y,x)$, and the section-free equation is~\eqref{sec-free-K}. Clearly, any symmetric series in $x$ and $y$ satisfies this equation. For these four models, the \emm orbit sum,, that is, the right-hand side of the section-free equation, vanishes. However, there exist as well (weighted) models with a non-vanishing orbit sum, for which the section-free equation does not characterize $Q(x,y)$. Let us recall an example taken from~\cite[Sec.~8.2]{BoBoKaMe16}. \medskip \noindent{\bf Example.} Take $\mathcal S=\{\bar 1 \bar 1 , \bar 1 1 , \bar 1 0, \bar 1 0, 1 0,1 1\}$ (note the repeated West step). The step polynomial is \[ S(x,y)= (1+y)\left(\bar x(1+\bar y)+x\right).
\] The orbit of $(x,y)$ contains 6 elements, and the unique section-free equation reads: \begin{align*} xyQ(x,y) &- \bar x(1+y)Q(\bar x(1+\bar y),y ) + \frac{x(1+y)}{(1+y)^2+x^2y^2}\, Q\left(\bar x(1+\bar y), \frac{x^2y}{(1+y)^2+x^2y^2}\right) \\ &- \frac{xy(1+y+x^2y)}{(1+y)^2+x^2y^2}\, Q\left(\bar x(1+y)+xy, \frac{x^2y}{(1+y)^2+x^2y^2}\right)\\ &+\frac{\bar x\bar y(1+y+x^2y)}{1+x^2} Q\left(\bar x(1+y)+xy, \frac{\bar y}{1+x^2}\right) -\frac{x\bar y}{1+x^2}\, Q\left(x, \frac{\bar y}{1+x^2}\right) \\ &=\frac{\left(1+y(1-x^2)\right) \left(1-y^2(1+x^2)\right) \left( 1-x^2 +y(1+x^2)\right)}{xy(1+x^2)K(x,y)\left((1+y)^2+x^2y^2\right)}. \end{align*} The right-hand side is non-zero, but this equation does not define $Q(x,y)$ uniquely in the ring $\mathbb{Q}[x,y][[t]]$. In fact, the associated homogeneous equation (in $Q(x,y)$) seems to have an infinite-dimensional space of solutions. It includes at least the following polynomials in $x$ and $y$: \[ x, \quad 2xy+{x}^{3}y , \quad {x}^{2}y+{x}^{2}+y+2 , \quad {x}^{3}{y}^{2}-{x}^{3}y+{x}^{3}+2x{y}^{2}. \] $\hfill{\Box}$\medskip We now consider examples where the series $Q(\Bx)$ is indeed characterized by a section-free equation, but for which the extraction is not as simple as in Section~\ref{sec:basic}. Our first example is one-dimensional. \medskip\noindent{\bf Example A (continued).} It can be seen that~\eqref{exA:eq1} (or~\eqref{exA:eq2}) characterizes $Q(x)$, but how can we extract it effectively? Here is one solution.
Take the first of these two linear combinations, written as \[ Q(x)- \bar x x_1 Q(x_1)= \frac{1- \bar x x_1}{K(x)} \] with $K(x)=1-t(\bar x+x^2)$, and choose for the algebraic closure of $\mathbb{Q}(x)$ the set of Puiseux series in $\bar x$ (not in $x$!). Then \[ x_1={\frac {\sqrt {4\,{\bar x}^{3}+1}-1}{2\bar x}}=\bar x^2-\bar x^5+O(\bar x^8) \] is a formal power series\ in $\bar x$. Now both sides of the above section-free equation are series in $t$ whose coefficients are Laurent series in $\bar x$. Extracting the non-negative part in $x$ gives: \[ Q(x)= [x^{\ge}] \frac{1- \bar x x_1}{K(x)}, \] where the right-hand side is first expanded in $t$, then in $\bar x$. This will be generalized to arbitrary one-dimensional models in Section~\ref{sec:1D} (Proposition~\ref{prop:dim1-alg}). $\hfill{\Box}$ \medskip In our next example, one simply has to extract the positive part of a rational series to obtain $Q(x,y)$, but justifying why is a bit delicate. \medskip\noindent{\bf Example F (continued).} Let $\mathcal S=\{10, \bar 1 0, \bar 2 1, 0 \bar 1\}$. The functional equation is given by~\eqref{eq:hard1}, the orbit by~\eqref{orbit:hard1} and the weights in the section-free linear combination by~\eqref{weights:hard1}. Let us divide this linear combination by $x^2y(x_1-x_2)K(x,y)$, so as to isolate $Q(x,y)$.
The resulting equation reads \begin{equation}\label{eqA} Q(x,y)+ x\bar x_1\bar x_2\bar y\, Q(x,x^2\bar y) + A_1 +A_2 +A_3+A_4+A_5=R(x,y) \end{equation} with \begin{align} A_1&= \bar x^2 \, \frac{x_1^2(x_2-x) Q(x_1,y)-x_2^2(x_1-x) Q(x_2,y)}{x_1-x_2} \notag \\ A_2&=-\bar x^2\bar y \, \frac{ \bar x_1^2 (x+\bar x_2 ) x Q(-\bar x_1,x^2\bar y)-\bar x_2^2 (x+\bar x_1) x Q(-\bar x_2 ,x^2\bar y)}{x_1-x_2} \notag\\ A_3&=\bar x^2 \bar y \, \frac{x_1^3(\bar x-\bar x_2 )Q(x_1,x_1^2y)-x_2^3(\bar x-\bar x_1)Q(x_2,x_2^2y)}{x_1-x_2} \notag\\ A_4&=\bar x^2 \bar y\, \frac{\bar x^2 (x_1+\bar x_2)x_1 Q(-\bar x, x_1^2\bar y) -\bar x^2 (x_2+\bar x_1) x_2 Q(-\bar x, x_2^2\bar y)}{x_1-x_2} \notag\\ A_5&=-\bar x^2\bar y\, \frac{\bar x_2^2 (x_1+\bar x )x_1Q(-\bar x_2 , x_1^2\bar y)-\bar x_1^2 (x_2+\bar x)x_2Q(-\bar x_1, x_2^2\bar y)}{x_1-x_2} \label{A5} \end{align} and \[ R(x,y)= {\frac { \left( {x}^{2}+1 \right) \left( x+y \right) \left( y-x \right) \left( {x}^{2}y-2\,x-y \right) \left( {x}^{3}-x-2\,y \right) }{{x}^{7}{y}^{3} \left(1-t(x+\bar x+\bar x^2 y +\bar y) \right)}}.
\] Each term in~\eqref{eqA} is written as a power series in $t$ whose coefficients are Laurent polynomials in $x$, $y$, $x_1$ and $x_2$, symmetric in $x_1$ and $x_2$ (because the numerators of the series $A_i$ are \emph{anti-symmetric} in $x_1$ and $x_2$). Observe that the symmetric functions of $x_1$ and $x_2$ are Laurent polynomials in $x$ and $y$, and more precisely, polynomials in $\bar x\mathbb{Q}[\bar x,y,\bar y]$ (we say that they are \emph{$x$-negative}): \begin{equation} \label{neg-x} x_1 +x_2 = \bar x(1+\bar x y) \quad \hbox{and} \quad x_1 x_2 =-\bar x y. \end{equation} The symmetric functions of their reciprocals are Laurent polynomials in $x$ and $y$, and more precisely, polynomials in $\mathbb{Q}[x, \bar x,\bar y]$ (we say that they are \emph{$y$-non-positive}): \begin{equation}\label{neg-y} \bar x_1 +\bar x_2 = -\bar x-\bar y \quad \hbox{and} \quad \bar x_1 \bar x_2 =-x \bar y. \end{equation} Hence every term of~\eqref{eqA} is a series in $t$ whose coefficients are Laurent polynomials in $x$ and $y$. We claim that extracting from the left-hand side of~\eqref{eqA} the non-negative part in $x$ and $y$ gives $Q(x,y)$. First, the second term of~\eqref{eqA} is $y$-negative, and hence does not contribute. Then \[ A_1=\bar x\, \frac{x_1^2(\bar x x_2-1) Q(x_1,y)-x_2^2(\bar x x_1-1) Q(x_2,y)}{x_1-x_2}, \] and is $x$-negative by~\eqref{neg-x}.
Using $x x_1 x_2=-y$, we see that the same holds for \[ A_3=\bar x\, \bar y \, \frac{x_1^3(\bar x^2+x_1\bar y)Q(x_1,x_1^2y)-x_2^3(\bar x^2+x_2\bar y)Q(x_2,x_2^2y)}{x_1-x_2}, \] and for \[ A_4=\bar x^2 \bar y\, \frac{\bar x x_1^2 (\bar x-\bar y) Q(-\bar x, x_1^2\bar y) -\bar x x_2^2 (\bar x-\bar y) Q(-\bar x, x_2^2\bar y)}{x_1-x_2}. \] We are left with two terms. One is \[ A_2= -\bar x\,\bar y^2 \, \frac{ \bar x_1^2 (x+\bar x_2 ) x Q(-\bar x_1,x^2\bar y)-\bar x_2^2 (x+\bar x_1) x Q(-\bar x_2 ,x^2\bar y)}{\bar x_1-\bar x_2 }, \] which is $y$-negative by~\eqref{neg-y}. The other is $A_5$, which looks more challenging because the variables in the series $Q$ mix positive and negative powers of the $x_i$'s. Its analysis requires the following lemma. \begin{Lemma} \label{lem:extr} For $a\ge 0$, the expression \begin{equation} E_a:=\frac{x_1^{a+1}-x_2^{a+1}}{x_1-x_2} \label{eq:Ea} \end{equation} is a polynomial in $\bar x$ and $y$. Every monomial $\bar x^e y^f$ that occurs in it satisfies $f\le e$. \end{Lemma} \begin{proof} By induction on $a\ge 0$, using $E_{-1}=0$, $E_0=1$, $E_a=(x_1+x_2)E_{a-1}-x_1x_2E_{a-2}$ and~\eqref{neg-x}. \end{proof} Let us return to the expression~\eqref{A5} of $A_5$.
Since $Q(x,y)$ is a series in $t$ with polynomial coefficients in $x$ and $y$, it suffices to prove that, for $i,j\ge 0$, the term obtained by replacing $Q(x,y)$ by $x^i y^j$, namely \[ \pm\bar x^2\bar y\, \frac{\bar x_2^2 (x_1+\bar x )x_1 \bar x_2^i x_1^{2j}\bar y^j-\bar x_1^2 (x_2+\bar x)x_2\bar x_1^i x_2^{2j}\bar y^j }{x_1-x_2}, \] has no non-negative part in $x$ and $y$. By splitting the sum and using $xx_1 x_2=-y$, it suffices to prove this for \begin{equation} \label{sum1} \bar x^2 \bar y^{j+1}\frac{ \bar x_2^{2+i} x_1^{2+2j} -\bar x_1^{2+i} x_2^{2+2j}}{x_1-x_2} =(-1)^i x^{i} \bar y^{i+j+3} \frac{ x_1^{4+i+2j} - x_2^{4+i+2j}}{x_1-x_2} \end{equation} and for \begin{equation} \label{sum2} \bar x^3\bar y^{j+1}\, \frac{ \bar x_2^{i+2} x_1^{1+2j} -\bar x_1^{i+2} x_2^{1+2j}}{x_1-x_2} = (-1)^{i+1} x^{i-1}\bar y^{i+j+3}\, \frac{ x_1^{3+i+2j} - x_2^{3+i+2j}}{x_1-x_2}. \end{equation} By Lemma~\ref{lem:extr}, any monomial $x^ay^b$ that occurs in~\eqref{sum1} satisfies \[ a=i-e, \qquad b=f-i-j-3, \] with $f\le e$. Saying that $a$ and $b$ are both non-negative means that $e\le i$ and $f \ge i+j+3$, so that \[ e+j+3\le f \le e, \] which is impossible for $j \ge 0$. A similar argument proves that~\eqref{sum2} contains no monomial that would be non-negative in $x$ and in $y$. So the non-negative part of the left-hand side of~\eqref{eqA} is indeed $Q(x,y)$. This tricky extraction deserves a proposition.
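The monomial bound of Lemma~\ref{lem:extr} is also easy to check by machine. The following sketch (our own verification, not part of the original argument; the variable \texttt{xb} stands for $\bar x$) drives the recurrence $E_a=(x_1+x_2)E_{a-1}-x_1x_2E_{a-2}$ with the symmetric functions~\eqref{neg-x}:

```python
# Sanity check (ours) of Lemma lem:extr: with x1+x2 = xb(1+xb*y) and
# x1*x2 = -xb*y, the quantity E_a = (x1^{a+1}-x2^{a+1})/(x1-x2) is a
# polynomial in xb, y whose monomials xb^e y^f all satisfy f <= e.
import sympy as sp

xb, y = sp.symbols('xb y')   # xb stands for 1/x
s1 = xb * (1 + xb * y)       # x1 + x2, from (neg-x)
s2 = -xb * y                 # x1 * x2, from (neg-x)
E = [sp.Integer(0), sp.Integer(1)]       # E_{-1}, E_0
for a in range(1, 9):
    # the three-term recurrence for power sums divided by (x1 - x2)
    E.append(sp.expand(s1 * E[-1] - s2 * E[-2]))

assert all(f <= e
           for Ea in E[1:]
           for (e, f) in sp.Poly(Ea, xb, y).monoms())
```

The assertion passes for all $a\le 8$, in agreement with the lemma.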
\begin{Proposition}\label{prop:F} The generating function\ $Q(x,y)$ of quadrant walks with steps in $\mathcal S=\{10, \bar 1 0, 0 \bar 1, \bar 2 1\}$ is the non-negative part (in $x$ and $y$) of the rational series \[ R(x,y)= {\frac { \left( {x}^{2}+1 \right) \left( x+y \right) \left( y-x \right) \left( {x}^{2}y-2\,x-y \right) \left( {x}^{3}-x-2\,y \right) }{{x}^{7}{y}^{3} \left(1-t(x+\bar x+\bar x^2 y +\bar y) \right)}}, \] seen as a power series in $t$ with coefficients in $\mathbb{Q}[x,\bar x,y,\bar y]$. \end{Proposition} From this, one can derive interesting results for the specialization $Q(0,0)$ counting excursions. \begin{Corollary}\label{cor:explicit} For $\mathcal S=\{10, \bar 1 0, 0 \bar 1, \bar 2 1\}$, the sequence $e_n:=q(0,0;2n)$ counting excursions satisfies a linear recurrence relation of order $2$: \[ ( n+3 ) ( n+2 ) ( n+1 ) e_n = 12 ( 2\,n-1 ) ( 2\,n-3 ) ( n-1 ) e_{n-2} +4 ( 2\,n-1 ) ( n+2 ) ( n+1 ) e_{n-1}, \] with $e_0=e_1=1$. The sequence $(e_n)$ is not hypergeometric. The associated generating function\ admits an expression in terms of hypergeometric series: \begin{equation*} Q(0,0) = \frac{3}{4t} + \frac{{9t-2}}{2t^2} \int \frac{(1+4t)^{3/2}}{(9t-2)^2} \left( \twoFone{-\frac32}{\frac32}{2}{\frac{16\,t}{1+4t}} + 2\times \twoFone{-\frac12}{\frac32}{3}{\frac{16\,t}{1+4t}} \right) \,dt. \end{equation*} \end{Corollary} \begin{proof} (sketched) The recurrence relation is easily guessed from the first few values of $e_n$. It can be proved using computer algebra and the approach of~\cite{BoChHoKaPe17}. The idea is to write $Q(0,0)$ as the constant coefficient (w.r.t.\ $x$ and $y$) of the rational function $R(x,y)$, then to apply creative telescoping techniques. This proves that $Q(0,0)$ satisfies an explicit linear differential equation of order 4, from which the validity of the above linear recurrence relation for $e_n$ is easily deduced.
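The recurrence and initial conditions can also be cross-checked by brute force (a sketch of ours, independent of the computer-algebra proof): a dynamic program over endpoints counts the excursions $e_n$ directly.

```python
# Brute-force check (ours) of the order-2 recurrence for e_n = q(0,0;2n),
# for the step set {(1,0), (-1,0), (0,-1), (-2,1)}.
from collections import defaultdict

STEPS = [(1, 0), (-1, 0), (0, -1), (-2, 1)]

def excursions(nmax):
    """Return [e_0, ..., e_nmax], where e_n counts quadrant walks of
    length 2n from (0,0) back to (0,0)."""
    e = []
    dist = {(0, 0): 1}
    for n in range(nmax + 1):
        e.append(dist.get((0, 0), 0))   # walks of length 2n at the origin
        for _ in range(2):              # advance two steps
            new = defaultdict(int)
            for (i, j), c in dist.items():
                for di, dj in STEPS:
                    if i + di >= 0 and j + dj >= 0:
                        new[(i + di, j + dj)] += c
            dist = new
    return e

e = excursions(8)
assert e[0] == 1 and e[1] == 1
for n in range(2, 9):
    lhs = (n + 3) * (n + 2) * (n + 1) * e[n]
    rhs = (12 * (2*n - 1) * (2*n - 3) * (n - 1) * e[n - 2]
           + 4 * (2*n - 1) * (n + 2) * (n + 1) * e[n - 1])
    assert lhs == rhs
```

For instance $e_2=3$ and $e_3=13$, and the recurrence holds for all $n\le 8$.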
The fact that the sequence $(e_n)$ is not hypergeometric follows from Petkov\v sek's algorithm~\cite{AB}. The use of $_2F_1$ solving algorithms~\cite{BoChHoPe11,ImHo17,BoChHoKaPe17} then provides a closed-form expression of $Q(0,0)$. \end{proof} \section{The one-dimensional case revisited} \label{sec:1D} So far we have only studied sporadic models. We now consider a family of models, namely general one-dimensional models. We take $\mathcal S\subset \mathbb{Z}$ and denote by $-m$ (resp. $M$) the smallest (resp. largest) step of $\mathcal S$; to avoid trivial cases we assume $m\ge 0$ and $M>0$. Finally, we allow step weights taken in some algebraically closed field $\mathbb{F}$ of characteristic zero. The indeterminates $t$ and $x$ are algebraically independent over $\mathbb{F}$. The step polynomial is then \[ S(x)=\sum_{i\in \mathcal S} w_i x^i, \] where $w_i$ is the weight of the step $i$. The weight of a walk is the product of the weights of its steps. Let us first recall the standard solution, originally obtained by Gessel~\cite{gessel-factorization} (see also~\cite[Ex.~3]{bousquet-petkovsek-recurrences} and~\cite{banderier-flajolet}). It involves auxiliary series $X_i$, which are fractional series in the length variable $t$, algebraic over $\mathbb{F}(t)$. \begin{Proposition}\label{prop:dim1-standard} The kernel $K(x)=1-tS(x)$, when solved for $x$, admits $m+M$ roots, which are Puiseux series in $t$ with coefficients in $\mathbb{F}$. Exactly $m$ of these roots, denoted $X_1, \ldots, X_m$, are finite at $t=0$ (and in fact, vanish at $t=0$). Let us denote by $X_{m+1}, \ldots, X_{m+M}$ the other ones. The generating function\ $Q(x;t)\equiv Q(x)$ is \begin{equation} \label{F-dim1-standard} Q(x)= \frac { \prod_{i=1} ^m (1-\bar x X_i) } {K(x)} = -\frac 1 {tw_M} \prod_{i=m+1} ^{m+M} \frac 1 {x-X_i}.
\end{equation} \end{Proposition} We recall the proof given in~\cite[Ex.~3]{bousquet-petkovsek-recurrences} or~\cite{banderier-flajolet}, for comparison with the approach of this paper. Roughly speaking, the standard solution is obtained by \emph{canceling the kernel} by appropriate specializations of $x$, while the approach of this paper is more algebraic and consists in playing with certain invariance properties of the kernel. \begin{proof} The statements of the proposition dealing with the roots of the kernel come from the fact that the equation $K(x)=0$, once written as a polynomial equation in $x$ (that is, as $x^m K(x)=0$), has degree $m+M$ in $x$, reducing to $m$ when $t=0$ (see~\cite[Prop.~6.1.8]{stanley-vol2}). Let us write $Q(x)=\sum_{i\ge 0 } x^i Q_i,$ where $Q_i$ counts walks ending at abscissa $i$. The functional equation reads \begin{equation}\label{eq-func-1D} K(x)Q(x)=1- \sum_{k=-m}^{-1}x^k G_k, \end{equation} where \[ G_k=t \sum_{i\in \mathcal S, i\le k} w_i Q_{k-i}. \] So we have $m$ unknown series $G_{-1}, \ldots, G_{-m}$ (or equivalently, $Q_0, \ldots, Q_{m-1}$) on the right-hand side of the functional equation. When we replace $x$ by $X_i$ in~\eqref{eq-func-1D}, for $1\le i\le m$, both the left- and right-hand sides vanish (we only use the ``small'' roots $X_1, \ldots, X_m$, because the substitution by a root involving negative powers of $t$ may be undefined). But the right-hand side is a polynomial in $\bar x$, of degree $m$ and constant term $1$. Hence it must be equal to $\prod_{i=1}^m (1-\bar x X_i)$, and this gives the first expression of $Q(x)$. The second one follows by factoring $K(x)$ as \begin{equation}\label{K-fact-1D} K(x)= -t w_M \prod_{i=1}^m (1-\bar x X_i) \prod_{i=m+1}^{m+M} (x-X_i). \end{equation} (The factor $-tw_M$ is obtained by extracting the coefficient of $x^M$ in $K(x)$.) \end{proof} We now present the expression provided by the method of this paper.
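Before that, Proposition~\ref{prop:dim1-standard} can be illustrated on a concrete model, say the Motzkin step set $\mathcal S=\{-1,0,1\}$ with unit weights (our own check, not taken from the text): here $m=M=1$, and the unique small root $X_1$ gives $Q(x)=(1-\bar x X_1)/K(x)$.

```python
# Check (ours) of formula (F-dim1-standard) for the Motzkin steps {-1,0,1}:
# K(x) = 1 - t(1/x + 1 + x), and the small kernel root X1 vanishes at t=0.
import sympy as sp

t, x = sp.symbols('t x')
X1 = (1 - t - sp.sqrt(1 - 2*t - 3*t**2)) / (2*t)   # small root of K
N = 7
Q = sp.series((1 - X1/x) / (1 - t*(1/x + 1 + x)), t, 0, N).removeO()

# brute force: walks on the non-negative integers, counted by endpoint
dist = {0: 1}
for n in range(N):
    coeff = sp.expand(Q.coeff(t, n))
    target = sum(c * x**i for i, c in dist.items())
    assert sp.expand(coeff - target) == 0
    new = {}
    for i, c in dist.items():
        for d in (-1, 0, 1):
            if i + d >= 0:
                new[i + d] = new.get(i + d, 0) + c
    dist = new
```

The coefficients of $t^0,\ldots,t^6$ agree with the direct counts; for instance, $[t^2]Q(1)=5$ counts the five non-negative Motzkin prefixes of length~$2$.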
Rather than algebraic series in $t$ (the $X_i$'s), it involves algebraic series in $\bar x$ (denoted by $x_i$), and then the extraction of a non-negative part. Admittedly, it is not as attractive as the standard solution. In particular, it does not make the algebraicity of $Q(x)$ clear, unless the largest step is 1. But we show later how to recover the standard solution from it. One surprising feature of this solution is that, as foreseen in Example A, it involves expansions in $\bar x$ rather than $x$. \begin{Proposition}\label{prop:dim1-alg} The equation $S(X)=S(x)$ (when solved for $X$) admits $m+M$ roots, which can be taken in the field of Puiseux series in $\bar x:=1/x$ with coefficients in $\mathbb{F}$. Exactly $m$ of these roots, denoted $x_1, \ldots, x_m$, contain no positive power of $x$ (and, in fact, have no constant term either). The generating function\ $Q(x;t)\equiv Q(x)$ is \begin{equation}\label{1D-gen} Q(x)= [x^{\ge}] \frac{\prod_{j=1}^m (1-\bar x x_j)}{K(x)}, \end{equation} where the right-hand side is expanded first in $t$, then in $\bar x$. If the largest step of $\mathcal S$ is $M=1$, the right-hand side of~\eqref{1D-gen} is rational, and \begin{equation}\label{1D-M1} Q(x)= [x^{\ge}] \frac{ S'(x)}{w_1 K(x)}. \end{equation} \end{Proposition} We will use the following lemma, which is a simple application of the Lagrange interpolation formula~\cite[Lemma~13]{mbm-chapuy-preville}. \begin{Lemma}\label{lem:Lagrange} Let $u_0, u_1, \dots , u_m$ be $m+1$ variables. Then \[ \sum_{i=0}^m \frac{{u_i}^d}{\prod_{j\neq i}{(u_i-u_j)}} = \begin{cases} 1 & \hbox{if } d=m,\\ 0 & \hbox{if } 0\le d <m. \end{cases} \] \end{Lemma} \begin{proof}[Proof of Propositions~\ref{prop:section-free1D} and~\ref{prop:dim1-alg}] We first establish the section-free equation of Proposition~\ref{prop:section-free1D}.
The equation $S(X)=S(x)$ has $m+M$ solutions (counted with multiplicity), including $X=x$, which form the orbit of $x$. These solutions are in fact distinct: a solution of $S'(X)=0$ belongs to the ground field $\mathbb{F}$, and cannot satisfy $S(X)=S(x)$ since $x$ is an indeterminate. Let $u_0, \ldots, u_m$ be $m+1$ distinct orbit elements. For $0\le i \le m$, the functional equation~\eqref{eq-func-1D} specializes into \[ K(x)Q(u_i)=1- \sum_{k=-m}^{-1}u_i^k G_k. \] Note that $K(u_i)=K(x)$, since $K(x)=1-tS(x)$ and $S(u_i)=S(x)$. We can eliminate the $m$ series $G_k$ by taking an appropriate linear combination of our $m+1$ equations, namely: \begin{align} K(x) \sum_{i=0}^m \frac{u_i^m}{\prod_{j\not = i} (u_i-u_j)}Q(u_i) &=\sum_{i=0}^m \frac{u_i^m}{\prod_{j\not = i} (u_i-u_j)} - \sum_{k=-m}^{-1}G_k \sum_{i=0}^m \frac{u_i^{k+m}}{\prod_{j\not = i} (u_i-u_j)}\nonumber\\ &= 1\label{1D-lc-bis} \end{align} by Lemma~\ref{lem:Lagrange}. We have thus exhibited $\binom{m+M}{m+1}$ section-free equations, each involving $m+1$ orbit equations, but we still need to prove that they generate all section-free equations. So let us take a generic section-free equation, say \[ \sum_{i=0}^{m+M-1} \alpha_i K(x) Q(u_i)= \sum_{i=0}^{m+M-1} \alpha_i \left(1-\sum_{k=-m}^{-1}u_i^k G_k\right) =\sum_{i=0}^{m+M-1} \alpha_i , \] where $u_0, u_1, \ldots, u_{m+M-1}$ are now all orbit elements. By subtracting a number of versions of~\eqref{1D-lc-bis} (with well chosen $u_i$'s and well chosen weights), we can assume that this equation only involves (at most) $m$ of the $u_i$'s, say $u_1, \ldots, u_m$. Then saying that this equation is section-free means that for all $k$ in $\llbracket -m, -1\rrbracket$, \[ \sum_{i=1}^m \alpha_i u_i^k=0. \] But the determinant of this system is not zero (since the $u_i$'s are distinct), and thus all $\alpha_i$'s must be zero. \medskip We now go on with the proof of Proposition~\ref{prop:dim1-alg}.
The equation $S(X)=S(x)$, written as a polynomial in $\bar x$ and $X$, reads \[ \bar x^M \sum_{i\in \mathcal S} w_i X^{m+i}=X^m \sum_{i\in \mathcal S}w_i \bar x^{M-i}. \] The number of solutions $X$ that are fractional power series in $\bar x$ is the degree in $X$ of the above polynomial, once evaluated at $\bar x=0$ (see again~\cite[Prop.~6.1.8]{stanley-vol2}), hence $m$. From now on we denote these roots by $x_1, \ldots, x_m$; it is clear that $x$ is not among them, and we denote $x_0=x$. \medskip We now write the section-free equation~\eqref{1D-lc-bis} with $u_i=x_i$, and isolate $Q(x_0)=Q(x)$: \begin{equation}\label{F-iso} Q(x)+ \prod_{j=1}^m (1-\bar x x_j) \sum_{i=1}^m \frac{x_i^m}{\prod_{0\le j\not = i \le m} (x_i-x_j)}Q(x_i) = \frac{\prod_{j=1}^m (1-\bar x x_j)}{K(x)}. \end{equation} Comparing with~\eqref{1D-gen} shows that we have to prove that the second term in the left-hand side, once expanded as a series in $t$, only contains negative powers of $x$. In the coefficient of $Q(x_i)$, the term $(1-\bar x x_i)$ coming from the numerator gets simplified with the term $(x_i-x_0)=(x_i-x)=-x(1-\bar x x_i)$ coming from the denominator. Hence the least common denominator of the coefficients of all $Q(x_i)$ is the Vandermonde determinant in $x_1, \ldots, x_m$. We can thus rewrite the second term as follows: \begin{multline} \prod_{j=1}^m (1-\bar x x_j) \sum_{i=1}^m \frac{x_i^m}{\prod_{0\le j\not = i \le m} (x_i-x_j)}Q(x_i) =- \bar x \sum_{i=1}^m \left(x_i^m Q(x_i) \prod_{1\le j\not = i \le m} \frac{ 1-\bar x x_j} {x_i- x_j}\right)\label{vdm} \\ =\frac {\bar x}{\prod_{1\le k <\ell \le m} (x_k-x_\ell)}\sum_{i=1}^m\left( (-1)^i x_i^m Q(x_i) \prod_{1\le j\not = i \le m} (1-\bar x x_j) \prod_{ \substack{1\le k < \ell\le m \\ k, \ell\not = i}} (x_k-x_\ell)\right).
\end{multline} The sum over $i$ is easily checked to be an antisymmetric expression in $x_1, \ldots, x_m$. More precisely, if we exchange in this sum $x_a$ and $x_{a+1}$, the summands involving $Q(x_a)$ and $Q(x_{a+1})$ are exchanged, and their signs change (because of the factor $(-1)^i$), and for $i \not \in\{a, a+1\}$ the sign of the summand involving $Q(x_i)$ changes (because of the factor $(x_a-x_{a+1})$ occurring in the rightmost product). Thus, dividing the sum over $i$ by the Vandermonde determinant in $x_1, \ldots, x_m$ gives a series in $t$ with \emph{polynomial} coefficients in $\bar x, x_1, \ldots, x_m$. Hence, once expanded in $t$ and~$\bar x$, the right-hand side of~\eqref{vdm} contains only negative powers of $x$ (because the $x_i$'s contain no positive power of $x$ and there is a factor $\bar x$). We now return to~\eqref{F-iso}, which we expand in powers of $t$ and $\bar x$. The expression~\eqref{1D-gen} of $Q(x)$ follows. \medskip Now assume $M=1$. Then $x_1, \ldots, x_m$ are \emph{all} roots of $S(X)=S(x)$ except $X=x$. That is, \[ \frac {S(X)-S(x)}{X-x}= w_1 \prod_{j=1}^m (1-x_j/X). \] Taking the limit as $X\rightarrow x$ gives \begin{equation}\label{S-prime} S'(x)=w_1\prod_{j=1}^m (1-\bar x x_j). \end{equation} Substituting into~\eqref{1D-gen} gives~\eqref{1D-M1}. \end{proof} \noindent{\bf Why Proposition~\ref{prop:dim1-alg} implies Proposition~\ref{prop:dim1-standard}.} We now derive from~\eqref{1D-gen} the standard expression~\eqref{F-dim1-standard}. We start from the factorization~\eqref{K-fact-1D} of the kernel. It gives the following partial fraction decomposition in $x$: \[ \frac{1}{K(x)}= -\frac 1 {tw_M} \sum_{i=1}^m \frac{\bar x X_i^m }{(1-\bar x X_i) \prod_{j\not = i}(X_i-X_j)} +\frac 1 {tw_M} \sum_{i=m+1}^{m+M}\frac {X_i^{m-1}} {(1-x/X_i)\prod_{j\not = i}(X_i-X_j)}.
\] The expansion in $\bar x$ of the term $A(\bar x):=\prod_{j=1}^m (1-\bar x x_j)$ only involves non-positive powers of $x$, hence~\eqref{1D-gen} implies \begin{equation}\label{QA} Q(x)= \frac 1 {tw_M} [x^{\ge}]A(\bar x) \sum_{i=m+1}^{m+M}\frac {X_i^{m-1}} {(1-x/X_i)\prod_{j\not = i}(X_i-X_j)}. \end{equation} Recall that, as $x_1, \ldots, x_m$ themselves, $A(\bar x)$ is a fractional power series in $\bar x$ with coefficients in~$\mathbb{F}$, say $A(\bar x)=\sum_{n\ge 0} a_n \bar x^{n/p}$, for a positive integer $p$. In fact we can take $p=1$. Indeed, by~\cite[Prop.~6.1.6]{stanley-vol2}, for $1\le i\le m$, every conjugate of $x_i$ over the field $\mathbb{F}((\bar x))$ of Laurent series in $\bar x$ is one of the $x_j$'s, with $1\le j\le m$; hence $\prod_{i=1}^m (u-x_i)$ is a product of minimal polynomials over $\mathbb{F}((\bar x))$, and thus only involves integer powers of $\bar x$ in its expansion. Now let us return to~\eqref{QA}, and focus on the term $A( \bar x)/(1-x/X_i)$. Recall that for $i>m$, $X_i$ is a Puiseux series in $t$, infinite at $t=0$. Thus $1/X_i$ is a fractional power series in $t$, vanishing at $t=0$, and hence $A(1/X_i)$ is also a fractional series in~$t$. Moreover, in the ring of fractional series in $t$ with coefficients in $\mathbb{F}[[\bar x]]$, we have \begin{align*} [x^{\ge}]\frac{A(\bar x)}{1-x/X_i}& = [x^{\ge}]\left( \sum_{\ell\ge 0} \frac {x^\ell}{X_i^\ell} \sum_{n\ge 0} a_n \bar x^{n}\right) \\ &=\sum_{n\ge 0} a_n\sum_{\ell\ge n} \frac {x^{\ell-n}}{X_i^\ell} \\ &=\sum_{n\ge 0} a_n \frac{1}{X_i^n(1-x/X_i)} \\ &=\frac{A(1/X_i)}{1-x/X_i}. \end{align*} Returning to~\eqref{QA}, this gives: \[ Q(x)= \frac 1 {tw_M} \sum_{i=m+1}^{m+M}\frac {X_i^{m-1} A(1/X_i)} {(1-x/X_i)\prod_{j\not = i}(X_i-X_j)}.
\] Thus it remains to determine $A(1/X_i)$ when $X_i$ is one of the roots of the kernel that diverge at $t=0$. That is, we have to know the values of $x_1, \ldots, x_m$ when $x$ is $X_i$. Recall the definition of these $x_j$: they are power series in (a rational power of) $\bar x$, satisfying $S(x_j)=S(x)$. Specializing this at $\bar x=1/X_i$ shows that when $x=X_i$, the series $x_1, \ldots, x_m$ are power series in (a fractional power of) $t$, satisfying $S(X_i)=S(x_j)$. But $X_i$ cancels the kernel $1-tS$, hence the $x_j$ are also roots of the kernel, and since they must be finite at $t=0$, they are $X_1, \ldots, X_m$. This holds for any $X_i$ with $i>m$. Hence, \begin{align*} Q(x)&= \frac 1 {tw_M} \sum_{i=m+1}^{m+M}\frac {X_i^{m-1}} {(1-x/X_i)\prod_{j\not = i}(X_i-X_j)} \prod_{j=1}^m (1-X_j/X_i) \\ &= \displaystyle \frac 1 {tw_M} \sum_{i=m+1}^{m+M}\frac {1} {(X_i-x)\prod_ {j>m, j\not = i}(X_i-X_j)} \end{align*} where we recognize the partial fraction expansion of \[ -\frac 1 {tw_M} \prod_{i=m+1}^{m+M}\frac{1}{x-X_i}. \] This gives the second expression in~\eqref{F-dim1-standard}. \section{Two-dimensional Hadamard walks} \label{sec:hadamard} Following~\cite{BoBoKaMe16}, we say that a 2-dimensional model $\mathcal S$ is \emph{Hadamard} if its step polynomial can be written as: \begin{equation}\label{S-Hadamard} S(x,y)=U(x)+V(x)T(y), \end{equation} for some Laurent polynomials $U$, $V$ and $T$. Some examples are shown in Figure~\ref{fig:hadamard}. When $T(y)=y+\bar y$, the model has small variations along the $y$-axis and is symmetric with respect to the $x$-axis. It was proved in~\cite{bousquet-versailles,bousquet-petkovsek-knight} that the associated generating function\ $Q(x,y)$ is always D-finite. This holds in fact for \emph{all} two-dimensional Hadamard models, whatever $T(y)$ is. We provide two proofs, one based on a simple projection argument, the other on the method of this paper. \begin{figure}[t!]
\centering \includegraphics[scale=0.8]{hadamard} \caption{Some Hadamard models. The series $Q(x,y)$ is D-finite for all of them, and given as the non-negative part of a rational function for those that have small forward steps (the leftmost two).} \label{fig:hadamard} \end{figure} \begin{Proposition}\label{prop:Hadamard-product} Consider a Hadamard model with step polynomial given by~\eqref{S-Hadamard}, and let $\mathcal U$, $\mathcal V$ and $\mathcal T$ be the subsets of $\mathbb{Z}$ with generating polynomials $U(x)$, $V(x)$ and $T(y)$ respectively. Let $C_1(x,v;t)$ be the generating function\ of walks on $\mathbb{N}$, starting from $0$ and taking steps in the multiset $\mathcal U \cup \mathcal V$ (steps in $\mathcal U \cap \mathcal V$ occur twice), counted by the length (variable $t$), the position of the endpoint ($x$), and the number of steps in $\mathcal V$ ($v$). Let $C_2(y;v)$ be the generating function\ of walks on $\mathbb{N}$, starting from $0$ and taking steps in $\mathcal T$, counted by the length ($v$) and the endpoint ($y$). Then $C_1(x,v;t)$ and $C_2(y;v)$ are algebraic, and the generating function\ of quadrant walks with steps in $\mathcal S$ is \[ Q(x,y;t) =\left. C_1(x,v;t) \odot_v C_2(y;v)\right|_{v=1}, \] where $\odot_v$ denotes the Hadamard product in $v$, defined by $\sum a_n v^n\odot_v \sum b_n v^n = \sum a_n b_n v^n$. In particular, $Q(x,y;t)$ is D-finite. \end{Proposition} \begin{proof} The proof is the same as in~\cite[Sec.~5]{BoBoKaMe16}, but generalized (in a harmless fashion) to walks with arbitrary steps. It goes by projecting quadrant walks along the $x$-axis, and ``decorating'' steps of $\mathcal V$ in this 1D walk with steps of a ``vertical'' walk with steps in $\mathcal T$; we omit the details. The Hadamard product of algebraic (and in fact, of D-finite) series is known to be D-finite~\cite{lipshitz-diag}.
\end{proof} The approach of this paper works systematically in the Hadamard case, and provides the solution as the positive part of an algebraic (sometimes rational) series, often more explicitly than the above solution. In the case of small steps, 16 of the 19 models solvable by the method of this paper (the leftmost branch in Figure~\ref{fig:class-2D}) are Hadamard. The three remaining ones are shown below. \smallskip \begin{center} \begin{tabular}{ccc} $ \diagr{N,SE,W}$ & \hskip 4mm $ \diagr{SE,N,E,S,W,NW}$ & \hskip 4mm$\diagr{E,W,NW,SE} $ \end{tabular} \end{center} \smallskip Consider a Hadamard model $\mathcal S$. Let $-m$ (resp. $M$) be the valuation (resp. degree) of $S(x,y)$ in $x$, and write similarly $-m'$ and $M'$ for the valuation and degree in $y$. In other words, $-m$ (resp. $M$) is the smallest (resp. largest) move in the $x$-direction, and similarly for $m'$ and $M'$. We assume $m, m'\ge 0$ and $M, M' >0$. The solution given below has strong analogies with the 1-dimensional case of Proposition~\ref{prop:dim1-alg}. \begin{Proposition}\label{prop:hadamard} The equation $S(x,y)=S(X,y)$, solved for $X$, admits $m+M$ solutions (including $x$ itself), which can be seen as Puiseux series in $\bar x$ with coefficients in an algebraic closure of $\mathbb{Q}(y)$ (below we take Puiseux series in $\bar y$). We denote them by $x_0(y), \ldots, x_{m+M-1}(y)$, with $x_0(y)=x$. Exactly $m$ of them, say $x_1(y), \ldots, x_m(y)$, do not involve positive powers of $x$. The equation $S(x,y)=S(x,Y)$, now solved for $Y$, reads $T(y)=T(Y)$. It admits $m'+M'$ solutions (including $y$ itself), which can be seen as Puiseux series in $\bar y$ with coefficients in $\mathbb{C}$. We denote them by $y_0, \ldots, y_{m'+M'-1}$, with $y_0=y$. Exactly $m'$ of them, say $y_1, \ldots, y_{m'}$, do not involve positive powers of $y$.
The orbit of $(x,y)$ consists of all pairs $(x_i,y_j)$, for $i\in\llbracket 0, m+M-1\rrbracket$ and $j\in\llbracket 0, m'+M'-1\rrbracket$. The series $Q(x,y)$ reads \begin{equation} \label{Q-Hadamard1} Q(x,y)= [x^\ge y^{\ge}]\frac{\prod_{i=1}^m(1-\bar x x_i(y))\prod_{j=1}^{m'}(1-\bar y y_j)}{K(x,y)}, \end{equation} where the right-hand side is expanded first in powers of $t$, then $\bar x$, and finally $ \bar y$. The extraction of the non-negative part in $x$ can be done explicitly, and yields: \begin{equation} \label{Q-Hadamard2} Q(x,y)= [ y^{\ge}]\frac{\prod_{i=1}^m(1-\bar x X_i(y))\prod_{j=1}^{m'}(1-\bar y y_j)}{K(x,y)} = -[ y^{\ge}] \frac{\prod_{j=1}^{m'}(1-\bar y y_j)}{tB_M(y) \prod_{i=m+1}^{m+M}(x-X_i)}, \end{equation} where $X_1(y), \ldots, X_m(y)$ are the roots (in $x$) of $1-tS(x,y)$, seen as Puiseux series in $t$ with coefficients in the algebraic closure of $\mathbb{Q}(y)$, that are finite at $t=0$, and $X_{m+1}, \ldots, X_{m+M}$ are the other ones. The polynomial $B_M(y)$ is the coefficient of $x^M$ in $S(x,y)$. If $M=1$, then the derivative of $S(x,y)$ with respect to $x$ factors as \[ S_x(x,y)= B_1(y) \prod_{i=1}^m (1-\bar x x_i(y)). \] Similarly, if $M'=1$, then \[ T_y(y)= \prod_{j=1}^{m'} (1-\bar y y_j). \] This simplifies the above expressions. In particular, when all forward steps are small ($M=M'=1$), we can write $Q(x,y)$ as the non-negative part of a simple rational function: \begin{equation}\label{M=Mp=1} Q(x,y) = [x^{\ge}y^{\ge}] \frac{S_x(x,y)T_y(y)}{B_1(y)K(x,y)}. \end{equation} \end{Proposition} \begin{proof} The statements dealing with roots are in essence one-dimensional, and follow from Proposition~\ref{prop:dim1-alg} since we allowed weights in the previous section. We next want to build the orbit of $(x,y)$.
By definition of the $x_i$'s and $y_j$'s we have $(x_i,y) \approx (x,y) \approx (x,y_j)$ for all $i$ and $j$. Now \begin{align*} S(x_i,y_j)&=U(x_i)+V(x_i)T(y_j) \\ &= U(x_i)+V(x_i) T(y) \ \ \hbox{ by definition of } y_j \\ &= S(x_i,y). \end{align*} Thus $(x_i,y_j)\approx (x_i,y)$, and all pairs $(x_i, y_j)$ are in the orbit. In this collection, every element $(x_i,y_j)$ is 1-adjacent to $m+M-1$ other elements, and 2-adjacent to $m'+M'-1$ other elements, hence the orbit is complete (Lemma~\ref{lem:ind}). The functional equation has the following general form (see~\eqref{eq:quadrant}): \[ K(x,y)Q(x,y)=1- \sum_{k=1}^m \bar x^k R_k(y)- \sum_{\ell=1}^{m'} \bar y^\ell S_\ell(x), \] for some series $R_k(y)$ and $S_\ell(x)$. A similar equation holds with $(x,y)$ replaced by any element $(x_i,y_j)$ of the orbit. The fact that the orbit is a Cartesian product allows us to construct a section-free equation by mimicking the argument that led to~\eqref{1D-lc-bis}: \[ K(x,y) \left( \sum_{i=0}^m \sum_{j=0}^{m'}\frac{x_i^m y_j^{m'} \, Q(x_i, y_j)}{\prod_{0\le k \not= i \le m} (x_i-x_k) \prod_{0\le \ell \not= j \le m'} (y_j-y_\ell)} \right) =1. \] Equivalently, after isolating $Q(x_0,y_0)=Q(x,y)$: \begin{multline*} Q(x,y)- \bar y \sum_{j=1}^{m'} y_j^{m'}Q(x,y_j) \prod_{1\le \ell \not = j \le m'} \frac{1-\bar y y_\ell}{y_j-y_\ell} -\bar x \sum_{i=1}^m x_i^m Q(x_i,y) \prod_{1\le k\not = i \le m} \frac{1-\bar x x_k}{x_i-x_k} \\ +\bar x \bar y \sum_{i=1}^m\sum_{j=1}^{m'} x_i^m y_j^{m'}Q(x_i,y_j) \prod_{1\le k\not = i \le m} \frac{1-\bar x x_k}{x_i-x_k} \prod_{1\le \ell\not = j\le m'} \frac{1-\bar y y_\ell}{y_j-y_\ell} \\ = \frac{\prod_{i=1}^m (1-\bar x x_i)\prod_{j=1}^{m'} (1-\bar y y_j)}{K(x,y)}.
\end{multline*} We now expand the coefficient of $t^n$ in this identity in powers of $\bar x$ (with coefficients in the field of Puiseux series in $\bar y$), and extract the non-negative powers of $x$. The coefficients of the first two terms on the first line (those involving $Q(x,y)$ and $Q(x,y_j)$) are clearly non-negative in $x$. By recycling our analysis of~\eqref{vdm}, we see that the coefficient of $t^n$ in the third term (involving $Q(x_i,y)$) is a \emm polynomial, in $y$, $\bar x$, $x_1, \ldots, x_m$, multiplied by $\bar x$, and thus only involves negative powers of $x$ and does not contribute. A similar argument shows that the second line does not contribute either. We are thus left with \[ Q(x,y)- \bar y \sum_{j=1}^{m'} y_j^{m'}Q(x,y_j) \prod_{\ell \not = j \in \llbracket1, m'\rrbracket} \frac{1-\bar y y_\ell}{y_j-y_\ell} = [x^\ge] \frac{\prod_{i=1}^m (1-\bar x x_i)\prod_{j=1}^{m'} (1-\bar y y_j)}{K(x,y)}. \] The symmetry argument applied earlier to~\eqref{vdm} shows that the sum over $j$ is a series in $t$ whose coefficients are polynomials in $x$, $\bar y, y_1, \ldots, y_{m'}$. Hence a final expansion in powers of $\bar y$, followed by the extraction of non-negative powers of $y$, gives the first expression~\eqref{Q-Hadamard1} of $Q(x,y)$. The second one, that is~\eqref{Q-Hadamard2}, follows by combining the one-dimensional results of Propositions~\ref{prop:dim1-standard} and~\ref{prop:dim1-alg}. Indeed, Proposition~\ref{prop:dim1-alg} shows that \[ [x^\ge ]\frac{\prod_{i=1}^m(1-\bar x x_i(y))}{K(x,y)} \] counts walks with steps in $\mathcal S$ confined to the half-plane $\{(i,j): i\ge 0\}$, and Proposition~\ref{prop:dim1-standard} gives an alternative expression for this series.
The rest of the proof follows the same lines as the end of the proof of Proposition~\ref{prop:dim1-alg} (see in particular~\eqref{S-prime}). \end{proof} \medskip\noindent {\bf Example: a Hadamard model with small forward steps.} Take $\mathcal S=\{10,\bar11, \bar1\bar2\}$. The step polynomial is \[ S(x,y)= x+\bar x (y+\bar y^2)=U(x)+V(x)T(y) \] with $U(x)=x$, $V(x)=\bar x$ and $T(y)=y+\bar y^2$, so this is a Hadamard model. Moreover, the forward steps are small, so that the simple formula~\eqref{M=Mp=1} holds: \[ Q(x,y) = [x^{\ge} y^{\ge}]\frac{(1-\bar x^2 (y+\bar y^2))(1-2\bar y^3)}{1-t(x+\bar x (y+\bar y^2))}. \] The number of walks of length $n$ ending at $(i,j)$ is non-zero if and only if $n=i+2j+6m$ for some $m$, in which case \begin{equation}\label{hadamard-sol} q(i,j;n)=\frac{(i+1)(j+1) n!}{m! (2m+j+1)! (3m+i+j+1)!}. \end{equation} $\hfill{\Box}$ \medskip \noindent{\bf Example: a Hadamard model with a large forward step.} Let us now reverse the above steps. The step polynomial becomes \[ S(x,y)=\bar x +x(\bar y+y^2) \] and is of course still Hadamard. With the notation of Proposition~\ref{prop:hadamard}, $m=m'=1$, \[ x_1(y)=\frac{\bar x}{\bar y+y^2} \qquad \hbox{and} \qquad y_{1}= \frac{-1+ \sqrt{4\bar y^3+1}}{2\bar y}. \] Indeed $y_1$ is a power series in $\bar y$, while its conjugate root $y_2$ contains a term $-y$ in its expansion. The two expressions of Proposition~\ref{prop:hadamard} read: \[ Q(x,y)= [x^\ge y^\ge ]\frac{(1-\bar x x_1(y))(1-\bar y y_1)}{K(x,y)} =- [ y^\ge ] \frac{\bar x(1-\bar y y_1)}{t(\bar y + y^2) (1-\bar x X_2)}, \] with \[ X_2= \frac{1+\sqrt{1-4t^2(\bar y+y^2)}}{2t(\bar y+y^2)}.
\] As before, we expand the right-hand side first in $t$, then $\bar x$, then $\bar y$. \section{Quadrant walks with steps in \texorpdfstring{$\boldsymbol{\{-2,-1,0, 1\}^2}$}{\{-2,-1,0,1\}} } \label{sec:m21} In this section, we explore systematically all models obtained by taking $\mathcal S$ in $\{-2,-1,0, 1\}^2 \setminus\{(0,0)\}$, with the (ultimate) objective of reaching a classification similar to that of quadrant walks with small steps (Figure~\ref{fig:class-2D}). Our results are summarized in Figure~\ref{classificationm2}. In Section~\ref{sec:final} we discuss the classification of orbits (not of generating functions!) for models in $\{-1,0, 1,2\}^2 \setminus\{(0,0)\}$. \begin{figure}[htb] \begin{center}\edgeheight=7pt\nodeskip=2.23em\leavevmode \tree{quadrant models: 13 110} { \tree{|orbit| $<\infty$: $13 + {227}$} { \tree{OS $\not = 0$: ${4 + {227}}$}{\tree{ \begin{tabular}{c} \textbf{D-finite}\\ Sec.~\ref{sec:works}\end{tabular}}{}} \tree{OS $=0$: {9}}{\tree{ \begin{tabular}{c} \textbf{D-finite?}\\ Sec.~\ref{sec:interesting} \end{tabular} }{}}} \tree{\hskip -2mm|orbit| $=\infty$: {12 870}}{ \tree{ $\alpha$ rational: 16}{\tree{ \begin{tabular}{c}\textbf{non-D-finite?}\\ Sec.~\ref{sec:embarassing}\end{tabular} }{}} \tree{$\alpha$ irrational: 12 854} {\tree{ \begin{tabular}{c}\textbf{non-D-finite}\\ Sec.~\ref{subsec:exponent} \end{tabular} }{}} }} \end{center} \caption{Partial classification of quadrant walks with steps in $\{-2,-1,0,1\}^2$, when at least one large backward step is allowed. The approach of this paper solves the 231 models on the leftmost branch, including 227 Hadamard models.} \label{classificationm2} \end{figure} \subsection{The number of relevant models} We first proceed as in~\cite[Sec.~2]{BoMi10} in order to count, among the $2^{15}$ possible models (Figure~\ref{fig:15}), those that are really distinct and relevant.
Clearly, we do not want to consider separately two models that only differ by an $x/y$-symmetry, as such models are isomorphic. Moreover, for certain models, forcing walks to lie in some half-plane automatically forces them to remain in the first quadrant. This happens, for instance, for $\mathcal S=\{\nearrow, \uparrow, \swarrow\}$ and the right half-plane. Half-plane models are essentially 1-dimensional and thus have an algebraic generating function, which can be determined in an automatic fashion (Proposition~\ref{prop:dim1-standard}). \begin{figure}[htb] \centering \includegraphics{15steps} \caption{The 15 allowed steps.} \label{fig:15} \end{figure} Using the same arguments as in~\cite[Sec.~2]{BoMi10}, we first determine the number of step sets $\mathcal S$ that contain at least one $x$-positive, one $x$-negative, one $y$-positive and one $y$-negative step. More precisely, we count such sets by their cardinality. An inclusion-exclusion argument gives their generating polynomial as: \[ P_1(z)= (1+z)^{15}-2(1+z)^{11}-2(1+z)^7+2(1+z)^3+(1+z)^8+2(1+z)^5+(1+z)^3-2(1+z)^2-2(1+z)+1. \] The term $(1+z)^{15}$ counts all step sets, while $(1+z)^{11}$ counts those that contain no $x$-positive step, $(1+z)^7$ those that contain no $x$-negative step, $(1+z)^3$ those that contain neither an $x$-positive nor an $x$-negative step, and so on. We refer to~\cite[Sec.~2]{BoMi10} for a detailed argument. Then, we must exclude sets in which no step belongs to $\mathbb{N}^2$. This leaves fewer step sets, counted by: \[ P_2(z)=P_1(z) - \big((1+z)^{12} -2(1+z)^{10} +(1+z)^8\big). \] We also do not wish to consider step sets such that all walks confined to the right half plane $x \ge 0$ are automatically quadrant walks. As in the case of small steps, this means that all steps $(i,j)$ of $\mathcal S$ satisfy $j\ge i$; we then say that the model is upper diagonal.
The generating polynomial of such sets, satisfying the above conditions (steps in all directions, at least one step in $\mathbb{N}}%{\mbox{\bbold N}^2$) is \[ z\left( (1+z)^8-(1+z) ^5\right), \] where the factor $z$ accounts for the step $(1,1)$, which is necessarily in such a set. Symmetrically, we need to exclude lower diagonal models, and avoid excluding twice the models that are both upper and lower diagonal. We are left with a collection of step sets counted by \[ P_3(z)=P_2(z)-2z \left( (1+z)^8-(1+z) ^5\right) + z(2z+z^2). \] Finally, if two models differ only by a diagonal symmetry, we do not want to consider them both. We thus have to count separately the models counted by $P_3$ that have an $x/y$ symmetry. {Mimicking the above argument, and including the symmetry constraint,} gives: \[ P_1^{\rm sym} (z)= (1+z)^3(1+z^2)^6 -(1+z)^2(1+z^2)^3 -(1+z)(1+z^2) +1, \] \[ P_2^{\rm sym} (z)= P_1^{\rm sym} (z)- \big( (1+z) ^2(1+z^2)^5-(1+z)^2(1+z^2)^3\big), \] and \[P_3^{\rm sym} = P_2^{\rm sym} - z(2z+z^2). \] We have thus restricted the collection of models that we have to study to 13 189 models, with generating polynomial \begin{multline*} \frac 1 2 \left(P_3(z)+P_3^{\rm sym} (z)\right)= z^{15}+9 z^{14}+57 z^{13}+236 z^{12}+691 z^{11}+1481 z^{10}+2374 z^9+2872 z^8\\ +2610 z^7+1749 z^6+826 z^5+248 z^4+35 z^3. \end{multline*} Among these, we know from~\cite{BoMi10} that those with small steps are counted by \[ 7\,{z}^{3}+23\,{z}^{4}+27\,{z}^{5}+16\,{z}^{6}+5\,{z}^{7}+{z}^{8}, \] and we are thus left with 13 110 models with at least one large backward step, counted by \[ z^{15}+9 z^{14}+57 z^{13}+236 z^{12}+691 z^{11}+1481 z^{10}+2374 z^9+2871 z^8+2605 z^7+1733 z^6+799 z^5+225 z^4+28 z^3. \] Note that no model in our collection is included in a half-plane. 
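The counts above (13\,189 distinct relevant models, 13\,110 of which use a large backward step) can be reproduced by a brute-force enumeration of the $2^{15}$ step sets. The following Python sketch, which is a sanity check and not part of the derivation, implements exactly the conditions of this subsection: steps in all four directions, at least one step in $\mathbb{N}^2$, exclusion of upper- and lower-diagonal models, and identification of $x/y$-symmetric pairs.

```python
from itertools import product

# The 15 allowed steps of Figure 15: {-2,-1,0,1}^2 minus (0,0).
STEPS = [s for s in product((-2, -1, 0, 1), repeat=2) if s != (0, 0)]

def relevant(S):
    """Steps in all four directions, at least one step in N^2,
    and neither an upper- nor a lower-diagonal model."""
    if not (any(i > 0 for i, j in S) and any(i < 0 for i, j in S)
            and any(j > 0 for i, j in S) and any(j < 0 for i, j in S)):
        return False
    if not any(i >= 0 and j >= 0 for i, j in S):
        return False
    if all(j >= i for i, j in S) or all(j <= i for i, j in S):
        return False
    return True

models = set()
for mask in range(1, 1 << 15):
    S = frozenset(s for k, s in enumerate(STEPS) if mask >> k & 1)
    if relevant(S):
        T = frozenset((j, i) for i, j in S)  # the x/y-symmetric model
        models.add(min(S, T, key=sorted))    # one representative per pair

large = sum(1 for S in models if any(-2 in s for s in S))
print(len(models), large)  # 13189 13110
```

The difference $13\,189-13\,110=79$ is the number of relevant small-step models, in agreement with~\cite{BoMi10}.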
This will allow us to apply Theorem~\ref{thm:theta} systematically. \subsection{The size of the orbit} \label{sec:size} \subsubsection{The excursion exponent} \label{subsec:exponent} Consider a model $\mathcal S$ in our collection. Recall that if the quantity $c$ defined in Theorem~\ref{thm:theta} cannot be written as $\cos \theta$ with $\theta\in \pi \mathbb{Q}$, then the orbit of $\mathcal S$ is infinite and $Q(x,y;t)$ is not D-finite. In order to decide if $c$ is of the requested form, we apply the following procedure, borrowed from~\cite{BoRaSa14}. \begin{enumerate}\itemsep=1em \item Compute a polynomial $P(C)$ that admits $c$ as a root. This is done by eliminating the variables $x,y$ and $u$ from the polynomial system comprised of (the numerators of): \[ S_x(x,y), \quad S_y(x,y), \quad C^2-\frac{S_{xy}(x,y)^2}{S_{xx}(x,y)S_{yy}(x,y)}, \quad 1-uxy . \] The final equation forces $x,y \neq 0$. This is done via a Gr{\"o}bner basis computation. \item Identify the irreducible factor $I(C)$ of $P(C)$ which admits $c$ as a root. To do this it is sufficient to determine the critical pair $(a,b)$, and thus $c$, to sufficient numerical precision. \item Decide whether $c$ can be written as $\cos \theta$, with $\theta \in \pi \mathbb{Q}$. Equivalently, decide if the solutions of $2c=z+1/z$ are roots of unity. To do this it is sufficient to examine whether the polynomial $R(z) := z^{\deg I} I(\frac{z+1/z}{2})$ has cyclotomic factors. \end{enumerate} The polynomials $R(z)$ which are constructed by running this algorithm on the 13\,110 step sets in our collection are all irreducible and have degree less than 72.
Thus, as the degree of the $k$th cyclotomic polynomial is \[ \phi(k) > \frac{k}{e^{\gamma}\log\log k + \frac{3}{\log \log k}},\] where $\gamma \approx 0.577$ is Euler's constant~\cite[Thm.~8.8.7]{bach-shallit}, to prove that the excursion exponent is irrational it is sufficient to show that $R(z)$ is not divisible by any of the first 349 cyclotomic polynomials; constructing cyclotomic polynomials is a routine task in computer algebra~\cite{ArMo11}. After performing this filtering step we conclude that 12\,854 models have an irrational excursion exponent, and thus an infinite orbit and a non-D-finite generating function. They form the rightmost branch in Figure~\ref{classificationm2}. \subsubsection{Detecting finite orbits} We are thus left with 256 step sets, each of which has a rational exponent $\alpha$. Among them we find 227 Hadamard models. Proposition~\ref{prop:hadamard} tells us that they have a finite orbit, of cardinality 6 or 9 depending on the sizes of the steps (Figure~\ref{fig:hadamardOrbits}). For each of them the excursion exponent is found to be $\alpha=-3$. \begin{figure}[htb] \centering \hspace{0.9in} \begin{minipage}{0.45\linewidth} \include{O4include} \end{minipage} \hspace{-0.5in} \begin{minipage}{0.45\linewidth} \include{O5include} \end{minipage} \caption{The possible orbits for two-dimensional Hadamard models with long steps in $\{-2,-1,0,1\}^2$, depending on whether there are steps with $-2$ in only one coordinate (left) or both coordinates (right). The convention for dashed and solid edges is the same as in Figure~\ref{fig:orbit-quadrangulations}. } \label{fig:hadamardOrbits} \end{figure} There are 29 models remaining. We apply to them the semi-algorithm of Section~\ref{sec:algo}, which detects 13 more models with a finite orbit, of cardinality 12 or 18. They are listed in Table~\ref{tab:finite}. Three distinct orbit structures arise, shown in Figure~\ref{fig:orbits}.
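Returning briefly to the filtering step of Section~\ref{subsec:exponent}: testing $R(z)$ for cyclotomic factors is easy to script. The sketch below (using SymPy; the function name is ours) also verifies that scanning the first 349 cyclotomic polynomials suffices for degrees below 72: since $\phi(k)\ge\sqrt{k/2}$, only finitely many $k$ need to be examined.

```python
from sympy import Poly, cyclotomic_poly, totient, symbols

z = symbols('z')

def has_cyclotomic_factor(R, max_k=349):
    """Return the list of k <= max_k such that Phi_k(z) divides R(z)."""
    P = Poly(R, z)
    return [k for k in range(1, max_k + 1)
            if P.rem(Poly(cyclotomic_poly(k, z), z)).is_zero]

# Sanity check of the bound: every k > 349 has phi(k) >= 72, so a polynomial
# of degree < 72 can only be divisible by one of the first 349 cyclotomics.
# (phi(k) >= sqrt(k/2) reduces the scan to k <= 2*72^2 = 10368.)
assert all(totient(k) >= 72 for k in range(350, 10369))

# Example: z^4 - z^2 + 1 is Phi_12 (its roots are the primitive 12th roots
# of unity), whereas z^2 - 3 has no cyclotomic factor.
print(has_cyclotomic_factor(z**4 - z**2 + 1))  # [12]
print(has_cyclotomic_factor(z**2 - 3))         # []
```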
\newcolumntype{C}{ >{\centering\arraybackslash} m{2cm} } \begin{table}[htb] \centering \begin{tabular}{c@{}C@{}cc|c@{}C@{}cc|c@{}C@{}cc} $g$&steps & orbit & $\alpha$&$g$& steps & orbit & $\alpha$ & $g$&steps & orbit & $\alpha$ \\ &&&&&&&\\ 1&$\diag{-21,-10,0-1,10}$&$O_{12}$ &$-4$ & 2&$\diag{-2-1,-1-2,01,10}$ &$\tilde O_{12}$ & $-5/2$ &1& $\diag{-2-1,-10,01,10}$ & $O_{12}$ & $-5/2$ \\ 2&$\diag{-21,-1-1,01,1-1}$&$O_{12}$& $-4$ & 3& $\diag{-20,-1-1,0-2,11}$ &$\tilde O_{12}$ & $-5/2$ &2&$\diag{-2-1,-11,0-1,11}$ & $ O_{12}$ & $-5/2$ \\ 2& $\diag{-21,-1-1,-10,01,1-1}$&$O_{12}$& $-4$ & 2&$\diag{-2-1,-1-2,-1-1,01,10} $ &$\tilde O_{12}$ & $-5/2$ &2&$\diag{-2-1,-10,-11,0-1,11}$ & $ O_{12}$ & $-5/2$ \\ 2&$\diag{-20,-21,1-1,10}$&$O_{18}$& -4 & 3& $ \diag{-20,-1-1,-10,0-2,0-1,11} $ &$\tilde O_{12}$ & $-5/2$ &2& $\diag{-2-1,-20,10,11}$ & $ O_{18}$ & $-7/3$ \\ &&&& 4& $ \diag{-2-1,-20,-1-2,-1-1,0-2,01,10,11}$ &$\tilde O_{12}$ & $-5/2$ \\ \end{tabular} \vskip 4mm \caption{The 13 non-Hadamard models with a finite orbit. Our method solves the ones on the left, proving that their generating function\ is D-finite (and transcendental). We conjecture that the 9 others are D-finite too, two of them being possibly algebraic {(the second and third in the last column)}. {We also give the excursion exponent $\alpha$, and the genus $g$ of the curve $K(x,y)$, which is $0$ or $1$ for small step models.} } \label{tab:finite} \end{table} \begin{figure}[htb] \centering \begin{minipage}{0.28\linewidth} \include{O1include} \end{minipage} \qquad \begin{minipage}{0.25\linewidth} \include{O2include} \end{minipage} \qquad \begin{minipage}{0.25\linewidth} \include{O3include} \end{minipage} \caption{The three finite orbit types which arise from non-Hadamard models in $\{-2,-1,0,1\}^2$. Two have cardinality 12, the third one {has} cardinality 18. 
We call these orbit structures, {from} left to right, $O_{12}$, $\tilde O_{12}$ and $O_{18}$.} \label{fig:orbits} \end{figure} \subsubsection{Sixteen models with a rational exponent but an infinite orbit} \label{sec:embarassing} For each of the remaining 16 models, listed in Table~\ref{tab:embarassing}, we ran our semi-algorithm by specializing $x=1$ and $y=2$ until we found at least 200 distinct orbit elements (the sum of the degrees of the polynomials in $\mathcal P$ --- or $\mathcal Q$ --- gives a lower bound on the size of the orbit). We found in each case minimal polynomials of degree over 100. The following proposition explains why. \begin{table}[htb] \centering \begin{tabular}{cCc|cCc|cCc} &steps & $\alpha$ && steps & $ \alpha$ && steps &$ \alpha$ \\ &&&&&\\ \#1 &$\diag{-21, 01, 1-2}$ & -5 & \#7&$\diag{-21, -11, 01, 1-1}$ & -7&\#13& $\diag{-21, -10, 0-1, 01, 1-2, 10}$ & -4\\ \#2&$\diag{-21, 1-2, 11}$ & -4&\#8&$\diag{-2-1, -1-1, 0-1, 11}$ & -11/5&\#14&$\diag{-21, -10, 01, 1-2, 1-1, 11}$ & -4\\ \#3&$\diag{-21, 1-1, 10}$ & -7&\#9& $\diag{-2-2, -20, 10, 11}$ & -7/3 &\#15& $\diag{-2-1, -11, 01, 1-2, 10, 11}$ &-3\\ \#4&$\diag{-21, 1-1, 11}$ & -5&\#10&$\diag{-2-2, -20, -1-1, -10, 10, 11}$& -7/3 &\#16&$\diag{-21, -10, -11, 0-1, 01, 1-2, 1-1, 10, 11}$ & -4 \\ \#5&$\diag{-2-1, 1-1, 11}$& -7/3&\#11&$\diag{-2-1, -1-1, 0-1, 01, 1-1, 11}$ & -5/2\\ \#6 &$\diag{-2-1, 10, 11}$ & -11/5&\#12& $\diag{-21, -11, 0-1, 01, 1-1, 11}$ & -4\\ \end{tabular} \vskip 4mm \caption{Sixteen models with a rational excursion exponent $\alpha$ and an infinite orbit.} \label{tab:embarassing} \end{table} \begin{Proposition} The $16$ models of Table~\ref{tab:embarassing} have an infinite orbit. \end{Proposition} \begin{proof} The proof is based on Proposition~\ref{prop:ab}, and mimics the proof used in the third example of Section~\ref{sec:ex}. 
For each model, we start from the positive critical point $(a,b)$, define $\Phi$ and $\Psi$ as in Section~\ref{sec:group}, and compute the expansion of $\Theta:=\Psi\circ \Phi$ to cubic order. There exists some integer $m>0$ such that $\Theta^m$ is the identity at first order (otherwise the excursion exponent would be irrational). Moreover, we observe that the quadratic term in $\Theta^m$ vanishes, but there is a non-zero cubic term. This implies that all elements $\Theta^{km}$ are distinct, so that the orbit is infinite. We give below the values of $a$, $b$, and $m$ (for model \#10 the value of $a$ is the positive root of $a^3=a+2$). Since models \#5, \#6, \#8 and \#12 are obtained from another model in the table by a reflection in the $x$-axis, we omit them (their orbits are infinite by Proposition~\ref{prop:sym}). However, the method works as well for them (with $b$ replaced by $1/b$, and $a$ and $m$ unchanged). \newcommand\Tstrut{\rule{0pt}{3.0ex}} \newcommand\Bstrut{\rule[-1.9ex]{0pt}{2.0ex}} \[\begin{array}{c||c|c|c|c|c|} \hbox{model} & \#1 & \#2 & \#3 &\#4 &\#7 \\ \hline (a,b)& ( 3^{1/2}, 3^{1/2}/ 2^{1/3}) & (1,1) &(1,1) & (2^{-1/3}, 3^{-1/2})& (1, 3^{-1/2}) \Tstrut \\ m &4 & 3 & 6 & 4&6 \end{array}\] \[\begin{array}{c||c|c|c|c|c|c|c|} \hbox{model} &\#9& \# 10 & \#11 &\#13 &\#14 &\#15 &\#16 \\ \hline (a,b)& (2^{1/3},1)& (a,1) & (1,\sqrt 2) &(\sqrt 2 , \sqrt 2)& (1,1) & (1,1) & (1,1)\Tstrut\\ m & 4& 4 & 3 & 3& 3& 2& 3 \end{array}\] \end{proof} For each model, we have tested D-finiteness experimentally, by generating $10\, 000$ coefficients of the series $Q(0,0)$, and trying to guess from them a linear recurrence relation for the coefficients or a {linear} differential equation for the {generating function}. The guessing procedure is detailed in Section~\ref{sec:interesting}. We could not find any recurrence nor {differential} equation, and are tempted to believe that $Q(x,y;t)$ is not D-finite for these 16 models. 
However, it must be noted that, for some models of Section~\ref{sec:interesting}, it takes more than $10\,000$ coefficients to guess a differential equation. \subsection{Solving models with a finite orbit} \label{sec:works} As written above, 227 of the 240 models that have a finite orbit are Hadamard. Our method applies systematically to them, as proved in Section~\ref{sec:hadamard}. In particular, all these models have a D-finite generating function, and, because they make small forward moves, $Q(x,y)$ is expressed as the non-negative part of a simple rational function (see~\eqref{M=Mp=1}). {The excursion exponent being~$-3$ in all cases, these series are transcendental~\cite{flajolet-context-free}.} We are left with the 13 models shown in Table~\ref{tab:finite}. For each of them, there exists a unique section-free equation, {in agreement with Conjecture~\ref{conj:section-free}} (up to a multiplicative factor, as usual). For the 4 models shown in the first column, this equation defines $Q(x,y)$ uniquely and we are able to extract it as the positive part of a rational series. In particular, these four series are D-finite (but transcendental, {because of the exponent $-4$}). {Details are given below, and we} work out detailed asymptotic behaviour of their coefficients in Section~\ref{sec:asympt}. For the remaining 9 models, the right-hand side of the section-free equation vanishes, so that this equation does not characterize $Q(x,y)$ (for a start, any constant is a solution). These models are the counterparts of the 4 algebraic models from the small step case, shown in the second branch of Figure~\ref{fig:class-2D}. Clearly they deserve a specific study, and we state conjectures regarding the nature of their generating functions\ in Section~\ref{sec:interesting}. \subsubsection{Case $\boldsymbol{\mathcal S=\{10, \bar 1 0, 0 \bar 1, \bar 2 1\}}$} \label{sec:solved1} This is model F, which we have studied as one of our examples in this paper. 
Our main result is stated in Proposition~\ref{prop:F}: \[ Q(x,y)= [x^{\ge} y^{\ge} ]{\frac { \left( {x}^{2}+1 \right) \left( x+y \right) \left( y-x \right) \left( {x}^{2}y-2\,x-y \right) \left( {x}^{3}-x-2\,y \right) }{{x}^{7}{y}^{3} \left( 1-t(x+\bar x+\bar x^2 y +\bar y) \right)}}. \] The excursion exponent $\alpha\equiv\alpha_e$, given by Theorem~\ref{thm:exponent}, is $-4$. The exponent of all quadrant walks -- that is, the exponent associated with the coefficients of $Q(1,1)$ -- can be determined using multivariate singularity analysis, and is found to be $\alpha_w=-4$. This is detailed for all four models shown on the left of Table~\ref{tab:finite} in Section~\ref{sec:asympt}. For comparison with the next cases, we recall that the orbit, given by~\eqref{orbit:hard1}, has type $O_{12}$. \subsubsection{Case $\boldsymbol{\mathcal S=\{01, 1\bar 1 , \bar 1 \bar 1, \bar 2 1\}}$} \begin{Proposition}\label{prop:12-a} For $\mathcal S=\{01, 1\bar 1 , \bar 1 \bar 1, \bar 2 1\}$, we have \[ Q(x,y)=[x^{\ge } y^{\ge }]{\frac { \left( {x}^{3}-2\,{y}^{2}-x \right) \left( {y}^{2}-x \right) \left( {x}^{2}{y}^{2}-{y}^{2}-2\,x \right) }{{x}^{5}{y}^{4} \left(1-t ( y+x\bar y +\bar x\bar y +\bar x^2 y)\right)} }, \] where the right-hand side is seen as a power series in $t$ with coefficients in $\mathbb{Q}[x, \bar x, y, \bar y]$. The coefficients are hypergeometric: $q(i,j;n)$ is zero unless $n$ is of the form $n=2i+j+4m$, in which case \[ q(i,j;n)= \frac{(i+1)(j+1)(i+j+2)n!(n+2)!}{m!(3m+2i+j+2)!(2m+i+1)!(2m+i+j+2)!}. \] The excursion exponent $\alpha_e$ and the walk exponent $\alpha_w$ are both $-4$. \end{Proposition} \begin{proof} The proof is very similar to the solution of Example F. The step polynomial is $S(x,y)= y+ x\bar y +\bar x \bar y +\bar x^2 y$.
All elements of the orbit belong to the extension of $\mathbb{Q}(x,y)$ generated by $\sqrt{ (x+y^2)^2 +4x^3y^2}$. More precisely, denoting \[ x_{1,2} = \frac{x+y^2 \pm \sqrt{ (x+y^2)^2 +4x^3y^2}}{2x^2}, \] the orbit has type $O_{12}$ and consists of the following 12 pairs: \begin{equation}\label{orbit-hard2} \begin{array}{ccc} (x,y) & (x_1, y) & (x_2, y) \\ (x,x\bar y) & (-\bar x_1,x\bar y) & (-\bar x_2,x\bar y) \\ (x_1, x_1 \bar y) &(-\bar x, x_1 \bar y) &(-\bar x_2, x_1 \bar y) \\ (x_2, x_2 \bar y) &(-\bar x, x_2 \bar y) &(-\bar x_1, x_2 \bar y) . \end{array} \end{equation} Note the similarities with the orbit~\eqref{orbit:hard1} obtained for model F. The functional equation reads: \[ (1-tS(x,y)) Q(x,y)=1 -t x\bar y Q(x,0) - t\bar x\bar y (Q_{0,-}(y)+Q(x,0)-Q(0,0)) -t\bar x^2 y \left( Q_{0,-}(y)+xQ_{1,-}(y)\right), \] where as before $x^iQ_{i,-}(y)$ is the generating function\ of walks ending at abscissa $i$. There is a unique section-free equation. To form it, the orbit equation associated with $(x',y')$ must be weighted by $\pm x'^2(x'_1-x'_2)$, where $(x'_1,y') \approx (x',y') \approx (x'_2, y')$ and $x_1'\not = x_2'$.
More precisely, the weights associated with the 12 above orbit elements are: \begin{equation}\label{weights-hard2} \begin{array}{rrr} x^2(x_1-x_2) & x_1^2 (x_2-x) & -x_2^2(x_1-x) \\ x^2(\bar x_2 -\bar x_1) & -\bar x_1^2( x+\bar x_2) & \bar x_2^2 (x+\bar x_1) \\ x_1^2(\bar x-\bar x_2) &\bar x^2(x_1+\bar x_2) &-\bar x_2^2(\bar x+x_1) \\ -x_2^2(\bar x-\bar x_1) &-\bar x^2(x_2+\bar x_1) &\bar x_1^2(\bar x+x_2) . \end{array} \end{equation} Again, note the similarities with~\eqref{weights:hard1}. We now divide the section-free equation by $x^2(x_1-x_2)$, so as to isolate $Q(x,y)$. This gives an equation similar to the one obtained with Example F (see~\eqref{eqA}): \begin{equation}\label{eqA-bis} Q(x,y)+\bar x_1 \bar x_2 Q(x,x\bar y) +A_1 +A_2 +A_3 +A_4 +A_5 = R(x,y), \end{equation} where $R(x,y)$ is the rational function occurring in Proposition~\ref{prop:12-a}, and each $A_i$ involves two of the series $Q(x',y')$ (again, as in Example F), chosen so that the expression of $A_i$ is symmetric in $x_1$ and $x_2$. More precisely, the orbit elements occurring in $A_1$ (resp.\ $A_2$, $A_3$, $A_4$, $A_5$) are $(x_1,y)$ and $(x_2,y)$ $\big($resp.\ $(-\bar x_1, x\bar y)$ and $ (-\bar x_2, x\bar y)$, $(x_1, x_1 \bar y)$ and $(x_2, x_2\bar y)$, $(-\bar x, x_1\bar y)$ and $ (-\bar x, x_2 \bar y)$, $(-\bar x_1, x_2 \bar y)$ and $(-\bar x_2, x_1 \bar y)\big)$.
We now examine the symmetric functions of the $x_i$'s and of their reciprocals. They are Laurent polynomials in $x$ and $y$, which are, respectively, negative in $x$ and non-positive in $y$: \[ x_1+x_2=\bar x+ \bar x^2y^2, \quad x_1 x_2 = -\bar x y^2, \] while \[ \bar x_1+\bar x_2= -\bar x-\bar y^2 \quad \hbox{and} \quad \bar x_1\bar x_2= -x\bar y^2. \] With this, we conclude that the series $A_i$ also have coefficients in $\mathbb{Q}[x,\bar x,y,\bar y]$, that $A_1$, $A_3$ and $A_4$ are negative in $x$, and that $A_2$ is negative in $y$. As in Example F, the case of $A_5$ is a bit trickier, due to the mixture of positive and negative powers of the $x_i$'s. Following the same lines as in Example F, one can prove that $A_5$ contains no monomial that would be non-negative in $x$ and $y$. The counterpart of Lemma~\ref{lem:extr} is that every monomial $\bar x^e y^f$ occurring in the expression $E_a$ defined by~\eqref{eq:Ea}, for the values of $x_1$ and $x_2$ here, satisfies $f\le 2e$. The simplicity of the coefficients $q(i,j;n)$ comes from the fact that the expansion of $S(x,y)^n=(1+\bar x^2)^n (y+x\bar y)^n$ in $x$ and $y$ has simple coefficients. The excursion exponent can be determined using Theorem~\ref{thm:exponent}, but it is more natural to start from the explicit expression of $q(0,0;4m)$, for which we derive: \[ q(0,0;4m)\sim \frac {4\sqrt 3}{27 \pi m^4} \left( \frac {16} 3\right)^{3m}. \] The asymptotic behaviour of the number of quadrant walks is determined in Section~\ref{sec:asympt}.
\end{proof} \subsubsection{Case $\boldsymbol{\mathcal S=\{01, 1\bar 1 , \bar 1 \bar 1, \bar 2 1, \bar 1 0\}}$} \begin{Proposition} \label{prop:model3} For $\mathcal S=\{01, 1\bar 1 , \bar 1 \bar 1, \bar 2 1, \bar 1 0\}$, we have: \[ Q(x,y)=[x^{\ge } y^{\ge }] {\frac { \left( {y}^{2}-x \right) \left( {x}^{2}{y}^{2}-xy-{y}^{2}-2 \,x \right) \left( {x}^{3}-xy-2\,{y}^{2}-x \right) }{ {x}^{5}{y}^{4}\left(1-t(y+x \bar y +\bar x \bar y + \bar x^2y + \bar x )\right)}}, \] where the right-hand side is seen as a power series in $t$ with coefficients in $\mathbb{Q}[x, \bar x, y, \bar y]$. The excursion exponent $\alpha_e$ and the walk exponent $\alpha_w$ are both $-4$. \end{Proposition} \begin{proof} This example is very close to the previous one, from which it only differs by one West step. The orbit is still given by~\eqref{orbit-hard2}, but with different values of $x_1$ and $x_2$: \[ x_{1,2}= \frac{x+xy+y^2\pm\sqrt{(x+xy+y^2)^2+4x^3y^2}}{2x^2}. \] The symmetric functions of the $x_i$'s, and of their reciprocals, have promising non-negativity properties (the same as in the previous example): \[ x_1+x_2=\bar x+\bar x y +\bar x^2y^2 , \quad x_1x_2=-\bar x y^2 , \quad \bar x_1 +\bar x_2=-\bar x-\bar y-\bar y^2 \quad \hbox{and} \quad \bar x_1 \bar x_2= -x\bar y^2. \] The functional equation differs from the previous one by the new term $-t\bar x Q_{0,-}(y)$ in the right-hand side. There is a unique section-free equation, with weights again given by~\eqref{weights-hard2}. Thus this equation is again~\eqref{eqA-bis}, with the same expressions of the series $A_i$. The series $Q(x,y)$ is extracted in the same way as in the previous example.
In particular, only one series, $A_5$, raises difficulties in the extraction procedure. They are solved as before by proving that $f\le 2e$ for every monomial $\bar x^e y^f$ occurring in the expression $E_a$ defined by~\eqref{eq:Ea}. The excursion exponent is determined from Theorem~\ref{thm:exponent}, and the walk exponent in Section~\ref{sec:asympt}. \end{proof} \subsubsection{Case $\boldsymbol{\mathcal S=\{10, 1 \bar 1, \bar 2 1, \bar 2 0\}}$} \label{sec:solved4} \begin{Proposition}\label{prop:hard-18} For $\mathcal S=\{10, 1 \bar 1, \bar 2 1, \bar 2 0\}$, we have: \[ Q(x,y)=[x^{\ge } y^{\ge }] \frac{(2-y)(x^3-y^2)(x^6y-3x^3y-x^3-y^2)(x^3-2y)}{x^9y^4(1-t(\bar x^2+\bar x^2y+x\bar y+x))}, \] where the right-hand side is seen as a power series in $t$ with coefficients in $\mathbb{Q}[x, \bar x, y, \bar y]$. The coefficients are hypergeometric: $q(i,j;n)$ is zero unless $n$ is of the form $n=i+3j+3m$, in which case \[ q(i,j;n)=\frac{(i+1)(j+1)(i+3j+4)\big((i+2j+2)(i+2j+3)+m(2i+3j+4)\big) n!(n+3)!}{m!(m+j+1)!(2m+i+2j+3)!(2m+i+3j+4)!}. \] The excursion exponent $\alpha_e$ and the walk exponent $\alpha_w$ are both $-5$. \end{Proposition} \begin{proof} The step polynomial is $S(x,y)=x+x\bar y+ \bar x^2+\bar x^2y$. All elements of the orbit belong to the extension of $\mathbb{Q}(x,y)$ generated by $\sqrt{y(y+4x^3)}$ and $\sqrt{1+4y}$. More precisely, let us define \begin{equation}\label{xu} x_{1,2}= \frac{y\pm \sqrt{y(y+4x^3)}}{2x^2} \quad \hbox{and} \quad u_{3,4}= \frac{1\pm\sqrt{1+4y}}{2y}.
\end{equation} Then the orbit consists of the following 18 pairs: \[ \begin{array}{ccc} (x,y) & (x_1,y) & (x_2, y) \\ (x, x^3\bar y) & (xu_3, x^3\bar y) & (xu_4, x^3\bar y) \\ (x_1, x_1^3\bar y) &(x_1u_3, x_1^3\bar y) &(x_1u_4, x_1^3\bar y) \\ (x_2, x_2^3\bar y) &(x_2u_3, x_2^3\bar y) &(x_2u_4, x_2^3\bar y) \\ (xu_3,u_3^3y) &(x_1u_3,u_3^3y) &(x_2u_3,u_3^3y) \\ (xu_4,u_4^3y)&(x_1u_4,u_4^3y)&(x_2u_4,u_4^3y) \end{array} \] and its structure is shown in Figure~\ref{fig:orbit18}. \begin{figure}[htb] \centering \scalebox{0.9}{\input{orbit18.pdf_t}} \caption{The orbit of $\mathcal S=\{10, 1 \bar 1, \bar 2 1, \bar 2 0\}$. The values $x_i$ and $u_i$ are given by~\eqref{xu}. } \label{fig:orbit18} \end{figure} The functional equation reads \[ (1-tS(x,y)) Q(x,y)=1 -tx\bar y Q(x,0) -t\bar x^2(1+y) \left( Q_{0,-}(y)+xQ_{1,-}(y)\right), \] and there is a unique section-free equation. To form it, the orbit equation associated with $(x',y')$ must be weighted by $\pm x'y' (x'_1-x'_2)(x'_3-x'_4)$, where \[ \left. {\atopfix{(x'_1,y'')}{(x'_2,y'')}} \right\} \approx (x',y') \approx (x',y'') \approx \left\{ {\atopfix{(x'_3,y'')}{(x'_4,y'')}} \right. \] and both $x'_1\not = x'_2$ and $x'_3 \not = x'_4$.
More precisely, the weights associated with the 18 above pairs are \begin{equation}\label{weights-18} \begin{array}{rrr} x^2y(x_1-x_2)(u_3-u_4) & - x_1^2y(x-x_2)(u_3-u_4) & x_2^2 y (x-x_1)(u_3-u_4) \\ -x^5\bar y(x_1-x_2)(u_3-u_4) & x^5\bar y u_3^2 (1-u_4)(x_1-x_2)& -x^5\bar y u_4^2(1-u_3)(x_1-x_2) \\ x_1^5\bar y(x-x_2)(u_3-u_4) & - x_1^5\bar y u_3^2(1-u_4)(x-x_2) & x_1^5\bar y u_4^2(1-u_3)(x-x_2)\\ -x_2^5\bar y (x-x_1)(u_3-u_4) & x_2^5\bar y u_3^2(x-x_1)(1-u_4) & -x_2^5\bar y u_4^2( x-x_1)(1-u_3) \\ -x^2u_3^5y(1-u_4)(x_1-x_2) &x_1^2u_3^5y (x-x_2)(1-u_4) &-x_2^2u_3^5y(x-x_1) (1-u_4)\\ x^2u_4^5y (1-u_3)(x_1-x_2)&-x_1^2u_4^5y(1-u_3)(x-x_2)&x_2^2u_4^5y(1-u_3)(x-x_1). \end{array} \end{equation} We now divide the section-free equation by $x^2y(x_1-x_2)(u_3-u_4)$, so as to isolate $Q(x,y)$. This gives: \begin{equation}\label{Qeq18} Q(x,y)-x^3\bar y^2 Q(x,x^3\bar y)+ A_1+A_2+A_3+A_4+A_5+A_6 = R(x,y), \end{equation} where $R(x,y)$ is the rational function occurring in Proposition~\ref{prop:hard-18} and each $A_i$ involves two or four instances of the series $Q$, as described below: \newcommand\Tstrut{\rule{0pt}{3.0ex}} \newcommand\Bstrut{\rule[-1.9ex]{0pt}{2.0ex}} \[ \begin{array}{cccccc} A_1 & A_2 &A_3 & A_4 &A_5 &A_6 \\ \hline (x_1,y)&(x_1, x_1^3\bar y)&(xu_3, x^3\bar y)&(xu_3,u_3^3y) &(x_1u_3, x_1^3\bar y)&(x_1u_3,u_3^3y)\Tstrut\\ (x_2,y)&(x_2, x_2^3\bar y)&(xu_4, x^3\bar y)&(xu_4,u_4^3y)&(x_2u_3, x_2^3\bar y)&(x_2u_3,u_3^3y)\\ &&&&(x_1u_4, x_1^3\bar y)&(x_1u_4,u_4^3y)\\ &&&&(x_2u_4, x_2^3\bar y)&(x_2u_4,u_4^3y) \end{array} \] Then each $A_i$ is a series in $t$ whose coefficients are polynomials in $x, \bar x, y, \bar y, x_1, x_2, u_3, u_4$.
The symmetric functions of the $x_i$'s (resp. $u_i$'s) are Laurent polynomials in $x$ and $y$, negative in $x$ (resp. in $y$): \[ x_1+x_2= \bar x^2y, \qquad x_1 x_2= -\bar x y, \qquad u_3+u_4=\bar y, \qquad u_3u_4=-\bar y. \] Hence each $A_i$ is a series in $t$ whose coefficients are Laurent polynomials in $x$ and $y$. We now want to extract the non-negative part, in $x$ and $y$, of~\eqref{Qeq18}. Clearly the second term is $y$-negative. Then the above properties, and the form~\eqref{weights-18} of the weights, imply that \begin{itemize} \item $A_1$, $A_2$, $A_5$, and $A_6$ are $x$-negative; \item $A_3$ is $y$-negative. \end{itemize} It remains to examine \[ A_4= \frac{-u_3^5(1-u_4)Q(xu_3, u_3^3y)+u_4^5(1-u_3)Q(xu_4, u_4^3y)}{u_3-u_4}. \] Since $Q(x,y)$ has polynomial coefficients in $x$ and $y$, it suffices to prove that for any $i,j \ge 0$, the expression \[ \frac{-u_3^5(1-u_4)u_3^i u_3^{3j}y^j+u_4^5(1-u_3)u_4^i u_4^{3j}y^j}{u_3-u_4}, \] which is a Laurent polynomial in $y$, is in fact $y$-negative. This is readily checked, using the fact that $u_3u_4=-\bar y$ and \[ E_a:=\frac {u_3^{a+1}- u_4^{a+1}}{u_3-u_4} \] is a polynomial in $\bar y$ of valuation $\lceil a/2\rceil$ (this is proved by induction on $a$, as $E_a=\bar y (E_{a-1}+E_{a-2})$). The expression of $Q(x,y)$ follows by extracting the non-negative part, in $x$ and $y$, of~\eqref{Qeq18}. The simplicity of the coefficients $q(i,j;n)$ comes from the fact that the expansion of $S(x,y)^n=(1+y)^n (x\bar y+\bar x^2)^n$ in $x$ and $y$ has simple coefficients. The excursion exponent can be computed from Theorem~\ref{thm:exponent}, but it is more natural to start from the explicit expression of $q(0,0;3m)$, from which we derive: \[ q(0,0;3m)\sim \frac {81}{32\, \pi m^5} \left( \frac {27} 4\right)^{2m}. \] The asymptotic behaviour of the number of quadrant walks is determined in Section~\ref{sec:asympt}.
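As an independent sanity check (a verification sketch of ours, not part of the proof), the closed form for $q(i,j;n)$ can be compared with a direct dynamic-programming enumeration of the quadrant walks:

```python
from math import factorial

STEPS = [(1, 0), (1, -1), (-2, 1), (-2, 0)]  # the model {10, 1-1, -21, -20}

def q_closed(i, j, n):
    """Closed form of Proposition prop:hard-18; zero unless n = i+3j+3m."""
    m, r = divmod(n - i - 3*j, 3)
    if r != 0 or m < 0:
        return 0
    num = ((i+1) * (j+1) * (i+3*j+4)
           * ((i+2*j+2) * (i+2*j+3) + m * (2*i+3*j+4))
           * factorial(n) * factorial(n+3))
    den = (factorial(m) * factorial(m+j+1)
           * factorial(2*m+i+2*j+3) * factorial(2*m+i+3*j+4))
    return num // den

def quadrant_walks(nmax):
    """table[n][(i,j)] = number of n-step quadrant walks from (0,0) to (i,j)."""
    table = [{(0, 0): 1}]
    for _ in range(nmax):
        new = {}
        for (x, y), c in table[-1].items():
            for dx, dy in STEPS:
                if x + dx >= 0 and y + dy >= 0:
                    new[x+dx, y+dy] = new.get((x+dx, y+dy), 0) + c
        table.append(new)
    return table

table = quadrant_walks(12)
for n in range(13):
    for i in range(14):
        for j in range(14):
            assert table[n].get((i, j), 0) == q_closed(i, j, n)
```

In particular $q(0,0;3)=1$ and $q(0,0;6)=6$, and both computations agree on all endpoints up to length $12$.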
\end{proof} \subsection{Nine interesting models with a finite orbit} \label{sec:interesting} For nine models, shown in the second and third columns of Table~\ref{tab:finite}, an interesting phenomenon occurs: the orbit is finite and the right-hand side of the unique section-free equation vanishes. These models come in two types, depending on whether they have an $x/y$-symmetry or not. They cannot be solved using the method of this paper, and we explore them \emph{experimentally}. \smallskip\noindent{\bf Questions.} For each of the nine models, we focus on two important univariate specializations of $Q(x,y) = Q(x,y;t)$, namely the generating function of excursions $Q(0,0) = \sum_n e_n t^n$ and the generating function of all quadrant walks $Q(1,1) = \sum_n q_n t^n$. For these 18 power series we address, as before, three types of questions: \emph{qualitative} (are they algebraic? are they D-finite transcendental? are they non-D-finite?), \emph{quantitative} (do they admit closed-form expressions?) and \emph{asymptotic} (what is the growth of the sequences $(e_n)$ and $(q_n)$?). \smallskip\noindent{\bf Answers.} In this section, most answers to these questions are \emph{conjectural}, although with a high degree of confidence. They are obtained by performing computer calculations that take as input a finite amount of information on $Q(0,0)$ and $Q(1,1)$, namely the first terms\footnote{Precisely, $20\, 000$ integer coefficients, and even $100\, 000$ coefficients modulo the prime $p=2147483647$. For this time- and memory-consuming step, we have appealed to highly efficient implementations due to Axel Bacher.} of the sequences $(e_n)$ and~$(q_n)$. The main technique that we use is \emph{automated guessing}, a classical tool in experimental mathematics~\cite{BoKa09}.
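The linear algebra behind this guessing step can be illustrated on a toy example (a minimal SymPy sketch of ours, shown on the Catalan numbers rather than on our sequences; the actual computations rely on the highly efficient implementations mentioned above): one looks for a kernel vector of the linear system expressing that $\sum_k p_k(n)\, u_{n+k}=0$ holds for all available $n$.

```python
from math import comb
from sympy import Matrix

def guess_recurrence(terms, order, degree):
    """Return a basis of all (c_{k,d}) with sum_{k,d} c_{k,d} n^d u_{n+k} = 0
    for every available n, i.e. candidate recurrences of the given order
    with polynomial coefficients of degree <= degree."""
    rows = [[n**d * terms[n + k]
             for k in range(order + 1) for d in range(degree + 1)]
            for n in range(len(terms) - order)]
    return Matrix(rows).nullspace()

# Toy example: the Catalan numbers satisfy (4n+2) C_n - (n+2) C_{n+1} = 0.
cat = [comb(2*n, n) // (n + 1) for n in range(30)]
(v,) = guess_recurrence(cat, order=1, degree=1)  # one-dimensional kernel
p0 = lambda n: v[0] + v[1]*n  # guessed coefficient of u_n
p1 = lambda n: v[2] + v[3]*n  # guessed coefficient of u_{n+1}

# As in the text, a guessed recurrence is then used to produce more terms:
predicted = -p0(29) * cat[29] / p1(29)
assert predicted == comb(60, 30) // 31  # C_30, predicted correctly
```

The same kernel computation, run modulo a large prime instead of over $\mathbb{Q}$, is the cheap variant used when integer terms run out.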
In principle, the guessing part could be complemented by an \emph{automated proof} part, which would make the (algebraicity/D-finiteness) results fully rigorous, as in~\cite{BoKa10} and \cite[\S8]{BoBoKaMe16}. This would require, among other things, considering more general series such as $Q(x,0)$ and $Q(0,y)$. Given that the equations conjectured for $Q(0,0)$ and $Q(1,1)$ are already quite big (see Tables~\ref{tab:kreweras-models} and \ref{tab:gessel-models}), we have decided to conduct the guessing part only. \smallskip\noindent{\bf Approach.} For each model, we have first tried to guess linear recurrence relations with coefficients in $\mathbb{Z}[n]$ satisfied by the sequences $(e_n)$ and $(q_n)$, starting from the integer values of their first terms. When the available terms were not enough to recognize such a recurrence, we have used more terms modulo the prime~$p=2147483647$, and tried to recover recurrences with coefficients in $\mathbb{Z}/p\mathbb{Z}[n]$. In both cases, we used the guessed recurrence relations to produce even more terms, on which we repeated guessing procedures in order to get (hopefully) minimal-order linear differential equations with polynomial coefficients (in~$\mathbb{Z}[t]$, resp. in $\mathbb{Z}/p\mathbb{Z}[t]$) for the associated series. On the one hand, such minimal-order equations are hard to guess because they tend to have many apparent singularities and thus coefficients of very large degrees; sometimes, it is necessary to produce them indirectly, e.g., by taking (right) gcd's of equations with higher orders but smaller degrees. On the other hand, they are interesting because they contain a lot of information on their solutions. For instance, minimal-order differential equations with coefficients in $\mathbb{Z}[t]$ are helpful in proving transcendence of their solutions.
This is detailed below in Section~\ref{sec:Kreweras}. Even when one can only guess differential equations with coefficients in $\mathbb{Z}/p\mathbb{Z}[t]$, for a sufficiently large prime such as~$p=2147483647$, rational reconstruction allows one to predict the small factors of the leading coefficients of plausible differential operators over $\mathbb{Q}[t]$, and thus the growth constant in the asymptotics of $(e_n)$ and $(q_n)$. A similar procedure applied to recurrences instead of differential equations allows one to guess the critical exponents of these sequences. They can also give, via $p$-curvature computations~\cite{BoCaSc15,BoCaSc16}, some insight on the algebraic/transcendental nature of the power series in $\mathbb{Z}[[t]]$ (modulo classical conjectures in the arithmetic theory of G-operators~\cite{Andre04}). Examples are provided in~\cite{BoKa09,BoKa10} and \cite[\S2.3.3]{Bostan2017}. However, given the size of our conjectured equations, and especially of the prime number~$p$, we have not applied these algorithms here. We refer to~\cite{BoKa09,Bostan2017} for more details on guessing techniques, and now describe the results that we have obtained on the nine models. \medskip \subsubsection{Five models of the Kreweras type.} \label{sec:Kreweras} These models are symmetric in the first diagonal, and are shown in the central column of Table~\ref{tab:finite} and in Table~\ref{tab:kreweras-models} below. Their orbits are all of the same form: they consist of all pairs $(x_i,x_j)$, with $0\le i \not = j \le 3$, where $x_3=y$, and $x_0=x$, $x_1$ and $x_2$ are the three roots of the equation $S(X,y)=S(x,y)$. In particular, for a pair $(x',y')$ in the orbit, the symmetric pair $(y',x')$ also lies in the orbit. The orbit structure is $\tilde O_{12}$, as shown in Figure~\ref{fig:orbits}.
Theorem~\ref{thm:exponent} gives for each model the growth constant $\mu$ of excursions and the associated exponent $\alpha\equiv\alpha_e$, which happens to be $-5/2$ in all cases. The first two models are not strongly aperiodic, but it appears (numerically) that an asymptotic estimate $e_n \sim \kappa\, \mu^n n^{-5/2}$ holds in all cases (provided $n$ is a multiple of 4 in the first case, and of 2 in the second case). The growth constant of the total number $q_n$ of quadrant walks of length $n$ can be determined using the results of~\cite{garbit-raschel,JoMiYe18}: in all five cases, it coincides with the excursion constant $\mu$. Observe that the \emph{drift} $(S_x(1,1), S_y(1,1))$ is always negative. When the model is, in addition, aperiodic (last three models), we can apply the result of~\cite[Ex.~7]{duraj}: there exists a constant $K$ such that \[ q_n \sim K \mu^n n^{-5/2}. \] Numerical computations (of two different types: floating point and modulo~$p$) suggest that this also holds for the first two (periodic) models, with a constant~$K$ that depends on $n\!\mod 4$ (first model) and on $n\!\mod 2$ (second model). Such periodicity phenomena will be established in Section~\ref{sec:asympt} for the four solved models of Section~\ref{sec:works} (see for instance~\eqref{eq:asm1}). \medskip \begin{table}[htb] \begin{tabular}{|@{}C@{}c|cccc|cccc|} model & $m$ & $e_n$ & $Q(0,0)$ &alg.& $\alpha_e$ &$q_n$ & $Q(1,1)$ &alg.& $\alpha_w$ \\ \hline $\diag{-2-1,-1-2,01,10}$ & 4 & $[2,12]$ & \begin{tabular}[t]{c}$[8,13]$ \\ irred. \end{tabular}& no & $-5/2$ & $[32,76]$& \begin{tabular}[t]{c}$[17,296]$ \\ red. min. \end{tabular}& no & $-5/2$ ? \\ $\diag{-20,-1-1,0-2,11}$ & 2& $[4,5]$ & \begin{tabular}[t]{c}$[5,8]$ \\ red. min. \end{tabular}& no & $-5/2$ & $[19,59]$& \begin{tabular}[t]{c}$[9,83]$ \\ red. min. \end{tabular} & no & $-5/2$ ? \\ $\diag{-2-1,-1-2,-1-1,01,10} $& 1 &$[12,37]$ & \begin{tabular}[t]{c}$[9,52]$ \\ red. min.
\end{tabular}& no & $-5/2$ & $[33,266]$& \begin{tabular}[t]{c}$[17,309]$ \\ red. min. \end{tabular}& no & $-5/2$ \\ $ \diag{-20,-1-1,-10,0-2,0-1,11} $& 1& $[20,75]$ & \begin{tabular}[t]{c}$[13,94]$ \\ red. min. \end{tabular}& no & $-5/2$ & $[60,118]^\star$& $[25,663]^\star$ & ? & $-5/2$ \\ $ \diag{-2-1,-20,-1-2,-1-1,0-2,01,10,11}$& 1 &$[36,520]^\star$ & \begin{tabular}[t]{c}$[26,573]^\star$ \\ \end{tabular}& ? & $-5/2$ & $[99,204]^\star$& $[44,652]^\star$ & ? & $-5/2$ \end{tabular} \vskip 4mm \caption{The five Kreweras-like models, with their periods~$m$. For each sequence $(e_{mn})$ and $(q_n)$, resp. for the associated series $Q(0,0;t^{1/m})$ and $Q(1,1;t)$, a pair $[r,d]$ indicates the order~$r$ and the coefficient degree~$d$ of a (conjectural) recurrence relation, resp. of a differential equation. A star indicates that we have only guessed recurrences or differential equations modulo $p=2147483647$. } \label{tab:kreweras-models} \end{table} Algorithmic guessing has succeeded for all 10 sequences in Table~\ref{tab:kreweras-models}, but only modulo $p=2147483647$ for 3 of them. We are extremely confident that the guessed recurrences and differential operators are correct. In particular, they successfully pass the filters described in~\cite[Sec.~2.4]{BoKa09}. For instance, the leading coefficients of the differential operators that (conjecturally) annihilate $Q(0,0)$ and $Q(1,1)$, or their rational reconstruction when operators are available modulo~$p$ only, vanish at $t=1/\mu$. Also, the occurrence of $3/2$ among the local exponents of the operators around $t=1/\mu$ is in agreement with the exponents $\alpha_e=\alpha_w=-5/2$. Assuming these recurrences and equations are correct, we can use them to derive some properties of the sequences $(e_n)$ and $(q_n)$. For instance, guessing already strongly indicates that there is no hypergeometric sequence among the 10 sequences.
In cases where recurrences are guessed over the integers (not only modulo~$p$), we have applied Petkov\v sek's algorithm~\cite{Petkovsek92} to them, and obtained a proof that these sequences are indeed not hypergeometric. Guessing also strongly indicates that there is no algebraic generating function for any of the 10 sequences. In cases where differential equations are guessed over the integers (not only modulo~$p$), we have a proof of this fact, based on the following strategy. Linear differential operators can be factored algorithmically~\cite{Hoeij97}. Those that are irreducible in $\mathbb{Q}(t)\langle \partial_t \rangle$ are necessarily minimal. We have proved minimality of the others using the argument of~\cite[Prop.~8.4]{BoBoKaMe16}. Next, we computed the first terms of a local basis of solutions at $t=0$. At least one basis element contains logarithms, which, combined with minimality, implies that the solution is transcendental~\cite[\S2]{CoSiTrUl02}. Note that this cannot be directly deduced from estimates of the form $c\, \mu^n n^{-5/2}$, which \emph{are} compatible with algebraicity~\cite{flajolet-context-free}. For the excursions of the second model, we were even able to solve the differential equation, thus obtaining a conjectural closed form expression of $Q(0,0;\sqrt t)$ (Conjecture~\ref{conj:K2-ex}). When we have only guessed differential equations modulo $p=2147483647$, we still conjecture that the corresponding operators have minimal order. \medskip We now review briefly the five Kreweras-like models and add a few details completing Table~\ref{tab:kreweras-models}.
\medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal K_1=\{ \bar 2 \bar 1, \bar 1 \bar 2, 0 1, 1 0 \}}$} The excursion generating function $Q(0,0) = \sum_n e_n t^n$ starts \[Q(0,0) = 1+6\, \, t^4+236\, t^8+14988\, t^{12}+ 1193748\, t^{16} + O(t^{20})\] and the walk generating function $Q(1,1) = \sum_n q_n t^n$ starts \[Q(1,1) = 1+2\, t+4\, t^2+8\, t^3+22\, t^4+64\, t^5+178\, t^6+O(t^7).\] The growth constant is $\mu= 8/(3^{3/4})$ for both sequences. The model has period $m=4$, and $e_n=0$ if $n$ is not a multiple of $4$. For the subsequence $(u_n) = (e_{4n})$ we have guessed that \begin{multline*} (4608\, n^4+37504\, n^3+114144\, n^2+153992 \, n+77715)\times \\ (2\, n+3)\, (2\, n+1)\, (4\, n+5)\, (4\, n+1)\, (n+1)^2\, (4\, n+3)^2\, u_{n} - \\ \left( 62208\, n^{12}+1159488\, n^{11}+9826272\, n^{10}+50056248\, n^9+\frac{341349339}{2}\, n^8+410259762\, n^7 + \right. \\ \frac{22807094283}{32}\, n^6 +\frac{28845939249}{32}\, n^5+\frac{421694744175}{512}\, n^4 +\frac{1085550761145}{2048}\, n^3+ \\ \frac{1868027110233}{8192}\, \left. n^2+\frac{1929023165205}{32768}\, n+\frac{1807811742825}{262144} \right) \, u_{n+1} + \\ (n+3)\, (n+2)\, (2\, n+5)^2\, (6\, n+13)^2\, (6\, n+11)^2\, \times \hfill \\ \left(\frac{81}{2048}\, n^4+ \frac{1341}{8192}\, n^3+\frac{8235}{32768}\, n^2+\frac{22257}{131072}\, n+\frac{44739}{1048576} \right)\ u_{n+2} = 0. \end{multline*} The leading coefficient of the minimal differential operator $L_e$ annihilating $Q(0,0;t^{1/4})$ is \[t^7\, (27-4096\, t)^2\, \left(t^4-\frac{47}{640} \, t^3-\frac{374489}{125829120} \, t^2-\frac{23644531}{2319282339840} \, t+\frac{29645}{281474976710656} \right), \] where the factor $27-4096\, t$ vanishes when $t=27/4096=1/\mu^4$. For walks ending anywhere in the quadrant, the leading coefficient of the operator $L_w$ is \[t^{13} \, (4t-1) \, (16 \, t^3+8 \, t^2+11 \, t-4)^4 \, (4096 \, t^4-27)^4 \, \times \left( \text{irreducible poly. 
of degree 254} \right),\] which is again compatible with the value of $\mu$. \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal K_2=\{ \bar 2 0, \bar 1 \bar 1, 0 \bar 2, 1 1 \}}$} The excursion generating function $Q(0,0) = \sum_n e_n t^n$ starts \[Q(0,0) = 1+ t^2+4\, t^4+21\, t^6+138\, t^8+1012\, t^{10} +8064\, t^{12}+O(t^{14}) \] while \[ Q(1,1) = 1+ t+2\, t^2+5\, t^3+12\, t^4+32\, t^5+86\, t^6+O(t^7).\] The growth constant is $\mu=2\sqrt{3}$ for both sequences. The model has period $m=2$, and $e_n=0$ if $n$ is odd. For the nontrivial subsequence $(u_n) = (e_{2n})$ we have guessed that \begin{multline*} (4\, n+9)\, (n+5)^2\, (n+4)^2\, u_{n+4} - 4\, (n+2)\, (16\, n^2+100\, n+153)\, (n+4)^2\, u_{n+3} - \\ 4\, (32\, n^5+584\, n^4+4096\, n^3+13909\, n^2+22947\, n+14742)\, u_{n+2} + \\ 96\, (2\, n+3)\, (n+2)\, (16\, n^3+108\, n^2+239\, n+183)\, u_{n+1} + \\ (9216\, n^5+76032\, n^4+230400\, n^3+319680\, n^2+201024\, n+44928)\, u_n=0. \end{multline*} The differential operator $L_e$ found for $ Q(0,0; t^{1/2}) = \sum_n e_{2n} t^{n}$ has leading coefficient \[ t^3 \, (1 + 4 \, t)^2 \, (1 - 12 \, t)^3 \] where the factor $(1-12\, t)$ is compatible with the value of $\mu$. Furthermore, $L_e$ is reducible in $\mathbb{Q}(t)\langle \partial_t \rangle$; one can write $L_e= L_2^{(1)} L_2^{(2)} L_1$, where $L_1$ has order 1 and $L_2^{(1)}$ and $L_2^{(2)}$ have order~2. More importantly, $L_e$ can be written as the least common left multiple of the three following operators: \begin{multline*} \partial_t+\frac{1}{t}, \qquad \partial_t^2+ \frac{120\, t^2+2\, t-3}{(-1+12\, t)t(4\, t+1)}\, \partial_t+ \frac{288\, t^3-48\, t^2+14\, t+1}{(4\, t+1) t^2 (-1+12\, t)^2}, \\ \partial_t^2+ \frac{120\, t^2+2\, t-3}{(-1+12\, t)t(4\, t+1)}\, \partial_t+ \frac{24\, t^2-8\, t-1}{t^2(4\, t+1)(-1+12\, t)}. \end{multline*} The use of $_2F_1$ solving algorithms~\cite{BoChHoPe11,ImHo17,BoChHoKaPe17} leads us to the following conjectural expression.
\begin{Conjecture}\label{conj:K2-ex} For the model ${\mathcal S=\{ \bar 2 0, \bar 1 \bar 1, 0 \bar 2, 1 1 \}}$, the excursion generating function $Q(0,0; t^{1/2})$ is equal to \[ \frac{1}{3t} - \frac{\sqrt{1-12t}}{6t} \left( \twoFone{\frac16}{\frac13}{1}{\frac{-108\,t(1+4t)^2}{(1-12t)^3}} + \twoFone{-\frac16}{\frac23}{1}{\frac{-108\,t(1+4t)^2}{(1-12t)^3}} \right). \] \end{Conjecture} \noindent{\bf Remark.} The first hypergeometric term above can be rewritten with a simpler argument, as \[ \frac 1 {\sqrt{1-12t}}\ \twoFone{\frac16}{\frac13}{1}{\frac{-108\,t(1+4t)^2}{(1-12t)^3}}= \twoFone{\frac16}{\frac13}{1}{108t^2(1+4t)}. \] Moreover, the square of this power series is known to count excursions of the face-centered cubic lattice~\cite[Appendix~A]{BoBoChHaMa13}, see also~\cite[\S4]{Joyce01}. This is entry A002899 in the On-Line Encyclopedia of Integer Sequences~\cite{oeis}. The guessed operator $L_e$ is the minimal-order operator canceling the conjectured series. The leading coefficient of the operator $L_w$ contains the factor \[ t^5\,(4\,t-1)\,(4\,t^2+1)^2\,(16\,t^3+8\,t^2+11\,t-4)^2\,(12\,t^2-1)^4 ,\] which is compatible with $\mu=2\sqrt{3}$. \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal K_3=\{ \bar 2 \bar 1, \bar 1 \bar 2, \bar 1 \bar 1, 0 1, 1 0 \}}$} The excursion generating function starts \[Q(0,0) = 1+2\, t^3+6\, t^4+16\, t^6+ 122\, t^7+236\, t^8+ O(t^9)\] while \[Q(1,1) = 1+2\, t+4\, t^2+10\, t^3+32\, t^4+98\, t^5+292\, t^6+O(t^7).\] The model is strongly aperiodic, with growth constant $\mu \sim 4.03$ for both sequences, where $\mu$ is the unique positive root of $4069 +768\, u-6\, u^2+u^3-27u^4$. The leading coefficient of $L_e$ is \[ t^8 \, (1 + t^2)^2 \, (4069 \, t^4+768\, t^3-6\, t^2+t-27)^2 \times \left( \text{irreducible poly. of degree 32} \right),\] and vanishes at $t=1/\mu$.
Similarly, the leading coefficient of $L_w$ is \[ t^{13} \, (1- 5\, t) \, (t^2+1)^2 \, (4069 \, t^4+768\, t^3-6\, t^2+ t-27)^4 \, (23\, t^3 + 32\, t^2 + 8\, t -4)^4 \, \times \left( \text{irreducible poly. of degree 263} \right).\] \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal K_4=\{ \bar 2 0, \bar 1\bar 1, \bar 1 0, 0 \bar 2, 0\bar 1, 11 \}}$} The excursion generating function starts \[Q(0,0) = 1+ t^2+2\, t^3+4 \, t^4+24\, t^5+37\, t^6+276\, t^7+O(t^8)\] while \[Q(1,1) = 1+t+4\, t^2+11\, t^3+42\, t^4+148\, t^5+576\, t^6+O(t^7).\] The model is strongly aperiodic, with growth constant $\mu \sim 4.91$ for both sequences, where $\mu $ is the largest positive root of $ 405-108\, u-72\, u^2+u^3+3u^4$. The leading coefficient of $L_e$ is \[ t^8 \, (65\, t^2+8\, t+16)^2 \, (405\, t^4-108\, t^3-72\, t^2+t+3)^4 \times \left( \text{irreducible poly. of degree 66} \right),\] and vanishes at $t=1/\mu$. Similarly, the leading coefficient of $L_w$ contains the factor \[ t^{17} \, (1-6t) \, (405\, t^4-108\, t^3-72\, t^2+t+3)^8 \, (3\, t^3+4\, t^2+20\, t-4)^6 \, (65\, t^2+8\, t+16)^2 .\] \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal K_5=\{ \bar 2\bar 1, \bar 2 0, \bar 1\bar 2, \bar 1\bar 1, 0\bar 2, 01, 10, 11 \}}$} The excursion generating function starts \[Q(0,0) = 1+ t^2+8\, t^3+10\, t^4+106\, t^5+467\, t^6+1850\, t^7+O(t^8)\] while \[Q(1,1) =1+3\, t+10\, t^2+51\, t^3+260\, t^4+1350\, t^5+7568\, t^6+O(t^7).\] The model is strongly aperiodic, with growth constant $\mu= 2\,\sqrt{3} +8/3^{3/4}\approx 6.97$ for both sequences. The value $\mu $ is the unique positive root of $208+4608 \,u+648\, u^2-27\,u^4$. In this case we only have conjectures modulo $p=2147483647$ for both $e_n$ and $q_n$. For excursions, rational reconstruction shows that the leading coefficient of the operator $L_e$ contains the factor \[ t^{23} \, (4\, t^2 + 1)^4 \, (208\, t^4+4608\, t^3+648\, t^2-27)^7 \] which vanishes at $t=1/\mu$. 
Similarly, the leading coefficient of $L_w$ contains the factor \[ t^{35} \, (8\, t - 1) \, (4\, t^2+1)^4 \, (64\, t^3+16\, t^2+11\, t-2)^{10} \, (208\, t^4+4608\, t^3+648\, t^2-27)^{12} \] which vanishes again at $t=1/\mu$. \subsubsection{Four models of the Gessel type.} The remaining 4 models, shown on the right of Table~\ref{tab:finite} and in Table~\ref{tab:gessel-models}, do not have a symmetry property. They are obtained from the models shown on the left of Table~\ref{tab:finite} (solved in Section~\ref{sec:works}) by a reflection in a horizontal line. By Proposition~\ref{prop:sym}, their orbit type is $O_{12}$ for the first three, and $O_{18}$ for the last one. More precisely, in the first three cases the orbit consists of \[ \begin{array}{ccc} (x, y) &(x_1 , y) & (x_2 , y) \\ (x, \bar x^e\bar y) &(-\bar x_1 , \bar x^e\bar y) &(-\bar x_2 , \bar x^e\bar y) \\ (x_1 , \bar x_1^e\bar y) &(-\bar x, \bar x_1^e\bar y) &(-\bar x_2 , \bar x_1^e\bar y) \\ (x_2 , \bar x_2^e\bar y) &(-\bar x, \bar x_2^e\bar y) &(-\bar x_1 , \bar x_2^e\bar y) \end{array} \] where $x_1, x_2$ are the two solutions of $S(X,y)=S(x,y)$ (different from $x$), $e=2$ for the first model and $e=1$ for the next two.
In the fourth case, the orbit consists of \[ \begin{array}{ccc} (x,y) & (x_1,y) & (x_2, y) \\ (x, \bar x^3\bar y) & (xu_3, \bar x^3\bar y) & (xu_4, \bar x^3\bar y) \\ (x_1, \bar x_1^3\bar y) &(x_1u_3, \bar x_1^3\bar y) &(x_1u_4, \bar x_1^3\bar y) \\ (x_2, \bar x_2^3\bar y) &(x_2u_3, \bar x_2^3\bar y) &(x_2u_4, \bar x_2^3\bar y) \\ (xu_3,\bar u_3^3y) &(x_1u_3,\bar u_3^3y) &(x_2u_3,\bar u_3^3y) \\ (xu_4,\bar u_4^3y)&(x_1u_4,\bar u_4^3y)&(x_2u_4,\bar u_4^3y) \end{array} \] where \[ x_{1,2}= \frac{1\pm\sqrt{1+4x^3y}}{2x^2y} \qquad \hbox{and} \qquad u_{3,4}= \frac{y \pm\sqrt{y^2+4y}}{2}. \] For each of these four models, there exists a unique section-free equation, and its right-hand side vanishes. Theorem~\ref{thm:exponent} gives for each model the excursion constant $\mu$ and the corresponding exponent, which is $\alpha_e=-5/2$ for the first three models, and $\alpha_e=-7/3$ for the last one. Only the third model is strongly aperiodic, the other models having respectively period $m=2$ (first model), $m=4$ (second model) and $m=3$ (last model). But it appears numerically that an asymptotic estimate $q(0,0;n)\sim \kappa\, \mu^n n^{\alpha_e}$ holds in all cases (provided $n$ is a multiple of $m$ in the periodic cases). The growth constant $\bar \mu$ of the sequence $(q_n)$ can be determined using the results of~\cite{garbit-raschel,JoMiYe18}: in all four cases, it is larger than the excursion constant~$\mu$. Observe that the \emph{drift} $(S_x(1,1), S_y(1,1))$ is always of the form $(-\delta, 0)$ with $\delta$ positive.
The second component of the drift being $0$, we cannot apply the result of~\cite[Ex.~7]{duraj}, and indeed the walk exponent $\alpha_w$, which we conjecture numerically, turns out to differ from $\alpha_e$. In fact, we believe that for each of the four models, \[ q_n \sim K \bar \mu^n n^{-3/2}, \] with a constant $K$ that depends on $n\!\mod m$ in the periodic cases. \medskip \noindent{\bf What we have done.} We have applied to these four models the same guessing procedures as for the Kreweras-like models. Remarkably, we discovered two possibly algebraic models among them. More precisely, for the second and third models, the series $Q(0,0)$ seems to be algebraic of degree~$32$. But it must be noted that in contrast with the Kreweras-like models, for three of the four models we could not guess any recurrence for the sequence $(q_n)$, even modulo the prime $p=2147483647$. \begin{table}[htb] \begin{tabular}{|@{}C@{}c|c@{}c@{}cc|cccc|} model & $m$ & $e_n$ & $Q(0,0)$ &alg.& $\alpha_e$ &$q_n$ & $Q(1,1)$ &alg.& $\alpha_w$ \\ \hline $\diag{-2-1,-10,01,10}$ & 2 & $[8,5]$ & \begin{tabular}[t]{c}$[9,18]$ \\ red. min. \end{tabular}& no & $-5/2$ & $[46,176]$& \begin{tabular}[t]{c}$[17,400]$ \\ red. min. \end{tabular}& no & $-3/2$ ? \\ $\diag{-2-1,-11,0-1,11}$ & 4 &$[2,12]$ & \begin{tabular}[t]{c}$[8,13]$ \\ irred. \end{tabular}& $[32,14]$ & $-5/2$ & $?$& $?$ & ? & $-3/2$ ? \\ $\diag{-2-1,-10,-11,0-1,11}$ & 1 &$[12,37]$ & \begin{tabular}[t]{c}$[9,52]$ \\ irred. \end{tabular}& $[32,57]$ & $-5/2$ & $?$& $?$ &? & $-3/2$ ? \\ $\diag{-2-1,-20,10,11}$ & 3 &$[23,572]^\star$ & \begin{tabular}[t]{c}$[48,589]^\star$ \\ \end{tabular}& ? & $-7/3$ & $?$& $?$ & ? & $-3/2$ ? \end{tabular} \vskip 4mm \caption{The four Gessel-like models, with their periods~$m$.
The table gives, for each sequence $(e_{mn})$ and $(q_n)$, and for the associated series $Q(0,0;t^{1/m})$ and $Q(1,1;t)$, the order and degree of the guessed recurrence relation or differential equation (in the two algebraic cases, the first/second value is the degree in the series/variable). A star indicates that we have only guessed recurrences or differential equations modulo $p=2147483647$. } \label{tab:gessel-models} \end{table} Our results are summarized in Table~\ref{tab:gessel-models}, and completed with a few details below. \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal G_1=\{ \bar 2\bar 1, \bar 1 0, 01, 10 \}}$} The excursion generating function starts \[ Q(0,0) = 1+t^2+5\, t^4+27\, t^6+188\, t^8+1414\, t^{10}+O(t^{12})\] while \[ Q(1,1) = 1+2\, t+5\, t^2+13\, t^3+38\, t^4+112\, t^5+346\, t^6+1071\, t^7+O(t^8).\] The model has period $m=2$, and $e_n=0$ if $n$ is odd. The growth constant is $\mu=2\sqrt{3}$ for the excursion sequence, and \begin{equation} \label{bar-mu-G1} \bar \mu= \frac{\sqrt[3]{6371+624 \, \sqrt{78}}}{12} + \frac{217}{12 \, \sqrt[3]{6371+624 \, \sqrt{78}}} +\frac{11}{12} \, \sim 3.61 \end{equation} for all quadrant walks. The value $\bar \mu$ is the unique (positive) real root of $16+8u+11u^2 -4u^3$. The leading coefficient of the operator $L_e$ annihilating $Q(0,0;t^{1/2})$ is \[ t^5 \, (1 + 4\, t)^3 \, (1-12\, t)^5 \, (279936\, t^5-62208\, t^4+13608\, t^3-5796\, t^2+675\, t-20) \] where the factor $(1-12\, t)$ is compatible with the growth constant $\mu$. The leading coefficient of~$L_w$ is \[ t^{10} \, (1 + 4\, t) \, (1 - 4\, t)^4 \, (1 + 4 \, t^2)^5 \, (1 - 12 \, t^2)^9 \, (16\, t^3+8\, t^2+11\, t-4)^4 \, \times \left( \text{irreducible poly. of degree 345} \right),\] which is compatible with the value of $\bar \mu$.
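The nested radicals in~\eqref{bar-mu-G1} are easy to check numerically (a small floating-point sketch of ours):

```python
from math import isclose

# The closed form (bar-mu-G1) for the walk growth constant of this model:
c = (6371 + 624 * 78 ** 0.5) ** (1 / 3)
mu_bar = c / 12 + 217 / (12 * c) + 11 / 12

assert isclose(mu_bar, 3.6107, abs_tol=5e-4)
# mu_bar is a root of 16 + 8u + 11u^2 - 4u^3 ...
assert isclose(16 + 8*mu_bar + 11*mu_bar**2 - 4*mu_bar**3, 0.0, abs_tol=1e-8)
# ... equivalently, t = 1/mu_bar cancels the factor 16t^3 + 8t^2 + 11t - 4
# occurring in the leading coefficient of L_w:
t = 1 / mu_bar
assert isclose(16*t**3 + 8*t**2 + 11*t - 4, 0.0, abs_tol=1e-8)
```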
\medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal G_2=\{ \bar 2\bar 1, \bar 1 1, 0\bar 1, 11 \}}$} The excursion generating function starts \[ Q(0,0) = 1+5\, t^4+190\, t^8+11892\, t^{12} +939572\, t^{16}+O(t^{20})\] while \[ Q(1,1) = 1+ t+3\, t^2+8\, t^3+24\, t^4+65\, t^5+211\, t^6+649\, t^7+O(t^8).\] The model has period $m=4$, and $e_n=0$ if $n$ is not a multiple of $4$. The growth constant of excursions is $\mu=8/3^{3/4}$, while the constant for all quadrant walks is again given by~\eqref{bar-mu-G1}. For the nontrivial subsequence $(u_n) = (e_{4n})$ we have guessed that \begin{multline*} 3\, (6\, n+11)\, (18\, n+41)\, (2\, n+5)\, (3\, n+7)\, (18\, n+35)\, (6\, n+13)\, (n+2)\, (18\, n+29)\, \\ (41472\, n^4+150144\, n^3+200864\, n^2+117704\, n+25491)\, u_{n+2}-\\ (47552535724032\, n^{12}+\ 798266178404352\, n^{11}+ 6092888790269952\, n^{10}+27954969361514496\, n^9 \\ + 85850716160655360\, n^8 + 185860480394330112\, n^7 + 290753615920332800\, n^6 + \\ 331020927507759104\, n^5 + 272073153165252608\, n^4 + 157356059182977536\, n^3+ \\ 60749526504280448\, n^2 + 14046784950077600\, n + 1470033929525700)\, u_{n+1} +\\ 1048576\, (12\, n+5 )\, (4\, n+1)\, (3\, n+2)\, (2\, n+1)\, (12\, n+11)\, (4\, n+3)\, (6\, n+7)\, (n+1)\, \\ (41472\, n^4+316032\, n^3+900128\, n^2+1135752\, n+535675) u_n = 0. \end{multline*} Remarkably, the series $E(t):=Q(0,0; t^{1/4})$ appears to be algebraic, of degree 32. More precisely, $E(t)^2$ seems to have degree 16 and to satisfy an equation $P(t,E(t)^2)=0$ with coefficients of degree at most 14 in $t$. The guessed polynomial $P(t,z)$ seems plausible because: it has a small bitsize compared to the bitsize of the {expansion of $E(t)$ that we used to produce it; we have then checked, using more terms of $E(t)$, that} it annihilates $E(t)^2$ to much higher orders; its discriminant factors as \[ t^{418} \, (268435456 \, t^3+57671680 \, t^2-69632 \, t-27)^2 \, (4096 \,t - 27)^{48}\, \times \left( \text{irreducible poly. 
of degree 31} \right)^4, \] which is compatible with the value of $\mu$. Moreover, $P(t,z)$ defines a rational curve, parametrized by \[ t = \frac{U\, (1-2\,U)^3\,(1-3\,U)^3\,(1-6\,U)^9}{(1-4\,U)^4}, \qquad z = \frac{(1-4\,U)^2\,(1-24\,U+120\,U^2-144\,U^3)^2}{(1-3\,U)^2\,(1-2\,U)^3\,(1-6\,U)^9}. \] This leads to the following conjectural statement. \begin{Conjecture} For the model $\mathcal S=\{\bar 2\bar 1, \bar 1 1, 0\bar 1, 11 \}$, the excursion generating function $Q(0,0;t)$ is equal to \[ \frac{(1-4\,U)\,(1-24\,U+120\,U^2-144\,U^3)}{(1-3\,U)\, (1-2\,U)^{3/2}\,(1-6\,U)^{9/2}}, \] where $U = t^4 +53\, t^8+4363\, t^{12} + \cdots$ is the unique power series in $\mathbb{Q}[[t]]$ satisfying \[ {U\, (1-2\,U)^3\,(1-3\,U)^3\,(1-6\,U)^9} = t^4 \, {(1-4\,U)^4}. \] \end{Conjecture} {As mentioned above, we could not guess any differential or algebraic equation for $Q(1,1;t)$, even with 100\,000 terms and modulo $p{=2147483647}$.} \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal G_3=\{ \bar 2\bar 1, \bar 1 0, \bar 1 1, 0\bar 1, 11 \}}$} The excursion generating function starts \[ Q(0,0) = 1+2\, t^3+5\, t^4+16\, t^6+107\, t^7+190\, t^8+O(t^{9})\] while \[ Q(1,1) = 1+t+4\, t^2+12\, t^3+39\, t^4+133\, t^5+485\, t^6+1746\, t^7+O(t^8).\] This model is strongly {aperiodic}. The growth constants are $\mu \sim 4.03$, the unique positive root of $4069+768\,u-6\,u^2+u^3-27\,u^4$, and \[ \bar \mu = \frac{\sqrt[3]{1261+57 \, \sqrt{57}}}{6} + \frac{56}{3 \, \sqrt[3]{1261+57 \, \sqrt{57}}} +\frac{2}{3} \, \sim 4.22. \] This is the unique (positive) real root of $4\, u^3-8\, u^2-32\, u-23$. Again, $Q(0,0; t)$ appears to be algebraic of degree $32$, this time with coefficients of degree 57. The guessed polynomial seems plausible for various reasons, including the nice factorization of its discriminant as \begin{multline*} t^{1732} \, (t^2+1)^{32} \, (4069\, t^4+768\, t^3-6\, t^2+t-27)^{48} \\ \times \left( \text{irreducible poly.
of degree 13} \right)^2 \times \left( \text{irreducible poly. of degree 31} \right)^4, \end{multline*} which vanishes at $t=1/\mu$. \medskip \noindent{\bf Remark.} There are analogies between excursions of this model and those of the Kreweras-type model ${\mathcal K_3=\{ \bar 2 \bar 1, \bar 1 \bar 2, \bar 1 \bar 1, 0 1, 1 0 \}}$. Indeed, the sizes of the recurrence relation and of the differential equation match. The growth constant and the singular exponent are also the same for both models. \medskip \noindent\paragraph{$\bullet$ \bf Case $\boldsymbol{\mathcal G_4=\{ \bar 2\bar 1, \bar 2 0, 10, 11 \}}$} The excursion generating function starts \[ Q(0,0) = 1+3\, t^3+41\, t^6+850\, t^9+21538\, t^{12}+614530\, t^{15}+O(t^{16})\] while \[ Q(1,1) = 1+2\, t+4\, t^2+15\, t^3+45\, t^4+121\, t^5+471\, t^6+1533\, t^7+O(t^8).\] The model has period $m=3$, and $e_n=0$ if $n$ is not a multiple of $3$. The growth constants are $\mu=9/2^{4/3}$ and $\bar \mu= 3 \cdot 2^{1/3}$. The leading coefficient of $L_e$ contains the factor $t^{42} \, (729\, t-16)^{23}$ which is compatible with the value of $\mu$. \section{A glimpse at asymptotics} \label{sec:asympt} The method that we develop in this paper {provides} expressions for generating functions\ of walks confined to an orthant, as positive parts of certain rational or algebraic series. We now demonstrate that these expressions are often well suited to a multivariate singularity analysis. The use of analytic techniques in this fashion is the domain of \emph{analytic combinatorics in several variables} (ACSV)~\cite{PemantleWilson2013}; recent work has shown the strength of this approach, proving conjectures in lattice path asymptotics~\cite{MelczerWilson2016}, generalizations in higher dimensions~\cite{MelczerMishna2016}, and handling families of models with weighted steps~\cite{CourtielMelczerMishnaRaschel2017}. 
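Returning for a moment to the conjecture stated for the model $\mathcal G_2$ in the previous section: squaring the conjectured closed form eliminates the half-integer exponents, so the identity can be tested with exact rational series arithmetic only. A minimal sketch (in the variable $T=t^4$; the helper names are ours), comparing the square of the conjectured expression with the square of the excursion series obtained by direct enumeration:

```python
from fractions import Fraction

N = 5                                   # truncate at T^5, where T = t^4

def mul(a, b):                          # truncated series product
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):                             # multiplicative inverse (a[0] != 0)
    b = [Fraction(0)] * N
    b[0] = 1 / a[0]
    for k in range(1, N):
        b[k] = -sum(a[i] * b[k - i] for i in range(1, k + 1)) / a[0]
    return b

def powi(a, e):                         # integer power by repeated product
    r = [Fraction(1)] + [Fraction(0)] * (N - 1)
    for _ in range(e):
        r = mul(r, a)
    return r

def lin(c, U):                          # the series 1 + c*U
    s = [Fraction(c) * u for u in U]
    s[0] += 1
    return s

# Solve U (1-2U)^3 (1-3U)^3 (1-6U)^9 = T (1-4U)^4 by fixed-point iteration;
# each pass fixes one more coefficient, since U = T(1 + O(U)).
T = [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)
U = [Fraction(0)] * N
for _ in range(N + 1):
    den = mul(powi(lin(-2, U), 3), mul(powi(lin(-3, U), 3), powi(lin(-6, U), 9)))
    U = mul(mul(T, powi(lin(-4, U), 4)), inv(den))
assert U[1:4] == [1, 53, 4363]          # U = t^4 + 53 t^8 + 4363 t^12 + ...

# Square of the conjectured closed form (only integer powers appear).
U2 = mul(U, U); U3 = mul(U2, U)
A = [-24 * U[k] + 120 * U2[k] - 144 * U3[k] for k in range(N)]
A[0] += 1                               # A = 1 - 24U + 120U^2 - 144U^3
Q2 = mul(mul(powi(lin(-4, U), 2), powi(A, 2)),
         inv(mul(powi(lin(-3, U), 2),
                 mul(powi(lin(-2, U), 3), powi(lin(-6, U), 9)))))

# Direct enumeration of G_2 excursions: 4n steps correspond to T^n.
steps = [(-2, -1), (-1, 1), (0, -1), (1, 1)]
cur, exc = {(0, 0): 1}, [1]
for _ in range(4 * (N - 1)):
    nxt = {}
    for (x, y), c in cur.items():
        for dx, dy in steps:
            if x + dx >= 0 and y + dy >= 0:
                nxt[x + dx, y + dy] = nxt.get((x + dx, y + dy), 0) + c
    cur = nxt
    exc.append(cur.get((0, 0), 0))
E = [Fraction(exc[4 * n]) for n in range(N)]
assert [int(c) for c in E] == [1, 5, 190, 11892, 939572]
assert mul(E, E) == Q2                  # conjecture confirmed modulo t^20
```

Raising $N$ checks the conjecture to any desired order at the cost of a longer enumeration.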
Much of the singularity analysis is effective~\cite{Melczer2017} when the multivariate generating function under consideration is represented in the form $Q(x,y;t)=[x^{\geq }y^{\geq}]R(x,y;t)$ for a {\emm rational,} function $R(x,y;t)$. Although some asymptotic techniques have been developed to perform a singularity analysis on multivariate functions with algebraic singularities~\cite{Greenwood2018}, this is a more difficult task. For the purposes of this paper, we show how dominant asymptotics for the number of walks in the four models of Section~\ref{sec:works} can be determined through the simple use of analytic techniques. {We focus on the series $Q(1,1)$ counting all quadrant walks.} Future work could extend this argument to deal with the multivariate algebraic functions which arise, for instance, in the generating functions for {2D} Hadamard models given by Proposition~\ref{prop:hadamard}. The first step is to convert our expression of the form $Q(x,y;t)=[x^{\geq }y^{\geq}]R(x,y;t)$ for the multivariate generating function $Q(x,y;t)$ into an expression for the univariate generating function $Q(1,1;t)$ which is amenable to asymptotic computations. Given an element \begin{equation} R(x,y;t) = \sum_{n \geq 0} \left( \sum_{i,j} r(i,j;n)x^iy^jt^n \right) \in \mathbb{Q}[x,\bar x,y,\bar y][[t]], \label{eq:takediag} \end{equation} the \emph{diagonal} operator $\Delta$ takes $R(x,y;t)$ and returns the univariate power series $(\Delta R)(t) := \sum_{n \geq 0} r(n,n;n)t^n$. The relationship between positive parts and diagonals is given by the following lemma. \begin{Lemma} \label{lem:diag} Given $R(x,y;t)$ as in~\eqref{eq:takediag}, and $(a,b) \in \{0,1\}^2$, one has \[ [x^\geq y^\geq]R(x,y;t)\bigg|_{x=a,y=b} = \Delta\left( \frac{R\left(\bar x,\bar y;xyt\right)}{(1-x)^a(1-y)^b}\right).
\] \end{Lemma} The proof follows from basic formal series manipulations; see Proposition 2.6 of~\cite{MelczerMishna2016} for details. In particular, this lemma, combined with the expressions obtained for $Q(x,y;t)$ in Section~\ref{sec:works}, gives us diagonal representations for the generating functions of quadrant walks ending anywhere $(a=b=1)$, returning to the origin (excursions, $a=b=0$), or returning to the $x$- or $y$-axes ($a=1$, $b=0$ or $a=0$, $b=1$). At its most basic level, the theory of ACSV takes a multivariate Cauchy residue integral representation for power series coefficients and reduces it to an integral expression where saddle-point techniques can be used to determine asymptotics. Because of the simple rational functions which are obtained for many lattice path models, the usual analysis can be greatly simplified. In particular, for each of the four models detailed in Section~\ref{sec:works} we obtain the generating function $Q(1,1;t)$ as a diagonal of the form \[ Q(1,1;t) = \Delta \left(\frac{P(x,y)}{(1-x)(1-y)(1-txyS(\bar x,\bar y))}\right), \] where $P(x,y)$ is a Laurent polynomial which is coprime with $1-x$ and $1-y$. Expanding the rational function on the right-hand side of this equation as a power series in $t$ then gives \[ q_n = [t^n] Q(1,1;t) = [x^ny^nt^n] \left(\frac{P(x,y)}{(1-x)(1-y)(1-txyS(\bar x,\bar y))}\right) = [x^0y^0]\frac{P(x,y)S(\bar x,\bar y)^n}{(1-x)(1-y)}, \] and the multivariate Cauchy integral formula~\cite[Prop. 7.2.6]{PemantleWilson2013} implies \begin{align*} q_n &= \frac{1}{(2\pi i)^2}\int_{|x|=r_1,|y|=r_2} \frac{P(x,y)S(\bar x,\bar y)^n}{(1-x)(1-y)} \cdot \frac{\mathrm dx\,\mathrm dy}{xy} \notag \\[+1mm] &= \frac{1}{(2\pi i)^2} \int_{|x|=r_1,|y|=r_2} \frac{P(x,y)}{xy(1-x)(1-y)}e^{n \log S(\bar x,\bar y)} \mathrm d x\,\mathrm d y \end{align*} for any $0 < r_1,r_2 < 1$.
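Lemma~\ref{lem:diag} is easy to test mechanically. The sketch below (a hypothetical test case of ours, not one of the models of the paper) verifies it for all $(a,b)\in\{0,1\}^2$ on the symmetric choice $R(x,y;t)=1/(1-t(x+\bar x+y+\bar y))$, whose $t^n$ coefficient is the Laurent polynomial $(x+\bar x+y+\bar y)^n$; invariance under $x\mapsto \bar x$, $y\mapsto\bar y$ makes the substitution trivial, and truncating the geometric factors $1/(1-x)^a(1-y)^b$ at degree $2n$ is enough to reach the diagonal coefficient:

```python
def lmul(A, B):
    """Multiply Laurent polynomials stored as {(i, j): coeff}."""
    C = {}
    for (i, j), a in A.items():
        for (k, l), b in B.items():
            C[i + k, j + l] = C.get((i + k, j + l), 0) + a * b
    return C

S = {(1, 0): 1, (-1, 0): 1, (0, 1): 1, (0, -1): 1}   # x + 1/x + y + 1/y

def W(n):                                            # (x + 1/x + y + 1/y)^n
    P = {(0, 0): 1}
    for _ in range(n):
        P = lmul(P, S)
    return P

def lhs(n, a, b):   # [x^>= y^>=] of the t^n coefficient, then x = a, y = b
    return sum(c for (i, j), c in W(n).items()
               if i >= 0 and j >= 0 and (a or i == 0) and (b or j == 0))

def rhs(n, a, b):   # [x^n y^n] of (xy)^n W_n(1/x,1/y) / ((1-x)^a (1-y)^b)
    P = lmul({(n, n): 1}, W(n))          # W_n(1/x,1/y) = W_n(x,y) here
    if a: P = lmul(P, {(k, 0): 1 for k in range(2 * n + 1)})
    if b: P = lmul(P, {(0, l): 1 for l in range(2 * n + 1)})
    return P.get((n, n), 0)

assert all(lhs(n, a, b) == rhs(n, a, b)
           for n in range(7) for a in (0, 1) for b in (0, 1))
```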
{Making the substitutions $x=r_1e^{i\theta_1}$ and $y=r_2e^{i\theta_2}$ converts this integral into a \emm Fourier-Laplace, integral; that is, an integral of the form \[ \int_T A(\theta_1,\theta_2) e^{-n \phi(\theta_1,\theta_2)} \mathrm d\theta_1 \mathrm d\theta_2 . \] Here $T = [-\pi, \pi]^2$, while \[ A(\theta_1, \theta_2)=\frac 1{(2\pi)^2} \frac{P(r_1e^{i\theta_1}, r_2 e^{i\theta_2})}{(1-r_1e^{i\theta_1})(1-r_2 e^{i\theta_2})}, \] and \[ \phi(\theta_1, \theta_2):= -\log S\left(r_1^{-1} e^{-i\theta_1},r_2^{-1} e^{-i\theta_2}\right) . \] The asymptotics of Fourier-Laplace integrals have been well studied.} In particular, suppose the \emph{amplitude} $A$ and \emph{phase} $\phi$ are analytic functions on the domain $T$. If $\phi$ admits a non-empty finite set of critical points\footnote{For the purposes of this discussion, points in $T$ where the gradient of $\phi$ vanishes.}, at which the Hessian of $\phi$ is non-singular and the real part of $\phi$ is locally minimized, then explicit asymptotic formulas in terms of the Taylor coefficients of $A$ and~$\phi$ are known~\cite[Theorem 7.7.5]{Hormander1990a} (see also~\cite[Prop. 53]{Melczer2017} for the explicit formulas used in our calculations). Each critical point of $\phi$ has an asymptotic contribution, and one simply sums up the contributions of all critical points to determine dominant asymptotics of the Fourier-Laplace integral. For the above value of $\phi(\theta_1, \theta_2)$, the chain rule shows that in order to find real values $r_1$ and~$r_2$ such that $\phi$ admits critical points, it is sufficient to find the complex points $(x,y)$ such that \begin{equation}\label{crit} S_x(\bar x,\bar y) = S_y(\bar x,\bar y) = 0 \end{equation} and take $r_1$ and $r_2$ to be their moduli.
One then determines the corresponding critical pairs $(\theta_1, \theta_2)$, that is, the arguments of $x$ and $y$ satisfying~\eqref{crit}, and computes the Hessian of $\phi$ at these points. In each of the four cases that we consider there are critical points with $0<r_1,r_2<1$, and the Hessian is never singular. The next step is to show that the real part of $\phi(\theta_1,\theta_2)$, \[ \Re \phi(\theta_1,\theta_2) = - \log |S(r_1^{-1}e^{-i\theta_1},r_2^{-1}e^{-i\theta_2})|, \] is locally minimized at critical values of $(\theta_1, \theta_2)$. Minimizing this quantity means maximizing $|S(\bar x,\bar y)|$ on $\{(x,y): |x|= r_1, |y|=r_2\}$. Since $S(\bar x,\bar y)$ is a {(Laurent)} polynomial with non-negative coefficients, when $|x|$ and $|y|$ are fixed then $|S(\bar x,\bar y)|$ is maximized (in particular) when~$x$ and~$y$ are positive and real {(that is, $x=r_1, y=r_2$)}. The triangle inequality then shows that the maximizers of $|S(\bar x,\bar y)|$ occur when the arguments of all monomials occurring in $S(\bar x,\bar y)$ are equal. {When this holds for all critical values} $(\theta_1, \theta_2)$, explicit asymptotics can be obtained by direct computation. {In particular, the exponential growth associated with the critical point $(\theta_1, \theta_2)$ is $e^{-\phi(\theta_1,\theta_2)}=S(\bar x, \bar y)$.} We now list our results; full details of the computations can be found in an accompanying {\sc Maple} worksheet, available on {the authors' webpages}\footnote{For lattice path examples with more exotic critical point behaviour, see~\cite[Ch. 10 and 11]{Melczer2017}.}.
\subsection{Case $\boldsymbol{\mathcal S=\{10, \bar 1 0, 0 \bar 1, \bar 2 1\}}$} Specializing Lemma~\ref{lem:diag} to Proposition~\ref{prop:F} gives the diagonal representation \[ Q(1,1;t) = \Delta \left(\frac{(x^2+1)(x^2+2xy-1)(2x^3+x^2y-y)(x^2-y^2)}{x^2y(1-x)(1-y)(1-t(x^3+x^2y+xy^2+y))}\right). \] Solving~\eqref{crit} for $x$ and $y$ gives two solutions with coordinates of modulus less than 1, \[ (x,y) = \left(3^{-1/2},3^{-1/2}\right) \quad \text{ and } \quad (x,y) =\left(-3^{-1/2},-3^{-1/2}\right), \] along with solutions $(i,-i)$ and $(-i,i)$, which are irrelevant to asymptotics. Taking $r_1=r_2 = 3^{-1/2}$ in the argument above, one gets a Fourier-Laplace integral with critical points at $(\theta_1,\theta_2) = (0,0)$ and $(\pi,\pi)$. A direct computation shows that the Hessian of $\phi$ is non-singular at these critical points. Following the above lines, we then check that \[ {\left| S(\bar x,\bar y)\right| =\left| \bar x+x+y+x^2\bar y\right|} \] is indeed maximal on the integration domain for angles $(0,0)$ and $(\pi, \pi)$, as desired. The exponential growth of the resulting Fourier-Laplace integral is given by the value of $e^{-\phi(\theta_1,\theta_2)}{= S(\bar x, \bar y)}$ at the critical points, in this case $S(\sqrt{3},\sqrt{3}) = 2\sqrt{3}$ and $S(-\sqrt{3},-\sqrt{3}) = -2\sqrt{3}$. One then computes successively higher order terms in an asymptotic expansion \[ (2\sqrt{3})^n\left( A_0 + \frac{A_1}{n} + \frac{A_2}{n^2} + \cdots \right) + (-2\sqrt{3})^n\left( A_0' + \frac{A_1'}{n} + \frac{A_2'}{n^2} + \cdots \right) \] until finding terms which are non-zero (see~\cite[Prop. 53]{Melczer2017}). The vanishing of the highest order terms is related to, but not completely determined by, the order of vanishing of the amplitude $A(\theta_1,\theta_2)$ at the critical points under consideration.
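The diagonal representation above can be cross-checked by brute force: expanding $1/((1-x)(1-y))$ geometrically in the constant-term formula for $q_n$ shows that $q_n$ equals the sum of the coefficients of $P(x,y)\,S(\bar x,\bar y)^n$ on monomials $x^iy^j$ with $i\le 0$ and $j\le 0$. A sketch of ours comparing this with direct enumeration of the walks:

```python
def lmul(A, B):
    """Multiply Laurent polynomials stored as {(i, j): coeff}."""
    C = {}
    for (i, j), a in A.items():
        for (k, l), b in B.items():
            C[i + k, j + l] = C.get((i + k, j + l), 0) + a * b
    return C

# Numerator factors of P(x,y); division by x^2 y is an exponent shift.
factors = [{(2, 0): 1, (0, 0): 1},              # x^2 + 1
           {(2, 0): 1, (1, 1): 2, (0, 0): -1},  # x^2 + 2xy - 1
           {(3, 0): 2, (2, 1): 1, (0, 1): -1},  # 2x^3 + x^2 y - y
           {(2, 0): 1, (0, 2): -1}]             # x^2 - y^2
P = {(0, 0): 1}
for f in factors:
    P = lmul(P, f)
P = {(i - 2, j - 1): c for (i, j), c in P.items()}

Sbar = {(-1, 0): 1, (1, 0): 1, (0, 1): 1, (2, -1): 1}   # S(1/x,1/y)

def q_ct(n):      # constant-term extraction described above
    T = P
    for _ in range(n):
        T = lmul(T, Sbar)
    return sum(c for (i, j), c in T.items() if i <= 0 and j <= 0)

def q_dp(n):      # brute-force count of quadrant walks ending anywhere
    steps = [(1, 0), (-1, 0), (0, -1), (-2, 1)]
    cur = {(0, 0): 1}
    for _ in range(n):
        nxt = {}
        for (x, y), c in cur.items():
            for dx, dy in steps:
                if x + dx >= 0 and y + dy >= 0:
                    nxt[x + dx, y + dy] = nxt.get((x + dx, y + dy), 0) + c
        cur = nxt
    return sum(cur.values())

qs_ct = [q_ct(n) for n in range(11)]
qs_dp = [q_dp(n) for n in range(11)]
assert qs_ct == qs_dp and qs_ct[:3] == [1, 1, 2]
```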
Ultimately, we obtain the asymptotic expansion \[ q_n = \frac{(2\sqrt{3})^n}{\pi n^4}\left(C_n + O\left(\frac{1}{n}\right)\right), \] where \begin{equation} C_n = \begin{cases} {5616\sqrt{3}} &: n \text{ even} \\ {9720} &: n \text{ odd} \end{cases}. \label{eq:asm1} \end{equation} \subsection{Case $\boldsymbol{\mathcal S=\{01, 1\bar 1 , \bar 1 \bar 1, \bar 2 1\}}$} Applying Lemma~\ref{lem:diag} to the generating function expression in Proposition~\ref{prop:12-a} gives a diagonal representation \[ Q(1,1;t) = \Delta \left( \frac{(2xy^2+x^2-1)(x-y^2)(x^2y^2+2x^3-y^2)}{xy^2(1-x)(1-y)(1-t(x^2y^2+x^3+y^2+x))} \right). \] This time the system of equations~\eqref{crit} admits four solutions whose coordinates have moduli less than $1$, \[ \left(3^{-1/2},3^{-1/4}\right), \left(3^{-1/2},-3^{-1/4}\right), \left(-3^{-1/2},i3^{-1/4}\right), \left(-3^{-1/2},-i3^{-1/4}\right), \] all of which have coordinate-wise moduli $(r_1,r_2) = \left(3^{-1/2},3^{-1/4}\right)$. A similar analysis to the first case gives \[ q_n = \frac{\left(8\cdot3^{-3/4}\right)^n}{\pi n^4}\left(C_n + O\left(\frac{1}{n}\right)\right), \] where \[ C_n = \begin{cases} {5120\sqrt{3}} &: n \equiv 0 \mod{4} \\ {6656 \cdot 3^{1/4}} &: n \equiv 1 \mod{4} \\ {26624}/{3} &: n \equiv 2 \mod{4} \\ {3840\cdot3^{3/4}} &: n \equiv 3 \mod{4} \end{cases}. \] \subsection{Case $\boldsymbol{\mathcal S=\{01, 1\bar 1 , \bar 1 \bar 1, \bar 2 1, \bar 1 0\}}$} Specializing Lemma~\ref{lem:diag} to Proposition~\ref{prop:model3} gives a diagonal representation \[ Q(1,1;t) = \Delta \left( \frac{(x-y^2)(2xy^2+x^2+xy-1)(x^2y^2+2x^3+x^2y-y^2)}{xy^2(1-x)(1-y) (1-t(x^2y^2+x^3+x^2y+y^2+x)))} \right). \] Here the system~\eqref{crit} has four solutions $(x,y)$ with coordinates of modulus less than 1, which make up the set \begin{equation} \left\{ (y^2,y) : 3y^4+y^3-1=0 \right\}. \label{eq:crit3} \end{equation} The polynomial $3y^4+y^3-1$ has a unique positive root, $y_c\simeq 0.688...$, and we consider the solution $(y_c^2,y_c)$. 
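The decimal values quoted for this model can be recovered from $y_c$ with a few lines of numerics; along the way we check (our own observation) that the saddle value of $S(\bar x,\bar y)$ at $(y_c^2,y_c)$, namely $2y_c^3+y_c^2+2/y_c$, simplifies to $8y_c^3+3y_c^2$ via the relation $3y_c^4+y_c^3=1$. A sketch:

```python
import math

# Unique positive root of 3 y^4 + y^3 - 1, by bisection on (0, 1).
f = lambda y: 3 * y**4 + y**3 - 1
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
y = (lo + hi) / 2
assert abs(y - 0.688) < 1e-3                         # y_c ~ 0.688

# Saddle value of S(1/x,1/y) at (x,y) = (y_c^2, y_c):
S_val = 2 * y**3 + y**2 + 2 / y                      # xy + x^2/y + x + y/x + 1/y
assert abs(S_val - (8 * y**3 + 3 * y**2)) < 1e-12    # uses 3y^4 + y^3 = 1
assert abs(S_val - 4.03164) < 1e-4                   # exponential growth rate

C = math.sqrt(3) * (2527386 * y**3 + 2727881 * y**2 + 1805111 * y + 1306017)
assert abs(C / (2313 * math.pi) - 1112.183) < 0.05   # leading constant
```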
None of the three other solutions has the same coordinate-wise moduli, {hence our only critical point associated with moduli $(r_1, r_2)=(y_c^2,y_c)$ is $(\theta_1, \theta_2)=(0,0)$. The Hessian of~$\phi$ is not singular at $(0,0)$, and by positivity, this point maximizes the modulus of $S$ in $[-\pi, \pi]^2$.} In the end, one obtains asymptotics \[ q_n = \frac{(8y_c^3 +3y_c^2)^n}{2313\pi n^4}\left(C + O\left(\frac{1}{n}\right)\right) \approx (1112.183\cdots)\frac{(4.03164\cdots)^n}{n^4}, \] where \[ C = \sqrt{3}\left(2527386y_c^3 +2727881y_c^2 + 1805111y_c + 1306017\right). \] It can be checked that the three other solutions in~\eqref{eq:crit3} are \emm not, local maximizers of $|S(\bar x,\bar y)|$ among points with the same coordinate-wise moduli. \subsection{Case $\boldsymbol{\mathcal S=\{10, 1 \bar 1, \bar 2 1, \bar 2 0\}}$} Specializing Lemma~\ref{lem:diag} to Proposition~\ref{prop:hard-18} gives a diagonal representation \[ Q(1,1;t) = \Delta \left( \frac{(1-2y)(x^3-y^2)(x^6+x^3y^2+3x^3y-y)(2x^3-y)} {x^3y^2(1-x)(1-y)(1-t(x^3y+x^3+y^2+y))} \right). \] Here there are three solutions to~\eqref{crit} with moduli less than $1$: \[ \left(4^{-1/3},1/2\right), \quad \left(e^{2\pi i/3}4^{-1/3},1/2\right), \quad \left(e^{-2\pi i/3}4^{-1/3},1/2\right). \] All of them have moduli $(r_1, r_2)=\left(4^{-1/3},1/2\right)$. They give rise to three critical points of $\phi$, where the Hessian is non-singular and $|S(\bar x, \bar y)|$ is maximized. An analysis similar to those above gives another periodic asymptotic expansion \[ q_n = \frac{\left(9 \cdot 4^{-2/3}\right)^n}{\pi n^5}\left(C_n + O\left(\frac{1}{n}\right)\right), \] where \[ C_n = \begin{cases} {216513}/2 &: n \equiv 0 \mod{3} \\ {1358127 \cdot 2^{-11/3}} &: n \equiv 1 \mod{3} \\ {124659\cdot 2^{-1/3}} &: n \equiv 2 \mod{3} \end{cases}.
\] \section{Final questions and comments} \label{sec:final} We have outlined above the first general approach to count walks confined to an orthant with {arbitrary} steps, and demonstrated its efficacy across several families and a large number of sporadic cases. In addition to the examples presented here, the power of this method is illustrated by the fact that it {solves} another family of quadrant models, with steps $\mathcal S=\{(-p,0), (-p+1, 1), \ldots, (0,p), (1, -1)\}$, which arose naturally in other applications; the details of this family (containing both large forward and large backward steps) are given in~\cite{BoFuRa17}. {The current} work attempts to lay a basis for the systematic study of lattice walks with longer steps, and we suggest here some possible research directions. \begin{itemize} \item{\bf{Uniqueness of the section-free equation.}} Is it true that, for a model with no large forward step and a finite orbit, there exists a unique section-free equation (Conjecture~\ref{conj:section-free})? Can one describe it generically? \item{\bf{Walks with steps in $\boldsymbol{\{\bar 2, \bar 1, 0, 1\}^2}$.}} In our study of these walks (Section~\ref{sec:m21}) we have left open the case of nine models which have analogies with the four tricky-but-algebraic small step models of Figure~\ref{fig:alg} (see Tables~\ref{tab:kreweras-models} and~\ref{tab:gessel-models}). Can one apply to them some of the techniques used for the small step algebraic models{~\cite{BeBMRa-17,BoKa10,BKR-13,Bous05,mbm-gessel,BoMi10,gessel-proba,KaKoZe08,kurkova-raschel,mishna-jcta}}? In particular, are the associated series D-finite? Which ones are algebraic? Can one prove the non-D-finiteness of the 16 models of Table~\ref{tab:embarassing}, {which have a rational excursion exponent but an infinite orbit}? 
\item {\bf{Walks with steps in $\boldsymbol{\{\bar 1,0,1,2\}^2 }$.}} Symmetrically, one can examine the $14\,268$ interesting (non-isomorphic and non-trivial) models with steps in $\{-1,0,1,2\}^2$, having at least one large forward step. Proceeding as in Section~\ref{sec:m21} reveals that $1\,189$ of them are included in a half-space, and thus analogous to the 5 half-space models with small steps. Of the remaining $13\, 079$ {models}, $12\,828$ have an irrational excursion exponent, and hence a non-D-finite generating function and an infinite orbit (Section~\ref{sec:infinite-orbit}). {The $251$ that have a rational exponent split into three families}: \begin{itemize} \item 11 nevertheless have an infinite orbit. They are the reverses of the 11 models of Table~\ref{tab:embarassing} that contain a step in $\mathbb{Z}_{-}^2$ (for the other 5 models in this table, there is no non-trivial walk starting at the origin after reversing steps); \item 227 are Hadamard, and thus solvable by Proposition~\ref{prop:hadamard} and D-finite. They are the reverses of the 227 Hadamard models of Section~\ref{sec:works}; \item 13 are the reverses of the models in Table~\ref{tab:finite}, and thus share their orbit structure: $O_{12}$, $\tilde O_{12}$ or $O_{18}$. {They also share their excursion generating function, which we have either proved or conjectured to be D-finite in all 13 cases.} \end{itemize} \item {It has been proved~\cite{BoChHoKaPe17} that for the 19 small step models in the quadrant that are D-finite but transcendental, the series $Q(x,y;t)$ has an explicit expression involving integrals and specializations of the hypergeometric series $_2F_1$. For which models with larger steps is this still true? Corollary~\ref{cor:explicit} and Conjecture~\ref{conj:K2-ex} show that a similar property may indeed hold in some cases.} \item {We have focussed in this paper on 2D examples, because the quadrant is already a rich source of interesting problems.
But the four stages of the method, described in Sections~\ref{sec:eqfunc} to~\ref{sec:extract}, apply just as well to higher dimensional models. In fact, they were already successfully applied to 3D models with small steps in~\cite{BoBoKaMe16}.} \end{itemize} \bigskip \noindent {\bf Acknowledgments.} We thank Axel Bacher for great help with computations of walk sequences, {Andrew Elvey Price and Michael Wallner for providing the parity argument of Proposition~\ref{prop:small-group},} Mark van Hoeij for useful discussions on hypergeometric solutions of differential equations, and J\'er\^ome Leroux for pointing us to Farkas' Lemma. \bigskip \bibliographystyle{plain}
\section{Introduction} \label{sec:introduction} Our universe can be divided into two sectors: the visible and the dark. The visible sector of the universe is comprised of the standard model (SM) particles, whose constituents were all identified with the last discovery of the Higgs boson in 2012 \cite{Aad:2012tfa,Chatrchyan:2012xdj}. Just as the SM has various kinds of fermions, gauge bosons, and a scalar boson, the dark sector might also have a rich spectrum of dark fermions, dark gauge bosons, and dark scalar bosons rather than a single dark matter (DM) particle. Although there has been no discovery of the dark sector particles so far, the existence of the dark sector is backed by significant observational evidence of DM \cite{Patrignani:2016xqp}. It is natural to expect that there is a connection between the SM particles and DM particles other than gravity as the typical explanations (freeze-out, freeze-in \cite{Hall:2009bx}) of the dark matter relic density require an interaction between the two sectors. While it is possible the dark sector particles carry the SM weak charges (as in many supersymmetric dark matter models), it may also be very possible they do not carry any charges under the SM gauge symmetries. Even in the latter case, two separate sectors might still be able to communicate with each other if there is a `portal,' a way to connect the visible sector particles and the dark sector particles through a mixing or a loop-effect. There have been four popular portals: \\ $~~~~~~~~~~$ (i) Vector portal: $\frac{\varepsilon}{2 \cos\theta_W} B_{\mu\nu} Z'^{\mu\nu}$, \\ $~~~~~~~~~~~$(ii) Axion portal: $\frac{G_{a\gamma\gamma}}{4} a F_{\mu\nu} \tilde F^{\mu\nu} , \cdots$, \\ $~~~~~~~~~~~$(iii) Higgs portal: $\kappa |S|^2 H^\dagger H , \cdots$, \\ $~~~~~~~~~~~$(iv) Neutrino portal: $y_N LHN$. \\ The constraints on these portals can be found in Refs.~\cite{Essig:2013lka,Patrignani:2016xqp,Alexander:2016aln}. 
The relic DM can be either a portal particle or a particle coupled to portal particles via hidden interactions (for examples see Refs.~\cite{ArkaniHamed:2008qn,Nomura:2008ru}). The vector portal \cite{Holdom:1985ag} is a mixing between a SM gauge boson and a dark sector gauge boson (such as the dark photon \cite{ArkaniHamed:2008qn} and the dark $Z$ \cite{Davoudiasl:2012ag}, which is a variant of the dark photon with an axial coupling \cite{Davoudiasl:2012qa,Davoudiasl:2013aya,Lee:2013fda,Davoudiasl:2014kua,Kong:2014jwa,Kim:2014ana,Davoudiasl:2015bua}). The axion portal connects the axion or axion-like particle to a pair of the SM gauge bosons. Recently, it was pointed out that a `dark axion portal' \cite{Kaneta:2016wvf} may exist, connecting the dark photon and axion to the SM. The new portal is independent of the vector and axion portals as it arises from a different mechanism. When a new portal is introduced, it can provide new opportunities to search for dark sector particles \cite{Essig:2013lka}. For some of the recent studies using the dark axion portal, see Refs.~\cite{Choi:2016kke,Kaneta:2017wfh,Agrawal:2017eqm,Kitajima:2017peg,Choi:2018dqr}. Because of the very small couplings between the dark sector and the SM particles, their masses can be much smaller than the typical (electroweak - TeV) scale of new physics. As a matter of fact, most studies of these portals focus on rather light masses, as in the dark photon and axion (or axion-like particle) cases. (For some mechanisms to introduce very light particles, see Refs.~\cite{Kim:1979if,Shifman:1979if,Lee:2016ejx}.) The constraints from various studies can be summarized, in a fashion similar to Ref.~\cite{Jaeckel:2013ija}, across the vast parameter space (mass and coupling) of the portal particle.
In this paper, we study the implications of the dark axion portal for a roughly MeV - 10 GeV scale dark photon, the mass range focused on by the typical intensity frontier new physics \cite{Essig:2013lka}. We investigate possible signals of the new portal at the $B$-factories and use the existing data from the BaBar experiment to constrain the axion-photon-dark photon coupling. We also study the sensitivities of the new Belle-II experiment for both Phase II and Phase III running. Belle-II began data taking in April 2018 with a partially complete detector, and will be one of the major players in the intensity frontier physics over the next decade. We also study the implications for the muon and electron $g-2$ and determine the constraints on the new coupling from existing measurements. Finally, we study possible dark axion portal signals at the LSND and MiniBooNE fixed target neutrino experiments, and the CHARM proton beam dump. While we focus on MeV - 10 GeV scale physics, much heavier particles can be searched for by energy frontier experiments such as the LHC experiments; much lighter ones may be observed by the cosmic frontier observations such as stellar cooling and supernovae. The exact nature of the axion-photon-dark photon coupling is model dependent and the predictions may change depending on other related couplings (such as the axion-photon-photon and axion-dark photon-dark photon couplings), but we will treat it in a model-independent way by taking the limit where only axion-photon-dark photon coupling is relevant. In Sec.~\ref{sec:DAP}, we briefly discuss the dark axion portal vertex, and elaborate on our parameterization. In Sec.~\ref{sec:calc}, we discuss the search channels and constraints for the dark axion portal from the BaBar and Belle-II experiments. In Sec.~\ref{ssec:g_2}, we discuss the contributions to the electron and muon $g-2$ from the new axion-photon-dark photon vertex and obtain constraints from current measurements. 
In Sec.~\ref{sec:ftnf}, we study the ability of the LSND and MiniBooNE fixed target neutrino facilities to constrain the dark axion portal. In Sec.~\ref{sec:beamdump}, we place limits on the dark axion portal parameter space with two analyses at the CHARM experiment, and make a few comments on electron beam dumps. In Sec.~\ref{sec:summary}, we provide a summary of our study and future directions. \section{Dark axion portal} \label{sec:DAP} The axion portal and the dark axion portal terms \cite{Kaneta:2016wvf} can be written as follows: \begin{eqnarray} && {\cal L}_\text{axion portal} = \frac{G_{agg}}{4} a G_{\mu\nu}\tilde G^{\mu\nu} + \frac{G_{a\gamma\gamma}}{4} a F_{\mu\nu}\tilde F^{\mu\nu} + \cdots ~~ \\ &&{\cal L}_\text{dark axion portal} = \frac{G_{a\gamma^\prime\gamma^\prime}}{4} a Z'_{\mu\nu}\tilde Z'^{\mu\nu} +\frac{G_{a\gamma\gamma^\prime}}{2} a F_{\mu\nu}\tilde Z'^{\mu\nu} ~~~ \end{eqnarray} The dark axion portal and the axion portal are constructed using the anomaly triangle, and the actual couplings depend on the details of the model. For instance, in the dark KSVZ axion model introduced in Ref.~\cite{Kaneta:2016wvf}, the portal couplings are given as \begin{eqnarray} G_{a\gamma\gamma} &=& \frac{e^2}{8\pi^2} \frac{PQ_\Phi}{f_a} \Big[ 2 N_C Q_\psi^2 - \frac{2}{3} \frac{4+z}{1+z} \Big] , \label{eq:Gagg} \\ G_{a\gamma\gamma'} &\simeq& \frac{e e'}{8\pi^2} \frac{PQ_\Phi}{f_a} \big[ 2 N_C D_\psi Q_\psi \big] + \varepsilon G_{a\gamma\gamma} , \\ G_{a\gamma'\gamma'} &\simeq& \frac{e'^2}{8\pi^2} \frac{PQ_\Phi}{f_a} \big[ 2 N_C D_\psi^2 \big] + 2 \varepsilon G_{a\gamma\gamma'}, \end{eqnarray} where $N_C = 3$ is the color factor. $e$ ($e'$) and $Q_\psi$ ($D_\psi$) are the electric (dark) coupling constant and charge of the exotic quarks in the anomaly triangle. $f_a / PQ_\Phi$ is the mass scale of the exotic quarks. $z = m_u / m_d \simeq 0.56$ is the mass ratio of the $u$ and $d$ quarks. $\varepsilon$ is the vector portal coupling which we take to be 0 in our study.
While one can consider the coupling in the context of a specific model that determines the couplings in terms of the model parameters and provides connections among them, we focus on the limit where a model independent treatment makes sense and consider only the $G_{a\gamma\gamma'}$ coupling. Specifically, we take $m_a \ll m_{\gamma'}$ and also take the view that the model-specific parts of $G_{a\gamma\gamma}$, such as the electric charge contribution, are arranged to make $G_{a\gamma\gamma}$ small enough to neglect its effect in the analysis we perform in this paper. We do not claim that the $a$ should be the QCD axion, but take it to be an axion-like particle with a mass much smaller than that of the $\gamma^\prime$. The $G_{a\gamma'\gamma'}$ is nonzero, but the on-shell decay process $a \to \gamma' \gamma'$ is kinematically forbidden, while the off-shell process would be negligibly small. While the decay $a \to \gamma \gamma$ is allowed, by considering a very small $m_a$, the aforementioned arrangement to minimize $G_{a\gamma\gamma}$ would ensure that the $a$ is sufficiently long-lived to escape the $B$-factory detectors before its decay and its effect on the lepton $g-2$ is suppressed, making the effect of the $G_{a\gamma\gamma}$ negligible in our analysis. More general cases and their implications will be studied in subsequent works. \section{B Factories} \label{sec:calc} \subsection{BaBar} \label{ssec:visible} BaBar \cite{Aubert:2001tu} is an experiment at an asymmetric electron-positron collider with a 9\,GeV electron beam and a 3.1\,GeV positron beam for a center of mass energy of 10.5\,GeV. The experiment collected an integrated luminosity of over 500\,fb$^{-1}$ \cite{Lees:2013rw} between 1999 and 2008, but the monophoton trigger was only implemented for its final running period. \begin{figure}[t] \centerline{\includegraphics[width=0.3\textwidth]{figs/e_e_bar_to_DP_axion_2.pdf}} \caption{Electron-positron annihilation to on-shell $a$ and $\gamma^\prime$.
Observable at $B$ factories as a monophoton produced through subsequent decay $\gamma^\prime \to \gamma a$.} \label{fig:feyn_vis} \end{figure} \begin{table}[b] \centering \begin{tabular}{lcc} \hline \hline & Low-Cut & High-Cut\\ \hline $E_\gamma^*$ & $[2.2,3.7]$\,GeV & $[3.2,5.5]$\,GeV \\ \hline $\cos \theta_\gamma^*$ & $[-0.46,0.46]$ & $[-0.31,0.6]$ \\ \hline Luminosity & 19 $\mathrm{fb}^{-1}$ & 28 $\mathrm{fb}^{-1}$\\ \hline Efficiency & 55\% & 30\% \\ \hline \end{tabular} \caption{Kinematic cuts on the BaBar Low-Cut and High-Cut samples. $E_\gamma^*$ is the center of mass energy of the detected photon and $\theta_\gamma^*$ is the angle of the photon relative to the beam axis in the center of mass frame.} \label{tab:params} \end{table} \begin{figure}[t] \centerline{\includegraphics[height=0.33\textwidth]{figs/No_Cut_Hist_Decay.pdf}} \caption{As an illustration, we provide a histogram of the cosine of the center-of-mass emission angle $\cos \theta_\gamma^*$ for a sample of $4\times10^5$ secondary photons produced through $e^+ e^- \to a (\gamma^\prime \to a \gamma)$ for High-Cut energies. The boxed window reflects the imposed High-Cut $\cos \theta_\gamma^*$ range detailed in Table \ref{tab:params}. Of the $10^6$ events initially generated, $1.36\times10^5$ survive both the energy and angle cuts. The bin size for this histogram was chosen to provide a good representation of the angular distribution, but is not relevant to the analysis.} \label{fig:angle_hist_2} \end{figure} We examine the process $e^+ e^- \to a \gamma^\prime$ shown in Fig.~\ref{fig:feyn_vis}, whose differential cross section is calculated with FeynCalc \cite{Shtabovenko:2016sxi,Mertig:1990an} to be \begin{equation} \frac{d\sigma}{dt} = \frac{\alpha_\text{EM} G_{a\gamma\gamma^\prime}^2}{16 s^3}\left(2 m_{\gamma^\prime}^4 - 2m_{\gamma^\prime}^2(s+t+u)+t^2+u^2\right), \label{eq:ee_ann} \end{equation} where $s=(p_{e^-}+p_{e^+})^2$, $t=(p_{\gamma^\prime}-p_{e^-})^2$ and $u=(p_{a}-p_{e^-})^2$ are the Mandelstam variables.
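For orientation, integrating Eq.~\eqref{eq:ee_ann} over the physical range $t \in [m_{\gamma^\prime}^2 - s,\, 0]$ (with $m_a = m_e = 0$, so that $s+t+u = m_{\gamma^\prime}^2$) gives the closed form $\sigma = \alpha_\text{EM}\, G_{a\gamma\gamma^\prime}^2\, (s - m_{\gamma^\prime}^2)^3 / (24\, s^3)$. This reduction is ours, not taken from the text, and the parameter values below are purely illustrative; a numerical cross-check:

```python
import math

alpha = 1 / 137.035999          # fine-structure constant
G = 1e-3                        # G_{a gamma gamma'} in GeV^-1 (illustrative)
s = 10.5 ** 2                   # (center-of-mass energy)^2 in GeV^2
m = 1.0                         # dark photon mass in GeV, with m_a ~ 0

def dsigma_dt(t):
    u = m**2 - s - t            # since s + t + u = m^2 for massless e, a
    return alpha * G**2 / (16 * s**3) * (2*m**4 - 2*m**2*(s + t + u) + t**2 + u**2)

# Simpson's rule; exact up to rounding since the integrand is quadratic in t.
a, b, n = m**2 - s, 0.0, 100
h = (b - a) / n
w = lambda k: 1 if k in (0, n) else (4 if k % 2 else 2)
sigma = h / 3 * sum(w(k) * dsigma_dt(a + k * h) for k in range(n + 1))

sigma_closed = alpha * G**2 * (s - m**2)**3 / (24 * s**3)
assert abs(sigma / sigma_closed - 1) < 1e-9
```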
This process can result in the production of a monophoton final state through a subsequent $\gamma^\prime \to a\gamma$ decay, so long as the $\gamma^\prime$ is reasonably prompt. We will assume that $m_a \ll m_{\gamma^\prime}$, and that the $a$ is sufficiently long-lived to escape the BaBar detector before decaying radiatively. We follow the approach of Ref.~\cite{Essig:2013vha} in using BaBar's $\Upsilon(3S) \to \gamma A^0$ data, where $A^0$ is some invisibly decaying scalar particle \cite{Aubert:2008as}\footnote{The limits we will place using this data could potentially be improved by using a larger BaBar analysis that included a background model \cite{Lees:2017lec}.}. This data set records the measured center-of-mass energy $E_\gamma^\star$ of detected monophotons. The data is divided into overlapping Low-Cut and High-Cut $E_\gamma^\star$ domains, where the Low-Cut domain is $E_\gamma^\star\in[2.2,3.7]$\,GeV, and the High-Cut domain is $E_\gamma^\star\in[3.2,5.5]$\,GeV. See Table \ref{tab:params} for a summary of the cuts, luminosities and efficiencies. Samples of $10^6$ $e^+ e^- \to a\gamma^\prime $ events were generated with CalcHEP 3.6.27 \cite{Pukhov:1999gg,Belyaev:2012qa} for 45 dark photon masses. The subsequent $\gamma^\prime \to a\gamma$ decays were simulated using an external Python code. As in Ref. \cite{Essig:2013vha}, the simulated photons were smeared using a Crystal Ball function (see Ref.~\cite{Skwarnicki:1986xj}) with $n=1.79$, $\alpha=0.811$ and $\sigma/E_\gamma^\star = 0.015\times \left(\rm GeV/E_\gamma^\star\right)^{3/4} + 0.01$. In the absence of a background model, we place a conservative limit on the coupling constant $G_{a \gamma \gamma^\prime}$ by treating all measured events as signal, and taking the maximum value of $G_{a \gamma \gamma^\prime}$ for which the theory prediction does not exceed the measured number of events in any bin of either the High- or Low-Cut data by more than $2\sigma$.
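The smearing step can be sketched as follows. The Crystal Ball parameters and the resolution function are those quoted above, while the sampling strategy (an exact mixture of the truncated Gaussian core and the inverse-CDF power-law tail) is an implementation choice of ours rather than the procedure of Ref.~\cite{Essig:2013vha}.

```python
import math, random

def cb_pdf(z, alpha=0.811, n=1.79):
    """Unnormalized Crystal Ball density in the standardized variable z:
    Gaussian core for z > -alpha, power-law tail below."""
    if z > -alpha:
        return math.exp(-0.5 * z * z)
    A = (n / alpha) ** n * math.exp(-0.5 * alpha * alpha)
    B = n / alpha - alpha
    return A * (B - z) ** (-n)

def sigma_of_E(E):
    """Resolution quoted in the text: sigma/E = 0.015 (GeV/E)^{3/4} + 0.01
    (E in GeV; returns sigma in GeV)."""
    return E * (0.015 * (1.0 / E) ** 0.75 + 0.01)

def sample_cb(rng, alpha=0.811, n=1.79):
    """Draw z from the normalized Crystal Ball shape."""
    w0 = n / alpha
    A = w0 ** n * math.exp(-0.5 * alpha * alpha)
    B = w0 - alpha
    area_tail = A * w0 ** (1.0 - n) / (n - 1.0)
    area_core = math.sqrt(math.pi / 2.0) * (1.0 + math.erf(alpha / math.sqrt(2.0)))
    if rng.random() < area_tail / (area_tail + area_core):
        u = 1.0 - rng.random()                      # u in (0, 1]
        w = w0 * u ** (-1.0 / (n - 1.0))            # inverse-CDF power law
        return B - w
    while True:                                     # Gaussian core, z > -alpha
        z = rng.gauss(0.0, 1.0)
        if z > -alpha:
            return z

def smear_energy(e_true, rng):
    """Return a smeared photon energy (GeV)."""
    return e_true + sigma_of_E(e_true) * sample_cb(rng)
```

The density is continuous at the core-tail boundary by construction, which the test below checks.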
\begin{figure}[t] \centerline{\includegraphics[width=0.48\textwidth]{figs/prod_cross_combined.pdf}} \caption{Cross sections for the processes $e^+ e^- \to a \gamma^\prime$ and $e^+ e^- \to a \gamma \gamma^\prime$ with $m_a \ll m_{\gamma^\prime}$ and $E_\gamma \in [2.2,5.5]\,\rm GeV$. The cross section of $e^+ e^- \to a \gamma \gamma^\prime$ is heavily suppressed relative to $e^+ e^- \to a \gamma^\prime$ due in part to the three-body final state.} \label{fig:cross_section_2} \end{figure} A sample of the angular distribution of photons produced in the chain $e^+e^-\to a(\gamma^\prime \to a\gamma)$ is shown in Fig.~\ref{fig:angle_hist_2}. The events from $e^+ e^- \to a\gamma (\gamma^\prime \to a \gamma)$ could potentially be relevant, as the primary photon is preferentially emitted along the beam axis while the secondary photon produced through the $\gamma^\prime$ decay has a much broader angular distribution and frequently passes the required monophoton cuts. However, this process possesses a much smaller cross section than pure annihilation (compare the lines shown in Fig.~\ref{fig:cross_section_2}) and would contribute at a subleading level to the observed monophoton signal. We show the limits obtained from BaBar, using only $e^+e^-\to a(\gamma^\prime \to a\gamma)$, for $m_a \ll m_{\gamma^\prime}$ in Fig.~\ref{fig:limits_2}. For $m_{\gamma^\prime} \le 100\,{\rm MeV}$, the lifetime can become sufficiently large for relevant values of $G_{a\gamma\gamma^\prime}$ that dark photons begin to escape the detector before they decay, reducing the number of observed monophoton events: \begin{align} \label{eq:gammap_tau} c\tau_{\gamma^\prime} &\approx \frac{5.95 \times 10^{-14} \, \mathrm{m}\cdot \mathrm{GeV}}{G_{a\gamma\gamma^\prime}^2 m_{\gamma^\prime}^3} \quad (\text{for }m_a \ll m_{\gamma^\prime})\\ &\approx 60\,\text{m}\times\left(\frac{10^{-3}\,\rm GeV^{-1}}{G_{a\gamma\gamma^\prime}}\right)^2\times\left(\frac{1\,{\rm MeV}}{m_{\gamma^\prime}}\right)^3 .
\end{align} This is reflected in a pronounced shoulder in the limit contour: as $m_{\gamma^\prime}$ declines, the lifetime of the $\gamma^\prime$ grows until its decay length becomes of $\mathcal{O}(1\,\text{m})$, comparable to the size of the BaBar detector. \begin{figure*}[t] \centerline{\includegraphics[width=0.7\textwidth]{figs/Limits_1_decay.pdf}} \caption{Limits placed on $G_{a\gamma\gamma^\prime}$ for the hierarchical mass scenario with $m_a\ll m_{\gamma^\prime}$. The BaBar line refers to searches for monophotons produced through $e^+e^-\to a\gamma^\prime$ followed by the decay $\gamma^\prime \to a\gamma$. Also shown are projected sensitivities for a similar search in Phase-II and III of the Belle-II experiment. The LSND and MiniBooNE lines reflect a search for excess neutral current elastic scattering events in the LSND and MiniBooNE detectors. The CHARM constraint is the result of a search for $\gamma^\prime \to a\gamma$ decays in the CHARM fine-grain detector. The electron and muon $g-2$ constraints represent parameter space for which the scenario is excluded due to changes in the lepton anomalous magnetic moment that are incompatible with current experimental measurements.} \label{fig:limits_2} \end{figure*} \subsection{Belle-II} \label{ssec:belle2} The Belle-II experiment \cite{Abe:2010gxa} is the successor to the Belle and BaBar experiments, and has recently begun taking data as part of Phase II of its operations. Phase II aims to record 20\,fb$^{-1}$ of integrated luminosity with a partially completed detector, while Phase III of the experiment will take 50\,ab$^{-1}$ of data with the completed detector and the SuperKEKB particle accelerator. Unlike BaBar, Belle-II will run with a monophoton trigger for the entirety of its run.
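The lifetime estimate of Eq.~\eqref{eq:gammap_tau}, which controls the low-mass sensitivity of both BaBar and Belle-II, is simple to check numerically; the helpers below are our own (masses in GeV, $G_{a\gamma\gamma^\prime}$ in GeV$^{-1}$, lengths in meters).

```python
import math

def ctau_dark_photon(g, m_dp):
    """Proper decay length from Eq. (gammap_tau):
    c*tau ~ 5.95e-14 m GeV / (G^2 m_dp^3), valid for m_a << m_dp."""
    return 5.95e-14 / (g**2 * m_dp**3)

def lab_decay_length(g, m_dp, energy):
    """Boosted lab-frame decay length beta*gamma*c*tau = (p/m)*c*tau."""
    p = math.sqrt(max(energy**2 - m_dp**2, 0.0))
    return (p / m_dp) * ctau_dark_photon(g, m_dp)
```

For $G_{a\gamma\gamma^\prime} = 10^{-3}\,\mathrm{GeV}^{-1}$ and $m_{\gamma^\prime} = 1\,\mathrm{MeV}$ this reproduces the $\approx 60$\,m benchmark of Eq.~\eqref{eq:gammap_tau}.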
To estimate the sensitivity of Belle-II to the $a$-$\gamma$-$\gamma^\prime$ vertex, samples of $10^6$ $e^+ e^- \to \gamma \gamma^\prime a$ events were generated with CalcHEP 3.6.27 \cite{Pukhov:1999gg,Belyaev:2012qa} for 48 values of $m_{\gamma^\prime}$, chosen so as to smoothly render all features of the contour. Photons were generated through the decay of the $\gamma^\prime$ and the number satisfying the preliminary cuts shown in Ref. \cite{HeartyBelleII} were recorded. This preliminary analysis predicted that 300 background events would survive these cuts for 20\,fb$^{-1}$ of data, and we scale this to $7.5\times10^{5}$ background events for 50\,ab$^{-1}$. We show contours for the expected Phase-II and Phase-III luminosities in Fig. \ref{fig:limits_2} for $m_a \ll m_{\gamma^\prime}$, requiring $2\sqrt{300}$ events for the $20\,\text{fb}^{-1}$ contour and $2\sqrt{7.5\times10^5}$ events for the $50\,\text{ab}^{-1}$ contour. Thanks to a combination of greater luminosity and generous angular cuts, Belle-II is capable of probing far smaller values of $G_{a \gamma \gamma^\prime}$ than BaBar. \section{\boldmath Lepton $g-2$} \label{ssec:g_2} The dark axion portal introduces a new two-loop contribution to the lepton anomalous magnetic moment, shown in Fig. \ref{fig:g_2}. The change to the lepton anomalous magnetic moment $a_\ell = (g-2)/2$ is given by \cite{Jegerlehner:2009ry,Lindner:2016bgg}: \begin{equation} \label{eq:g_minus_2} \Delta a_\ell = \frac{\alpha}{\pi} \int_0^1 dx(1-x) \Pi_R(s_x)-\frac{\alpha}{3\pi} c\, m_\ell^2 G_{a\gamma \gamma^\prime}^2, \end{equation} where $c$ is a positive free parameter introduced during the renormalization of the $a$-$\gamma$-$\gamma^\prime$ vertex (see App.
\ref{app:g_2} for further details), \begin{equation} s_x \equiv -\frac{x^2}{1-x}m_\ell^2, \end{equation} and \bea \Pi_R(q^2) &=& \Pi(q^2) - \Pi(0) - q^2\Pi^\prime(0)\\ &=& \int_0^1 dx \left[ \log \left( \frac{xm_a^2+(1-x)m_{\gamma^\prime}^2}{xm_a^2+(1-x)m_{\gamma^\prime}^2-x(1-x)q^2} \right) \right. \nonumber \\ &&\times(xm_a^2+(1-x)m_{\gamma^\prime}^2-x(1-x)q^2) - \frac{q^2}{6} \Biggr]. \label{eq:PiR} \end{eqnarray} \begin{figure}[t] \centerline{\includegraphics[width=0.3\textwidth]{figs/gminus2_axion_DP2.pdf}} \caption{Two-loop diagram providing the leading contribution to the lepton anomalous magnetic moment including $a$-$\gamma$-$\gamma^\prime$ vertices.} \label{fig:g_2} \end{figure} While the free parameter $c$ renders the prediction inexact, both terms contributing to $\Delta a_\ell$ are always negative, and conservative limits can be placed on $G_{a\gamma\gamma^\prime}$ by assuming $c=0$, as nonzero values of $c$ only magnify the dark axion portal contribution and correspondingly strengthen the limits. The current best measurements of the anomalous magnetic moment of the muon come from a muon storage ring at Brookhaven National Laboratory \cite{Mohr:2012tt,Bennett:2002jb,Bennett:2004pv,Bennett:2006fi}. The measurement exceeds the theoretically predicted value by $3.5\sigma$ \cite{Patrignani:2016xqp}, \begin{equation} \Delta a_\mu = a_\mu(\text{exp}) - a_\mu(\text{SM}) = (26.8 \pm 7.6)\times10^{-10}. \end{equation} The dark axion portal unfortunately exacerbates this disagreement. We place a limit where the dark axion portal contribution increases $\Delta a_\mu$ by $15.2 \times 10^{-10}$ to $42.0\times10^{-10}$, a $5.5\sigma$ disagreement. In the future, the E989 collaboration at FNAL intends to improve on the precision of the current experimental measurement by a factor of three \cite{Grange:2015fou}.
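The pull arithmetic behind these statements is elementary and can be scripted; the sketch below only reproduces the numbers quoted above (a negative dark axion portal shift widens the positive $\Delta a_\mu$ gap) and introduces no new inputs.

```python
def pull(delta_a, sigma):
    """Discrepancy between experiment and theory, in standard deviations."""
    return abs(delta_a) / sigma

# Measured-minus-SM muon discrepancy, in units of 1e-10.
MU_CENTRAL, MU_SIGMA = 26.8, 7.6
# A negative dark axion portal contribution of magnitude 15.2e-10 lowers
# the theory prediction, widening the gap to 42.0e-10.
SHIFT = 15.2
```

This reproduces the $3.5\sigma$ baseline and the $5.5\sigma$ value at which the limit is placed.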
The electron anomalous magnetic moment has been determined most accurately through one-electron quantum cyclotron experiments and measurements of the ratio between the Planck constant and the mass of rubidium-87 \cite{Bouchendira:2010es,Hanneke:2008tm,Hanneke:2010au}. The theory prediction \cite{Aoyama:2012wj} exceeds experimental measurements by approximately $1\sigma$ \cite{Endo:2012hp}, \begin{equation} \Delta a_e = a_e(\text{exp}) - a_e(\text{SM}) = -(1.06\pm0.82)\times10^{-12}. \end{equation} The dark axion portal can reduce the disagreement between theory and experiment for the electron anomalous magnetic moment. We place a limit where the dark axion portal contribution overcorrects the difference between the SM and experiment, such that $a_e$ disagrees with the experimentally measured value by more than $2\sigma$. Both this contour and that derived for the muon $g-2$ are shown in Fig. \ref{fig:limits_2}. While the two-loop contribution from the dark axion portal cannot resolve the muon $g-2$ discrepancy (because of its sign), the situation could become nontrivial if we allow for model-dependent contributions from $a$-fermion-fermion Yukawa couplings or the $a$-$\gamma$-$\gamma$ coupling ($G_{a\gamma\gamma}$). As studied in Ref. \cite{Marciano:2016yhf}, the combined effect of Barr-Zee one-loop diagrams and light-by-light and vacuum polarization two-loop diagrams might resolve the muon $g-2$ discrepancy if sufficiently large coupling strengths are allowed. We will study this more general situation in subsequent works. \section{Fixed Target Neutrino Experiments} \label{sec:ftnf} Fixed target neutrino experiments (FTNEs) direct high-intensity proton beams onto thick targets to produce charged mesons, primarily pions and kaons, whose decays produce neutrinos. FTNEs deliver in excess of $10^{20}$ protons on target (POT) over the life of their running time.
While the objective of FTNEs is to study neutrino oscillations, their high intensity and low Standard Model backgrounds are also well suited to searching for hidden sector states with sub-GeV masses \cite{Pospelov:2007mp,Batell:2009di}. This section will consider the sensitivity of LSND and MiniBooNE to the dark axion portal by repurposing published analyses of neutral current elastic scattering. All cross sections in this section were calculated with the assistance of FeynCalc \cite{Shtabovenko:2016sxi,Mertig:1990an}. \begin{figure}[t] \centerline{\includegraphics[width=0.3\textwidth]{figs/pi0_eta_to_gamma_A_a.pdf}} \caption{Decay of the pseudoscalar mesons $\pi^0$ and $\eta$ to $\gamma a \gamma^\prime$ used in the analysis of LSND, MiniBooNE and CHARM. The off-shell internal photon and three body final state suppress the branching ratio.} \label{fig:meson_decay} \end{figure} Alongside the charged mesons, FTNEs also produce the neutral pseudoscalar mesons $\pi^0$ and $\eta$. The $\pi^0$ is produced in quantities similar to those of the $\pi^+$ and $\pi^-$, while the $\eta$ is produced at a rate suppressed by a factor of 20 to 30 \cite{Jaeger:1974pk,Amaldi:1979zk}. The $a$ and the $\gamma^\prime$ could be produced in radiative decays of the pseudoscalar mesons through the diagram shown in Fig. \ref{fig:meson_decay}. 
The differential width for the decay $\pi^0 \to a \gamma \gamma^\prime$ is given by \begin{equation} \frac{d^2\Gamma}{dm_{12}^2 dm_{23}^2} = \frac{1}{(2\pi)^3}\frac{1}{32m_{\pi^0}^3}\overline{|\mathcal{M}|^2}, \end{equation} where $m_{ij}^2=(p_i+p_j)^2$ for $i,j=1,2,3$; particle 1 corresponds to the $\gamma$, particle 2 to the $a$, and particle 3 to the $\gamma^\prime$. The squared amplitude is \begin{align*} \overline{|\mathcal{M}|^2} =& \frac{e^4 G_{a\gamma\gamma^\prime}^2}{64 \pi^4 f_\pi^2 m_{23}^4} \bigg[m_{23}^4 \left(m_{12}^2+m_{23}^2-m_a^2-m_{\pi^0}^2 \right)^2 \nonumber\\ &-m_{23}^2\left(m_{23}^2-m_{\pi^0}^2\right) \left(m_{23}^2-m_a^2+m_{\gamma^\prime}^2\right)\nonumber\\ &\times\left(m_{12}^2+m_{23}^2-m_a^2-m_{\pi^0}^2\right)\nonumber\\ &+\frac{1}{2}\left(m_{23}^2-m_{\pi^0}^2\right)^2\left(m_{23}^2-m_a^2+m_{\gamma^\prime}^2\right)^2\nonumber\\&-m_{23}^2 m_{\gamma^\prime}^2\left(m_{23}^2-m_{\pi^0}^2\right)^2\bigg], \end{align*} where $e = \sqrt{4\pi\alpha_\mathrm{em}}$ and $\alpha_\mathrm{em}$ is the fine structure constant. The same expression holds for the $\eta$, but with $m_{\pi^0}\to m_{\eta}$. The decay width is suppressed by the kinematics of the three-body final state and the off-shell internal photon propagator. Interestingly, the $\eta$ is far more likely to decay to the dark sector than the $\pi^0$ due to the dependence of the width on the meson mass. Both the $a$ and the $\gamma^\prime$ are able to propagate to the neutrino detector, where they could be observed through the inelastic scattering channels $ae\to \gamma^\prime e$ and $\gamma^\prime e \to a e$ shown in Fig. \ref{fig:scatter}.
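A direct numerical integration of the double-differential width over the Dalitz region makes the suppression explicit. The sketch below takes $m_a \to 0$, uses standard two-body kinematics for the $m_{12}^2$ limits at fixed $m_{23}^2$, and assumes $f_\pi \simeq 92.4$\,MeV (a normalization convention not fixed by the text); the resulting branching ratio should only be read as an order-of-magnitude illustration.

```python
import math

ALPHA_EM = 1.0 / 137.035999
F_PI = 0.0924        # GeV; assumed decay-constant convention
M_PI0 = 0.13498      # GeV
GAMMA_PI0 = 7.81e-9  # GeV; total pi0 width

def msq(m12sq, m23sq, m_dp, g):
    """Squared amplitude for pi0 -> gamma(1) a(2) gamma'(3), m_a -> 0."""
    e4 = (4.0 * math.pi * ALPHA_EM) ** 2
    x = m12sq + m23sq - M_PI0 ** 2
    bracket = (m23sq ** 2 * x ** 2
               - m23sq * (m23sq - M_PI0 ** 2) * (m23sq + m_dp ** 2) * x
               + 0.5 * (m23sq - M_PI0 ** 2) ** 2 * (m23sq + m_dp ** 2) ** 2
               - m23sq * m_dp ** 2 * (m23sq - M_PI0 ** 2) ** 2)
    return e4 * g ** 2 / (64.0 * math.pi ** 4 * F_PI ** 2 * m23sq ** 2) * bracket

def width(m_dp, g, n=200):
    """Midpoint integration of d^2Gamma/(dm12^2 dm23^2) over the Dalitz
    region; for fixed m23^2 and massless gamma, a the range of m12^2 is
    [0, 4 E1* E2*] with E1*, E2* evaluated in the (2,3) rest frame."""
    pref = 1.0 / ((2.0 * math.pi) ** 3 * 32.0 * M_PI0 ** 3)
    lo, hi = m_dp ** 2, M_PI0 ** 2
    h23 = (hi - lo) / n
    total = 0.0
    for i in range(n):
        m23sq = lo + (i + 0.5) * h23
        m23 = math.sqrt(m23sq)
        e2 = (m23sq - m_dp ** 2) / (2.0 * m23)   # E_a* (massless a)
        e1 = (M_PI0 ** 2 - m23sq) / (2.0 * m23)  # E_gamma*
        h12 = 4.0 * e1 * e2 / n
        for j in range(n):
            total += msq((j + 0.5) * h12, m23sq, m_dp, g) * h12
    return pref * total * h23
```

For $m_{\gamma^\prime} = 20$\,MeV and $G_{a\gamma\gamma^\prime} = 10^{-3}\,\mathrm{GeV}^{-1}$ the branching ratio comes out many orders of magnitude below typical radiative branching ratios, consistent with the suppression described above.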
The scattering cross section is given by \begin{equation} \frac{d\sigma}{dt} = \frac{1}{64\pi s} \frac{S\Sigma|\mathcal{M}|^2}{|\mathbf{p}_\mathrm{1cm}|^2}, \end{equation} where $s$ and $t$ are the Mandelstam variables, $\mathbf{p}_\mathrm{1cm}$ is the center of mass momentum of the incoming $a$ or $\gamma^\prime$, $S=\frac{1}{2J+1}$ is a spin-averaging factor with $J$ the spin of the incoming dark-sector state, equal to $1$ for the $a$ and $1/3$ for the $\gamma^\prime$, and the squared amplitude is given in the limit of $m_a\to0$ by \begin{align} \Sigma{|\mathcal{M}|^2} = -\frac{G_{a\gamma\gamma^\prime}^2 e^2}{6t^2}&\bigg[m_{\gamma^\prime}^4\left(2m_e^2+t\right) -2m_{\gamma^\prime}^2 t\left(m_e^2+s+t\right) \nonumber \\ &+t\left(2[m_e^2-s]^2+2st+t^2\right)\bigg]. \end{align} Should the $\gamma^\prime$ be sufficiently massive, it will decay to $a\gamma$ before reaching the detector, and only a beam of the long-lived $a$'s will reach the detector. The mass reach of FTNEs is restricted by the kinematics of the inelastic scattering, as the $a$ must be increasingly energetic to scatter into a higher mass state. There is an additional complication in $a$-electron scattering should the $\gamma^\prime$ not escape the detector before its decay. See Fig.~\ref{fig:decay_distance} for some examples of the characteristic travel distances before decay. As we will be comparing with neutral current elastic-like analyses to impose limits on the dark axion portal, we will exclude events in which the scattering produces an additional photon. \begin{figure}[t] \centerline{\includegraphics[width=0.25\textwidth]{figs/Axion_e_to_DP_e.pdf}\includegraphics[width=0.25\textwidth]{figs/DP_e_to_Axion_e.pdf}} \caption{Inelastic scattering channels observable by MiniBooNE and LSND.
The right-hand diagram is always kinematically accessible, while the left-hand diagram requires an energetic $a$ to enable scattering into the higher mass $\gamma^\prime$ state.} \label{fig:scatter} \end{figure} A modified version of the \textsc{BdNMC} code \cite{deNiverville:2016rqh} was used to simulate both the production and detection of the dark matter signal expected at LSND and MiniBooNE. In the case of LSND, the code takes a momentum distribution of initial mesons chosen based on the experiment and beam energies and produces meson four-momenta by sampling the distribution using an acceptance-rejection algorithm. In the case of MiniBooNE, the code draws four-momenta from a prepared list of sample mesons generated by the MiniBooNE Collaboration. Decays into dark sector particles are generated from the selected meson four-momenta, and the propagation trajectories of those particles are checked for intersection with the detector geometry. If the $\gamma^\prime$ decays before reaching the detector, the resulting decay axion is also checked for intersection. The likelihood of this process is highly dependent on the precise value of $G_{a\gamma\gamma^\prime}$, but adjusting the coupling has only a minor effect on the event rate beyond the expected $G_{a\gamma\gamma^\prime}^4$ scaling for the parameter space of interest, as $\gamma^\prime$ particles are replaced with $a$ particles. \begin{figure}[t] \subfigure[]{ \centerline{\includegraphics[width=0.45\textwidth]{figs/lsnd_decay.pdf}} } \subfigure[]{ \centerline{\includegraphics[width=0.45\textwidth]{figs/miniboone_decay.pdf}} } \caption{Mean travel distance before decay for characteristic energies of $\gamma^\prime$ at (a) LSND and (b) MiniBooNE. Also marked are relevant distance scales: the distance to the LSND and the MiniBooNE detectors as well as the 1\,m fiducial volume cut. 
The secondary $\gamma^\prime$ produced in the LSND and MiniBooNE detectors through $ae\to \gamma^\prime e$ scattering have lower energies than those produced in the target.} \label{fig:decay_distance} \end{figure} Once a dark sector particle $i$ reaches the detector, it scatters with a probability of \begin{equation} \label{ref:pscatter} P_i = \frac{n_e\,\sigma(E_i)\, L_i}{P_\mathrm{max}}, \end{equation} where $E_i$ is the energy of the incident particle, $L_i$ is the length of the particle's intersection with the detector, $n_e$ is the electron number density of the detector material, $\sigma$ is the scattering cross section and $P_\mathrm{max}$ is the maximum recorded scattering probability. Once a scattering occurs, the differential scattering cross section is sampled with acceptance-rejection, and a set of end state particles is generated and recorded. The total event rate is calculated as \begin{equation} \label{eq:ftnf_events} N_\mathrm{events} =\frac{N_\mathrm{scatter}}{N_\mathrm{trials}}P_\mathrm{max}\sum_{\alpha=\pi^0,\eta}N_{\alpha}\times \mathrm{Br}(\alpha\to a\gamma\gamma^\prime), \end{equation} where $N_\mathrm{scatter}$ is the number of scattering events generated, $N_\mathrm{trials}$ is the total number of attempts that were required to generate those scattering events, and $N_\alpha$ is the total number of mesons of type $\alpha$ produced by the experiment. Some additional processing was performed while calculating the constraint line in order to determine the probability that the secondary dark photons produced in $ae\to\gamma^\prime e$ scattering escaped the detector undetected. Each experiment excludes a region of the parameter space if it would observe in excess of some number of dark sector events $N_\mathrm{cut}$, which is largely determined by the size of the neutrino signal.
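The bookkeeping of Eq.~\eqref{eq:ftnf_events} can be illustrated with a toy version of the acceptance-rejection step; the BdNMC internals are considerably more involved, and the names and structure here are ours.

```python
import random

def expected_events(scatter_probs, meson_yields, branchings, rng):
    """Toy rate estimate following Eq. (ftnf_events): each simulated
    trajectory i carries an unnormalized scattering probability P_i;
    acceptance-rejection against P_max keeps N_scatter of N_trials
    trajectories, and the meson yields and branching ratios restore the
    overall normalization."""
    p_max = max(scatter_probs)
    n_trials = len(scatter_probs)
    n_scatter = sum(1 for p in scatter_probs if rng.random() < p / p_max)
    norm = sum(ny * br for ny, br in zip(meson_yields, branchings))
    return (n_scatter / n_trials) * p_max * norm
```

When every trajectory carries the same probability, every trial is accepted and the estimate reduces to $P \times \sum_\alpha N_\alpha \mathrm{Br}_\alpha$ exactly, which the test exploits.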
The value of $G_{a\gamma\gamma^\prime}$ was calculated iteratively with \begin{equation} \label{eq:dark_photon_efficiency} G_{a\gamma\gamma^\prime}^{i+1} = \left(\frac{N_\mathrm{cut}}{N_\mathrm{events} \epsilon_i} \right)^{0.25}\times G_{a\gamma\gamma^\prime}^i, \end{equation} where $G_{a\gamma\gamma^\prime}^0$ was the value of the coupling used for the simulation and $\epsilon_i$ is an efficiency factor representing the fraction of dark photons that escaped the detector, with $\epsilon_0=1$. The efficiency factor was calculated for each $G_{a\gamma\gamma^\prime}^{i}$ with $i\neq0$ as follows: the probability of a dark photon with energy $E_j$ traveling a set distance $L$ before decaying was averaged over a range of $L$ between 1\,m and the size of the detector itself. The lower bound of 1\,m was chosen to ensure that the dark photon did not decay in any veto regions surrounding the fiducial volume of the detector. The value $\epsilon_i$ was set to the fraction of $\gamma^\prime$ which escaped the detector undetected for a coupling of $G_{a\gamma\gamma^\prime}^i$. This limit could be improved by a more refined treatment of the cutoff, but this would require an analysis using the experiment's own detector Monte Carlo. As implemented, this process provides a conservative estimate of the cutoff, as any refinement would lead to an improvement of the efficiency factor $\epsilon$. A more complicated treatment is also possible by more carefully considering the detector geometry, but it would only lead to small changes in the constraint contour, as the behavior of the efficiency factor is primarily determined by the cutoff of 1\,m. This iterative process was terminated when $\epsilon_i \left(G_{a\gamma\gamma^\prime}^i\right)^4 N_\mathrm{events}$ differed from $N_\mathrm{cut}$ by less than some tolerance fraction, which we took to be 1\%.
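The iteration can be sketched as follows; note that we write the predicted rate as $\epsilon(G)\,(G/G^0)^4\,N_\mathrm{events}$ to make the $G^4$ coupling normalization explicit, with an assumed toy efficiency function standing in for the simulated one.

```python
def solve_coupling(g0, n_events_g0, n_cut, efficiency, tol=0.01, max_iter=50):
    """Iterate the rescaling of Eq. (dark_photon_efficiency) until the
    predicted count epsilon(G) * (G/G0)^4 * N_events matches N_cut to
    within the tolerance fraction (1% in the text)."""
    g = g0
    for _ in range(max_iter):
        eps = efficiency(g)
        predicted = eps * (g / g0) ** 4 * n_events_g0
        if abs(predicted - n_cut) / n_cut < tol:
            return g
        g *= (n_cut / predicted) ** 0.25
    raise RuntimeError("no convergence")
```

With a constant efficiency of $\epsilon = 1$ the iteration converges in a single rescaling to $G = G^0 (N_\mathrm{cut}/N_\mathrm{events})^{1/4}$, as expected from pure $G^4$ scaling.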
Note that the efficiency only becomes important for masses sufficiently large that almost all $\gamma^\prime$ decay before reaching the detector, as otherwise $\epsilon\approx1$. See Fig. \ref{fig:decay_distance} for a visual representation of this effect. \subsection{LSND} \label{ssec:lsnd} LSND was an experiment that ran at the Los Alamos Neutron Science Center from 1994 to 1998 \cite{Athanassopoulos:1996ds, Aguilar:2001ty}. The experiment delivered a total of $1.8\times10^{23}$ POT with a kinetic energy of 798\,MeV. The experiment used a 167 tonne mineral oil detector with a diameter of 5.7 meters and a length of 8.3 meters, located 30 meters downstream and 4.6 meters below the target\footnote{When calculating the event rate, it is important to note that substantial portions of the detector were excluded from the fiducial volume to serve as cosmic vetoes and improve reconstruction efficiency.}. This analysis follows the lead of previous efforts in Refs. \cite{deNiverville:2011it,Kahn:2014sra} and focuses on $a\gamma\gamma^\prime$ production through radiative $\pi^0$ decays, as the $\eta$ is unlikely to be produced in significant numbers. We follow previous work and estimate the $\pi^0$ production rate to be $N_{\pi^0}=0.06\times$POT$=1.08\times10^{22}$. The Burman-Smith distribution \cite{Burman:1989ds} was used to generate the $\pi^0$ momentum distribution. For the signal, we compare the expected dark axion portal event rate with the analysis presented in Ref.~\cite{Auerbach:2001wg}, and look for electron recoil events with energies in the range $[15,53]\,\mathrm{MeV}$. We assume a detection efficiency of 16\%, and place a limit at 110 dark axion portal events. Events in which a photon is subsequently produced inside of the detector by the decay of the $\gamma^\prime$ are discarded. The resulting drop in the event rate is reflected by the sudden cutoff in the constraint curve in Fig.
\ref{fig:limits_2}, as the distance the dark photon travels before decaying is much smaller than the size of the detector for $m_{\gamma^\prime}\ge2.5\,\mathrm{MeV}$. If some means of ignoring or utilizing the photon produced in the decay of the $\gamma^\prime$ were available, we would expect LSND to be able to place limits on the scenario for $m_{\gamma^\prime}<30\,\mathrm{MeV}$. \subsection{MiniBooNE} \label{ssec:miniboone} MiniBooNE is a fixed target neutrino experiment at Fermi National Accelerator Laboratory (FNAL) that was conducted, in part, to verify the results of the LSND experiment. It ran in on-target mode from 2002 to 2012 with a 70\,cm beryllium target \cite{AguilarArevalo:2008yp,Aguilar-Arevalo:2012fmn,Aguilar-Arevalo:2013nkf}. However, more useful for this analysis was a later 2013-2014 run in which the MiniBooNE experiment ran in off-target mode, directing its proton beam around the target and into the steel beam dump at the end of a 50\,m long open air decay pipe \cite{Aguilar-Arevalo:2017mqx,Aguilar-Arevalo:2018wea}. This dramatically reduced the background from the neutrino signal itself. During this run, the MiniBooNE experiment received $1.86\times10^{20}$ POT with a proton energy of 8.9\,GeV. The MiniBooNE detector is a 12\,m diameter sphere filled with 818 tonnes of mineral oil, located 490 meters downstream from the steel beam dump \cite{AguilarArevalo:2008qa}. A similar analysis to that of LSND can be performed for MiniBooNE, and this work mirrors previous dark matter searches at proton fixed targets \cite{deNiverville:2012ij,deNiverville:2016rqh}. Only $\pi^0$ and $\eta$ decays are considered in this work. While proton bremsstrahlung could contribute, meson decays dominate the hidden sector signal at these experiments for $m_{\gamma^\prime}$ below 100\,MeV. The $\pi^0$ and $\eta$ production rates and distributions are drawn from the public data release of the recent MiniBooNE analysis \cite{Aguilar-Arevalo:2018wea}.
The total number of $\pi^0$'s produced is calculated to be $N_{\pi^0}=2.5 \times 1.86\times10^{20} = 4.65\times10^{20}$, while the number of $\eta$'s is estimated to be $N_{\pi^0}/30$. The momentum distributions were generated by drawing from the sample $\pi^0$ and $\eta$ positions and momenta supplied in the MiniBooNE data release. The handling of the signal is similar to the treatment given at LSND, but instead of a cut on electron recoil energy we employ a cut on the electron recoil angle relative to the beamline direction of $\cos\theta_e > 0.99$. This cut removes nearly all of the neutrino background, and the exclusion curve in Fig. \ref{fig:limits_2} was made for 2.3 events with a detection efficiency of 35\%. This exclusion curve demonstrates the same sharp cutoff as LSND's, but it appears at a larger mass: the higher beam energy leads to a larger boost factor, effectively extending the lab-frame lifetime of the $\gamma^\prime$. An interesting quirk of the MiniBooNE detector is its difficulty in differentiating electrons and photons. This leads to the possibility of extending the analysis to higher masses by reconstructing the recoil electron and secondary photon produced through $ae\to \gamma^\prime e \to \gamma a e$ as a photon pair produced through $\pi^0\to\gamma\gamma$. The relevant MiniBooNE analysis \cite{Aguilar-Arevalo:2018wea} requires $\sqrt{s} \in [80,200]\,\mathrm{MeV}$, and the invariant mass of the recoil electron and decay photon produced through $a$-electron scattering is too small to survive the cuts, as shown for a sample in Fig. \ref{fig:inv_mass_spec}. Were a further analysis focused on $\sqrt{s} \le 80\,\mathrm{MeV}$ performed, it is possible that the constraints could be extended to larger masses. \begin{figure}[t] \centerline{\includegraphics[height=0.33\textwidth]{figs/Invariant_Mass_Spectrum.pdf}} \caption{Invariant mass spectrum of the photon and recoil electron produced through $ae\to \gamma^\prime e \to \gamma a e$, where $s = (p_\gamma+p_e)^2$.
The histogram shown is for the MiniBooNE off-target run, in which the proton beam was impacted on the steel beam dump at the end of the 50\,m decay pipe, with $m_{\gamma^\prime}=3\,\mathrm{MeV}$ and $G_{a\gamma\gamma^\prime}=0.01$. This invariant mass range is too small to survive the cuts placed by the MiniBooNE $\pi^0$ reconstruction, which requires $\sqrt{s}\in[80,200]\,\mathrm{MeV}$. A search of the $m_{\gamma\gamma}$ spectrum for smaller invariant masses would be sensitive to this scenario.} \label{fig:inv_mass_spec} \end{figure} \section{Beam Dump Experiments} \label{sec:beamdump} \subsection{CHARM} \label{ssec:charm} Beam dump experiments direct high-intensity beams of protons or electrons onto thick targets in order to generate weakly coupled particles, either directly or through the subsequent decays of heavy particles in a downstream decay volume. These particles then travel tens to hundreds of meters through beam stops, dirt and air before being detected through either their decay into Standard Model particles or, in the case of neutrinos, their scattering interactions with the detector material. This section considers the sensitivity to the dark axion portal of two CHARM searches for heavy neutrinos. Both analyses follow Refs. \cite{Gninenko:2011uv,Gninenko:2012eq} and consider production through $\pi^0$ and $\eta$ decays. Following the previous works, the ratio of $\eta$ mesons to $\pi^0$ mesons is taken to be $N_\eta/N_{\pi^0}=0.078$. While $\eta^\prime$ decays could also be considered in order to reach larger $m_{\gamma^\prime}$, it is the rapid decline in the lifetime of the $\gamma^\prime$ with increasing mass, together with the distance to the detector, that determines the $m_{\gamma^\prime}$ reach of the CHARM experiment, rather than the phase space and branching ratio available in meson decays. As the $\eta^\prime$ is produced in smaller quantities than the $\eta$ and $\pi^0$, we neglect its contribution in this work.
The overall $\pi^0$ production rate as well as its momentum distribution were calculated with the BMPT distribution for a 300\,cm copper target \cite{Bonesini:2001iz}. The BMPT distribution is also used for the $\eta$ momentum distribution, as the $\pi^0$ and $\eta$ momentum distributions are expected to be quite similar. The CHARM fine-grain target calorimeter is composed of 72 marble plates with a thickness of 8\,cm, spaced 20\,cm apart with scintillation counters and proportional drift tubes inserted in the intervening space \cite{Dorenbosch:1987pn}. The center of the detector is located 487.3\,m downstream from the target. The fiducial volume is made up of a cross-section of 2.4 $\times$ 2.4\,$\mathrm{m}^2$, beginning at target plane 3 and ending at target plane 59, with the first three planes serving as vetoes for particle production in the upstream muon spectrometer. The last 14 planes are used for shower measurements, as particles produced in these planes might escape the detector before depositing sufficient energy to correctly reconstruct the shower. We now consider two analyses, which we label (1) and (2). In analysis (1), the detector was used in a heavy neutrino decay search \cite{Bergsma:1983rt} with 7.1$\times10^{18}$ POT. The analysis searched for electron-positron pairs produced through neutrino decays, where the resulting electromagnetic showers possessed $E\in[7.5,50]\,\mathrm{GeV}$ and $E^2\theta^2<0.54\,\mathrm{GeV}^2$. Of particular interest for the dark axion portal, single-photon events, such as those produced through the decay $\gamma^\prime\to a\gamma$, could also survive these cuts; this is the dominant decay channel for the $\gamma^\prime$ in the dark axion portal. The analysis attributed $1\pm 49$ events to heavy neutrino decays, and we conservatively exclude regions of the parameter space that generate more than 99 events.
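The event-count thresholds used in this section follow from elementary counting statistics; the sketch below reproduces the $1 \pm 49 \to 99$ threshold above and the zero-background 90\% C.L. value of 2.3 events used earlier for the MiniBooNE curve, and is our own arithmetic rather than a prescription from the experimental papers.

```python
import math

def two_sigma_cut(n_obs, sigma_obs):
    """Conservative threshold: central value plus two standard
    deviations, so 1 +/- 49 events -> exclude above 99 events."""
    return n_obs + 2.0 * sigma_obs

def zero_background_upper_limit(cl=0.90):
    """With zero events observed and no expected background, the CL
    upper limit on the signal mean mu solves exp(-mu) = 1 - cl."""
    return -math.log(1.0 - cl)
```

The second helper gives $\mu \approx 2.30$ at 90\% C.L., matching the 2.3-event criterion.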
In analysis (2), a search for electron-positron pair production was conducted in a 35\,m long decay volume with a cross section of 3 $\times$ 3\,$\mathrm{m}^2$ \cite{Bergsma:1985is}. This decay region is parallel to the neutrino beamline, but offset by 5\,m. This data was used to constrain dark photon decays in Ref. \cite{Gninenko:2012eq}. The search required the electron-positron pair to possess greater than $3\,\mathrm{GeV}$ of energy. The decay $\gamma^\prime\to a e^+ e^-$ is rare compared to the radiative decay channel, with a branching fraction rising to nearly 2\% for $m_{\gamma^\prime} \sim 100\,\mathrm{MeV}$ (see Fig.~\ref{fig:A_to_aee}). This analysis was performed on only 2.4$\times10^{18}$ POT, and recorded zero background events. We exclude regions of the parameter space for which this analysis would have observed more than 2.3 events. \begin{figure}[t] \centerline{\includegraphics[height=0.3\textwidth]{figs/DP_to_aee.pdf}} \caption{The branching ratio of $\gamma^\prime \to a e^+ e^-$, considering only decays through the $a$-$\gamma$-$\gamma^\prime$ vertex.} \label{fig:A_to_aee} \end{figure} Both analyses described above were simulated with a modified version of the \textsc{BdNMC} code, previously described in Sec.~\ref{sec:ftnf}.
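For these decay searches the per-trajectory weight is a decay probability rather than a scattering probability. A sketch of that weight, writing the survival factor as $\exp[-L\, m_{\gamma^\prime}/(c\tau\, p)]$ (i.e. with boost $\beta\gamma = p/m_{\gamma^\prime}$) and using the proper lifetime of Eq.~\eqref{eq:gammap_tau}:

```python
import math

def decay_window_probability(br_x, l1, l2, p, m_dp, g):
    """Probability that a dark photon with lab momentum p (GeV) decays
    to channel X (branching fraction br_x) between distances l1 and l2
    (meters) from the target; c*tau from Eq. (gammap_tau), m_a << m_dp."""
    ctau = 5.95e-14 / (g**2 * m_dp**3)   # proper decay length, meters
    lam = (p / m_dp) * ctau              # boosted lab-frame decay length
    return br_x * (math.exp(-l1 / lam) - math.exp(-l2 / lam))
```

For CHARM-like distances of a few hundred meters and $m_{\gamma^\prime}$ of tens of MeV, the boosted decay length is only a few meters, so essentially all dark photons decay long before the detector; this drives the upper edge of the CHARM contour.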
The event rate calculation is similar to that shown in Eq.~\eqref{eq:ftnf_events}, but with $N_\mathrm{scatters} \to N_\mathrm{decays}$ and a different calculation of the event probability for a given $\gamma^\prime$ $i$: \begin{equation} \label{eq:dec_prob} P_{\mathrm{decay},i} = \mathrm{Br}(\gamma^\prime\to X)\left[\exp\left(-\frac{L_{1,i}\, m}{c\tau E_i}\right) - \exp\left(-\frac{L_{2,i}\, m}{c\tau E_i}\right)\right], \end{equation} where $\mathrm{Br}(\gamma^\prime\to X)$ is the probability of the $\gamma^\prime$ decaying via the channel of interest, with $X=a\gamma$ for analysis (1) and $X=ae^+e^-$ for analysis (2), $E_i$ is the $\gamma^\prime$ energy, and $L_{1,i}$ and $L_{2,i}$ are the distances from the center of the production target at which the trajectory of $\gamma^\prime$ $i$ enters and exits the detector, respectively. The constraints placed by CHARM on the dark axion portal are shown in Fig.~\ref{fig:limits_2}. Owing to a combination of the far larger radiative branching ratio and the larger exposure, analysis (1) places stronger limits than analysis (2), and the contour shown is entirely due to the $\gamma^\prime\to a \gamma$ signal. Note that, unlike searches for $e^+ e^-$ pairs, the contour extends to masses below $m_{\gamma^\prime} = 2 m_e$. \subsection{Electron Beam Dumps} \label{ssec:ebd} We also considered electron beam dumps, taking the E137 experiment \cite{Bjorken:1988as} as a test case. E137 was a beam dump experiment that searched for metastable hidden-sector particles, generating them through bremsstrahlung by impacting 30\,C of 20\,GeV electrons on an aluminum target. The particles then travelled 383\,m to a detector, the exact makeup of which changed between the two runs of the experiment. The signal required for detection, more than 1\,GeV of energy deposited in the electromagnetic calorimeters \cite{Batell:2014mga}, is easily satisfied by the dark axion portal through inelastic scattering with electrons in the detector.
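Returning to the decay-in-volume probability used in the CHARM analyses, a minimal numerical sketch, assuming relativistic kinematics so that the lab-frame decay length is $\gamma\beta c\tau \approx E c\tau/m$ (the function name and the sample numbers are ours, chosen only for illustration):

```python
import math

def decay_probability(br, L1, L2, E, m, ctau):
    """Probability that a relativistic particle produced at the target
    decays into the observed channel between path lengths L1 and L2.

    br     -- branching ratio into the channel of interest
    L1, L2 -- entry/exit distances of the trajectory through the
              detector, measured from the production target (m)
    E, m   -- lab-frame energy and mass (same units as each other)
    ctau   -- proper decay length c*tau (m)
    """
    lam = (E / m) * ctau  # lab-frame decay length, gamma*beta ~ E/m
    return br * (math.exp(-L1 / lam) - math.exp(-L2 / lam))

# Illustrative numbers only: a 10 GeV gamma' with m = 50 MeV and
# c*tau = 1 m, decaying radiatively (Br = 1) between 487 m and 509 m.
p = decay_probability(1.0, 487.0, 509.0, 10.0, 0.05, 1.0)
```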
The bremsstrahlung cross section $eN \to eN\gamma^\prime a$ was calculated with CalcHEP 3.6.27 \cite{Pukhov:1999gg,Belyaev:2012qa} and CT10 parton distribution functions \cite{Lai:2010vv}. Unfortunately, the cross section appears to be far too small (on the order of several hundred pb) to generate any events without a prohibitively large value of $G_{a\gamma\gamma^\prime}$. This heavy suppression should extend to all electron beam dumps, and without some additional enhancement to the production rate, they are unable to place strong limits on the scenario. \section{Summary and Discussion} \label{sec:summary} We studied the implications of the dark axion portal for the lepton $g-2$, $B$-factories, fixed-target neutrino experiments, and the CHARM experiment in the $m_a \ll m_{\gamma'}$ limit. We focused on the dark photon masses for which $B$-factories are most sensitive, roughly from 1\,MeV to 10\,GeV, and restricted our plotted results to this window in Fig.~\ref{fig:limits_2}. BaBar and Belle-II have too little energy to produce an on-shell $\gamma^\prime$ with a mass much larger than 10.2\,GeV. It is important to consider the lifetime of the $\gamma^\prime$ for $m_{\gamma^\prime}$ below a hundred MeV, as it becomes increasingly likely that the dark photon will escape the detector before decaying in an observable fashion. This is reflected in the gradual decline in sensitivity at smaller masses, where the $\gamma^\prime$ lifetime becomes comparable to the size of the BaBar and Belle-II experiments. The mass reach could be extended in both directions by considering off-shell dark photon production through the process $e^+ e^- \to a \gamma^{\prime*} \to a a \gamma$, though this cross section is suppressed by several orders of magnitude relative to $e^+ e^- \to a \gamma^\prime$. The finite lifetime of the $\gamma^\prime$ is exploited in the analysis of the CHARM experiment, where we search for observable decays of the $\gamma^\prime$.
Both the $\gamma^\prime \to a \gamma$ and $\gamma^\prime \to a e^+ e^-$ decay modes were considered, and the former was found to provide the best constraints. Note that the constraint from the CHARM experiment possesses both an upper and a lower bound, as is characteristic of searches for rare decays in beam dumps. These curves are determined in large part by the lifetime of the $\gamma^\prime$, which scales as $G_{a\gamma\gamma^\prime}^{-2}$. The optimal signal is found when the mean travel distance of the $\gamma^\prime$ is approximately equal to the distance to the detector. If $G_{a\gamma\gamma^\prime}$ is too large, the lifetime is short and the $\gamma^\prime$ decays before reaching the detector. The upper bound slopes downward because the lifetime also declines with increasing mass. If $G_{a\gamma\gamma^\prime}$ is too small, the $\gamma^\prime$ is likely to propagate far beyond the detector, reducing its probability of decaying inside it. The lower bound is also affected by the overall $\gamma^\prime$ production rate, which declines as $G_{a\gamma\gamma^\prime}^2$. It is the product of these two effects that determines the lower bound of the CHARM exclusion. The LSND and MiniBooNE analyses are greatly weakened by the short-lived $\gamma^\prime$, as only $a$ particles are sufficiently long-lived to reach the detector for $m_{\gamma^\prime}\gtrsim$ a few MeV. The $a$ scatters inelastically into a $\gamma^\prime$, the radiative decay of which changes the observed signal. The cutoff in the sensitivity observed in Fig.~\ref{fig:limits_2} appears when the $\gamma^\prime$ is unlikely to escape the detector before decaying. We also investigated the effects on both the muon and electron $g-2$ in a conservative manner, though they only impose meaningful constraints at relatively low masses. These limits become stronger at low masses, as the contribution from the internal $\gamma^\prime$--$a$ loop is suppressed by a large $m_{\gamma^\prime}$.
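The interplay described above, production scaling as $G_{a\gamma\gamma^\prime}^{2}$ while the decay length scales as $G_{a\gamma\gamma^\prime}^{-2}$, is what closes the beam-dump exclusion contour from both sides. A schematic sketch (all numbers and scalings here are illustrative placeholders, not fitted to CHARM):

```python
import math

def signal_yield(G, L1=480.0, L2=515.0, lam0=1.0):
    """Schematic beam-dump signal rate as a function of a coupling G:
    production scales as G**2, while the decay length scales as
    lam0 / G**2 (lifetime ~ G**-2). All numbers are illustrative."""
    lam = lam0 / G**2
    return G**2 * (math.exp(-L1 / lam) - math.exp(-L2 / lam))

# The yield is non-monotonic in G: too small a coupling and the
# particle is rarely produced and overshoots the detector; too large
# and it decays before arriving. The exclusion is a closed band.
y_small, y_mid, y_large = signal_yield(0.001), signal_yield(0.05), signal_yield(1.0)
```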
We now move on to possible extensions of this work. While we have restricted our attention to monophoton searches at asymmetric $B$-factories, $e^+e^-$ colliders could potentially probe the scenario in other ways. As mentioned in Ref.~\cite{Marciano:2016yhf} in the context of axion-like particles, $e^+e^- \to e^+e^-+a \gamma^\prime$ is an intriguing channel, with final states ranging from $e^+e^- + \text{missing energy}$ to $e^+e^- + \text{multiple photons}$ depending on the lifetimes of the dark particles, but it has yet to see an experimental analysis. Evidence of $\gamma \to a\gamma^\prime$ conversion may also be found in radiative meson decays, but the rapid decay of the $\gamma^\prime$ complicates the signal for larger values of $m_{\gamma^\prime}$. For long-lived $\gamma^\prime$'s, we can compare the limit of Br$(\pi^0 \to \gamma \nu \bar \nu)<6\times 10^{-4}$ placed by Ref.~\cite{Atiya:1992sm} to the branching fraction of $\pi^0 \to a\gamma\gamma^\prime$, a decay with a similar end-state. Unfortunately, this branching fraction is quite small, and the possible limit of $G_{a\gamma\gamma^\prime}\gsim 1\,\rm GeV^{-1}$ is not competitive with those placed by the electron or muon $g-2$. For a short-lived $\gamma^\prime$, the signal would be $\pi^0 \to \gamma\gamma + \text{missing energy}$, which would require a measurement of the invariant mass distribution of the end-state photons. Limits could also be derived by comparing $K^+ \to \pi^+ \nu \bar \nu$ \cite{Artamonov:2008qb} to $K^+ \to \pi^+ (\gamma^\star \to a\gamma^\prime)$, or $\phi \to \pi^0 \gamma$ to $\phi \to \pi^0 a(\gamma^\prime \to a\gamma)$ \cite{Achasov:2000zd}. For much larger masses, one could look to the Higgs decay $H \to \gamma \gamma^\star \to \gamma\gamma + \text{missing energy}$.
Future directions of interest involve exploring the implications of a long-lived $\gamma^\prime$ more thoroughly, as planned beam dump experiments such as SHiP \cite{Anelli:2015pba} could be sensitive to monophotons produced through $\gamma^\prime \to a\gamma$. In the case of very long-lived dark photons, inelastic $a$ or $\gamma^\prime$ scattering inside the detectors of future fixed-target neutrino experiments, such as those associated with the Short Baseline Neutrino program \cite{Tufanli:2016hyo}, and reactor neutrino experiments \cite{Ko:2016owz,Park:2017prx} could also be useful search avenues. Missing momentum/energy experiments such as NA64 \cite{Banerjee:2017hhz} provide another probe of the parameter space that may be worth consideration. The constraints from $a$-electron scattering at fixed-target experiments should also be extended to include the effects of the $a$-$\gamma$-$\gamma$ vertex, though in many cases this will resemble a rescaling of existing limits on axion-like particles coupled predominantly to photons. For masses below a few ${\rm MeV}$, constraints from stellar cooling and supernovae become interesting, as both the $a$ and the $\gamma^\prime$ provide potential carriers of additional energy loss \cite{Dolan:2017osp,Dreiner:2013mua}; production in the sun is also relevant \cite{Redondo:2013lna}. Note that the production is suppressed relative to standard dark photon or axion-like particle searches by the requirement that an $a$ and a $\gamma^\prime$ be produced simultaneously. A more complete treatment of this limit would require the inclusion of the $a$-$\gamma$-$\gamma$ vertex: even at a suppressed rate, $a$ reabsorption would have a significant effect on stellar energy loss due to dark particle emission. The $\gamma^\prime$ could escape, but this signal is suppressed by its decay to an $a$-$\gamma$ pair before escape, or by inelastic scattering into an $a$ through $\gamma$-mediated interactions with stellar material.
\begin{acknowledgments} This work was supported in part by IBS (Project Code IBS-R018-D1), NRF Strategic Research Program (NRF-2017R1E1A1A01072736), NRF Basic Science Research Program (NRF-2016R1A2B4008759) and CAU Research Grants in 2018. We would like to thank Tyler Thornton and Richard Van De Water for helpful discussions regarding the MiniBooNE experiment. HL appreciates the hospitality of the Pitt-PACC at the University of Pittsburgh during the Light Dark World International Forum 2017. \end{acknowledgments}
\section{Introduction} Consider two natural images, a picture of a bird and a picture of a cat for example: \textit{can we continuously transform the bird into the cat without ever generating a picture that is neither bird nor cat?} In other words, \textit{is there a continuous transformation between the two that never leaves the manifold of ``real looking'' images?} It is often the case that real world data falls on a union of several \textit{disjoint} manifolds and such a transformation does not exist, i.e. the real data distribution is supported on a disconnected manifold, and an effective generative model needs to be able to learn such manifolds. Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} model the problem of finding the unknown distribution of real data as a two-player game, where one player, called the discriminator, tries to perfectly separate real data from the data generated by a second player, called the generator, while the second player tries to generate data that can perfectly fool the first player. Under certain conditions,~\citet{goodfellow2014generative} proved that this process results in a generator that generates data from the real data distribution, hence finding the unknown distribution implicitly. However, later works uncovered several shortcomings of the original formulation, mostly due to violation of one or several of its assumptions in practice~\cite{arjovsky2017towards, arjovsky2017wasserstein, metz2016unrolled, srivastava2017veegan}.
Most notably, the proof only applies when optimizing in the function space of the generator and discriminator (and not in the parameter space)~\cite{goodfellow2014generative}, the Jensen-Shannon divergence is maxed out when the generated and real data distributions have disjoint support, resulting in vanishing or unstable gradients~\cite{arjovsky2017towards}, and finally there is the mode dropping problem, where the generator fails to correctly capture all the modes of the data distribution, for which, to the best of our knowledge, there is no definitive explanation yet. One major assumption for the convergence of GANs is that the generator and discriminator both have unlimited capacity \cite{goodfellow2014generative, arjovsky2017wasserstein, srivastava2017veegan, hoang2018mgan}, and modeling them with neural networks is then justified through the Universal Approximation Theorem. However, we should note that this theorem is only valid for continuous functions. Moreover, neural networks are far from universal approximators in practice. In fact, we often explicitly restrict neural networks through various regularizers to stabilize training and enhance generalization. Therefore, when the generator and discriminator are modeled by stable regularized neural networks, they may no longer enjoy the good convergence promised by the theory. In this work, we focus on learning distributions with disconnected support, and show how limitations of neural networks in modeling discontinuous functions can cause difficulties in learning such distributions with GANs. We study why these difficulties arise, what consequences they have in practice, and how one can address them by using a collection of generators, providing new insights into the recent success of multi-generator models.
However, while all such models treat the number of generators and the prior over them as fixed hyperparameters \cite{arora2017generalization, hoang2018mgan, ghosh2017multi}, we propose a novel prior learning approach and show its necessity in effectively learning a distribution with disconnected support. We would like to stress that we are not trying to achieve state-of-the-art performance in our experiments; rather, we try to illustrate an important limitation of common GAN models and the effectiveness of our proposed modifications. We summarize the contributions of this work below: \begin{itemize} \item We identify a shortcoming of GANs in modeling distributions with disconnected support, and investigate its consequences, namely mode dropping, worse sample quality, and worse local convergence (Section~\ref{sec:dif}). \item We illustrate how using a collection of generators can resolve this shortcoming, providing new insights into the success of multi-generator GAN models in practice (Section~\ref{sec:dml}). \item We show that choosing the number of generators and the probability of selecting them are important factors in correctly learning a distribution with disconnected support, and propose a novel prior learning approach to address these factors (Section~\ref{sec:dml_pl}). \item Our proposed model can effectively learn distributions with disconnected support and infer the number of necessary disjoint components through prior learning. Instead of one large neural network as the generator, it uses several smaller neural networks, making it more suitable for parallel learning and less prone to bad weight initialization. Moreover, it can be easily integrated with any GAN model to enjoy their benefits as well (Section~\ref{sec:exp}).
\end{itemize} \begin{figure*}[t] \centering \subfloat[Suboptimal Continuous $G$]{ \centering \includegraphics[trim=350 150 350 150, clip, width=0.3\textwidth]{img/eccv_concept_g.pdf} \label{img:concept_g}} \quad \subfloat[Optimal $G^*$]{ \centering \includegraphics[trim=350 150 350 150, clip, width=0.3\textwidth]{img/eccv_concept_r.pdf} \label{img:concept_r}} \caption{Illustrative example of a continuous generator $G(z): \mathcal{Z} \rightarrow \mathcal{X}$ with prior $z \sim \mathcal{U}(-1,1)$, trying to capture real data coming from $p(x) = \frac{1}{2}(\delta(x-2) + \delta(x+2))$, a distribution supported on a union of two disjoint manifolds. (a) shows an example of what a stable neural network is capable of learning for $G$ (a continuous and smooth function); (b) shows an optimal generator $G^*(z)$. Note that since $z$ is uniformly sampled, $G(z)$ necessarily generates off-manifold samples (in $(-2,2)$) due to its continuity.} \label{img:concept} \end{figure*} \section{Difficulties of Learning Disconnected Manifolds} \label{sec:dif} A GAN as proposed by~\citet{goodfellow2014generative}, and most of its successors (e.g. \cite{arjovsky2017wasserstein,gulrajani2017improved}), learn a continuous $G: \mathcal{Z} \rightarrow \mathcal{X}$, which receives samples from some prior $p(z)$ as input and generates real data as output. The prior $p(z)$ is often a standard multivariate normal distribution $\mathcal{N}(0,I)$ or a bounded uniform distribution $\mathcal{U}(-1, 1)$. This means that $p(z)$ is supported on a globally connected subspace of $\mathcal{Z}$. Since a continuous function maps a connected space to a connected space~\cite{kelley2017general}, the probability distribution induced by $G$ is also supported on a globally connected space. Thus $G$, a continuous function by design, can not correctly model a union of disjoint manifolds in $\mathcal{X}$.
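The argument above is the intermediate value theorem at work: any continuous $G$ whose image contains both $-2$ and $+2$ must also take every value in between. A small numerical illustration with a smooth surrogate generator $G(z) = 2\tanh(kz)$ (our choice of surrogate, not a trained network):

```python
import math
import random

random.seed(0)

def G(z, k=5.0):
    """A smooth surrogate generator trying to map U(-1,1) onto the
    two-point support {-2, +2}; k controls its steepness."""
    return 2.0 * math.tanh(k * z)

# Fraction of samples landing farther than 0.1 from both modes,
# i.e. strictly off the real manifold.
samples = [G(random.uniform(-1.0, 1.0)) for _ in range(10000)]
off = sum(1 for x in samples if min(abs(x - 2.0), abs(x + 2.0)) > 0.1)
frac = off / len(samples)
# frac stays well above zero: continuity forces G through (-2, 2).
```

Increasing $k$ shrinks but never eliminates the off-manifold region, which is exactly the higher-frequency-function trade-off discussed in the next section.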
We highlight this fact in Figure~\ref{img:concept} using an illustrative example where the support of the real data is $\{+2, -2\}$. We will look at some consequences of this shortcoming in the next part of this section. For the remainder of this paper, we assume the real data is supported on a manifold $S_r$ which is a union of disjoint globally connected manifolds, each denoted by $M_i$; we refer to each $M_i$ as a submanifold (note that we are overloading the topological definition of submanifolds in favor of brevity): \begin{align*} S_r &= \bigcup_{i=1}^{n_r} M_i &\forall i \not = j: M_i \cap M_j = \emptyset \end{align*} \textbf{Sample Quality.} Since the GAN generator tries to cover all submanifolds of the real data with a single globally connected manifold, it will inevitably generate off real-manifold samples. Note that to avoid off-manifold regions, one would have to push the generator to learn a higher frequency function, the learning of which is explicitly discouraged by stable training procedures and means of regularization. Therefore a stably trained GAN, in addition to real looking samples, will also generate low quality off real-manifold samples. See Figure~\ref{img:wg_mdg} for an example of this problem. \textbf{Mode Dropping.} In this work, we use the term \textit{mode dropping} to refer to the situation where one or several submanifolds of the real data are not completely covered by the support of the generated distribution. Note that mode collapse is a special case of this definition, where all but a small part of a single submanifold is dropped. When the generator can only learn a distribution with globally connected support, it has to learn a cover of the real data submanifolds; in other words, the generator can not reduce the probability density of the off real-manifold space beyond a certain value. However, the generator can try to minimize the volume of the off real-manifold space to minimize the probability of generating samples there.
For example, observe in Figure~\ref{img:wg_mdg_wg} how the learned globally connected manifold has minimal off real-manifold volume: it does not learn a cover that crosses the center (the same manifold is learned in 5 different runs). So, in learning the cover, there is a trade-off between covering all real data submanifolds and minimizing the volume of the off real-manifold space in the cover. This trade-off means that the generator may sacrifice certain submanifolds, entirely or partially, in favor of learning a cover with less off real-manifold volume, hence mode dropping. \textbf{Local Convergence.} \citet{nagarajan2017gradient} recently proved that the training of GANs is locally convergent when the generated and real data distributions are equal near the equilibrium point, and \citet{barratt2018converge} showed the necessity of this condition on a prototypical example. Therefore, when the generator can not learn the correct support of the real data distribution, as in our discussion, the resulting equilibrium may not be locally convergent. In practice, this means the generator's support keeps oscillating near the data manifold. \begin{figure*}[t] \centering \subfloat[Real Data]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.21\textwidth]{img/4l_r.png} \label{img:wg_mdg_real}} \quad \subfloat[WGAN-GP]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.21\textwidth]{img/4l_wg.png} \label{img:wg_mdg_wg}} \quad \subfloat[DMWGAN]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.21\textwidth]{img/4l_wg10.png} \label{img:mdg10_mdg}} \quad \subfloat[DMWGAN-PL]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.21\textwidth]{img/4l_wg10_pg.png} \label{img:mdg10_mdgpl}} \caption{Comparing Wasserstein GAN (WGAN) and its Disconnected Manifold version with and without prior learning (DMWGAN-PL, DMWGAN) on the disjoint line segments dataset when $n_g=10$. Different colors indicate samples from different generators.
Notice how WGAN-GP fails to capture the disconnected manifold of real data, learning a globally connected cover instead, and thus generating off real-manifold samples. DMWGAN also fails, due to an incorrect number of generators. In contrast, DMWGAN-PL is able to infer the necessary number of disjoint components without any supervision and learn the correct disconnected manifold of real data. Each figure shows 10K samples from the respective model. We train each model 5 times; the results shown are consistent across different runs.} \label{img:wg_mdg} \end{figure*} \begin{figure*}[t] \centering \subfloat[WGAN-GP]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.3\textwidth]{img/4l7ub_wg.png} \label{img:ub_wg}} \quad \subfloat[DMWGAN]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.3\textwidth]{img/4l7ub_wg10.png} \label{img:ub_mdg}} \quad \subfloat[DMWGAN-PL]{ \centering \includegraphics[trim=30 10 30 50, clip, width=0.3\textwidth]{img/4l7ub_wg10_pg.png} \label{img:ub_mdgpl}} \caption{Comparing WGAN-GP, DMWGAN and DMWGAN-PL convergence on the unbalanced disjoint line segments dataset when $n_g=10$. The real data is the same line segments as in Figure~\ref{img:wg_mdg}, except the top right line segment has higher probability. Different colors indicate samples from different generators. Notice how DMWGAN-PL (c) has suppressed the contribution of redundant generators without any supervision. Each figure shows 10K samples from the respective model. We train each model 5 times; the results shown are consistent across different runs.} \label{img:ub} \end{figure*} \section{Disconnected Manifold Learning} \label{sec:dml} There are two ways to achieve disconnectedness in $\mathcal{X}$: making $\mathcal{Z}$ disconnected, or making $G: \mathcal{Z} \rightarrow \mathcal{X}$ discontinuous.
The former requires considerations of how to make $\mathcal{Z}$ disconnected, for example by adding discrete dimensions~\cite{chen2016infogan} or using a mixture of Gaussians~\cite{gurumurthy2017deligan}. The latter solution can be achieved by introducing a collection of independent neural networks as $G$. In this work, we investigate the latter solution, since it is more suitable for parallel optimization and can be more robust to bad initialization. We first introduce a set of generators $G_c: \mathcal{Z} \rightarrow \mathcal{X}$ instead of a single one, independently constructed on a uniform prior in the shared latent space $\mathcal{Z}$. Each generator can therefore potentially learn a separate connected manifold. However, we need to encourage these generators to each focus on a different submanifold of the real data, otherwise they may all learn a cover of the submanifolds and experience the same issues as a single generator GAN. Intuitively, we want the samples generated by each generator to be perfectly unique to that generator; in other words, each sample should be a perfect indicator of which generator it came from. Naturally, we can achieve this by maximizing the mutual information $\mathcal{I}(c;x)$, where $c$ is the generator id and $x$ is the generated sample.
As suggested by~\citet{chen2016infogan}, we can implement this by maximizing a lower bound on the mutual information between generator ids and generated images: \begin{align*} \mathcal{I}(c;x) &= H(c) - H(c|x) \\ &= H(c) + \Ex{c\sim p(c), x \sim p_g(x|c)}{\Ex{c' \sim p(c'|x)}{\ln{p(c'|x)}}} \\ &= H(c) + \Ex{x\sim p_g(x)}{\mathcal{K}\mathcal{L}(p(c|x)||q(c|x))} + \Ex{c\sim p(c), x \sim p_g(x|c), c' \sim p(c'|x)}{\ln{q(c'|x)}} \\ &\geq H(c) + \Ex{c\sim p(c), x \sim p_g(x|c), c' \sim p(c'|x)}{\ln{q(c'|x)}} \\ &= H(c) + \Ex{c\sim p(c), x \sim p_g(x|c)}{\ln{q(c|x)}} \end{align*} where $q(c|x)$ is the distribution approximating $p(c|x)$, $p_g(x|c)$ is induced by each generator $G_c$, $\mathcal{K}\mathcal{L}$ is the Kullback-Leibler divergence, and the last equality is a consequence of Lemma 5.1 in~\cite{chen2016infogan}. Therefore, by modeling $q(c|x)$ with a neural network $Q(x;\gamma)$, the encoder network, maximizing $\mathcal{I}(c;x)$ boils down to minimizing a cross entropy loss: \begin{align} L_c = -\Ex{c\sim p(c), x \sim p_g(x|c)}{\ln{q(c|x)}} \end{align} Utilizing the Wasserstein GAN~\cite{arjovsky2017wasserstein} objectives, the discriminator (critic) and generator maximize the following, where $D(x;w): \mathcal{X} \rightarrow \mathbb{R}$ is the critic function: \begin{align} V_d &= \Ex{x\sim p_r(x)}{D(x;w)} - \Ex{c\sim p(c), x\sim p_g(x|c)}{D(x;w)} \\ V_g &= \Ex{c\sim p(c), x\sim p_g(x|c)}{D(x;w)} - \lambda L_c \label{eq:dmwgan_g} \end{align} We call this model Disconnected Manifold Learning WGAN (DMWGAN) in our experiments. We can similarly apply our modifications to the original GAN~\cite{goodfellow2014generative} to construct DMGAN. We add the single-sided version of the gradient penalty regularizer~\cite{gulrajani2017improved} to the discriminator/critic objectives of both models and all baselines. See Appendix A for details of our algorithm and the DMGAN objectives. See Appendix F for more details and experiments on the importance of the mutual information term.
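The penalty $L_c$ is an ordinary cross entropy between the sampled generator id and the encoder's softmax output; a minimal sketch in plain Python (the function and variable names are ours):

```python
import math

def cross_entropy_loss(gen_ids, q_probs):
    """L_c = -E[ln q(c|x)]: mean negative log-probability the encoder Q
    assigns to the id of the generator that produced each sample.

    gen_ids -- integer generator id per generated sample
    q_probs -- softmax outputs of Q, one distribution per sample
    """
    return -sum(math.log(q[c]) for c, q in zip(gen_ids, q_probs)) / len(gen_ids)

# A confident, correct encoder yields a small loss; a uniform
# (uninformative) encoder yields ln(n_g), here ln(2).
confident = cross_entropy_loss([0, 1], [[0.9, 0.1], [0.1, 0.9]])
uniform = cross_entropy_loss([0, 1], [[0.5, 0.5], [0.5, 0.5]])
```

Minimizing this loss with respect to both the encoder and the generators is what pushes each generator toward its own submanifold.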
The original convergence theorems of~\citet{goodfellow2014generative} and~\citet{arjovsky2017wasserstein} hold for the proposed DM versions, because all our modifications concern the internal structure of the generator and can be absorbed into the unlimited capacity assumption. More concretely, all generators together can be viewed as a unified generator where $p(c)p_g(x|c)$ becomes the generator probability, and $L_c$ can be considered a constraint on the generator function space incorporated using a Lagrange multiplier. While most multi-generator models consider $p(c)$ to be a uniform distribution over generators, this naive choice of prior can cause certain difficulties in learning a disconnected support. We discuss this point, and also introduce and motivate the metrics we use for evaluation, in the next two subsections. \subsection{Learning the Generator's Prior} \label{sec:dml_pl} In practice, we can not assume that the true number of submanifolds in the real data is known a priori. So let us consider two cases regarding the number of generators $n_g$ compared to the true number of submanifolds in the data $n_r$, under a fixed uniform prior $p(c)$. If $n_g < n_r$, then some generators have to cover several submanifolds of the real data, thus partially experiencing the same issues discussed in Section~\ref{sec:dif}. If $n_g > n_r$, then some generators have to share one real submanifold, and since we are forcing the generators to maintain disjoint supports, this results in partially covered real submanifolds, causing mode dropping. See Figures~\ref{img:mdg10_mdg}~and~\ref{img:ub_mdg} for examples of this issue. Note that an effective solution to the latter problem reduces the former problem to a trade-off: the more the generators, the better the cover. We can address the latter problem by learning the prior $p(c)$ such that it suppresses the contribution of redundant generators.
Even when $n_g = n_r$, what if the distribution of data over submanifolds is not uniform? Since we are forcing each generator to learn a different submanifold, a uniform prior over the generators would result in a suboptimal distribution. This issue further shows the necessity of learning the prior over generators. We are therefore interested in finding the best prior $p(c)$ over generators. Notice that $q(c|x)$ implicitly learns the probability of $x\in \mathcal{X}$ belonging to each generator $G_c$; hence $q(c|x)$ approximates the true posterior $p(c|x)$. We can take an EM approach to learning the prior: the expected value of $q(c|x)$ over the real data distribution gives us an approximation of $p(c)$ (E step), which we can use to train the DMGAN model (M step). Instead of using an empirical average to learn $p(c)$ directly, we learn it with a model $r(c; \zeta)$, which is a softmax function over parameters $\{\zeta_i \}_{i=1}^{n_g}$ corresponding to each generator. This enables us to control the learning of $p(c)$, the advantage of which we will discuss shortly. We train $r(c)$ by minimizing the cross entropy as follows: \begin{align*} H(p(c), r(c)) &= -\Ex{c\sim p(c)}{\log r(c)} = -\Ex{x\sim p_r(x), c\sim p(c|x)}{\log r(c)} = \Ex{x\sim p_r(x)}{H(p(c|x), r(c))} \end{align*} where $H(p(c|x), r(c))$ is the cross entropy between the model distribution $r(c)$ and the true posterior $p(c|x)$, which we approximate by $q(c|x)$. However, learning the prior from the start, when the generators are still mostly random, may prevent most generators from learning by vanishing their probability too early. To avoid this problem, we add an entropy regularizer and decay its weight $\lambda''$ with time to gradually shift the prior $r(c)$ away from the uniform distribution.
Thus the final loss for training $r(c)$ becomes: \begin{align} L_{prior} = \Ex{x\sim p_r(x)}{H(q(c|x), r(c))} - \alpha^t \lambda'' H(r(c)) \end{align} where $H(r(c))$ is the entropy of the model distribution, $\alpha$ is the decay rate, and $t$ is the training timestep. The model is not very sensitive to $\lambda''$ and $\alpha$; any combination that ensures a smooth transition away from the uniform distribution is valid. We call this augmented model Disconnected Manifold Learning GAN with Prior Learning (DMGAN-PL) in our experiments. See Figures~\ref{img:wg_mdg}~and~\ref{img:ub} for examples showing the advantage of learning the prior. \begin{comment} \begin{algorithm} \caption{Disentangled Manifold Learning GAN with Prior Learning (DMGAN-PL)\label{alg:dmganpl}. Replace $V_d$ and $V_g$ according to Eq~\ref{eq:dmwgan_d} and Eq~\ref{eq:dmwgan_g} in lines~\ref{alg:dmganpl:vd}~and~\ref{alg:dmganpl:vg} for the Wasserstein version (DMWGAN-PL).} \begin{algorithmic}[1] \Require{$p(z)$ prior on $\mathcal{Z}$, $m$ batch size, $k$ number of discriminator updates, $n_g$ number of generators, $\lambda$, $\lambda'$ and $\lambda''$ are weight coefficients, $\alpha$ is decay rate, and $t=0$} \Repeat \For{$j \in \{ 1 \dots k\}$} \Sample{$\{x^i\}_{i=1}^m$}{$p_r(x)$} \Comment{A batch from real data} \Sample{$\{z^i\}_{i=1}^m$}{$p(z)$} \Comment{A batch from $\mathcal{Z}$ prior} \Sample{$\{c^i\}_{i=1}^m$}{$r(c; \zeta)$} \Comment{A batch from generator's prior} \Let{$\{x_g^i\}_{i=1}^m$}{$G(z^i;\theta_{c^i})$} \Comment{Generate batch using selected generators} \Let{$g_w$}{$\nabla_w \frac{1}{m}\sum_i \left[ \ln D(x^i; w) + \ln (1-D(x_g^i; w)) + \lambda'V_{regul} \right]$ \label{alg:dmganpl:vd}} \Let{$w$}{Adam($w$, $g_w$)} \Comment{Maximize $V_d$ wrt.
$w$} \EndFor \Sample{$\{x^i\}_{i=1}^m$}{$p_r(x)$} \Sample{$\{z^i\}_{i=1}^m$}{$p(z)$} \Sample{$\{c^i\}_{i=1}^m$}{$r(c; \zeta)$} \Let{$\{x_g^i\}_{i=1}^m$}{$G(z^i;\theta_{c^i})$} \For{$j \in \{ 1 \dots n_g\}$} \Let{$g_{\theta_j}$}{$\nabla_{\theta_j} \frac{1}{m}\sum_i \left[ \ln D(x_g^i; w) - \lambda \ln Q(x_g^i; \gamma) \right]$} \Comment{$\theta_j$ is short for $\theta_{c^j}$ \label{alg:dmganpl:vg}} \Let{$\theta_j$}{Adam($g_{\theta_j}$, $\theta_j$)} \Comment{Maximize $V_g$ wrt. $\theta$} \EndFor \Let{$g_\gamma$}{$\nabla_\gamma \frac{1}{m} \sum_i \ln Q(x_g^i; \gamma)$} \Let{$\gamma$}{Adam($g_\gamma$, $\gamma$)} \Comment{Minimize $L_c$ wrt. $\gamma$} \Let{$g_\zeta$}{$\nabla_\zeta \frac{1}{m} \sum_i \left[ H(Q(x^i; \gamma), r(c; \zeta)) \right] - \alpha^t \lambda'' H(r(c; \zeta))$} \Let{$\zeta$}{Adam($g_\zeta$, $\zeta$)} \Comment{Minimize $L_{prior}$ wrt. $\zeta$} \Let{$t$}{$t+1$} \Until{convergence.} \end{algorithmic} \end{algorithm} \end{comment} \subsection{Choice of Metrics} \label{sec:met} We require metrics that can assess inter-mode variation, intra-mode variation, and sample quality. The common metric, the Inception Score~\cite{salimans2016improved}, has several drawbacks~\cite{barratt2018note, lucic2017gans}; most notably, it is indifferent to intra-class variation and favors generators that achieve a close to uniform distribution over classes of data. Instead, we consider more direct metrics, together with the FID score~\cite{heusel2017gans} for natural images. For inter-mode variation, we use the Jensen-Shannon divergence (JSD) between the class distributions of a pre-trained classifier over real data and the generator's data. This directly tells us how well the distribution over classes is captured. JSD is preferable to the KL divergence because it is bounded and symmetric. For intra-mode variation, we define the mean square geodesic distance (MSD): the average squared geodesic distance between pairs of samples classified into each class.
To compute the geodesic distance, Euclidean distance is used in a small neighborhood of each sample to construct the Isomap graph~\cite{yang2002extended}, over which a shortest-path distance is calculated. This shortest-path distance is an approximation to the geodesic distance on the true image manifold~\cite{tenenbaum2000global}. Note that the average square distance, for Euclidean distance, is equal to twice the trace of the covariance matrix, i.e., the sum of the eigenvalues of the covariance matrix, and can therefore be an indicator of the variance within each class: \begin{align*} \Ex{x,y}{||x-y||^2} &= 2\Ex{x}{x^Tx} - 2\Ex{x}{x}^T\Ex{x}{x} = 2Tr(Cov(x)) \end{align*} In our experiments, we choose the smallest $k$ for which the constructed $k$-nearest-neighbors (Isomap) graph is connected, in order to have a better approximation of the geodesic distance ($k=18$). Another concept we would like to evaluate is sample quality. Given a pretrained classifier with small test error, samples that are classified with high confidence can reasonably be considered good-quality samples. We plot the ratio of samples classified with confidence greater than a threshold, versus the confidence threshold, as a measure of sample quality: the more off real-manifold samples, the lower the resulting curve. Note that the results from this plot are exclusively indicative of sample quality and should be considered in conjunction with the aforementioned metrics. What if the generative model memorizes the dataset that it is trained on? Such a model would score perfectly on all our metrics, while providing no generalization at all. First, note that a single generator GAN model cannot memorize the dataset because it cannot learn a distribution supported on $N$ disjoint components, as discussed in Section~\ref{sec:dif}.
Second, while our modifications introduce disconnectedness to GANs, the number of generators we use is on the order of the number of data submanifolds, which is several orders of magnitude smaller than common dataset sizes. Note that if we were to assign one unique point of the $\mathcal{Z}$ space to each dataset sample, then a neural network could learn to memorize the dataset by mapping each selected $z\in \mathcal{Z}$ to its corresponding real sample (we would have introduced $N$ disjoint components in $\mathcal{Z}$ space in this case); however, this is not how GANs are modeled. Therefore, the memorization issue is not of concern for common GANs and our proposed models (note that this argument addresses the very narrow case of dataset memorization, not over-fitting in general). \section{Related Works} Several recent works have directly targeted the mode collapse problem by introducing a network $F: \mathcal{X} \rightarrow \mathcal{Z}$ that is trained to map the data back into the latent space prior $p(z)$. It can therefore provide a learning signal if the generated data has collapsed. ALI~\cite{dumoulin2016ali} and BiGAN~\cite{donahue2016bigan} consider pairs of data and corresponding latent variables $(x,z)$, and construct their discriminator to distinguish such pairs of real and generated data. VEEGAN~\cite{srivastava2017veegan} uses the same discriminator, but also adds an explicit reconstruction loss $\Ex{z\sim p(z)}{||z - F_\theta(G_\gamma(z))||_2^2}$. The main advantage of these models is to prevent loss of information by the generator (mapping several $z \in \mathcal{Z}$ to a single $x \in \mathcal{X}$). However, in the case of distributions with disconnected support, these models do not provide much advantage over common GANs and suffer from the same issues we discussed in Section~\ref{sec:dif} due to having a single generator. Another set of recent works proposes using multiple generators in GANs in order to improve their convergence.
MIX+GAN~\cite{arora2017generalization} proposes using a collection of generators based on the well-known advantage of learning a mixed strategy versus a pure strategy in game theory. MGAN~\cite{hoang2018mgan} similarly uses a collection of $k$ generators in order to model a mixture distribution, and trains them together with a $k$-class classifier to encourage each of them to capture a different component of the real mixture distribution. MAD-GAN~\cite{ghosh2017multi} also uses $k$ generators, together with a $(k+1)$-class discriminator which is trained to correctly classify samples from each generator and true data (hence a $(k+1)$-class classifier), in order to increase the diversity of generated images. While these models provide reasons for why multiple generators can model mixture distributions and achieve more diversity, they do not address why single generator GANs fail to do so. In this work, we explain that it is the disconnectedness of the support that single generator GANs are unable to learn, not the fact that real data comes from a mixture distribution. Moreover, all of these works use a fixed number of generators and do not have any prior learning, which can cause serious problems in learning distributions with disconnected support, as we discussed in Section~\ref{sec:dml_pl} (see Figures~\ref{img:mdg10_mdg}~and~\ref{img:ub_mdg} for examples of this issue). Finally, several works have targeted the problem of learning the correct manifold of data. MDGAN~\cite{che2016mode} uses a two-step approach to closely capture the manifold of real data. They first approximate the data manifold by learning a transformation from encoded real images into real-looking images, and then train a single generator GAN to generate images similar to the transformed encoded images of the previous step. However, MDGAN cannot model distributions with disconnected supports.
InfoGAN~\cite{chen2016infogan} introduces auxiliary dimensions to the latent space $\mathcal{Z}$, and maximizes the mutual information between these extra dimensions and generated images in order to learn disentangled representations in the latent space. DeLiGAN~\cite{gurumurthy2017deligan} uses a fixed mixture of Gaussians as its latent prior, and does not have any mechanisms to encourage diversity. While InfoGAN and DeLiGAN can generate disconnected manifolds, they both assume a fixed number of discrete components equal to the number of underlying classes and have no prior learning over these components, thus suffering from the issues discussed in Section~\ref{sec:dml_pl}. Also, neither of these works discusses the inability of single generator GANs to learn disconnected manifolds and its consequences. \begin{table*}[t] \centering \caption{Inter-class variation measured by Jensen Shannon Divergence (JSD) with the true class distribution for the MNIST and Face-Bedroom datasets, and FID score for Face-Bedroom (smaller is better).
We run each model 5 times with random initialization, and report average values with one standard deviation intervals} \begin{tabular}{llll} \toprule Model & JSD MNIST $\times10^{-2}$ & JSD Face-Bed $\times10^{-4}$ & FID Face-Bed \\ \midrule WGAN-GP & $0.13 \text{ std } 0.05$ & $0.23 \text{ std } 0.15$ & $8.30 \text{ std } 0.27$\\ MIX+GAN & $0.17 \text{ std } 0.08$ & $0.83 \text{ std } 0.57$ & $8.02 \text{ std } 0.14$\\ DMWGAN & $0.23 \text{ std } 0.06$ & $0.46 \text{ std } 0.25$ & $7.96 \text{ std } 0.08$\\ DMWGAN-PL & $0.06 \text{ std } 0.02$ & $0.10 \text{ std } 0.05$ & $7.67 \text{ std } 0.16$\\ \bottomrule \end{tabular} \label{tab:jsd_w} \end{table*} \begin{figure*}[t] \centering \subfloat[Intra-class variation MNIST]{ \centering \includegraphics[trim=20 10 50 40, clip, width=0.55\textwidth]{img/vars_modes_Real_DMWGAN-PL_WGAN-GP.pdf} \label{img:mnist_w_vars}} \quad \subfloat[Sample quality MNIST]{ \centering \includegraphics[trim=30 10 50 50, clip, width=0.3\textwidth]{img/sample_quality_Real_DMWGAN-PL_WGAN-GP.pdf} \label{img:mnist_w_sq}} \subfloat[Sample quality Face-Bed]{ \centering \includegraphics[trim=30 10 50 50, clip, width=0.3\textwidth]{img/im_sample_quality.pdf} \label{img:cl_w_sq}} \caption{(a) Shows intra-class variation in MNIST. Bars show the mean square distance (MSD) within each class of the dataset. On average, DMWGAN-PL outperforms WGAN-GP in capturing intra-class variation, as measured by MSD, with larger significance on certain classes. (b) Shows the sample quality in the MNIST experiment. (c) Shows sample quality in the Face-Bed experiment. Notice how DMWGAN-PL outperforms other models due to fewer off real-manifold samples. We run each model 5 times with random initialization, and report average values with one standard deviation intervals in both figures.
10K samples are used for metric evaluations.} \label{img:mnist_w} \end{figure*} \begin{figure*}[t] \centering \subfloat[WGAN-GP]{ \centering \includegraphics[trim=95 20 95 30, clip, width=0.29\textwidth]{img/im_wgan_samples_red.png} \label{img:samples_w_real}} \quad \subfloat[DMWGAN]{ \centering \includegraphics[trim=95 20 95 30, clip, width=0.29\textwidth]{img/im_dmwgan_samples_red.png} \label{img:samples_w_dmw}} \quad \subfloat[DMWGAN-PL]{ \centering \includegraphics[trim=95 20 95 30, clip, width=0.29\textwidth]{img/im_dmwgan_pl_samples.png} \label{img:samples_w_wg}} \caption{Samples randomly generated by GAN models trained on the Face-Bed dataset. Notice how WGAN-GP generates combined face-bedroom images (red boxes) in addition to faces and bedrooms, due to learning a connected cover of the real data support. DMWGAN does not generate such samples; however, it generates completely off-manifold samples (red boxes) due to having redundant generators and a fixed prior. DMWGAN-PL is able to correctly learn the disconnected support of real data. The samples and trained models are not cherry-picked.} \label{img:samples_w} \end{figure*} \section{Experiments} \label{sec:exp} In this section, we present several experiments to investigate the issues and proposed solutions discussed in Sections~\ref{sec:dif}~and~\ref{sec:dml} respectively. The same network architecture is used for the discriminator and generator networks of all models under comparison, except that we use one quarter as many filters in each layer of the multi-generator models compared to the single-generator models, to control for the effect of model complexity. In all experiments, we train each model for a total of 200 epochs with a five-to-one update ratio between discriminator and generator. $Q$, the encoder network, is built on top of the discriminator's last hidden layer, and is trained simultaneously with the generators.
Each data batch is constructed by first selecting 32 generators according to the prior $r(c; \zeta)$, and then sampling each one using $z\sim {\mathcal{U}(-1,1)}$. See Appendix B for details of our networks and the hyperparameters. \textbf{Disjoint line segments.} This dataset is constructed by sampling data with a uniform distribution over four disjoint line segments to achieve a distribution supported on a union of disjoint low-dimensional manifolds. See Figure~\ref{img:wg_mdg} for the results of experiments on this dataset. In Figure~\ref{img:ub}, an unbalanced version of this dataset is used, where 0.7 probability is placed on the top-right line segment, and the other segments have 0.1 probability each. The generator and discriminator are both MLPs with two hidden layers, and 10 generators are used for multi-generator models. We choose WGAN-GP as the state-of-the-art GAN model in these experiments (we observed similar or worse convergence with other flavors of single generator GANs). MGAN achieves similar results to DMWGAN. \textbf{MNIST dataset.} MNIST~\cite{lecun1998mnist} is particularly suitable since samples with different class labels can be reasonably interpreted as lying on disjoint manifolds (with minor exceptions like certain $4$s and $9$s). The generator and discriminator are DCGAN-like networks~\cite{radford2015unsupervised} with three convolution layers. Figure~\ref{img:mnist_w} shows the mean square geodesic distance (MSD) and Table~\ref{tab:jsd_w} reports the corresponding divergences in order to compare inter-mode variation. 20 generators are used for multi-generator models. See Appendix C for experiments using the modified GAN objective. Results demonstrate the advantage of adding our proposed modifications to both GAN and WGAN. See Appendix D for qualitative results.
\textbf{Face-Bed dataset.} We combine 20K face images from the CelebA dataset~\cite{liu2015celeba} and 20K bedroom images from the LSUN Bedrooms dataset~\cite{yu15lsun} to construct a natural image dataset supported on a disconnected manifold. We center-crop and resize images to $64\times64$. 5 generators are used for multi-generator models. Figures~\ref{img:cl_w_sq},~\ref{img:samples_w} and Table~\ref{tab:jsd_w} show the results of this experiment. See Appendix E for more qualitative results. \begin{figure*}[t] \centering \subfloat[]{ \centering \includegraphics[trim=80 10 80 20, clip, width=0.32\textwidth]{img/wgset_top_samples.png} \label{img:gset_dmw_mnist}} \subfloat[]{ \centering \includegraphics[trim=20 10 10 50, clip, width=0.45\textwidth]{img/rl_policy.pdf} \label{img:gset_p_mnist}} \qquad \subfloat[]{ \centering \includegraphics[trim=130 20 120 78, clip, width=0.32\textwidth, height=130pt]{img/im_dmwgan_pl_top.png} \label{img:gset_dmw_cl}} \subfloat[]{ \centering \includegraphics[trim=20 10 10 50, clip, width=0.45\textwidth]{img/dmwgan_pl_rl_policy.pdf} \label{img:gset_p_cl}} \caption{DMWGAN-PL prior learning during training on MNIST with 20 generators (a,b) and on Face-Bed with 5 generators (c, d). (a, c) show samples from top generators with prior greater than $0.05$ and $0.2$ respectively. (b, d) show the probability of selecting each generator $r(c; \zeta)$ during training; each color denotes a different generator. The color identifying each generator in (b) and the border color of each image in (a) correspond; similarly for (d) and (c).
Notice how prior learning has correctly learned the probability of selecting each generator and dropped redundant generators without any supervision.} \label{img:gset} \end{figure*} \begin{comment} \begin{figure*}[b] \centering \subfloat[WGAN-GP]{ \centering \includegraphics[trim=80 20 80 20, clip, width=0.3\textwidth]{img/true_samples.png} \label{img:samples_w_real}} \quad \subfloat[DMWGAN]{ \centering \includegraphics[trim=80 20 80 20, clip, width=0.3\textwidth]{img/dmwgan_samples.png} \label{img:samples_w_dmw}} \quad \subfloat[DMWGAN-PL]{ \centering \includegraphics[trim=80 20 80 20, clip, width=0.3\textwidth]{img/wgan_samples.png} \label{img:samples_w_wg}} \caption{Samples randomly generated from (a) WGAN-GP, (b) DMWGAN model, and (c) DMWGAN-PL model. Notice how WGAN-GP generates combination of faces and bedrooms (red boxes) due to learning a cover of the real data manifold. DMWGAN does not have any combination samples, however it generates completely off manifold samples due to fixed prior. DMWGAN-PL is able to correctly learn the disconnected support of real data. The samples and trained models are not cherry picked.} \label{img:samples_w} \end{figure*} \end{comment} \section{Conclusion and Future Works} In this work we showed why single generator GANs cannot correctly learn distributions supported on disconnected manifolds, what consequences this shortcoming has in practice, and how multi-generator GANs can effectively address these issues. Moreover, we showed the importance of learning a prior over the generators rather than using a fixed prior in multi-generator models. However, it is important to highlight that throughout this work we assumed the disconnectedness of the real data support. Verifying this assumption in major datasets, and studying the topological properties of these datasets in general, are interesting future works.
Extending prior learning to other settings, such as learning a prior over the shape of the $\mathcal{Z}$ space, and investigating the effects of adding diversity to the discriminator as well as the generators, remain exciting paths for future research. \section*{Acknowledgement} This work was supported by Verisk Analytics and NSF-USA award number 1409683. \small \bibliographystyle{plainnat}
\section{Introduction} Modeling textual relevance between document-query pairs lives at the heart of information retrieval (IR) research. Intuitively, this enables a wide assortment of real-life applications, ranging from standard web search to automated chatbots. The key idea is that these systems learn a scoring function between document-query pairs, providing a ranked list of candidates as an output. A considerable fraction of such IR systems are focused on short textual documents, e.g., answering fact-based questions or selecting the best response in the context of a chat-based system. The applications of retrieval-based response and question answering (QA) systems are versatile, potentially serving as a powerful standalone domain-specific system or as a crucial component in larger, general-purpose chat systems such as Alexa. This paper presents a universal neural ranking model for such tasks. Neural networks (or deep learning) have garnered considerable attention for retrieval-based systems \cite{Severyn2015,DBLP:conf/acl/WangL016,shen2014latent,DBLP:conf/cikm/YangAGC16,DBLP:conf/naacl/HeL16}. Notably, the dominant state-of-the-art systems for many benchmarks are now neural models, dispensing with traditional feature engineering techniques almost entirely. In these systems, convolutional or recurrent networks are empowered with recent techniques such as neural attention \cite{DBLP:conf/acl/WangL016,rocktaschel2015reasoning,bahdanau2014neural}, achieving very competitive results on standard benchmarks. The key idea of attention is to extract only the most relevant information that is useful for prediction. In the context of textual data, attention learns to weight words and sub-phrases within documents based on how important they are.
In the same vein, co-attention mechanisms \cite{DBLP:journals/corr/XiongZS16,DBLP:journals/corr/SantosTXZ16,DBLP:conf/aaai/ZhangLSW17,DBLP:conf/emnlp/ShenYD17} are a form of attention mechanism that learns joint pairwise attentions, with respect to both document and query. Attention is traditionally used and commonly imagined as a feature extractor. Its behavior can be thought of as a dynamic form of pooling, as it learns to select and compose different words to form the final document representation. This paper re-imagines attention as a form of feature augmentation. Attention is casted not for compositional learning or pooling but to provide hints for subsequent layers. To the best of our knowledge, this is a new way to exploit attention in neural ranking models. We begin by describing not only its advantages but also how it handles the weaknesses of existing models. Typically, attention is applied once to a sentence \cite{rocktaschel2015reasoning,DBLP:conf/acl/WangL016}. A final representation is learned, and then passed to prediction layers. In the context of handling sequence pairs, co-attention is applied and a final representation for each sentence is learned \cite{DBLP:journals/corr/SantosTXZ16,DBLP:conf/aaai/ZhangLSW17,DBLP:conf/emnlp/ShenYD17}. An obvious drawback, which applies to many existing models, is that they are generally restricted to one attention variant. In the case where more than one attention call is used (e.g., co-attention and intra-attention), concatenation is generally used to fuse representations \cite{DBLP:conf/emnlp/ShenYD17,DBLP:conf/emnlp/ParikhT0U16}. Unfortunately, this incurs cost in subsequent layers by doubling the representation size per call. The rationale for desiring more than one attention call is intuitive.
In \cite{DBLP:conf/emnlp/ShenYD17,DBLP:conf/emnlp/ParikhT0U16}, Co-Attention and Intra-Attention are both used because each provides a different view of the document pair, learning high-quality representations that could be used for prediction. Hence, this can significantly improve performance. Moreover, Co-Attention also comes in different flavors and can either be used with extractive max-mean pooling \cite{DBLP:journals/corr/SantosTXZ16,DBLP:conf/aaai/ZhangLSW17} or alignment-based pooling \cite{DBLP:conf/emnlp/ParikhT0U16,DBLP:conf/acl/ChenZLWJI17,DBLP:conf/emnlp/ShenYD17}. Each co-attention type produces different document representations. In max-pooling, signals are extracted based on a word's \textit{largest} contribution to the other text sequence. Mean-pooling calculates its contribution to the overall sentence. Alignment-pooling is another flavor of co-attention, which aligns semantically similar sub-phrases together. As such, different pooling operators provide different views of sentence pairs. This is often tuned as a hyperparameter, i.e., performing architectural engineering to find the variation that works best on each problem domain and dataset. Our approach is targeted at serving two important purposes: (1) it removes the need for architectural engineering of this component by enabling attention to be called an arbitrary $k$ times with hardly any consequence, and (2) it concurrently improves performance by modeling multiple views via multiple attention calls. As such, our method is in similar spirit to multi-headed attention, albeit efficient. To this end, we introduce \textit{Multi-Cast Attention Networks} (MCAN), a new deep learning architecture for a potpourri of tasks in the question answering and conversation modeling domains. In our approach, attention is \textbf{casted}, in contrast to most other works that use it as a pooling operation.
We cast co-attention multiple times, each time returning a compressed \textbf{scalar} feature that is re-attached to the original word representations. The key intuition is that compression enables scalable casting of multiple attention calls, aiming to provide subsequent layers with a \textit{hint} of not only global knowledge but also cross-sentence knowledge. Intuitively, when passing these enhanced embeddings into a compositional encoder (such as a long short-term memory encoder), the LSTM can then benefit from this hint and alter its representation learning process accordingly. \subsection{Our Contributions} In summary, the prime contributions of this work are: \begin{itemize} \item For the first time, we propose a new paradigm of utilizing attention not as a pooling operator but as a form of feature augmentation. We propose an overall architecture, Multi-Cast Attention Networks (MCAN), for generic sequence pair modeling. \item We evaluate our proposed model on four benchmark tasks, i.e., Dialogue Reply Prediction (Ubuntu dialogue corpus), Factoid Question Answering (TrecQA), Community Question Answering (QatarLiving forums from SemEval 2016) and Tweet Reply Prediction (Customer support). On the Ubuntu dialogue corpus, MCAN outperforms the existing state-of-the-art models by $9\%$. MCAN also achieves the best-performing score of $0.838$ MAP and $0.904$ MRR on the well-studied TrecQA dataset. \item We provide a comprehensive and in-depth analysis of the inner workings of our proposed MCAN model. We show that the casted attention features are interpretable and are capable of learning (1) a neural adaptation of word overlap and (2) a differentiation of evidence and anti-evidence words/patterns. \end{itemize} \section{Multi-Cast Attention Networks} In this section, we describe our proposed MCAN model. The inputs to our model are two text sequences, which we denote as query $q$ and document $d$.
In our problem, the query-document formulation generalizes to different problem domains such as question answering or message-response prediction. Figure \ref{fig:architecture} illustrates the overall model architecture for question-answer retrieval. \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{./images/architecture2.pdf} \caption{Illustration of our proposed Multi-Cast Attention Networks (\textit{Best viewed in color}). MCAN is a wide multi-headed attention architecture that utilizes compression functions and attention as features. The example is given for question-answer retrieval. The input encoding layer is omitted for clarity.} \label{fig:architecture} \end{figure} \subsection{Input Encoder} The document and query inputs are passed in as one-hot encoded vectors. A word embedding layer parameterized by $W_{e} \in \mathbb{R}^{d \times |V|}$ converts each word to a dense word representation $w \in \mathbb{R}^{d}$. $V$ is the set of all words in the vocabulary. \subsubsection{Highway Encoder} Each word embedding is passed through a highway encoder layer. Highway networks \cite{DBLP:journals/corr/SrivastavaGS15} are gated nonlinear transform layers which control information flow to subsequent layers. Many works adopt a projection layer that is trained in place of the raw word embeddings. Not only does this save computation cost, but it also reduces the number of trainable parameters. Our work extends this projection layer to use a highway encoder. The intuition for doing so is simple, i.e., highway encoders can be interpreted as data-driven word filters. As such, we can imagine them parametrically learning which words are inclined to be important to the task at hand and which are not, for example, filtering stop words and words that usually do not contribute much to the prediction. Similar to recurrent models that are gated in nature, this highway encoder layer controls how much information (of each word) flows to the subsequent layers.
Let $H(.)$ and $T(.)$ be single layered affine transforms with ReLU and sigmoid activation functions respectively. A single highway network layer is defined as: \begin{align} y = H(x, W_{H}) \cdot T(x, W_{T}) + (1-T(x, W_{T})) \cdot x \end{align} where $W_H, W_{T} \in \mathbb{R}^{r \times d}$. Notably, the dimensions of the affine transform might be different from the size of the input vector. In this case, an additional nonlinear transform is used to project $x$ to the same dimensionality. \subsection{Co-Attention} Co-Attention \cite{DBLP:journals/corr/XiongZS16} is a pairwise attention mechanism that enables attending to text sequence pairs jointly. In this section, we introduce four variants of attention, i.e., (1) max-pooling, (2) mean-pooling, (3) alignment-pooling, and finally (4) intra-attention (or self-attention). The first step in co-attention is to learn an affinity (or similarity) matrix between each word across both sequences. Following Parikh et al. \cite{DBLP:conf/emnlp/ParikhT0U16}, we adopt the following formulation for learning the affinity matrix. \begin{align} s_{ij} = F(q_{i})^{T} F(d_{j}) \label{aff} \end{align} where $F(.)$ is a function such as a multi-layered perceptron (MLP). Alternate forms of co-attention are also possible, such as $s_{ij} = q_{i}^{\top} M d_j$ and $s_{ij} = F([q_i;d_j])$. \subsubsection{Extractive Pooling} The most common variant of extractive co-attention is \textit{max-pooling} co-attention, which attends to each word based on the maximum influence it has on the other text sequence. \begin{align} q' = Soft(\max_{col}(s))^{\top} q \:\: \: \text{and} \:\: \: d' = Soft(\max_{row}(s))^{\top} d \end{align} where $q', d'$ are the co-attentive representations of $q$ and $d$ respectively. Soft(.) is the Softmax operator.
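A minimal numpy sketch of max-pooling co-attention follows, taking $F$ in the affinity matrix to be the identity for simplicity; the toy shapes and random vectors below are illustrative, not a faithful reimplementation of the paper's networks:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def max_pool_coattention(q, d):
    """Max-pooling co-attention with F taken as the identity.

    q: (len_q, dim) query word vectors, d: (len_d, dim) document word vectors.
    Returns co-attentive summaries q', d', each of shape (dim,)."""
    s = q @ d.T                    # affinity matrix, s_ij = q_i^T d_j
    a_q = softmax(s.max(axis=1))   # each q_i scored by its max influence on d
    a_d = softmax(s.max(axis=0))   # each d_j scored by its max influence on q
    return a_q @ q, a_d @ d        # attention-weighted sums over words

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 8))    # toy query: 5 words, 8-dim embeddings
d = rng.standard_normal((7, 8))    # toy document: 7 words
q_att, d_att = max_pool_coattention(q, d)
print(q_att.shape, d_att.shape)    # (8,) (8,)
```

Replacing `s.max(axis=...)` with `s.mean(axis=...)` yields the mean-pooling variant described next.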
Alternatively, the mean row- and column-wise pooling of matrix $s$ can also be used: \begin{align} q' = Soft(\mathop{mean}_{col}(s))^{\top} q \:\: \: \text{and} \:\: \: d' = Soft(\mathop{mean}_{row}(s))^{\top} d \end{align} However, each pooling operator has a different impact and can be intuitively understood as follows: max-pooling selects each word based on its maximum importance over all words in the other text. Mean-pooling is a more \textit{holistic} comparison, paying attention to a word based on its overall influence on the other text. This is usually dataset-dependent, treated as a hyperparameter, and tuned to see which performs best on the held-out set. \subsubsection{Alignment-Pooling} Soft alignment-based pooling has also been utilized for learning co-attentive representations \cite{DBLP:conf/emnlp/ParikhT0U16}. However, the key difference with soft alignment is that it \textit{realigns} sequence pairs, while standard co-attention simply learns to weight and score important words. The co-attentive representations are then learned as follows: \begin{align} d'_i := \sum^{\ell_{q}}_{j=1} \frac{exp(s_{ij})}{\sum_{k=1}^{\ell_{q}} exp(s_{ik})} q_{j} \:\:\:\text{and} \:\:\: q'_j := \sum^{\ell_{d}}_{i=1} \frac{exp(s_{ij})}{\sum_{k=1}^{\ell_{d}} exp(s_{kj})} d_{i} \end{align} where $d'_i$ is the sub-phrase in $q$ that is softly aligned to $d_i$. Intuitively, $d'_i$ is a weighted sum across $\{q_j\}^{\ell_{q}}_{j=1}$, selecting the most relevant parts of $q$ to represent $d_i$. \subsubsection{Intra-Attention} Intra-Attention, or Self-Attention, was recently proposed to learn representations that are aware of long-term dependencies. This is often formulated as a co-attention (or alignment) operation with respect to itself. In this case, we apply intra-attention to both document and query independently. For notational simplicity, we refer to them as $x$ instead of $q$ or $d$ here.
The Intra-Attention function is defined as: \begin{align} x^{\prime}_i &:= \sum^{\ell}_{j=1} \frac{exp(s_{ij})}{\sum_{k=1}^{\ell} exp(s_{ik})} x_{j} \end{align} where $x^{\prime}_i$ is the intra-attentional representation of $x_i$. \subsection{Multi-Cast Attention} At this point, it is easy to make several observations. Firstly, each attention mechanism provides a different flavor to the model. Secondly, attention is used to alter the original representation either by re-weighting or realigning. As such, most neural architectures only make use of one type of co-attention or alignment function \cite{DBLP:conf/emnlp/ParikhT0U16,DBLP:journals/corr/SantosTXZ16}. However, this requires the right model architecture to be tuned, and potentially misses out on the benefits brought by using multiple variations of the co-attention mechanism. As such, our work casts each attention operation as a \textbf{word-level} feature. \subsubsection{Casted Attention} Let $x$ be either $q$ or $d$ and $\bar{x}$ be the representation\footnote{We omit subscripts for clarity.} of $x$ after applying co-attention or soft attention alignment. The attention features for the co-attention operators are: \begin{align} f_{c} &= F_{c}([\bar{x}; x]) \\ f_{m} &= F_{c}(\bar{x} \odot x) \\ f_{s} &= F_{c}(\bar{x} - x) \end{align} where $\odot$ is the Hadamard product and $[.;.]$ is the concatenation operator. $F_{c}(.)$ is a compression function used to reduce features to a scalar. Intuitively, what is achieved here is that we model the influence of co-attention by comparing representations before and after co-attention. For soft-attention alignment, a critical note here is that $x$ and $\bar{x}$ (though of equal lengths) have \textit{`exchanged'} semantics. In other words, in the case of $q$, $\bar{q}$ actually contains the aligned representation of $d$.
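To make the casting concrete, here is a small sketch of the three casted features for a single word, using plain summation as the compression function $F_c$ (a simple, parameter-free choice); the vectors are illustrative toy values:

```python
import numpy as np

def cast_features(x, x_bar, compress=np.sum):
    """Casted attention features for one word.

    x: original word representation, x_bar: its (co-)attentive counterpart.
    compress reduces a vector to a scalar; plain summation is used here.
    Returns the three scalars (f_c, f_m, f_s)."""
    f_c = compress(np.concatenate([x_bar, x]))  # concatenation comparison
    f_m = compress(x_bar * x)                   # multiplicative (Hadamard) comparison
    f_s = compress(x_bar - x)                   # subtractive comparison
    return np.array([f_c, f_m, f_s])

w = np.array([1.0, 2.0, -1.0])       # toy word embedding
w_bar = np.array([0.5, 1.0, 0.0])    # toy co-attentive representation
z = cast_features(w, w_bar)
print(z)                             # three scalar hints for this cast
w_aug = np.concatenate([w, z])       # augmented word: [w; z]
```

Each attention cast contributes three such scalars, so the subsequent layers see only a few extra dimensions per word regardless of embedding size.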
Finally, the usage of multiple comparison operators (subtractive, concatenation and multiplicative operators) is to capture multiple perspectives and is inspired by the ESIM model \cite{DBLP:conf/acl/ChenZLWJI17}. \subsubsection{Compression Function} This section defines $F_{c}(.)$, the compression function used. The rationale for compression is simple and intuitive: we do not want to \textit{bloat} subsequent layers with a high dimensional vector which consequently incurs parameter costs in subsequent layers. We investigate the usage of three compression functions, which are capable of reducing an $n$-dimensional vector to a scalar. \begin{itemize} \item The \textbf{Sum} (SM) function is a non-parameterized function that sums the entire vector, returning a scalar as an output. \begin{align} F(x) = \sum^{n}_{i}x_{i} \:\:,\:\: \forall x_i \in x \end{align} \item The \textbf{Neural Network} (NN) compression is a fully-connected layer that converts an $n$-dimensional feature vector to a scalar as follows: \begin{align} F(x) = ReLU(W_c(x) + b_c). \end{align} where $W_c \in \mathbb{R}^{n \times 1}$ and $b_c \in \mathbb{R}$ are the parameters of the FC layer. \item \textbf{Factorization Machines} (FM) are general-purpose machine learning techniques that accept a real-valued feature vector $x \in \mathbb{R}^{n}$ and return a scalar output. \begin{align} F(x) &= w_{0} + \sum^{n}_{i=1} w_i \: x_i + \sum^{n}_{i=1} \sum^{n}_{j=i+1} \langle v_i, v_j \rangle \: x_i \: x_j \end{align} where $w_{0} \in \mathbb{R}$, $w \in \mathbb{R}^{n}$, and $\{v_1, \cdots, v_{n}\} \in \mathbb{R}^{n \times k}$ are the parameters of the FM model. FMs are expressive models that capture pairwise interactions between features using factorized parameters. $k$ is the number of factors of the FM model. For more details, we refer interested readers to \cite{rendle2010factorization}.
\end{itemize} Note that we do not share parameters across multiple attention casts because each attention cast is aimed at modeling a different view. Our experiments report the above mentioned variants under the model names MCAN (SM), MCAN (NN) and MCAN (FM) respectively. \subsubsection{Multi-Cast} The key idea behind our architecture is the facilitation of $k$ attention calls (or casts), with each cast augmenting the raw word embeddings with a real-valued attentional hint. We formally describe the Multi-cast Attention mechanism. For each query-document pair, we apply (1) Co-Attention with mean-pooling, (2) Co-Attention with max-pooling and (3) Co-Attention with alignment-pooling. Additionally, we apply Intra-Attention to both query and document individually. Each attention cast produces three scalars (per word), which are concatenated with the word embedding. The final casted feature vector is $z \in \mathbb{R}^{12}$. As such, for each word $w_i$, the new word representation becomes $\bar{w}_i = [w_i; z_i]$. \subsection{Long Short-Term Memory Encoder} Next, the word representations with casted attention $\bar{w}_1, \bar{w}_2, \dots \bar{w}_{\ell}$ are passed into a sequential encoder layer. We adopt a standard vanilla long short-term memory (LSTM) encoder: \begin{align} h_i = \mathrm{LSTM}(\bar{w}, i), \forall i \in [1, \dots \ell] \end{align} where $\ell$ represents the maximum length of the sequence. Notably, the parameters of the LSTM are \textit{`siamese'} in nature, sharing weights between document and query. The key idea is that the LSTM encoder learns representations that are aware of sequential dependencies by using nonlinear transformations as gating functions. Since LSTMs are standard neural building blocks, we omit technical details in favor of brevity.
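Before moving on, the bookkeeping of the multi-cast step described above can be sketched as follows (with hypothetical toy values for the feature triples):

```python
# Sketch of the multi-cast assembly for a single word (hypothetical values).
# Four casts (alignment, max-pooled and mean-pooled co-attention, plus
# intra-attention) each contribute three scalar features; the 12 resulting
# scalars are appended to the raw word embedding.

def multi_cast_word(word_embedding, casts):
    """casts: list of (f_c, f_m, f_s) triples, one per attention cast."""
    z = [f for triple in casts for f in triple]   # flatten to 12 scalars
    assert len(z) == 12, "expected 4 casts x 3 features"
    return word_embedding + z                     # new representation [w; z]

w = [0.1, 0.2, 0.3]                   # toy 3-dimensional word embedding
casts = [(2.6, 4.0, -0.4),            # alignment co-attention
         (1.1, 0.5, 0.2),             # max-pooled co-attention
         (0.9, 0.3, -0.1),            # mean-pooled co-attention
         (1.4, 0.8, 0.0)]             # intra-attention

w_bar = multi_cast_word(w, casts)
print(len(w_bar))  # 15 = 3 (embedding) + 12 (casted features)
```

In the actual model the embedding dimension is of course much larger; only the twelve appended scalars are produced by the attention casts.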
As such, the key idea behind casting attention as features right before this layer is that it provides the LSTM encoder with hints that carry information such as (1) long-term and global sentence knowledge and (2) knowledge between sentence pairs (document and query). \subsubsection{Pooling Operation} Finally, a pooling function is applied across the hidden states $\{h_1 \dots h_{\ell}\}$ of each sentence, converting the sequence into a fixed dimensional representation. \begin{align} h = \mathop{MeanMax} [h_1 \dots h_{\ell}] \end{align} We adopt the $\mathop{MeanMax}$ pooling operator, which concatenates the results of mean pooling and max pooling together. We found this to consistently perform better than using $\mathop{max}$ or $\mathop{mean}$ pooling in isolation. \subsection{Prediction Layer and Optimization} Finally, given the fixed dimensional representations of the document-query pair, we pass their matching features into a two-layer $h$-dimensional highway network. The final prediction layer of our model is computed as follows: \begin{align} y_{out} = H_{2}(H_{1}([x_{q}; x_{d}; x_q \odot x_d; x_q - x_d])) \end{align} where $H_{1}(.), H_{2}(.)$ are highway network layers with ReLU activation. The output is then passed into a final linear softmax layer. \begin{align} y_{pred} = softmax(W_{F} \cdot y_{out} + b_{F}) \end{align} where $W_{F} \in \mathbb{R}^{h \times 2}$ and $b_{F} \in \mathbb{R}^{2}$. The network is then trained using the standard binary cross-entropy loss with L2 regularization. \begin{align} J(\theta) = -\sum^{N}_{i=1} \: [ y_i \log \hat{y}_i + (1-y_i)\log(1-\hat{y}_i)] + \lambda||\theta||_{L2} \end{align} where $\theta$ are the parameters of the network, $\hat{y}$ is the output of the network, $||\theta||_{L2}$ is the L2 regularization term and $\lambda$ is the weight of the regularizer.
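The pooling and matching-feature steps above can be sketched in plain Python (toy hidden states; dimensions are illustrative only):

```python
# Sketch of MeanMax pooling over LSTM hidden states and the four-way
# matching features passed to the highway prediction layers (toy values).

def mean_max_pool(states):
    """Concatenate mean pooling and max pooling over a list of vectors."""
    dim = len(states[0])
    mean = [sum(s[i] for s in states) / len(states) for i in range(dim)]
    mx = [max(s[i] for s in states) for i in range(dim)]
    return mean + mx

def matching_features(x_q, x_d):
    """[x_q; x_d; x_q Hadamard x_d; x_q - x_d] fed to the highway network."""
    had = [a * b for a, b in zip(x_q, x_d)]
    sub = [a - b for a, b in zip(x_q, x_d)]
    return x_q + x_d + had + sub

h_q = mean_max_pool([[1.0, 0.0], [0.0, 2.0]])   # pooled query states
h_d = mean_max_pool([[0.5, 0.5], [1.5, -0.5]])  # pooled document states
print(matching_features(h_q, h_d))
```

The resulting vector is what the two highway layers and the final softmax operate on.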
\section{Empirical Evaluation} In our experiments, we aim to answer the following research questions (\textbf{RQs}): \begin{enumerate} \item \textbf{RQ1} - Does our proposed approach achieve state-of-the-art performance on question answering and conversation modeling tasks? What is the relative improvement over well-established baselines? \item \textbf{RQ2} - What are the impacts of architectural design on performance? Is the LSTM encoder necessary to make use of the casted features? Do all the variations of co-attention contribute to the overall model performance? \item \textbf{RQ3} - Can we explain the inner workings of our proposed model? Can we interpret the casted attention features? \end{enumerate} \subsection{Experiment 1 - Dialogue Prediction} In this first task, we evaluate our model on its ability to successfully predict the next reply in conversations. \subsubsection{Dataset and Evaluation Metric} For this experiment, we utilize the well-known large-scale Ubuntu Dialogue Corpus (UDC) \cite{lowe2015ubuntu}. We use the same testing splits provided by Xu et al. \cite{xu2016incorporating}. In this task, the goal is to match a sentence with its reply. Following \cite{wu2016knowledge}, the task mainly utilizes the last two utterances in each conversation, predicting if the latter follows the former. The training set consists of \textbf{one million} message-response pairs with a $1:1$ positive-negative ratio. The development and testing sets have a $9:1$ ratio. Following \cite{wu2016knowledge,xu2016incorporating}, we use the evaluation metric of recall@$k$ ($R_n@k$), which indicates whether the ground truth exists in the top $k$ results from $n$ candidates. The four evaluation metrics used are $R_{2}@1$, $R_{10}@1$, $R_{10}@2$ and $R_{10}@5$.
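The $R_n@k$ metric can be sketched as follows (hypothetical candidate scores):

```python
# Sketch of the R_n@k evaluation metric (hypothetical scores).
# Given n candidate responses with model scores, R_n@k is 1 if the
# ground-truth response appears among the top-k scored candidates;
# the reported metric is this value averaged over all test examples.

def recall_at_k(scores, truth_index, k):
    """scores: model scores for n candidates; truth_index: ground truth."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return 1.0 if truth_index in ranked[:k] else 0.0

scores = [0.1, 0.7, 0.3, 0.9, 0.2]   # n = 5 candidates, truth at index 1
print(recall_at_k(scores, 1, 1))  # truth ranked 2nd -> 0.0 for k = 1
print(recall_at_k(scores, 1, 2))  # truth within top-2 -> 1.0
```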
\subsubsection{Competitive Baselines and Implementation Details} We compare against a large number of competitive baselines, e.g., MLP, DeepMatch \cite{lu2013deep}, ARC-I / ARC-II \cite{Hu2014a}, CNTN \cite{DBLP:conf/ijcai/QiuH15}, MatchPyramid \cite{pang2016text}, vanilla LSTM, Attentive Pooling LSTM \cite{DBLP:journals/corr/SantosTXZ16}, MV-LSTM \cite{DBLP:conf/aaai/WanLGXPC16} and finally the state-of-the-art Knowledge Enhanced Hybrid Neural Network (KEHNN) \cite{wu2016knowledge}. A detailed description of the baselines can be found in \cite{wu2016knowledge}. Since the testing splits are the same, we report the results directly from \cite{wu2016knowledge}. For fair comparison, we set the LSTM encoder size in MCAN to $d=100$, which makes it equal to the models in \cite{wu2016knowledge}. We optimize MCAN with the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} with an initial learning rate of $3 \times 10^{-4}$. A dropout rate of $0.2$ is applied to all layers except the word embedding layer. The sequences are dynamically truncated or padded to their batch-wise maximums (with a hard limit of $50$ tokens). We initialize the word embedding layer with pretrained GloVe embeddings. \subsubsection{Experimental Results} Table \ref{tab:ubuntu} reports the results of our experiments. Clearly, we observe that all MCAN models achieve a large performance gain over existing state-of-the-art models. More specifically, the improvements across all metrics are $\approx 5\%-9\%$ over KEHNN. The performance improvement over strong baselines such as AP-LSTM and MV-LSTM is even greater, reaching $15\%$ in terms of $R_{10}@{1}$. This confirms the effectiveness of the MCAN model. Overall, MCAN (FM) and MCAN (NN) are comparable in terms of performance. MCAN (SM) is marginally lower than both MCAN (FM) and MCAN (NN). However, its performance is still considerably higher than that of the existing state-of-the-art models.
\begin{table}[H] \centering \begin{tabular}{lrrrr} \hline & \multicolumn{1}{l}{$R_2@1$} & \multicolumn{1}{l}{$R_{10}@1$} & \multicolumn{1}{l}{$R_{10}@2$} & \multicolumn{1}{l}{$R_{10}@5$} \\ \hline MLP & 0.651 & 0.256 & 0.380 & 0.703 \\ DeepMatch & 0.593 & 0.345 & 0.376 & 0.693 \\ ARC-I & 0.665 & 0.221 & 0.360 & 0.684 \\ ARC-II & 0.736 & 0.380 & 0.534 & 0.777 \\ CNTN & 0.743 & 0.349 & 0.512 & 0.797 \\ MatchPyramid & 0.743 & 0.420 & 0.554 & 0.786 \\ LSTM & 0.725 & 0.361 & 0.494 & 0.801 \\ AP-LSTM & 0.758 & 0.381 & 0.545 & 0.801 \\ MV-LSTM & 0.767 & 0.410 & 0.565 & 0.800 \\ KEHNN & 0.786 & 0.460 & 0.591 & 0.819 \\ \hline MCAN (SM) & 0.831 & 0.548 & 0.682 & 0.873 \\ MCAN (NN) & \underline{0.833} & \underline{0.549} & \textbf{0.686} & \textbf{0.875} \\ MCAN (FM) & \textbf{0.834} & \textbf{0.551} & \underline{0.684} & \textbf{0.875} \\ \hline \end{tabular}% \caption{Performance Comparison on Ubuntu Dialogue Corpus. Best result is in boldface and second best is underlined. } \label{tab:ubuntu}% \end{table}% \subsection{Experiment 2 - Factoid Question Answering} Factoid question answering is the task of answering fact-based questions. In this task, the goal is to provide a ranked list of answers to a given question. \subsubsection{Dataset and Evaluation Metric} We utilize the QA dataset from TREC (Text Retrieval Conference). TrecQA is one of the most widely evaluated, competitive and long-standing benchmark datasets for QA. This dataset was prepared by Wang et al. \cite{Wang2007} and contains $53K$ QA pairs for training and $1100/1500$ pairs for development and testing respectively. Following recent work, we evaluate on the clean setting as noted by \cite{DBLP:conf/cikm/RaoHL16}. The evaluation metrics for this task are the MAP (mean average precision) and MRR (mean reciprocal rank) scores, which are well-known IR metrics. \subsubsection{Competitive Baselines and Implementation Details} We compare against all previously published works on this dataset.
The competitive baselines for this task are QA-LSTM / AP-CNN \cite{DBLP:journals/corr/SantosTXZ16}, LDC model \cite{wang2016sentence}, MPCNN \cite{DBLP:conf/emnlp/HeGL15}, MPCNN+NCE \cite{DBLP:conf/cikm/RaoHL16}, HyperQA \cite{Tay:2018:HRL:3159652.3159664}, BiMPM \cite{DBLP:conf/ijcai/WangHF17} and IWAN \cite{DBLP:conf/emnlp/ShenYD17}. For our model, the size of the LSTM used is $300$. The dimension of the highway prediction layer is $200$. We use the Adam optimizer with a $3 \times 10^{-4}$ learning rate. The L2 regularization is set to $10^{-6}$. A dropout rate of $0.2$ is applied to all layers except the embedding layer. We use pretrained $300d$ GloVe embeddings and fix the embeddings during training. For MCAN (FM), we use a FM model with $10$ factors. We dynamically pad or truncate all sequences to the batch-wise maximum lengths. \subsubsection{Experimental Results} Table \ref{tab:trec} reports our results on TrecQA. All MCAN variations outperform all existing state-of-the-art models. Notably, MCAN (FM) is currently the best performing model on this extensively studied dataset. MCAN (NN) comes in second, marginally outperforming the highly competitive and recent IWAN model. Finally, MCAN (SM) remains competitive with IWAN, despite naively summing over casted attention features. \begin{table}[H] \centering \begin{tabular}{lcc} \hline Model & \multicolumn{1}{c}{MAP} & \multicolumn{1}{c}{MRR} \\ \hline QA-LSTM (dos Santos et al.) & 0.728 & 0.832\\ AP-CNN (dos Santos et al.) & 0.753 &0.851 \\ LDC Model (Wang et al.) & 0.771 & 0.845 \\ MPCNN (He et al.) &0.777 & 0.836 \\ HyperQA (Tay et al.) & 0.784 & 0.865 \\ MPCNN + NCE (Rao et al.) & 0.801 & 0.877 \\ BiMPM (Wang et al.) & 0.802 & 0.899 \\ IWAN (Shen et al.) & 0.822 & 0.889 \\ \hline MCAN (SM) & 0.827& 0.880\\ MCAN (NN) & \underline{0.827}& \underline{0.890}\\ MCAN (FM) & \textbf{0.838} & \textbf{0.904} \\ \hline \end{tabular}% \caption{Performance Comparison on TrecQA (\textit{clean}) dataset.
Best result is in boldface and second best is underlined. } \label{tab:trec}% \end{table}% \subsection{Experiment 3 - Community Question Answering (cQA)} This task is concerned with ranking answers in community forums. Different from factoid QA, answers are generally subjective instead of factual. Moreover, answer lengths are also much longer. \subsubsection{Dataset and Evaluation} We use the QatarLiving dataset, a well-studied benchmark dataset from SemEval-2016 Task 3 Subtask A (cQA), which has been used extensively as a benchmark for recent state-of-the-art neural network models for cQA \cite{DBLP:conf/aaai/ZhangLSW17,1711.07656}. This is a real-world dataset obtained from Qatar Living Forums and comprises $36K$ training pairs, $2.4K$ development pairs and $3.6K$ testing pairs. In this dataset, there are ten answers in each question `thread', which are marked as `Good', `Potentially Useful' or `Bad'. Following \cite{DBLP:conf/aaai/ZhangLSW17}, `Good' is regarded as positive and anything else is regarded as negative. We evaluate on two metrics, namely the Precision@1 (P@1) and Mean Average Precision (MAP) metrics. \subsubsection{Competitive Baselines and Implementation Details} The key competitors on this dataset are the CNN-based ARC-I/II architecture by Hu et al. \cite{Hu2014a}, the Attentive Pooling CNN \cite{DBLP:journals/corr/SantosTXZ16}, Kelp \cite{DBLP:conf/semeval/FiliceCMB16}, a feature-engineering-based SVM method, ConvKN \cite{DBLP:conf/semeval/Barron-CedenoMJ16}, a combination of convolutional tree kernels with CNN, and finally AI-CNN (Attentive Interactive CNN) \cite{DBLP:conf/aaai/ZhangLSW17}, a tensor-based attentive pooling neural model. We also compare with the \textit{Cross Temporal Recurrent Networks} (CTRN) \cite{1711.07656}, a recently proposed model for ranking QA pairs which has achieved very competitive performance on this dataset.
Following \cite{1711.07656}, we initialize MCAN with domain-specific 200-dimensional word embeddings trained on the unannotated QatarLiving corpus. Word embeddings are not updated during training. The size of the highway projection layer, LSTM encoder and highway prediction layer are all set to $200$. The model is optimized with the Adam optimizer with a learning rate of $3 \times 10^{-4}$. \subsubsection{Experimental Results} \begin{table}[htbp] \centering \begin{tabular}{lcc} \hline Model & \multicolumn{1}{c}{P@1} & \multicolumn{1}{c}{MAP} \\ \hline ARC-I (Hu et al.) & 0.741 & 0.771 \\ ARC-II (Hu et al.) & 0.753 & 0.780 \\ AP-CNN (dos Santos et al.) & 0.755 & 0.771 \\ Kelp (Filice et al.) & 0.751 & 0.792 \\ ConvKN (Barron Cedeno et al.) & 0.755 & 0.777 \\ AI-CNN (Zhang et al.) & 0.763 & 0.792 \\ CTRN (Tay et al.) & 0.788 & \underline{0.794} \\ \hline MCAN (SM) & \underline{0.803}& 0.787 \\ MCAN (NN) & 0.802& 0.784 \\ MCAN (FM) & \textbf{0.804} & \textbf{0.803} \\ \hline \end{tabular}% \caption{Performance comparison on QatarLiving dataset for community question answering. Best result is in boldface and second best is underlined.} \label{tab:cqa_ql}% \end{table}% Table \ref{tab:cqa_ql} reports the performance comparison on the QatarLiving dataset. Our best performing MCAN model achieves state-of-the-art performance on this dataset. The performance improvement over recent, competitive neural network baselines is significant. Notably, the improvement of MCAN (FM) over AI-CNN is $4.1\%$ on the $P@1$ metric and $1.1\%$ in terms of MAP. MCAN (FM) also achieves competitive results relative to the CTRN model. The performance of MCAN (NN) and MCAN (SM) is lower than that of MCAN (FM) but still remains competitive on this benchmark. \subsection{Experiment 4 - Tweet Reply Prediction} This experiment is concerned with predicting an appropriate reply given a tweet.
\subsubsection{Dataset and Evaluation Metrics} We utilize a customer support dataset obtained from Kaggle\footnote{\url{https://www.kaggle.com/soaxelbrooke/customer-support-on-twitter}}. This dataset contains pairs of tweets addressed to well-known brands and the corresponding replies. For each Tweet-Reply pair, we randomly selected \textit{four} tweets as negative samples that originate from the same brand. The dataset is split into train, development and test sets with an $8:1:1$ ratio. The evaluation metrics for this task are MRR (mean reciprocal rank) and Precision@1 (accuracy). Unlike the previous datasets, there are no published works on this one. As such, we implement the baselines ourselves. We implement standard baselines such as (1) CBOW (sum embeddings) passed into a 2-layer MLP with ReLU activations, (2) standard vanilla LSTM and CNN models and (3) BiLSTM and CNN with standard Co-Attention (AP-BiLSTM, AP-CNN) following \cite{DBLP:journals/corr/SantosTXZ16}. All models minimize the binary cross entropy loss (pointwise) since we found performance to be much better than using a ranking loss. We also include the recent AI-CNN (Attentive Interactive CNN), which uses multi-dimensional co-attention. We set all LSTM dimensions to $d=100$ and the number of CNN filters to $100$. The CNN filter width is set to $3$. We train all models with the Adam optimizer with a $3 \times 10^{-4}$ learning rate. Word embeddings are initialized with GloVe and fixed during training. A dropout of $0.2$ is applied to all layers except the word embedding layer. \subsubsection{Experimental Results} Table \ref{tab:tweets} reports our results on the Tweets dataset. MCAN (FM) achieves the top performance by a significant margin. The performance of MCAN (NN) falls short of MCAN (FM), but is still highly competitive. Our best MCAN model outperforms AP-BiLSTM by $3.4\%$ in terms of MRR and $5.3\%$ in terms of P@1. The performance improvement over AI-CNN is even greater, i.e., $8.4\%$ in terms of MRR and $12.5\%$ in terms of P@1.
The strongest baseline is AP-BiLSTM, which significantly outperforms AI-CNN and AP-CNN. \begin{table}[htbp] \centering \begin{tabular}{lcc} \hline Model & \multicolumn{1}{c}{MRR} & \multicolumn{1}{c}{P@1} \\ \hline CBOW + MLP & 0.658 & 0.442 \\ LSTM & 0.652 & 0.431 \\ CNN & 0.657 & 0.441 \\ AP-CNN (dos Santos et al.) & 0.643 & 0.426 \\ AI-CNN (Zhang et al.) & 0.675&0.465 \\ AP-BiLSTM (dos Santos et al.) & 0.725 & 0.540 \\ \hline MCAN (SM) & 0.722& 0.548\\ MCAN (NN) & \underline{0.747}& \underline{0.585} \\ MCAN (FM) & \textbf{0.759} & \textbf{0.593} \\ \hline \end{tabular}% \caption{Performance comparison on Reply Prediction on Tweets dataset. Best performance is in boldface and second best is underlined. } \label{tab:tweets}% \end{table}% \subsection{Ablation Analysis} This section aims to demonstrate the relative effectiveness of different components of our proposed MCAN model. Table \ref{ablation} reports the results on the validation set of the TrecQA dataset. We report the scores of seven different configurations. In (1), we replace all highway layers with regular feed-forward neural networks. In (2), we remove the LSTM encoder before the prediction layer. In (3), we remove the entire multi-cast attention mechanism. This is equivalent to removing the twelve attention features. In (4-7), we remove different attention casts, aiming to show that removing any one of them results in a performance drop. From our ablation analysis, we can easily observe the crucial components of our model. Firstly, we observe that removing MCA entirely significantly decreases the performance (ablation 3). In this case, validation MAP drops from 0.866 to 0.670. As such, our casted attention features contribute substantially to the performance of the model. Secondly, we also observe that the LSTM encoder is necessary. This is intuitive because the goal of MCAN is to provide features as hints for a compositional encoder.
As such, removing the LSTM encoder leaves our attention hints unused. While the upper prediction layers might still learn from these features, this is sub-optimal compared to using an LSTM encoder. Thirdly, we observe that removing Max or Mean Co-Attention decreases performance marginally. However, removing the Alignment Co-Attention decreases the performance significantly. As such, it is clear that the alignment-based attention is most important for our model. However, Max, Mean and Intra attention all contribute to the performance of MCAN. Hence, using multiple attention casts can improve performance. Finally, we note that the highway layers also contribute slightly to performance. \begin{table}[H] \begin{tabular}{lcc} \hline Setting & MAP & MRR \\ \hline Original & \textbf{0.866} & \textbf{0.922} \\ (1) Remove Highway & 0.825 & 0.863 \\ (2) Remove LSTM & 0.765 & 0.809 \\ (3) Remove MCA & 0.670 & 0.749 \\ (4) Remove Intra & 0.834 & 0.910 \\ (5) Remove Align & 0.682 & 0.726 \\ (6) Remove Mean & 0.858 & 0.906 \\ (7) Remove Max & 0.862 & 0.915 \\ \hline \end{tabular} \caption{Ablation analysis (validation set) on TrecQA dataset. } \label{ablation} \end{table} \subsection{In-depth Model Analysis} In this section, we aim to provide insights pertaining to the inner workings of our model. More specifically, we list several observations based on visual inspection of the casted attention features. We trained an MCAN model with FM compression and extracted the word-level casted attention features. The features are referred to as $f_i$ where $i \in [1,12]$. $f_1, f_2, f_3$ are generated from alignment-pooling. $f_4, f_5, f_6$ and $f_7,f_8,f_9$ are generated from max- and mean-pooled co-attention respectively. $f_{10},f_{11},f_{12}$ are generated from intra-attention. \subsubsection{Observation 1: Features learn a Neural Adaptation of Word Overlap} Figure \ref{intra_pos} and Figure \ref{intra_neg} show a positive and negative QA pair from the TrecQA test set.
Firstly, we analyze\footnote{This is done primarily for clear visualisation, lest the diagram becomes too cluttered.} the first three features $f_1$, $f_2$ and $f_3$. These features correspond to the alignment attention and the multiply, subtract and concat compositions respectively. From the figures, we observe that $f_1$ spikes (in the negative direction) when there is a word overlap across sentences, e.g., `\textit{teapot}' in Figure \ref{intra_neg} and \textit{`teapot dome scandal'} in Figure \ref{intra_pos}. Hence, $f_1$ (dark blue line) behaves as a neural adaptation of the conventional overlap feature. Moreover, contrary to traditional binary overlap features, we notice that the value of the neural word overlap feature is dependent on the word itself, i.e., `\textit{teapot}' and `\textit{dome}' have different values. As such, it encodes more information than the traditional binary feature. \begin{figure}[H] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{./images/mcan_a2.pdf} \caption{Features $f_1, f_2, f_3$ for question.} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{./images/mcan_b2.pdf} \caption{Features $f_1, f_2, f_3$ for answer.} \end{subfigure} \caption{Visualization of Casted Attention Features ($f_1,f_2,f_3$) on a \textit{positive} test sample from TrecQA. } \label{intra_pos} \end{figure} \subsubsection{Observation 2: Features React to Evidence and Anti-Evidence} While $f_1$ is primarily aimed at modeling overlap, we observe that $f_3$ tries to gather supporting evidence for the given QA pair. In Figure \ref{intra_pos}, the words \textit{`year'} and \textit{`1923'} have spiked. It also tries to extract key verbs such as \textit{`take place'} (question) and \textit{`began'} (answer), which are related verbs generally used to describe events.
Finally, we observe that $f_2$ (subtractive composition) seems to be searching for anti-evidence, i.e., contradictory or irrelevant information. However, this appears to be more subtle compared to $f_1$ and $f_3$. In Figure \ref{intra_neg}, we note that the words \textit{`died'} and \textit{`attack'} (answer) have spiked. We find this example particularly interesting because the correct answer `\textit{1923}' is in fact found in the answer. However, the pair is \textbf{wrong} because the text sample refers to the `\textit{death of Harding}' and does not answer the question correctly. In the negative answer, we found that the word \textit{`died'} has the highest $f_2$ value. As such, we believe that $f_2$ is actively finding anti-evidence for why this QA pair should be negative. Additionally, irrelevant words such as \textit{`attack'} and \textit{`god'} also receive nudges from $f_2$. Finally, it is worth noting that MCAN classifies these two samples correctly while a standard Bidirectional LSTM does not. \begin{figure} \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{./images/mcan_a1.pdf} \caption{Features $f_1, f_2, f_3$ for question.} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{./images/mcan_b1.pdf} \caption{Features $f_1, f_2, f_3$ for answer.} \end{subfigure} \caption{Visualization of Casted Attention Features ($f_1,f_2,f_3$) on a \textit{negative} test sample from TrecQA.
} \label{intra_neg} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{./images/mcan_max_ans.pdf} \caption{Features generated from \textit{max}-pool Co-Attention.} \end{subfigure}% \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[width=1.0\textwidth]{./images/mcan_mean_ans.pdf} \caption{Features generated from \textit{mean}-pool Co-Attention.} \end{subfigure} \caption{Differences between Max and Mean-pooled Casted Attention Features on answer text from TrecQA dataset. Diverse features are learned by different attention casts.} \label{diversity} \end{figure} \subsubsection{Observation 3: Diversity of Multiple Casts} One of the key motivators for a multi-casted attention is that each attention cast produces features from a different view of the sentence pair. While we have shown in our ablation study that all attention casts contribute to the overall performance, this section qualitatively analyzes the output features. Figure \ref{diversity} shows the casted attention features (answer text) for max-pooled attention ($f_4, f_5, f_6$) and mean-pooled\footnote{The values of $f_7$ are not constant. They only appear to be because their max-min range is much smaller than that of $f_8$ and $f_9$.} attention ($f_7, f_8, f_9$). Note that the corresponding question is the same as in Figure \ref{intra_pos} and Figure \ref{intra_neg}, which allows a direct comparison with the alignment-based attention. We observe that both attention casts produce extremely diverse features. More specifically, not only are the spikes at different words, but the overall sequential patterns are very different. We also note that the feature patterns differ considerably from the alignment-based attention (Figure \ref{intra_pos}). While we were aiming to capture more diverse patterns, we also acknowledge that these features are much less interpretable than $f_1,f_2$ and $f_3$.
Even so, some patterns can still be interpreted, e.g., the value of $f_5$ is high for important words and low (negative) whenever the words are generic and unimportant such as \textit{`to', `the', `a'}. Nevertheless, the main objective here is to ensure that these features are not learning identical patterns. \section{Related Work} Learning to rank short document pairs is a long-standing problem in IR research. The dominant state-of-the-art approaches for learning-to-rank today are mostly neural network based. Neural network models, such as convolutional neural networks (CNN) \cite{Hu2014a,shen2014latent,DBLP:conf/sigir/TayPLH17,DBLP:conf/acl/WangN15}, recurrent neural networks (RNN) \cite{Mueller2016,DBLP:conf/emnlp/ShenYD17,wu2016knowledge} or recursive neural networks \cite{wan2016match}, are used for learning document representations. A parameterized function such as multi-layered perceptrons \cite{Severyn2015}, tensor layers \cite{DBLP:conf/ijcai/QiuH15} or holographic layers \cite{DBLP:conf/sigir/TayPLH17} then learns a similarity score between document pairs. Recent advances in neural ranking models go beyond independent representation learning. There are several main architectural paradigms that invoke interactions between document pairs, which intuitively improve performance by matching at a deeper and finer granularity. The first can be thought of as extracting features from a constructed word-by-word similarity matrix \cite{DBLP:conf/aaai/WanLGXPC16,pang2016text}. The second invokes matching across multiple views and perspectives \cite{DBLP:conf/emnlp/HeGL15,DBLP:conf/ijcai/WangHF17,1805.08159}. The third method involves learning pairwise attention weights (i.e., co-attention). In these models, the similarity matrix is used to learn attention weights, learning to attend to each document based on its partner.
Attentive Pooling Networks \cite{DBLP:journals/corr/SantosTXZ16} and Attentive Interactive Networks \cite{DBLP:conf/aaai/ZhangLSW17} are models that are grounded in this paradigm, utilizing \textit{extractive max-pooling} to learn the relative importance of a word based on its maximum importance to all words in the other document. The Compare-Aggregate model \cite{DBLP:journals/corr/WangJ16b} used a co-attention model for matching and then a convolutional feature extractor for aggregating features. Notably, other related problem domains such as machine comprehension \cite{DBLP:journals/corr/XiongZS16,seo2016bidirectional,wang2016machine} and review-based recommendation \cite{tay2018multi} also extensively make use of co-attention mechanisms. Learning sequence alignments via attention has also been popularized by models in related problem domains such as natural language inference \cite{DBLP:conf/emnlp/ParikhT0U16,DBLP:conf/acl/ChenZLWJI17,tay2017compare}. Notably, MCAN can be viewed as an extension of the CAFE model proposed in \cite{tay2017compare} for natural language inference. However, the key differences of this work are that (1) the propagated features in MCAN are \textit{multi-casted} (e.g., multiple co-attention variants are used consecutively) and (2) MCAN is extensively evaluated on a different and diverse set of problem domains and tasks. There are several other notable and novel classes of model architectures which have been proposed for learning to rank. Examples include knowledge-enhanced models \cite{wu2016knowledge,xiong2017word}, lexical decomposition \cite{wang2016sentence}, fused temporal gates \cite{1711.07656} and coupled LSTMs \cite{DBLP:conf/emnlp/LiuQZCH16}. Novel metric learning techniques such as hyperbolic spaces have also been proposed \cite{Tay:2018:HRL:3159652.3159664}. \cite{zhang2018end} proposed a quantum-like model for matching QA pairs.
Our work is also closely related to the problem domain of ranking for web search, in which a myriad of neural ranking models were also proposed \cite{shen2014latent,shen2014learning,mitra2017learning,dehghani2017neural,hui2017pacrr,huang2013learning,hui2017re,xiong2017neural}. Ranking models for multi-turn response selection on the Ubuntu corpus were also proposed in \cite{wu2016sequential}. \section{Conclusion} We proposed a new state-of-the-art neural model for a myriad of retrieval and matching tasks in the domain of question answering and conversation modeling. Our proposed model is based on a re-imagination of the standard and widely applied neural attention. For the first time, we utilize attention not as a pooling operator but as a form of feature augmentation. We propose three methods to compress attentional matrices into scalar features. Via visualisation and qualitative analysis, we show that these casted features can be interpreted and understood. Our proposed model achieves highly competitive results on four benchmark tasks and datasets. The achievements of our proposed model are as follows: (1) our model obtains the highest performing result on the well-studied TrecQA dataset, (2) our model achieves a $9\%$ improvement on the Ubuntu dialogue corpus relative to the best existing model, and (3) our model achieves strong results on Community Question Answering and Tweet Reply Prediction. \bibliographystyle{ACM-Reference-Format} \balance
\section{Introduction} \label{sec:Introduction} Eigenvalue computations are at the core of simulations in various application areas, including quantum physics and electronic structure computations. Being able to best utilize the capabilities of current and emerging high-end computing systems is essential for further improving such simulations with respect to space/time resolution or by including additional effects in the models. Given these needs, the ELPA-AEO and ESSEX-II projects contribute to the development and implementation of efficient highly parallel methods for eigenvalue problems, in different contexts. Both projects are aimed at adding new features (concerning, e.g., performance and resilience) to previously developed methods and at providing additional functionality with new methods. Building on the results of the first ESSEX funding phase \cite{2016-KreutzerThiesEtAl-PerformanceEngineeringAnd-LNCSE113:317-338, 2016-ThiesGalgonEtAl-TowardsAnExascale-LNCSE113:295-316}, ESSEX-II again focuses on iterative methods for very large eigenproblems arising, e.g., in quantum physics. ELPA-AEO's main application area is electronic structure computation, and for these moderately sized eigenproblems direct methods are often superior. Such methods are available in the widely used ELPA library \cite{2014-MarekBlumEtAl-TheELPALibrary-JPhysCondensMatter:213201}, which had originated in an earlier project \cite{2011-AuckenthalerBlumEtAl-ParallelSolutionOf-ParallelComput:37:783-794} and is being improved further and extended with ELPA-AEO. In Sections~\ref{sec:ELPA-AEO} and \ref{sec:ESSEX} we briefly report on the current state and on recent achievements in the two projects, with a focus on aspects that may be of particular interest to prospective users of the software or the underlying methods. In Section~\ref{sec:MixedPrecision} we turn to computations involving different precisions.
Looking at three examples from the two projects, we describe how lower or higher precision is used to reduce the computing time. \section{The ELPA-AEO project} \label{sec:ELPA-AEO} In the ELPA-AEO project, chemists, mathematicians and computer scientists from the Max Planck Computing and Data Facility in Garching, the Fritz Haber Institute of the Max Planck Society in Berlin, the Technical University of Munich, and the University of Wuppertal collaborate to provide highly scalable methods for solving \emph{moderately-sized} ($n \lesssim 10^{6}$) Hermitian eigenvalue problems. Such problems arise, e.g., in electronic structure computations, and during the earlier ELPA project, efficient direct solvers for them had been developed and implemented in the ELPA library \cite{2014-MarekBlumEtAl-TheELPALibrary-JPhysCondensMatter:213201}. This library is widely used (see \url{https://elpa.mpcdf.mpg.de/about} for a description and pointers to the software), and it has been maintained and further improved continually since the first release in 2011. The ELPA library contains optimized routines for the steps in the direct solution of generalized Hermitian positive definite eigenproblems $A X = B X \Lambda$, that is, (i) the Cholesky decomposition $B = U^{H} U$, (ii) the transformation $A \mapsto \widetilde A = U^{-H} \! A U^{-1}$ to a standard eigenproblem $\widetilde A \widetilde X = \widetilde X \Lambda$, (iii) the reduction of $\widetilde A$ to tridiagonal form, either in one step or via an intermediate banded matrix, (iv) a divide-and-conquer tridiagonal eigensolver, and (v) back-transformations for the eigenvectors corresponding to steps (iii) and (ii). A typical application scenario from electronic structure computations (``SCF cycle'') requires a sequence of a few dozen eigenproblems $A^{(k)} X = B X \Lambda$ to be solved, where the matrix $B$ remains unchanged; see Section~\ref{sec:MixedPrecisionInELPAAEO} for more details.
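For illustration, the five steps (i)--(v) can be reproduced serially with dense NumPy routines. The following sketch is our own toy version with arbitrary random test matrices, and with $U^{-1}$ formed explicitly; it only illustrates the mathematics, not the library's distributed, blocked implementation.

```python
# Serial toy version of ELPA's five solution steps for A X = B X Lambda
# (dense NumPy; illustration of the mathematics only, not ELPA code).
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                       # Hermitian (here: real symmetric) A
G = rng.standard_normal((n, n))
B = G @ G.T + n * np.eye(n)             # Hermitian positive definite B

# (i) Cholesky decomposition B = U^H U (NumPy returns the lower factor L = U^H)
U = np.linalg.cholesky(B).T

# (ii) transformation to a standard problem, A~ = U^{-H} A U^{-1},
#      with U^{-1} formed explicitly
Uinv = np.linalg.inv(U)
Atilde = Uinv.T @ A @ Uinv

# (iii)+(iv) tridiagonalization and the tridiagonal eigensolver are hidden
# inside LAPACK's symmetric eigensolver here
lam, Xtilde = np.linalg.eigh(Atilde)

# (v) back-transformation of the eigenvectors, X = U^{-1} X~
X = Uinv @ Xtilde

residual = np.linalg.norm(A @ X - (B @ X) * lam)
```

The residual $\|AX - BX\Lambda\|$ is at roundoff level, confirming that the transformed standard problem yields the eigenpairs of the generalized one.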
ELPA is particularly efficient in this situation by explicitly building $U^{-1}$ for steps (ii) and (v). ELPA-AEO is aimed at further improving the performance of computations that are already covered by ELPA routines and at providing new functionality. In the remainder of this section we highlight a few recent achievements that may be of particular interest to current and prospective users of the library. An alternative approach for the transformation (ii) has been developed \cite{2018-ManinLang-ReductionOfGeneralized-InPreparation}, which is based on Cannon's algorithm \cite{1969-Cannon-ACellularComputer-PhD}. The transformation is done with two matrix products: \emph{multiplication~1} computes the upper triangle $M_{u}$ of $M := A \cdot U^{-1}$, then $M_{u}$ is transposed to obtain the lower triangle $M_{l}$ of $M^{H} = U^{-H} A$, and finally \emph{multiplication~2} computes the lower triangle of $M_{l} \cdot U^{-1} = \widetilde A$. Both routines assume that one dimension of the process grid is a multiple of the other. They make use of the triangular structure of their arguments to save on computation and communication. The timing data in Figure~\ref{fig:Cannon} show that the new implementations are highly competitive. \begin{figure} \centerline{\includegraphics[width=0.45\textwidth]{Multiplication1} \quad \includegraphics[width=0.45\textwidth]{Multiplication2}} \caption{Timings for the two multiplications in the transformation $A \mapsto \widetilde A$ with routines from ScaLAPACK, current ELPA routines, and the new implementations. The runs were made on the HYDRA system at MPCDF in Garching with $20$ processes per node (two $10$-core Intel Ivy bridge processors running at 2.8 GHz) and double precision real matrices of size $n = 30,000$. Process grids had aspect ratios $1:1$ or $1:2$; e.g., a $10 \times 20$ grid was set up for $p = 200$.
With $p = 3200$, the new codes run at $\approx 40$\% of the nodes' \emph{peak} performance.} \label{fig:Cannon} \end{figure} Recent ELPA releases provide extended facilities for performance tuning. The computational routines have an argument that can be used to guide the routines in selecting algorithmic paths (if there are different ways to proceed) and algorithmic parameters (such as block sizes) and to receive performance data from their execution. An easy-to-use autotuning facility allows setting such parameters in an automated way by screening the parameter space; see the code fragment in Figure~\ref{fig:Autotune} for an example. Note that the parameter set obtained with the coarse probing induced by ELPA\_AUTOTUNE\_FAST might be improved later on. \begin{figure} \centerline{\ttfamily \begin{tabular}{l} autotune\_handle = elpa\_autotune\_setup( handle, ELPA\_AUTOTUNE\_FAST,\\ \In \In \In \In \In \In \In \In \In \In \In ELPA\_AUTOTUNE\_DOMAIN\_REAL, \&error ) ;\\ for ( i = 0 ; i $<$ 20 ; i++ ) \{\\[0.75ex] \In unfinished = elpa\_autotune\_step( handle, autotune\_handle ) ;\\ \In if ( unfinished == 0 )\\ \In \In printf( "ELPA autotuning finished in the \%d th SCF step$\backslash$n", i ) ;\\[1.5ex] \In /* Solve EV problem */\\ \In elpa\_eigenvectors( handle, a, ev, z, \&error ) ;\\ \}\\ elpa\_autotune\_best\_set( handle, autotune\_handle ) ;\\ elpa\_autotune\_deallocate( autotune\_handle ) ;\\ \end{tabular} } \caption{Using ELPA's autotuning facility to adjust the algorithmic parameters during the solution of (at most) twenty eigenvalue problems in an SCF cycle, and saving them for later use.} \label{fig:Autotune} \end{figure} In earlier releases, ELPA could be configured for single or double precision computations, but due to the naming conventions only one of the two versions could be linked to a calling program.
Now, both precisions are accessible from one library, and mixing them may speed up some computations; see Section~\ref{sec:MixedPrecisionInELPAAEO} for an example. New functionality for addressing banded generalized eigenvalue problems will be added. An efficient algorithm for the transformation to a banded standard eigenvalue problem has been developed \cite{2018-Lang-EfficientReductionOf-PREPRINT}, and its parallelization is currently under way. This will complement the functions for solving banded standard eigenvalue problems that are already included in ELPA. \section{The ESSEX-II project} \label{sec:ESSEX} The ESSEX-II project is a collaborative effort of physicists, mathematicians and computer scientists from the Universities of Erlangen-Nuremberg, Greifs\-wald, Tokyo, Tsukuba, and Wuppertal and from the German Aerospace Center in Cologne. It is aimed at developing exascale-enabled solvers for selected types of \emph{very large} ($n \gg 10^{6}$) eigenproblems arising, e.g., in quantum physics; see the project's homepage at \url{https://blogs.fau.de/essex/} for more information, including pointers to publications and software. ESSEX-II builds on results from the first ESSEX funding phase, in particular the Exascale enabled Sparse Solver Repository (ESSR), which provides a (block) Jacobi--Davidson method, the BEAST subspace iteration-based framework, and the Kernel Polynomial Method (KPM) and Chebyshev time propagation for determining a few extremal eigenvalues, a bunch of interior eigenvalues, and information about the whole spectrum and dynamic properties, respectively.
Based on the versatile SELL-$C$-$\sigma$ format for sparse matrices \cite{2014-KreutzerHagerEtAl-AUnifiedSparseMatrix-SIAMJSciComput:36:C401-C423}, the \MyEmph{G}eneral, \MyEmph{H}ybrid, and \MyEmph{O}ptimized \MyEmph{S}parse \MyEmph{T}oolkit (GHOST) \cite{2016-KreutzerThiesEtAl-GHOSTBuildingBlocks-IntJParallelProg:online} contains optimized kernels for often-used operations such as sparse matrix times (multiple) vector products (optionally fused with other computations) and operations with block vectors, as well as a task manager, for CPUs, Intel Xeon Phi MICs and Nvidia GPUs and combinations of these. The \MyEmph{P}ipelined \MyEmph{H}ybrid-parallel \MyEmph{I}terative \MyEmph{S}olver \MyEmph{T}oolkit (PHIST) \cite{2016-ThiesGalgonEtAl-TowardsAnExascale-LNCSE113:295-316} provides the eigensolver algorithms with interfaces to GHOST and other ``computational cores,'' together with higher-level functionality, such as orthogonalization and linear solvers. With ESSEX-II, the interoperability of these ESSR components will be further improved to yield a mature library, which will also have an extended range of applicability, including non-Hermitian and nonlinear eigenproblems. Again we highlight only a few recent achievements. The \MyEmph{Sca}lable \MyEmph{Ma}trix \MyEmph{C}ollection (ScaMaC) provides routines that simplify the generation of test matrices. The matrices can be chosen from several physical models, e.g., boson or fermion chains, and parameters allow adjusting sizes and physically motivated properties of the matrices. With $32$ processes, a distributed size $2.36$G matrix for a Hubbard model with $18$ sites and $9$ fermions can be set up in less than $10$ minutes. The block Jacobi--Davidson solver has been extended to non-Hermitian and generalized eigenproblems. 
It can be run with arbitrary preconditioners, e.g., the AMG preconditioner ML \cite{2018-SongWubsEtAl-NumericalBifurcationAnalysis-CommunNonlinearSciNumerSimul:60:145-164}, and employs a robust and fast block orthogonalization scheme that can make use of higher-precision computations; see Section~\ref{sec:HigherPrecisionForOrthogonalization} for more details. The BEAST framework has been extended to seamlessly integrate three different approaches for spectral filtering in subspace iteration methods (polynomial filters, rational filters based on plain contour integration, and a moment-based technique) and to make use of their respective advantages with adaptive strategies. The BEAST framework also benefits from using different precisions; see Section~\ref{sec:MixedPrecisionInBeast}. At various places, measures for improving resilience have been included, based on verifying known properties of computed quantities and on checksums, combined with checkpoint--restart. To simplify incorporating the latter into numerical algorithms, the \MyEmph{C}heckpoint--\MyEmph{R}estart and \MyEmph{A}utomatic \MyEmph{F}ault \MyEmph{T}olerance (CRAFT) library has been developed \cite{2017-ShahzadThiesEtAl-CRAFTALibrary-PREPRINT}. Figure~\ref{fig:UsingCRAFT} illustrates its use within the BEAST framework. CRAFT can handle the GHOST and PHIST data types, as well as user-defined types. Checkpoints may be nested to accommodate, e.g., low-frequency high-volume together with high-frequency low-volume checkpointing in multilevel numerical algorithms, and the checkpoints can be written asynchronously to reduce overhead. By relying on the Scalable Checkpoint/Restart (SCR) and User-Level Failure Mitigation (ULFM-) MPI libraries, CRAFT also provides support for fast node-level checkpointing and for handling node failures. 
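The add/restart-if-needed/write cycle that CRAFT encapsulates can be mimicked in a few lines of plain Python. The class and file format below are our own minimal stand-ins for illustration only, not CRAFT's actual API.

```python
# Plain-Python mimic of the checkpoint--restart cycle that CRAFT
# encapsulates (register state, restart from the last checkpoint if one
# exists, rewrite the checkpoint every iteration). Illustrative stand-in,
# not the CRAFT API.
import os
import pickle
import tempfile

class Checkpoint:
    def __init__(self, path):
        self.path = path
        self.state = {}                    # name -> mutable dict holding the data

    def add(self, name, container):
        self.state[name] = container

    def restart_if_needed(self):
        if os.path.exists(self.path):      # a checkpoint survived an earlier run
            with open(self.path, "rb") as f:
                saved = pickle.load(f)
            for name, container in self.state.items():
                container.clear()
                container.update(saved[name])

    def write(self):
        with open(self.path, "wb") as f:
            pickle.dump({name: dict(c) for name, c in self.state.items()}, f)

path = os.path.join(tempfile.mkdtemp(), "beast.ckpt")

ctl = {"it": 0}                            # control variables of the main loop
cp = Checkpoint(path)
cp.add("control_variables", ctl)
cp.restart_if_needed()
while ctl["it"] < 5:                       # the main iteration loop
    ctl["it"] += 1                         # ... compute projector, etc. (omitted)
    cp.write()                             # checkpoint every iteration

# a fresh run (e.g. after a crash) picks up where the previous one stopped
ctl2 = {"it": 0}
cp2 = Checkpoint(path)
cp2.add("control_variables", ctl2)
cp2.restart_if_needed()
```

The second `Checkpoint` instance restores the saved iteration counter, so the restarted loop would resume rather than recompute from scratch.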
\begin{figure} \centerline{\ttfamily \begin{tabular}{l} // BEAST init (omitted)\\[1ex] Checkpoint beast\_checkpoint( "BEAST", comm ) ;\\ beast\_checkpoint->add( "eigenvectors", \&X ) ;\\ beast\_checkpoint->add( "eigenvalues", \&e ) ;\\ ... // Some more\\ beast\_checkpoint->add( "control\_variables", \&state ) ;\\ beast\_checkpoint->commit() ;\\[1ex] beast\_checkpoint->restartIfNeeded( NULL ) ;\\[1ex] // BEAST iterations\\ while ( !state.abort\_condition ) \{\\ \In // Compute projector, etc. (omitted)\\ \In ...\\ \In beast\_checkpoint->update() ;\\ \In beast\_checkpoint->write() ;\\ \} \end{tabular}} \caption{Using the CRAFT library to checkpoint the current eigenvector approximations \texttt{X} and other quantities in every iteration of the main loop.} \label{fig:UsingCRAFT} \end{figure} \section{Benefits of using a different precision} \label{sec:MixedPrecision} Doing computations in lower precision is attractive from a performance point of view because it reduces memory traffic in memory-bound code and, in compute-bound situations, allows more operations per second, due to vector instructions manipulating more elements at a time. However, the desired accuracy often cannot be reached in single precision and then only a part of the computations can be done in lower precision, or a correction is needed; cf., e.g., \cite{baboulin2009accelerating} for the latter. In Subsection~\ref{sec:MixedPrecisionInBeast} we describe an approach for reducing overall runtimes of the BEAST framework by using lower-precision computations for early iterations. Higher precision, on the other hand, is often a means to improve robustness. It is less well known that higher precision can also be beneficial w.r.t.\ runtime. This is demonstrated in Subsection~\ref{sec:HigherPrecisionForOrthogonalization} in the context of orthogonalization.
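The correction-based use of lower precision mentioned above can be sketched as classic mixed-precision iterative refinement. The code below is an illustrative NumPy toy of our own (re-solving with the single-precision matrix stands in for reusing its LU factors), not code from either project.

```python
# Toy mixed-precision iterative refinement: a single-precision solve
# supplies the cheap "factorization", and double-precision residual
# corrections restore full accuracy. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

A32 = A.astype(np.float32)                        # matrix kept in single precision
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

for _ in range(3):                                # refinement sweeps
    r = b - A @ x                                 # residual in double precision
    d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    x += d                                        # correction in double precision

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The refined solution reaches an accuracy far beyond single precision even though every (simulated) factorization works with the float32 matrix.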
In Section~\ref{sec:MixedPrecisionInELPAAEO} we come back to using lower precision, from the perspective of an important application area: self-consistent field (SCF) cycles in electronic structure computations. Each iteration of such a cycle requires the solution of a generalized eigenproblem (GEP). After briefly introducing the context, we discuss how ELPA-AEO's features can be used to steer the precision from the application code, targeting either the entire solution of a GEP or particular steps within its solution. \subsection{Changing precision in subspace iteration-based eigensolvers} \label{sec:MixedPrecisionInBeast} \input{secBEAST/secBEAST.tex} \subsection{Using higher precision for robust and fast orthogonalization} \label{sec:HigherPrecisionForOrthogonalization} \input{secJD/secJD.tex} \subsection{Mixed precision in SCF cycles with ELPA-AEO} \label{sec:MixedPrecisionInELPAAEO} \input{secSCF/secSCF.tex} \section{Concluding remarks} The ESSEX-II and ELPA-AEO projects are collaborative research efforts targeted at developing iterative solvers for very large scale eigenproblems (dimensions $\gg 1\mathrm{M}$) and direct solvers for smaller-scale eigenproblems (dimensions up to $1\mathrm{M}$), and at providing software for these methods. After briefly highlighting some recent progress in the two projects w.r.t.\ auto-tuning facilities, resilience, and added functionality, we have discussed several ways of using mixed precision for reducing the runtime. In iterative schemes such as BEAST, single precision may be used in early iterations. This need not compromise the final accuracy if we switch to double precision at the right time. Even working in extended precision may speed up the execution if the extra precision leads to fewer iterations and is not too expensive, as seen with an iterative orthogonalization scheme for the block Jacobi--Davidson method.
Additional, finer-grained control of the working precision, addressing just particular steps of the computation, can also be beneficial; this has been demonstrated with electronic structure computations, where the precision for each step was chosen directly from the calling code. Our results indicate that users should be able to adapt the working precision, as well as algorithmic parameters, to their particular needs, complemented by heuristics for automatic selection. Work towards these goals will be continued in both projects. \bibliographystyle{spmpsci}
{ "timestamp": "2018-06-05T02:16:27", "yymm": "1806", "arxiv_id": "1806.01036", "language": "en", "url": "https://arxiv.org/abs/1806.01036" }
\section*{Introduction} The Yang--Baxter equation was first introduced in the field of statistical mechanics. It depends on the idea that in some scattering situations particles may preserve their momentum while changing their quantum internal states. The equation states that a matrix $R$ satisfies \[ (R\otimes\id)(\id\otimes R)(R\otimes\id) =(\id\otimes R)(R\otimes\id)(\id\otimes R). \] In one-dimensional quantum systems, $R$ is the scattering matrix, and if it satisfies the Yang--Baxter equation then the system is integrable. The Yang--Baxter equation also appears in topology and algebra, mainly because of its connections with braid groups. It takes its name from independent work of Yang~\cite{MR0261870} and Baxter~\cite{MR0290733}. In~\cite{MR1183474} Drinfeld observed that the Yang--Baxter equation also makes sense even if $R$ is not a linear operator $V\otimes V\to V\otimes V$ but a map $X\times X\to X\times X$, where $X$ is a set. In this context, non-degenerate solutions are interesting. Recall that a set-theoretic solution of the Yang--Baxter equation $(X,r)$ is said to be \emph{non-degenerate} if $r(x,y)=(\sigma_x(y),\tau_y(x))$ for all $x,y\in X$, where $\sigma_x$ and $\tau_y$ are permutations of $X$. Permutation solutions are examples of non-degenerate solutions: if $X$ is a set, $\sigma$ and $\tau$ are permutations of $X$ such that $\sigma\tau=\tau\sigma$ and $r(x,y)=(\sigma(y),\tau(x))$, then the pair $(X,r)$ is a non-degenerate set-theoretic solution of the Yang--Baxter equation. The first papers where non-degenerate set-theoretic solutions were studied are~\cite{MR1722951,MR1637256,MR1769723,MR1809284}. It turns out that set-theoretic solutions have connections, for example, with groups of I-type, Bieberbach groups, bijective $1$-cocycles, Garside theory, etc. In~\cite{MR2278047} Rump found a new and unexpected connection between set-theoretic solutions and Jacobson radical rings: such rings produce involutive solutions.
This observation was the key for introducing a new algebraic structure that generalizes Jacobson radical rings. To strengthen the connection with rings, Rump conceived the name \emph{braces}. New connections appeared, for example with regular subgroups and Hopf--Galois extensions~\cite{MR3465351}, left orderable groups~\cite{MR3815290}, flat manifolds~\cite{MR3291816}. In~\cite{MR3647970} braces were generalized to skew left braces and this structure was used to produce and study not necessarily involutive solutions. Skew braces are useful for studying regular subgroups and Hopf--Galois extensions, bijective $1$-cocycles, rings and near-rings, triply factorized groups, see for example~\cite{MR3763907}. A skew left brace is a set with two compatible group structures. One of these groups is known as the \emph{multiplicative group}; the other as the \emph{additive group}. The terminology used in the theory of Hopf--Galois extensions suggests that the additive group determines the \emph{type} of the skew left brace. For example, skew left braces of abelian type are Rump's braces, i.e. those braces with abelian additive group. A skew left brace is a triple $(A,+,\circ)$, where $(A,+)$ and $(A,\circ)$ are (not necessarily abelian) groups such that the compatibility \[ a\circ (b+c)=a\circ b-a+a\circ c \] holds for all $a,b,c\in A$. Radical rings are certain examples of skew left braces where the additive group $(A,+)$ is abelian. For $a,b\in A$ one defines \[ a*b=-a+a\circ b-b. \] In the case of Jacobson radical rings, the operation $*$ is the multiplication of the ring. For arbitrary skew left braces the operation $*$ is not associative, but nevertheless the relation with radical rings suggests how to translate ideas from ring theory to the world of skew left braces. In~\cite[Definition 3.1]{MR1722951} Etingof, Schedler and Soloviev introduced multipermutation solutions. As the name suggests, such solutions are generalizations of permutation solutions.
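As a quick computational sanity check (our own illustration, not from the paper), the permutation solutions recalled above can be verified exhaustively on a small set; the choice of $X$ and of the commuting permutations $\sigma,\tau$ below is arbitrary.

```python
# Exhaustive check (on a small set X) that permutation solutions satisfy
# the set-theoretic Yang--Baxter equation: for commuting permutations
# sigma, tau of X and r(x, y) = (sigma(y), tau(x)),
#   (r x id)(id x r)(r x id) = (id x r)(r x id)(id x r)  on  X^3.
from itertools import product

X = range(6)
sigma = {x: (x + 1) % 6 for x in X}   # a 6-cycle
tau = {x: (x + 4) % 6 for x in X}     # a power of the same cycle, so sigma tau = tau sigma

def r(x, y):
    return (sigma[y], tau[x])

def r12(t):                           # r acting on the first two factors of X^3
    a, b = r(t[0], t[1])
    return (a, b, t[2])

def r23(t):                           # r acting on the last two factors of X^3
    b, c = r(t[1], t[2])
    return (t[0], b, c)

ok = all(r12(r23(r12(t))) == r23(r12(r23(t))) for t in product(X, X, X))
```

Tracing the two sides of the braid relation on a generic triple shows they agree exactly when $\sigma$ and $\tau$ commute, which the exhaustive check confirms.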
Gateva--Ivanova's strong conjecture~\cite[2.28(I)]{MR2095675} states that square-free involutive non-degenerate set-theoretic solutions are multipermutation solutions. Although the conjecture was proved to be false~\cite{MR3437282}, it was the motivation of several original investigations~\cite{MR2652212,MR3177933,MR2885602,MR3439888}. Braces provide a powerful algebraic framework to work with set-theoretic solutions. In \cite[Theorem 3.1]{MR3647970} it is proved that for a skew left brace $A$, the map \[ r_A\colon A\times A\to A\times A,\quad r_A(a,b)=(-a+a\circ b,(-a+a\circ b)'\circ a\circ b), \] is a non-degenerate set-theoretic solution of the Yang--Baxter equation. In~\cite[Theorem 4.5]{MR3763907} it is proved that if $(X,r)$ is a non-degenerate solution of the Yang--Baxter equation, then there exists a unique skew left brace structure over the group \[ G=G(X,r)=\langle X:x\circ y=u\circ v\text{ whenever $r(x,y)=(u,v)$}\rangle \] such that the diagram \[ \begin{tikzcd} X\times X\arrow[r, "r"] \arrow["\iota\times\iota"', d] & X\times X \arrow[d, "\iota\times\iota" ] \\ G\times G \arrow[r, "r_{G}"] & G\times G \end{tikzcd} \] commutes, where $\iota\colon X\to G(X,r)$ is the canonical map. Moreover, the pair $(G(X,r),\iota)$ has the following universal property: if $B$ is a skew left brace and $f\colon X\to B$ is a map such that the diagram \[ \begin{tikzcd} X\times X\arrow[r, "r"] \arrow["f\times f"', d] & X\times X \arrow[d, "f\times f" ] \\ B\times B \arrow[r, "r_{B}"] & B\times B \end{tikzcd} \] commutes, then there exists a unique skew left brace homomorphism $\phi\colon G(X,r)\to B$ such that \[ \begin{tikzcd} X \arrow[rd, "f"'] \arrow[r, "\iota"] & G\arrow[d, "\phi"] \\ & B \end{tikzcd} \qquad\text{and}\qquad \begin{tikzcd} G\times G\arrow[r, "r_{G}"] \arrow["\phi\times\phi"', d] & G\times G \arrow[d, "\phi\times\phi" ] \\ B\times B \arrow[r, "r_{B}"] & B\times B \end{tikzcd} \] commute. To study Gateva--Ivanova's strong conjecture Rump introduced two sequences of left ideals~\cite{MR2278047}.
These sequences turned out to be very important for understanding the structure of Rump's braces. It makes sense to consider similar sequences in the context of skew left braces. The right series of a skew left brace $A$ is defined as the sequence \[ A\supseteq A^{(2)}\supseteq A^{(3)}\supseteq\cdots, \] where $A^{(n+1)}=A^{(n)}*A$ denotes the additive subgroup of $A$ generated by elements of the form $x*a$ for $a\in A$ and $x\in A^{(n)}$. Each $A^{(n)}$ is an ideal of $A$, see Proposition~\ref{pro:right_series}. A skew left brace is said to be \emph{right nilpotent} if there is a positive integer $n$ such that $A^{(n)}=0$. The left series of a skew left brace $A$ is the sequence \[ A\supseteq A^2\supseteq A^3\supseteq\cdots, \] where $A^{n+1}=A*A^n$. Each $A^n$ is a left ideal of $A$, see Proposition~\ref{pro:left_series}. A skew left brace is said to be \emph{left nilpotent} if there is a positive integer $n$ such that $A^{n}=0$. Following~\cite{MR3814340} we also define the sequence \[ A\supseteq A^{[2]}\supseteq A^{[3]}\supseteq\cdots, \] where $A^{[1]}=A$ and $A^{[n+1]}$ is the additive subgroup of $A$ generated by all the elements from the sets $A^{[i]}*A^{[n+1-i]}$ for $1\leq i\leq n$. Each $A^{[n]}$ is a left ideal of $A$, see Proposition~\ref{pro:Smoktunowicz}. A skew left brace $A$ is said to be \emph{strongly nilpotent} if there is a positive integer $n$ such that $A^{[n]}=0$. Theorem~\ref{thm:equivalence} states that a skew left brace $A$ is strongly nilpotent if and only if it is left and right nilpotent. The sequence of the $A^{[n]}$ is a basic tool for understanding problems similar to the K\"othe conjecture in the context of skew left braces, see Section~\ref{nilpotency}. One of our main goals is to study the connection between left and right nilpotency and the structure of skew left braces. We study the connection between right nilpotent skew left braces and multipermutation solutions (see Theorem~\ref{thm:mpl&right_nilpotent}).
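To make the series concrete, here is a small computational illustration of our own (not an example from the paper): the nilpotent ring $2\Z/8\Z$ is a Jacobson radical ring, hence a left brace with $a\circ b=a+b+ab \bmod 8$, and its left series $A\supseteq A^2\supseteq A^3$ can be computed exhaustively.

```python
# A toy left brace: the nilpotent ring 2Z/8Z = {0, 2, 4, 6} is a Jacobson
# radical ring, so a o b = a + b + ab (mod 8) makes it a left brace, and
# a * b = -a + (a o b) - b recovers the ring product. We verify the brace
# axiom and compute the left series A, A^2 = A*A, A^3 = A*A^2.
A = {0, 2, 4, 6}

def add(a, b):
    return (a + b) % 8

def neg(a):
    return (-a) % 8

def circ(a, b):
    return (a + b + a * b) % 8

def star(a, b):                               # a * b = -a + (a o b) - b = ab mod 8
    return add(add(neg(a), circ(a, b)), neg(b))

# brace compatibility a o (b + c) = a o b - a + a o c, checked exhaustively
compatible = all(
    circ(a, add(b, c)) == add(add(circ(a, b), neg(a)), circ(a, c))
    for a in A for b in A for c in A
)

def additive_closure(gens):
    """Subgroup of (A, +) generated by gens."""
    S = {0}
    while True:
        T = S | {add(s, g) for s in S for g in gens}
        if T == S:
            return S
        S = T

def star_span(X, Y):
    """X * Y: additive subgroup generated by the elements x * y."""
    return additive_closure({star(x, y) for x in X for y in Y})

A2 = star_span(A, A)    # A^2 = A * A
A3 = star_span(A, A2)   # A^3 = A * A^2; it vanishes, so the brace is left nilpotent
```

Here $A^{2}=\{0,4\}$ and $A^{3}=0$; since the additive group is abelian, this brace is also right nilpotent, matching the general theory for radical rings.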
We also explore the cases where the groups of the skew left brace are nilpotent. If the skew left brace is finite and both groups are nilpotent, then it is possible to write the skew left brace as a direct product of skew left braces of prime-power order (see Corollary~\ref{cor:product}). This result is then applied to study infinite skew left braces with multiplicative group isomorphic to $\Z$ (see Theorems~\ref{thm:Z} and~\ref{thm:ZrightNilpotent}); this answers~\cite[Question A.10]{MR3763907}. Left nilpotent skew left braces are also studied. We show that a finite skew left brace with nilpotent additive group is left nilpotent if and only if its multiplicative group is nilpotent (see Theorem~\ref{thm:left_nilpotent=nilpotent}). \medskip This paper is organized as follows. In Section~\ref{preliminaries} basic definitions are recalled. These definitions include skew left braces, left ideals and ideals. In Section~\ref{nilpotency} we define left and right nilpotent skew left braces. In Theorem~\ref{thm:IcapSoc} and Proposition~\ref{thm:Fix} we prove analogs of Hirsch's theorem for nilpotent groups. We use these results to prove that, under mild assumptions, a skew left brace is right nilpotent if and only if it has finite multipermutation level (see Theorem~\ref{thm:mpl&right_nilpotent}). In Section~\ref{perfect} we deal with perfect skew left braces. We use the wreath product of skew left braces to show that one cannot produce an analog of Gr\"un's lemma for skew left braces. Skew left braces of nilpotent type are studied in Section~\ref{nilpotent_type}. Our first result is Corollary~\ref{cor:product}, where we prove that if both groups of a finite skew left brace are nilpotent, then the skew left brace is a direct product of sub skew left braces of prime-power size. Theorem~\ref{thm:A2} is the consequence of applying Hall's results from \cite{Hall} to the case of skew left braces.
We prove in Theorem~\ref{thm:left_nilpotent=nilpotent} that a finite skew left brace of nilpotent type is left nilpotent if and only if its multiplicative group is nilpotent. In particular, skew left braces of prime-power size are left nilpotent. In Section~\ref{cyclic} we prove that skew left braces of abelian type with infinite-cyclic multiplicative group are trivial, see Theorem~\ref{thm:Z}. Theorem~\ref{thm:infinite_dihedral} shows that this result cannot be extended to skew left braces. Finally, in Section~\ref{indecomposable} some of our results are used for studying indecomposable set-theoretic solutions. In this section we give a positive answer to~\cite[Question~5.6]{MR3771874}. \section{Preliminaries} \label{preliminaries} Recall that a \emph{skew left brace} is a triple $(A,+,\circ)$, where $(A,+)$ and $(A,\circ)$ are (not necessarily abelian) groups such that \[ a\circ (b+c)=a\circ b-a+a\circ c \] holds for all $a,b,c\in A$. The group $(A,\circ)$ will be the \emph{multiplicative group} of the skew left brace and $(A,+)$ will be the \emph{additive group} of the skew left brace. We write $a'$ to denote the inverse of $a$ with respect to the circle operation $\circ$. A skew left brace $(A,+,\circ)$ such that $a\circ b=a+b$ for all $a,b\in A$ is said to be \emph{trivial}. \begin{defn} Let $\mathcal{X}$ be a property of groups. A skew left brace $A$ is said to be of $\mathcal{X}$-type if its additive group belongs to $\mathcal{X}$. \end{defn} Rump's braces are skew left braces of abelian type. \begin{convention} Skew left braces of abelian type will be called \emph{left braces}. \end{convention} If $A$ is a skew left brace, the multiplicative group acts on the additive group by automorphisms. The map $\lambda\colon (A,\circ)\to \Aut(A,+)$, $a\mapsto\lambda_a$, where $\lambda_a(b)=-a+a\circ b$, is a group homomorphism, see~\cite[Corollary 1.10]{MR3647970}. \begin{rem} Let $A$ be a skew left brace. 
Then the following formulas hold: \begin{align*} &a\circ b = a+\lambda_a(b), &&a+b=a\circ \lambda^{-1}_a(b), &&\lambda_a(a')=-a. \end{align*} \end{rem} Let $A$ be a skew left brace. For $a,b\in A$ let \[ a*b=\lambda_a(b)-b=-a+a\circ b-b. \] The following identities are easily verified: \begin{align} &a*(b+c)=a*b+b+a*c-b,\\ &(a\circ b)*c=(a*(b*c))+b*c+a*c, \end{align} These identities are similar to the usual commutator identities. \begin{thm} Let $A$ be an additive (not necessarily abelian) group. A skew left brace structure over $A$ is equivalent to an operation $A\times A\to A$, $(a,b)\mapsto a*b$, such that $a*(b+c)=a*b+b+a*c-b$ holds for all $a,b,c\in A$, and the operation $a\circ b=a+a*b+b$ turns $A$ into a group. \end{thm} \begin{proof} It is straightforward. \end{proof} A \emph{left ideal} of a skew left brace $A$ is a subgroup $I$ of the additive group of $A$ such that $\lambda_a(I)\subseteq I$ for all $a\in A$. It is not hard to prove that a left ideal is a subgroup of the multiplicative group of the skew left brace. An \emph{ideal} of $A$ is a left ideal $I$ of $A$ such that $a\circ I=I\circ a$ and $a+I=I+a$ for all $a\in A$. \begin{defn} For a skew left brace $A$ let \[ \Fix(A)=\{a\in A:\lambda_x(a)=a\text{ for all $x\in A$}\}. \] \end{defn} \begin{pro} Let $A$ be a skew left brace. Then $\Fix(A)$ is a left ideal of $A$. \end{pro} \begin{proof} A routine calculation proves that $\Fix(A)$ is a subgroup of the additive group of $A$. Clearly $\lambda_x(\Fix(A))\subseteq \Fix(A)$ for all $x\in A$. \end{proof} The following example shows that in general $\Fix(A)$ is not an ideal: \begin{exa} Consider the semidirect product $A=\Z/(3)\rtimes \Z/(2)$ of the trivial braces $\Z/(3)$ and $\Z/(2)$ via the non-trivial action of $\Z/(2)$ over $\Z/(3)$. We have $$\lambda_{(x,y)}(a,b)=(x,y)(a,b)-(x,y)=(x+(-1)^ya,y+b)-(x,y)=((-1)^ya,b).$$ Hence $\Fix(A)=\{ (0,b)\mid b\in \Z/(2)\}$. Clearly $\Fix(A)$ is not a normal subgroup of the multiplicative group of $A$. 
Thus $\Fix(A)$ is not an ideal of $A$. \end{exa} If $X$ and $Y$ are subsets of a skew left brace $A$, $X*Y$ is the additive subgroup of $A$ generated by elements of the form $x*y$, $x\in X$ and $y\in Y$, i.e. \[ X*Y=\langle x*y:x\in X\,,y\in Y\rangle_+. \] \begin{lem} \label{lem:A*I} Let $A$ be a skew left brace. A subgroup $I$ of the additive group of $A$ is a left ideal of $A$ if and only if $A*I\subseteq I$. \end{lem} \begin{proof} Let $a\in A$ and $x\in I$. If $I$ is a left ideal, then $a*x=\lambda_a(x)-x\in I$. Conversely, if $A*I\subseteq I$, then $\lambda_a(x)=a*x+x\in I$. \end{proof} \begin{lem} \label{lem:I*A} Let $A$ be a skew left brace. A normal subgroup $I$ of the additive group of $A$ is an ideal of $A$ if and only if $\lambda_a(I)\subseteq I$ for all $a\in A$ and $I*A\subseteq I$. \end{lem} \begin{proof} Let $x\in I$ and $a\in A$. Assume first that $I$ is invariant under the action of $\lambda$ and that $I*A\subseteq I$. Then \begin{equation} \label{eq:trick_lambda} \begin{aligned} a\circ x\circ a' &=a+\lambda_a(x\circ a')\\ &=a+\lambda_a(x+\lambda_x(a')) =a+\lambda_a(x)+\lambda_a\lambda_x(a')+a-a\\ &=a+\lambda_a(x+\lambda_x(a')-a')-a =a+\lambda_a(x+x*a')-a, \end{aligned} \end{equation} and hence $I$ is an ideal. Conversely, assume that $I$ is an ideal. Then $I*A\subseteq I$ since \begin{align*} x*a&=-x+x\circ a-a\\ &=-x+a\circ(a'\circ x\circ a)-a =-x+a+\lambda_a(a'\circ x\circ a)-a\in I. \end{align*} This completes the proof. \end{proof} The socle of a skew left brace $A$ is defined as \[ \Soc (A)=\{ x\in A\colon x\circ a=x+a \text{ and } x+a=a+x, \text{ for all } a\in A\}. \] Clearly $\Soc(A)=\ker(\lambda)\cap Z(A,+)$. In \cite[Lemma~2.5]{MR3647970} it is proved that $\Soc(A)$ is an ideal of $A$. \begin{lem} \label{lem:socle} Let $A$ be a skew left brace and $a\in\Soc(A)$. Then $b+b\circ a=b\circ a+b$ and $\lambda_b(a)=b\circ a\circ b'$ for all $b\in A$. \end{lem} \begin{proof} Let $b\in A$.
Since $b'\circ (b\circ a+b)=a-b'$ and $b'\circ (b+b\circ a)=-b'+a$, the first claim follows since $a\in Z(A,+)$. To prove the second claim one computes: \[ b\circ a\circ b'=b\circ (a\circ b')=b\circ (a+b')=b\circ a-b=-b+b\circ a=\lambda_b(a), \] and the lemma is proved. \end{proof} \section{Left and right nilpotent skew left braces} \label{nilpotency} Let $A$ be a skew left brace. Following~\cite{MR2278047} one defines $A^{(1)}=A$ and for $n\geq1$ \begin{align*} & A^{(n+1)}=A^{(n)}*A=\langle x*a: x\in A^{(n)},\,a\in A\rangle_+, \end{align*} where $\langle X\rangle_+$ denotes the subgroup of the additive group of $A$ generated by the subset $X$. The series $A^{(1)}\supseteq A^{(2)}\supseteq A^{(3)}\supseteq\cdots\supseteq A^{(n)}\supseteq\cdots$ is the \emph{right series} of $A$. \begin{pro} \label{pro:right_series} Let $A$ be a skew left brace. Each $A^{(n)}$ is an ideal of $A$. \end{pro} \begin{proof} We want to prove that for each $n\in\N$, $A^{(n)}$ is a normal subgroup of $(A,+)$, that $\lambda_a(A^{(n)})\subseteq A^{(n)}$ for all $a\in A$ and that $A^{(n)}$ is a normal subgroup of $(A,\circ)$. We proceed by induction on $n$. The case $n=1$ is trivial. We assume that the claim is true for some $n\geq1$. We first prove that $A^{(n+1)}$ is a normal subgroup of $(A,+)$. Let $a,b\in A$ and $x\in A^{(n)}$. Then $a+x*b-a\in A^{(n+1)}$ since \begin{align*} a+x*b-a&=a+\lambda_x(b)-b-a\\ &=a+\lambda_x(b)-(a+b) =a+\lambda_x(-a+a+b)-(a+b)\\ &=a+\lambda_x(-a)+\lambda_x(a+b)-(a+b) =-x*a+x*(a+b). \end{align*} Now we prove that $A^{(n+1)}$ is an ideal. Let $a,b\in A$ and $x\in A^{(n)}$. Then \begin{equation} \label{eq:another_trick} \begin{aligned} \lambda_a(x*b)&=\lambda_a(\lambda_x(b)-b) =\lambda_a\lambda_x(b)-\lambda_a(b)\\ &=\lambda_{a\circ x\circ a'}(\lambda_a(b))-\lambda_a(b) =(a\circ x\circ a')*\lambda_a(b)\in A^{(n+1)} \end{aligned} \end{equation} since $a\circ x\circ a'\in A^{(n)}$ by the inductive hypothesis.
From this it immediately follows that $\lambda_a(A^{(n+1)})\subseteq A^{(n+1)}$. Now let $y\in A^{(n+1)}$. By using~\eqref{eq:trick_lambda} one obtains that $a\circ y\circ a' =a+\lambda_a(y+y*a')-a\in A^{(n+1)}$. Thus the result follows by induction. \end{proof} Let $A$ be a skew left brace. Following~\cite{MR2278047} one defines $A^1=A$ and for $n\geq1$ \begin{align*} & A^{n+1}=A*A^{n}=\langle a*x: a\in A,\,x\in A^{n}\rangle_+. \end{align*} The series $ A^1\supseteq A^2\supseteq A^3\supseteq\cdots\supseteq A^n\supseteq\cdots$ is the \emph{left series} of $A$. \begin{pro} \label{pro:left_series} Let $A$ be a skew left brace. Each $A^{n}$ is a left ideal of $A$. \end{pro} \begin{proof} We proceed by induction on $n$. The case $n=1$ is trivial, so we may assume that the result is true for some $n\geq1$. Let $a,b\in A$ and $x\in A^n$. By the inductive hypothesis, $\lambda_a(x)\in A^n$ and hence \[ \lambda_a(b*x)=(a\circ b\circ a')*\lambda_a(x)\in A^{n+1}, \] where the equality follows by~\eqref{eq:another_trick}. This implies that $\lambda_a(A^{n+1})\subseteq A^{n+1}$. Thus the result follows by induction. \end{proof} The second term of the left series is particularly important: \begin{pro} Let $A$ be a skew left brace. Then $A^2$ is the smallest ideal of $A$ such that $A/A^2$ is a trivial skew left brace. \end{pro} \begin{proof} Since $A^2=A^{(2)}$, $A^2$ is an ideal by Proposition~\ref{pro:right_series}. Let $I$ be an ideal of $A$ and $\pi\colon A\to A/I$ be the canonical map. Then $A/I$ is trivial as a skew left brace if and only if $\lambda_a(b)-b\in I$ for all $a,b\in A$. Since this condition is equivalent to $A^2\subseteq I$, the claim follows. \end{proof} \begin{defn} A skew left brace $A$ is said to be \emph{right nilpotent} if $A^{(m)}=0$ for some $m\geq1$. \end{defn} \begin{lem} \label{lem:right_nilpotent:quotient} Let $f\colon A\to B$ be a surjective homomorphism of skew left braces. Then $f(A^{(k)})=B^{(k)}$ for all $k$. 
In particular, if $A$ is right nilpotent, then $B$ is right nilpotent. \end{lem} \begin{proof} We proceed by induction on $k$. The case $k=1$ is trivial. Let us assume that the result is valid for some $k\geq1$. Since $f(A^{(k)})=B^{(k)}$, \[ f(A^{(k+1)})=f(A^{(k)}*A)=f(A^{(k)})*f(A)=B^{(k)}*B=B^{(k+1)}. \] From this the second claim follows. \end{proof} \begin{lem} \label{lem:right_nilpotent:sub} Let $A$ be a right nilpotent skew left brace and $B\subseteq A$ be a sub skew left brace. Then $B$ is right nilpotent. \end{lem} \begin{proof} By induction, $B^{(k)}\subseteq A^{(k)}$ for all $k$. Hence the claim follows. \end{proof} \begin{lem} \label{lem:right_nilpotent:x} Let $A_1,\dots,A_k$ be right nilpotent skew left braces. Then the direct product $A_1\times\cdots\times A_k$ is right nilpotent. \end{lem} \begin{proof} It is enough to prove the lemma in the case where $k=2$. This case is trivial since $(a,b)*(c,d)=(a*c,b*d)$. \end{proof} \begin{thm} \label{thm:IcapSoc} Let $A$ be a right nilpotent skew left brace of nilpotent type and $I$ be a non-zero ideal of $A$. Then $I\cap\Soc(A)\ne0$. \end{thm} \begin{proof} Since $(A,+)$ is nilpotent and each $I\cap A^{(k)}$ is a normal subgroup of $(A,+)$, it follows from~\cite[5.2.1]{MR1357169} that $I\cap A^{(k)}\cap Z(A,+)\ne0$ whenever $I\cap A^{(k)}\ne0$. Let $m=\max\{k\in\N:I\cap A^{(k)}\cap Z(A,+)\ne 0\}$. By the maximality of $m$, $I\cap A^{(m+1)}\cap Z(A,+)=0$ and hence $I\cap A^{(m+1)}=0$. Since \[ (I\cap A^{(m)}\cap Z(A,+))*A\subseteq I\cap (A^{(m)}*A)=I\cap A^{(m+1)}=0, \] it follows that $I\cap A^{(m)}\cap Z(A,+)\subseteq I\cap \Soc(A)$. \end{proof} \begin{cor} \label{cor:nonzero_socle} Let $A$ be a non-zero right nilpotent skew left brace of nilpotent type. Then $\Soc(A)\ne0$. \end{cor} \begin{proof} It follows directly from Theorem~\ref{thm:IcapSoc}. \end{proof} \begin{cor} Let $A$ be a right nilpotent skew left brace of nilpotent type and $I$ be a minimal ideal of $A$. Then $I\subseteq\Soc(A)$.
\end{cor} \begin{proof} Since $I\cap\Soc(A)$ is a non-zero ideal of $A$ by Theorem~\ref{thm:IcapSoc}, $I\cap\Soc(A)=I$ by the minimality of $I$. \end{proof} \begin{defn} Let $A$ be a skew left brace. An \emph{$s$-series} of $A$ is a sequence \[ A=I_0\supseteq I_1\supseteq I_2\supseteq\cdots\supseteq I_n=0 \] of ideals of $A$ such that $I_{j-1}/I_j\subseteq \Soc(A/I_j)$ for each $j\in\{1,\dots,n\}$. \end{defn} \begin{rem} Let $A$ be a left brace. Rump in \cite{MR2278047} defined the socle series $\Soc_n(A)$ of $A$ as follows: $\Soc_0(A)=0$ and, for $n\geq 1$, \[ \Soc_n(A)=\{ x\in A\colon x*y\in \Soc_{n-1}(A)\text{ for all }y\in A \}. \] There are examples of nonzero left braces $A$ such that $\Soc_n(A)=0$ for all positive integers $n$. \end{rem} \begin{defn}\label{socn} Let $A$ be a skew left brace. We define $\Soc_0(A)=0$ and, for $n\geq 1$, $\Soc_n(A)$ is the ideal of $A$ containing $\Soc_{n-1}(A)$ such that \[ \Soc_n(A)/\Soc_{n-1}(A)=\Soc(A/\Soc_{n-1}(A)). \] \end{defn} Note that this definition coincides with the definition of Rump for left braces. \begin{lem} \label{lem:socle_series} Let $A$ be a skew left brace and let $A=I_0\supseteq I_1\supseteq I_2\supseteq\cdots\supseteq I_n=0$ be an $s$-series for $A$. Then $A^{(i+1)}\subseteq I_i$ for all $i$. \end{lem} \begin{proof} We proceed by induction on $i$. The case $i=0$ is trivial, so let us assume that the result holds for some $i\geq 0$. Let $\pi\colon A\to A/I_{i+1}$ be the canonical map. Since $\pi(I_{i})\subseteq\Soc(A/I_{i+1})$, $\pi(I_{i}*A)=\pi(I_{i})*\pi(A)=0$ and hence $I_{i}*A\subseteq I_{i+1}$. The inductive hypothesis then implies that $A^{(i+2)}=A^{(i+1)}*A\subseteq I_{i}*A\subseteq I_{i+1}$. Thus the result follows by induction. \end{proof} \begin{lem} \label{lem:socn} Let $A$ be a skew left brace. Then $A$ admits an $s$-series if and only if there exists a positive integer $n$ such that $A=\Soc_n(A)$. \end{lem} \begin{proof} Suppose that there exists a positive integer $n$ such that $A=\Soc_n(A)$.
Then \[A=\Soc_n(A)\supseteq \Soc_{n-1}(A)\supseteq\cdots\supseteq \Soc_0(A)=0,\] is an $s$-series. Conversely, suppose that $A$ admits an $s$-series. Let $$A=I_0\supseteq I_1\supseteq I_2\supseteq\cdots\supseteq I_n=0$$ be an $s$-series of $A$. We shall prove that $I_{n-j}\subseteq \Soc_{j}(A)$ by induction on $j$. For $j=0$, $I_n=0=\Soc_0(A)$. Suppose that $j>0$ and $I_{n-j+1}\subseteq \Soc_{j-1}(A)$. Since $I_{n-j}/I_{n-j+1}\subseteq \Soc(A/I_{n-j+1})$, $I_{n-j}*A\subseteq I_{n-j+1}\subseteq \Soc_{j-1}(A)$, by the induction hypothesis. Furthermore, for all $x\in A$ and all $y\in I_{n-j}$, $x+y-x-y\in I_{n-j+1}\subseteq \Soc_{j-1}(A)$. Therefore $I_{n-j}\subseteq\Soc_j(A)$. Hence $A=I_0=\Soc_n(A)$ and the result follows. \end{proof} \begin{lem} \label{lem:right_nilpotent} A skew left brace of nilpotent type is right nilpotent if and only if it admits an $s$-series. \end{lem} \begin{proof} Let $A$ be a skew left brace of nilpotent type. If $A$ admits an $s$-series, then $A$ is right nilpotent by Lemma~\ref{lem:socle_series}. Conversely, suppose that $A$ is right nilpotent. There exists a positive integer $m$ such that $A^{(m)}=0$. We shall prove that $A$ admits an $s$-series by induction on $m$. For $m=1$, $A=A^{(1)}=0$ is an $s$-series. Suppose that $m>1$ and that the result is true for $m-1$. Consider $\bar A=A/A^{(m-1)}$. Since $\bar A^{(m-1)}=0$, by the induction hypothesis $\bar A$ admits an $s$-series. Thus there is a sequence \[ A=I_0\supseteq I_1\supseteq I_2\supseteq\cdots\supseteq I_n=A^{(m-1)} \] of ideals of $A$ such that $I_{j-1}/I_j\subseteq \Soc(A/I_j)$ for each $j\in\{1,\dots,n\}$. Since $A^{(m)}=0$, we have that $A^{(m-1)}\subseteq \ker(\lambda)$. Since $A$ is of nilpotent type, there exists a positive integer $s$ such that $\gamma^+_s(A)=0$, where $\gamma^+_i(A)$ denotes the $i$-th term of the lower central series of the additive group of $A$, that is $\gamma^+_1(A)=A$ and $\gamma^+_{i+1}(A)=[A,\gamma^+_i(A)]_+$, for all positive integers $i$.
Let $I_{n+j-1}=A^{(m-1)}\cap \gamma^+_{j}(A)$ for $j=1,\dots ,s$. Note that $I_{n+j-1}$ is a normal subgroup of the additive group of $A$ invariant under $\lambda_x$, for all $x\in A$, and $I_{n+j-1}* A=0$, for all $j=1,\dots , s$, because $A^{(m-1)}\subseteq \ker(\lambda)$. By Lemma~\ref{lem:I*A}, $I_{n+j-1}$ is an ideal of $A$, for all $j=1,\dots , s$. Note that $I_{n+j-1}/I_{n+j}\subseteq Z(A/I_{n+j},+)$, for all $j=1,\dots ,s-1$. Therefore, since $I_{n+j-1}\subseteq\ker(\lambda)$, we have that $I_{n+j-1}/I_{n+j}\subseteq \Soc(A/I_{n+j})$, for all $j=1,\dots ,s-1$. Hence \[ A=I_0\supseteq I_1\supseteq I_2\supseteq\cdots\supseteq I_n=A^{(m-1)}\supseteq I_{n+1}\supseteq\cdots\supseteq I_{n+s-1}=0 \] is an $s$-series of $A$, and the result follows by induction. \end{proof} \begin{pro} \label{thm:A/Soc} Let $A$ be a skew left brace such that $A/\Soc(A)$ is right nilpotent. Then $A$ is right nilpotent. \end{pro} \begin{proof} By the definition of the factor brace, $(A/\Soc(A))^{(k)}=0$ if and only if $A^{(k)}\subseteq\Soc(A)$. Choose $k$ such that $(A/\Soc(A))^{(k)}=0$. Then $A^{(k+1)}=A^{(k)}*A\subseteq\Soc (A)*A=0$, as required. \end{proof} \begin{pro} Let $I$ be an ideal of a skew left brace $A$ such that $I\cap A^2=0$. Then $I$ is a trivial skew left brace. \end{pro} \begin{proof} Since $I*A\subseteq I\cap A^2=0$, $I\subseteq\ker\lambda$. From this the claim follows. \end{proof} A skew left brace $A$ has \emph{finite multipermutation level} if the sequence $S_n$ defined as $S_1=A$ and $S_{n+1}=S_n/\Soc(S_n)$ for $n\geq1$, reaches zero. \begin{pro}\label{newmulti} Let $A$ be a skew left brace. Then $A$ has finite multipermutation level if and only if $A$ admits an $s$-series. \end{pro} \begin{proof} Let $S_1=A$ and $S_{n+1}=S_n/\Soc(S_n)$, for $n\geq1$. We shall prove that $S_{n+1}\simeq A/\Soc_n(A)$, by induction on $n$. For $n=0$, it is clear since $\Soc_0(A)=0$. Suppose that $n>0$ and the result is true for $n-1$.
Hence, by the induction hypothesis, \begin{align*}S_{n+1}&=S_n/\Soc(S_n) \simeq (A/\Soc_{n-1}(A))/\Soc(A/\Soc_{n-1}(A))\\ &=(A/\Soc_{n-1}(A))/(\Soc_n(A)/\Soc_{n-1}(A))\simeq A/\Soc_n(A).\end{align*} Therefore $S_{n+1}=0$ if and only if $A=\Soc_n(A)$. Now the result follows from Lemma~\ref{lem:socn}. \end{proof} \begin{thm} \label{thm:mpl&right_nilpotent} Let $A$ be a skew left brace. Then $A$ has finite multipermutation level if and only if $A$ is right nilpotent and $(A,+)$ is nilpotent. \end{thm} \begin{proof} Suppose that $A$ has finite multipermutation level. We proceed by induction on the multipermutation level $n$. The case $n=1$ is trivial. Let $A$ be a skew left brace of finite multipermutation level $n+1$. Since $A/\Soc(A)$ has multipermutation level $n$, the inductive hypothesis implies that $(A/\Soc(A))^{(m)}=0$ for some $m$ and $(A/\Soc(A),+)$ is nilpotent. This implies that $A^{(m)}\subseteq\Soc(A)$ and hence $A^{(m+1)}=0$. Furthermore, since $\Soc(A)$ is central in $(A,+)$, we have that $(A,+)$ is nilpotent. Conversely, suppose that $A$ is right nilpotent and $(A,+)$ is nilpotent. By Lemma~\ref{lem:right_nilpotent}, $A$ admits an $s$-series. Thus the result follows by Proposition~\ref{newmulti}. \end{proof} The following example shows that the assumption on the nilpotency of the additive group of the skew left brace is needed for Theorem~\ref{thm:mpl&right_nilpotent}. \begin{exa}\label{ex:trivial} Let $A$ be a non-zero skew left brace such that $a\circ b=a+b$ for all $a,b\in A$. Then $A^{(2)}=0$, thus $A$ is right nilpotent. But if $Z(A,+)=0$, then $\Soc(A)=0$ and $A$ does not have finite multipermutation level. For example, we can take $(A,+)=(A,\circ)$ to be any non-abelian simple group. \end{exa} \begin{defn} A skew left brace $A$ is said to be \emph{left nilpotent} if $A^{m}=0$ for some $m\geq1$. \end{defn} \begin{lem} Let $f\colon A\to B$ be a surjective homomorphism of skew left braces. Then $f(A^{k})=B^{k}$ for all $k$.
In particular, if $A$ is left nilpotent, then $B$ is left nilpotent. \end{lem} \begin{proof} It is similar to the proof of Lemma~\ref{lem:right_nilpotent:quotient}. \end{proof} \begin{lem} Let $A$ be a left nilpotent skew left brace and $B\subseteq A$ be a sub skew left brace. Then $B$ is left nilpotent. \end{lem} \begin{proof} It is similar to the proof of Lemma~\ref{lem:right_nilpotent:sub}. \end{proof} \begin{lem} \label{lem:left_nilpotent:x} Let $A_1,\dots,A_k$ be left nilpotent skew left braces. Then the direct product $A_1\times\cdots\times A_k$ is left nilpotent. \end{lem} \begin{proof} It is similar to the proof of Lemma~\ref{lem:right_nilpotent:x}. \end{proof} \begin{pro} \label{thm:Fix} Let $A$ be a left nilpotent skew left brace and $I$ be a non-zero left ideal of $A$. Then $I\cap\Fix(A)\ne0$. \end{pro} \begin{proof} Let $m=\max\{k:I\cap A^k\ne0\}$. Since $A*(I\cap A^m)\subseteq I\cap A^{m+1}=0$, it follows that there exists a non-zero $x\in I\cap A^m$ such that $a*x=0$ for all $a\in A$. Thus $0\ne x\in\Fix(A)\cap I$. \end{proof} \begin{cor} Let $A$ be a non-zero skew left brace. If $A$ is left nilpotent, then $\Fix(A)\ne0$. \end{cor} \begin{proof} Apply Proposition~\ref{thm:Fix} with $I=A$. \end{proof} Another important series of ideals was defined in~\cite{MR3814340} for braces. Let $A$ be a skew left brace. Let $A^{[1]}=A$ and for $n\geq 1$ let $A^{[n+1]}$ be the additive subgroup of $A$ generated by elements from $\{A^{[i]}*A^{[n+1-i]}:1\leq i\leq n\}$, i.e. \[ A^{[n+1]}=\left\langle \bigcup_{i=1}^n A^{[i]}*A^{[n+1-i]}\right\rangle_+ \] for all $n\geq1$. One easily proves by induction that $A^{[1]}\supseteq A^{[2]}\supseteq\cdots$. \begin{pro} \label{pro:Smoktunowicz} Let $A$ be a skew left brace. Each $A^{[n]}$ is a left ideal of $A$. \end{pro} \begin{proof} Each $A^{[n]}$ is a subgroup of $(A,+)$. Since $A*A^{[n]}\subseteq A^{[n+1]}\subseteq A^{[n]}$, the claim follows from Lemma~\ref{lem:A*I}.
\end{proof} We will show that there exists a left brace $A$ such that $A^{[n]}=A^{[n+1]}\neq0$ for some positive integer $n$, while $A^{[n+2]}=0$. \begin{exa} \label{exa:funny} Let \begin{align*} &G=\langle r,s:r^8=s^2=1,\,srs=r^7\rangle\simeq\D_{16},\\ &X=\langle a,b:8a=2b=0,\,a+b=b+a\rangle\simeq \Z/(8)\times \Z/(2). \end{align*} The group $G$ acts by automorphisms on $X$ via \[ r\cdot a=a+b,\quad r\cdot b=4a+b,\quad s\cdot a=3a, \quad s\cdot b=4a+b. \] A direct calculation shows that the map $\pi\colon G\to X$ given by \begin{align*} 1 &\mapsto 0, & r&\mapsto a, & r^2&\mapsto 2a+b, & r^3&\mapsto 7a+b,\\ r^4 &\mapsto 4a, & r^5&\mapsto 5a, & r^6&\mapsto 6a+b, & r^7&\mapsto 3a+b,\\ rs &\mapsto 6a, & r^2s&\mapsto 7a, & r^3s&\mapsto b, &r^4s&\mapsto 5a+b,\\ r^5s &\mapsto 2a, & r^6s&\mapsto 3a, &r^7s&\mapsto 4a+b,&s&\mapsto a+b, \end{align*} is a bijective $1$-cocycle. Therefore there exists a left brace $A$ with additive group isomorphic to $X$ and multiplicative group isomorphic to $G$. The addition of $A$ is that of $X$ and the multiplication is given by \[ x\circ y=\pi(\pi^{-1}(x)\pi^{-1}(y)),\quad x,y\in X. \] Since \begin{align*} a*a&=-a+a\circ a-a=-a+(2a+b)-a=b,\\ (5a+b)*a&=-(5a+b)+(5a+b)\circ a-a=-(5a+b)+b-a=2a, \end{align*} it follows that $A^{[2]}$ contains $\langle 2a,b\rangle_+=\{0,2a,4a,6a,b,2a+b,4a+b,6a+b\}$, the additive subgroup of $(A,+)$ generated by $2a$ and $b$. Therefore $A^{[2]}=\langle 2a,b\rangle_+$ since $A^{[2]}\ne A$ (this can be proved by hand or using Theorem~\ref{thm:left_nilpotent=nilpotent}). Routine calculations prove that \begin{align*} A^{[3]}=\{0,2a+b,4a,6a+b\}, && A^{[4]}=A^{[5]}=\{0,4a\}, && A^{[6]}=\{0\}. \end{align*} \end{exa} The relation between the sequence of the $A^{[n]}$ and the left and right series is given in the following theorem. \begin{thm} \label{thm:equivalence} Let $A$ be a skew left brace. The following statements are equivalent: \begin{enumerate} \item $A^{[\alpha]}=0$ for some $\alpha\in\N$.
\item $A^{(\beta)}=0$ and $A^\gamma=0$ for some $\beta,\gamma\in\N$. \end{enumerate} \end{thm} \begin{proof} To prove that $(1)\implies(2)$ one proves by induction that $A^{n}\subseteq A^{[n]}$ and $A^{(n)}\subseteq A^{[n]}$ for every positive integer $n$. Let us prove that $(2)\implies(1)$. We proceed by induction on $\beta$. If $\beta\in\{1,2\}$, then $0=A^{(2)}=A^2=A^{[2]}$ and the result is true. Fix $\beta\in\N$ and suppose that the result holds for this $\beta$, so for every $\gamma$ there exists $\alpha=\alpha(\gamma)$ depending on $\gamma$ such that $A^{[\alpha]}=0$. We need to show that $A^{(\beta+1)}=0$ and $A^{\gamma}=0$ imply that $A^{[n]}=0$ for some $n$. Let $n>\alpha(\gamma)$. Every element of $A^{[n]}$ is a sum of elements from $A^{[i]}*A^{[j]}$, where $i+j=n$ and $1\leq i\leq n-1$. Note that if $\alpha(\gamma)\leq i\leq n-1$, $a_i\in A^{[i]}$ and $a_{n-i}\in A^{[n-i]}$, then by the inductive hypothesis applied to the quotient $A/A^{(\beta)}$, $a_i\in A^{(\beta)}$ and thus $a_i*a_{n-i}\in A^{(\beta +1)}=0$. Hence we may assume that the elements of $A^{[n]}$ are sums of elements from $A^{[i]}*A^{[j]}$ for $1\leq i<\alpha(\gamma)$ and $j\geq n-\alpha(\gamma)$ such that $i+j=n$. Then \[ A^{[n]}\subseteq A*A^{[n-\alpha(\gamma)]}\subseteq A^2. \] Applying the same argument for $n'=n-\alpha(\gamma)$ we obtain that $A^{[n']}\subseteq A*A^{[n'-\alpha(\gamma)]}$ provided that $n'>\alpha(\gamma)$. Therefore \[ A^{[n]}\subseteq A*(A*A^{[n-2\alpha(\gamma)]})\subseteq A^3 \] provided that $n>2\alpha(\gamma)$. Continuing in this way we obtain that $A^{[n]}\subseteq A^{k}$ provided that $n>(k-1)\alpha(\gamma)$. Then it follows that $A^{[(\gamma-1)\alpha(\gamma)+1]}\subseteq A^{\gamma}=0$. \end{proof} \begin{defn} A skew left brace $A$ is said to be \emph{left nil} if for every $a\in A$ we have $a*(a*(a*\cdots))=0$ (for a sufficiently large number of brackets).
\end{defn} It is known that every finite left nil left brace is left nilpotent, see~\cite{MR3765444}. Right nil skew left braces are defined in a similar fashion. \begin{defn} A skew left brace $A$ is said to be \emph{strongly nilpotent} if there is a positive integer $n$ such that $A^{[n]}=0$. \end{defn} \begin{defn} A skew left brace $A$ is said to be \emph{strongly nil} if for every $a\in A$ there is a positive integer $n=n(a)$ such that any $*$-product of $n$ copies of $a$ is zero. \end{defn} We do not know the answer to the following questions: \begin{question} \label{question:rightnil=>rightnilp} Let $A$ be a finite right nil skew left brace. Is $A$ right nilpotent? \end{question} \begin{question} \label{question:stronglynil=>stronglynilp} Let $A$ be a finite strongly nil skew left brace. Is $A$ strongly nilpotent? \end{question} \section{Perfect skew left braces} \label{perfect} A skew left brace $A$ is said to be \emph{perfect} if $A^2 = A$. Let $G$ be a perfect group, that is $G=[G,G]$. By Gr\"{u}n's Lemma (see \cite[page 3]{grun}), $Z(G/Z(G))=\{ 1\}$. Let $B$ be the skew left brace with multiplicative group $G$ and addition defined by $a+b=ba$, for all $a,b\in G$. In $B$ we have $a*b=-a+ab-b=b^{-1}aba^{-1}$, for all $a,b\in G$. Hence the multiplicative group of $B*B$ is $[G,G]$. Note also that $x\in \Soc(B)$ if and only if $1=a^{-1}xax^{-1}$, for all $a\in G$. Thus $\Soc(B)=Z(G)$. Since $Z(G/Z(G))=\{ 1\}$, we have that $\Soc(B/\Soc(B)) =\{1\}$. Thus the following question appears to be natural. \begin{question}\label{newLeandro} Let $A$ be a perfect skew left brace. Is $\Soc(A/\Soc(A)) = 0$? \end{question} We shall see that the answer is negative. Let $B_1$ and $B_2$ be skew left braces.
Recall that the wreath product $B_2\wr B_1$ of the left braces $B_2$ and $B_1$ is a left brace which is the semidirect product of left braces $H_2\rtimes B_1$, where $H_2=\{ f\colon B_1\longrightarrow B_2\mid |\{g\in B_1\mid f(g)\neq 0\}|<\infty\}$ is a left brace with the operations $(f_1\circ f_2)(g)=f_1(g)\circ f_2(g)$ and $(f_1+f_2)(g)=f_1(g)+f_2(g)$, for all $f_1,f_2\in H_2$ and $g\in B_1$, and the action of $(B_1,\circ)$ on $H_2$ is given by the homomorphism $\sigma\colon (B_1,\circ)\longrightarrow \Aut(H_2, +,\circ)$ defined by $\sigma(g)(f)(x)=f(g'\circ x)$, for all $g,x\in B_1$ and $f\in H_2$. Recall that the operations of $H_2\rtimes B_1$ are \begin{align*} &(h_2,b_1)\circ (k_2,c_1)=(h_2\circ b_1(k_2),b_1\circ c_1),\\ &(h_2,b_1)+(k_2,c_1)=(h_2+k_2,b_1+c_1), \end{align*} where we denote $\sigma(b_1)(k_2)$ simply by $b_1(k_2)$. The wreath product of left braces appears in \cite[Corollary 1]{MR3177933} (see also \cite[Section 4]{BCJO18}). This construction also works for skew left braces (see \cite[Corollary 2.39]{MR3763907}). \begin{thm}\label{thm:perfect} Let $B$ be a finite perfect skew left brace. Let $p$ be an odd prime that does not divide the order of $B$. Let $T=\Z/(p)$ be the trivial left brace of order $p$. Then the subbrace $W\rtimes B$ of $T\wr B$, where $W=\{ f\colon B\longrightarrow T\mid \sum_{b\in B}f(b)=0\}$, is perfect and $\Soc(W\rtimes B)=W\times\{ 0\}$. \end{thm} \begin{proof} Note that \begin{equation}\label{eq:wreath}(h_1,b_1)*(h_2,b_2)=(-h_1+h_1\circ b_1(h_2)-h_2,b_1*b_2),\end{equation} for all $h_1,h_2\in W$ and all $b_1,b_2\in B$. In particular, $$\{0\}\times B=\{0\}\times (B*B)=(\{0\}\times B)*(\{0\}\times B)\subseteq (W\rtimes B)*(W\rtimes B).$$ Let $f_{b_1}\colon B\longrightarrow T$ be the function defined by $f_{b_1}(b_2)=\delta_{b_1,b_2}$ (the Kronecker delta), for all $b_1,b_2\in B$. Note that $\{f_{b}-f_0\mid b\in B\}$ generates the additive group of $W$.
Since $p$ is not a divisor of $|B|$, there exists $|B|^{-1}\in \Z/(p)=T$, and $f_0-|B|^{-1}\sum_{b\in B}f_b\in W$ (note that the additive group of $W$ is a vector space over $\Z/(p)$). Now we have $$(0,b_1)*(f_0-|B|^{-1}\sum_{b\in B}f_b,0)=(b_1(f_0-|B|^{-1}\sum_{b\in B}f_b)-(f_0-|B|^{-1}\sum_{b\in B}f_b),0)$$ and \begin{eqnarray*} b_1(f_0-|B|^{-1}\sum_{b\in B}f_b)(b_2)&=&(f_0-|B|^{-1}\sum_{b\in B}f_b)(b_1'\circ b_2)\\ &=&f_0(b_1'\circ b_2)-|B|^{-1}\sum_{b\in B}f_b(b_1'\circ b_2)\\ &=&f_{b_1}(b_2)-|B|^{-1}\\ &=&f_{b_1}(b_2)-|B|^{-1}\sum_{b\in B}f_b(b_2)\\ &=&(f_{b_1}-|B|^{-1}\sum_{b\in B}f_b)(b_2) \end{eqnarray*} for all $b_1,b_2\in B$. Hence $$(0,b_1)*(f_0-|B|^{-1}\sum_{b\in B}f_b,0)=(f_{b_1}-f_0,0),$$ for all $b_1\in B$. Thus $W\times\{0\}\subseteq (W\rtimes B)*(W\rtimes B)$. Hence $$W\rtimes B=W\times \{0\}+\{0\}\times B\subseteq (W\rtimes B)*(W\rtimes B).$$ Therefore $W\rtimes B$ is perfect. Let $(h_1,b_1)\in \Soc(W\rtimes B)$. Then, by (\ref{eq:wreath}), $h_1\circ b_1(h_2)=h_1+h_2$ and $b_1*b_2=0$, for all $h_2\in W$ and all $b_2\in B$. Hence $$h_1(b)+h_2(b)=h_1(b)\circ b_1(h_2)(b)=h_1(b)\circ h_2(b_1'\circ b)=h_1(b)+ h_2(b_1'\circ b),$$ and thus $$h_2(b)=h_2(b_1'\circ b),$$ for all $h_2\in W$ and all $b\in B$. In particular, $(f_{b_1}-f_0)(b_1)=(f_{b_1}-f_0)(0)$. Suppose that $b_1\neq 0$. Then $1=(f_{b_1}-f_0)(b_1)=(f_{b_1}-f_0)(0)=-1$ in $\Z/(p)$, a contradiction because $p$ is odd. Hence $b_1=0$. Note that by (\ref{eq:wreath}), $$(h_1,0)*(h_2,b_2)=(-h_1+h_1\circ 0(h_2)-h_2,0*b_2)=(-h_1+h_1\circ h_2-h_2,0)=(0,0),$$ for all $h_1,h_2\in W$ and $b_2\in B$. Hence $\Soc(W\rtimes B)=W\times\{0\}$, and the result follows. \end{proof} Note that in Theorem~\ref{thm:perfect}, if $B$ is a left brace, then $W\rtimes B$ also is a left brace. The following result answers Question~\ref{newLeandro} in the negative: \begin{cor} For every positive integer $n$, there exists a finite perfect left brace $B$ such that $\Soc(B/\Soc_n(B))\neq \{0\}$.
\end{cor} \begin{proof} We shall prove the result by induction on $n$. For $n=1$, let $B_0$ be a finite simple non-trivial left brace (see \cite{MR3763276}). Since $B_0$ is non-trivial and simple, the non-zero ideal $B_0^2$ equals $B_0$, so $B_0$ is perfect. Then by Theorem~\ref{thm:perfect}, there exists a perfect finite left brace $B_1$ with non-zero socle. By Theorem~\ref{thm:perfect}, there exists a perfect finite left brace $B_2$ with non-zero socle such that $B_2/\Soc(B_2)\cong B_1$. Therefore $\Soc(B_2/\Soc_1(B_2))\cong\Soc(B_1)\neq \{ 0\}$, and this proves the result for $n=1$. Suppose that $n>1$ and that there exists a perfect finite left brace $B_{n}$ with $\Soc(B_{n}/\Soc_{n-1}(B_{n}))\neq\{0\}$. By Theorem~\ref{thm:perfect}, there exists a perfect finite left brace $B_{n+1}$ such that $\Soc(B_{n+1})\neq\{0\}$ and $B_{n+1}/\Soc(B_{n+1})\cong B_{n}$. Hence \begin{align*} \Soc(B_{n+1}/\Soc_{n}(B_{n+1}))&\cong\Soc\left((B_{n+1}/\Soc(B_{n+1}))/(\Soc_{n}(B_{n+1})/\Soc(B_{n+1}))\right)\\ &\cong\Soc(B_{n}/\Soc_{n-1}(B_{n}))\neq\{0\}, \end{align*} by the inductive hypothesis. By induction the result follows. \end{proof} \section{Skew braces of nilpotent type} \label{nilpotent_type} We first prove that if both groups of a finite skew left brace $A$ are nilpotent, then $A$ can be decomposed as a direct product of skew left braces of prime-power size. A similar result was proved by Byott in the context of Hopf--Galois extensions, see~\cite[Theorem 1]{MR3030514}. \begin{lem}\label{sum} Let $A$ be a skew left brace such that the additive group is a direct sum of ideals $I_1,I_2$, that is $A=I_1+I_2$ and $I_1\cap I_2=\{0\}$. Then the map $f:A\rightarrow I_1\times I_2$ defined by $f(a_1+a_2)=(a_1,a_2)$, for all $a_1\in I_1$ and $a_2\in I_2$, is an isomorphism of skew left braces. \end{lem} \begin{proof} Recall that the operations of the skew left brace $I_1\times I_2$ are defined componentwise. Clearly $f$ is an isomorphism of the additive groups of $A$ and $I_1\times I_2$. Let $a_1\in I_1$ and $a_2\in I_2$.
Since $I_1$ and $I_2$ are ideals we have that $$a_1+a_2-a_1-a_2, a_1*a_2, a_2*a_1\in I_1\cap I_2=\{ 0\},$$ thus $a_1+a_2=a_2+a_1$ and $a_1\circ a_2=a_1+a_2=a_2\circ a_1$. Hence \begin{eqnarray*} f((a_1+a_2)\circ (b_1+b_2))&=&f(a_1\circ a_2\circ b_1\circ b_2)= f(a_1\circ b_1\circ a_2\circ b_2)\\ &=&f(a_1\circ b_1 + a_2\circ b_2)=(a_1\circ b_1 , a_2\circ b_2)\\ &=&(a_1,a_2)\circ (b_1,b_2)=f(a_1+a_2)\circ f(b_1 +b_2), \end{eqnarray*} for all $a_1,b_1\in I_1$ and $a_2,b_2\in I_2$. Therefore, the result follows. \end{proof} \begin{thm} \label{direct} Let $n$ be a positive integer. Let $A$ be a skew left brace such that the additive group is a direct sum of ideals $I_1,\dots ,I_n$, that is every element $a\in A$ is uniquely written as $a=a_1+\dots +a_n$, with $a_j\in I_j$ for all $j$. Then the map $f:A\rightarrow I_1\times\dots\times I_n$ defined by $f(a_1+\dots +a_n)=(a_1,\dots ,a_n)$, for all $a_j\in I_j$, is an isomorphism of skew left braces. \end{thm} \begin{proof} We shall prove the result by induction on $n$. For $n=1$, it is clear. Suppose that $n>1$ and that the result is true for $n-1$. Let $A_1=I_1+\dots +I_{n-1}$. Then $A_1$ is an ideal of $A$ and $A$ is the direct sum of the ideals $A_1$ and $I_n$. By Lemma~\ref{sum}, the map $f_1: A\rightarrow A_1\times I_n$ defined by $f_1(a+a_n)=(a,a_n)$, for all $a\in A_1$ and $a_n\in I_n$, is an isomorphism of skew left braces. By the induction hypothesis, the map \[ f_2:A_1\rightarrow I_1\times \dots\times I_{n-1}, \quad f_2(a_1+\dots +a_{n-1})=(a_1,\dots ,a_{n-1}), \] is an isomorphism of skew left braces. Therefore $f=(f_2\times \id)\circ f_1:A\rightarrow I_1\times\dots\times I_n$ is an isomorphism of skew left braces and $f(a_1+\dots +a_n)=(a_1,\dots ,a_n)$, for all $a_j\in I_j$. Therefore, the result follows by induction. \end{proof} \begin{cor} \label{cor:product} Let $A$ be a finite skew left brace such that $(A,+)$ and $(A,\circ)$ are nilpotent.
Let $I_1,\dots ,I_n$ be the distinct Sylow subgroups of the additive group of $A$. Then $I_1,\dots ,I_n$ are ideals of $A$ and the map $f:A\rightarrow I_1\times\dots\times I_n$ defined by $f(a_1+\dots +a_n)=(a_1,\dots ,a_n)$, for all $a_j\in I_j$, is an isomorphism of skew left braces. \end{cor} \begin{proof} Since $(A,+)$ is nilpotent, for every prime divisor $p$ of the order of $A$, there is a unique Sylow $p$-subgroup $I$ of $(A,+)$. Hence $I$ is a normal subgroup of $(A,+)$, and $\lambda_a(b)\in I$ for all $a\in A$ and $b\in I$. Thus $I$ is a left ideal of $A$, and hence a Sylow $p$-subgroup of $(A,\circ)$. Since $(A,\circ)$ is nilpotent, $I$ is the unique Sylow $p$-subgroup of $(A,\circ)$ and, thus, it is normal in $(A,\circ)$. Therefore $I$ is an ideal of $A$. Hence $I_1,\dots ,I_n$ are ideals of $A$ and clearly the additive group of $A$ is the direct sum of $I_1,\dots ,I_n$. The result follows by Theorem~\ref{direct}. \end{proof} Let $A$ be a skew left brace. Let $G$ be the multiplicative group of $A$ and $X$ be the additive group of $A$. Since $G$ acts on $X$ by automorphisms, one forms the semidirect product $\Gamma=X\rtimes G$ with multiplication \[ (x,g)(y,h)=(x+\lambda_g(y),g\circ h). \] Identifying each $g\in G$ with $(0,g)\in\Gamma$ and each $x\in X$ with $(x,0)\in\Gamma$, we see that \begin{align*} [g,x]&= gxg^{-1}x^{-1}=(0,g)(x,0)(0,g')(-x,0)\\ &=(\lambda_g(x),g)(-\lambda^{-1}_g(x),g') =(\lambda_g(x)-x,0)=\lambda_g(x)-x=g*x. \end{align*} Let \begin{equation} \label{eq:repeated} \begin{aligned} & X_0=X=A^1,\\ & X_{n+1}=[G,X_n]=A^{n+2}\quad\text{for $n\geq0$.} \end{aligned} \end{equation} Thus the elements of the left series of $A$ are indeed iterated commutators of the group $\Gamma$. This observation has strong consequences. Our first application is the following useful result, which was proved by Rump for classical braces using different methods (see the corollary after Proposition 2 of~\cite{MR2278047}).
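Before stating it, the identity $g*x=[g,x]$ inside $\Gamma$ can be checked on a concrete example. The following Python sketch (a minimal verification, not part of the paper; the helper names \texttt{lam}, \texttt{star}, \texttt{gmul} are ours) realizes the skew left brace on $G=S_3$ with $a+b=ba$ and $a\circ b=ab$, as in Section~\ref{perfect}, so that $\lambda_g(x)=gxg^{-1}$ and $g*x=x^{-1}gxg^{-1}$, builds $\Gamma=X\rtimes G$, and verifies the commutator identity for all pairs:

```python
from itertools import permutations

# Skew left brace on G = S_3: a + b := b.a and a o b := a.b,
# the construction used in the section on perfect skew left braces.
# Then lambda_g(x) = -g + g o x = g x g^{-1} and g * x = lambda_g(x) - x.

G = list(permutations(range(3)))
E = tuple(range(3))                    # identity permutation = zero of (A,+)

def mul(p, q):                         # group multiplication: (p.q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):                            # group inverse; also -p, since -a = a^{-1}
    q = [0, 0, 0]
    for i, j in enumerate(p):
        q[j] = i
    return tuple(q)

def add(a, b):                         # a + b = b.a
    return mul(b, a)

def lam(g, x):                         # lambda_g(x) = g x g^{-1}
    return mul(mul(g, x), inv(g))

def star(g, x):                        # g * x = lambda_g(x) - x = x^{-1} g x g^{-1}
    return add(lam(g, x), inv(x))

# Semidirect product Gamma = X >| G with (x,g)(y,h) = (x + lambda_g(y), g o h).
def gmul(u, v):
    (x, g), (y, h) = u, v
    return (add(x, lam(g, y)), mul(g, h))

def ginv(u):
    x, g = u
    return (lam(inv(g), inv(x)), inv(g))

def comm(u, v):                        # [u, v] = u v u^{-1} v^{-1}
    return gmul(gmul(u, v), gmul(ginv(u), ginv(v)))

# Identify g with (0,g) and x with (x,0); then [g,x] = g * x in Gamma.
ok = all(comm((E, g), (x, E)) == (star(g, x), E) for g in G for x in G)
print(ok)
```

The final line checks the identity over all $36$ pairs $(g,x)$; the same sketch works verbatim for any small permutation group in place of $S_3$.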
\begin{pro} \label{pro:pgroups} Let $p$ be a prime and $A$ be a skew left brace of size $p^m$. Then $A$ is left nilpotent. \end{pro} \begin{proof} Let $G$ be the multiplicative group of $A$ and $X$ be the additive group of $A$. Since the semidirect product $\Gamma=X\rtimes G$ is a $p$-group, it is nilpotent. Thus there exists $k$ such that the $k$-repeated commutator $[\Gamma,\Gamma,\dots,\Gamma]$, where $\Gamma$ appears $k$ times, is trivial. Since \[ A^k=[G,\dots,G,X]\subseteq [\Gamma,\dots,\Gamma], \] it follows that $A$ is left nilpotent. \end{proof} The following results follow immediately from theorems of P. Hall: \begin{lem} Let $A$ be a finite skew left brace such that $A^3=0$. Then the additive group of $A^2$ is abelian. In fact $A^2$ is a trivial brace. \end{lem} \begin{proof} The first part follows by \cite[Theorem~6]{Hall}. Note that $(A^2)^2\subseteq A^3=0$, hence $a\circ b=a+b$ for all $a,b\in A^2$, and the result follows. \end{proof} \begin{thm} \label{thm:A2} Let $A$ be a left nilpotent skew left brace. Then the following statements hold: \begin{enumerate} \item The additive group of $A^2$ is locally nilpotent. \item The multiplicative group of $A/\ker\lambda$ is locally nilpotent. \end{enumerate} \end{thm} \begin{proof} Since each element of the left series of $A$ is a repeated commutator, the first claim follows from Hall's theorem \cite[Theorem~4]{Hall}. To prove the second claim, we use the notation above Proposition~\ref{pro:pgroups}. Let $K=[G,X]G\subseteq \Gamma$ and $H=[G,X]X$. Let $C$ be the centralizer of $H$ in $K$. Then by \cite[Theorem~4]{Hall}, $K/C$ is locally nilpotent. Note that, since $X$ is normal in $\Gamma$, $H=X$. Hence $G\cap C$ is the centralizer of $X$ in $G$, that is \begin{eqnarray*}G\cap C&=&\{ g\in G\mid gxg^{-1}=x, \text{ for all } x\in X\}\\ &=&\{ g\in A\mid \lambda_g(x)=x, \text{ for all } x\in A\}=\ker\lambda. \end{eqnarray*} Thus $(GC)/C\cong G/(G\cap C)=G/\ker\lambda$ is locally nilpotent.
\end{proof} We shall introduce some notation. Let $A$ be a skew left brace. We denote by $\gamma^+(a,b)=a+b-a-b$ the commutator of $a,b$ in $(A,+)$, for all $a,b\in A$. Let $B,C$ be two subgroups of $(A,+)$. We define $\gamma^+(B,C)=\langle \gamma^+(b,c)\mid b\in B,\; c\in C\rangle_+$, the additive subgroup generated by the elements $\gamma^+(b,c)$, for $b\in B$ and $c\in C$. We also write $*(a,b)=a*b$ for all $a,b\in A$, and $*(B,C)=B*C=\langle *(b,c)\mid b\in B,\; c\in C\rangle_+$. Let $M$ be the free monoid with basis $\{ \gamma^+, *\}$. Then the elements of $M$ are words in the alphabet $\{ \gamma^+, *\}$, that is, if $m\in M$ then $$m=\epsilon_1\epsilon_2\cdots \epsilon_s,$$ for some non-negative integer $s$ and $\epsilon_i\in \{ \gamma^+, *\}$. In this case, we say that $m$ has degree $s$ and we write $\deg(m)=s$. Furthermore, if $s>0$, we define $$m(a_1a_2\dots a_{s+1})=\epsilon_1(a_1,\epsilon_2(a_2,\dots(\epsilon_s(a_s,a_{s+1}))\dots )),$$ for all $a_1,\dots ,a_{s+1}\in A$, and if $A_1,\dots, A_{s+1}$ are subgroups of $(A,+)$, we define $$m(A_1A_2\dots A_{s+1})=\epsilon_1(A_1,\epsilon_2(A_2,\dots(\epsilon_s(A_s,A_{s+1}))\dots )).$$ Finally we denote by $A_1(t)$ the word $A_1A_1\dots A_1$ of length $t$ in the letter $A_1$. We order $M$ with the degree-lexicographic order, extending $*<\gamma^+$. Note that if $m_2>m_1$ are elements of $M$, then \begin{eqnarray*}\lefteqn{m_2(a_1\dots a_{\deg(m_2)+1})+m_1(b_1\dots b_{\deg(m_1)+1})}\\ &=&m_1(b_1\dots b_{\deg(m_1)+1})+m_2(a_1\dots a_{\deg(m_2)+1})\\ &&-\gamma^+(-m_1(b_1\dots b_{\deg(m_1)+1}),-m_2(a_1\dots a_{\deg(m_2)+1})). \end{eqnarray*} In particular, the elements of the additive subgroup generated by $$\{m(A(\deg(m)+1))\mid m\in M, \text{ with }\deg(m)\geq t\}$$ are of the form $a_1+a_2+\dots +a_s$, where $a_i\in m_i(A(\deg(m_i)+1))$, $\deg(m_1)\geq t$ and $m_1<\dots <m_s$. 
We denote this additive subgroup by $$\sum_{\{m\in M\mid \deg(m)\geq t\}}m(A(\deg(m)+1)).$$ \begin{lem} \label{lem:series} Let $A$ be a skew left brace. Let $G_1=\ker \lambda$, and for $i>1$, let $G_i=[A,G_{i-1}]=\langle a\circ b\circ a'\circ b'\mid a\in A,\; b\in G_{i-1}\rangle$. Let $M$ be the free monoid with basis $\{ \gamma^+, *\}$. Then $$G_n\subseteq \sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)).$$ \end{lem} \begin{proof} Let $a\in \ker\lambda$ and $g\in A$. Note that \begin{equation} \label{eq:ker} \begin{aligned} g\circ a\circ g'\circ a'&=g\circ (a+ g')+ a'\\ &=g\circ a-g-a=g+\lambda_g(a)-g-a. \end{aligned} \end{equation} We shall prove the result by induction on $n$. For $n=1$, $$G_1=\ker\lambda\subseteq A=\sum_{\{m\in M\mid \deg(m)\geq 0\}}m(A(\deg(m)+1)).$$ Let $n>1$ and suppose that $$G_{n-1}\subseteq \sum_{\{m\in M\mid \deg(m)\geq n-2\}}m(A(\deg(m)+1)).$$ Let $g\in A$ and $a\in G_{n-1}$. Then since $G_{n-1}$ is a subgroup of $\ker\lambda$, by (\ref{eq:ker}) we have $$g\circ a\circ g'\circ a'=g+\lambda_g(a)-g-a. $$ Let $a_m\in m(A(\deg(m)+1))$ be such that $$a=a_{m_1}+a_{m_2}+\dots +a_{m_{s}},$$ with $\deg(m_1)\geq n-2$ and $m_1<\dots <m_s$. We have \begin{eqnarray*} g\circ a\circ g'\circ a'&=&g+\lambda_g(a)-g-a\\ &=&g+\lambda_g(a_{m_1}+a_{m_2}+\dots +a_{m_s})-g-(a_{m_1}+a_{m_2}+\dots +a_{m_s})\\ &=&g+\lambda_g(a_{m_1})+\dots +\lambda_g(a_{m_s})-g-(a_{m_1}+a_{m_2}+\dots +a_{m_s})\\ &=&g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_s}+a_{m_s})-g\\ &&-(a_{m_1}+a_{m_2}+\dots +a_{m_s}). \end{eqnarray*} We shall prove that \begin{eqnarray*} \lefteqn{g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_s}+a_{m_s})-g}\\ &&-(a_{m_1}+a_{m_2}+\dots +a_{m_s})\in\sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)) \end{eqnarray*} by induction on $s$. For $s=1$ we have \begin{eqnarray*} g+g*a_{m_1}+a_{m_1}-g-a_{m_1}&=&\gamma^+(g,g*a_{m_1})+g*a_{m_1}+g+a_{m_1}-g-a_{m_1}\\ &=&\gamma^+(g,g*a_{m_1})+g*a_{m_1}+\gamma^+(g,a_{m_1}). 
\end{eqnarray*} Since $\gamma^+(g,g*a_{m_1})\in \gamma^+*m_1(A(\deg(m_1)+3))$, $g*a_{m_1}\in *m_1(A(\deg(m_1)+2))$ and $\gamma^+(g,a_{m_1})\in \gamma^+m_1(A(\deg(m_1)+2))$, we have that $$g+g*a_{m_1}+a_{m_1}-g-a_{m_1}\in\sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)).$$ Suppose that $s>1$ and $g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_{s-1}}+a_{m_{s-1}})-g-(a_{m_1}+a_{m_2}+\dots +a_{m_{s-1}})\in \sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)).$ We have that \begin{eqnarray*} \lefteqn{g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_s}+a_{m_s})-g}\\ &&-(a_{m_1}+a_{m_2}+\dots +a_{m_s})\\ &=&g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_{s-1}}+a_{m_{s-1}})\\ &&+g*a_{m_{s}}+a_{m_{s}}-g-a_{m_s}+g-g-(a_{m_1}+a_{m_2}+\dots +a_{m_{s-1}})\\ &=&g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_{s-1}}+a_{m_{s-1}})\\ &&+g*a_{m_{s}}-\gamma^+(-g,a_{m_{s}})-g-(a_{m_1}+a_{m_2}+\dots +a_{m_{s-1}})\\ &=&g+(g*a_{m_1}+a_{m_1})+\dots +(g*a_{m_{s-1}}+a_{m_{s-1}})-g\\ &&-(a_{m_1}+a_{m_2}+\dots +a_{m_{s-1}})\\ &&+g*a_{m_{s}}-\gamma^+(-g,a_{m_{s}})\\ &&-\gamma^+(a_{m_1}+a_{m_2}+\dots +a_{m_{s-1}}+g,\gamma^+(-g,a_{m_{s}})-g*a_{m_{s}})\\ &&\in \sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)). \end{eqnarray*} Hence $$g\circ a\circ g'\circ a'\in \sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)).$$ Note that $\sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1))$ is a left ideal of $A$. Therefore $$G_{n}\subseteq \sum_{\{m\in M\mid \deg(m)\geq n-1\}}m(A(\deg(m)+1)),$$ and the result follows by induction. \end{proof} The following result generalizes \cite[Theorem~1]{MR3814340}. \begin{thm} \label{thm:left_nilpotent=nilpotent} Let $A$ be a finite skew left brace with nilpotent additive group. Then $A$ is left nilpotent if and only if the multiplicative group of $A$ is nilpotent. \end{thm} \begin{proof} Let us first assume that $(A,\circ)$ and $(A,+)$ are nilpotent. By Corollary~\ref{cor:product}, the skew left brace $A$ is the direct product of skew left braces with prime-power orders. 
By Proposition~\ref{pro:pgroups} all such skew left braces are left nilpotent, hence $A$ is left nilpotent by Lemma~\ref{lem:left_nilpotent:x}. Suppose now that $(A,+)$ is nilpotent and $A$ is left nilpotent. There exist positive integers $n_1,n_2$ such that $A^{n_1}=0$ and $\gamma_{n_2}^+(A)=0$, where $\gamma_{j+1}^+(A)=(\gamma^+)^j(A(j+1))$, using the notation above Lemma~\ref{lem:series}. By Theorem~\ref{thm:A2}, we know that the multiplicative group of $A/\ker\lambda$ is nilpotent. Let $\gamma_1(A)=A$ and for $i>1$ let \[ \gamma_i(A)=[A,\gamma_{i-1}(A)]=\langle a\circ b\circ a'\circ b'\mid a\in A,\; b\in \gamma_{i-1}(A)\rangle. \] Thus there exists a positive integer $k$ such that $\gamma_k(A)\subseteq\ker\lambda$. Using the notation in the proof of Lemma~\ref{lem:series}, we have that $\gamma_{k+j}(A)\subseteq G_{j+1}$ for every nonnegative integer $j$. Hence, by Lemma~\ref{lem:series} $$\gamma_{k+n_1n_2}(A)\subseteq G_{n_1n_2+1}\subseteq \sum_{\{m\in M\mid \deg(m)\geq n_1n_2\}}m(A(\deg(m)+1)).$$ Let $m\in M$ be an element with $\deg(m)\geq n_1n_2$. Note that if $\gamma^+$ appears $t$ times in $m$, then $m(A(\deg(m)+1))\subseteq (\gamma^+)^t(A(t+1))$. In particular, if $t\geq n_2$, then $m(A(\deg(m)+1))=0$. Suppose that $\gamma^+$ appears at most $n_2-1$ times in $m$. Then there exist $m_1,m_2\in M$ such that $m=m_1(*)^{n_1}m_2$. In this case, \begin{align*} m(A(\deg(m)+1))&=m_1(*)^{n_1}(A(\deg(m_1)+n_1)m_2(A(\deg(m_2)+1)))\\ &\subseteq m_1(*)^{n_1}(A(\deg(m_1)+n_1+1))\\ &=m_1(A(\deg(m_1))A^{n_1+1})=0. \end{align*} Hence $\gamma_{k+n_1n_2}(A)=0$. Therefore the multiplicative group of $A$ is nilpotent, and the result follows. \end{proof} The assumption on the nilpotency of the additive group in Theorem~\ref{thm:left_nilpotent=nilpotent} is needed (see Example~\ref{ex:trivial}). \begin{cor} Let $A$ be a finite skew left brace of size $p^n$ for some prime number $p$ and some positive integer $n$.
Then either $A$ is the trivial brace of order $p$ or it is not simple. \end{cor} \begin{proof} By Theorem~\ref{thm:left_nilpotent=nilpotent}, $A$ is left nilpotent. In particular, if $A\neq 0$, then $A^2\neq A$. Since $A^2$ is an ideal, either $A$ is not simple or $A^2=0$. Assume that $A^2=0$. In this case, $a\circ b=a+b$ for all $a,b\in A$. Therefore $[A,A]$ is a proper ideal of $A$. Hence, either $A$ is not simple or $[A,A]=0$. Assume that $A^2=[A,A]=0$. In this case $A$ is a trivial brace and the result follows. \end{proof} \begin{lem} \label{lem:sylow_leftideals} Let $A$ be a finite skew left brace with nilpotent additive group. Let $p$ and $q$ be distinct prime numbers and let $P$ and $Q$ be Sylow subgroups of $(A,+)$ of sizes $p^n$ and $q^m$, respectively. Then $P$, $Q$ and $P+Q$ are left ideals of $A$. \end{lem} \begin{proof} Let us first prove that $P$ is a left ideal. Since $(A,+)$ is nilpotent, $P$ is a normal subgroup of $(A,+)$. Let $a\in A$ and $x\in P$. Then $\lambda_a(x)\in P$, since $\lambda_a$ is an automorphism of $(A,+)$ and $P$ is the unique Sylow $p$-subgroup of the nilpotent group $(A,+)$. Similarly one proves that $Q$ is a left ideal. From this it follows that $P+Q$ is a left ideal. \end{proof} The following is based on~\cite[Theorem 5(1)]{MR3765444}. However, the proof is completely different. \begin{thm} \label{thm:P*Q=0} Let $A$ be a finite skew left brace with nilpotent additive group. Let $p$ and $q$ be distinct prime numbers and let $A_p$ and $A_q$ be Sylow subgroups of $(A,+)$ of sizes $p^n$ and $q^m$, respectively. If $p$ does not divide $q^t-1$ for all $t\in\{1,\dots,m\}$, then $A_p*A_q=0$. In particular, $\lambda_x(y)=y$ for all $x\in A_p$ and $y\in A_q$. \end{thm} \begin{proof} By Lemma~\ref{lem:sylow_leftideals} $A_p$, $A_q$ and $A_p+A_q$ are left ideals of $A$. In particular, $A_p+A_q$ is a skew subbrace of $A$ and $A_p$ and $A_q$ are Sylow subgroups of $(A_p+A_q,\circ)$.
By Sylow's theorem, the number $n_p$ of Sylow $p$-subgroups of the multiplicative group of $A_p+A_q$ is \[ n_p=[A_p+A_q:N]\equiv 1\bmod p, \] where $N=\{g\in A_p+A_q:g\circ A_p\circ g'=A_p\}$ is the normalizer of $A_p$ in the multiplicative group of $A_p+A_q$. Since $[A_p+A_q:N]=q^s$ for some $s\in\{0,\dots,m\}$ and $p$ does not divide $q^t-1$ for all $t\in\{1,\dots,m\}$, it follows that $s=0$ and hence $A_p$ is a normal subgroup of the multiplicative group of $A_p+A_q$. Thus $A_p$ is an ideal of the skew left brace $A_p+A_q$. Since $A_p$ is an ideal of $A_p+A_q$ and $A_q$ is a left ideal, we have that $A_p*A_q\subseteq A_p\cap A_q=0$, and the result follows. \end{proof} \begin{cor} Let $A$ be a skew left brace of size $p_1^{\alpha_1}\cdots p_k^{\alpha_k}$, where $p_1<p_2<\cdots<p_k$ are prime numbers and $\alpha_1,\dots,\alpha_k$ are positive integers. Assume that the additive group of $A$ is nilpotent. Let $A_j$ be the Sylow $p_j$-subgroup of the additive group of $A$. Assume that, for some $j\leq k$, $p_j$ does not divide $p_i^{t_i}-1$ for all $t_i\in\{1,\dots,\alpha_i\}$ and all $i\neq j$. Then $\Soc(A_j)\subseteq \Soc(A)$. \end{cor} \begin{proof} Write $A=A_1+\cdots+A_k$. Let $a\in \Soc(A_j)$ and $b\in A$. Hence there exist elements $b_i\in A_i$, for $i=1,\dots,k$, such that $b=b_1+\dots +b_k$. By Theorem~\ref{thm:P*Q=0}, $\lambda_a(b_i)=b_i$, for all $i\neq j$. Then $\lambda_a(b)=\lambda_a(b_1)+\dots +\lambda_a(b_k)=b_1+\dots +b_k=b$ and hence $a\in \Soc(A)$. Thus the result follows. \end{proof} \section{Braces with cyclic multiplicative group} \label{cyclic} As a consequence of a result of P. Hall one proves that the multiplicative group of a finite left brace is solvable. The following example shows that this fact does not hold for infinite left braces: \begin{example} Let $K$ be a field of characteristic $0$. The Jacobson radical of $M_2(K[[x]])$ is $J=M_2(xK[[x]])$. Thus $(J,+,\circ)$ is a two-sided brace, where \[ A\circ B=AB+A+B \] for all $A,B\in J$.
The map $f\colon J\longrightarrow \GL_2(K[[x]])$ defined by $f(A)=I_2+A$, for all $A\in J$, where $I_2$ is the identity $2\times 2$ matrix, is a monomorphism of groups: $$f(A\circ B)=I_2+AB+A+B=(I_2+A)(I_2+B)=f(A)f(B).$$ By \cite[Lemma~2.8]{W}, the subgroup of $(J,\circ)$ generated by $$\left(\begin{array}{cc} 0&0\\ x&0 \end{array}\right), \quad \left(\begin{array}{cc} 0&x\\ 0&0 \end{array}\right)$$ is free of rank two. Therefore $(J,\circ)$ is not solvable. \end{example} The following result is due to Rump (see the proof of \cite[Proposition~6]{MR2298848}). \begin{thm}[Rump]\label{addZ} Let $A$ be a left brace of type $\Z$. Then either $A$ is the trivial brace isomorphic to $\Z$ or its multiplication is defined by \begin{eqnarray}\label{mult}&&(ma)\circ (na)=((-1)^mn+m)a, \end{eqnarray} for all $m,n\in \Z$, where $a\in A$ is a fixed generator of its additive group. \end{thm} Now we shall study the left braces with multiplicative group isomorphic to $\Z$. Note that these are two-sided braces. Thus this is equivalent to the study of the Jacobson radical rings whose circle group is isomorphic to $\Z$. \begin{lem}\label{fg} Let $R$ be a Jacobson radical ring whose circle group is isomorphic to $\Z$. Then the additive group of $R$ is finitely generated. \end{lem} \begin{proof} Let $x\in R$ be a generator of the circle group $(R,\circ)$ of $R$. Let $y\in R$ be the inverse of $x$ in $(R,\circ)$. Then $$R=\left\{\sum_{i=1}^n {n\choose i} x^{i}\mid n\geq 1\right\}\cup \left\{\sum_{i=1}^n {n\choose i} y^{i}\mid n\geq 1\right\}\cup \{ 0\}.$$ Since $x+y\in R$ one of the following conditions holds: \begin{itemize} \item[(1)] $x+y=0$. In this case, $0=xy=-x^2$, thus $R=\{ zx\mid z\in \Z\}$ and the additive group of $R$ is isomorphic to $\Z$. \item[(2)] There exists a positive integer $n$ such that $$x+y=\sum_{i=1}^n {n\choose i} x^{i}.$$ Since $y\neq 0$, we have that $n>1$.
Then $$0=xy+x+y=x\left(-x+\sum_{i=1}^n {n\choose i} x^{i}\right)+\sum_{i=1}^n {n\choose i} x^{i}.$$ Hence $R=x\Z+x^2\Z+\dots +x^n\Z$. \item[(3)] There exists a positive integer $n$ such that $$x+y=\sum_{i=1}^n {n\choose i} y^{i}.$$ Since $x\neq 0$, we have that $n>1$. Then $$0=yx+x+y=y\left(-y+\sum_{i=1}^n {n\choose i} y^{i}\right)+\sum_{i=1}^n {n\choose i} y^{i}.$$ Hence $R=y\Z+y^2\Z+\dots +y^n\Z$. \end{itemize} Therefore the result follows. \end{proof} \begin{thm}\label{multZ} Let $R$ be a Jacobson radical ring whose circle group is isomorphic to $\Z$. Then $ab=0$ for all $a,b\in R$. \end{thm} \begin{proof} Let $p$ be a prime number. Since the additive group of $R$ is infinite and finitely generated, $pR$ is a proper ideal of $R$ and $R/pR$ has order $p^m$ for some positive integer $m$. Since the only simple left brace of order a power of $p$ is the trivial brace of order $p$, there exists a maximal ideal $I_p$ of $R$ such that $pR\subseteq I_p$ and $R/I_p$ has order $p$. Let $x$ be a generator of the circle group of $R$. Then it is clear that the circle group of $I_p$ is generated by $x^{\circ p}$ (where $x^{\circ n}=x\circ\dots\circ x$ ($n$ times)). Let $a,b\in R$. We have that $(a+I_p)(b+I_p)=I_p$ because $R/I_p$ is a ring with zero multiplication. Hence $ab\in I_p$, for all prime numbers $p$. Now $$\bigcap_{p \text{ prime}}I_p=\bigcap_{p \text{ prime}}\{x^{\circ zp}\mid z\in \Z\}=\{0\}.$$ Therefore $ab=0$, and the result follows. \end{proof} As a consequence we have the following result. \begin{thm} \label{thm:Z} Let $A$ be a left brace with multiplicative group isomorphic to $\Z$. Then $A$ is a trivial brace; in particular, the additive group of $A$ is isomorphic to $\Z$. \end{thm} A natural question arises: Is it possible to extend Theorem~\ref{thm:Z} to skew left braces? One can prove Theorem~\ref{thm:Z} for skew left braces of finite multipermutation level. However, the following result shows that nothing new is covered in this case.
\begin{thm} \label{thm:ZrightNilpotent} Let $A$ be a skew left brace with multiplicative group isomorphic to $\Z$. Then $A$ has finite multipermutation level if and only if $A$ is of abelian type. \end{thm} \begin{proof} We proved in Theorem~\ref{thm:Z} that skew left braces of abelian type with multiplicative group isomorphic to $\Z$ have finite multipermutation level. Let us assume that $A$ has finite multipermutation level. Let $c\in\Soc(A)\setminus\{0\}$. Then $c^{\circ k}=kc$ for all $k\in\N$. Since $(A,\circ)$ is torsion-free, $c^{\circ k}\ne c^{\circ l}$ if $k\ne l$. Observe that $\lambda_a(c^{\circ k})=c^{\circ k}$ for all $a\in A$ and $k\in\N$, because $(A,\circ)$ is commutative. For $k>0$ let $I_k$ be the ideal of $A$ generated by $c^{\circ k}$. Then $I_k=\{klc:l\in\Z\}$ and $\bigcap_{k>0} I_k=\{0\}$. Let $k>0$. Since $c\ne0$, $A/I_k$ is a finite skew left brace of finite multipermutation level. By Theorem~\ref{thm:mpl&right_nilpotent}, $A/I_k$ is of nilpotent type. Thus Corollary~\ref{cor:product} implies that $A/I_k$ is a direct product of skew left braces of prime-power size. Using results of T. Kohl~\cite{MR1644203} quoted in \cite[Example A.7]{MR3763907}, such skew left braces are either of abelian type or of size $2^\alpha$ for some $\alpha$. Let us assume that $A$ is not of abelian type and let $a,b\in A$ be such that $a+b-a-b\ne0$. For each $k>0$ there exists $m(k)\in\N$ such that \[ 2^{m(k)}(a+b-a-b)\in I_k. \] Since $I_k=\{klc:l\in\Z\}$, for each $a,b\in A$ and each $k>0$ there are $m(k)\in\N$ and $q(k)\in\Z$ such that \[ 2^{m(k)}(a+b-a-b)=q(k)kc. \] Let $k$ be an odd prime number coprime with $3q(3)$. Then \[ (2^{m(3)}q(k)k-2^{m(k)}q(3)3)(a+b-a-b)=0. \] Since $k$ is an odd prime number coprime with $3q(3)$, it follows that there exists $n\ne0$ such that $n(a+b-a-b)=0$. Then $nq(3)3c=0$, a contradiction. Therefore $A$ is a skew left brace of abelian type.
\end{proof} \begin{rem} In~\cite{Greenfeld}, Greenfeld showed that the adjoint group of a Jacobson radical algebra that is not nilpotent cannot be a finite product of cyclic groups. His results are general but hold only for algebras over fields; therefore our results do not follow from~\cite{Greenfeld}. \end{rem} The following result shows that Theorem~\ref{thm:Z} cannot be extended to skew left braces. Recall that the \emph{infinite dihedral group} is the group \[ \D_{\infty}=\langle r,s:srs=r^{-1},\,s^2=1\rangle\simeq\Z\rtimes\Z/(2). \] \begin{thm} \label{thm:infinite_dihedral} There exists a skew left brace with multiplicative group isomorphic to $\Z$ and additive group isomorphic to the infinite dihedral group $\D_\infty$. \end{thm} \begin{proof} Let $G=\langle g\rangle\simeq\Z$. A direct calculation shows that the operations \[ g^k+g^l=g^{k+(-1)^kl},\quad g^k\circ g^l=g^{k+l},\quad k,l\in\Z, \] turn $G$ into a skew left brace with additive group isomorphic to the infinite dihedral group $\D_{\infty}$ and multiplicative group isomorphic to $\Z$. \end{proof} \section{Indecomposable solutions} \label{indecomposable} Let $A$ be a skew left brace and $a\in A$. We say that the skew left brace $A$ is generated by $a$ if the smallest sub skew left brace of $A$ containing $a$ is $A$ itself. Let $(X, r)$ be a non-degenerate set-theoretic solution of the Yang--Baxter equation, where $r(x, y) = (\sigma_x(y), \tau_y(x))$. Recall from \cite{MR1848966} that $(X, r)$ is said to be decomposable if there exist disjoint non-empty subsets $X_1$ and $X_2$ of $X$ such that $X = X_1\cup X_2$ and $r(X_i\times X_j) = X_j\times X_i$ for all $i, j\leq 2$. If it is not possible to find such subsets $X_1$ and $X_2$ of $X$, the solution $(X, r)$ is said to be indecomposable. By the orbit of an element $z\in X$ we will mean the smallest subset $Y$ of $X$ such that $z\in Y$ and $\sigma_x(y),\sigma^{-1}_x(y),\tau_x(y),\tau^{-1}_x(y)\in Y$, for all $y\in Y$ and $x\in X$.
That is, if $H$ is the subgroup of the symmetric group $\Sym(X)$ over $X$ generated by $\sigma_x,\tau_x$, for all $x\in X$, then the orbit of $z\in X$ is $Y=\{ h(z)\colon h\in H\}$. Note that a non-degenerate set-theoretic solution $(X, r)$ of the Yang--Baxter equation can be decomposed into orbits $X_i$, for $i\in I$, such that $r(X_i\times X_j) = X_j\times X_i$ for all $i, j\in I$. Each restriction $(X_i, r|_{X_i\times X_i})$ is again a non-degenerate set-theoretic solution of the Yang--Baxter equation. However, such restricted solutions need not be indecomposable. \begin{example} Let $(A,+,\cdot)$ be a commutative nilpotent ring with generators $x, y$ and relations $x + x = y + y = 0$, $x^2 = y^2 = 0$. Let $(A,+,\circ)$ be the associated brace and $(A, r_A)$ be the associated involutive non-degenerate set-theoretic solution. Then $Y = \{ x, x + xy\}$ is an orbit. Observe that the solution $(Y, r|_{Y\times Y})$ is decomposable and $Y = \{x\}\cup\{ x + xy\}$ is the decomposition of $Y$ into its orbits. \end{example} Recall that if $A$ is a skew left brace, then its associated solution $(A,r_A)$ is defined by $r_A(a,b)=(\sigma_a(b),\tau_b(a))$, for all $a,b\in A$, where $\sigma_a(b)=\lambda_a(b)$ and $\tau_b(a)=\lambda_a(b)'\circ a\circ b$ (see \cite{MR3647970}). In \cite[Theorem~3.1]{MR3647970} it is proved that $(A,r_A)$ is a non-degenerate set-theoretic solution of the Yang--Baxter equation. \begin{rem}\label{orbits} Let $(X,r)$ be an involutive non-degenerate set-theoretic solution of the Yang--Baxter equation. We write $r(x,y)=(\sigma_x(y),\tau_y(x))$. Since $r$ is involutive we have that $\tau_y(x)=\sigma^{-1}_{\sigma_x(y)}(x)$ and $\sigma_a(b)=\tau^{-1}_{\tau_b(a)}(b)$. Note that the orbit of $x\in X$ is \[ O_x=\{ \sigma_{y_1}\tau_{y_2}\dots\sigma_{y_{2m-1}}\tau_{y_{2m}}(x)\colon y_1,\dots y_{2m}\in X\cup \{ 0\}\}, \] where $0\notin X$ and $\sigma_0=\tau_0=\id_X$.
This is because $$\sigma^{-1}_y(z)=\sigma^{-1}_{\sigma_z(\sigma^{-1}_z(y))}(z)=\tau_{\sigma^{-1}_z(y)}(z)\in O_x,$$ and $$\tau^{-1}_y(z)=\tau^{-1}_{\tau_z(\tau^{-1}_z(y))}(z)=\sigma_{\tau^{-1}_z(y)}(z)\in O_x,$$ for all $z\in O_x$ and all $y\in X$. Therefore our definition of the orbit of an element of a non-degenerate set-theoretic solution of the Yang--Baxter equation coincides with the definition of orbit in \cite[Section~2.1]{MR3771874} in the involutive case. It is easy to see that these definitions of orbits also coincide for finite non-degenerate set-theoretic solutions of the Yang--Baxter equation. However, our definition of orbit does not coincide with the definition of orbit in \cite[Section~2.1]{MR3771874} for arbitrary infinite non-degenerate set-theoretic solutions of the Yang--Baxter equation, as the following example shows. \end{rem} \begin{exa} Consider the map $r\colon\Z\times \Z\rightarrow \Z\times \Z$ defined by $r(a,b)=(b+1,a+1)$, for all $a,b\in\Z$. It is easy to check that $(\Z,r)$ is a non-degenerate set-theoretic solution of the Yang--Baxter equation. With our definition of orbit, the orbit of every element $a\in \Z$ is $\Z$. With the definition of orbit in \cite[Section~2.1]{MR3771874}, the orbit of $a\in \Z$ is \[ X_a=\{a+n\colon n \text{ is a non-negative integer}\}. \] Note that the restriction $r'=r|_{X_a\times X_a}\colon X_a\times X_a\rightarrow X_a\times X_a$ of $r$ to $X_a\times X_a$ is a non-bijective map. In fact, $(X_a, r')$ is a non-bijective set-theoretic solution of the Yang--Baxter equation. Furthermore, if we write $r'(b,c)=(\sigma_b(c),\tau_c(b))$, for all $b,c\in X_a$, then $\sigma_b\colon X_a\rightarrow X_a$ is not bijective. \end{exa} \begin{pro}\label{1new} Let $A$ be a skew left brace generated (as a skew left brace) by an element $x$. Let $X=\{\lambda_a(x)\colon a\in A\}$. Then $A=\langle X\rangle_+=\langle X\rangle_{\circ}$. \end{pro} \begin{proof} Note that $x=\lambda_0(x)\in X$.
Since $\lambda:(A,\circ)\rightarrow\Aut(A,+)$ is a homomorphism of groups, it is clear that $\lambda_a(\langle X\rangle_+)\subseteq \langle X\rangle_+$ for all $a\in A$. Let $y,z\in \langle X\rangle_+$. We have that $$y\circ z'=y+\lambda_y(z')=y+\lambda_y(-\lambda_{z'}(z))\in \langle X\rangle_+.$$ Hence $\langle X\rangle_+$ is a sub skew left brace of $A$ containing $x$ and thus $A=\langle X\rangle_+$. Let $t\in \langle X\rangle_{\circ}$. Then $t=\lambda_{a_1}(x)^{\varepsilon_1}\circ\dots\circ \lambda_{a_n}(x)^{\varepsilon_n}$, for some $a_j\in A$ and $\varepsilon_j\in \{ {}',1\}$ (where $a^1=a$). We shall prove that $\lambda_a(t)\in \langle X\rangle_{\circ}$ by induction on $n$. For $n=1$, we may assume that $t=\lambda_{a_1}(x)'$. In this case \begin{align*}\lambda_a(t)&=\lambda_a(-\lambda_t(\lambda_{a_1}(x)))=-\lambda_{a\circ t\circ a_1}(x)\\ &=\lambda_{(-\lambda_{a\circ t\circ a_1}(x))'}(\lambda_{a\circ t\circ a_1}(x))'\in \langle X\rangle_{\circ} \end{align*} Suppose that $n>1$ and that $\lambda_b(\lambda_{b_1}(x)^{\nu_1}\circ\dots\circ \lambda_{b_{n-1}}(x)^{\nu_{n-1}})\in \langle X\rangle_{\circ}$, for all $b,b_1,\dots, b_{n-1}\in A$ and $\nu_j\in\{ {}',1\}$. Thus \begin{align*} \lambda_a(t)&=\lambda_a(\lambda_{a_1}(x)^{\varepsilon_1}+\lambda_{\lambda_{a_1}(x)^{\varepsilon_1}}(\lambda_{a_2}(x)^{\varepsilon_2}\circ\dots\circ \lambda_{a_n}(x)^{\varepsilon_n}))\\ &=\lambda_a(\lambda_{a_1}(x)^{\varepsilon_1})+\lambda_{a\circ\lambda_{a_1}(x)^{\varepsilon_1}}(\lambda_{a_2}(x)^{\varepsilon_2}\circ\dots\circ \lambda_{a_n}(x)^{\varepsilon_n})\\ &=\lambda_a(\lambda_{a_1}(x)^{\varepsilon_1})\circ\lambda_{(\lambda_a(\lambda_{a_1}(x)^{\varepsilon_1}))'\circ a\circ\lambda_{a_1}(x)^{\varepsilon_1}}(\lambda_{a_2}(x)^{\varepsilon_2}\circ\dots\circ \lambda_{a_n}(x)^{\varepsilon_n})\in \langle X\rangle_{\circ}, \end{align*} by the inductive hypothesis. Therefore $\lambda_a(\langle X\rangle_{\circ})\subseteq \langle X\rangle_{\circ}$, for all $a\in A$. 
Let $t,z\in \langle X\rangle_{\circ}$. We have that $-t+z=\lambda_t(t'\circ z)\in \langle X\rangle_{\circ}$. Hence $\langle X\rangle_{\circ}$ is a sub skew left brace of $A$ containing $x$ and thus $A=\langle X\rangle_{\circ}$, and the result follows. \end{proof} As a consequence we obtain a generalization of \cite[Theorem~5.4]{MR3771874} to skew left braces. In particular, by Remark~\ref{orbits}, we give a positive answer to \cite[Question~5.6]{MR3771874}. \begin{pro} Let $B$ be a skew left brace and let $x\in B$. Let $A = B(x)$ be the smallest sub skew left brace of $B$ containing $x$. Let $(A, r_A)$ be the solution associated to the skew left brace $A$ and let $X$ be the orbit of $x$ in $(A, r_A)$. Then $(X, r_A|_{X\times X})$ is indecomposable. \end{pro} \begin{proof} Recall that $r_A(a,b)=(\sigma_a(b), \tau_b(a))$, where $\sigma_a(b)=\lambda_a(b)$ and $\tau_b(a)=\lambda_a(b)'\circ a\circ b$, for all $a,b\in A$. By \cite[Corollary~1.10]{MR3647970}, the map $\lambda\colon (A,\circ)\rightarrow \Aut(A,+)$ defined by $\lambda(a)=\lambda_a$, for all $a\in A$, is a homomorphism of groups. By \cite[Lemma~2.4]{Bachiller3}, the map $\tau\colon (A,\circ)\rightarrow \Sym(A)$ defined by $\tau(a)=\tau_a$ is an antihomomorphism of groups. Let $X_1=\{ \lambda_a(x)\colon a\in A\}$. Note that $X_1\subseteq X$. By Proposition~\ref{1new}, $A=\langle X_1\rangle_\circ$. Let $z,t\in X$. Hence there exist $a_1,\dots, a_{2n},b_1,\dots ,b_{2m}\in X_1\cup X'_1\cup \{ 1\}$, such that $$z=\lambda_{a_1}\tau_{a_2}\cdots \lambda_{a_{2n-1}}\tau_{a_{2n}}(x)\;\mbox{ and }\; t=\lambda_{b_1}\tau_{b_2}\cdots \lambda_{b_{2m-1}}\tau_{b_{2m}}(x),$$ where $X'_1=\{ y'\colon y\in X_1\}$. Thus $$ t=\lambda_{b_1}\tau_{b_2}\cdots \lambda_{b_{2m-1}}\tau_{b_{2m}}\tau_{a'_{2n}}\lambda_{a'_{2n-1}}\cdots \tau_{a'_{2}}\lambda_{a'_{1}}(z).$$ Therefore the result follows.
\end{proof} \begin{pro} Let $(X, r)$ be a non-degenerate set-theoretic solution of the Yang--Baxter equation, and suppose that $(X, r)$ is decomposable with $X=Y\cup Z$. Then $Y$ and $Z$ are unions of orbits. In particular, every non-degenerate solution with a unique orbit is indecomposable. \end{pro} \begin{proof} If $(X,r)$ is a decomposable non-degenerate solution, then $X$ is the union of two disjoint non-empty sets $Y$ and $Z$ such that $r(Y\times Y)=Y\times Y$, $r(Z\times Z)=Z\times Z$, $r(Y\times Z)=Z\times Y$ and $r(Z\times Y)=Y\times Z$. Therefore $r(X\times Y)=Y\times X$ and $r(Y\times X)=X\times Y$. We write $r(x,y)=(\sigma_x(y),\tau_y(x))$. Let $y\in Y$ and $x\in X$. Hence we have $\sigma_x(y)\in Y$ and $\tau_x(y)\in Y$. Similarly $\sigma_x(z),\tau_x(z)\in Z$ for all $z\in Z$ and $x\in X$. Since $\sigma_x(\sigma^{-1}_x(y))=y\in Y$ for all $y\in Y$ and $x\in X$, we have that $\sigma^{-1}_x(y)\in X\setminus Z=Y$. Similarly, $\tau^{-1}_x(y)\in Y$ for all $y\in Y$ and all $x\in X$. Hence $Y$ and $Z$ are unions of orbits of elements. In particular, every non-degenerate solution with a unique orbit is indecomposable. \end{proof} \section*{Acknowledgements} The first-named author was partially supported by the grants MINECO-FEDER MTM2017-83487-P and AGAUR 2017SGR1725 (Spain). The second-named author is supported by the ERC Advanced grant 320974 and EPSRC Programme Grant EP/R034826/1. The third-named author is supported by PICT-201-0147, MATH-AmSud 17MATH-01 and ERC Advanced grant 320974. We thank E. Acri, N. Byott and E. Jespers for useful comments. \bibliographystyle{abbrv}
\section{Introduction} Green peas (GPs) are a local population of compact ($<3$\,kpc), low-mass ($10^{8-10}M_\odot$) galaxies with strong optical emission lines \citep{Cardamone09,Izotov11} found at redshifts $z<0.5$. Their popular name was coined by the citizen science {\em Galaxy Zoo} project \citep{Lintott08}, where the targets appear unresolved in the Sloan Digital Sky Survey (SDSS) images and have green colours due to the strong [\ion{O}{iii}]$\,\lambda5007$\ nebular emission line, with equivalent widths as large as $\sim\!1500$\,\AA. \citet{Izotov11} have shown that green peas form a subset of luminous compact galaxies that are present in the SDSS over a larger redshift range, $0.02\!\lesssim\!z\!\lesssim\!0.65$. Their compactness, low masses, sub-solar metallicities in the range 12\,+\,log(O/H)\,$\sim$\,7.5\,--\,8.5, and H$\alpha$\ equivalent widths exceeding hundreds of \AA\ \citep{Izotov11} make green peas similar to high-redshift Lyman-break galaxies (LBGs) and Lyman-alpha emitters (LAEs). Green peas have recently drawn attention due to their escape of ionizing Lyman continuum (LyC) radiation. The LyC detected leaking into intergalactic space shows that starburst galaxies could have played a role in the cosmic reionization. Such detections have been rare to date: while six out of the six targeted green peas leak $6-46$\,\% of their LyC \citep{Izotov16,Izotov16b,Izotov18}, only four additional low-$z$ LyC leakers have been found over the past two decades, and their escape fractions are low, $f_\mathrm{esc}$\/(LyC)\,=\,1\,--\,4\% \citep{Bergvall06,Leitet13,Borthakur14,Leitherer16,Puschnig17}.
At high redshift, the situation is even more challenging; numerous LyC leaking candidates have been refuted as lower-redshift interlopers \citep{Siana15}, and stringent upper limits have been set on the escape fractions for $z\!\sim\!1\!-\!2$ galaxies using the Galaxy Evolution Explorer (GALEX) and Hubble Space Telescope (HST) imaging catalogues \citep{Rutkowski16,Rutkowski17}. Only three convincing spectroscopic LyC detections \citep{deBarros16,Vanzella16,Shapley16,Vanzella18} and one imaging detection \citep{Bian17} have recently been achieved at $z\!\sim\!2-4,$ reaching, however, large escape fractions, $f_\mathrm{esc}$\/(LyC)\,$>50$\%, for each of these galaxies. The Lyman continuum escape from green peas had been suspected due to their high star-formation rates, compactness, and high [\ion{O}{iii}]$\,\lambda5007$/[\ion{O}{ii}]$\,\lambda3727$\ flux ratios, unusual for local galaxies, which could be signatures of density-bounded \ion{H}{ii}\ regions \citep{Jaskot13,Nakajima13,Nakajima14,Stasinska15}. Furthermore, most of the green peas were known to be strong Lyman-alpha (Ly$\alpha$) emitters with unusual Ly$\alpha$\ line profiles consisting of narrow, double-peaked emission lines, and weak ultraviolet (UV) absorption lines of low-ionization-state metals \citep{Jaskot14,Henry15,Verhamme15}. These UV properties are consistent with a low \ion{H}{i}\ content, leading to the ionizing radiation escape, as was theoretically demonstrated by \citet{Verhamme15}, and observationally confirmed by \citet{Verhamme17} and \citet{Chisholm17}. The Ly$\alpha$\ line of hydrogen (1215.67\,\AA) is one of the primary tools for detecting high-$z$ galaxies \citep[e.g.][]{Ouchi09,Sobral15,Matthee15,Zitrin15,Oesch16,Bagley17}. It is also a powerful tool to study conditions in the interstellar medium (ISM), both at low and high redshift \citep[e.g.][]{Hayes13,Hayes14,Verhamme15,Guaita17}.
Ly$\alpha$\ resonantly scatters off neutral hydrogen, which results in strong radiative transfer effects both in real and frequency space. The escape of Ly$\alpha$\ from galaxies is a complex, multi-parameter problem: Ly$\alpha$\ photons trapped by scattering are more susceptible to dust absorption. On the other hand, outflows, low dust contents, and low \ion{H}{i}\ column densities help their escape \citep[e.g.][]{Kunth98,Shapley03,Atek08, Verhamme08,Scarlata09,Wofford13,Hayes14,Hayes15,Henry15,Rivera15}. Building on the early analytical works \citep{Adams72,Neufeld90}, numerical Ly$\alpha$\ radiation transfer models were needed to demonstrate the effects of the ISM conditions on the Ly$\alpha$\ spectral profiles. Monte Carlo codes computing the Ly$\alpha$\ transfer in simplified plane-parallel or spherical geometries proved to be useful for this task at a relatively low computational cost \citep{Ahn01,Verhamme06,Dijkstra06, Barnes10,Dijkstra12,Laursen13,Duval14,Zheng14, Behrens14,Verhamme15,Dijkstra16,Gronke15,Gronke16,Gronke16b}. Application of the models to real galaxies has successfully reproduced most of the observed Ly$\alpha$\ profile features \citep[e.g.][]{Verhamme08,Dessauges10, Noterdaeme12,Krogager13,Hashimoto15,Martin15,Yang16,Yang17}. The similarity of green peas to high-redshift LBGs and LAEs \citep{Cardamone09,Izotov11,Jaskot13,Nakajima14, Schaerer16} makes them important low-$z$ laboratories for studying the LyC and Ly$\alpha$\ escape mechanisms, essential for understanding the formation and evolution of galaxies across the cosmic history. Various aspects of the Ly$\alpha$\ emission in green peas were addressed by \citet{Jaskot14}, \citet{Henry15}, \citet{Yang16}, \citet{Yang17}, \citet{Verhamme15}, and \citet{Verhamme17}. Radiative transfer models were used by \citet{Yang16,Yang17} to interpret Ly$\alpha$\ profiles of fifty-five GPs. Their modelling generally achieved acceptable fits of the single-shell homogeneous models to the observed GP profiles.
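The scattering-trapping effect described above can be illustrated with a minimal Monte Carlo sketch (our toy illustration, not one of the cited radiative transfer codes). It assumes coherent, isotropic scattering in a static, uniform sphere of radial optical depth $\tau$ and simply counts the scatterings a photon undergoes before escaping; real Ly$\alpha$\ transfer additionally redistributes the photons in frequency, which is omitted here.

```python
import math
import random

def mean_scatterings(tau, n_photons=2000, seed=0):
    """Toy Monte Carlo: photons start at the centre of a uniform sphere
    of radial optical depth tau and scatter coherently and isotropically;
    returns the mean number of scatterings before escape."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_photons):
        x = y = z = 0.0
        n_scat = 0
        while True:
            # draw an isotropic propagation direction
            mu = rng.uniform(-1.0, 1.0)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - mu * mu)
            # exponential free path with mean 1/tau (sphere radius = 1)
            step = rng.expovariate(tau)
            x += step * s * math.cos(phi)
            y += step * s * math.sin(phi)
            z += step * mu
            if x * x + y * y + z * z >= 1.0:
                break  # photon escapes the sphere
            n_scat += 1  # interaction inside the sphere: scatter again
        total += n_scat
    return total / n_photons

# the scattering count grows steeply with optical depth,
# which is what makes trapped photons vulnerable to dust absorption
print(mean_scatterings(2.0), mean_scatterings(8.0))
```

The roughly quadratic growth of the scattering count with $\tau$ in this non-resonant toy model already suggests why the dust and \ion{H}{i}\ contents regulate the Ly$\alpha$\ escape; resonant frequency redistribution modifies the scaling but not the qualitative picture.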
In this paper, we reinterpret the archival HST Cosmic Origins Spectrograph (COS) observations of Ly$\alpha$\ from twelve green peas studied by \citet{Yang16}. Using an independent radiative transfer code \citep{Verhamme06}, we extend this study by including observational ISM constraints. If we apply no constraints, we reproduce the \citet{Yang16} results and successfully fit the observed Ly$\alpha$\ profiles. However, we show that the ISM parameters characterizing the best-fitting models are in disagreement with those measured from independent UV and optical data. If we impose the observational constraints in the fitting process, the homogeneous shell models do not provide good fits to the data. We evaluate the mismatches and discuss the validity of the models for these peculiar spectra. We structure the paper as follows. We describe the data in Sect.\,\ref{sec_data} and the Ly$\alpha$\ radiative transfer models in Sect.\,\ref{sec_rt}. We present the model fitting results both with and without the application of observational constraints in Sect.\,\ref{sec_results}. We discuss the differences between modelled and observed ISM parameters, their correlations, and model limitations in Sect.\,\ref{sec_discussion}. \section{Data sample} \label{sec_data} \begin{table}[t!] 
\begin{center}
\caption{Green pea sample.}
\begin{tabularx}{0.4\textwidth}{cccc}
\toprule \toprule
ID & RA & DEC & $z$\tablefootmark{(a)} \\
\midrule
GP\,0303 & 03 03 21.41 & $-$07 59 23.2 & 0.16489 \\
GP\,0816 & 08 15 52.00 & $+$21 56 23.6 & 0.14095 \\
GP\,0911 & 09 11 13.34 & $+$18 31 08.2 & 0.26224 \\
GP\,0926 & 09 26 00.44 & $+$44 27 36.5 & 0.18070 \\
GP\,1054 & 10 53 30.80 & $+$52 37 52.9 & 0.25265 \\
GP\,1133 & 11 33 03.80 & $+$65 13 41.4 & 0.24140 \\
GP\,1137 & 11 37 22.14 & $+$35 24 26.7 & 0.19440 \\
GP\,1219 & 12 19 03.98 & $+$15 26 08.5 & 0.19561 \\
GP\,1244 & 12 44 23.37 & $+$02 15 40.4 & 0.23942 \\
GP\,1249 & 12 48 34.63 & $+$12 34 02.9 & 0.26340 \\
GP\,1424 & 14 24 05.72 & $+$42 16 46.3 & 0.18480 \\
GP\,1458 & 14 57 35.13 & $+$22 32 01.8 & 0.14859 \\
\bottomrule
\end{tabularx} \\
\tablefoot{ $(a)$ Derived from Gaussian fitting of multiple SDSS emission lines. Adopted from \citet{Henry15}, and derived in a consistent way here for GP\,0816 and GP\,1458 that were not part of their sample. We assume a conservative error of 40\,km\,s$^{-1}$\ due to wavelength calibration. }
\label{tab_sample}
\end{center}
\end{table}
\subsection{Observations and archival data}
\label{sec_obs}
\paragraph{\bf HST Ly$\alpha$\ data:} We use archival far-ultraviolet (FUV) spectra of twelve green pea galaxies at redshift $z\!\!\sim\!\!0.2$ (Table~\ref{tab_sample}), observed with the HST under programmes GO\,12928 (PI A.~Henry), GO\,13293 (PI A.~Jaskot), and GO\,11727 (PI T.~Heckman). The Ly$\alpha$\ spectra were obtained with the $2.5\arcsec$ Primary Science Aperture (PSA) of the Cosmic Origins Spectrograph (COS) onboard the HST, using the medium-resolution grating G160M. We use the standard pipeline-reduced data obtained through the Mikulski Archive for Space Telescopes (MAST). Details of the observations and various target properties were included in \citet{Henry15}, \citet{Jaskot14}, and \citet{Heckman11}.
Additional information on the target GP\,0926 from the HST imaging and ground-based spectroscopy is available in \citet{Basu-Zych09}, \citet{Goncalves10}, \citet{Hayes13}, \citet{Hayes14}, \citet{Rivera15}, and \citet{Herenz16}. The COS spectral resolution depends on the size of the source: the resolving power varies between $R\!=\!16\,000$ ($<20$\,km\,s$^{-1}$) for a point source and $R\!=\!1500$ ($200$\,km\,s$^{-1}$) for a uniformly filled aperture (see the COS handbook). The COS acquisition near-UV images reveal that the green pea diameters are $\lesssim1\arcsec$ \citep{Henry15}. We assume that the Ly$\alpha$\ emission extent is typically a factor of two to four larger than the stellar continuum extent, based on the analysis of the GP cross-dispersion sizes in the two-dimensional COS spectra \citep{Yang17sizes}. A similar result was achieved by deep HST Advanced Camera for Surveys (ACS) imaging of nearby star-forming galaxies \citep{Hayes13,Hayes14}. We estimate the COS resolution for GPs to be $\sim\!100$\,km\,s$^{-1}$, under the assumption that the Ly$\alpha$\ emission distribution is peaked, as is usually the case in known star-forming galaxies \citep{Hayes14}. This is consistent with the appearance of the Ly$\alpha$\ spectra: sharp peaks and troughs on the one hand, and the lack of more detailed Ly$\alpha$\ sub-features on the other. We have rebinned the COS spectra to a sampling of 25\,km\,s$^{-1}$. \paragraph{\bf Ancillary HST UV measurements:} Aside from Ly$\alpha$, the COS FUV spectra include a series of absorption lines of low-ionization-state (LIS) metals, such as \ion{Si}{ii}. Due to the low ionization potential of these species, the lines provide valuable information on the geometry and kinematics of the \ion{H}{i}\ gas, where Ly$\alpha$\ propagates.
The LIS lines of our sample were analysed in \citet{Jaskot14}, \citet{Henry15}, and \citet{Yang16}, and we adopt here their kinematic parameters to describe the \ion{H}{i}\ medium for Ly$\alpha$\ modelling (Sect.~\ref{sec_constrained}). \paragraph{\bf Ancillary optical SDSS data:} We use the optical Sloan Digital Sky Survey (SDSS) spectra to measure the emission-line redshifts \citep{Henry15} and to estimate parameters of the intrinsic Ly$\alpha$\ line, that is, the line as it would appear before radiation transfer. If both Ly$\alpha$\ and the Balmer lines are produced by the same recombination process, their respective luminosities are tied together through scaling laws, set by atomic physics. While Ly$\alpha$\ undergoes a resonant radiative transfer in neutral hydrogen, Balmer lines travel through the medium unaltered by \ion{H}{i}, and are only attenuated by dust. The observed H$\alpha$\ and/or H$\beta$\ line profiles corrected for the instrumental dispersion thus carry information about the intrinsic Ly$\alpha$\ line width. Their flux corrected for dust absorption then provides an estimate of the total produced Ly$\alpha$\ before it undergoes the radiative transfer. The SDSS spectra have spectral resolution $R\!\sim\!2000$ (150\,km\,s$^{-1}$), and were obtained with circular $3\arcsec$ optical fibres, similar to the $2.5\arcsec$ COS aperture. The GPs are compact, unresolved in the SDSS, and therefore the difference between the COS and SDSS apertures is not significant (but see Sects.\,\ref{sec_rt} and \ref{sec_results} where we discuss the possible impacts). With the medium SDSS spectral resolution, the information about the intrinsic Ly$\alpha$\ is limited to the total flux and basic kinematics, of which we make full use in this paper. All of our data and models have been corrected for the instrumental dispersion. We describe the application of the constraints in more detail in Sect.\,\ref{sec_constrained}. 
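The chain just described (dust-correct the Balmer flux, then scale by the Case~B factor) can be made concrete in a few lines. The following sketch is ours, not the authors' pipeline; the extinction-curve values at H$\alpha$\ and H$\beta$\ (taken as $k\simeq2.53$ and $3.61$ for a \citet{Cardelli89} curve with $R_V=3.1$) and all function names are illustrative assumptions.

```python
import math

# Illustrative sketch (not the authors' pipeline): estimate the intrinsic
# (produced) Ly-alpha flux from the observed SDSS Balmer lines, assuming
# Case B recombination and an assumed Cardelli-like extinction curve.
K_HA, K_HB = 2.53, 3.61      # assumed extinction-curve values at Halpha, Hbeta
DECREMENT = 2.86             # intrinsic Halpha/Hbeta ratio for Case B
LYA_OVER_HB = 23.5           # intrinsic Lya/Hbeta ratio for Case B

def ebv_from_balmer(f_ha, f_hb):
    """Nebular E(B-V) from the observed Balmer decrement."""
    return 2.5 / (K_HB - K_HA) * math.log10((f_ha / f_hb) / DECREMENT)

def intrinsic_lya_flux(f_ha, f_hb):
    """Dust-correct Hbeta with the Balmer decrement, then scale by 23.5."""
    ebv = max(ebv_from_balmer(f_ha, f_hb), 0.0)   # clip unphysical negatives
    f_hb_corrected = f_hb * 10.0 ** (0.4 * K_HB * ebv)
    return LYA_OVER_HB * f_hb_corrected
```

Dividing such an estimate by the dust-corrected FUV continuum flux density would then yield an intrinsic $EW_0$\/(Ly$\alpha$) of the kind used as a fitting constraint below.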
\subsection{Description of the sample: Ly$\alpha$\ line profiles}
\label{sect_profiles}
\begin{table}[b]
\caption{Ly$\alpha$\ spectral shape parameters.}
\label{tab_ewbr}
\small
\begin{tabularx}{0.46\textwidth}{lcccr}
\toprule \toprule
ID & $v_B$ & $v_R$ & Blue/Red & trough\hspace*{0.1cm} \\
 & [km\,s$^{-1}$] & [km\,s$^{-1}$] & EW ratio & [km\,s$^{-1}$] \\
 & (1) & (2) & (3) & (4) \\
\midrule
GP\,0303 & $-300\pm60$ & $150\pm40$ & 0.06 & $-110\pm50$ \\
GP\,0816 & $-210\pm40$ & $140\pm50$ & 0.36 & $-20\pm40$ \\
GP\,0911 & $-290\pm50$ & $ 80\pm40$ & 0.17 & $-60\pm40$ \\
GP\,0926 & $-160\pm100$ & $220\pm120$& 0.14 & $-40\pm40$ \\
GP\,1054 & $-220\pm100$ & $200\pm40$ & 0.11 & $70\pm80$ \\
GP\,1133 & $-90\pm40$ & $230\pm60$ & 0.51 & $70\pm60$ \\
GP\,1137 & $-250\pm100$ & $170\pm50$ & 0.12 & $-20\pm40$ \\
GP\,1219 & $-80\pm40$ & $170\pm50$ & 0.38 & $20\pm40$ \\
GP\,1244 & $-240\pm40$ & $250\pm40$ & 0.33 & $-10\pm40$ \\
GP\,1249 & --- & $80\pm40$ & 0.00 & --- \\
GP\,1424 & $-150\pm60$ & $220\pm40$ & 0.67 & $70\pm40$ \\
GP\,1458 & $-360\pm60$ & $390\pm100$& 0.40 & $40\pm60$ \\
\bottomrule
\end{tabularx}
\tablefoot{ (1) Position of blue Ly$\alpha$\ peak measured from the systemic redshift; (2) Position of red Ly$\alpha$\ peak measured from the systemic redshift; (3) Blue-to-red equivalent width ratio, measured with respect to the central trough. A conservative 20\%\ error is assumed, due to continuum calibration; (4) Position of central Ly$\alpha$\ trough measured from the systemic redshift. }
\end{table}
We describe here the main features of the observed Ly$\alpha$\ line profiles, and derive the first implications for radiative transfer, independent of any model geometry. \paragraph{\bf Double-peaked emission lines:} As already noted by \citet{Henry15} and \citet{Yang16}, all of the twelve green peas show Ly$\alpha$\ in net emission, which is unusual compared with other local star-forming galaxy samples of similar size \citep{Wofford13,Rivera15}.
The net Ly$\alpha$\ emission is only prevalent in galaxy samples selected by their high FUV luminosity, the Lyman-break analogues \citep[LBAs, see][]{Heckman11,Alexandroff15}. Strangely, none of the GP Ly$\alpha$\ spectra have a P-Cygni profile with a redshifted emission and a blueshifted absorption, which is often considered the typical Ly$\alpha$\ line signature, especially at high redshift. The Ly$\alpha$\ line is double-peaked in eleven of the twelve targets of our GP sample, and similar statistics are present in other GP samples, such as those of \citet{Verhamme17} and \citet{Yang17}, which is truly unusual compared with any other galaxy sample. At high redshift, multiple-peak Ly$\alpha$\ profiles have drawn observers' attention only recently, after their discovery in low-$z$ galaxies, such as in \citet{Heckman11}, \citet{Martin15}, and \citet{Alexandroff15}. The Ly$\alpha$\ line identification in distant LAEs traditionally relied on its asymmetric single peak. \citet{Kulas12} and \citet{Trainor15} estimated the incidence of multiple peaks to be $\sim\!30$\% among UV-selected star-forming galaxies at $z=2-3$ showing Ly$\alpha$\ emission. The actual fraction may be higher, owing to attenuation of the blue peak by the inter-galactic medium (IGM) \citep{Laursen11,Dijkstra14}. Also, the typical spectral resolution in high-$z$ observations is lower than that of the HST/COS, and therefore some of the spectral profiles can in reality be double peaks, such as those with non-zero flux blueward of the systemic redshift \citep{Erb14}. The relative equivalent width ($EW$) of the blue peaks varies across our sample, and it represents 5\,--\,65\% of the red peak $EW$ (Table~\ref{tab_ewbr}). To measure the blue and red $EW$s, we separated the Ly$\alpha$\ profile into two parts, divided by the central trough between the peaks.
This differs from the definition previously used in the literature \citep{Heckman11,Erb14,Henry15}, where separation between the blue and red parts of the spectra was defined by the systemic redshift. Our definition reflects the sufficient data resolution and the need to characterize the individual peaks, independently of the redshift (see Discussion). We infer, from the systematically stronger red peak, that the GP Ly$\alpha$\ is transferred in outflowing media. This is consistent with the measurements of UV absorption lines that originate from low-ionization state (LIS) metal species, which are blueshifted with respect to the systemic redshift, and thus indicate outflows. The red Ly$\alpha$\ dominance is model independent and has been illustrated analytically and numerically in various geometries such as slabs \citep{Neufeld90}, spherical shells \citep[e.g.][]{Verhamme06,Dijkstra06}, and in radiative transfer coupled to full hydrodynamic simulations \citep[e.g.][]{Verhamme12}. The double-peaked profiles may have various origins, such as Ly$\alpha$\ transfer in low-velocity media, in clumpy media or other, less studied, geometries. We will discuss this question in Sect.~\ref{sec_discussion}. \paragraph{\bf Ly$\alpha$\ profile symmetries:} We have (re-)measured the positions of the red and blue Ly$\alpha$\ peaks, and of the troughs that separate them (Table~\ref{tab_ewbr}). Without any modelling at this stage, we determined the local flux maxima and minima. Uncertainties resulting from the peak shape and flux variations were included in the error bars, together with the wavelength calibration uncertainties. In the cases where the Ly$\alpha$\ peak or trough had a multi-component character, or where the peak top had a peculiar shape with a varying flux (such as flat-top blue peaks with flux variations in GP\,1054 or GP\,1137), we computed the mean position of the components, and included their variance in the uncertainty. 
This definition, which was the best choice for the comparison with models, may differ from that applied in \citet{Henry15} or \citet{Yang16}, and we find differences in some of the measurements. Nevertheless, the measurements are consistent with those independent papers within the stated error bars. We provide the peak and trough positions in the form of velocity offsets, measured from the systemic redshift that was derived from the SDSS emission lines (Table~\ref{tab_sample}). We considered the combined wavelength calibration error of the SDSS and the HST/COS to be 40\,km\,s$^{-1}$\ \citep{Henry15}. We find that the central Ly$\alpha$\ trough is redshifted with respect to the systemic redshift in five targets of the sample (GP\,1054, GP\,1133, GP\,1219, GP\,1424, and GP\,1458), usually those with strong blue peaks (Table~\ref{tab_ewbr}). In at least two of them (GP\,1133 and GP\,1424) the shift cannot be explained by measurement errors, which include the systematic uncertainty on the wavelength calibration, noise on the spectral line profile, and additional features in the trough. These two targets also have the largest relative flux of the blue peak. The redshifted Ly$\alpha$\ troughs are surprising and were not expected in galaxies with \ion{H}{i}\ outflows: due to the Doppler shift in the outflows, the largest Ly$\alpha$\ optical depth should be shifted from zero to negative velocities \citep{Verhamme06}. In a reversed configuration, an inflow, the trough would lie at positive velocities. However, the blue peak should dominate the flux in that case, which is not what we observe in the GPs. The inflow scenario therefore seems improbable. We illustrate the peak and trough positions in Fig.\,\ref{fig_symm}, where the targets have been sorted by their blue peak offset, measured either with respect to the systemic redshift (Fig.\,\ref{fig_symm}a) or to the central trough position (Fig.\,\ref{fig_symm}b). The red and blue peak positions are not symmetric.
Ordering of the targets by their blue peak offset did not produce any alignment in the red peak offset. We will see in Sect.\,\ref{sec_sym_obsmod} that the peaks are not equally broad either, and we will discuss the implications for the radiative transfer models. \begin{figure} \begin{tabular}{l} \includegraphics[width=0.45\textwidth]{Fig1a.eps} \\ \includegraphics[width=0.45\textwidth]{Fig1b.eps} \\ \end{tabular} \caption{Asymmetry of the blue and red Ly$\alpha$\ peaks in the GP sample, measured from a) the systemic redshift, and b) the central trough. Panel (a) also includes the mean positions of the blue and red peaks for this GP sample, and for LAE and LBG samples drawn from the literature \citep{Hashimoto15}. The GPs are similar to the LAEs in the red peak positions, but have a smaller mean blue offset. } \label{fig_symm} \end{figure} \paragraph{\bf Low column density of neutral hydrogen:} The GP Ly$\alpha$\ profiles are unusually narrow, with a small separation between their peaks ($400\pm100$\,km\,s$^{-1}$). We show in Fig.\,\ref{fig_symm}a that this separation is smaller than in typical high-$z$ LAEs ($510\pm60$\,km\,s$^{-1}$) and LBGs ($770\pm60$\,km\,s$^{-1}$), drawn from the \citet{Hashimoto15} sample. While the major difference between LBG and LAE double-peaked profiles is in the red peak position ($400\pm40$\,km\,s$^{-1}$\ in LBGs, $190\pm40$\,km\,s$^{-1}$\ in LAEs), it is the blue peak that drives the difference between the GPs and the high-$z$ samples ($-370\pm50$\,km\,s$^{-1}$\ in LBGs, $-320\pm50$\,km\,s$^{-1}$\ in LAEs, and $-210\pm90$\,km\,s$^{-1}$\ in GPs). The observed small separations show that Ly$\alpha$\ is able to escape close to the systemic redshift and that the effects of radiative transfer are relatively weak, irrespective of the model geometry \citep{Verhamme15}. Several ISM parameters can contribute to achieve this condition: low $N_\mathrm{HI}$, large \ion{H}{i}\ velocities, or a clumpy medium with low-density inter-clump gas. 
Observations have shown that the Ly$\alpha$\ peak offsets are small in galaxies with escaping Lyman continuum \citep{Izotov16,Izotov16b, Izotov18} and that the double-peak separation correlates with the LyC escape fraction \citep{Verhamme17}. The LyC escape requires a low \ion{H}{i}\ column density along the line of sight, $N_\mathrm{HI}$\,$\lesssim10^{18}$\,cm$^{-2}$ if the ISM is homogeneous. The Ly$\alpha$\ spectra of our GP sample are similar to those of the LyC leakers; therefore, there is a high probability that the GPs studied here have analogously low \ion{H}{i}\ column densities (no direct confirmation exists, as no LyC observations are available for the present sample). A large opacity in the Ly$\alpha$\ line core produces a trough separating the red and blue peaks. However, the trough does not reach the zero flux level in approximately half of our sample (GP\,0816, GP\,0911, GP\,1133, GP\,1219, and GP\,1424), confirming that the opacity (and thus $N_\mathrm{HI}$) is surprisingly low. \citet{Yang16} found a correlation between the residual flux in the trough and the Ly$\alpha$\ escape fraction, which suggests that the non-zero flux is not an artificial effect caused by insufficient spectral resolution. Sub-features can be seen in the central trough of GP\,1133: two minima separated by 80\,km\,s$^{-1}$. This could indicate that the line core is either partially refilled with emission, or that the blue minimum corresponds to deuterium absorption, as discussed in \citet{Dijkstra06} and \citet{Verhamme06}. Another confirmation of the low $N_\mathrm{HI}$\ in the GPs comes from the absence of underlying broad Ly$\alpha$\ absorption, as in the $z\!\sim\!0.3$ LyC leakers \citep{Verhamme17}. Star-forming galaxies with observed Ly$\alpha$\ emission lines commonly show an underlying absorption trough, with wings visible on a much wider wavelength scale than the emission spectrum \citep[see e.g.][]{Wofford13,Rivera15,Duval16,Schaerer08,Dessauges10,Quider10}.
From the modelling point of view, as Ly$\alpha$\ resonantly scatters off \ion{H}{i}, the absorption part of the spectrum is the result of the photon removal from the line of sight, while emission is produced by photons scattered out of resonance. The absorption trough becomes deeper and broader with the increasing column density of the foreground \ion{H}{i}, and with aperture losses \citep{Verhamme06,Verhamme17}. We have done a careful inspection of the continuum in all twelve GPs, and found no signs of absorption, except for a shallow trough in GP\,1458 and possibly in GP\,0303. The absence points to generally low \ion{H}{i}\ column densities in GPs, likely similar to those in LyC leakers of \citet{Izotov16,Izotov16b,Izotov18}. \paragraph{\bf Comparison between Ly$\alpha$\ and H$\beta$\ profiles:} If produced by pure recombination, the intrinsic Ly$\alpha$\ line profile (before radiative transfer) should share the kinematic characteristics of the Balmer lines, and be their scaled version. Radiative transfer effects transform the Ly$\alpha$\ profile by removing photons most strongly from the line centre, redistributing them to the wings, or destroying them by absorption. The resultant profile is broadened, attenuated, and with shifted peaks. Figures \ref{fig_constrained} and \ref{fig_fits} show a direct comparison between the observed Ly$\alpha$\ profile and H$\beta$\ scaled by the Case~B factor of 23.5 \citep{Dopita03}. The figures also present the results of model fitting (Sect.\,\ref{sec_results}), while here we focus on the comparison of the observational profiles. We used H$\beta$\ instead of H$\alpha$,\ which partially blends with the [\ion{N}{ii}]\ lines and would impede studying the line wings. We present the data with subtracted continuum, which was fit by a first order polynomial in the vicinity of H$\beta$\ and Ly$\alpha$. 
We corrected both H$\beta$\ and Ly$\alpha$\ fluxes for the Milky Way (MW) extinction \citep{Schlafly11} using the \citet{Cardelli89} extinction law. The MW extinction values were obtained from the NASA Extragalactic Database (NED) and were listed in \citet{Henry15} and \citet{Yang16}. The H$\beta$\ flux was additionally corrected for internal extinction using the SDSS H$\alpha$\ and H$\beta$\ fluxes with the assumption of their intrinsic ratio of 2.86 \citep{Dopita03}, and using the \citet{Cardelli89} extinction law. The corrected H$\beta$\ flux was then used to approximate the intrinsic Ly$\alpha$\ line flux. No correction for internal extinction was applied to Ly$\alpha$, as it is not attenuated in the same way as the optically thin lines, and its radiative transfer includes effects of both gas and dust. The observed Ly$\alpha$\ blue wings are as broad as those of the scaled H$\beta$\ line in most of the sample. For two targets, GP\,0303 and GP\,0911, such a similarity is also seen in the red wing. This is unusual, and signifies that the Ly$\alpha$\ profile has not been much broadened by radiative transfer. Remarkably, \citet{Martin15} draw a similar conclusion for a sample of low-$z$ ultraluminous infrared galaxies (ULIRGs). We note that SDSS resolution is worse than that of COS. However, we have tested that degrading the COS spectra to the SDSS resolution does not change our conclusion about the wings. \begin{table*}[ht] \begin{center} \caption{ \label{tab_params} Measured and model-fit parameters. 
}
\tiny
\renewcommand{\arraystretch}{1.5}
\begin{tabularx}{1.0\textwidth}{c|cccc|ccccccc}
\toprule \toprule
& \multicolumn{4}{c}{Observed ISM parameters (HST/COS, SDSS)} & \multicolumn{7}{|c}{Shell model results from unconstrained fitting} \\
\midrule
ID & $v_\mathrm{LIS}$ & FWHM(H$\beta$) & $\tau_\mathrm{d,obs}$ & EW$_0$(Ly$\alpha$)$_\mathrm{obs}$ & $\Delta z$ & $v_\mathrm{exp}$\ & $b$ & $\log N_\mathrm{HI}$ & $\tau_\mathrm{d}$ & FWHM$_0$(Ly$\alpha$) & EW$_0$(Ly$\alpha$) \\
 & [km\,s$^{-1}$] & [km\,s$^{-1}$] & FUV & [\AA] & [km\,s$^{-1}$] & [km\,s$^{-1}$] & [km\,s$^{-1}$] & [cm$^{-2}$] & FUV & [km\,s$^{-1}$] & [\AA] \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) &(11) & (12) \\
\midrule
GP\,0303 & $-200^{+80}_{-80}$ & 160 & 0 & 150$^{+50}_{-50}$ & $60^{+60}_{-0}$ & $150^{+50}_{-0}$ &$20^{+30}_{-10}$ & $19.0^{+0.3}_{-1.2}$ &$1^{+2}_{-1}$ &$500^{+100}_{-120}$ &$15^{+5}_{-5}$ \\
GP\,0816 & $-300^{+100}_{-100}$ & 130 & 0.4$^{+0.2}_{-0.2}$ & 200$^{+100}_{-100}$ & $10^{+50}_{-0}$ & $20^{+30}_{-0}$ & $20^{+0}_{-10}$ & $19.0^{+0.3}_{-1}$ &$0.2^{+0.8}_{-0.2}$ &$400^{+100}_{-0}$ &$75^{+25}_{-25}$ \\
GP\,0911 & $-250^{+50}_{-50}$ & 220 & 2.6$^{+1.4}_{-1.4}$ & 140$^{+110}_{-110}$ & $30^{+0}_{-70}$ & $100^{+50}_{-0}$ & $20^{+20}_{-0}$ & $18.5^{+0.5}_{-0.7}$ &$1.5^{+1.5}_{-0.5}$ &$500^{+0}_{-100}$ &$50^{+25}_{-0}$ \\
GP\,0926 & $-280^{+100}_{-100}$ & 170 & 1.5$^{+0.8}_{-0.8}$ & 120$^{+70}_{-70}$ &$140^{+5}_{-50}$& $150^{+50}_{-0}$ & $20^{+0}_{-10}$ & $18.5^{+0.5}_{-0.7}$ & $1^{+1}_{-1}$ &$550^{+50}_{-50}$ & $40^{+10}_{-10}$ \\
GP\,1054 & $-180^{+80}_{-80}$ & 230 & 1.3$^{+0.7}_{-0.7}$ & 140$^{+50}_{-50}$ &$150^{+40}_{-40}$ & $150^{+0}_{-0}$ & $40^{+40}_{-0}$ & $19.3^{+0}_{-1.5}$ &$1^{+2}_{-1}$ &$150^{+250}_{-50}$ &$20^{+10}_{-10}$ \\
GP\,1133 & --- & 160 & 0.2$^{+0.1}_{-0.1}$ & 100$^{+50}_{-50}$ &$160^{+50}_{-0}$ & $50^{+0}_{-30}$ & $10^{+0}_{-0}$ & $18.5^{+0.5}_{-0}$ &$0.5^{+2.5}_{-0.5}$ &$600^{+0}_{-100}$ &$35^{+5}_{-5}$ \\
GP\,1137 & $-150^{+50}_{-50}$ & 210 & 1.0$^{+0.6}_{-0.6}$
& 200$^{+100}_{-100}$ & $100^{+0}_{-50}$ & $150^{+100}_{-0}$ & $40^{+120}_{-20}$ & $18.7^{+0.3}_{-2.7}$ &$1^{+0.5}_{-1}$ &$550^{+60}_{-300}$ &$40^{+10}_{-10}$ \\
GP\,1219 & --- & 160 & 0.03$^{+0.02}_{-0.02}$ & 200$^{+100}_{-100}$ &$115^{+50}_{-0}$ & $50^{+50}_{-0}$ & $20^{+0}_{-10}$ & $18.0^{+0.5}_{-2}$ &$0.2^{+0.8}_{-0.2}$ &$600^{+0}_{-100}$ &$125^{+0}_{-25}$ \\
GP\,1244 & $-90^{+50}_{-50}$ & 170 & 1.1$^{+0.6}_{-0.6}$ & 400$^{+300}_{-300}$ &$150^{+0}_{-100}$ & $150^{+0}_{-100}$ & $40^{+0}_{-0}$ & $18.5^{+1.1}_{-0.5}$ &$0.5^{+1.6}_{-0.3}$ &$1000^{+0}_{-300}$ &$75^{+25}_{-25}$ \\
GP\,1249 & $-180^{+50}_{-50}$ & 160 & 0.7$^{+0.2}_{-0.2}$ & 200$^{+100}_{-100}$ &$80^{+50}_{-50}$ & $250^{+0}_{-50}$ & $10^{+10}_{-0}$ & $16.0^{+2}_{-0}$ &$2^{+2}_{-1}$ &$400^{+0}_{-0}$ &$75^{+5}_{-0}$ \\
GP\,1424 & $-240^{+50}_{-50}$ & 210 & 0.7$^{+0.4}_{-0.4}$ & 200$^{+100}_{-100}$ &$120^{+50}_{-0}$ & $50^{+50}_{-30}$ & $20^{+20}_{-10}$ & $19.0^{+0.3}_{-1.5}$ &$0.2^{+0.8}_{-0.2}$ &$700^{+30}_{-100}$ &$75^{+40}_{-25}$ \\
GP\,1458 & $-30^{+70}_{-70}$ & 130 & 0.4$^{+0.3}_{-0.3}$ & 400$^{+300}_{-300}$ &$55^{+0}_{-45}$ & $20^{+0}_{-0}$ & $40^{+0}_{-20}$ & $20.2^{+0.3}_{-0}$ &$1^{+0.5}_{-0.5}$ &$400^{+100}_{-250}$ &$100^{+200}_{-50}$\\
\bottomrule
\end{tabularx}
\tablefoot{ (1) Identifiers; (2) Mean LIS line velocities with their 1$\sigma$ error, adopted from \citet{Henry15}, and measured here for GP\,0816 and GP\,1458; (3) $FWHM$ of narrow H$\beta$\ component, determined by double-Gaussian fits to SDSS line profiles, and corrected for instrumental dispersion. We estimate a fitting error of 50\,km\,s$^{-1}$; (4) FUV optical depth of dust near the wavelength of Ly$\alpha$, derived from H$\beta$\ flux, FUV continuum flux, and ISM extinction; (5) Intrinsic Ly$\alpha$\ equivalent width derived from H$\beta$\ flux, FUV continuum flux, and ISM extinction; (6) -- (12) Best-fitting model parameters, obtained from unconstrained Ly$\alpha$\ fitting.
Median, 10$^\mathrm{th}$ percentile and 90$^\mathrm{th}$ percentile values of 20 models with the lowest $\chi^2$ for each target; (6) $\Delta z = (z_{\mathrm{fitLy}\alpha} - z_\mathrm{SDSS}) c$, where $z_{\mathrm{fitLy}\alpha}$ is the redshift of the best unconstrained Ly$\alpha$\ fit, $z_\mathrm{SDSS}$ is the redshift derived from SDSS emission line fits (Table\,\ref{tab_sample}), and $c$ is the speed of light; (7) Shell expansion speed; (8) Doppler parameter; (9) Neutral hydrogen column density; (10) FUV optical depth of dust in the vicinity of Ly$\alpha$; (11) $FWHM$ of intrinsic Ly$\alpha$\ line; (12) Equivalent width of intrinsic Ly$\alpha$\ line. } \end{center} \end{table*} \section{Radiative transfer models} \label{sec_rt} \subsection{Code and model grid} We use an enhanced version of the {\tt MCLya} 3D Monte Carlo code of \cite{Verhamme06}, which computes radiative transfer of the Ly$\alpha$\ line and the adjacent UV continuum for an arbitrary three-dimensional (3D) geometry and velocity field. We here assume the geometry of an expanding, homogeneous, spherical shell composed of neutral hydrogen and dust, uniformly mixed. A starburst region, which is the source of the Ly$\alpha$\ and UV continuum photons, is placed at the centre of the sphere filled with ionized gas and surrounded by the neutral shell. The photons are collected after their propagation through the neutral shell in all directions. The expanding geometry was motivated by the \ion{H}{i}\ outflows, which we detect in the studied galaxies \citep{Henry15} and which additionally seem to be ubiquitous at both low- and high-redshifts \citep[e.g.][]{Shapley03,Wofford13,Rivera15,Chisholm15}. The spherical configuration was motivated by the observation of superbubbles in star-forming galaxies, as described by for example \citet{Tenorio99} and \citet{Mas-Hesse03}. 
The {\tt MCLya} models have previously been successfully applied to reproduce the Ly$\alpha$\ spectra of low- and high-$z$ galaxies \citep{Verhamme08,Schaerer08,Dessauges10, Vanzella10,Lidman12,Leitherer13,Hashimoto15,Duval16,Patricio16}. The modelled shell is characterized by four parameters: the radial expansion velocity $v_\mathrm{exp}$, the \ion{H}{i}\ column density $N_\mathrm{HI}$, the optical depth $\tau_\mathrm{d}$ of dust at wavelengths in the vicinity of Ly$\alpha$, and the \ion{H}{i}\ Doppler parameter $b$ that describes the internal shell kinematics including thermal and turbulent velocities. Each photon emitted by the source is drawn from an initial frequency distribution, and its path and frequency changes through the shell are followed individually. A grid of $>\!6000$ synthetic models was constructed by varying the shell parameters and running the full {\tt MCLya} simulation for each parameter set \citep{Schaerer11}. The resulting spectra were then post-processed to account for the intrinsic line profile, and were further smeared to match the observed spectral resolution estimated from the Ly$\alpha$\ source extent. We define the initial, intrinsic Ly$\alpha$\ profile assuming a flat stellar UV continuum and a Ly$\alpha$\ emission line, which can be either a Gaussian, a double Gaussian, or another function, such as the observed Balmer line profile. If we assume a Gaussian profile, the model adjustment has seven free parameters. Four parameters describe the \ion{H}{i}\ shell: $v_\mathrm{exp}$, $N_\mathrm{HI}$, $b$, $\tau_\mathrm{d}$; and three parameters describe the Gaussian line: the line width $FWHM_0$\/(\lya), the equivalent width $EW_0$\/(\lya), and the line centre, that is, the redshift $z$. Each of the fitting parameters regulates a different measurable characteristic of the resultant Ly$\alpha$\ profile \citep{Verhamme06,Gronke15}.
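The smearing step mentioned above, degrading the grid spectra to the estimated COS resolution ($\sim\!100$\,km\,s$^{-1}$\ for the GPs), amounts to a convolution with a Gaussian line-spread function. A minimal stand-alone sketch, not the actual {\tt MCLya} post-processing (the function name and the fixed 4$\sigma$ kernel truncation are our own choices):

```python
import math

def smear_to_resolution(flux, dv, fwhm):
    """Convolve a spectrum on a uniform velocity grid (step `dv`, km/s)
    with a Gaussian line-spread function of the given FWHM (km/s)."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    half = int(4 * sigma / dv) + 1            # truncate kernel at ~4 sigma
    kernel = [math.exp(-0.5 * ((k * dv) / sigma) ** 2)
              for k in range(-half, half + 1)]
    norm = sum(kernel)
    kernel = [w / norm for w in kernel]       # unit area: flux is conserved
    out = []
    for i in range(len(flux)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half
            if 0 <= j < len(flux):
                acc += w * flux[j]
        out.append(acc)
    return out
```

The same operation, applied with the SDSS dispersion, is what limits the H$\beta$-based constraints on the intrinsic Ly$\alpha$\ profile discussed below.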
The profiles are most sensitive to $v_\mathrm{exp}$\ and $N_\mathrm{HI}$, both of which determine the offset of the resultant Ly$\alpha$\ emission peak from the restframe position, while $v_\mathrm{exp}$\ also determines the $EW$ of the Ly$\alpha$\ blue peak. The intrinsic Ly$\alpha$\ equivalent width $EW_0$\/(\lya)\ and the dust optical depth $\tau_\mathrm{d}$ regulate the Ly$\alpha$\ flux scaling, and a large dust content usually results in a prominent absorption trough in the blue part of the profile if the medium is expanding. Large $N_\mathrm{HI}$\ and $\tau_\mathrm{d}$ give rise to broad Ly$\alpha$\ absorption profiles. The role of $b$ is less straightforward to describe and is usually degenerate with other parameters. We use an automated line profile fitting tool \citep{Schaerer11}, which explores the entire grid or a defined part of it. The fitting parameters can be constrained either to definite values or to intervals of values. The fitting procedure then searches for the minimum $\chi^2$ and computes the $\chi^2$ maps. We here define intervals for the fitting parameters that either correspond to the observational uncertainties (Sect.\,\ref{sec_constrained}) or to the entire grid extent (Sect.\,\ref{sec_unconstrained}). We keep the twenty best-fitting models for each fitting run and each target in order to better assess the stability of the fitting parameters. We describe their use for the different fitting approaches in the respective sections. \subsection{Constraining the free parameters} \label{sec_constrpar} Depending on the availability of ancillary data, several of the fitting parameters can be constrained. In the present sample, we were able to constrain up to five parameters using the optical SDSS and UV HST/COS data (Tables \ref{tab_sample} and \ref{tab_params}): \begin{enumerate} \item Redshift $z$ (Table~\ref{tab_sample}): derived from Gaussian fits of $\sim\!20$ SDSS emission lines with a precision better than $10$\,km\,s$^{-1}$\ \citep{Henry15}.
Applied to the Ly$\alpha$\ line, the redshift uncertainty is dominated by the wavelength calibration errors, $\sim\!40$\,km\,s$^{-1}$\ (Sect.\,\ref{sec_obs}). \item Expansion speed $v_\mathrm{exp}$\ of the shell: measured from the UV absorption line offsets of low-ionization species, which are commonly observed blueshifted, that is, outflowing. We report the characteristic LIS line velocities $v_\mathrm{LIS}$\ in Table~\ref{tab_params}. The velocities measured by \citet{Henry15} show variations between different transitions, which can be attributed to either noise or different amounts of emission filling \citep{Scarlata15}. We list here their mean and consider the full range of velocities reported in that paper in order to give our Ly$\alpha$\ models the best chance to match the observations. We have measured the missing velocities. \item Optical depth of dust absorption in the FUV, $\tau_\mathrm{d}$: estimated from the SDSS Balmer line ratios by extrapolation to the FUV domain using extinction laws from the literature. An inherent uncertainty associated with this estimate arises from the uncertainty in the extinction law -- the commonly used laws give similar predictions in the optical range, but dramatically vary in the FUV. We report in Table~\ref{tab_params} the mean value and variance of $\tau_\mathrm{d}$ derived from the extinction laws of \citet{Cardelli89}, \citet{Calzetti94}, and \citet{Prevot84}. \item Full width at half maximum of the intrinsic Ly$\alpha$\ line, $FWHM_0$\/(\lya), is assumed to be identical to that of H$\beta$\ if both lines are formed by the same recombination mechanism. We fitted the SDSS H$\beta$\ line profiles with two-component Gaussian functions, narrow and broad. We list the width of the dominating narrow component, $FWHM$\/(\hb), in Table~\ref{tab_params}, and the broad component width and flux contribution in Table~\ref{tab_2G}. The line widths have been corrected for the instrumental dispersion. 
We assume an uncertainty of 100\,km\,s$^{-1}$\ in the $FWHM$\/(\hb). \item Intrinsic Ly$\alpha$\ equivalent width, $EW_0$\/(\lya): estimated from the observed H$\beta$\ flux and the observed FUV continuum, assuming that both lines were formed by recombination. Pure Case B recombination at $T=10\,000$\,K and $n_\mathrm{e}=10^{-2}$\,cm$^{-3}$ predicts the Ly$\alpha$/H$\beta$\ flux ratio of 23.5 \citep{Dopita03}. Both the measured line and continuum fluxes were corrected for dust attenuation, and therefore the uncertainties in the extinction law induce uncertainties in $EW_0$\/(\lya). In addition, the differences between the nebular and continuum attenuation \citep{Calzetti97,Price14}, the possible need for aperture corrections accounting for the differences between SDSS and COS, and the possible deviations in the temperature gave rise to the range of $EW_0$\/(\lya)\ for each target reported in Table~\ref{tab_params}. \end{enumerate} No constraints are available for the \ion{H}{i}\ column $N_\mathrm{HI}$, and the Doppler parameter $b$. To constrain $b$, high-resolution absorption lines would be necessary, resolving the individual \ion{H}{i}\ clouds. Therefore, $b$ and $N_\mathrm{HI}$\ are considered as free fitting parameters, ranging across the entire grid of models: $10$\,km\,s$^{-1}$\,$\leq b \leq 160$\,km\,s$^{-1}$, and $10^{16}$\,cm$^{-2}$\,$\leq$\,$N_\mathrm{HI}$\,$\leq10^{23}$\,cm$^{-2}$. \bigskip For those parameters that require a comparison between the SDSS and the HST/COS data, we need to consider the differences between the spectrographs: 1) the aperture size, and 2) the spectral resolution. The different aperture sizes, 2.5\arcsec\ in the HST/COS versus 3\arcsec\ in the SDSS, potentially affect the measured fluxes, and can therefore impact the $EW_0$\/(\lya)\ derived from H$\beta$. However, the near-UV HST/COS acquisition images \citep{Henry15} show that the GPs are extremely compact, smaller than the aperture size. 
The GP optical images are unresolved in the SDSS, but we expect a similar morphology and extent as in the near-UV \citep[based on][]{Hayes13,Hayes14}. Nevertheless, to account for the possibility that the true optical emission is much larger than the acquisition image and that the aperture size matters for the derivation of the intrinsic Ly$\alpha$, we considered both situations, with and without the aperture correction to derive $EW_0$\/(\lya)\ from $EW$\/(\hb). Both cases were reflected in the $EW_0$\/(\lya)\ intervals that we list in Table~\ref{tab_params}. No correction was applied to the observed Ly$\alpha$\ emission due to its unpredictable nature. It is possible that a part of Ly$\alpha$\ is transferred outside the COS aperture, which we will further discuss in the following sections. Concerning the spectral resolution, it plays a role in the derivation of the intrinsic Ly$\alpha$\ profile from H$\beta$. The SDSS resolution is worse (150\,km\,s$^{-1}$) than the HST/COS resolution for Ly$\alpha$\ ($\sim\!100$\,km\,s$^{-1}$, Sect.\,\ref{sec_data}). With the given SDSS resolution, our H$\beta$\ application is restricted to the instrument-corrected $FWHM$, with no other details about the line profile. To account for the COS resolution in the fitting process, we adjust the Ly$\alpha$\ grid model spectra correspondingly. We show in the following sections that the observed Ly$\alpha$\ spectra would be better reproduced with an input line broader than the SDSS profile, which cannot be explained by the difference in spectral resolution. On the other hand, other inconsistencies between models and data could potentially be better understood with the aid of high-resolution spectroscopy. \section{Ly$\alpha$\ line profile modelling} \label{sec_results} \begin{figure*}[t!] \includegraphics[width=1.0\textwidth]{Fig2.eps} \caption{Best-fitting Ly$\alpha$\ models with applied constraints from ancillary observational data (Sect. \ref{sec_constrained}). 
COS data and their errorbars are shown in black, the best-fitting model with the solid red line, the modelled intrinsic single-Gaussian Ly$\alpha$\ profile with the dashed red line, and the observed SDSS H$\beta$\ scaled by Case B factor 23.5 with the dotted blue line. Parameters of the best-fitting model are shown in each panel. } \label{fig_constrained} \end{figure*} We have carried out Ly$\alpha$\ profile modelling with the use of the model grid described in Sect.\,\ref{sec_rt} and with the application of observational constraints. As we will see in Sections~\ref{sec_constrained} and \ref{sec_2G}, the constrained models do not reproduce all the features of the observed profiles, and we will therefore explore their failure and will relax all the constraints in Sect.~\ref{sec_unconstrained}. \subsection{Ly$\alpha$\ profile fitting with constraints} \label{sec_constrained} We have described the derivation of the constraining parameters in Sect.\,\ref{sec_rt}. Assuming a Gaussian profile of the intrinsic Ly$\alpha$\ line, with the $FWHM$ equal to the dominant, narrow component of H$\beta$, five of the seven fitting parameters can be constrained. For fitting purposes, the parameters were assigned the intervals listed in columns 2\,--\,5 of Table~\ref{tab_params}, which reflect the observed values and their uncertainties. Redshifts and line widths were considered with conservative errors of 50\,km\,s$^{-1}$\ and 100\,km\,s$^{-1}$, respectively, to account for the COS wavelength calibration and for the SDSS line decomposition into Gaussians. The $v_\mathrm{exp}$\ velocity remains a free fitting parameter in two targets, GP\,1133 and GP\,1219, where no UV LIS lines have been detected. The Ly$\beta$\ line is available, but its proper modelling would be necessary to disentangle the stellar and ISM contributions, therefore we have not used it. 
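The constrained fitting step described above reduces to a $\chi^2$ search over the sub-grid of models whose parameters fall inside the observationally allowed intervals. The following Python sketch illustrates the principle on a toy grid; the parameter names, spectra, and grid layout are purely illustrative and are not those of the actual fitting tool of \citet{Schaerer11}.

```python
import numpy as np

# Toy sketch of the constrained chi^2 search: only models whose
# parameters fall inside the observationally allowed intervals enter
# the fit; the twenty lowest-chi^2 models are kept.

rng = np.random.default_rng(0)
wave = np.linspace(-1000.0, 1000.0, 200)      # velocity grid [km/s]

# hypothetical model grid: shell parameters plus a model spectrum
grid = [{"v_exp": v, "N_HI": n,
         "spec": np.exp(-0.5 * ((wave - v) / 300.0) ** 2)}
        for v in (0, 50, 100, 150, 200, 300)
        for n in (1e18, 1e19, 1e20)]

# synthetic 'observed' spectrum and its uncertainty
data = np.exp(-0.5 * ((wave - 100.0) / 300.0) ** 2) \
    + 0.01 * rng.normal(size=wave.size)
err = np.full(wave.size, 0.01)

# observational constraint, e.g. v_exp from the LIS absorption lines
constraints = {"v_exp": (50.0, 150.0)}

def allowed(model):
    return all(lo <= model[key] <= hi
               for key, (lo, hi) in constraints.items())

sub_grid = [m for m in grid if allowed(m)]
chi2 = [np.sum(((data - m["spec"]) / err) ** 2) for m in sub_grid]

# keep the twenty lowest-chi^2 models (here the sub-grid is smaller)
best = [sub_grid[i] for i in np.argsort(chi2)[:20]]
```

In this toy setup the best-ranked models recover the input velocity, while the constraint discards grid models outside the allowed interval before the $\chi^2$ comparison is ever made.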
Before the Ly$\alpha$\ fitting, we convolved the synthetic grid spectra with a Gaussian function, which simulated the finite spectral resolution, 100\,km\,s$^{-1}$\ (Sect.\,\ref{sec_data}). We carried out a series of tests to check how the spectral resolution would affect the fits. Model spectra convolved with too broad or too narrow profiles were not compatible with the observed features of the Ly$\alpha$\ profile (not sharp enough peaks and troughs, or too detailed substructure, respectively). A good match was achieved at $\sim\!100\pm30$\,km\,s$^{-1}$, which is consistent with the COS estimation derived from the target morphology. We note that this convolution was missing in the work of \citet{Yang16}, which may explain some of the differences between their results and this work. We have run the fitting tool on the model sub-grid defined by the constraints, keeping twenty models with the lowest $\chi^2$ for each target. The twenty models served as a visual check of the model matching. We plot the data and the lowest-$\chi^2$ model for each galaxy in Fig.\,\ref{fig_constrained}. We overplot the theoretical intrinsic Ly$\alpha$\ line, derived from the best-fitting model, and the observed SDSS H$\beta$\ line scaled by the Case~B factor of 23.5. To enable the comparison, all observational and modelling data were converted to velocity units, and the respective continuum was subtracted from each line. The intrinsic Ly$\alpha$\ model was convolved with a Gaussian profile corresponding to the SDSS resolution for a direct comparison with the SDSS data. The best-fitting model parameters are listed in each panel. Any differences between the model parameters and the ancillary data parameters, or between the intrinsic Ly$\alpha$\ and H$\beta$\ profiles, are due to the observational uncertainties and the discreteness of the model grid. We immediately notice large discrepancies between the fits and the data. 
The models dramatically fail in reproducing the blue peak for almost every target, in both flux and position. The match for the red peaks is better and can be considered satisfactory in approximately half of the sample, while the peak positions and widths disagree with the observations in the remaining targets. Furthermore, the central trough positions of the double-peaked models are offset with respect to the observed troughs. The homogeneous shell models regulate the relative blue peak flux by $v_\mathrm{exp}$, while the trough position is the location of the maximum optical depth, dependent on $z$ and $v_\mathrm{exp}$, and cannot be forced by any other combination of the remaining parameters. With $z$ and $v_\mathrm{exp}$\ fixed by the observational constraints, the fitting algorithm can only attempt to reproduce the peak positions by varying $N_\mathrm{HI}$, which was a free parameter here, but cannot produce the correct peak flux ratios and trough positions. In the logic of the homogeneous models, the ancillary data did not provide the correct $z$ and $v_\mathrm{exp}$\ in the studied GPs. The failure to correctly fit troughs in GP\,1133 and GP\,1219, where $v_\mathrm{exp}$\ was a free parameter, indicates that $v_\mathrm{exp}$\ alone is not sufficient to solve the problem. \subsection{Ly$\alpha$\ source modelled by a double Gaussian} \label{sec_2G} \begin{table}[t] \begin{center} \caption{Parameters of double-Gaussian fits of the SDSS H$\beta$\ line.} \label{tab_2G} \small \begin{tabularx}{0.35\textwidth}{ccc} \toprule \toprule ID & $FWHM_\mathrm{broad}$ [km\,s$^{-1}$] & $f_\mathrm{broad}$/$f_\mathrm{narrow}$ \\ & (1) & (2) \\ \midrule GP\,0303 & 400 $\pm$ 100 & 0.20 $\pm$ 0.05\\ GP\,0816 & 350 $\pm$ 100 & 0. 
\\ GP\,0911 & 450 $\pm$ 100 & 0.5 $\pm$ 0.1\\ GP\,0926 & 500 $\pm$ 100 & 0.50 $\pm$ 0.03\\ GP\,1054 & 500 $\pm$ 100 & 0.33 $\pm$ 0.07\\ GP\,1133 & 450 $\pm$ 100 & 0.1 $\pm$ 0.1\\ GP\,1137 & 500 $\pm$ 100 & 0.30 $\pm$ 0.05\\ GP\,1219 & 500 $\pm$ 100 & 0.33 $\pm$ 0.05\\ GP\,1244 & 400 $\pm$ 100 & 0.25 $\pm$ 0.05\\ GP\,1249 & 350 $\pm$ 100 & 0.2 $\pm$ 0.1\\ GP\,1424 & 500 $\pm$ 100 & 0.20 $\pm$ 0.03\\ GP\,1458 & 400 $\pm$ 100 & 0.10 $\pm$ 0.03\\ \bottomrule \end{tabularx} \tablefoot{ (1) $FWHM$ of the broad H$\beta$\ component obtained from double-Gaussian fits, corrected for instrumental dispersion. (2) Ratio of fluxes in broad and narrow SDSS Balmer line components, resulting from double-Gaussian fits; the values represent the mean derived from H$\alpha$\ and H$\beta$. } \end{center} \end{table} \begin{figure}[t!] \begin{tabular}{c} \includegraphics[width=0.45\textwidth]{Fig3a.eps} \\ \includegraphics[width=0.45\textwidth]{Fig3b.eps} \\ \end{tabular} \caption{Constrained fits using double-Gaussian intrinsic Ly$\alpha$\ profiles, with a realistic proportion of broad and narrow components, derived from SDSS H$\beta$\ spectra. Targets with the largest broad component contribution are shown. } \label{fig_2G} \end{figure} We searched for the reasons for the bad correspondence between the observed and modelled Ly$\alpha$\ profiles resulting from model fitting with constraints. We first considered a better description of the intrinsic Ly$\alpha$\ profiles with several kinematic systems. This effort was motivated by \citet{Amorin12}, who showed complex kinematic structures in GP H$\alpha$\ profiles observed with a high-resolution ($R\!>\!10\,000$) echelle spectrograph. None of their targets have been observed in Ly$\alpha$, therefore no direct proof of the effect of multiple kinematic components on the resultant Ly$\alpha$\ is available. 
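The narrow-plus-broad decomposition of the Balmer lines used below can be illustrated with a least-squares fit of a two-component Gaussian model. In this Python sketch the line profile, component widths, and noise level are synthetic; the actual fits additionally tie the kinematic parameters between H$\alpha$\ and H$\beta$.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic sketch of the narrow + broad double-Gaussian decomposition;
# the amplitudes and widths are invented, and both components are
# centred at zero velocity for simplicity.

def two_gauss(v, a_n, s_n, a_b, s_b):
    return (a_n * np.exp(-0.5 * (v / s_n) ** 2)       # narrow component
            + a_b * np.exp(-0.5 * (v / s_b) ** 2))    # broad component

rng = np.random.default_rng(1)
v = np.linspace(-1500.0, 1500.0, 301)                 # velocity [km/s]
flux = two_gauss(v, 1.0, 85.0, 0.1, 210.0) + 0.005 * rng.normal(size=v.size)

popt, _ = curve_fit(two_gauss, v, flux, p0=(1.0, 100.0, 0.05, 300.0))
a_n, s_n, a_b, s_b = popt
s_n, s_b = abs(s_n), abs(s_b)             # the sign of sigma is arbitrary

fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))   # sigma -> FWHM conversion factor
fwhm_narrow, fwhm_broad = fwhm * s_n, fwhm * s_b
flux_ratio = (a_b * s_b) / (a_n * s_n)    # broad-to-narrow flux ratio
```

The broad-to-narrow flux ratio follows directly from the fitted amplitudes and widths, since the integral of a Gaussian is proportional to the product of its amplitude and sigma.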
At the SDSS resolution, no strong secondary line components are detected in our sample, but broad H$\alpha$\ and H$\beta$\ wings are consistently seen \citep{Henry15}. We therefore fitted the SDSS H$\alpha$\ and H$\beta$\ lines with double-Gaussian profiles, narrow and broad. At the SDSS resolution, the double-Gaussian fits produce degenerate results; we therefore used both H$\alpha$\ and H$\beta$\ to minimize the degeneracy. The kinematic parameters were tied between the lines, while they were independent between the broad and narrow components. The Gaussian amplitudes were free parameters in each line, and we computed the mean between H$\alpha$\ and H$\beta$\ for each component. We found that the broad and narrow components have redshift differences within $\pm30$\,km\,s$^{-1}$, that is, they agree within the uncertainties. Unlike in dusty galaxies, the red wing is not extinguished, and the broad component is symmetric about the systemic redshift. We obtain broad-to-narrow flux ratios in the range $0\!-\!0.5$. We included the narrow component $FWHM$ in Table~\ref{tab_params}, and we list the broad component $FWHM$ and the flux ratios of the broad and narrow components in Table~\ref{tab_2G}. The flux ratios represent the mean between H$\alpha$\ and H$\beta$. We assume the uncertainty of the broad component $FWHM$ to be $\sim\!100$\,km\,s$^{-1}$, based on multiple trials of the double-Gaussian fitting. All $FWHM$s have been corrected for instrumental dispersion. We assumed that both narrow and broad H$\beta$\ components were produced by recombination, and we used the double-Gaussian profiles as input characterizing the intrinsic Ly$\alpha$\ profiles for Ly$\alpha$\ model fitting. We present the fitting results for two targets that are among those with the largest broad component flux fraction (GP\,0926, GP\,1219) in Fig.\,\ref{fig_2G}. The resulting fits have not significantly improved over those with single Gaussians.
Even though the blue peaks have become slightly more pronounced, the Ly$\alpha$\ profiles are not well reproduced and the problems encountered in Sect.\,\ref{sec_constrained} persist.

\subsection{Unconstrained fitting}
\label{sec_unconstrained}

\subsubsection{Good fits, discrepant parameters}
\label{sect_unconstrained}

\begin{figure*}[th] \includegraphics[width=1.0\textwidth]{Fig4.eps} \caption{Best-fitting Ly$\alpha$\ models, obtained by unconstrained fitting. COS data and their errorbars are shown with the black line, best-fitting model with the solid red line, intrinsic Ly$\alpha$\ profile with the dashed red line, and the observed SDSS H$\beta$\ scaled by Case B factor 23.5 with the dotted blue line. Derived best-fit parameters of the shell are shown in each panel. } \label{fig_fits} \end{figure*} \begin{figure*}[th] \begin{tabular}{lll} \includegraphics[width=0.31\textwidth]{Fig5a.eps} & \includegraphics[width=0.31\textwidth]{Fig5b.eps} & \includegraphics[width=0.31\textwidth]{Fig5c.eps} \\ \end{tabular} \caption{Comparison between parameters derived from the unconstrained Ly$\alpha$\ fitting, and their observed counterparts. a) outflow velocity; b) redshift; c) $FWHM_0$\/(\lya). The dashed lines show the expected identity between the parameters. The points and errorbars correspond to those listed in Table~\ref{tab_params}. } \label{fig_parameters} \end{figure*} To understand which parameters play a role in producing the unsatisfactory results of the constrained model fitting, we finally relaxed all the constraints. We ran the fitting process across the entire grid of $>\!6000$ {\tt MCLya} shell models (with four varying shell parameters $v_\mathrm{exp}$, $b$, $N_\mathrm{HI}$, and $\tau_\mathrm{d}$), and we ignored the constraints from the ancillary data. In addition, the parameters of the Ly$\alpha$\ source, that is, $EW_0$\/(\lya), $FWHM_0$\/(\lya), and $z$, were also left free.
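The unconstrained run amounts to a simple loop: convolve each grid model to the COS resolution (Sect.\,\ref{sec_constrained}) and evaluate its $\chi^2$ against the data over the whole grid. A minimal Python sketch with a toy stand-in for the {\tt MCLya} grid follows; the model spectra and parameter values are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Sketch of the unconstrained fitting loop: every toy 'shell model' is
# convolved to the instrumental resolution before the chi^2 comparison.
# The grid and spectra are illustrative stand-ins, not MCLya output.

dv = 10.0                                     # sampling [km/s]
v = np.arange(-1000.0, 1000.0 + dv, dv)
fwhm_cos = 100.0                              # COS resolution [km/s]
sig_factor = 2.0 * np.sqrt(2.0 * np.log(2.0))
sigma_pix = fwhm_cos / sig_factor / dv        # instrumental sigma in pixels

def model_spec(v_exp, fwhm0):
    """Toy stand-in for a shell-model Ly-alpha spectrum."""
    return np.exp(-0.5 * ((v - v_exp) / (fwhm0 / sig_factor)) ** 2)

grid = [(ve, fw) for ve in (0, 50, 100, 150) for fw in (200, 500, 800)]

# synthetic 'observed' spectrum, already at instrumental resolution
rng = np.random.default_rng(2)
data = gaussian_filter1d(model_spec(50, 500), sigma_pix)
data = data + 0.02 * rng.normal(size=v.size)

chi2 = [np.sum((data - gaussian_filter1d(model_spec(ve, fw), sigma_pix)) ** 2
               / 0.02 ** 2) for ve, fw in grid]
best_ve, best_fw = grid[int(np.argmin(chi2))]
```

Omitting the instrumental convolution biases the search towards intrinsically broader models, which is the effect we suggest above for the comparison with \citet{Yang16}.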
A good match between the best-fitting modelled and observed spectra has been found by the automatic fitting procedure for all of the twelve targets, unlike in \citet{Yang16}, where fitting of GP\,1133, GP\,1219 and GP\,1424 failed. We present the lowest $\chi^2$ model for each galaxy in Fig.\,\ref{fig_fits}. To assess how the model parameters differ from the observations, we considered the twenty lowest-$\chi^2$ models for each target. A visual check revealed that for the given grid resolution, the twenty models encompassed spectra that were still in reasonable agreement with the observations: they matched the positions and amplitudes of the Ly$\alpha$\ peaks and troughs, and matched the line wings. A larger set would have been inappropriate because of the deteriorating fit quality; we did not opt for a narrower set, in order to allow for possibly large intervals in all the fitting parameters while searching for overlaps with the observed ones. The visual check was complementary to the $\chi^2$ parameter computation. This simple method allowed an assessment of the parameter range without a complex fit quality analysis, unnecessary for the problem at hand. To characterize the fitting parameter distribution in each target, we computed their median and the $10^\mathrm{th}$ and $90^\mathrm{th}$ quantiles in the set of twenty best fits, and list them in Table~\ref{tab_params}, columns 6\,--\,12 (the uncertainties here correspond to the quantiles). The quantiles are well suited to the discrete character of the grid, and our choice discards only the extremes in each set. Figure\,\ref{fig_fits} shows that the homogeneous shell models are able to reproduce the GP Ly$\alpha$\ spectra, as in \citet{Yang16}. However, a comparison between the observed and modelled ISM parameters reveals significant disagreement (Table~\ref{tab_params}).
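The per-target parameter summaries of Table~\ref{tab_params} (median with $10^\mathrm{th}$/$90^\mathrm{th}$ quantile bounds over the twenty lowest-$\chi^2$ models) can be computed as in this short sketch; the best-fit values below are invented for illustration.

```python
import numpy as np

# Median and 10th/90th quantile summary of a set of twenty best-fit
# parameter values (the values below are invented for illustration).
v_exp_best20 = np.array([50, 50, 50, 100, 50, 50, 100, 50, 50, 150,
                         50, 50, 100, 50, 50, 50, 100, 50, 50, 50])

median = np.median(v_exp_best20)
q10, q90 = np.quantile(v_exp_best20, [0.10, 0.90])
# the interval [q10, q90] is then quoted as the parameter uncertainty
```
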
We have identified three major discrepancies, which we describe below and illustrate in Fig.\,\ref{fig_parameters}: \begin{enumerate} \item The only way to produce double-peaked Ly$\alpha$\ profiles in the shell models is by assuming a static or low-velocity ($\lesssim\!\!150$\,km\,s$^{-1}$) \ion{H}{i}\ medium \citep[see][]{Neufeld90,Verhamme06,Dijkstra06}. Therefore, except for the single-peak target GP\,1249, all the best fits of our sample have $v_\mathrm{exp}$\,$\leq$\,150\,km\,s$^{-1}$. Spectra with the strongest blue peaks (GP\,0816, GP\,1133, GP\,1219, GP\,1424, and GP\,1458) required even lower velocities, $v_\mathrm{exp}$\,$\leq$\,50\,km\,s$^{-1}$. In contrast, the observed LIS velocities $v_\mathrm{LIS}$\ are significantly larger in at least one third of the sample (Fig.\,\ref{fig_parameters}a). No convincing LIS line detection was possible in GP\,1133 and GP\,1219, but their high-ionization gas velocities from \ion{Si}{iii} and \ion{Si}{iv} are between $-300$ and $-400$\,km\,s$^{-1}$, and we also find Ly$\beta$\ components at similar velocities, inconsistent with the fitted $v_\mathrm{exp}$\,$\sim50$\,km\,s$^{-1}$. We list the measured LIS velocities with a negative sign in Table\,\ref{tab_params} (the lines are blueshifted), while we provide a positive $v_\mathrm{exp}$, which is defined as the expansion velocity of the shell. \item Systemic redshifts derived from the Ly$\alpha$\ models are larger than those from the SDSS emission lines in every target of the sample. With the exception of three targets, the redshift discrepancy $\Delta z$ is larger than the conservative wavelength calibration uncertainty of 40\,km\,s$^{-1}$, and reaches values as high as $\sim\!\!160$\,km\,s$^{-1}$\ (Fig.\,\ref{fig_parameters}b). The targets with redshifted Ly$\alpha$\ troughs (e.g. GP\,1133, GP\,1424) described in Sect.\,\ref{sect_profiles} do not stand out and are part of the trend. 
The best-fitting model redshifts are driven by the requirement that the central trough should lie at the position of the largest optical depth, close to $-$$v_\mathrm{exp}$. \item Large intrinsic Ly$\alpha$\ line widths, $FWHM_0$\/(\lya), were chosen by the best-fitting models to match the shape of the blue Ly$\alpha$\ peaks, and the broad blue and red wings. These large widths compensate for the weak radiative-transfer broadening in low-$N_\mathrm{HI}$\ models, where the low column density is itself imposed by the observed small separation of the Ly$\alpha$\ peaks. The typical best-fit values, $FWHM_0$\/(\lya)\,$\sim$\,500\,km\,s$^{-1}$\ (Fig.\,\ref{fig_parameters}c), are much larger than the measured SDSS $FWHM$\/(\hb)\,$\sim$\,200\,km\,s$^{-1}$, and are similar to or larger than the broad component of H$\beta$. However, we cannot conclude from this that the narrow component was attenuated and only the broad one was transmitted. The modelled intrinsic Ly$\alpha$\ lines have larger fluxes than the broad components of Balmer lines scaled by the Case B factors. Therefore, we need to interpret this discrepancy as another model failure, not a physical effect. \end{enumerate} The remaining free fitting parameters, $b$, $N_\mathrm{HI}$, $\tau_\mathrm{d}$ and $EW_0$\/(\lya), either have no directly observable counterparts or their comparison to observed values is less straightforward. No measurable counterpart was available for $b$ and $N_\mathrm{HI}$. The Doppler parameter $b$ is the least robust fitting parameter \citep[see also][]{Gronke15}, therefore its large spread in some of the targets is unsurprising. In contrast, $N_\mathrm{HI}$\ is among the most robust parameters due to its strong impact on the Ly$\alpha$\ peak shift \citep{Verhamme06,Gronke15}. The $N_\mathrm{HI}$\ that we derive here from the shell models is generally low ($\lesssim10^{19}\mathrm{\,cm}^{-2}$) compared to typical star-forming galaxies. The modelled FUV attenuation due to dust, $\tau_\mathrm{d}$, has a large scatter for each target.
This is most probably caused by the low $N_\mathrm{HI}$. The role of dust in low-$N_\mathrm{HI}$\ environments is reduced owing to the relatively small number of scattering events; models with small and large dust content therefore give similar results. Nevertheless, given the large uncertainties in both the modelled and observed $\tau_\mathrm{d}$ (due to the low $N_\mathrm{HI}$\ and the uncertainty in the FUV attenuation law), we cannot properly judge the consistency between the observations and the models: we can only state that the results are consistent within the large error bars. Finally, the best-fitting model $EW_0$\/(\lya)\ is usually lower than that derived from H$\beta$\ observations in the studied sample, or is at the lower edge of the interval of possible $EW_0$\/(\lya)\ values (Table\,\ref{tab_params}). This may indicate that a part of Ly$\alpha$\ has been transferred outside the COS aperture, which is consistent with the result obtained from a comparison of the COS and GALEX observations for two GPs \citep{Henry15}, and from the Ly$\alpha$\ HST imaging for GP\,0926 \citep{Hayes14}. It can also indicate a preference for extinction curves with a low total-to-selective extinction ratio, typical of low-metallicity galaxies and also found in the GP-like galaxies of \citet{Izotov16,Izotov16b}.

\subsubsection{Fit parameter discrepancies are tied to spectral shape}
\label{sec_ewbr}

We have tested whether any correlations exist between the fitting parameters or their discrepancies. The first conclusion is that both $v_\mathrm{exp}$\ and $z$ need to be free parameters in order to reproduce the GP Ly$\alpha$\ peaks and troughs, but nothing can be concluded about the correlation between the discrepancies $\Delta z$ and $\Delta v$. Secondly, a hint of a correlation was found between the redshift discrepancy $\Delta z$ and the ratio of the intrinsic Ly$\alpha$\ and H$\beta$\ widths, $FWHM_0$\/(\lya)/$FWHM$\/(\hb).
Larger samples are needed to confirm or refute the correlations. A natural question is how the difficulties in Ly$\alpha$\ fitting are linked to the double-peak character of the line profile, common in the GPs. We devote Fig.\,\ref{fig_ewbr} to studying several best-fit and observational parameters as a function of the Ly$\alpha$\ blue peak flux fraction, expressed as $EW$\/(\lya)$_\mathrm{Blue}$/$EW$\/(\lya)$_\mathrm{Red}$. The blue and red peak fluxes were measured as separated by the central trough (and not by the systemic redshift) to better characterize the flux in each peak, as described in Sect.\,\ref{sect_profiles}. We find that the difference between fitted and measured gas velocities, $v_\mathrm{exp}$$-$|$v_\mathrm{LIS}$|, generally grows in magnitude with increasing blue peak strength (Fig.\,\ref{fig_ewbr}a). With the exception of two points that have high $EW$\/(\lya)$_\mathrm{Blue}$/$EW$\/(\lya)$_\mathrm{Red}$$\sim0.4$ and $v_\mathrm{exp}$$-$|$v_\mathrm{LIS}$|$\sim0$, all the other studied galaxies show an anti-correlation between the velocity difference and the blue peak flux fraction (Spearman coefficient $-0.91$ and p-value 0.002). The colour-scale coding of the plot helps clarify that the two outliers have low LIS velocities $\left(<\!100{\textrm{\,km\,s$^{-1}$}}\right)$. For these two outliers, the agreement between the modelled and observed values is fortuitous: low velocities are the only way to produce strong blue peaks in the shell models. The colour scale of Fig.\,\ref{fig_ewbr}a also helps to visualize that the double peaks appear in GPs across a large range of LIS velocities. If the GP Ly$\alpha$\ blue peaks were due to low gas velocities, we would expect a correlation between the LIS velocities and the observed $EW$\/(\lya)$_\mathrm{Blue}$/$EW$\/(\lya)$_\mathrm{Red}$\ -- which we cannot confirm in our data.
The blue peaks may thus not be linked to static environments, or alternatively may be formed in static environments not probed by the LIS lines. \begin{figure}[th] \begin{tabular}{l} \includegraphics[width=0.45\textwidth]{Fig6a.eps} \\ \includegraphics[width=0.45\textwidth]{Fig6b.eps} \\ \includegraphics[width=0.45\textwidth]{Fig6c.eps} \\ \end{tabular} \caption{Relations between the Ly$\alpha$\ blue and red peak $EW$ ratio, and several observed and best-fit parameters. Circles represent the observed GPs, crosses represent shell models. Details can be found in Sect.\,\ref{sec_ewbr}. } \label{fig_ewbr} \end{figure} Figure~\ref{fig_ewbr}b is devoted to the Ly$\alpha$\ trough position, which we showed earlier to be inconsistent with the measured LIS velocities. We observe that as the blue peak grows stronger, the trough shifts from large negative offsets towards the systemic redshift and then to positive offsets. The peculiar shift to positive velocities, unexpected from the modelling, is just a continuation of the overall trend. We also see a trend between the trough position and $v_\mathrm{exp}$\ (colour scale in the plot): the trough position is most negative for the largest $v_\mathrm{exp}$\ ($>\!100$\,km\,s$^{-1}$), and positive mostly for low $v_\mathrm{exp}$. A correlation between the trough position and both $v_\mathrm{exp}$\ and relative blue peak $EW$ is indeed expected in the models where the blue peak originates from radiative transfer in low-velocity or static environments: the optical depth is maximum at approximately $-$$v_\mathrm{exp}$, therefore the trough offset from the systemic velocity becomes larger with increasing $v_\mathrm{exp}$. The blue peak becomes weaker with increasing $v_\mathrm{exp}$. However, the models (cross symbols) do not occupy the same locus as the observed data in Fig.\,\ref{fig_ewbr}b. The model troughs are shifted further to negative velocities and can never reach positive velocities.
This illustrates why an adjustment of the systemic redshift is needed to fit the observed profiles. Finally, Fig.\,\ref{fig_ewbr}c shows the Ly$\alpha$\ double-peak separation as a function of the blue-to-red $EW$ ratio and the observed $f_\mathrm{esc}$\/(Ly$\alpha$). Although purely observational, this plot elucidates the reasons for the incompatibility between the models and GP data. The diagram shows that the stronger the blue peak, the smaller the peak separation in most cases, and the larger the observed $f_\mathrm{esc}$\/(Ly$\alpha$)\ (colour scale in the same plot). The two outliers with peak separations of $\sim\!500$\,km\,s$^{-1}$\ and $\sim\!800$\,km\,s$^{-1}$\ are the same ones as in Fig.\,\ref{fig_ewbr}a, with nearly static kinematics that favour the blue peak formation. The remaining nine GPs do not seem to have blue peaks related to the outflow kinematics but rather to the low \ion{H}{i}\ column density for the following reasons: 1) Ly$\alpha$\ double-peak separation is a good tracer of the \ion{H}{i}\ column density \citep{Verhamme15,Verhamme17}. As we find the smallest Ly$\alpha$\ separation in GPs with the largest blue peak flux contribution (Fig.\,\ref{fig_ewbr}c), the probability of LyC escape increases with increasing blue peak flux. 2) LyC escape from the GP-like galaxies has been shown to correlate with $f_\mathrm{esc}$\/(Ly$\alpha$)\ \citep{Verhamme17}. In our sample, GPs with a large blue peak tend to have a larger Ly$\alpha$\ escape fraction (colour scale in Fig.\,\ref{fig_ewbr}c). The correlation between the $EW$\/(\lya)$_\mathrm{Blue}$/$EW$\/(\lya)$_\mathrm{Red}$\ ratio and $f_\mathrm{esc}$\/(Ly$\alpha$)\ has a Spearman coefficient of 0.85 and a p-value of 0.003 if the two outliers with static LIS gas are left out.
In addition, we see an anti-correlation between the peak separation and $f_\mathrm{esc}$\/(Ly$\alpha$), with a Spearman coefficient of 0.94 and a p-value of $10^{-4}$. Therefore, if the blue peak is not produced in static gas, it is related to the Ly$\alpha$\ and potentially the LyC escape. Complementing previous GP studies \citep{Henry15,Yang16,Yang17}, we therefore show here that not only the blue peak position but also its relative flux seems to carry information about the transparency of the neutral ISM.

\section{Discussion}
\label{sec_discussion}

\begin{figure} \includegraphics[width=0.48\textwidth]{Fig7.eps} \caption{\label{fig_trough_sep} Observed separation between the red and blue Ly$\alpha$\ peaks as a function of the residual flux in the central Ly$\alpha$\ trough $F_\mathrm{trough}$ divided by the mean flux in the red and blue Ly$\alpha$\ peaks $\bar{F}_\mathrm{peaks}$. An analogous plot was presented for model spectra in \cite{Gronke16}. No correlation was seen for clumpy models, while a trend was present for homogeneous shell models (see Sect.\,\ref{sec_clumpy}). } \end{figure} We devote the following sections to exploring possible reasons for the encountered discrepancies, to discussing the possible origin of the Ly$\alpha$\ emission, and to assessing the compatibility of the observations with other existing models. We have searched for correlations between the fit parameter discrepancies and galaxy properties such as mass, size, metallicity, star-formation rate, amount of dust, UV absorption line $EW$s, emission, and absorption line $FWHM$. We found no clear trends.

\subsection{Blue peaks in the literature}

The unconstrained Ly$\alpha$\ fits that we presented in Sect.\,\ref{sect_unconstrained} were previously studied by \citet{Yang16}, using the same GP sample and a similar set of homogeneous shell models \citep[][]{Dijkstra06,Gronke15}.
In this respect, the present paper reproduces their fitting results, while it further extends the analysis by applying observational constraints and by measuring the differences between the modelled and observed ISM parameters. In this paragraph, we compare the two sets of unconstrained fitting results produced by the two papers. Unlike the present paper, \citet{Yang16} did not convolve the synthetic spectra to the observed spectral resolution, which could be the origin of several discrepancies. While our automatic procedure fitted all of the twelve Ly$\alpha$\ profiles, \citet{Yang16} were able to fit nine of them (their Figure 7). They reported that they manually adjusted the model parameters for the remaining three -- GP\,1133, GP\,1219, and GP\,1424 -- to match the observed peaks and troughs. They published the model parameters for all twelve galaxies in their Table~4, which we now compare with our Table~\ref{tab_params}. The derived shell expansion velocities were similar in both studies (with differences between them of $\lesssim$\,50\,km\,s$^{-1}$), and are equally inconsistent with the measured LIS outflows. Both studies needed similarly broad intrinsic Ly$\alpha$\ profiles to achieve good fits (agreement between the two codes within $\sim$\,100\,km\,s$^{-1}$). The third parameter causing problems in our fitting, the systemic redshift, was not discussed by \cite{Yang16}. They only presented the SDSS redshift in the paper and did not discuss any adjustments, therefore a comparison with our results cannot be done. No data were given for $EW_0$\/(\lya)\ in their paper either. Comparison between the derived \ion{H}{i}\ column densities shows that in approximately one half of the sample the values agree between the two studies. In the other half, we typically find $N_\mathrm{HI}$\ lower by $\sim\!\!1/2$\,dex. 
We speculate that the reason lies in the absence of correction for spectral resolution in \citet{Yang16}; instead of being broadened by the instrumental profile, their synthetic profiles needed to be additionally broadened by a higher $N_\mathrm{HI}$. For the dust optical depth $\tau_\mathrm{d},$ both papers show a large scatter in the best-fitting values for each target. The reason is the weak effect of dust on Ly$\alpha$\ in low-$N_\mathrm{HI}$\ models ($N_\mathrm{HI}$\,$\lesssim\!\!10^{19}$\,cm$^{-2}$), which are characteristic of the GPs and which allow an efficient Ly$\alpha$\ escape. Both dust-poor and dust-rich \ion{H}{i}\ media can produce similar Ly$\alpha$\ spectra if the $N_\mathrm{HI}$\ is low, and are thus equally likely to be among the best-fitting models. Our median $\tau_\mathrm{d}$ values were lower by $\Delta\tau_\mathrm{d}\!\sim\!0.3\!-\!1$ than those of \citet{Yang16} in one half of the sample, while they were either equivalent or higher by a similar amount in the other half. Given the large spread of $\tau_\mathrm{d}$ for each target, the values can be considered consistent between the two studies (with the exception of GP\,0911), and also in agreement with the observations within the error bars. As for the Doppler parameter $b$ \citep[which corresponds to the temperature in][]{Yang16}, it is the least robust among the fitting parameters \citep{Gronke15}, therefore a large scatter in the fitted values and large differences between the two codes would not be surprising. We see from the \citet{Yang16} results that their model grid admitted temperatures as low as $10^3$\,K, which was an order of magnitude lower than in our code. We therefore automatically obtained fits with higher temperatures. In summary, we consider the results of the two unconstrained-fitting studies to be mostly consistent within the error bars.
We generally considered larger error bars in order to account for the discreteness of the model grid and for the observational uncertainties, and also to provide sufficient parameter space for testing the match between the shell models, the ISM parameters, and the observational data. \begin{figure*}[t!] \includegraphics[width=1.0\textwidth]{Fig8.eps} \caption{Symmetry of the red and blue Ly$\alpha$\ peak positions in the observed and modelled spectra. a) The observed Ly$\alpha$\ double-peak separation as a function of the blue and red Ly$\alpha$\ peak positions: the distributions are similar for both peaks, and the correlation is tighter for the blue peak. b) The same as (a) but for shell models; here the correlation is tighter for the blue peak, and the red and blue peaks are not symmetric. c) The same as (a) but measured with respect to the central Ly$\alpha$\ trough; the correlations here are stronger than in (a). d) The same as (b) but measured with respect to the central Ly$\alpha$\ trough; the asymmetry between the red and blue peaks disappears. } \label{fig_symm2} \end{figure*} \bigskip We also note here that problems with double-peak Ly$\alpha$\ profile fitting have been reported in the literature before. \citet{Chonis13} had problems fitting double-peak profiles of $z\sim2$ LAEs. However, we believe that those problems were mainly caused by the models that they used. Their radiative transfer code only computed the radiative transfer of monochromatic Ly$\alpha$\ radiation, unlike ours, which assumes a Gaussian line input plus a continuum. We are able to reproduce their LAE Ly$\alpha$\ profiles with our model grid in the same way as the LBG spectra in \citet{Verhamme08} and \citet{Schaerer08}, and as the GP spectra with unconstrained models in this paper. The problems that we encounter in our GP fitting are of a different nature: owing to the availability of the detailed constraints, we see the discrepancy between the fitted and observed ISM parameters.
Such a detailed study has only been possible at low redshift so far. Some of the constraints were available in the $z\sim2$ LAE study of \citet{Hashimoto15}, where they also needed to invoke broad intrinsic Ly$\alpha$\ to reproduce the blue peaks. \subsection{Stellar Ly$\alpha$} The double-peaked Ly$\alpha$\ profiles observed in the GPs have their central troughs shifted in velocity compared to what is expected from the LIS absorption lines. In addition, some of the Ly$\alpha$\ troughs are redshifted to positive velocities, unexpected in outflowing media. We have tested whether, despite the LIS line results, the unusual Ly$\alpha$\ troughs could be explained by a combination of outflowing and infalling shell models. The answer was negative. Infalling spherical \ion{H}{i}\ shells with the Ly$\alpha$\ source placed in the sphere centre would produce a redshifted trough, but the line profile symmetry would be the opposite, with the blue peak dominating over the red \citep{Verhamme06,Dijkstra06}. A simultaneous production of a dominating red peak (as observed) and a redshifted trough (as observed) is challenging. We tested several scenarios that superposed infalling and outflowing shells, and none of them produced a redshifted Ly$\alpha$\ trough for the given flux ratio of the red and blue peaks. We conclude here that the redshifted troughs probably point either to inconsistencies between the redshifts probed by Ly$\alpha$\ and H$\beta$, or to ISM geometries not covered by our models, such as clumps, which we will discuss in Sect.\,\ref{sec_clumpy}. Related to this, \citet{Yang16} discussed the possibility of infalling clumps, but with no direct proof or conclusion. We have also tested the possibility that the shift could be due to underlying stellar Ly$\alpha$\ absorption.
Synthetic stellar population models using theoretical stellar atmospheres \citep{Pena13,Verhamme08,Valls93} showed that for a starburst $<$\,5\,Myr old, the stellar Ly$\alpha$\ reaches absorption $EW$\/(\lya)$_\mathrm{stell}\!\sim\!-4\pm2$\,\AA, depending on the stellar model and on the star-formation regime. Such young starbursts are expected in our sample due to their large $EW$\/(\ha), and were confirmed by stellar population fitting of similar targets in \citet{Izotov16,Izotov16b}. Therefore, the stellar Ly$\alpha$\ absorption reduces the observed emission fluxes by $<\!\!10\%$ in most of our sample. Furthermore, the discrepancy of the trough position is the largest in the strongest Ly$\alpha$\ emitters of our sample (Sect.\,\ref{sec_ewbr}), where the Ly$\alpha$\ absorption represents a particularly low fraction of the total flux. We have nevertheless tested the role of the stellar absorption in the final Ly$\alpha$\ line profile, by matching Starburst99 models \citep{Leitherer99} of different ages and metallicities to our observational data. The synthetic spectra over-predict the Ly$\alpha$\ absorption due to their use of observational O-star spectra, contaminated with interstellar features \citep{Pena13}. With this in mind, we selected the synthetic spectra that best matched the \ion{N}{v}\,$\lambda1240$ stellar P-Cygni feature and subtracted them from the observed GP Ly$\alpha$\ spectra. Even the over-predicted stellar Ly$\alpha$\ absorption did not change the Ly$\alpha$\ profile; in particular, it did not shift the central trough position. \subsection{Galaxy compactness and Ly$\alpha$\ halos} \label{sec_halo} Numerical studies \citep{Verhamme12, Verhamme15b} have raised the possibility of Ly$\alpha$\ double-peak formation in Ly$\alpha$\ ``halos'', that is, in extended diffuse Ly$\alpha$\ emission regions with no corresponding H$\alpha$\ or stellar light.
Radiative transfer in the cited hydrodynamic simulations showed that the Ly$\alpha$\ spectral profiles varied with the aperture size and position. Diffuse Ly$\alpha$\ halos were observationally found in stacks of high-$z$ galaxies \citep{Steidel11,Momose14}, and confirmed in individual galaxies both at low and high redshift \citep{Hayes13,Hayes14,Hayes16,Wisotzki16,Patricio16}. The prevalence of double peaks distinguishes the GPs from other galaxy samples. We can thus hypothesize that we see the effect of the Ly$\alpha$\ halos here, similar to the simulations. GPs are compact galaxies, with a UV angular size generally smaller than the HST/COS aperture, unlike other local galaxies, where the COS aperture contains the signal from one star-forming knot \citep[such as in][]{Wofford13,Rivera15}. Thanks to the large COS Ly$\alpha$\ escape fractions measured in GPs, we expect that Ly$\alpha$\ does not scatter too far from the production sites and does not create large halos, and that most of the signal is contained in the COS spectra. However, whether or not the double peaks are due to the non-resolved nature of GPs and their Ly$\alpha$\ halos needs to be observationally tested in more detail. A GP halo has so far been directly observed only for GP\,0926 \citep{Hayes13,Hayes14}, and halos have recently been suggested for other GPs based on the analysis of two-dimensional COS spectra \citep{Yang17sizes}. The mechanism of the double-peak formation in the halos remains to be clarified, too. In the shell models, only a static or slowly expanding medium leads to double-peaked Ly$\alpha$, therefore the role of kinematics must be assessed in the full simulation \citep[see][]{Verhamme15b}. Furthermore, the Ly$\alpha$\ halo emission can have an in-situ origin, either from cooling radiation or UV background fluorescence, rather than scattering \citep{Cantalupo12,Rosdahl12,Dijkstra14}.
If halos play a role in the Ly$\alpha$\ profiles, then the encountered differences between the LIS line and shell model velocities are unsurprising. One could argue that high-$z$ galaxy spectra are typically spatially unresolved, and yet, unlike the GPs, most of them are single-peaked, while the incidence of multiple peaks is only $\sim$\,30\% \citep{Kulas12,Trainor15}. However, the high-$z$ results are affected by the increasingly neutral IGM, which removes the blue part of the profile \citep{Laursen11}, and also by a typically low spectral resolution that blends the two peaks into one \citep[e.g.][]{Kulas12, Erb14}. Higher resolution data should alleviate at least one part of the problem. The connection between the blue peak and the Ly$\alpha$\ halo is one of the tasks to be explored by both numerical simulations and observations. The galaxy size enters the problem in yet another aspect: the size of the Ly$\alpha$\ photon source. The models that we used here assumed a point source. In contrast, the HST/COS acquisition images of GPs show a multi-knot star-formation structure in the near-UV. A similar structure will probably be reflected in the \ion{H}{ii}\ regions, and therefore we need to ask how this distribution would affect the radiative transfer results. This effect has not been addressed in the homogeneous shell models and is a task for further modelling. \subsection{Clumpy shells and Ly$\alpha$\ produced in outflows} \label{sec_clumpy} Clumpy shell models \citep{Laursen13,Duval14,Verhamme15,Gronke16,Gronke16b,Gronke17} have demonstrated that the Ly$\alpha$\ spectral profiles can be more complex than those from homogeneous shells. In particular, the clumpy outflow geometries presented in \citet{Gronke16} may provide an interesting alternative to the homogeneous shells, potentially solving several problems encountered in the GP Ly$\alpha$\ profile fitting \citep[as was also invoked by][]{Yang16,Yang17}.
First, in clumpy outflow models, the double-peaked Ly$\alpha$\ profiles are not confined to low expansion velocities. This is a convenient property that would remove the conflict with the LIS line velocities and redshifts. Second, some of the clumpy models can achieve a redshifted central trough \citep[Fig.\,8 of][]{Gronke16} -- a property that is unachievable in homogeneous shells but is observed in some GPs. Third, the clumpy models may also remove the problem of the too broad intrinsic Ly$\alpha$, as shown by some of the results in \citet{Gronke16}. However, this may not be due to the clumpy nature of the model ISM, but rather due to the authors' assumption that Ly$\alpha$\ is formed inside the outflow, rather than by a point source as assumed in our models. More theoretical work needs to be done on the clumpy models to assess the blue and red peak locations and their asymmetries, as well as the connections to the Ly$\alpha$\ and LyC escape. A recent observation of a lensed $z=2.4$ galaxy reported a triple-peak Ly$\alpha$\ spectrum, similar to those of clumpy models, which was interpreted as a possible LyC leakage signature \citep{Rivera17}. Nevertheless, we have designed a test to probe the usefulness of clumpy models for our GPs. \citet{Gronke16} measured the residual flux in the modelled Ly$\alpha$\ central trough, $F_\mathrm{trough}$, and found that the ratio of $F_\mathrm{trough}$ and the mean of the peak maxima, $\bar{F}_\mathrm{peaks}$, had a different distribution in the clumpy models than in the homogeneous shells (their Fig.\,2). The ratio can span a wide range of values in the clumpy models, $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}\!\sim\!0-8,$ with the largest concentration around $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}\!\sim\!0.3.$ In contrast, $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}\!<\!0.1$ for most of the homogeneous shell models.
For comparison, our Fig.\,\ref{fig_trough_sep} shows that the observed GP values span $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}\sim 0-0.3.$ These values could be explained by either model, since a small number of homogeneous models reach as far as $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}\sim0.6.$ In addition, the observational data may also be affected by the instrumental resolution, which would make the observed $F_\mathrm{trough}$ artificially high. However, besides the interval span, the observed GP data inversely correlate with the Ly$\alpha$\ double-peak separation (Fig.\,\ref{fig_trough_sep}). No such correlation was found in the clumpy models; the models randomly filled the whole plane. Conversely, homogeneous shells showed a large scatter of peak separations ($v_\mathrm{sep}\!\sim\!0-2000$\,km\,s$^{-1}$) at $F_\mathrm{trough}/\bar{F}_\mathrm{peaks} \sim0$, but with increasing $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}$ the maximum separation decreases to $v_\mathrm{sep}\!<$\,500\,km\,s$^{-1}$\ at $F_\mathrm{trough}/\bar{F}_\mathrm{peaks}\!\sim\!0.3$. The trend in homogeneous shells thus resembles the observed GP data (while bearing in mind that the sample size is limited). In homogeneous shells, both $F_\mathrm{trough}$ and the peak separation are governed by $N_\mathrm{HI}$, therefore a correlation between them is expected, unlike in the case of the clumpy models, where the Ly$\alpha$\ escape is regulated by the properties and the distribution of the clumps. In view of these results, the role of the clumpy models is still unclear. A similarly cautious conclusion was drawn by \citet{Gronke16}. The clumps are worth considering as they offer a multitude of possibilities \citep[see also][]{Gronke16b,Gronke17}. Model fitting of observational profiles, using various flavours of the clumpy models, with and without constraining their parameters, remains to be done. Tests similar to our Fig.\,\ref{fig_trough_sep} could complement the spectral fitting.
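As a side note, the two quantities plotted in Fig.\,\ref{fig_trough_sep} can be measured directly from any double-peaked profile. A minimal sketch of such a measurement (Python; the two-Gaussian profile below is a hypothetical stand-in, not actual GP or model data):

```python
import numpy as np

def trough_diagnostics(v, flux):
    """Return the double-peak separation v_sep (same units as v) and the
    ratio F_trough / mean(F_peaks) of a double-peaked line profile."""
    idx = np.arange(1, len(flux) - 1)
    # Local maxima of the profile; keep the two strongest as the peaks.
    maxima = idx[(flux[idx] > flux[idx - 1]) & (flux[idx] > flux[idx + 1])]
    blue, red = np.sort(maxima[np.argsort(flux[maxima])[-2:]])
    # The trough is the flux minimum between the two peaks.
    trough = blue + np.argmin(flux[blue:red + 1])
    return v[red] - v[blue], flux[trough] / np.mean([flux[blue], flux[red]])

# Hypothetical double-peaked profile: two Gaussians at -150 and +200 km/s.
v = np.linspace(-1000.0, 1000.0, 2001)
flux = (0.4 * np.exp(-0.5 * ((v + 150.0) / 80.0) ** 2)
        + 1.0 * np.exp(-0.5 * ((v - 200.0) / 120.0) ** 2))
v_sep, ratio = trough_diagnostics(v, flux)
```

On real spectra, the measured $F_\mathrm{trough}$ is additionally raised by the instrumental resolution, which broadens the peaks and fills in the trough.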
\subsection{Symmetries of the observed and modelled Ly$\alpha$\ profiles} \label{sec_sym_obsmod} \begin{figure}[b!] \begin{tabular}{ll} \includegraphics[width=0.43\textwidth]{Fig9.eps} \\ \end{tabular} \caption{Model Ly$\alpha$\ profile symmetries. The solid black line shows the shell model, the dashed blue line its inverse with respect to the systemic redshift. The shell model parameters: $N_\mathrm{HI}$\,$=\!10^{18}$\,cm$^{-2}$, $v_\mathrm{exp}$\,$=50$\,km\,s$^{-1}$, $\tau_\mathrm{d}\!=\!1$, $b\!=\!20$\,km\,s$^{-1}$, $FWHM_0$\/(\lya)\,$=\!500$\,km\,s$^{-1}$, $EW_0$\/(\lya)\,$=\!100$\,\AA. } \label{fig_inversemodel} \end{figure} \begin{figure*} \begin{tabular}{lll} \includegraphics[width=0.47\textwidth]{Fig10a.eps} & & \includegraphics[width=0.47\textwidth]{Fig10b.eps} \\ \end{tabular} \caption{Observed Ly$\alpha$\ profile symmetries: a) with respect to the systemic velocity derived from the SDSS redshift, b) with respect to the systemic velocity derived from the best-fitting model. The solid black line shows the observed data, the blue line their inversion. The red and blue wings are more symmetric in case (b) than in case (a). } \label{fig_inverse} \end{figure*} We here explore the symmetries of the shell model Ly$\alpha$\ profiles and compare them with those observed in the GPs. Symmetries in the peak positions, wings, and troughs could provide additional insight into the incompatibilities between the GP data and the shell models or their parameters. It was noted by \citet{Henry15} that it is mainly the blue-peak position that correlates with the GP double-peak separation, and thus with the Ly$\alpha$\ and LyC escape \citep{Verhamme15,Verhamme17}. \citet{Yang17} then showed that both peaks correlate with $f_\mathrm{esc}$\/(Ly$\alpha$)\ in a larger GP sample. We test here whether the same holds for the models. We work with twelve GPs, therefore we have to bear in mind the effects of the limited sample size.
We presented the GP asymmetric blue and red Ly$\alpha$\ peak positions in Fig.\,\ref{fig_symm}. We here study the Ly$\alpha$\ double-peak separation versus the blue and red peak positions measured from the systemic redshift, both in the observed and modelled spectra (Fig.\,\ref{fig_symm2}a,b). The double-peak separation is a proxy for the Ly$\alpha$\ and LyC escape, and we explore how the blue and red peaks correlate with it and what symmetries exist in the observations and the models. We include the shell models that were among the twenty best fits for each target (see Sect.\,\ref{sect_unconstrained}). We find stronger correlations for the blue peak both in the COS data and in the models. However, a clear shift between the red and blue peak positions exists in the models that is not seen in the observations. Models with strong blue peaks (blue-to-red flux ratio $>0.3,$ Table\,\ref{tab_ewbr}) have the most symmetric peak positions (i.e. red and blue squares plotted close to each other in Fig.\,\ref{fig_symm2}b). Conversely, the weak blue peak models are responsible for the scatter in that plot. On the other hand, if we measure the peak positions from the central trough instead of the systemic velocity, we obtain the plots of Fig.\,\ref{fig_symm2}c,d. Both the observed and modelled spectra show more symmetry with respect to the trough than to the systemic velocity. In this case, the correlation with the peak separation is strong for both the blue and red peaks and for both the observed and modelled spectra. This result is another illustration of why our models failed to reproduce the observed spectra with the applied constraints, while they were usable in the modelling with free redshift; they attained the required symmetry with the modified redshift. Using the modified redshift, the blue and red peak positions would resemble the models in Fig.\,\ref{fig_symm2}b.
We have further tested the symmetry of the Ly$\alpha$\ line wings by exploring the Ly$\alpha$\ profiles together with their plots flipped with respect to the systemic velocity (Figs.\,\ref{fig_inversemodel} and \ref{fig_inverse}). The modelled double peaks have symmetric wings with respect to the systemic redshift (Fig.\,\ref{fig_inversemodel}), but only on the condition of a broad $FWHM_0$\/(\lya), equivalent to the one that fits the observed GP profiles. The wing symmetry is created by the small effects of radiative transfer far from the line centre, and by a symmetric intrinsic profile. On the other hand, models with a narrow $FWHM_0$\/(\lya)\ (as in Fig.\,\ref{fig_constrained}) have red wings stronger than the blue ones, due to radiative transfer effects. For the observed GP spectra, the red wing is also mostly stronger than the blue one (Fig.\,\ref{fig_inverse}a), despite the fact that the wings are as broad as in the model of Fig.\,\ref{fig_inversemodel}. By contrast, if we consider $z$ derived from the Ly$\alpha$\ fits rather than from SDSS, the observed wings become symmetric (Fig.\,\ref{fig_inverse}b). We conclude that the free fitting process modifies $z$ to obtain spectra that are more symmetric in the red and blue peak positions with respect to the systemic redshift. The new symmetry makes the data compatible with the shell models. As a consequence, the line wings become symmetric. The wing symmetry of the model spectra is in turn achievable by assuming a large intrinsic $FWHM_0$\/(\lya). A comparison with clumpy models and other geometries would be useful to assess how unique the resemblance between the data and the shifted models is, that is, the alignment of the trough position and the wing shape, and the resulting possibility of finding models that fit the double peaks.
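The flipping test described above can also be quantified. A small sketch (Python; Gaussian toy profiles, not the observed spectra) shows how a profile that is symmetric about its own centre appears asymmetric when mirrored about a wrongly chosen systemic velocity:

```python
import numpy as np

def wing_asymmetry(v, flux, v0=0.0):
    """Mirror the profile about velocity v0 and return the rms difference
    between the profile and its mirror image, normalised to the peak flux;
    small values mean symmetric red and blue wings."""
    # (2*v0 - v)[::-1] is increasing, as np.interp requires.
    mirrored = np.interp(v, (2.0 * v0 - v)[::-1], flux[::-1])
    return np.sqrt(np.mean((flux - mirrored) ** 2)) / flux.max()

v = np.linspace(-1000.0, 1000.0, 2001)
# A Gaussian line whose true centre sits 100 km/s redward of the assumed
# systemic velocity (a toy analogue of the encountered z discrepancy):
shifted = np.exp(-0.5 * ((v - 100.0) / 200.0) ** 2)
a_wrong = wing_asymmetry(v, shifted, v0=0.0)    # mirrored about wrong zero
a_right = wing_asymmetry(v, shifted, v0=100.0)  # mirrored about true centre
```

Mirroring about the line's own centre (the analogue of the modified redshift) restores the wing symmetry, whereas mirroring about the wrong zero-point leaves a large residual.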
This exercise still does not clarify the reasons for the discrepancy in $z.$ It does not answer the question of how appropriate the shell models are, or whether the resemblance between the unconstrained fitting models and the data is a coincidence. We showed in Sect.\,\ref{sec_ewbr} that the discrepancies in the individual fitting parameters were tied to the spectral shape, namely to the blue peak $EW$. We also showed that the blue peaks were related to the Ly$\alpha$\ and LyC escape. In this light, the resemblance between the symmetries of the modelled and observed (shifted) spectra appears surprising and requires more theoretical work. \subsection{Ly$\alpha$\ sources} Our models considered recombination as the only source of Ly$\alpha$. Some of the fitting parameter discrepancies, namely in $z$ and $FWHM_0$\/(\lya), were evaluated by comparing the Ly$\alpha$\ and H$\beta$\ lines under the assumption that the same recombination process was the origin of both lines. However, other Ly$\alpha$\ production mechanisms are possible and could be responsible for part of the fitting problems. Collisional excitation is one such process that could cause kinematic differences between Ly$\alpha$\ and H$\beta$. Collisional excitation populates the first excited level more than the higher energy levels, leading to a Ly$\alpha$/H$\alpha$\ emissivity ratio of $\sim\!\!100$ \citep{Dijkstra14}, that is, an order of magnitude higher than in the case of recombination ($\sim\!\!8$). In the violent conditions inside GPs, characterized by large star-formation rates and high excitation, strong collisional processes can be expected \citep{Oti12}. Typical GP electron temperatures are relatively high, $\sim\!\!15\,000$\,K, favourable to the collisional excitation scenario \citep{Jaskot13}. A collisional contribution could explain the intrinsic Ly$\alpha$\ profile broadening and the redshift discrepancy between the modelled intrinsic Ly$\alpha$\ and the observed H$\beta$.
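The effect of such a contribution on the expected intrinsic Ly$\alpha$\ flux can be illustrated with a simple mixing estimate. A sketch (Python) using the $\sim$\,8 and $\sim$\,100 emissivity ratios quoted above; the collisional fraction is a hypothetical example value, not a measurement:

```python
# Illustrative arithmetic only: how a collisionally excited fraction of
# the H-alpha emission changes the expected intrinsic Lya flux.
LYA_HA_REC = 8.0     # Lya/Ha emissivity ratio for recombination
LYA_HA_COLL = 100.0  # Lya/Ha emissivity ratio for collisional excitation

def intrinsic_lya(f_ha, coll_fraction):
    """Intrinsic Lya flux for a given Ha flux when a fraction of the Ha
    emission is collisionally excited (the rest is recombination)."""
    return f_ha * ((1.0 - coll_fraction) * LYA_HA_REC
                   + coll_fraction * LYA_HA_COLL)

f_ha = 1.0                            # arbitrary flux units
pure_rec = intrinsic_lya(f_ha, 0.0)   # -> 8.0
mixed = intrinsic_lya(f_ha, 0.1)      # 10% collisional -> 17.2
```

Even a 10\% collisional fraction roughly doubles the expected intrinsic Ly$\alpha$, which could mimic part of the flux and profile discrepancies discussed above.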
We have also previously mentioned the possibility of Ly$\alpha$\ production in the outflowing medium (Sect.\,\ref{sec_clumpy}), which could be due to a number of different processes; their impact on the resulting spectrum needs to be further explored. Other possible sources of Ly$\alpha$\ emission include fluorescence \citep{Cantalupo12,Cantalupo14} and gravitational cooling \citep{Dijkstra06,Dijkstra14}. Both processes, acting in the outer layers of the ISM, would be able to produce a large $FWHM_0$\/(\lya)\ \citep{Hashimoto15}. In addition, Fermi-like acceleration of Ly$\alpha$\ photons across shock fronts has been suggested as an alternative origin of the Ly$\alpha$\ blue peaks \citep{Chung16, Neufeld88}. Tests of these predictions are still missing. \citet{Martin15} searched for the signatures of Fermi acceleration in ULIRGs and concluded that no compelling evidence was found, but they admitted that the process can play a role in some objects. Finally, we note that a detailed exploration of the recombination sources alone would also be useful. The SDSS H$\beta$\ spectra that we used did not have sufficient resolving power ($R\!\sim\!2000$) to show the complete emission line structure. \citet{Amorin12} observed several green peas with a high-resolution echelle spectrograph ($R\!>\!\!10\,000$), and found complex H$\alpha$\ profiles with several distinct kinematic components. If the components come from different regions of the galaxy, the conditions for the Ly$\alpha$\ transfer in each of them can be different and could thus affect the resulting Ly$\alpha$\ profile differently. The sample of \citet{Amorin12} unfortunately does not overlap with our Ly$\alpha$\ sample and thus could not be tested.
\section{Summary and conclusions} We have studied in detail the Ly$\alpha$\ spectra of twelve green pea galaxies, which are an unusual population of strongly star-forming compact galaxies at $z\sim0.2,$ and which resemble high-redshift Ly$\alpha$\ emitters in their mass, metallicity, star-formation rate, and possibly ionizing continuum leakage. Eleven out of the twelve studied GPs have double-peaked emission-line Ly$\alpha$\ spectral profiles. The spectra show no signs of broad underlying Ly$\alpha$\ absorption (which is often observed in low-$z$ star-forming galaxies), with two weak exceptions. Furthermore, they have non-zero residual flux in the central trough that separates the blue and red peaks. Together with a small peak separation, these properties indicate low \ion{H}{i}\ column densities, based on the criteria of \citet{Verhamme15}. The central Ly$\alpha$\ trough is redshifted from the systemic velocity in several GPs, which is unusual in the context of the known observations and models. We applied the {\tt MCLya} Monte Carlo Ly$\alpha$\ radiative transfer code \citep{Verhamme06}, which uses the geometry of expanding homogeneous shells, to fit the GP Ly$\alpha$\ spectra and derive their ISM parameters. In the first step, we applied detailed constraints on the fitting parameters, inferred from ancillary UV and optical spectra. The models did not correctly reproduce the observed GP Ly$\alpha$\ spectra in this case (Fig.\,\ref{fig_constrained}). We thus proceeded to unconstrained fitting \citep[similar to][]{Yang16,Yang17}, which correctly reproduced the spectral profiles (Fig.\,\ref{fig_fits}), but the best-fitting model parameters were in disagreement with the ancillary data (Fig.\,\ref{fig_parameters}). 
In particular: 1) The redshifts derived from the Ly$\alpha$\ fitting were in all cases larger than those from the SDSS optical emission lines ($\Delta z\!=\!10\!-\!250$\,km\,s$^{-1}$); 2) The best-fit outflow velocities were typically $\lesssim\!150$\,km\,s$^{-1}$, whereas the UV LIS line velocities were distributed in the interval $0-300$\,km\,s$^{-1}$; 3) The modelled $FWHM_0$\/(\lya)\ of the intrinsic Ly$\alpha$\ line was a factor of two to four larger than the measured $FWHM$\/(\hb)\ in each target. We found a link between the fit parameter discrepancies and the double-peak character of the Ly$\alpha$\ profiles, namely the $EW$\/(\lya)$_\mathrm{Blue}$/$EW$\/(\lya)$_\mathrm{Red}$\ ratio of the blue and red peak equivalent widths (Fig.\,\ref{fig_ewbr}). We propose two interpretations for the data-model disagreement. First, the ancillary data may be inappropriate to constrain the models; the LIS lines may not probe the environment where the Ly$\alpha$\ transfer takes place and the intrinsic Ly$\alpha$\ may not be produced by the same mechanism as H$\beta$. We showed that by modifying $z,$ the observed Ly$\alpha$\ trough positions would become compatible with the models (Fig.\,\ref{fig_ewbr}b) and the observed Ly$\alpha$\ profile symmetries would correspond to those of the homogeneous shell models (Figs.\,\ref{fig_symm2}-\ref{fig_inverse}). Second, the blue Ly$\alpha$\ peaks of GPs may not originate in a static ISM, as is the case in the homogeneous shell models, or at least not in the gas probed by the UV LIS lines. The blue peak formation mechanisms, either at a source or by the radiative transfer, need to be further investigated. The $EW$\/(\lya)$_\mathrm{Blue}$/$EW$\/(\lya)$_\mathrm{Red}$\ ratio correlates with the Ly$\alpha$\ escape fraction, and with the Ly$\alpha$\ peak separation, which suggests that the GP blue peaks are associated with low $N_\mathrm{HI}$, and with the Ly$\alpha$\ and LyC escape, rather than kinematics. 
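For reference, the velocity offsets quoted in point 1) follow from the standard conversion of a redshift difference into a velocity shift; a minimal sketch (Python; the redshifts are hypothetical example values):

```python
C_KMS = 299_792.458  # speed of light in km/s

def velocity_offset(z_lya, z_sys):
    """Velocity offset (km/s) of the Lya-fit redshift relative to the
    systemic redshift: dv = c * (z_lya - z_sys) / (1 + z_sys)."""
    return C_KMS * (z_lya - z_sys) / (1.0 + z_sys)

# Hypothetical values for a z ~ 0.2 Green Pea:
dv = velocity_offset(0.20010, 0.20000)  # ~ 25 km/s
```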
A connection between the blue peak position and $N_\mathrm{HI}$\ was previously proposed in the literature \citep{Henry15,Yang16,Yang17}, while we here extend this effect to the blue peak flux. We considered alternative models to reproduce the GP Ly$\alpha$\ profiles. No combination of outflowing and inflowing homogeneous shells was found to be compatible with the observed GP spectra. Clumpy models such as those of \citet{Gronke16} are a promising option, as they produce double peaks of various shapes by mechanisms other than kinematics. However, more theoretical work is needed to check their compatibility with observations. We could not confirm the compatibility based on the measured residual flux in the central trough of the Ly$\alpha$\ spectra. We found that it correlates with the red-blue Ly$\alpha$\ peak separation in GPs, which is a trend expected in the homogeneous case and not in the clumpy models \citep{Gronke16}. Nevertheless, the small observational sample may only cover a portion of the parameter space, and a larger sample will be needed to extend this study. Also, various versions of the clumpy models may provide different results. For future work, high-resolution H$\alpha$\ or H$\beta$\ spectra would be useful to provide more details about the \ion{H}{ii}\ region kinematics, that is, about the Ly$\alpha$\ source. If multiple kinematic components are present, their impact on the resulting Ly$\alpha$\ profile needs to be explored in the models. Possible contributions of non-recombination processes to the Ly$\alpha$\ spectra need to be tested. Finally, model fitting with the use of clumpy geometries should clarify whether the clumps are a solution to the problem of mismatching ISM parameters. \begin{acknowledgements} We thank the anonymous referee for improving the clarity of the paper. We thank Matthew Hayes for providing the fitting tool essential to the Ly$\alpha$\ modelling. I.O.
appreciated the discussions on stellar atmosphere models with Ji\v r{\'\i} Krti\v cka, and acknowledges the support from the Czech Science Foundation grant number 17-06217Y. This work is based on data from HST-GO-13293, which contributed support to MSO and AEJ, and archive data from HST-GO-12928 and 11727. The data were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. We have made use of SDSS data. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. We have made use of the ADS and NED databases. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{section:Introduction} The Galactic Center is a very active region. It is filled with stars, dust, gas at different temperatures and different ionization states, and an accreting supermassive black hole (SMBH) at the very center of the region, i.e., Sagittarius A* (SgrA*; for an overview see, e.g., Eckart et al. 2017). Due to the large mass of SgrA*, which amounts to about 4 million solar masses, the motion and radiation are also subject to relativistic effects, which can be used to map out spacetime in the vicinity of SgrA*. This leads to a situation in which many competing sources of radiation and extinction or dilution act at the same time. Hence, the region can be thought of as an interplay between light and shadow. In this brief overview, we begin the discussion at a scale of several parsecs and investigate the X-ray Bremsstrahlung radiation of the plasma in which SgrA* and the central stellar cluster are embedded. Further in, we detect the relativistic motion of the luminous star S2. Martins et al. (2008) confirmed that the star S2 is a main-sequence star of spectral type B0-2.5 V with a zero-age main-sequence (ZAMS) mass of 19.5\,M$_{\odot}$. Close to the last stable orbit around the SMBH, we have indications for plasma moving at relativistic speeds, and finally there may be the possibility to see the shadow of the SMBH as it bends the light generated in its accretion zone. \begin{figure} \centering \includegraphics[width=\columnwidth]{eckart-fig00.eps} \caption{The Chandra image of the X-ray diffuse emission at the Galactic Center as shown by Mossoux \& Eckart (2018) and as obtained for the energy interval between 0.5 and 8~keV. The observations have been carried out between 1999 and 2012. Corresponding to the Chandra angular resolution, the pixel size is half an arcsecond. The image intensity represents the count rate. 
It is shown in an inverted logarithmic scale, implying that the brightest areas are represented as the darkest regions. The location of SgrA* and the central stellar cluster is shown. The dashed line shows depressions due to the shadowing effect by the CND. In Mossoux \& Eckart (2018) we explain the algorithm that allowed us to transform these indications of the shadow into an image of it. The inset shows the image of the shadow, which looks very similar to the ALMA image of the CND. Its inner edge is identical to the circumference of the bright part of the central stellar cluster (see figures in Mossoux \& Eckart 2018). } \label{fig00} \end{figure} \subsection{Shadow of the Circum Nuclear Disk} \label{section:shadow2} The SgrA complex at the center of the Milky Way consists of several components like the SMBH SgrA*, the mini-spiral, and the Circum Nuclear Disk (CND). The source complex can be observed in the radio/sub-millimeter, infrared and X-ray wavelength domains. However, until now and as a part of SgrA East, the CND had only been detected in the radio and far infrared. Thanks to the 4.6 Ms of Chandra observations of the Galactic Center region, as described by Mossoux \& Eckart (2018), we were now able to detect an X-ray ``shadow'' of the CND against the diffuse X-ray emission of the entire region. Mossoux \& Eckart (2018) aimed at finding out if the CND acts as an absorber for the diffuse X-ray emission or as a barrier for the plasma close to Sgr A* and the central stellar cluster. The basis for this investigation is the ACIS-I and ACIS-S/HETG Chandra data covering the time interval between 1999 and 2012. Subtracting a smooth model of the diffuse X-ray emission from the image, inverting it, clipping it at zero intensity, and smoothing the result gives an image of the CND with identifiable subcomponents. The result is rather insensitive to the diffuse X-ray emission model. 
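The extraction sequence described above (subtract a smooth model, invert, clip at zero, smooth) can be sketched in a few lines. The box-blur stand-in for the model and the filter widths below are illustrative assumptions, not the actual procedure or parameters used by Mossoux \& Eckart (2018):

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur, used here as a simple stand-in for the
    smoothing steps of the published analysis."""
    kern = np.ones(k) / k
    img = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, img)

def shadow_image(counts, model_width=21, smooth_width=5):
    """Subtract a smooth model of the diffuse emission, invert,
    clip at zero intensity, and smooth -- the sequence described
    in the text.  Filter widths are illustrative choices."""
    model = box_blur(counts, model_width)          # smooth model of the diffuse emission
    deficit = np.clip(model - counts, 0.0, None)   # inverted residual, clipped at zero
    return box_blur(deficit, smooth_width)         # absorption ("shadow") map
```

Applied to a flat count map with a localized dip, the output peaks at the dip, which is the sense in which the CND absorption appears as a positive "shadow" signal.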
The shadow inspired us to define several regions (inside, on, and outside the CND, in addition to regions off the entire structure) for which we extracted spectra. From the spectra we derived temperatures and column densities. The general picture Mossoux \& Eckart (2018) get is that the best fits to the spectra are obtained assuming a two-temperature plasma. For the CND, a single-temperature model with 1.8~keV and a total local column density of 2.3$\times$10$^{22}$cm$^{-2}$ of the hot gas is obtained. Inside the CND, temperatures of 1~keV and 5~keV are needed. Similarly, for the outside, temperatures of 0.35~keV and 1.3~keV are indicated. Details of the MCMC fitting results are given in Table 1 by Mossoux \& Eckart (2018). We find that the plasma at the location of the CND is cooler in general and may in fact act as a barrier between the hot plasma inside and outside of the CND. \begin{figure} \centering \includegraphics[width=\columnwidth]{eckart-fig10.eps} \caption{ Orbital fits for the high velocity stars S2, S38, and S0-103/S55 orbiting the Galactic Center SMBH SgrA*, as taken from Parsa et al. (2017). We indicate regions in which the relativistic periastron shift is imprinted onto the orbit and where the effects can be seen easily in comparison to the larger scale orbit. The zoom depicts the (projected) prograde advance of the orbit by the amount $\Delta \omega$. See also text and animation in the ESO press Announcement ann17051 (http://www.eso.org/public/announcements/ann17051/). } \label{fig10} \end{figure} \section{Relativistic motion of star S2} \label{section:S2} The S-star cluster that surrounds the supermassive black hole SgrA* in the Galactic center is ideally suited to investigate the physics close to this peculiar object. In particular, such a study allows us to perform dynamical tests of general relativity. 
Using positions and radial velocities of three high velocity stars with the shortest periods (S2, S38, and S55/S0-102; see Fig.\ref{fig10}), we derived in Parsa et al. (2017) a black hole mass of $M_{BH} = (4.15 \pm 0.13 \pm 0.57) \times 10^6$M$_{\odot}$\ and a distance to it of $R_0 = 8.19 \pm 0.11 \pm 0.34$~kpc. The possibility of the detection of faint stars inside the S2 orbit is discussed in Zajacek \& Tursunov (2018). The relativistic character of simulated orbits was investigated via a first-order post-Newtonian approximation to calculate stellar orbits that cover a large range of periapse distances. These calculations were used to derive changes in orbital elements that were obtained from fits to different sections of the orbits. Here we used changes in the eccentricity and the semi-major axis obtained from fits to the lower and upper parts of the orbits; these are $\Delta e_l/\Delta e_u$ and $\Delta a_l/\Delta a_u$. We also calculated the periastron shift $\Delta \omega$ between the pre- and post-periapse parts of the orbit. Parsa et al. (2017) could show that these quantities are correlated with the relativistic parameter we defined as $\Upsilon=r_s/r_p$. Here, $r_s$ is the Schwarzschild radius and $r_p$ is the pericenter distance of the orbiting star. After establishing these correlations, we could use the corresponding data from star S2 to derive its relativistic parameter and, hence, determine the degree of relativity that is shown by its orbit. For S2, Parsa et al. (2017) were able to derive a value of $\Upsilon=0.00088 \pm 0.00080$. This value is consistent with the expected value of $\Upsilon=0.00065$ derived for the star S2 using the SgrA* black hole mass. Parsa et al. (2017) argue that this derived value is most likely not dominated by possible perturbing influences such as noise on the derived stellar positions, rotation of the field that was imaged at different epochs, and possible drifts of the black hole in position. 
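As a rough cross-check of the numbers quoted above, the Schwarzschild radius, the relativistic parameter, and the first-order periastron advance can be evaluated directly. The S2 orbital elements used below (pericenter distance of roughly 120 AU, semi-major axis of roughly 970 AU, eccentricity 0.88) are rounded values adopted for illustration, not the exact fit results of Parsa et al. (2017):

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_sun = 1.989e30       # kg
AU = 1.496e11          # m

M_bh = 4.15e6 * M_sun               # SgrA* mass from the three-star fit
r_s = 2 * G * M_bh / c**2           # Schwarzschild radius, ~1.2e10 m
r_p = 120 * AU                      # assumed S2 pericenter distance (illustrative)
upsilon = r_s / r_p                 # relativistic parameter Upsilon = r_s / r_p

# First-order GR periastron advance per orbit: 3*pi*r_s / (a*(1 - e^2))
a = 970 * AU                        # assumed semi-major axis (illustrative)
e = 0.88                            # assumed eccentricity (illustrative)
domega = 3 * math.pi * r_s / (a * (1 - e**2))   # radians per orbit

print(f"Upsilon ~ {upsilon:.1e}, periastron shift ~ {math.degrees(domega):.2f} deg/orbit")
```

With these rounded inputs one recovers a relativistic parameter of order $7\times10^{-4}$ and an advance of about 0.2 degrees per orbit, consistent with the expected value $\Upsilon=0.00065$ quoted in the text.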
\begin{figure} \centering \includegraphics[width=\columnwidth]{eckart-fig20.eps} \caption{ Sketch depicting the randomization of the shifts for the four orbital sections by $\Delta s$. As an example, the effective variations in the periastron shift $\Delta \omega$ and in the ratios $\Delta e_l/\Delta e_u$ and $\Delta a_l/\Delta a_u$ between lower and upper orbital fits are shown. Orbital fits to these configurations were used to obtain the distributions for the assumption of a fully noise dominated case. } \label{fig20} \end{figure} In Parsa et al. (2017), Sect.~5.3, we present a qualitative analysis of the robustness and significance of the results. Here, we present the results of a numerical calculation of the significance in comparison to a fully noise dominated scenario. The uncertainties for the $e$- and $a$-ratios as well as the $\Delta \omega$ value were obtained by propagating the uncertainties from the measurements, via the reference frames, to the final result. As we used only images in which SgrA* could be detected as well, for the stars in the central arcsecond the positional uncertainties are the most important quantities in order to measure the deviation from a Newtonian orbit. We assume a noise dominated case and estimate the uncertainties relative to it. We use the combination of our uncertainty in the R.A. direction (essential in the measurement of the $\Delta \omega$ of the S2 orbit) and the data from the literature. We then find a mean uncertainty of 1.4 mas for an individual position. Knowing that we (Parsa et al. 2017) have about 5 data points per quarter of the three dimensional orbit, we derive a positioning uncertainty for each quarter of about $\Delta s$ = 0.5~mas (in the projected view of the orbit). 
Hence, we randomize the position of each orbital segment by placing it at $\Delta s$=0,+0.5,-0.5~mas (see Fig.\ref{fig20}) with respect to the nominal position in the deprojected Newtonian orbit, i.e., less than that in the projected view of the orbit. For each orbit that we generate with this procedure, we calculate the quantities $\Delta e_u/\Delta e_l$, $\Delta a_u/\Delta a_l$, and the periastron shift $\Delta \omega$. The corresponding histograms are shown in Figs.\ref{fig30} and \ref{fig40}. These diagrams can now be used to determine an uncertainty $\sigma$ of the distributions and compare it to the values measured for star S2. The $e$- and $a$-ratios and $\Delta \omega$ obtained for star S2 represent at least 3-4$\sigma$ excursions, including the 3D application of $\Delta s$ (see above). Taking the inclination of S2 of about 137$^\circ$ into account, the 3-4$\sigma$ excursions turn into a 4-6$\sigma$ result. Parsa et al. (2017) show that the inclusion of a possible small proper motion of SgrA* with respect to the stellar cluster does not change the result on the relativistic periastron shift (see in particular Tab.8 in Parsa et al. 2017). Hence, it also does not affect our evaluation of the result with respect to the noise dominated Newtonian case. 
\begin{figure} \centering \includegraphics[width=\columnwidth]{eckart-fig30.eps} \caption{Distribution of the periastron shift $\Delta \omega$ obtained from the orbits with randomization of the shifts for the four orbital sections by $\Delta s$ under the assumption of a fully noise dominated Newtonian case.} \label{fig30} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{eckart-fig40.eps} \caption{Distribution of the ratios $\Delta e_l/\Delta e_u$, $\Delta a_l/\Delta a_u$ between lower and upper orbital fits obtained from the orbits with randomization of the shifts for the four orbital sections by $\Delta s$ under the assumption of a fully noise dominated Newtonian case.} \label{fig40} \end{figure} \subsection{Relativistic motion of plasma blobs} \label{section:blobs} Karssen et al. (2017) could show that the light curves of bright X-ray SgrA* flares have the exact shape one expects from plasma blobs orbiting the SMBH close to the last stable orbit. This is true for the four bright flares discussed by Karssen et al. (2017) and the recent bright flare reported by Ponti et al. (2017) (see also Eckart et al. 2018). Since the bright X-ray flares are correlated with synchronous NIR flares (see e.g. Eckart et al. 2012), the finding by Karssen et al. (2017) probably also holds for the NIR domain. However, in the X-ray domain, flares occur at a lower rate than in the NIR. Therefore, it is less likely that faint X-ray flares overlap the bright flare events, resulting in a clearer agreement with light curves expected from orbiting blobs. In the NIR, flares are much more frequent and it is less likely to obtain a clean single-blob light curve undisturbed by secondary flare events. The assumption that goes in is that the blobs stay stable for a substantial fraction of a single orbit. 
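The dynamical time for such orbiting blobs is set by the gravitational time scale $GM/c^3$. A quick estimate with the mass quoted earlier in the text gives the flare time scales involved; the ISCO period below assumes a non-spinning (Schwarzschild) black hole, which is an illustrative simplification since the actual period depends on spin:

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
M_sun = 1.989e30       # kg

M_bh = 4.15e6 * M_sun
t_g = G * M_bh / c**3                 # gravitational time scale, ~20 s for SgrA*
# Circular orbital period (as seen from infinity) at the last stable orbit
# of a non-spinning black hole, r_isco = 6 GM/c^2:
T_isco = 2 * math.pi * 6**1.5 * t_g   # ~half an hour for SgrA*
print(f"t_g ~ {t_g:.0f} s, ISCO period ~ {T_isco / 60:.0f} min")
```

Since both time scales are linear in $M$, fitting the flare shape in units of $GM/c^3$ and then rescaling to the observed flare duration in seconds yields the black hole mass, as described below.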
The characteristic feature of these theoretical light curves is that they all show a shoulder, due to lensing amplification, on the ascending part of the boosting dominated light curve (the boosting sets in before the lensing occurs). This can be found in the observed light curves of all bright X-ray flares as well. It is also reassuring that by fitting the flare curves scale free (in gravitational time scales $GM/c^3$) with the theoretical light curves, and then scaling them by introducing the observed flare time in seconds, one obtains the mass of SgrA*. This can be understood by the fact that the black hole mass scales linearly with the black hole (or rather the event horizon) size, i.e., with the orbital time scale and, therefore, the flare time scale in our model. Hence, being able to explain the X-ray flare shapes via the assumption of orbiting matter is another indication for the high degree of relativity in the vicinity of SgrA*. \subsection{Shadow of the supermassive black hole} \label{section:shadow1} One of the goals of the Event Horizon Telescope project (EHT) is to obtain a high angular resolution image of SgrA* at mm-wavelengths in order to measure the so-called shadow of the SMBH. This shadow results from the following effect: luminous plasma is orbiting the SMBH. Due to the bending of light by the SMBH, there is the possibility that, in addition to a bright Doppler boosted part of the source, there may appear a dark region in the overall structure (see Falcke et al. 2000). The expected size of that region is of the order of 30-50$\mu$as. However, whether one is able to see it or not will critically depend on the actual observational and geometrical situation (the shadow is also not a feature uniquely associated with black holes; see the detailed discussion in Eckart et al. 2017). 
In order to see this shadow, one needs to have a special orientation of the system, the accretion must be well ordered and not chaotic, and it would be best if there is no jet that may disturb the impression one gets from the orbiting material. Another problem may also result from the fact that the scattering screen along the line of sight just becomes transparent around a wavelength of 1~mm. Hence, it may be that multiple simultaneous images (like the speckle effect known from observations at optical and infrared wavelengths) occur. All of these effects and influences will make it difficult to get a clear view of the immediate environment of SgrA*. The first EHT results were recently published by Ru-Sen Lu et al. (2018). The authors report on a detection of intrinsic source structure at about 3 Schwarzschild radii, i.e., about 30$\mu$as, and therefore on the expected size scale for the shadow of the black hole. Their Fig.5 shows possible image structures that one would expect based on the current amplitude and closure phase measurements. The variations in the data and the appearance of the models are also consistent with the expectation of a shadow; however, detailed images that would show the disk structure due to orbiting matter with indications of a shadow have not yet been published. \begin{figure} \centering \includegraphics[width=\columnwidth]{eckart-fig50.eps} \caption{ The gravitational potential probed by a number of tests of gravity, presented in comparison to the mass responsible for the potential (based on Psaltis 2004). We have selected terrestrial labs, the precession of Mercury, light deflection and the Shapiro delay in the solar system, the Hulse-Taylor pulsar, QSO QPOs, and the gravitational wave detection GW150914 (Abbott et al. 2016). In addition, we show the results for the Galactic Center S2 orbit presented by Parsa et al. (2017) and the X-ray lightcurve fitting presented by Karssen et al. (2017). 
} \label{fig50} \end{figure} \section{Summary and Conclusion} \label{section:summary} Luminous stars and plasma blobs can be used to map the spacetime close to SgrA*. While the presence of the plasma blobs close to the last stable orbit needs to be inferred from the shape of the light curves, the stars can be observed directly. The plasma blobs would be part of the structure surrounding the SMBH. Due to light bending effects, that structure potentially shows a shadow of the black hole. In Parsa et al. (2017) we used three stars to derive the mass and distance of SgrA* in a Newtonian and a post-Newtonian solution. A new and simple method that compares properties of Newtonian fits to the lower, upper, pre-, and post-periapse parts of the orbits allows us to determine the degree of relativity. For the high velocity star S2, the values for the $e$- and $a$-ratios as well as the $\Delta \omega$ value lie close to the values expected for S2 and the SgrA* mass (see Iorio 2017). Here we give an analysis of the uncertainties for these quantities with respect to a noise dominated Newtonian situation. In this case the deviations are significant, and the S2 values for the $e$- and $a$-ratios and $\Delta \omega$ represent a $\ge$4~$\sigma$ result; it appears to be highly unlikely that the common effect seen in all three quantities originates from a purely noise dominated scenario. Accepting this result, S2 is the first star with a resolvable orbit around a SMBH for which a test of relativity can be performed. In the near future these results will become more significant using interferometric data as obtained by GRAVITY at the VLTI (e.g., Gravity Collaboration 2017, M\'erand et al. 2017). In addition, we report on the 'shadow' the Galactic Center casts with respect to the X-ray Bremsstrahlung emission of the plasma in the local foreground and background. 
This 'shadow' is caused by the Circum Nuclear Disk that surrounds SgrA*, and it allows us to effectively subdivide the hot nuclear plasma into areas inside, at, and outside the CND. Two-temperature models and estimates of the column density give us a clear picture of the physical conditions of the plasma. The CND appears to act as a blocking agent that keeps the hot plasma inside the central stellar cluster and close to the supermassive black hole SgrA*. The probes of the gravitational potential as they have been presented by Parsa et al. (2017) and Karssen et al. (2017), and summarized in the present publication, can be compared to other tests in Fig.~\ref{fig50}. This comparison is based on Fig.6 in Psaltis et al. (2004; see also Fig.1 in Hees et al. 2017, and Figs.4 and 5 in Eckart et al. 2010). This figure shows that the gravitational potential can now be probed over 9 orders of magnitude for masses covering more than 10 orders of magnitude. These results can still be improved with currently running interferometry experiments in the radio mm-wavelength and infrared regime. \\ \\ {\bf Acknowledgements:} We received funding from the European Union Seventh Framework Program (FP7/2007-2013) under grant agreement No. 312789 - Strong gravity: Probing Strong Gravity by Black Holes Across the Range of Masses. This work was supported in part by the Deutsche Forschungsgemeinschaft (DFG) via the Cologne Bonn Graduate School (BCGS), the Max Planck Society through the International Max Planck Research School (IMPRS) for Astronomy and Astrophysics, as well as special funds through the University of Cologne and SFB 956 - Conditions and Impact of Star Formation. E. Hosseini is a member of the IMPRS. We thank the Czech Science Foundation - DFG collaboration (No. 13-00070J) - and the German Academic Exchange Service (DAAD) for support under COPRAG2015 (No.57147386). 
The result on S2 has also been presented at the 'Stellar Dynamics in Galactic Nuclei' workshop, held at the Institute for Advanced Study, in Princeton, NJ, between November 29 and December 1, 2017, and at the 'Second Annual Black Hole Initiative Conference on Black Holes' (Harvard University), held at the Sheraton Commander Hotel, 16 Garden Street, Cambridge, MA, Wednesday, May 9 through Friday, May 11, 2018.
\section{Introduction} In recent times, $B$ physics has been going through a challenging phase: several anomalies at the level of $(3-4) \sigma$ \cite{RK-exp, RDstar-LHCb, RKstar-exp, phi-decayrate, Kstar-decayrate, P5p, isospin-kstar} have been observed by the LHCb Collaboration in the rare flavour changing neutral current (FCNC) processes involving the quark level transition $b \to s l^+ l^-$. As these processes are one-loop suppressed in the standard model (SM), they may play a vital role to decipher the signature of new physics (NP) beyond it. To supplement these observations, recently LHCb has reported $2.2 \sigma$ and $2.4\sigma$ discrepancies in the measurement of the $R_{K^*}$ observable in the dilepton invariant mass squared bins $q^2 \in [0.045, 1.1]~{\rm GeV}^2$ and $q^2 \in [1.1, 6.0]~{\rm GeV}^2$ \cite{RKstar-exp}, which are along the same lines as the previous result on the violation of the lepton universality parameter $R_K$ \cite{RK-exp}. Also, the lepton nonuniversality (LNU) parameters in the $B \to D^{(*)}$ processes ($R_{D^{(*)}}$) have been measured by the Belle, BaBar and LHCb collaborations, with deviations of $1.9 \sigma$ \cite{RD-BaBar, RD-exp} and $3.3\sigma$ \cite{RDstar-LHCb, RD-exp}, respectively, from their corresponding SM predictions. In Table I, we present the observed LNU ratios associated with the $b \to s l^+ l^-$ and $b \to c l \nu_l$ processes at LHCb and the $B$ factories. Furthermore, the decay rate of the $B_s \to \phi \mu^+ \mu^-$ process \cite{phi-decayrate} also has a discrepancy of around $3\sigma$ in the low $q^2$ region. \begin{table}[htb] \begin{center} \caption{The LNU parameters observed by the LHCb collaboration and the $B$ factories.} \vspace*{0.1 true in} \begin{tabular}{|c|c|c|c|} \hline LNU parameters ~ & ~SM predictions ~&~ Expt. 
result & ~ Deviation~ \\ \hline $R_K|_{q^2 \in [1.0, 6.0]}$ ~& ~ $1.0003 \pm 0.0001$~\cite{RK-SM} ~&~ $0.745^{+0.090}_{-0.074} \pm 0.036$~\cite{RK-exp}~& ~$2.6 \sigma$~\\ \hline $R_{K^*}|_{q^2 \in [0.045, 1.1]} $ ~& ~ $0.92 \pm 0.02$ ~\cite{RKstar-SM}~& ~$0.66^{+0.11}_{-0.07} \pm 0.03$~\cite{RKstar-exp}~&~ $2.2 \sigma$~\\ \hline $R_{K^*}|_{q^2 \in [1.1, 6.0]} $ ~& ~$ 1.00 \pm 0.01 $~\cite{RKstar-SM}~& $0.69^{+0.11}_{-0.07} \pm 0.05$ ~\cite{RKstar-exp}~&~ $2.4 \sigma$~\\ \hline $R_D$ ~& ~$ 0.300 \pm 0.008 $~\cite{RD-SM}~&~$0.397 \pm 0.040 \pm 0.028$ ~\cite{RD-exp}~& ~$1.9 \sigma$~\\ \hline $R_{D^*} $ ~&~ $0.252 \pm 0.003$ ~\cite{RDstar-SM}~& ~$0.316 \pm 0.016 \pm 0.010 $~\cite{RD-exp}~&~$3.3 \sigma$~ \\ \hline \end{tabular} \end{center} \end{table} In this context, we would like to investigate whether the observed anomalies in the rare $\bar B \to \bar K^* l^+ l^-$ decay processes, mediated through $b \to s l^+ l^-$ transitions, can be explained in the vector leptoquark model. In the last few years, these processes have provided several surprising results and played a very crucial role in the search for NP signals, as the measurement of the four-body angular distribution provides a large number of observables which can be used to probe NP signatures. In the low $q^2 \in [1, 6]~{\rm GeV}^2$ region (where $q^2$ denotes the dilepton invariant mass squared), the theoretical predictions for such observables are very precise and generally free from hadronic uncertainties. However, the observed forward-backward asymmetry is systematically below the corresponding SM prediction, though the zero crossing point is consistent with it. Moreover, the LHCb Collaboration has reported many other deviations from the SM expectations in the angular observables. 
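The significance entries in Table I follow from a simple pull, combining the experimental and SM uncertainties in quadrature. A minimal sketch using the $R_K$ numbers from the table (taking the larger, upper statistical error for the one-sided comparison, which is an illustrative choice):

```python
import math

def pull(sm, sm_err, exp, exp_stat, exp_syst):
    """Deviation of a measurement from its SM prediction in units
    of the combined (quadrature) uncertainty."""
    sigma = math.sqrt(sm_err**2 + exp_stat**2 + exp_syst**2)
    return abs(sm - exp) / sigma

# R_K in q^2 in [1.0, 6.0] GeV^2: SM 1.0003 +/- 0.0001,
# LHCb 0.745 (+0.090 stat, relevant side) +/- 0.036 syst
print(f"R_K pull ~ {pull(1.0003, 0.0001, 0.745, 0.090, 0.036):.1f} sigma")
```

This reproduces the $2.6\sigma$ quoted for $R_K$ in Table I; the asymmetric errors of the other entries require a more careful treatment than this sketch.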
The largest discrepancy, of $\sim3 \sigma$, in the famous $P_5^\prime$ optimized observable \cite{P5p} and the decay rate \cite{Kstar-decayrate} of these processes provide a sensitive probe to explore NP effects in $b \to s \gamma$ and $b \to s ll$ transitions. In addition, the isospin asymmetry \cite{isospin-kstar} has also been measured by the LHCb experiment in the full $q^2$ region, which can be used to probe the NP signal. For the first time, Belle has recently measured two new lepton flavour universality violating (LFUV) observables, $Q_{4, 5} = P_{4, 5}^{\prime\,\mu} - P_{4, 5}^{\prime\,e}$ \cite{Q4-Belle}. In order to scrutinize the above results, these processes have already been investigated in the context of various new physics models and also in model-independent ways. The recent measurement of the $R_{K^*}$ parameter at the LHCb experiment has drawn much attention to restudying these processes in the low $q^2$ region. In the light of the recent $R_{K^*}$ data, several works \cite{recent-arXiv} have recently been reported in the literature. To understand the origin of the current issues observed at the LHCb experiment in a particular theoretical framework, here we extend the SM by adding a single vector leptoquark (LQ) and reinvestigate the rare semileptonic $\bar B \to \bar K^* l^+ l^-$ decay processes. Though a few recent studies in the literature \cite{recent-arXiv} have investigated the $R_{K^*}$ anomaly, no analysis of $R_{K^*}$ has been done with a vector LQ, which can induce the process at tree level. In our previous work \cite{mohanta0}, we made a comparative study of the rare semileptonic $\bar B \to \bar K^* l^+ l^-$ decay modes in both the $(3,2,7/6)$ and $(3,2,1/6)$ scalar LQ models. However, we did not investigate the new $R_{K^*}$, $Q_{4,5}$ and $Q_{F_{L,T}}$ observables. The model-independent analysis of this new set of observables can be found in Ref. \cite{kstar-Q4}. 
The motivation of this work is to check how the angular analysis of the $\bar B \to \bar K^* l^+ l^-$ processes in the context of a vector LQ could help establish the possible existence of NP from the anomalies discussed above. LQs are hypothetical color triplet bosonic particles, which arise naturally from the unification of quarks and leptons, and carry both lepton and baryon numbers. They can be either scalar (spin 0) or vector (spin 1) in nature. The presence of vector LQs at the TeV scale can be found in many extended SM theories such as grand unified theories based on $SU(5)$, $SO(10)$, etc. \cite{GUT, Pati}, the Pati-Salam model \cite{Pati, Pati-salam}, composite models \cite{Composite} and the technicolor model \cite{Technicolor}. Baryon number conserving LQs avoid proton decay and could be light enough to be seen in current experiments. Thus, in this work, we consider the singlet $(3, 1, 2/3)$ vector LQ, which is invariant under the SM $SU(3)_C \times SU(2)_L \times U(1)_Y$ gauge group and conserves both baryon and lepton numbers. In addition to the (axial-)vector operators, this LQ also provides additional (pseudo)scalar operators to the SM. We compute the branching ratios, forward-backward asymmetries, polarization asymmetries and the form factor independent (FFI) observables $(P_i^{(\prime)})$ of the $\bar B \to \bar K^* l^+ l^-$ processes in this model. In this paper, we mainly focus on the $R_{K^*}$ anomaly and the additional observables related to lepton flavour violation in order to confirm or rule out the presence of lepton nonuniversality in the rare $B$ meson decays. We also investigate the $Q_i$, $Q_{F_{L,T}}$ and $B_i$ observables in the context of the vector LQ, so as to reveal the possible interplay of NP. In the literature, the observed anomalies at the LHCb experiment in various rare decays of $B$ mesons have been studied in LQ models \cite{mohanta0, mohanta1, mohanta2, mohanta3, davidson, kosnik-LQ, KL-LQ}. 
The paper is organized as follows. In section II, we discuss the effective Hamiltonian responsible for the $b \to s l^+ l^-$ processes and the new physics contributions arising due to the exchange of a vector LQ. In section III, we show the constraints on the LQ couplings from the branching ratios of the rare $B_s \to l^+ l^-$, $K_L \to l^+ l^-$ and $B_s \to \mu^\mp e^\pm$ processes. The branching ratios, forward-backward asymmetries, lepton polarizations and the CP violating parameters in the $\bar B \to \bar K^* l^+ l^-$ processes are calculated in section IV. Section V deals with the lepton flavour violating decay $K_L \to \mu^{\mp} e^{\pm}$ and section VI contains the summary. \section{Generalized effective Hamiltonian} In the SM, the most general effective Hamiltonian responsible for the quark level transitions $b \to s l^+ l^-$ is given by \cite{b-s-Hamiltonian} \begin{eqnarray} {\cal H}_{\rm eff} &=& - \frac{ 4 G_F}{\sqrt 2} V_{tb} V_{ts}^* \Bigg[\sum_{i=1}^6 C_i(\mu) \mathcal{O}_i +C_7 \frac{e}{16 \pi^2} \Big(\bar s \sigma_{\mu \nu} (m_s P_L + m_b P_R ) b\Big) F^{\mu \nu} \nonumber\\ &&+ \frac{\alpha}{4 \pi}\left (C_9^{\rm eff} (\bar s \gamma^\mu P_L b) (\bar l \gamma_\mu l )+ C_{10} (\bar s \gamma^\mu P_L b) (\bar l \gamma_\mu \gamma_5 l)\right )\Bigg]\;,\label{ham} \end{eqnarray} where $G_F$ is the Fermi constant, $V_{qq^\prime}$ are the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements, $\alpha$ is the fine structure constant and $P_{L, R} =(1\mp \gamma_5)/2$ are the chiral projection operators. Here the $\mathcal{O}_i$'s are the dimension-six operators and the $C_i$'s are the corresponding Wilson coefficients, evaluated at the renormalization scale $\mu =m_b$ \cite{b-s-Wilson}. \subsection{Contributions from the vector leptoquark } The SM effective Hamiltonian (\ref{ham}) can be modified by adding a single vector LQ, which gives measurable deviations from the corresponding SM predictions in the $B$ sector. 
Here we consider the $V^{1}(3, 1, 2/3)$ singlet vector LQ, which is invariant under the SM gauge group $SU(3)_C \times SU(2)_L \times U(1)_Y$. In order to avoid rapid proton decay, we assume that the LQ conserves both baryon and lepton numbers. Baryon number conserving vector LQs can have sizeable Yukawa couplings and could be light enough to be accessible at current colliders. The $V^{1}(3, 1, 2/3)$ LQ could potentially contribute to the $b \to s l^+ l^- $ processes, and one can constrain the corresponding LQ couplings from the experimental data on the $B_s \to l^+ l^-$ processes. The interaction Lagrangian for the $V^{1}(3,1,2/3)$ leptoquark is given by \cite{kosnik-LQ, mohanta3} \begin{equation} \mathcal{L}^{(1)} = \left(g_L\, \overline{Q}_L \gamma^\mu L_L + g_R\, \overline{d_R} \gamma^\mu l_R \right)\, V^{1}_\mu + {\rm h.c.}, \end{equation} where $Q_L~(L_L)$ denotes the left-handed quark (lepton) doublet and $d_R~(l_R)$ the right-handed down-type quark (charged-lepton) singlet. Here $g_L$ is the coupling of the vector LQ with the quark and lepton doublets and $g_R$ is the LQ coupling with the down-type quarks and the right-handed leptons. To keep the notation clean, the leptoquark couplings $g_L$ and $g_R$ are considered in the mass basis of the down-type quarks, i.e., the couplings $g_L$ and $g_R$ are rotated and expressed in the quark mass basis by the redefinition $U_L^\dagger g_L \rightarrow g_L$ and $U_R^\dagger g_R \rightarrow g_R$, where $U_{L,R}$ connect the mass and gauge bases, i.e., $d_{L,R}^{\rm gauge}=U_{L,R} d_{L,R}^{\rm mass}$. The interaction Lagrangian (2) provides, in addition to the new vector ($C_{9}^{(\prime)\rm LQ }$) and axial-vector ($C_{10}^{(\prime)\rm LQ}$) Wilson coefficients, new scalar $C_{S}^{(\prime)\rm LQ}$ and pseudoscalar $C_{P}^{(\prime)\rm LQ}$ coefficients, and is thus non-chiral in nature. 
The new Wilson coefficients are related to the LQ couplings through the following relations \cite{kosnik-LQ, mohanta3} \begin{subequations} \begin{eqnarray} && C_9^{\rm LQ} = -C_{10}^{\rm LQ} = \frac{\pi}{\sqrt{2} G_F V_{tb}V_{ts}^* \alpha} \frac{(g_L)_{s l} (g_L)^*_{bl}}{M_{\rm LQ}^2}\,,\label{u1c10np}\\ && C_9^{\prime \rm LQ} = C_{10}^{\prime \rm LQ} = \frac{\pi}{\sqrt{2} G_F V_{tb}V_{ts}^* \alpha} \frac{(g_R)_{sl} (g_R)^*_{bl}}{M_{\rm LQ}^2}\,,\label{u1c10pnp} \\ && -C_P^{\rm LQ} = C_{S}^{\rm LQ} = \frac{\sqrt{2}\pi}{G_F V_{tb}V_{ts}^* \alpha} \frac{(g_L)_{sl} (g_R)^*_{bl}}{M_{\rm LQ}^2}\,,\label{u1csnp} \\ && C_P^{\prime \rm LQ} = C_{S}^{\prime \rm LQ}= \frac{\sqrt{2}\pi}{G_F V_{tb}V_{ts}^* \alpha} \frac{(g_R)_{sl} (g_L)^*_{bl}}{M_{\rm LQ}^2}\, \label{u1cspnp}. \end{eqnarray} \end{subequations} \section{Constraint on the vector leptoquark couplings} Having identified the possible new Wilson coefficients, we now proceed to constrain the new physics parameters by comparing the theoretical and experimental results on various rare $B(K)$ meson decays. \subsection{$B_s \to l^+ l^-$ processes} In this subsection, we show the constraints on the new LQ couplings from the $B_s \to l^+ l^-$ processes, as the new coefficients also contribute to these decays. These decay processes are very rare in the SM, as they occur at loop level and further suffer from helicity suppression. The only non-perturbative quantity involved is the decay constant of the $B_s$ meson, which can be reliably calculated using non-perturbative methods; thus, these processes are theoretically very clean. In the SM, only the $C_{10}^{\rm SM}$ Wilson coefficient contributes to the branching ratio. 
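The coupling-to-Wilson-coefficient relations of the previous section translate directly into numbers. The sketch below evaluates the first relation ($C_9^{\rm LQ} = -C_{10}^{\rm LQ}$) for real couplings and estimates the coupling product needed for an $O(1)$ shift in $C_9$; the benchmark mass $M_{\rm LQ}=1$ TeV, the choice of $\alpha$, and the use of CKM magnitudes only (phases and signs dropped) are illustrative assumptions:

```python
import math

G_F = 1.1664e-5          # Fermi constant, GeV^-2
alpha = 7.297e-3         # fine-structure constant (illustrative scale choice)
Vtb_Vts = 0.0404         # |V_tb V_ts*|, approximate magnitude (phase dropped)
M_LQ = 1000.0            # leptoquark mass in GeV (assumed benchmark)

PREF = math.pi / (math.sqrt(2) * G_F * Vtb_Vts * alpha)   # common prefactor, GeV^2

def c9_c10_LQ(gL_sl, gL_bl, M=M_LQ):
    """C9^LQ = -C10^LQ from the first relation, for real couplings."""
    c9 = PREF * gL_sl * gL_bl / M**2
    return c9, -c9

# coupling product giving |C9^LQ| ~ 1 at M_LQ = 1 TeV
g_prod = M_LQ**2 / PREF
print(f"(g_L)_sl (g_L)_bl ~ {g_prod:.1e} for |C9^LQ| = 1")
```

With these inputs, a coupling product of order $10^{-3}$ at a TeV-scale mass suffices for an $O(1)$ Wilson coefficient, which is why a relatively light, weakly coupled vector LQ can address the $b \to s l^+ l^-$ anomalies.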
The branching ratios of the $B_s \to l^+ l^-$ processes in the vector LQ model are given by \cite{Buras} \begin{eqnarray} {\rm BR}(B_s \to l^+ l^-) = \frac{G_F^2}{16 \pi^3} \tau_{B_s} \alpha^2 f_{B_s}^2 M_{B_s} m_{l}^2 |V_{tb} V_{ts}^*|^2 \left |C_{10}^{\rm SM}\right |^2 \sqrt{1- \frac{4 m_l^2}{M_{B_s}^2}} \times \left(|P|^2 + |S|^2 \right), \end{eqnarray} where the $P$ and $S$ parameters are defined as \begin{eqnarray} &&P \equiv \frac{C_{10}^{\rm SM}+ C_{10}^{\rm LQ}-C_{10}^{\prime \rm LQ}}{C_{10}^{\rm SM}}+\frac{M_{B_s}^2}{2m_{l}} \Big(\frac{m_b}{m_b+m_s} \Big) \Big(\frac{C_{P}^{\rm LQ}-C_{P}^{\prime \rm LQ}}{C_{10}^{\rm SM}}\Big),\nonumber \\ &&S \equiv \sqrt{1- \frac{4 m_l^2}{M_{B_s}^2}} \frac{M_{B_s}^2}{2m_{l}} \Big(\frac{m_b}{m_b+m_s} \Big) \Big(\frac{C_{S}^{\rm LQ}-C_{S}^{\prime \rm LQ}}{C_{10}^{\rm SM}}\Big). \label{P-S} \end{eqnarray} To compare the theoretical branching ratios with the experimental results, one can define the parameter $R_q$, the ratio of the branching fraction to its SM value, \begin{eqnarray} R_{q}=\frac{{\rm BR}(B_s \to l^+ l^-)}{{\rm BR}^{\rm SM}(B_s \to l^+ l^-)} = |P|^2+|S|^2. \label{R-q} \end{eqnarray} Using Eqn. (\ref{R-q}), we constrain the new couplings by comparing the SM predicted branching ratios \cite{Bobeth} of the $B_s \to l^+ l^-$ processes with the corresponding experimental results \cite{e-br, mu-br, tau-br}. The constraints on the vector LQ couplings from the $B_s \to l^+ l^-$ processes have already been extracted in \cite{mohanta1, mohanta3}; therefore, we simply quote the results here. In Table II, we present the obtained bounds on the $(g_L)_{sl}(g_L)^*_{b l}$ leptoquark couplings. The constraints on the combinations of $C_S^{(\prime) \rm LQ}$ Wilson coefficients, i.e., $C_S^{\rm LQ} \pm C_S^{\prime \rm LQ}$, are presented in Table III, from which one can obtain the bounds on the individual $C_S^{(\prime) \rm LQ}$ Wilson coefficients.
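The expressions for $P$, $S$ and $R_q$ above are straightforward to evaluate. The sketch below does so for the muon mode; the numerical inputs (meson and quark masses, $C_{10}^{\rm SM}$) are approximate values inserted as assumptions, and by construction the SM limit, with all LQ coefficients set to zero, returns $R_q = 1$.

```python
import math

# Sketch of the P, S and R_q expressions above for Bs -> mu+ mu-.
# The numerical inputs are approximate assumptions, not fitted values.
M_Bs, m_mu = 5.367, 0.1057   # GeV
m_b, m_s = 4.18, 0.093       # GeV (approximate quark masses)
C10_SM = -4.3                # SM Wilson coefficient (approximate)

def R_q(C10, C10p, CS, CSp, CP, CPp):
    """Ratio of BR(Bs -> mu mu) to its SM value in terms of the LQ coefficients."""
    chi = (M_Bs**2 / (2.0 * m_mu)) * (m_b / (m_b + m_s)) / C10_SM
    P = (C10_SM + C10 - C10p) / C10_SM + chi * (CP - CPp)
    S = math.sqrt(1.0 - 4.0 * m_mu**2 / M_Bs**2) * chi * (CS - CSp)
    return abs(P)**2 + abs(S)**2

# SM limit: all LQ coefficients vanish
print(R_q(0, 0, 0, 0, 0, 0))   # -> 1.0
```

The large prefactor $M_{B_s}^2/(2 m_\mu)$ multiplying the (pseudo)scalar coefficients is what makes $B_s \to \mu^+ \mu^-$ so sensitive to even small values of $C_{S,P}^{(\prime)\rm LQ}$.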
\subsection{$K_L \to l^+ l^-$ processes} The effective Hamiltonian responsible for the $s \to d l^+ l^-$ quark level transitions in the SM is given by \cite{KL-2} \begin{eqnarray} \mathcal{H}_{\rm eff} &=& \frac{G_F}{\sqrt{2}} \frac{\alpha}{2 \pi \sin^2\theta_W} \left(\lambda_c Y_{NL} + \lambda_t Y(x_t) \right) \left( \bar{s} \gamma^\mu (1-\gamma_5)d \right) \left(\bar{l} \gamma_\mu (1-\gamma_5)l \right)\\ &=& \frac{G_F}{\sqrt{2}} \frac{\alpha}{2 \pi} \lambda_u C^{\rm SM}_K \left( \bar{s} \gamma^\mu (1-\gamma_5)d \right) \left(\bar{l} \gamma_\mu (1-\gamma_5)l \right), \label{sd-ham} \end{eqnarray} where $\lambda_i = V_{id} V_{is}^*$, $x_t = m_t^2/M_W^2$, $\sin^2 \theta_W = 0.23$ and $C^{\rm SM}_K$ is the SM Wilson coefficient given as \begin{eqnarray} C^{\rm SM}_K = \frac{\left(\lambda_c Y_{NL} + \lambda_t Y(x_t) \right)}{\sin^2\theta_W \lambda_u}\;. \end{eqnarray} Here the functions $Y_{NL}$ and $Y(x_t)$ \cite{KL-3} are the contributions from the charm and top quarks, respectively. The estimated branching ratio of the short distance (SD) part of the $K_L \to \mu^+ \mu^-$ process is ${\rm BR}(K_L \to \mu^+ \mu^-)|_{\rm SD} < 2.5 \times 10^{-9}$ \cite{KL-1}. Including the $(3,1,2/3)$ vector LQ contributions, the total branching ratios of the $K_L \to l^+ l^-$ processes are given by \cite{mohanta3} \begin{eqnarray} \label{br-KL} {\rm BR}(K_L \to l^+ l^-) = \frac{G_F^2}{8 \pi^3} \tau_{K_L} \alpha ^2 f_{K}^2 M_{K} m_{l}^2 |\lambda_u|^2 \left |C_{\rm SM}^{K}\right |^2 \sqrt{1- \frac{4 m_l^2}{M_{K}^2}} \times \left(|P_K|^2 + |S_K|^2 \right), \end{eqnarray} where the $P_K $ and $S_K $ parameters have expressions analogous to Eqn.
(\ref{P-S}) with the replacement of $M_{B_s} \to M_K$ and the corresponding new Wilson coefficients by $C_{iK}^{\rm LQ}$, which are given as \cite{mohanta3} \begin{subequations} \begin{eqnarray} &&C_{10K}^{\rm LQ} = -\frac{\pi}{G_F \alpha \lambda_u} \frac{{\rm Re}[(g_L)_{dl}(g_L)_{sl}^*]}{M_{\rm LQ}^2}\;, \\ &&C_{10K}^{\prime \rm LQ} = -\frac{\pi}{G_F \alpha \lambda_u} \frac{{\rm Re}[(g_R)_{dl}(g_R)_{sl}^*]}{M_{\rm LQ}^2}\;,\\ &&C_{SK}^{\rm LQ} =-C_{PK}^{\rm LQ} = \frac{\pi}{2 G_F \alpha \lambda_u} \frac{{\rm Re}[(g_L)_{dl}(g_R)_{sl}^*]}{M_{\rm LQ}^2}\;, \\ &&C_{SK}^{\prime \rm LQ} =C_{PK}^{\prime \rm LQ} = \frac{\pi}{2 G_F \alpha \lambda_u} \frac{{\rm Re}[(g_R)_{dl}(g_L)_{sl}^*]}{M_{\rm LQ}^2}\;. \end{eqnarray} \end{subequations} Now using the experimental upper limits \cite{pdg} on the branching ratios of the $K_L \to l^+ l^-$ decay processes, the constraints on the new physics parameters are extracted in Ref. \cite{mohanta3}. In Table II, we present the constraints on the $(g_L)_{dl}(g_L)_{sl}^*$ couplings, and the bounds on the $C_S^{\rm LQ} \pm C_S^{\prime \rm LQ}$ Wilson coefficients are given in Table III.
\begin{table}[htb] \begin{center} \caption{Constraints on the LQ couplings obtained from various leptonic $B_{s} \to l^+ l^-$ and $K_L \to l^+ l^-$ decay processes.} \vspace*{0.1 true in} \begin{tabular}{|c|c|c|} \hline Decay Process ~& ~Couplings involved ~&~ Upper bound of \\ & &~the couplings \\ \hline $B_s \to e^\pm e^\mp $ &~ $|(g_L)_{s e} (g_L)^*_{b e}|$ ~& ~$ <~ 11.8 $~\\ \hline $B_s \to \mu^\pm \mu^\mp $ &~ $|(g_L)_{s \mu} (g_L)^*_{b \mu}|$ ~& ~$ \leq 2.3 \times 10^{-3} $~\\ \hline $K_L \to e^\pm e^\mp $ &~ $|(g_L)_{d e} (g_L)^*_{s e}|$ ~& ~$ (1.3-2.35) \times 10^{-3} $~\\ \hline $K_L \to \mu^\pm \mu^\mp $ &~ $|(g_L)_{d \mu} (g_L)^*_{s \mu}|$ ~& ~$(1.4-1.5) \times 10^{-4} $~\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htb] \begin{center} \caption{Constraints on combinations of $C_{S(K)}^{(\prime)\rm LQ}$ Wilson coefficients from rare leptonic $B_s \to l^+ l^-$ and $K_L \to l^+ l^-$ decay processes.} \vspace*{0.1 true in} \begin{tabular}{|c|c|c|} \hline Decay Process ~& ~Bound on $C_{S(K)}^{\rm LQ}+ C_{S(K)}^{\prime \rm LQ}$ ~&~Bound on $C_{S(K)}^{\rm LQ}- C_{S(K)}^{\prime \rm LQ}$ \\ \hline $B_s \to e^\pm e^\mp $ &~ $-1.4 \to 1.4$ ~& ~$-1.4 \to 1.4$ ~\\ \hline $B_s \to \mu^\pm \mu^\mp $~~ &~~ $0.0 \to 0.32$ ~~& ~~$0.1 \to 0.18$~\\ \hline $K_L \to e^\pm e^\mp $ &~ $(-2.0 \to 2.0) \times 10^{-4}$ ~& ~$(1.25 \to 2) \times 10^{-4}$ ~\\ \hline $K_L \to \mu^\pm \mu^\mp $~~ &~~ $(-6.0 \to 3.0) \times 10^{-3}$ ~~& ~~$(0.05 \to 5.6) \times 10^{-3}$~\\ \hline \end{tabular} \end{center} \end{table} \subsection{$B_s \to \mu^\mp e^\pm$ process} The constraints on the LQ couplings obtained from the branching ratio of the lepton flavor violating (LFV) $B_s \to \mu^\mp e^\pm$ process are discussed in this subsection. In the SM, the LFV decay modes occur at loop level, with tiny neutrinos present in the loops, or proceed via box diagrams. However, these processes can occur at tree level in the vector LQ model.
The present experimental upper bound on the branching ratio of the $B_s \to \mu^\mp e^\pm$ process is \cite{pdg} \begin{eqnarray} \label{bsmue-EXP} {\rm BR}(B_s \to \mu^\mp e^\pm) < 1.1 \times 10^{-8}. \end{eqnarray} In the presence of the $V^1(3,1,2/3)$ vector LQ, the branching ratio of the $B_s \to \mu^- e^+$ decay mode is \begin{eqnarray} {\rm BR}(B_s\to \mu^- e^+)&=& \tau_{B_s} \frac{G_F^2\alpha^2 M_{B_s}^5f_{B_s}^2 |V_{tb}V_{ts}^*|^2}{64\pi^3} \left(1-\frac{m_\mu^2}{M_{B_s}^2}\right)^2 \Bigg[\left| \frac{ m_\mu }{M_{B_s}^2} \left(G_9^{\rm LQ}-G_9^{\prime \rm LQ}\right) + \frac{G_S^{\rm LQ}-G_S^{\prime \rm LQ}}{m_b} \right|^2 \nonumber \\ && +\left|\frac{m_\mu}{M_{B_s}^2}\left(G_{10}^{\rm LQ}-G_{10}^{\prime \rm LQ}\right) + \frac{G_P^{\rm LQ}-G_P^{ \prime \rm LQ}}{m_b}\right|^2\Bigg] \, , \label{bsmue1} \end{eqnarray} and the branching ratio of the $B_s \to \mu^+ e^-$ decay process is given by \begin{eqnarray} {\rm BR}(B_s\to e^- \mu^+)&=& \tau_{B_s} \frac{G_F^2\alpha^2 M_{B_s}^5f_{B_s}^2 |V_{tb}V_{ts}^*|^2}{64\pi^3} \left(1-\frac{m_\mu^2}{M_{B_s}^2}\right)^2 \Bigg[\left| -\frac{ m_\mu }{M_{B_s}^2} \left(H_9^{\rm LQ}-H_9^{\prime \rm LQ}\right) + \frac{H_S^{\rm LQ}-H_S^{\prime \rm LQ}}{m_b} \right|^2 \nonumber \\ && +\left|\frac{m_\mu}{M_{B_s}^2}\left(H_{10}^{\rm LQ}-H_{10}^{\prime \rm LQ}\right) + \frac{H_P^{\rm LQ}-H_P^{ \prime \rm LQ}}{m_b}\right|^2\Bigg] \,, \label{bsmue2} \end{eqnarray} where the mass of the electron is neglected. Here the new $G(H)_a^{(\prime)\rm LQ}~(a=9,10,S, P)$ coefficients have expressions similar to Eqns. (\ref{u1c10np},\ref{u1c10pnp}, \ref{u1csnp},\ref{u1cspnp}), with the replacement of the LQ couplings $(g_i)_{sl}(g_j)_{bl}^* \to (g_i)_{se}(g_j)_{b\mu}^*$, where $(i,j=L,R)$, for the $G_a^{(\prime)\rm LQ}$ coefficients and $(g_i)_{sl}(g_j)_{bl}^* \to (g_i)_{s\mu}(g_j)_{be}^*$ for the $H_a^{(\prime)\rm LQ}$ coefficients.
The total branching ratio of the $B_s \to \mu^\mp e^\pm$ process is \begin{eqnarray} {\rm BR}(B_s \to \mu^\mp e^\pm) = {\rm BR}(B_s \to \mu^- e^+)+{\rm BR}(B_s \to \mu^+ e^-). \end{eqnarray} For a chiral LQ, only the $G(H)_{9,10}^{(\prime)\rm LQ}$ coefficients are present. Now using the experimental upper limit on the branching ratio (\ref{bsmue-EXP}), we obtain the constraint on the LQ couplings as \begin{eqnarray} |(g_L)_{se}(g_L)_{b\mu}^*| < 2.83 \times 10^{-2}. \end{eqnarray} Neglecting the $V\pm A$ couplings, the constraint on the $(G_S^{\rm LQ} \pm G_S^{\prime \rm LQ})$ coefficients is shown in Fig. 1. From the figure, we find the allowed ranges for the above combinations of Wilson coefficients to be \begin{eqnarray} |G_S^{\rm LQ}-G_S^{\prime \rm LQ}| \leq 0.3,~~~ |G_S^{\rm LQ}+G_S^{\prime \rm LQ}| \leq 0.3\,. \end{eqnarray} \begin{figure}[h] \centering \includegraphics[scale=0.55]{bsmues.pdf} \caption{The constraint on the $G_S^{\rm LQ} \pm G_S^{\prime \rm LQ}$ couplings obtained from the branching ratio of the $B_s \to \mu^- e^+$ process.} \end{figure} \section{$\bar B \to \bar K^* l^+ l^-$ processes} In this section, we present the theoretical framework to calculate the branching ratios for the rare semileptonic $\bar B \to \bar K^* l^+ l^-$ processes. Furthermore, the dileptons present in these processes allow one to construct several useful observables which can be used to probe and discriminate among different scenarios of NP. The full four body angular distribution of the $\bar{B} \rightarrow \bar{K}^{* 0}\left(\rightarrow K^-\pi^+\right) l^+ l^-$ decay processes can be described by four independent kinematic variables, $q^2$ and the three angles $\theta_{K^*}, \theta_l$ and $ \phi$. Here we assume that the $\bar{K}^{* 0}\rightarrow K^-\pi^+$ decay is on the mass shell.
The differential decay distribution of these processes with respect to the four independent variables is given by \cite{kstar-br-1, kstar-br-2, kstar-br-3} \begin{equation} \frac{d^4\Gamma}{dq^2~ d\cos\theta_l ~d\cos\theta_{K^*}~ d\phi} = \frac{9}{32\pi} J\left(q^2, \theta_l, \theta_{K^*}, \phi\right), \end{equation} where the lepton spins have been summed over. Here $q^2$ is the lepton-pair invariant mass squared, $\theta_l$ is the angle between the negatively charged lepton and the $\bar{B}$ in the $l^+ l^-$ rest frame, $\theta_{K^*}$ is the angle between the $K^-$ and the $\bar{B}$ in the $K^-\pi^+$ center-of-mass frame, and $\phi$ is the angle between the normals of the $K^-\pi^+$ and dilepton planes. The physically allowed regions of these variables in the phase space are given by \begin{equation} 4m^2_l \leqslant q^2 \leqslant \left(m_B - m_{K^*}\right)^2,\hspace{0.6cm} -1\leqslant \cos\theta_l \leqslant 1,\hspace{0.6 cm} -1 \leqslant \cos\theta_{K^*} \leqslant 1,\hspace{0.7cm} 0\leqslant \phi \leqslant 2\pi, \end{equation} where $m_B ~(m_{K^*}$) and $m_l$ are respectively the masses of the $B~(K^*)$ meson and the charged lepton.
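As a quick numerical check of these limits (a sketch with approximate PDG masses inserted as assumptions), the kinematic endpoints of $q^2$ for the muon mode can be evaluated directly; the lower endpoint reproduces the $0.045~{\rm GeV}^2$ bin edge that appears in the $R_{K^*}$ measurements:

```python
# Sketch of the phase-space limits quoted above, for B -> K* mu+ mu-.
# Masses in GeV are approximate PDG values, inserted here as assumptions.
m_B, m_Kstar, m_mu = 5.27963, 0.89555, 0.10566

def q2_limits(m_B, m_V, m_l):
    """Kinematic range of the dilepton invariant mass squared (GeV^2)."""
    return 4.0 * m_l**2, (m_B - m_V)**2

q2_min, q2_max = q2_limits(m_B, m_Kstar, m_mu)
print(q2_min, q2_max)   # roughly 0.045 and 19.2 GeV^2
```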
The explicit dependence of the decay distribution on the three angles, i.e., the function $J\left(q^2, \theta_l, \theta_{K^*}, \phi\right)$, can be written as \begin{eqnarray} J\left(q^2, \theta_l, \theta_{K^*}, \phi\right) &= & J^s_1 \sin^2\theta_{K^*} + J^c_1 \cos^2\theta_{K^*} + \left(J^s_2 \sin^2\theta_{K^*} + J^c_2 \cos^2\theta_{K^*}\right) \cos2\theta_l \nonumber\\ & +& J_3 \sin^2\theta_{K^*} \sin^2\theta_l \cos2\phi + J_4 \sin2\theta_{K^*} \sin2\theta_l \cos\phi + J_5 \sin2\theta_{K^*} \sin\theta_l \cos\phi\nonumber\\ & +&(J_6^s \sin^2\theta_{K^*} +J_6^c \cos^2\theta_{K^*})\cos\theta_l + J_7 \sin2\theta_{K^*} \sin\theta_l \sin\phi \nonumber\\ & +& J_8 \sin2\theta_{K^*} \sin2\theta_l \sin\phi + J_9 \sin^2\theta_{K^*} \sin^2\theta_l \sin2\phi\;, \end{eqnarray} where the coefficients $J_i^{(a)} = J_i^{(a)}\left(q^2\right)$ for $i = 1,\ldots,9$ and $a = s,c$ are functions of the dilepton invariant mass. The complete expressions for these coefficients in terms of the transversity amplitudes $A_0$, $A_\parallel$, $A_\perp$, and $A_t$ can be found in Refs. \cite{kstar-br-1, kstar-br-2, kstar-fl-2}. After performing the integration over all the angles, the decay rate of the $\bar{B} \rightarrow \bar{K}^* l^+ l^-$ processes with respect to $q^2$ is given by \cite{kstar-br-1} \begin{equation} \frac{d\Gamma}{dq^2} = \frac{3}{4} \left(J_1 - \frac{J_2}{3}\right), \end{equation} where $J_i = 2J_i^s + J_i^c$. LHCb previously measured the LNU parameter in the low-$q^2$ region, i.e., $(1 \leq q^2 \leq 6) ~ {\rm GeV}^2$, of the $B \to K l^+ l^-$ process as \cite{RK-exp} \begin{eqnarray} R_K^{\rm LHCb} = \frac{{\rm BR}(B^+ \to K^+ \mu^+ \mu^-)}{{\rm BR}(B^+ \to K^+ e^+ e^-)}= 0.745^{+0.090}_{-0.074} \pm 0.036, \end{eqnarray} which shows a $2.6 \sigma$ deviation from the corresponding SM result $R_K^{\rm SM} = 1.0003 \pm 0.0001$ \cite{RK-SM}.
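The quoted $2.6\sigma$ tension is easy to verify by combining the statistical and systematic uncertainties in quadrature. The back-of-the-envelope sketch below uses the larger (upward) statistical error, since the measurement lies below the SM value; this symmetrized treatment of the asymmetric error is an assumption of the sketch.

```python
import math

# Quick arithmetic check of the quoted 2.6 sigma tension in R_K (low-q^2 bin).
R_K_exp, stat_up, syst = 0.745, 0.090, 0.036   # LHCb value with its upward stat error
R_K_SM = 1.0003                                # SM prediction

sigma_tot = math.hypot(stat_up, syst)          # stat and syst combined in quadrature
tension = (R_K_SM - R_K_exp) / sigma_tot
print(round(tension, 1))   # -> 2.6
```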
Recently, the LHCb collaboration has measured the analogous lepton flavour universality violating parameter, $R_{K^*}$, in the $\bar B \to \bar K^* l^+ l^-$ processes in two different bins, which also shows deviations of around $2 \sigma$ from the corresponding SM values, as presented in Table I. Besides the branching ratios and the $R_{K^*}$ parameter, there are many observables associated with the $\bar B \to \bar K^* l^+ l^-$ processes which could be sensitive to new physics. The interesting observables are \begin{enumerate} \item The forward-backward asymmetry and its zero crossing; the asymmetry is defined as \cite{kstar-br-1} \begin{eqnarray} A_{FB}\left(q^2\right) & = & \left[ \int_{-1}^0 d\cos\theta_l \frac{d^2\Gamma}{dq^2 d\cos\theta_l} - \int_{0}^1 d\cos\theta_l \frac{d^2\Gamma}{dq^2 d\cos\theta_l}\right] \Big{/} \frac{d\Gamma}{d q^2}\nonumber \\& =& -\frac{3}{8} \frac{J_6}{d\Gamma/dq^2}\;. \end{eqnarray} \item The longitudinal and transverse polarization fractions of the $K^*$ meson, which in terms of the angular coefficients $(J_i)$ can be written as \cite{kstar-fl-1, kstar-fl-2} \begin{equation} F_L\left(q^2\right) = \frac{3J_1^c-J_2^c}{4d\Gamma/dq^2}\;, \hspace{1.5 cm} F_T\left(q^2\right) = 1-F_L(q^2)\,. \end{equation} \item The form factor independent (FFI) optimized observables $P_i$, where $i=1,\ldots,6,8$, given as \cite{kstar-p1} \begin{eqnarray} P_1\left(q^2\right) &=& \frac{J_3}{2J_2^s}\;, \hspace{1.5cm} P_2\left(q^2\right) = \beta_l\frac{J^s_6}{8J_2^s}\;,\hspace{1.5cm} P_3\left(q^2\right) = - \frac{J_9}{4J_2^s}\;,\nonumber\\ P_4\left(q^2\right) & = & \frac{\sqrt{2}J_4}{\sqrt{-J_2^c\left(2J_2^s-J_3\right)}}\;,\hspace{1.2cm} P_5\left(q^2\right) = \frac{\beta_l J_5}{\sqrt{-2J_2^c\left(2J_2^s+J_3\right)}}\;,\nonumber\\ P_6\left(q^2\right) & = & - \frac{\beta_l J_7}{\sqrt{-2J_2^c\left(2J_2^s-J_3\right)}}\;, \hspace{1cm}P_8\left(q^2\right) = - \frac{\beta_l J_8}{\sqrt{-2J_2^c\left(2J_2^s-J_3\right)}}\;.
\end{eqnarray} \item In order to interpret the LHCb measurements more precisely, a slightly modified set of clean observables $P_{4,5,6,8}^\prime$, which are related to $P_{4,5,6,8}$, is defined as \cite{kstar-p4p} \begin{eqnarray} &&P_{4}^\prime \equiv P_{4} \sqrt{1-P_1} = \frac{J_{4}}{\sqrt{-J_2^c J_2^s}}\;,\nonumber \\ &&P_{5}^\prime \equiv P_{5} \sqrt{1+P_1} = \frac{J_{5}}{2\sqrt{-J_2^c J_2^s}}\;,\nonumber \\ &&P_{6, 8}^\prime \equiv P_{6, 8} \sqrt{1-P_1} = -\frac{J_{7, 8}}{2\sqrt{-J_2^c J_2^s}}\;. \end{eqnarray} \item To confirm the existence of lepton universality violation, one can define additional LFUV observables as \cite{kstar-Q4} \begin{eqnarray} &&Q_{F_L} =F_L^\mu - F_L^e , ~~~~~Q_{F_T} =F_T^\mu - F_T^e , \\ &&Q_i= P_i^{\mu } - P_i^{e}, ~~~~~~~~B_i = \frac{J_i^\mu}{J_i^e} -1, \end{eqnarray} where the $P_i$ should be replaced by $P_i^\prime$ for $Q_{4,5,6,8}$. \end{enumerate} \begin{figure}[h] \centering \includegraphics[scale=0.45]{bre.pdf} \quad \includegraphics[scale=0.45]{brmu.pdf} \caption{The differential branching ratios of the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes with respect to $q^2$ in the vector LQ model. Here the magenta bands represent the LQ contributions and the dotted lines are for the SM. The theoretical uncertainties arising due to the SM input parameters are shown as grey bands.} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.45]{fbe.pdf} \quad \includegraphics[scale=0.45]{fbmu.pdf} \caption{The $q^2$ variations of the forward-backward asymmetries of the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes in the vector LQ model. } \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.45]{fle.pdf} \quad \includegraphics[scale=0.45]{flmu.pdf} \caption{The $q^2$ variations of the longitudinal polarizations of the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes in the vector LQ model. } \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.45]{fte.pdf} \quad \includegraphics[scale=0.45]{ftmu.pdf} \caption{The $q^2$ variations of the transverse polarizations of the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes in the vector LQ model. } \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.5]{rk.pdf} \caption{The variation of $R_{K^*}(q^2)$ in the $q^2\in [0.045,6.0]~{\rm GeV}^2$ region in the vector LQ model. } \end{figure} \begin{table}[htb] \begin{center} \caption{The predicted integrated values of the branching ratio, forward-backward asymmetry and $K^*$ polarization fractions in the low-$q^2$ region for the $\bar B \rightarrow \bar K^* l^+ l^-$ processes in the SM and the vector LQ model.
} \begin{tabular}{|c | c | c| } \hline Observables & SM prediction & Values in LQ model \\ \hline \hline {\rm BR}($\bar B \rightarrow \bar K^* e^+ e^-$) & $(8.97 \pm 0.49~({\rm CKM}) \pm 0.23~({\rm form~factor})) \times 10^{-7}$ & $(1.155 \to 2.882) \times 10^{-6}$ \\ $\langle A_{FB}^e \rangle$ & $-0.084 \pm 0.005$ & $-(0.314 \to 0.064)$ \\ $\langle F_L^e \rangle$ & $0.703 \pm 0.042$ & $0.5 \to 0.76$ \\ $\langle F_T^e \rangle$ & $0.297 \pm 0.018$ & $0.24 \to 0.5$ \\ \hline {\rm BR}($\bar B \rightarrow \bar K^* \mu^+ \mu^-$) & $(8.9 \pm 0.48~({\rm CKM}) \pm 0.22~({\rm form~factor})) \times 10^{-7}$ & $(0.892 \to 1.45) \times 10^{-6}$ \\ $\langle A_{FB}^\mu\rangle$ &$-0.082 \pm 0.0049$ & $-(0.28 \to 0.083)$ \\ $\langle F_L^\mu \rangle$ & $0.71 \pm 0.043$ & $0.46 \to 0.71$ \\ $\langle F_T^\mu \rangle$ & $0.29 \pm 0.017$ & $0.29 \to 0.54$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htb] \begin{center} \caption{The predicted integrated values of the lepton non-universality $(R_{K^*})$ parameter in the LQ model. } \begin{tabular}{|c | c | c| } \hline Observables & SM prediction & Values in LQ model \\ \hline \hline $\langle R_{K^*} \rangle |_{q^2 \in [0.045,1.1]}$ & $0.913 $ & $0.65 \to 0.9$ \\ $\langle R_{K^*} \rangle|_{q^2 \in [1.1, 6.0]}$ & $0.9926 $ & $0.5 \to 0.73$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[h] \centering \includegraphics[scale=0.45]{p5pe.pdf} \quad \includegraphics[scale=0.45]{p5pmu.pdf}\\ \caption{The plot in the left panel represents the $P_5^\prime (q^2)$ observable for the $\bar B \rightarrow \bar K^* e^+ e^-$ process in the vector LQ model.
The corresponding plot for the $\bar B \rightarrow \bar K^* \mu^+ \mu^-$ process is shown in the right panel.} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.45]{Q1.pdf} \quad \includegraphics[scale=0.45]{Q2.pdf}\\ \includegraphics[scale=0.45]{Q4.pdf} \quad \includegraphics[scale=0.45]{Q5.pdf}\\ \includegraphics[scale=0.45]{Q6.pdf} \quad \includegraphics[scale=0.45]{Q8.pdf} \caption{The plots in the left panel represent the $Q_{1} (q^2)$ (top), $Q_{4} (q^2)$ (middle) and $Q_{6} (q^2)$ (bottom) observables in the vector LQ model. The $Q_{2} (q^2)$ (top), $Q_{5} (q^2)$ (middle) and $Q_{8} (q^2)$ (bottom) plots are shown in the right panel.} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.45]{QFL.pdf} \quad \includegraphics[scale=0.45]{QFT.pdf} \caption{The $q^2$ variations of the $Q_{F_L} (q^2)$ (left panel) and $Q_{F_T} (q^2)$ (right panel) observables in the vector LQ model. } \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.45]{B5.pdf} \quad \includegraphics[scale=0.45]{B6s.pdf} \caption{The $q^2$ variations of the $B_5$ (left panel) and $B_{6s}$ (right panel) observables in the LQ model.} \end{figure} \begin{table}[htb] \begin{center} \caption{The predicted values of the $P_i^{l^{(\prime)}}$ observables in the low-$q^2$ $(q^2 \in [1,6]~{\rm GeV}^2)$ region for the $\bar B \rightarrow \bar K^* l^+ l^-$ processes in the SM and the LQ model.
} \begin{tabular}{|c | c | c| } \hline Observables & SM prediction & Values in LQ model \\ \hline \hline $\langle P_1^e \rangle$ & $-0.045 \pm 0.0027$ & $-0.0448 \to 0.15$ \\ $\langle P_2^e \rangle$ & $0.188 \pm 0.011$& $0.19 \to 0.415$ \\ $\langle P_3^e \rangle$ &$ (-5.43 \pm 0.326) \times 10^{-4}$ & $-0.0143 \to -5.43 \times 10^{-4}$ \\ $\langle P_4^{e \prime} \rangle$ & $0.43 \pm 0.0258$ & $0.43 \to 0.791$ \\ $\langle P_5^{e \prime} \rangle$ & $-0.226 \pm 0.0136$ & $-0.226 \to 0.682$ \\ $\langle P_6^{e \prime} \rangle$ & $-0.0734 \pm 0.0044$ & $-0.0734 \to -0.042$ \\ $\langle P_8^{e \prime} \rangle$ & $0.02678 \pm 0.0016$ & $-0.014 \to 0.027$ \\ \hline $\langle P_1^\mu \rangle$ & $-0.045 \pm 0.0027$ & $-0.0449 \to 0.133$ \\ $\langle P_2^\mu \rangle$ & $0.185 \pm 0.011$& $0.185 \to 0.34$ \\ $\langle P_3^\mu \rangle$ & $(-5.41 \pm 0.324) \times 10^{-4}$& $-0.013 \to -5.41 \times 10^{-4}$\\ $\langle P_4^{\mu \prime} \rangle$ & $0.437 \pm 0.026$ & $0.44 \to 0.536$ \\ $\langle P_5^{\mu \prime} \rangle$ & $-0.2318 \pm 0.014$ & $-0.231 \to 0.166$\\ $\langle P_6^{\mu \prime} \rangle$ & $-0.0738 \pm 0.0044$ & $-0.074 \to -0.0676$\\ $\langle P_8^{\mu \prime} \rangle$ & $0.02676 \pm 0.001$ & $1.164 \times 10^{-3} \to 0.027$\\ \hline \end{tabular} \end{center} \end{table} After collecting all possible angular observables, we now move on to the numerical analysis. We have taken all the particle masses and the lifetime of the $B$ meson from \cite{pdg} for the numerical estimation. We consider the Wolfenstein parametrization with the values $A =0.811 \pm 0.026$, $\lambda=0.22506\pm 0.00050$, $\bar \rho = 0.124 ^{+0.019}_{-0.018}$, and $\bar \eta = 0.356 \pm 0.011$ \cite{pdg} for the CKM matrix elements. The QCD form factors for the $\bar B \to \bar K^* l^+ l^-$ processes in the low-$q^2$ region are taken from \cite{kstar-formfactor-1, kstar-formfactor-2}. Now using the constraints on the LQ couplings discussed in section III, we show in Fig.
2, the $q^2$ variation of the differential branching ratios of the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes in the $V^1(3, 1, 2/3)$ vector LQ model. In the figures, the blue dotted lines stand for the SM contributions and the magenta bands are due to the exchange of the vector LQ. Here the grey bands represent the theoretical uncertainties, which arise from the uncertainties associated with the SM input parameters, such as the CKM matrix elements \cite{pdg} and the hadronic form factors \cite{kstar-formfactor-1, kstar-formfactor-2}. From these figures, one can see that there is a clear difference between the new physics contributions to the branching fractions of the $\bar B \to \bar K^* e^+ e^-$ and $\bar B \to \bar K^* \mu^+ \mu^-$ processes. The predicted numerical values of the branching ratios in the high recoil limit are presented in Table IV. In the SM, the forward-backward asymmetry parameters of the $\bar B \to \bar K^* l^+ l^-$ processes have negative values in the low-$q^2$ region. However, the contribution of the new Wilson coefficients ($C_{9, 10}^{(\prime)}$ and $C_{S, P}^{(\prime)}$) arising from the exchange of the $(3, 1, 2/3)$ vector LQ may enhance the forward-backward asymmetries and shift the zero position of these asymmetries. The plots of the forward-backward asymmetry for the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes are presented in Fig. 3, and the corresponding integrated values are given in Table IV. For both $B \to K^* e^+ e^- (\mu^+ \mu^-)$ processes, we find that, due to the LQ contributions, the zero-crossing position of the forward-backward asymmetry shifts to the right (i.e., towards the high-$q^2$ region) of its SM predicted value.
The longitudinal and transverse polarization components for the $\bar B \to \bar K^* e^+ e^-$ (left panel) and $\bar B \to \bar K^* \mu^+ \mu^-$ (right panel) processes, both in the SM and in the LQ model, are shown in Figs. 4 and 5, respectively. The predicted values of the $F_L^l~(F_T^l)$ asymmetry parameters in the LQ model are given in Table IV. For these observables we also find some difference between the SM values and the LQ contributions. \begin{table}[htb] \begin{center} \caption{The predicted values of the LFUV observables ($Q_{F_{L,T}}$, $Q_i$ and $B_{5, 6s}$) for the $B \rightarrow K^* l^+ l^-$ processes in the SM and the LQ model. } \begin{tabular}{|c | c | c| } \hline Observables & SM prediction & Values in LQ model \\ \hline \hline $\langle Q_{1} \rangle$ & $0$ & $-0.017 \to -0.0001$ \\ $\langle Q_{2} \rangle$ & $-0.003$ & $-0.075 \to -0.005$ \\ $\langle Q_{3} \rangle$ & $2 \times 10^{-6}$ & $2 \times 10^{-6} \to 1.3 \times 10^{-3}$ \\ $\langle Q_{4} \rangle$ & $0.007$ & $-0.255 \to 0.01$ \\ $\langle Q_{5} \rangle$ & $-0.0058$ & $-0.516 \to -0.005$ \\ $\langle Q_{6} \rangle$ & $-4 \times 10^{-4}$ & $-0.032 \to 5.8 \times 10^{-3}$ \\ $\langle Q_{8} \rangle$ & $-2 \times 10^{-5}$ & $0 \to 0.0152$ \\ \hline $\langle Q_{F_L} \rangle$ & $0.07$ & $-0.04 \to 0.07$ \\ $\langle Q_{F_T} \rangle$ & $-0.007$ & $-0.007 \to 0.05$ \\ \hline $\langle B_5 \rangle$ & $1.25 \times 10^{-3}$ & $-0.85 \to -1.27 \times 10^{-3}$ \\ $\langle B_{6s} \rangle$ & $-0.027$ & $-0.56 \to -0.027$ \\ \hline \end{tabular} \end{center} \end{table} \vspace*{0.3 truein} In Fig. 6, we show the plot for the $R_{K^*}$ observable in the low-$q^2$ regime in both the SM and the vector LQ model. Above $q^2 \sim 1.1~{\rm GeV^2} $, a noticeable difference from the SM prediction is found due to the contribution of the vector LQ. From the figure it can be seen that the measured value of $R_{K^*}$ in the $q^2 \in [1.1, 6.0]~{\rm GeV}^2$ region can be accommodated in the LQ model.
The predicted values of $R_{K^*}$ in the LQ model for the different bins are presented in Table V. We find that our predicted results in the vector LQ model are consistent with the corresponding measured experimental data. Thus, vector LQs could be considered potential candidates to explain a possible lepton flavour universality violation, should it be confirmed. Fig. 7 shows the plots for the FFI observable $P_5^{\prime l}$ with respect to $q^2$ in the large recoil limit. In this figure, the plot of $P_5^{\prime l}$ for the electron mode is presented in the left panel, and the right panel contains the corresponding plot for the $\bar B \to \bar K^* \mu^+ \mu^-$ process. One can notice that the LQ model encompasses the SM but also allows potentially larger values of the $P_{5}^{ \prime l}$ observables. In Table VI, we have presented the corresponding numerical results. In addition to the $P_5^{\prime l}$ observable, we have also studied all the FFI observables $P_{i}^{ (\prime) l}$, where $i=1,2,3,4,6,8$, and the predicted numerical values are listed in Table VI. The measurement of $R_{K^*}$ motivated us to look for other LFUV parameters in this process. Belle has recently measured the new LFUV parameters $Q_4$ and $Q_5$ \cite{Q4-Belle} in the low-$q^2$ region, $(1\leq q^2 \leq 6)~{\rm GeV}^2$, with values \begin{eqnarray} Q_4 = 0.498 \pm 0.527 \pm 0.166, ~~~~~Q_5=0.656 \pm 0.485 \pm 0.103. \end{eqnarray} The $q^2$ variations of the $Q_{i}$ parameters in the vector LQ model are presented in Fig. 8. In this figure, the left panel contains the plots for the $Q_1(q^2)$ (top), $Q_4(q^2)$ (middle) and $Q_6(q^2)$ (bottom) observables, and the $Q_2(q^2)$ (top), $Q_5(q^2)$ (middle) and $Q_8(q^2)$ (bottom) plots are given in the right panel. We observe that the additional LQ contributions provide a large shift in some of these observables from their SM values. In Fig. 9, we show the plots for the $Q_{F_L}$ (left panel) and $Q_{F_T}$ (right panel) observables.
We also show the plots for the $B_5$ (left panel) and $B_{6s}$ (right panel) parameters in Fig. 10. The numerical values of all these LFUV parameters are given in Table VII. \section{$K_L \to \mu^\mp e^\pm $ process} The $V^1_\mu(3,1,2/3)$ vector LQ also contributes to the lepton flavour violating $K_{L} \to \mu^\mp e^\pm$ decay process. The effective Hamiltonian for the $K_{L} \to \mu^- e^+$ LFV decay in the $(3,1,2/3)$ leptoquark model is given by \begin{eqnarray} \mathcal{H}_{\rm LQ}&=& C_{LL} \left(\bar{d}\gamma^\mu \left(1-\gamma_5\right)s\right) \left(\bar{\mu}\gamma_\mu \left(1-\gamma_5\right)e\right)+ C_{RR} \left(\bar{d}\gamma^\mu \left(1+\gamma_5\right)s\right) \left(\bar{\mu}\gamma_\mu \left(1+\gamma_5\right)e\right) \nonumber \\ & + & C_{LR} \left(\bar{d} \left(1+\gamma_5\right)s\right) \left(\bar{\mu} \left(1-\gamma_5\right)e\right)+ C_{RL} \left(\bar{d} \left(1-\gamma_5\right)s\right) \left(\bar{\mu} \left(1+\gamma_5\right)e\right), ~~ \end{eqnarray} where the $C_{LL}, C_{RR}, C_{LR}~{\rm and}~ C_{RL}$ coefficients are given as \begin{eqnarray} \label{KL-Cof} &&C_{LL}=\frac{(g_L)_{de} (g_L)_{s\mu }^*}{4M_{\rm LQ}^2}, ~~~~~~~~C_{RR}=\frac{(g_R)_{de} (g_R)_{s\mu }^*}{4M_{\rm LQ}^2}, \nonumber \\ &&C_{LR}=\frac{(g_L)_{de} (g_R)_{s\mu }^*}{2M_{\rm LQ}^2}, ~~~~~~~~~C_{RL}=\frac{(g_R)_{de} (g_L)_{s\mu }^*}{2M_{\rm LQ}^2}, \end{eqnarray} and for the $K_{L} \to \mu^+ e^-$ process \begin{eqnarray} \mathcal{H}_{\rm LQ}&=& D_{LL} \left(\bar{d}\gamma^\mu \left(1-\gamma_5\right)s\right) \left(\bar{e}\gamma_\mu \left(1-\gamma_5\right) \mu \right)+ D_{RR} \left(\bar{d}\gamma^\mu \left(1+\gamma_5\right)s\right) \left(\bar{e}\gamma_\mu \left(1+\gamma_5\right) \mu \right) \nonumber \\ & + & D_{LR} \left(\bar{d} \left(1+\gamma_5\right)s\right) \left(\bar{e} \left(1-\gamma_5\right) \mu \right)+ D_{RL} \left(\bar{d} \left(1-\gamma_5\right)s\right) \left(\bar{e} \left(1+\gamma_5\right) \mu \right), ~~ \end{eqnarray} where the $D_{LL}, D_{RR}, D_{LR}~{\rm and}~ D_{RL}$
coefficients are given as \begin{eqnarray} \label{KL-Cof-D} &&D_{LL}=\frac{(g_L)_{d\mu} (g_L)_{se }^*}{4M_{\rm LQ}^2}, ~~~~~~~~D_{RR}=\frac{(g_R)_{d\mu} (g_R)_{se}^*}{4M_{\rm LQ}^2}, \nonumber \\ &&D_{LR}=\frac{(g_L)_{d\mu} (g_R)_{se }^*}{2M_{\rm LQ}^2}, ~~~~~~~~~D_{RL}=\frac{(g_R)_{d\mu} (g_L)_{se}^*}{2M_{\rm LQ}^2}. \end{eqnarray} The LFV decay processes do not receive any contribution from the SM. In the literature \cite{KL-LQ, KL-LFV}, the LFV decays of the kaon have been investigated in the leptoquark and other new physics models. The branching ratio of the $K_L \to \mu^- e^+$ process in the leptoquark model is given by \begin{eqnarray} \label{KL-BR} {\rm BR}(K_L \to \mu^- e^+) &=& \tau_{K_L}\frac{f_K^2}{8\pi M_K^3 } \sqrt{\left(M_K^2 -m_\mu^2 - m_e^2\right)^2-4m_\mu^2 m_e^2} \times \nonumber \\ && \Bigg [ \Big |( C_{LL} + C_{RR}) (m_e-m_\mu)- (C_{LR} + C_{RL} ) \frac{M_K^2}{m_s+m_d} \Big |^2 \left(M_K^2 - (m_\mu + m_e)^2 \right) \nonumber \\ && + \Big | ( C_{LL} - C_{RR}) (m_e + m_\mu)+ (C_{LR} - C_{RL} ) \frac{M_K^2}{m_s+m_d} \Big |^2 \left(M_K^2 - (m_\mu - m_e)^2 \right) \Bigg ].\nonumber \\ \end{eqnarray} Similarly, the branching ratio of the $K_L \to \mu^+ e^-$ process can be obtained from Eqn. (\ref{KL-BR}) by replacing the coefficients $C_{ij} \to D_{ij}$, where ($i,j=L,R$). The branching ratio of the $K_L \to \mu^\mp e^\pm$ process is simply the sum of the branching ratios of the $K_L \to \mu^- e^+$ and $K_L \to \mu^+ e^- $ processes. For the required LQ couplings, we use the couplings obtained from the $K_L \to e^+ e^- (\mu^+ \mu^-)$ processes, given in Tables II and III, as base values, and assume that the LQ couplings between different generations of quarks and leptons follow the simple scaling law $(g_{L(R)})_{ij} = (m_i /m_j )^{1/2} (g_{L(R)} )_{ii}$ with $j > i$. We have taken this ansatz from Ref. \cite{ansatz}, which successfully explains the decay width of the radiative LFV $\mu \to e \gamma$ decay.
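The scaling ansatz just quoted is simple to apply. The sketch below evaluates it for the lepton indices (interpreting $m_i$, $m_j$ as lepton masses, which is an assumption of the sketch), with an illustrative diagonal coupling that is not a fitted number:

```python
import math

# Sketch of the coupling ansatz (g)_ij = sqrt(m_i/m_j) * (g)_ii for j > i,
# applied here to the lepton generation index. Lepton masses in GeV; the
# diagonal coupling value is an illustrative assumption, not a fitted result.
m_e, m_mu = 0.000511, 0.1057

def off_diagonal(g_diag, m_i=m_e, m_j=m_mu):
    """LFV (e-mu) coupling generated from a diagonal coupling via the scaling law."""
    return math.sqrt(m_i / m_j) * g_diag

# A diagonal coupling of 1e-3 induces a suppressed e-mu coupling
print(off_diagonal(1e-3))
```

The $\sqrt{m_i/m_j}$ factor is what keeps the predicted $K_L \to \mu^\mp e^\pm$ branching ratio below the experimental limit despite the tree-level nature of the LQ contribution.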
Now, using this ansatz together with the particle masses and the lifetime of the $K_L$ meson from \cite{pdg}, the predicted branching ratio of the $K_{L} \to \mu^\mp e^\pm$ process is \begin{eqnarray} {\rm BR}(K_L \to \mu^\mp e^\pm)&=& (1.78-3.564) \times 10^{-12}. \end{eqnarray} The corresponding experimental upper limit on the branching ratio of the $K_L \to \mu^\mp e^\pm$ process is \cite{pdg} \begin{eqnarray} {\rm BR}(K_L \to \mu^\mp e^\pm) ~ \textless ~4.7 \times 10^{-12}. \end{eqnarray} Our predicted branching ratio is within the experimental limit. \section{Conclusion} We have investigated the intriguing anomalies associated with the rare semileptonic $\bar B \to \bar K^* l^+ l^-$ decay processes in the context of a $(3,1,2/3)$ vector leptoquark model. We constrained the leptoquark couplings by using the experimental branching ratios of the $B_s \to l^+ l^-$, $K_L \to l^+ l^-$ and $B_s \to \mu^\mp e^\pm$ processes. We then calculated the branching ratios, forward-backward asymmetries and lepton polarization asymmetries of these processes, and found an appreciable difference between the SM and LQ model predictions. We also calculated the form factor independent observables $P_i^{(\prime)}$, where $i=1,\ldots,6,8$, in this model and observed that the vector leptoquark can explain the $P_{5}^\prime$ anomaly very well. We then looked into the lepton nonuniversality parameter $R_{K^*}$ of the $\bar B \to \bar K^* l^+ l^-$ process in both the $q^2 \in [0.045, 1.1]~{\rm GeV}^2$ and $q^2 \in [1.1, 6.0]~{\rm GeV}^2$ regions and found that the $R_{K^*}$ anomaly can be explained in the vector leptoquark model. To further probe the violation of lepton universality in the $B$ sector, along with the $R_{K^*}$ observable we also studied several other LNU observables, such as $Q_{F_L}$, $Q_{F_T}$, $Q_{i}$ and $B_{5, 6s}$, in the vector leptoquark model.
We observed that, in the presence of a vector leptoquark, all the observables show some differences from their SM results, although in many cases the SM results lie within the uncertainties of the LQ model predictions. We have also computed the branching ratio of the lepton flavour violating $K_L \to \mu^\mp e^\pm$ process in the $(3,1,2/3)$ vector leptoquark model, which is found to be within the experimental limit. The measurement of these observables at the LHCb experiment may provide indirect hints of the possible existence of leptoquarks. {\bf Acknowledgments} We would like to thank the Science and Engineering Research Board (SERB), Government of India, for financial support through grant No. SB/S2/HEP-017/2013.
\section{Modified quadrupole formula} The IDG equations of motion for a perturbation $h_{\mu\nu}$ around a flat background $\eta_{\mu\nu}$ are given by \cite{Biswas:2011ar} \begin{widetext} \begin{eqnarray} -\kappa T_{\mu\nu} = \frac{1}{2} \bigg[a(\Box)\left(\Box h_{\mu\nu} -\partial_\sigma \left(\partial_\mu h^\sigma_\nu + \partial_\nu h^\sigma_\mu \right)\right) + c(\Box) \left(\partial_\mu \partial_\nu h + \eta_{\mu\nu} \partial_\sigma \partial_\tau h^{\sigma\tau}-\eta_{\mu\nu} \Box h\right)+ f(\Box) \partial_\mu \partial_\nu \partial_\sigma \partial_\tau h^{\sigma\tau}\bigg], \end{eqnarray} \end{widetext} where $\kappa=M^{-2}_P$ and \begin{eqnarray} \label{eq:defnacfforflat} a(\Box) &=& 1 + M^{-2}_P \left(F_2(\Box) + 2 F_3(\Box)\right) \Box, \nonumber\\ c(\Box) &=& 1 - M^{-2}_P\left(4 F_1(\Box) - F_2(\Box) + \frac{2}{3}F_3(\Box)\right)\Box,\nonumber\\ f(\Box) &=& M^{-2}_P \left(4F_1(\Box) + 2F_2(\Box) +\frac{4}{3} F_3(\Box)\right), \end{eqnarray} and we note that if $a(\Box)=c(\Box)$, then $f(\Box)\Box=a(\Box)-c(\Box)=0$. If we take the de Donder gauge $\partial_\mu h^{\mu\nu} = \frac{1}{2} \partial^\nu h$ and assume $a(\Box)=c(\Box)$, then \begin{eqnarray} \label{eq:linaeriseomsinddgauge} -2\kappa T_{\mu\nu} = a(\Box)\Box \bar{h}_{\mu\nu}, \end{eqnarray} where we have defined $\bar{h}_{\mu\nu} \equiv h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu} h$ \footnote{Alternatively, we can follow the method of \cite{Naf:2011za} and define the gauge $\partial^\mu \gamma_{\mu\nu}=0$, where \\$\gamma_{\mu\nu} = a(\Box) h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu} c(\Box) h -\frac{1}{2}\eta_{\mu\nu} f(\Box) \partial_\alpha \partial_\beta h^{\alpha\beta}$. This produces the result $-2\kappa T_{\mu\nu}=\Box \gamma_{\mu\nu}$.}. Note that in the limit $a\to 1$, we recover the GR result. We invert $a(\Box)$ and follow the usual GR method \cite{Carroll:2004st}, in which the source is assumed to be far away, isolated and composed of non-relativistic matter.
In this approximation, the Fourier transform of $h_{\mu\nu}$ with respect to time is \begin{eqnarray} \tilde{\bar{h}}_{\mu\nu} = 4 G \frac{e^{i k r}}{r} \int d^3 y \frac{\tilde{T}_{\mu\nu}(k,y)}{a(k^2)}. \end{eqnarray} When we insert the definition of the quadrupole moment, $I_{ij}=\int d^3 y~ T^{00}(y) y^i y^j$, write out the full expression for $\tilde{I}_{ij}$ and define the retarded time $t_r=t-r$, we obtain \begin{eqnarray} \label{eq:finaleqnwithgenerala} \bar{h}_{ij} &=& \frac{ - G }{\pi} \frac{1}{r}\frac{d^2}{dt^2}\int dk dt'_r \frac{e^{ik(t_r-t'_r)}}{a(k^2)} I_{ij} (t'_r). \end{eqnarray} \section{Simplest choice of $a(\Box)$} We choose $a(k^2)$ to avoid ghosts, by ensuring there are no extra poles in the propagator. If we choose $a(k^2)=e^{k^2/M^2}$ and use the formula for the inverse Fourier transform of a Gaussian, we find \begin{eqnarray} \bar{h}_{ij} &=& \frac{- G}{r}\frac{M}{\sqrt{\pi}}\frac{d^2}{dt^2}\int dt'_r e^{-M^2(t_r-t'_r)^2/4} I_{ij} (t'_r). \end{eqnarray} This is the modified quadrupole formula for the simplest case of IDG. We now need to specify $I_{ij}$. For example, when we look at the radiation emitted by a binary system of stars of mass $M_s$ in a circular orbit, the 11 component of $I_{ij}$ is $I_{11}(t)=M_s R^2 \left(1+\cos(2\omega t)\right)$, where $R$\ is the distance between the stars and $\omega$ is their angular velocity. Therefore \begin{eqnarray} \bar{h}_{11} = \frac{4GM_s^2R^2}{r}\left(1+e^{-\frac{4\omega^2}{M^2}} \cos (2\omega t_r)\right). \end{eqnarray} Comparing with the GR case, we see that this matches the GR prediction at large $M$, while at small $M$ the magnitude of the oscillating term is reduced relative to GR.
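As a numerical cross-check of the modified quadrupole formula of the previous section: smearing an oscillation $\cos(2\omega t)$ with the unit-normalised Gaussian kernel $\frac{M}{2\sqrt{\pi}}\,e^{-M^2\tau^2/4}$ should rescale its amplitude by exactly $e^{-4\omega^2/M^2}$, reproducing the suppression of the oscillating term in $\bar{h}_{11}$. A sketch (the values of $M$ and $\omega$ are purely illustrative):

```python
import numpy as np

# Illustrative values of the IDG mass scale and orbital angular frequency
M, omega = 3.0, 1.0

t = np.linspace(-40.0, 40.0, 8001)
dt = t[1] - t[0]

# Unit-normalised Gaussian kernel (M / 2 sqrt(pi)) exp(-M^2 tau^2 / 4)
kernel = (M / (2.0 * np.sqrt(np.pi))) * np.exp(-M**2 * t**2 / 4.0)
signal = np.cos(2.0 * omega * t)

# Discrete approximation to int dt' K(t - t') cos(2 omega t')
smoothed = np.convolve(signal, kernel, mode="same") * dt

# Read off the oscillation amplitude away from the window edges
amplitude = smoothed[np.abs(t) < 20.0].max()
print(amplitude, np.exp(-4.0 * omega**2 / M**2))
```

The two printed numbers agree to the accuracy of the discretisation, confirming that Gaussian smearing only rescales each Fourier mode by the corresponding suppression factor.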
EQG has a similar action to IDG (the $F_i(\Box)$ in \eqref{IDGactionweyl} are replaced by $a_i + b_i \log(\Box/\mu^2)$, where $\mu$ is a mass scale) \cite{Donoghue:2012zc,Calmet:2017qqa,Calmet:2018qwg}. In this section we generalise the result of \cite{Kuntz:2017pjd} (see also \cite{Stein:2010pn,Preston:2016sip,Saito:2012xa}) and also extend it to a de Sitter background. Using the Gauss-Bonnet identity and a similar expression for the higher-order terms \cite{Li:2015bqa}, we can focus on \eqref{IDGactionweyl} without the Weyl term. Far away from the source, we use the gauge $\nabla^\mu h^\nu_\mu= 0$ and $h=0$ to simplify the linearised and quadratic (in $h_{\mu\nu}$) curvatures around a de Sitter background, given in \eqref{eq:desitterperturbedcurvatures} and \eqref{eq:desitterquadperturbedcurvatures}. The linear vacuum equations of motion around a dS background in this gauge \cite{Conroy:2017uds,Edholm:2017fmw} are \begin{widetext} \begin{eqnarray} \label{eq:linearvacuumdseqnsbackground} (\Box-2H^2)^2F_2(\Box)h^\mu_\nu = -\left(1+24M^{-2}_P H^2 f_{1_0}\right)\left(\Box-2H^2 \right)h^\mu_\nu, \end{eqnarray} \end{widetext} where $f_{1_0}$ is the zeroth order coefficient of $F_1(\Box)$ and the background Ricci curvature scalar is $\bar{R}=12H^2$, where $H$ is the Hubble constant. Upon inserting \eqref{eq:linearvacuumdseqnsbackground} into the averaged second order equations of motion for the non-GR terms \eqref{eq:averagedquadraticeoms}, we obtain \begin{eqnarray} \label{eq:fullbackreactioneqn} \kappa t^\mu_\nu{}^{\text{IDG}} &=&\left(1+24M^{-2}_P H^2 f_{1_0}\right)\bigg[ -\frac{1}{2} \braket{ h^\mu_\sigma\left(\Box-2H^2 \right)h^\sigma_\nu}\nonumber\\ &&+\frac{1}{8}\delta^\mu_\nu \braket{ h^\tau_\sigma \left(\Box-2H^2 \right)h^\sigma_\tau}\bigg], \end{eqnarray} where $f_{1_0}$ corresponds to $b_1$ in the EQG formalism and $\left<X\right>$ represents the spacetime average of $X$, using the same definition as \cite{Kuntz:2017pjd}.
\eqref{eq:fullbackreactioneqn} is the full backreaction equation for any action with higher derivative terms which is quadratic in the curvature; we have not used the fact that IDG contains an infinite series of the d'Alembertian, so this method can also be applied to finite higher derivative actions, for example \cite{Giacchini:2018gxp,Boos:2018bhd}. The energy density $\rho=t_{00}$ is then given by \begin{eqnarray} \rho^{\text{IDG}}_{dS} &=& \left(M^2_P+24 H^2 f_{1_0}\right)\bigg[ \frac{1}{2} \braket{ h_{0\sigma}\left(\Box-2H^2 \right)h^\sigma_0}\nonumber\\ &&+\frac{1}{8} \braket{ h^\tau_\sigma \left(\Box-2H^2 \right)h^\sigma_\tau}\bigg]. \end{eqnarray} For a plane wave\footnote{There are extra terms due to the de Sitter background $H^2$ \cite{Nowakowski:2008de,Arraut:2012xr}, but to linear order in $H^2$ these produce only terms which are linear in $\cos$ or $\sin$. The spacetime average therefore vanishes and there is no extra contribution to \eqref{eq:dsdensity} from these terms.} solution $h_{\mu\nu} = \epsilon_{\mu\nu} \cos(\omega t-kz)$, we find (including the GR term) \begin{eqnarray} \label{eq:dsdensity} \rho_{dS} &=&\frac{1}{4}M^2_P\left(1 +24M^{-2}_PH^2 f_{1_0}\right)\bigg\{\omega^2\epsilon^2\nonumber\\ &&+2\left( 4 \epsilon^\sigma_0 \epsilon_{0\sigma}+\epsilon^2 \right)\left(8H^2+ \omega^2-k^2\right)\bigg\}, \end{eqnarray} where $\epsilon^2 = \epsilon_{\mu\nu} \epsilon^{\mu\nu}$. Given the current value of the Hubble constant $H_0$, we have $H_0^2 M_P^{-2}\approx 10^{-119}$. Therefore $f_{1_0}$ would have to be of the order of $10^{115}$ for the de Sitter background in the present day to have a noticeable impact, so we can generally use the Minkowski background as a good approximation. In the EQG notation, $f_{1_0}$ is replaced by $b_1$, which already has the constraint $b_1<10^{61}$, so we can ignore this extra term. For a classical wave, $\omega^2=k^2$, so the term on the second line of \eqref{eq:dsdensity} disappears for a Minkowski background.
This is the case for IDG when we assume there are no extra poles in the propagator. On the other hand, EQG does have poles, so for EQG, or for IDG with a single pole, there can be damping \cite{Calmet:2016sba,Calmet:2014gya,Calmet:2017omb,Calmet:2016fsr} and therefore $\omega^2\neq k^2$. Kuntz used LIGO constraints on the density parameter $\Omega_0$, as well as the constraint on the mass of the pole $m>5\times 10^{13}$ GeV, to constrain the amplitude of the massive mode as $\epsilon<1.4 \times 10^{-33}$ \cite{Kuntz:2017pjd}. Since then, LIGO has found the more stringent constraint $\Omega_0<5.58\times10^{-8}$ \cite{Abbott:2018utx}. Following the same method as \cite{Kuntz:2017pjd}, we divide by the critical density $\rho_c=\frac{3H^2_0}{8\pi G}$ to find \begin{eqnarray} \Omega_0 = \frac{1}{12} \left(\epsilon^\alpha_0 \epsilon_{0\alpha} + \epsilon^2\right) \frac{m^2}{H_0^2}<5.58\times10^{-8}, \end{eqnarray} which we use to find the stronger constraint $\epsilon < 8.0 \times 10^{-34}$. This cuts the allowed parameter space nearly in half and makes it less likely that the detector \cite{Baker:2009zzb} referred to in \cite{Kuntz:2017pjd} would be able to detect this mode. \section{Power emitted} We can use the backreaction equation to find the power radiated to infinity by a system, which is given by \cite{Carroll:2004st} \begin{eqnarray} P=\int_{S_\infty^2} t_{0\mu} n^\mu r^2 d\Omega, \end{eqnarray} where the integral is taken over a two-sphere at spatial infinity $S_\infty^2$ and $n^\mu$ is the spacelike normal vector to the two-sphere. In polar coordinates, $n^\mu=(0,1,0,0)$, so we are interested in the $t_{0r}$ component.
In the limit $H \to 0$ and including the usual GR term, \eqref{eq:fullbackreactioneqn} becomes \begin{eqnarray} \label{eq:backreactioneom} t_{\mu\nu}=\frac{1}{64\pi G}\bigg[&&2\braket{\partial_\mu h^{TT}_{\alpha\beta} \partial_\nu h^{\alpha\beta}_{TT}} + 4 \braket{ h^{TT}_{\sigma(\mu} \Box_\eta h^{TT\sigma}_{\nu)}} \nonumber\\ &&-\eta_{\mu\nu} \braket{h^{TT}_{\sigma\tau} \Box_\eta h_{TT}^{\tau\sigma}}\bigg]. \end{eqnarray} Note that $h^{TT}_{0\nu}=\eta_{0r}=0$, which means we can discard the second and third terms in the square bracket. The relevant term for the power becomes \begin{eqnarray} t_{0\mu}n^\mu = \frac{- GM^2}{32\pi^2 r^2}\left<\frac{d^3}{dt^3}\left(\hat{I}_{ij}(t_r)\right) \frac{d^3}{dt^3}\left(\hat{I}^{ij}(t_r)\right)\right>. \end{eqnarray} Note that this is the same as the GR expression, but with $\hat{I}_{ij}=\int dt'_r e^{-M^2(t_r-t'_r)^2/4} I_{ij} (t'_r)$ in place of $I_{ij}$. If we convert to the reduced quadrupole moment $\hat{J}_{ij}$, using $J_{ij}=I_{ij} - \frac{1}{3}\delta_{ij} \delta^{kl} I_{kl}$ \cite{Carroll:2004st}, we can use the identities \eqref{eq:integralidentities} from \cite{Carroll:2004st} to see that the power emitted by a system is \begin{eqnarray} \label{eq:generalradiationpowerformula} P = - \frac{G}{5} \left<\frac{d^3\hat{J}_{ij}}{dt^3} \frac{d^3\hat{J}^{ij}}{dt^3}\right>, \end{eqnarray} where $\hat{J}_{ij}= \int^\infty_{-\infty} dt'_r e^{-M^2(t_r-t'_r)^2} J_{ij} (t'_r)$. This result can then be applied to any system for which we know the reduced quadrupole moment. We will now apply it to binary systems in both circular and elliptical orbits.
\subsection{Circular orbits} For a binary system of two stars in a circular orbit, the reduced quadrupole moment $J_{ij}$ in polar coordinates is given in \cite{Carroll:2004st} and depends on the mass of each of the stars $M_s$, the distance between them $R$, and the angular velocity $\omega$.\footnote{The corrections to the orbital motion due to the change in the Newtonian potential from IDG will be negligible, as this has already been constrained down to the micrometre scale, much shorter than the distance between the stars.} Using \eqref{eq:generalradiationpowerformula} in the limit $r\to \infty$, together with $\left<\sin ^2(x)\right>=\frac{1}{2}$, the power is \begin{eqnarray} P&=& - \frac{128}{5} GR^2M_s^4 \omega^6 e^{-2 \omega ^2/M^2}. \end{eqnarray} This is the GR result with an extra factor of $e^{-2 \omega ^2/M^2}$, where $M$ is the IDG mass scale, giving a reduction in the amount of radiation emitted from a binary system of stars in a circular orbit. Note that this factor tends to 1 in the GR limit $M\to \infty$. \subsection{Generalisation to elliptical orbits} \begin{figure}[tbp]\hspace{-6mm} \includegraphics[width=91mm]{enhancementfactor.PNG} \caption{The enhancement factor $f^{\text{IDG}}(e)$ given by \eqref{eq:fullidgenhancementfactor} against the eccentricity $e$, together with the enhancement factor for the GR term $f^{\text{GR}}(e)$, where the total power is $P_{\text{GR}}^{\text{circ}} f^{\text{GR}}(e)+ P_{\text{IDG}}^{\text{circ}} f^{\text{IDG}}(e)$. This factor describes how the power emitted changes with respect to the eccentricity. The extra IDG term will show up most strongly at around $e=0.7$, which coincidentally is close to the value for the Hulse-Taylor binary (0.617).} \label{enhancementfactorplot} \end{figure} The power radiated by a binary system in a circular orbit is of limited applicability because in GR the power emitted is highly dependent on the eccentricity $e$ of the orbit~\cite{Peters:1963ux}, i.e.
$P_{\text{GR}}=P_{\text{GR}}^{\text{circ}} f^{\text{GR}}(e)$, where $ f^{\text{GR}}(e)$ is an \textit{enhancement factor} that reaches $10^3$ at $e=0.9$. The circular orbit is therefore unlikely to be an accurate approximation. For an elliptical orbit, the relevant components of the reduced quadrupole moment are \cite{Peters:1963ux} \begin{eqnarray} \hspace{-2mm}J_{xx}= \mu d^2 \left(\cos^2(\psi) - \frac{1}{3}\right), \quad J_{yy}= \mu d^2 \left(\sin^2(\psi) - \frac{1}{3}\right),~~~~~~ \end{eqnarray} where $\mu$ is the reduced mass $m_1 m_2/(m_1 +m_2)$ and the distance $d$ between the two bodies is given by \\$d= \frac{a(1-e^2)}{1+e\cos(\psi)}$, where $e$ is the eccentricity of the orbit and $a$ is the semimajor axis~\cite{Peters:1963ux}. The change in angular position over time is \begin{eqnarray} \dot{\psi}&=&\frac{\left[G(m_1+m_2)a(1-e^2)\right]^{1/2}}{d^2}. \end{eqnarray} For the $xx$ component, we need to calculate \vspace{4mm} \begin{eqnarray} \label{eq:xxcomponentfirstint} \hspace{-1mm}\hat{J}_{xx}=\mu a^2(1-e^2)^2 \int^\infty_{-\infty} dt'_r e^{-M^2 (t_r-t'_r)^2} \frac{\cos^2(\psi( t'_r))-\frac{1}{3}} {\left(1+e\cos(\psi (t'_r))\right)^2}.~~~~~~ \end{eqnarray} This integral is difficult to evaluate analytically. However, if we make the change of coordinates $z=M(t_r-t'_r)$, we can Taylor expand in $\frac{1}{M}$ (assuming it is small) and use the identities \eqref{eq:evenoddidentities} to obtain \eqref{eq:expansionofhatJxx}, i.e. \begin{eqnarray} \label{eq:totalpowerelliptical} P\approx P_{\text{GR}} + P_{\text{IDG}}=P_{\text{GR}}^{\text{circ}} f^{\text{GR}}(e)+ P_{\text{IDG}}^{\text{circ}} f^{\text{IDG}}(e),~~~ \end{eqnarray} where the IDG power for an elliptical orbit is the power for a circular orbit multiplied by an enhancement factor $f^{\text{IDG}}(e)$ that depends on the eccentricity.
\vspace{20mm} We find that \begin{eqnarray} P_{\text{IDG}}=P_{\text{IDG}}^{\text{circ}} f^{\text{IDG}}(e) = \frac{256}{5} \frac{\omega ^8}{M^2}GR^2M_s^4 f^{\text{IDG}}(e),~~~ \end{eqnarray} where $f^{\text{IDG}}(e)$ is a 22nd-order polynomial and so is given in the appendix. In the limit $M\to \infty$, $P_{\text{IDG}}\to 0$ and \eqref{eq:totalpowerelliptical} returns to $P_{\text{GR}}$. $f^{\text{IDG}}(e)$ is plotted in Fig.~\ref{enhancementfactorplot} with a comparison to the enhancement factor for GR, $f^{\text{GR}}(e)$. \interfootnotelinepenalty=10000 The Hulse-Taylor binary has a period of 7.5 hours and an eccentricity of 0.617. The radiation emitted from the Hulse-Taylor binary is $0.998\pm 0.002$ of the GR prediction \cite{Weisberg:2016jye}, which leads to the constraint $M>6.9 \times 10^{-49} M_P= 1.0\times 10^{-21} \text{eV}$ on our mass scale $M$, which is much weaker than previous constraints. The previous lower bound \footnote{If we assume IDG is responsible for inflation, we can obtain an even stronger lower bound of roughly $10^{14}$ GeV using Cosmic Microwave Background data \cite{Ade:2015xua,Edholm:2016seu,Koshelev:2016xqb}.} is $\sim$0.01 eV from lab-based experiments \cite{Edholm:2016hbt}. In order to produce a comparable constraint, we would need to study radiation produced by systems with orbital periods\footnote{The frequency of the radiation produced is twice the orbital frequency of the system \cite{Abbott:2016bqf}.} of less than $10^{-4}$ seconds. Not only do these systems have an orbital frequency much higher than LIGO and LISA will be able to probe (15-150 Hz \cite{Abbott:2016bqf} and $10^{-4}$-$10^{-1}$\hspace{0.2mm}Hz \cite{Audley:2017drz} respectively), but they would also be out of the weak-field regime we used for our calculations. Therefore lab-based experiments and CMB data are likely to provide the tightest constraints in the near future.
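The frequency scales in this comparison can be made concrete by converting an orbital period into an energy and evaluating the circular-orbit suppression factor $e^{-2\omega^2/M^2}$. A sketch (the constant is $\hbar$ in eV$\,$s; the Hulse-Taylor period is from the text, while the values of $M$ are purely illustrative):

```python
import math

HBAR_EV_S = 6.582119569e-16          # hbar in eV s

def orbital_omega_ev(period_seconds):
    """Orbital angular frequency expressed as an energy hbar*omega in eV."""
    return HBAR_EV_S * 2.0 * math.pi / period_seconds

def circular_suppression(omega_ev, M_ev):
    """Suppression factor exp(-2 omega^2 / M^2) from the circular-orbit power."""
    return math.exp(-2.0 * (omega_ev / M_ev) ** 2)

# Hulse-Taylor binary: 7.5 hour orbital period -> hbar*omega ~ 1.5e-19 eV
w_ht = orbital_omega_ev(7.5 * 3600.0)

# A hypothetical system with a 1e-4 s period, the scale mentioned in the text
w_fast = orbital_omega_ev(1e-4)
print(w_ht, w_fast)

# The factor tends to 1 for M >> omega, so only M near or below omega is probed
print(circular_suppression(w_ht, 1e-18), circular_suppression(w_ht, 1e-21))
```

Both frequencies sit far below the $\sim$0.01 eV lab-based bound, which is why binary-pulsar timing constrains $M$ so weakly here.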
\section{Conclusion} We found the modified quadrupole formula for IDG, which describes how the metric changes for a given stress-energy tensor. We generalised the backreaction formula already found for Effective Quantum Gravity (EQG) to a de Sitter background (for both EQG and IDG). We used updated LIGO results to give a tighter constraint of $\epsilon < 8.0 \times 10^{-34}$ on the amplitude of the massive mode in EQG. Finally, we found the power emitted by a binary system, for both circular and elliptical orbits, and investigated the example of the Hulse-Taylor binary, showing that IDG is consistent with the GR predictions. \vspace{9mm} \section{Acknowledgements} We would like to thank David\ Burton, Iber\^e Kuntz and Sonali Mohapatra for their help in preparing this paper. JE is funded by the Lancaster University Faculty of Science and Technology. \newpage
\section{Introduction} Although deep neural networks have achieved tremendous success in many domains (e.g., computer vision~\cite{Alexnet12,vggnet15,fastrcnn15}, speech recognition~\cite{hinton2012deep,dahl2012context}, natural language processing~\cite{dahl2012context,collobert2011natural}, games~\cite{silver2017mastering,silver2016mastering}), it still remains a great challenge to design the optimal network structure for a certain task. Most existing works rely on extensive human effort in designing and experimenting with different structures. \begin{figure}[t] \centering \includegraphics[scale=0.45]{figures/quick-comparison.eps} \caption{A quick comparison with two recent similar works in terms of the trade-off between the size of the search space and computational costs. The first number in the parentheses denotes the top-1 accuracy achieved on CIFAR-10 and the second number denotes the number of GPU hours used. } \label{fig:cs} \end{figure} Optimizing network structures involves two fundamental issues: how to define the search space of network structures, and how to design an efficient algorithm to search for a good network structure in that space. A great challenge in solving these issues lies in how to balance the trade-off between the size of the search space and the computational cost of the search algorithm. Earlier works based on \textit{neuro-evolution} for automatically discovering network structures usually impose strong restrictions on the search space of network structures due to limited computational power and scarcity of data~\cite{stanley2002evolving}. Recently, there has been revived interest in optimizing network structures (especially deep convolutional neural networks) using genetic/evolutionary algorithms~\cite{dufourq2017eden,xie2017genetic,real2017large}. However, the trade-off between computational cost and search-space size has pushed these works toward two extremes.
At one end, one has to restrict the search space by imposing strong constraints on the network structures. For example, in \cite{xie2017genetic} a network is composed of a fixed number of stages and each stage is composed of a fixed number of nodes representing convolutional operations. In~\cite{dufourq2017eden}, Dufourq and Bassett restricted mutation operations to adding, deleting and replacing a randomly selected layer in a network with a predetermined maximum number of layers (e.g., $7$ is used in their experiments). As a result, their evolved networks share a single-path structure, in contrast to the multiple-path structures of residual networks~\cite{ResNet_cvpr16} and Inception~\cite{googlenet15}. At the other end, Real~\textit{et al}.~\cite{real2017large} investigated large-scale evolution of networks operating at unprecedented scale by using a large amount of computing resources (e.g., spending over 10 days on 250 GPUs), which verifies that neuro-evolution can achieve performance competitive with hand-crafted models built on many years of human experience. However, such a brute-force approach is not affordable for general users who have limited computational resources. In this paper, we focus on optimizing deep CNN structures for image classification due to the availability of existing results for comparison and its popularity in computer vision. Similar to~\cite{real2017large}, we also study evolution-based algorithms, which search CNNs in a large search space that is defined by a set of mutations. Nevertheless, the difference from previous works~\cite{dufourq2017eden,xie2017genetic,real2017large} is that our main focus is to tackle an important and challenging question for optimizing neural network structures, i.e., {\bf how to maximize the exploration of the search space under limited computational resources~\footnote{Computational resources include not only the hardware but also the computing time.}}.
Instead of imposing strong restrictions on the search space, we propose new effective strategies to reduce the computational costs. We use an aggressive method to select strong individuals for survival and reproduction. In particular, among a set of individuals (i.e., the population) only a small number of the fittest individuals that are sufficiently different from each other are selected for producing the next generation. This strategy avoids wasting time on training weaker individuals that may eventually be eliminated at a later stage. However, a potential issue caused by this strategy is that the diversity of the population decreases, and diversity is very important for genetic programming. To remedy this issue, we propose to (i) increase the number of possible mutations; and (ii) make clones of the selected fittest individuals to undergo different mutations. Additional techniques are also investigated to speed up the search process and to shorten the training time of each individual during evolution. Before ending this section, we give a quick comparison of our work and two recent works, highlighting the trade-off between the size of the search space and computational costs, in Figure~\ref{fig:cs}. The main contributions of this paper can be summarized as follows: 1) Through extensive experiments, we show that we can automatically (without any human effort, e.g., tuning, modification, or adding other layers) design network structures that achieve competitive performance, and that this can be done by general users with limited computing resources; 2) From the experiments, we found some interesting insights into designing neural networks; for example, skip layers that appeared in early stages of the search process were later replaced by other layers; 3) We propose a simple yet efficient selection strategy that performed better than the other strategies we compared, is of independent interest, and can easily be adopted by practitioners.
\section{Related Work} There exist abundant studies on using genetic or evolutionary algorithms for discovering neural network structures before the re-emergence of deep learning~\cite{miller1989designing,stanley2002evolving,gruau1993genetic} in 2012. These algorithms are also known as \textit{neuro-evolution}. Most of these works are restricted to feedforward neural networks of a few layers. However, many techniques in these earlier works are also useful for optimizing large convolutional neural networks. In designing a neuro-evolution algorithm, several fundamental questions need to be answered, including (i) how to encode an individual; (ii) what mutations are allowed; (iii) how to select individuals for reproduction; and (iv) what the fitness function is. Existing works may differ from each other in how they address these questions, which are also related to a fundamental issue in neuro-evolution (and also in other meta-heuristic optimization algorithms): how to balance the size of the search space against the computational costs. In the following discussion, we will highlight how these fundamental questions are addressed and how the trade-off is balanced. In~\cite{xie2017genetic}, the authors developed a genetic algorithm using a fixed-length binary string to encode the network structure. The search space consists of all networks with a fixed number of stages, where each stage is composed of a fixed number of nodes representing convolutional operations. Mutations are easily operated by randomly flipping each bit in the string representation, which corresponds to adding, deleting and changing connections of nodes within each stage. By restricting the number of stages (e.g., 3) and the number of nodes in each stage, they keep their computational cost at a manageable level.
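The fixed-length binary encoding with bit-flip mutation described above can be sketched in a few lines (our simplified reading of that scheme; the encoding length and flip probability are illustrative):

```python
import random

def mutate_bits(encoding, flip_prob, rng):
    """Bit-flip mutation over a fixed-length binary string encoding,
    where each bit marks the presence of a connection within a stage."""
    return "".join(
        ("1" if b == "0" else "0") if rng.random() < flip_prob else b
        for b in encoding
    )

rng = random.Random(0)
parent = "0110100101"
child = mutate_bits(parent, flip_prob=0.2, rng=rng)
print(parent, child)
```

Because the string length is fixed, every mutation stays inside the predefined search space, which is precisely how this design keeps the computational cost bounded.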
The selection of individuals for reproduction is done by a Russian roulette process, which selects individuals according to a non-uniform distribution whose probabilities are proportional to the fitness of the individuals, i.e., individuals with higher fitness scores are selected with higher probability. The focus of~\cite{real2017large} is to scale up neuro-evolution to take advantage of the tremendous computational resources Google LLC has; a total of 250 GPUs are used in their experiments. They used a graph to encode an individual, defined seven mutations to change the structure of networks~\footnote{They also used several other mutations that do not affect the structure of the network.}, and used a standard binary tournament selection method~\cite{goldberg1991comparative} for selecting individuals for reproduction. The fitness function of the above works purely depends on the performance of individual models on validation data, with the models trained by back-propagation. The evolutionary techniques used in~\cite{dufourq2017eden} are similar to those in~\cite{real2017large}, except that their mutations are restricted to adding, deleting and replacing one of six predefined layers: two-dimensional convolution, one-dimensional convolution, fully connected, dropout, and one- and two-dimensional max pooling. As a result, their algorithm cannot discover multiple-path networks, which are prevalent in the modern deep learning community. A common feature of these works is a combined strategy that lets the structure evolve but optimizes the weights of each individual by back-propagation, which is also adopted in the present work. Another fundamental issue in genetic/evolutionary algorithms is the diversity of the population. A traditional approach for encouraging diversity of the population is fitness sharing, where the fitness of each individual is scaled based on its proximity to others.
This means that originally good solutions in densely populated regions are given a lower fitness value than comparably good solutions in sparsely populated regions. None of the three recent works~\cite{xie2017genetic,real2017large,dufourq2017eden} used any type of fitness sharing to encourage diversity; in~\cite{real2017large}, the authors simply use a very large population size (i.e., 1000) to increase the diversity. A key difference between our work and these previous works lies in the selection process and the number of mutation operations. The proposed solution takes both the limitations of computational resources and the diversity of the population into account. As a result, even though we do not impose any strong restriction on the search space, we can achieve competitive, if not better, prediction performance than~\cite{xie2017genetic,dufourq2017eden,real2017large} at a lower computational cost. Other related works on automatically discovering network structures include approaches based on reinforcement learning~\cite{baker2016designing,zoph2016neural} and Bayesian optimization~\cite{snoek2012practical,bergstra2013making,mendoza2016towards}. We refer the readers to~\cite{real2017large} for more discussion and references. \section{Our Approach} The proposed algorithm follows the standard flow of neuro-evolution, i.e., population initialization, individual selection, reproduction, mutation/cross-over, and fitness score evaluation. The individuals in the initial population are simple neural network structures with only one global pooling layer or one fully connected layer. We use an acyclic graph to encode an individual, with each node in the graph representing a basic operation or connection, including \textit{convolution, pooling, fully connected, concatenation and skip}. These operations are standard in the neural network literature. Please refer to Figure~\ref{fig:examples_mutation_operations} for examples of individuals represented by a graph.
We also use the prediction performance on validation data as the fitness score. However, different from~\cite{xie2017genetic,real2017large,dufourq2017eden}, we are not only exploring how to implement a genetic/evolutionary algorithm for optimizing deep convolutional neural networks, but also how to reduce the computational costs under the framework of neuro-evolution without imposing strong restrictions on the search space. Next, we present our strategies for reducing the computational costs. \begin{figure*}[t] \begin{center} \includegraphics[scale=0.34]{figures/selection.eps} \end{center} \caption{Comparison between the proposed aggressive selection and mutation strategy (right) and the conventional tournament selection and mutation strategy (left). Each colored ball denotes an individual, the number within each ball denotes its fitness score, red dashed arrows denote copies and green solid arrows denote mutations.} \label{fig:top2_selection} \end{figure*} \begin{algorithm}[t] \caption{Aggressive selection of top-$k$ individuals}\label{alg:agg} \begin{algorithmic}[1] \STATE \textbf{Input}: a population of individuals ranked by fitness score from highest to lowest, $\mathcal P_{t-1}=\{i_1, \ldots, i_N\}$; a target number of individuals $k$; and a distance threshold $d$. \STATE Initialize an empty set $\mathcal P_t$ \FOR{$j=1,2,\ldots,N$} \STATE choose the next individual $i_j$ in $\mathcal P_{t-1}$ \IF{the distance between $i_j$ and every individual in $\mathcal P_t$ exceeds the threshold $d$ } \STATE add $i_j$ into $\mathcal P_t$ \ENDIF \IF{the size of $\mathcal P_t$ is equal to $k$} \RETURN $\mathcal P_t$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Aggressive selection and mutation} A potential issue with traditional selection strategies (e.g., tournament selection or sampling-based selection) is that weak individuals might survive for a long period.
While this feature helps increase the diversity of the population, it may waste a lot of time training weak individuals that will eventually be eliminated. We propose to eliminate these weak individuals at a very early age, and to use other approaches to increase the diversity of the population. The algorithmic description of the proposed aggressive selection is given in Algorithm~\ref{alg:agg}, and an illustration of the proposed selection process is given in Figure~\ref{fig:top2_selection}. In particular, we greedily select the top $k$ individuals from a population $\mathcal P_{t-1}$ based on their fitness scores. To encourage diversity, we also make sure the distance between the selected top individuals exceeds a certain threshold. The distance between two individuals is computed by comparing the nodes of the two graph structures from the input layer to the output layer. If the nodes are represented by an alphabet denoting their operation or connection types, the distance is simply the Hamming distance. It is notable that the distance between individuals is also considered by fitness sharing in previous studies~\cite{goldberg1991comparative} to encourage diversity. \paragraph{Multiple Cloning.} With the aggressive selection strategy described above, we can eliminate many weak individuals at an early age. However, the small number of retained individuals would reduce the size of the next population and thus restrict diversity. To address this issue, we resort to cloning, i.e., making multiple copies of the selected individuals to undergo different mutations when generating the next population. For comparison, in traditional tournament selection, as illustrated in Figure~\ref{fig:top2_selection}, a weak individual might be selected and each surviving individual only undergoes one mutation, which is the strategy adopted in~\cite{real2017large,dufourq2017eden}.
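A minimal sketch of the selection loop of Algorithm~\ref{alg:agg}, assuming individuals are flattened to strings of single-letter operation codes; the convention for comparing networks of different depth is our own, as the text leaves it unspecified:

```python
def hamming_distance(a, b):
    """Distance between two individuals encoded as strings of operation codes.

    Extra layers of the deeper network count as mismatches -- one simple
    convention; the paper does not specify how unequal depths are compared."""
    return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

def aggressive_select(population, k, d):
    """Greedily keep the top-k individuals ranked by fitness, skipping any
    whose distance to an already-selected one does not exceed d.

    `population` is a list of (fitness, encoding) pairs."""
    ranked = sorted(population, key=lambda p: p[0], reverse=True)
    selected = []
    for fit, enc in ranked:
        if all(hamming_distance(enc, s) > d for _, s in selected):
            selected.append((fit, enc))
        if len(selected) == k:
            break
    return selected

# "c" = convolution, "p" = pooling, "f" = fully connected
pop = [(0.91, "ccpf"), (0.90, "ccpf"), (0.88, "cpf"), (0.85, "cf")]
# The 0.90 duplicate of "ccpf" is rejected (distance 0 <= d), so the
# selected pair is the 0.91 and 0.88 individuals.
top = aggressive_select(pop, k=2, d=1)
```

Note how the distance check removes near-duplicates that a purely fitness-ranked cut would keep.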
In conventional sampling-based selection and mutation, each individual has a certain probability of being retained and each retained individual has a certain probability of being mutated. This is the strategy adopted in~\cite{xie2017genetic}, where the mutation probability is set to a small value (e.g., 0.05). We can see that the proposed selection and mutation strategy is more aggressive than the existing works in that only a small number of strong individuals are retained, and each surviving individual is cloned to undergo more mutations for potential growth in fitness. \subsection{Mutation operations} \label{sb:mutation-operation} To complement our proposed aggressive selection and mutation, we increase the number of possible mutation operations compared to the existing works~\cite{real2017large,dufourq2017eden,xie2017genetic}. We define 15 different types of mutation operations as shown in Table~\ref{tb:implemented_mutation_operation_list}, which almost doubles the number considered in~\cite{real2017large}. Note that three mutation operations (\texttt{reset\_weight}, \texttt{continue\_training}, \texttt{alter\_learning\_rate}) that appear in \cite{real2017large} are not included in Table~\ref{tb:implemented_mutation_operation_list} since those operations do not change the structure of a neural network. In the following, we discuss implementation details of each mutation operation, focusing on the \texttt{add} operations. For all \texttt{removal} operations, we randomly select and remove one of the existing layers of the chosen type; if no such layer exists, no operation is applied. \begin{itemize} \item \texttt{add\_convolution}: First, we randomly select the position at which to add a convolution layer. Then we insert a convolution layer with channel number $32$, stride $1$, filter size $3 \times 3$, and padding of $1$ pixel.
For simplicity, these values are chosen to ensure that the feature map dimensions do not change after the convolution. Note that even though we use predefined values for the channel number, stride and filter size, those values can be altered later through the \texttt{alter\_channel\_number}, \texttt{alter\_stride} and \texttt{alter\_filter\_size} mutation operations, which we discuss shortly. In this work, a convolutional layer is by default followed by batch normalization~\cite{ioffe2015batch} with a ReLU~\cite{Alexnet12} activation unit. \item \texttt{alter\_channel\_number, alter\_stride, alter\_filter\_size}: These three types of mutation operations reset the hyper-parameters of a convolution layer. We randomly choose a new value for the corresponding hyper-parameter from a predefined list, i.e., $\{8,16,32,48,64,96,128\}$ for channel numbers, $\{1\times 1, 3 \times 3, 5 \times 5\}$ for filter sizes, and $\{1,2\}$ for strides. \item \texttt{add\_skip}: A skip layer, illustrated in Figure~\ref{fig:examples_mutation_operations}, implements the skip connection introduced in residual networks~\cite{ResNet_cvpr16}. Since a skip layer requires its two bottom layers to share the same feature map dimension and channel number, we first find all pairs of layers which could potentially be bottom layers of the skip layer. Then, a skip connection is added on top of a randomly selected pair from all possible pairs. \item \texttt{add\_concatenate}: Similar to the skip layer, a concatenate layer requires its two bottom layers to share the same feature map dimension, but they may have different channel numbers. Thus, the \texttt{add\_concatenate} mutation follows a similar procedure to the \texttt{add\_skip} mutation. \item \texttt{add\_pooling:} Here, we restrict the insertion of a pooling layer such that it can only take place right after a convolution layer.
For simplicity, we limit the pooling strategy to max pooling with kernel size $2\times2$ and stride $2$. This predefined pooling configuration could be relaxed in future work. \item \texttt{add\_fully\_connected}: For this operation, we limit its position to be the last layer or immediately following another fully connected layer. The output dimension of the inserted layer is uniformly chosen from the set $\{50,100,150,200\}$. \item \texttt{add\_dropout}: For this operation, we limit its position to be immediately after a fully connected layer. For simplicity, we set the dropout ratio to $0.5$. \end{itemize} Note that applying some mutation operations (e.g., \texttt{alter\_stride}, \texttt{add\_pooling}) may result in inconsistent feature map dimensions. In this situation, we adopt the following strategies: i) adding additional padding pixels; ii) adding a $1 \times 1$ convolution layer to adjust channel numbers. If the result is still an invalid network structure, we simply apply a mutation again. In Figure~\ref{fig:examples_mutation_operations}, we show an example of how a neural network structure evolves into new ones after undergoing the mutation operations \texttt{add\_convolution, add\_concatenate, add\_skip}. We use dotted squares to mark the new layers. Clearly, as the network evolves, we can explore diverse neural network structures and obtain structures with potentially better performance. \begin{figure}[t] \begin{center} \includegraphics[scale=0.40]{figures/mutation_operation_1.eps} \end{center} \caption{Example of how a neural network structure mutates into new ones after undergoing different mutation operations: \texttt{add\_convolution}, \texttt{add\_concatenate} and \texttt{add\_skip}.
The dotted squares mark the added convolution layer ``Conv3'', concatenate layer ``Concat1'' and skip layer ``Skip1''.} \label{fig:examples_mutation_operations} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{l|l|l} \hline Mutations & \cite{real2017large} & Ours \\ \hline \hline \texttt{add\_convolution} & \checkmark & \checkmark \\ \hline \texttt{remove\_convolution} & \checkmark & \checkmark \\ \hline \texttt{alter\_channel\_number} &\checkmark &\checkmark \\ \hline \texttt{alter\_filter\_size} &\checkmark &\checkmark \\ \hline \texttt{alter\_stride} &\checkmark &\checkmark \\ \hline \texttt{add\_dropout} & - &\checkmark \\ \hline \texttt{remove\_dropout} & - &\checkmark \\ \hline \texttt{add\_pooling} & - &\checkmark \\ \hline \texttt{remove\_pooling} & - &\checkmark \\ \hline \texttt{add\_skip} &\checkmark &\checkmark \\ \hline \texttt{remove\_skip} &\checkmark &\checkmark \\ \hline \texttt{add\_concatenate} & - & \checkmark \\ \hline \texttt{remove\_concatenate} & - & \checkmark \\ \hline \texttt{add\_fully\_connected} & - &\checkmark \\ \hline \texttt{remove\_fully\_connected} & - &\checkmark \\ \hline \end{tabular} \end{center} \caption{The allowed mutation operations in our work and in~\cite{real2017large};~\checkmark indicates that the mutation operation is defined, while - indicates it is not available.} \label{tb:implemented_mutation_operation_list} \vspace{-0.25in} \end{table} \subsection{Training strategy} \label{sb:aggressive-training} To evaluate the fitness score of each individual, a standard approach is to use existing optimization algorithms off-the-shelf to learn the weight parameters. Training a CNN may take tens of thousands of gradient descent iterations to reach a good local minimum. However, we observe that ``deep'' training (i.e., setting a very stringent condition for stopping the training process) is not necessary during the evolution, since our goal is only to obtain a ranked list of individuals for selection.
Therefore, a rough estimate of the prediction performance of each individual is sufficient to drive the evolution. To reduce the training time of each individual during the evolution, we explore a different learning rate decay strategy for training a deep neural network. There are two popular strategies for decaying the learning rate. One is an inverse learning rate decay strategy~\cite{jia2014caffe}, where the learning rate $\eta_t$ at the $t$-th iteration is set to \begin{equation}\label{eqn:lr} \eta_t = \eta_0(1 + \gamma t)^{-\alpha}, \end{equation} where $\eta_0$ is the initial step size and $\gamma, \alpha$ are hyper-parameters. The other popular method is a multi-stage strategy~\cite{Alexnet12}, where the learning rate is reduced by a fixed factor (e.g., 10) after a large number of iterations. We use a mixture of both strategies. We divide our training process into three stages with a maximum of $20000$ iterations: the first stage covers the first $10000$ iterations, the second stage covers iterations $10000$ to $15000$, and the final stage covers the last $5000$ iterations. Within each stage, we use the inverse learning rate strategy, and the learning rate is reduced by a fixed factor after each stage. This strategy avoids running a large number of iterations with the same step size without much improvement in the prediction performance at each stage, and it quickly gives a rough estimate of the prediction performance without spending a long time at the tail of the learning curve, where there is little improvement. Finally, after the neuro-evolution process terminates with a good structure, we switch to existing optimization algorithms for deep training. \vspace{-0.12in} \subsection{Mutation operation sampling} To speed up the evolution process, we also use non-uniform sampling probabilities for choosing a mutation operation.
Using uniform probabilities to choose a mutation operation wastes a lot of time training weak individuals that are mutated by removing convolution, skip or concatenation layers from their parents in the early stage of the evolution process. To avoid this issue, we explicitly set the sampling probabilities of \texttt{add\_convolution}, \texttt{add\_skip}, \texttt{add\_concatenate}, \texttt{alter\_stride}, \texttt{alter\_filter\_size} and \texttt{alter\_channel\_number} to be twice as large as those of the other mutation operations in the earlier stage of the evolution process. \section{Experiments} In this section, we report experimental results of the proposed aggressive genetic programming approach for optimizing convolutional neural network structures. We emphasize that we are not aiming to achieve better performance than~\cite{real2017large} due to limited computing resources. Instead, we focus on showing that the proposed strategies can reduce the computational time and achieve competitive and even better performance than similar works using significantly less computational power. In subsection~\ref{sec::exp::config}, we describe the experimental setup, including datasets and data preprocessing. In subsection~\ref{sec::exp::selection}, we show the performance of the aggressive selection strategy under different values of $k$ on the CIFAR-10~\cite{krizhevsky2009learning} dataset to justify the proposed aggressive selection. In subsection~\ref{sec::exp::comparison}, we compare the performance of the proposed aggressive evolution with other genetic approaches as well as previous works that achieved state-of-the-art results with hand-crafted networks on four standard benchmark datasets.
In subsection~\ref{sec::exp::evolve}, we present the discovered neural network structures for CIFAR-10 and CIFAR-100. \subsection{Experimental setup} \label{sec::exp::config} \noindent{\textbf{Datasets and Preprocessing}}: We conduct experiments on four benchmark datasets: MNIST~\cite{lecun1998gradient}, SVHN~\cite{netzer2011reading}, CIFAR-10~\cite{krizhevsky2009learning} and CIFAR-100~\cite{krizhevsky2009learning}. The MNIST dataset contains $60,000$ training images and $10,000$ test images, where each gray-scale image contains one of the $10$ digits, $0$ to $9$. The CIFAR-10 dataset \cite{krizhevsky2009learning} has 50,000 training images and 10,000 test images. It contains 10 classes and each RGB image has a size of $32 \times 32$. The data is preprocessed by applying Global Contrast Normalization (GCN) and ZCA whitening \cite{goodfellow2013maxout}, and each side is padded with four pixels. In the training phase, a $32\times 32$ patch is randomly cropped from the padded image, while in the test phase the original images are used. The CIFAR-100 dataset is similar to CIFAR-10 but has 100 classes in total. SVHN is a street view house number dataset which contains $73,257$ training images and $26,032$ test images. \\ \noindent{\textbf{Experiment configuration:}} In our experiments, the population size is set to 10. Given a population of $10$ individuals (which are clones of the top $k$ individuals in intermediate generations), we let each individual undergo a mutation, and then select the top $k$ individuals from the 10 mutated individuals and the original $10$ individuals. A new population is created by making an equal number of clones of the selected individuals to reach the population size of $10$. It is worth mentioning that even though we use such a small population size, our performance is competitive with and even better than~\cite{dufourq2017eden,xie2017genetic}, in which the population size is set to 100 and 20, respectively.
It is expected that using a larger population size would further increase our performance according to~\cite{real2017large}. We use mini-batch Stochastic Gradient Descent (SGD) to train each individual neural network for a maximum of 20,000 iterations with a momentum of 0.9. The mini-batch size is fixed to 128. The weight decay is set to 0.0005. The learning rate strategy is described in subsection~\ref{sb:aggressive-training}. The initial learning rates for the three stages are set to $10^{-1}, 10^{-3}, 10^{-5}$, respectively. The parameters in~(\ref{eqn:lr}) are set to $\gamma = 0.001$ and $\alpha = 0.75$. The distance threshold in aggressive selection is set to $1$. In our experiments, one evolution process is always run on one GPU. \subsection{The effect of aggressive selection} \label{sec::exp::selection} Here, we present evidence that the proposed aggressive selection strategy can dramatically speed up the evolution process. The following experiments are conducted on the CIFAR-10 dataset. In the left panel of Figure~\ref{fig:cifar10-acc-topk}, we plot the evolved network performance under four different values $k=1, 2, 5, 10$ used in our aggressive selection strategy. The smaller the value of $k$, the more aggressive the selection strategy. For each experimental setting, we plot the test accuracy of the best individual among the selected top $k$ individuals from each generation. We observe that aggressive selection with smaller values of $k$ (e.g., $1$ and $2$) evolves faster than non-aggressive selection using larger values of $k$ (e.g., $5, 10$). We further compare the proposed aggressive selection strategy with other existing selection strategies, namely \textbf{Tournament}, \textbf{Sampling Uniformly} and \textbf{Sampling by Fitness}.
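For reference, the two sampling-based baselines can be sketched as follows; these are illustrative textbook implementations under our own conventions, not the authors' code, with `population` again a list of (fitness, individual) pairs:

```python
import random

def tournament_select(population, rng):
    """Tournament selection: draw two individuals uniformly and keep the
    fitter one, so a weak individual survives whenever it is never paired
    against a stronger competitor."""
    a, b = rng.sample(population, 2)
    return a if a[0] >= b[0] else b

def select_by_fitness(population, rng):
    """Fitness-proportional sampling: selection probability scales with the
    fitness score, so weak individuals are retained with nonzero probability."""
    weights = [f for f, _ in population]
    return rng.choices(population, weights=weights, k=1)[0]

rng = random.Random(0)
pop = [(0.91, "A"), (0.55, "B"), (0.72, "C")]
winner = tournament_select(pop, rng)   # the fitter of a random pair
sampled = select_by_fitness(pop, rng)  # weighted random draw
```

Both baselines leave the survival of weak individuals to chance, which is precisely what the aggressive top-$k$ strategy eliminates.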
In the middle panel of Figure~\ref{fig:cifar10-acc-topk}, we plot the test performance of the best individual in each generation under these different selection strategies, from which we can observe that aggressive selection evolves dramatically faster than the other strategies. For researchers and practitioners in genetic programming, this proposed selection strategy is of interest and can be easily adopted. \begin{figure}[t] \centering \subfigure{\includegraphics[scale=0.15]{figures/cifar10_acc_ave_top1_top2_top5_top10}} \subfigure{\includegraphics[scale=0.11]{figures/selection_top1_tournament_random_random_fitness}} \subfigure{ \includegraphics[scale=0.115]{figures/cifar10_model_size_acc_v0}} \caption{\textbf{Left}: The test accuracy of the best individual among the selected top $k$ individuals vs the number of generations on the CIFAR-10 dataset, where different curves correspond to aggressive selection with different values of $k$. \textbf{Middle}: The test accuracy of the best individual among the individuals selected by the aggressive, tournament, sampling uniformly and sampling by fitness strategies vs the number of generations on the CIFAR-10 dataset. \textbf{Right}: The evolution of the model size and test accuracy of the best individual in the \textbf{AG-Evolution} algorithm on CIFAR-10. } \label{fig:cifar10-acc-topk} \end{figure} \subsection{Comparison with existing methods} \label{sec::exp::comparison} In this section, we compare the performance of the proposed aggressive genetic programming approach with existing genetic approaches on the benchmark datasets MNIST, SVHN, CIFAR-10 and CIFAR-100. We refer to the genetic approaches presented in~\cite{dufourq2017eden,xie2017genetic,real2017large} as \textbf{EDEN}, \textbf{Genetic-CNN} and \textbf{LS-Evolution}, respectively. For our method, we report the result using aggressive selection with $k=1$, referred to as \textbf{AG-Evolution}.
For reference and comparison, we also include state-of-the-art results based on hand-crafted neural network structures (referred to as \textbf{SOTA}) as well as the results using non-aggressive selection (i.e., by setting $k=10$ in our framework), which is referred to as \textbf{NA-Evolution}. For \textbf{NA-Evolution} and \textbf{AG-Evolution}, we terminate the evolution process when the performance on the validation data saturates on all datasets except CIFAR-100, on which we terminate the process earlier. It is possible that by continuing the evolution process, the performance might be further improved. We would also like to emphasize that state-of-the-art results can be attributed not only to a good network structure but also to other factors (e.g., using a good pooling function~\cite{lee2016generalizing}), which are not considered in current genetic approaches. For \textbf{Genetic-CNN}, we do not compare with the results in their Table 3 for SVHN, CIFAR-10 and CIFAR-100. The reason is that those results are not directly achieved by the evolution process: the networks were re-trained using a large number of filters, so they are not exactly the networks found by their genetic algorithm. Tables~\ref{mnist-quantitive-comp},~\ref{svhn-quantitive-comp},~\ref{cifar10-quantitive-comp} and~\ref{cifar100-quantitive-comp} show our results on the different datasets. For each method, we report both the test accuracy and the computational cost measured by the total number of GPU hours (GPUH), as in \cite{xie2017genetic,dufourq2017eden,real2017large}. The GPUH numbers for the other methods are taken directly from the original papers. Note that on some datasets the results of \textbf{EDEN}, \textbf{Genetic-CNN} and \textbf{LS-Evolution} are missing because they are not reported in the original papers. From the results, we have the following observations.
\begin{itemize} \item First, the performance of the neural network structures discovered by our genetic approach \textbf{AG-Evolution} on the MNIST and SVHN datasets is very close to the state-of-the-art results. \item Second, compared with \textbf{EDEN}~\cite{dufourq2017eden}, \textbf{AG-Evolution} achieves better performance on MNIST and CIFAR-10, and compared with \textbf{Genetic-CNN}~\cite{xie2017genetic}, \textbf{AG-Evolution} finds a much better neural network on CIFAR-10 ($0.9052$ vs $0.7706$ test accuracy) in much less time (72 GPUH vs 408 GPUH). \item Third, compared with \textbf{LS-Evolution}~\cite{real2017large} on the CIFAR-10 data, which achieves a test accuracy of 0.9180 with $17971$ GPUH, our \textbf{AG-Evolution} achieves a similar performance of $0.9052$ in much less time, i.e., 72 GPUH. It is expected that by continuing our evolution process, we might achieve a test accuracy similar to 0.9460 but in less time. \item Finally, \textbf{AG-Evolution} takes much less time to find network structures achieving performance almost equal to that of \textbf{NA-Evolution}, which takes much longer, on the SVHN, CIFAR-10 and CIFAR-100 datasets, further verifying the benefit of the proposed aggressive selection. \end{itemize} In Figure~\ref{fig:mnist-cifar10-svhn-cifar100-top1-acc}, we plot the test accuracy of the best individual in each generation versus the number of generations on the four datasets for \textbf{AG-Evolution}, from which we can see that the proposed genetic approach gradually improves the performance of the neural network structures. We also report the evolution of the model size of individuals on the CIFAR-10 dataset in the right panel of Figure~\ref{fig:cifar10-acc-topk}.
\begin{table}[t] \begin{center} \begin{tabular}{l|ll} \hline Approach & \textbf{Test Acc} & \textbf{Comp Cost} \\ \hline\hline \textbf{SOTA}~\cite{wan2013regularization} & 0.9979 & -- \\ \textbf{Genetic-CNN}~\cite{xie2017genetic} & 0.9966 & 48 GPUH \\ \textbf{EDEN}~\cite{dufourq2017eden} & 0.9840 & --\\ \textbf{AG-Evolution} & 0.9969 & 35 GPUH \\ \hline \end{tabular} \end{center} \caption{Comparison of test accuracy and computational cost on the {MNIST} dataset. \textbf{NA-Evolution} is not run on this dataset because \textbf{AG-Evolution} already almost achieves the state-of-the-art performance.} \label{mnist-quantitive-comp} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{l|ll} \hline Approach & \textbf{Test Acc} & \textbf{Comp Cost} \\ \hline\hline \textbf{SOTA}~\cite{lee2016generalizing} & 0.9831 & -- \\ \textbf{NA-Evolution} & 0.9620 & 552 GPUH \\ \textbf{AG-Evolution} & 0.9541 & 60 GPUH \\ \hline \end{tabular} \end{center} \caption{Comparison of test accuracy and computational cost on the {SVHN} dataset.} \label{svhn-quantitive-comp} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{l|ll} \hline Approach & \textbf{Test Acc} & \textbf{Comp Cost} \\ \hline\hline \textbf{SOTA}~\cite{huang2016densely} & 0.9654 & -- \\ \textbf{LS-Evolution}~\cite{real2017large} & 0.9460 & $>$65,000 GPUH \\ \textbf{LS-Evolution}~\cite{real2017large} & 0.9180 & $>$17,500 GPUH \\ \textbf{Genetic-CNN}~\cite{xie2017genetic} & 0.7706 & 408 GPUH \\ \textbf{EDEN}~\cite{dufourq2017eden} & 0.7450 & --\\ \textbf{NA-Evolution} & 0.9037 & 552 GPUH \\ \textbf{AG-Evolution} & 0.9052 & 72 GPUH \\ \hline \end{tabular} \end{center} \caption{Comparison of test accuracy and computational cost on the {CIFAR-10} dataset.} \label{cifar10-quantitive-comp} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{l|ll} \hline Approach & \textbf{Test Acc} & \textbf{Comp Cost} \\ \hline\hline \textbf{SOTA}~\cite{huang2016densely} & 0.8280 & -- \\ \textbf{LS-Evolution}~\cite{real2017large} & 0.7700 &
$>$65,000 GPUH\\ \textbf{NA-Evolution} & 0.6560 & 552 GPUH \\ \textbf{AG-Evolution} & 0.6804 & 184 GPUH \\ \hline \end{tabular} \end{center} \caption{Comparison of test accuracy and computational cost on the {CIFAR-100} dataset.} \label{cifar100-quantitive-comp} \end{table} \begin{figure}[h!] \centering \subfigure{\includegraphics[scale=0.14]{figures/mnist_final_acc_1}} \subfigure{\includegraphics[scale=0.14]{figures/svhn_final_acc_1}} \subfigure{\includegraphics[scale=0.14]{figures/cifar10_final_acc}} \subfigure{\includegraphics[scale=0.14]{figures/cifar100_final_acc}} \caption{Test accuracy vs generation number on the MNIST, SVHN, CIFAR-10 and CIFAR-100 datasets.} \label{fig:mnist-cifar10-svhn-cifar100-top1-acc} \end{figure} Note that the benefit of this work for practitioners in the deep learning and computer vision communities is that they can use our approach to automatically search for good neural network structures for their own tasks at an acceptable computing cost. \subsection{Discovered Network Structures} \label{sec::exp::evolve} Finally, we show two network structures found by the proposed \textbf{AG-Evolution} for the CIFAR-10 and CIFAR-100 datasets in Figure~\ref{fig:learned-neural-network-structure}. It is notable that both networks are multiple-path networks with concatenation and skip connections, similar to GoogLeNet~\cite{googlenet15} and ResNet~\cite{ResNet_cvpr16}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.26]{figures/sample-nn.eps} \end{center} \caption{Discovered neural network structures for the CIFAR-10 and CIFAR-100 datasets.} \label{fig:learned-neural-network-structure} \end{figure} \section{Conclusions} In this paper, we have developed an aggressive genetic programming approach to optimize the structure of convolutional neural networks under limited computational resources without imposing strong restrictions on the search space.
Our study shows that it is possible to achieve promising results using the proposed aggressive genetic programming approach in a reasonable amount of time. We expect the proposed strategies to also be useful for optimizing other types of neural networks (e.g., recurrent neural networks), which is left as future work. \bibliographystyle{splncs}
\section{ACKNOWLEDGEMENTS} I thank Andrew MacPherson and Benjamin Hennion for numerous useful conversations. Further, I thank Ian Grojnowski for shaping my entire outlook on Algebraic Geometry. Finally, I thank MPIM Bonn for wonderful working conditions. \section{INTRODUCTION} If $M$ is, say, a manifold, then we can consider the free loop space, $LM=Map(S^{1},M)$, of unbased maps from a circle into $M$. If $M$ is connected then the set of connected components of $LM$ is identified with the set of conjugacy classes in the fundamental group $\pi_{1}(M)$. There is an algebro-geometric version of $LM$, obtained heuristically by replacing the circle $S^{1}$ with a \emph{formal punctured disc}, $spec(\mathbb{C}((z)))$. The resulting object is an ind-scheme if we take our target to be affine. \\ \\The question of the topology of the $\mathbb{C}$-points of this ind-scheme now arises fairly naturally. We address this in the case of a smooth affine curve. To the best of our knowledge, the first non-trivial such computation is due to Contou-Carr\`{e}re, who studied the algebraic loop space of $\mathbb{G}_{m}$ in his work on what is now called the Contou-Carr\`{e}re symbol (cf. [2]). In the case of $\mathbb{G}_{m}$ the answer turns out to \emph{agree} with the topological analogue. We will see that this is very much not the case in general. \section{BASICS} For the reader unfamiliar with algebraic loop spaces we include some background here. For a more detailed introduction we refer the reader to the work of Kapranov and Vasserot in [3]. \begin{definition} \begin{itemize}\item If $S=spec(A)$ is an affine scheme then we write $\mathcal{D}_{S}$ for the affine scheme $spec(A[[z]])$ and $\mathcal{D}_{S}^{*}$ for the affine scheme $spec(A((z)))$. We write $\mathcal{D}^{*}_{\mathbb{C}}$ for $\mathcal{D}^{*}_{spec(\mathbb{C})}$, and we sometimes shorten this further to just $\mathcal{D}^{*}$.
\item If $X$ is a scheme then we define the pre-sheaves $X(\mathcal{D})$ and $X(\mathcal{D}^{*})$, respectively, by $X(\mathcal{D})(S)=X(\mathcal{D}_{S})$ and $X(\mathcal{D}^{*})(S)=X(\mathcal{D}^{*}_{S}).$ We refer to them as the \emph{arc space} of $X$ and the \emph{loop space} of $X$ respectively. We will sometimes refer to them as \emph{holomorphic} (resp. \emph{meromorphic}) loops into $X$. The space $X(\mathcal{D})$ is a closed sub-space of $X(\mathcal{D}^{*})$. \item There is a \emph{covering action} of the monoid $\mathbb{N}^{\times}$ on the space $X(\mathcal{D}^{*})$ given by pre-composition with the \'{e}tale covers, $z\mapsto z^{n}$. \end{itemize}\end{definition} We briefly summarise the representability properties of these pre-sheaves: \begin{lemma}\begin{itemize}\item If $X$ is a scheme then the pre-sheaf $X(\mathcal{D})$ is representable by a scheme, affine if $X$ is.\item If $X$ is an affine scheme then $X(\mathcal{D}^{*})$ is an ind-affine scheme, i.e. a filtered colimit (taken inside the pre-sheaf category) of affine schemes with transition maps closed inclusions.\end{itemize} \end{lemma} \begin{proof} This is well known; we refer the reader to [3]. One reduces to the case of $\mathbb{A}^{1}$ using compatibility with limits and then deals with the $\mathbb{A}^{1}$ case explicitly. \end{proof} We give a couple of examples of loop spaces of affine curves. \begin{example}\begin{itemize}\item We briefly describe the loop space of $\mathbb{A}^{1}$. It is easily seen to be the ind-scheme $\underrightarrow{\lim}_{n}\mathbb{A}^{[-n,\infty)}$. A Laurent power series $\gamma(z)=\sum_{i\geq -n}\gamma_{i}z^{i}$ corresponds to the element $(\gamma_{i})_{i}\in\mathbb{A}^{[-n,\infty)}$ in the obvious way. The sub-space of arcs is $\mathbb{A}^{[0,\infty)}$. Endowing the $\mathbb{C}$-points of this space with the ind-topology, we note that it is contractible.
We shall see below that, unsurprisingly, $\mathbb{A}^{1}$ is the only smooth curve for which this is the case. It is instructive to consider the following $\mathbb{A}^{1}$-family of loops into $\mathbb{A}^{1}$. Define $\gamma: \mathbb{A}^{1}\rightarrow\mathbb{A}^{1}(\mathcal{D}^{*})$ as follows: by the functor of points definition it corresponds to a map, $\mathcal{D}^{*}_{\mathbb{A}^{1}}\rightarrow\mathbb{A}^{1}$, i.e. to a function on the affine scheme $\mathcal{D}^{*}_{\mathbb{A}^{1}}$, and so to an element of the algebra $\mathbb{C}[t]((z))$. We take this element to be $z+tz^{-1}$. The fibre at $t=0$ is holomorphic but the other fibres have poles. We will see below that $\mathbb{A}^{1}$ is the only smooth curve for which this is possible. Indeed for every other smooth curve, $X$, the complex points of the arc space are \emph{open} in the complex points of the loop space.\item We consider now the case of $\mathbb{G}_{m}$. According to work of Contou-Carr\`{e}re, [2], if $A$ is a $\mathbb{C}$-algebra, then an invertible element of $A((z))$ admits a unique expression of the form, $$\alpha(z)=\alpha_{0}z^{\nu(\alpha)}\prod_{0<i\ll\infty}(1-\alpha_{-i}z^{-i})\prod_{0<j}(1-\alpha_{j}z^{j}),$$ where $\nu(\alpha)$ is the order, $\alpha_{0}$ is a unit of $A$ and all the $\alpha_{-i}$ are nilpotents. This implies an isomorphism, $$\mathbb{G}_{m}(\mathcal{D}^{*})\cong\mathbb{Z}\times\mathcal{D}^{\infty}\times\mathbb{G}_{m}\times\mathbb{A}^{\infty}.$$ In particular we see that $\pi_{0}(\mathbb{G}_{m}(\mathcal{D}^{*}_{\mathbb{C}}))=\mathbb{Z}$, corresponding to pole/zero order. We will find it useful, with a view to generalising this result, to decompose $\mathbb{Z}$ as $(\infty\times\mathbb{Z}_{<0})\sqcup\{0\}\sqcup (0\times\mathbb{Z}_{>0})$. Recall that we have a covering action of the monoid $\mathbb{N}^{\times}$ on $\mathbb{G}_{m}(\mathcal{D}^{*}_{\mathbb{C}})$. This descends to an action on $\pi_{0}$.
It is easily seen that it acts trivially on the summand, $\{0\}$, of $\pi_{0}$ and via multiplication on the summands $(\infty\times\mathbb{Z}_{<0})$ and $(0\times\mathbb{Z}_{>0})$. Note that in this case the answer agrees with the topological analogue. \\ \\ Taking the quotient by this action, and denoting it $\pi_{0}^{\mathbb{N}^{\times}}$, we see that this has size $3$: one component for each puncture and one for the holomorphic loops. It is in this form that the computation will generalise. \item Already in higher genera it is no longer the case that the complex points of the algebraic loop space $X(\mathcal{D}^{*})$ are homotopy equivalent to their topological analogue, cf. the mathoverflow comment of B. Bhatt, [1]. \end{itemize}\end{example} We introduce some notation before stating our main result: \begin{definition} If $X$ is an affine smooth curve with proper model $\overline{X}$, then we let $\partial X$ denote the complement of $X$ inside $\overline{X}.$ \end{definition} \begin{tcolorbox}\begin{theorem} Let $X$ be a smooth complex affine curve, then we compute the number of connected components, up to coverings, of its loop space as $$\#\pi_{0}^{\mathbb{N}^{\times}}X(\mathcal{D}_{\mathbb{C}}^{*})=1+\#\partial X,$$ if $X$ is not $\mathbb{A}^{1}$, and $1$ otherwise. \end{theorem}\end{tcolorbox}\begin{remark} We can re-state our main result more explicitly as follows: let $X$ be a smooth affine curve. If $X=\mathbb{A}^{1}$ then its loop space is connected. Otherwise the set of connected components of its loop space is naturally identified with $(\mathbb{N}^{\times}\times\partial X)\bigsqcup \{X(\mathcal{D})\}$.\end{remark} \section{THE PROOF} We will break the proof into a couple of cases, namely those curves $X$ with $\#\partial X>1$, and those with $\#\partial X=1,$ the second case proving to be the trickier one.
Our strategy will be to find invariants of loops which do not change as we continuously deform the loop, and then to use these to differentiate between components. \begin{definition} If $\gamma:\mathcal{D}^{*}_{\mathbb{C}}\rightarrow X$ is a loop, we denote by $\overline{\gamma}$ the extension to a holomorphic loop, $\overline{\gamma}:\mathcal{D}_{\mathbb{C}}\rightarrow\overline{X}$.\end{definition}\begin{remark} The existence of such an extension is guaranteed by the valuative criterion. We caution the reader that this \emph{does not} correspond to a morphism of spaces, $X(\mathcal{D}^{*})\rightarrow\overline{X}(\mathcal{D})$. This can be seen for example by considering the $\mathbb{A}^{1}$-family of loops, $z+tz^{-1}$, defined above; indeed the extensions to the central element $0\in\mathcal{D}$ vary discontinuously with $t$. \end{remark} The most natural invariant associated to a loop is the pull-back on degree $1$ de Rham cohomology. In fact it makes more sense to consider \emph{continuous de Rham cohomology} of topological $\mathbb{C}$-algebras. We record a simple lemma: \begin{lemma} Let $S$ be a connected affine scheme with $\mathbb{C}$-points $s_{0}$ and $s_{1}$. Let $$\gamma:\mathcal{D}^{*}_{S}\rightarrow X,$$ be an $S$-family of loops. Write $\gamma_{0}$ and $\gamma_{1}$ for the fibres at $s_{0}$ and $s_{1}$. Then the pull-backs, $$\gamma_{i}^{*}:H^{*}_{dR}(X)\rightarrow H^{*}_{dR}(\mathcal{D}^{*}),$$ agree for $i=0,1$, where $H^{*}_{dR}$ denotes continuous de Rham cohomology.\end{lemma} \begin{proof} This is a simple generalisation of the corresponding statement for discrete de Rham cohomology. \end{proof} \begin{remark}We remind the reader that the first continuous de Rham cohomology of $\mathcal{D}^{*}_{\mathbb{C}}$ is one dimensional, generated by $d\log(z)$. Suppose we have a loop, $\gamma$, into $X$, with $\overline{\gamma}(0)=p\in\overline{X}$.
Suppose further that we have a meromorphic $1$-form $\omega$ on $X$, then the pull-back in first continuous de Rham cohomology corresponds to a multiple (given by the order at $0$) of the residue.\end{remark} We show now that if two non-holomorphic loops admit extensions with different central values then they must lie in different components. In order to do this we first record a simple lemma: \begin{lemma} Let $p\neq q$ be two points of a smooth proper curve $X$. Then there exists a meromorphic $1$-form, $\omega$, on $X$, which is holomorphic away from $\{p,q\}$ and which satisfies $res_{p}(\omega)=1$, $res_{q}(\omega)=-1$.\end{lemma}\begin{proof} This is a simple computation with the Riemann-Roch Theorem. Let $\Omega_{X}$ denote the canonical bundle. We apply Riemann-Roch to the bundle $\Omega_{X}(p+q)$. We note $$deg(\Omega_{X}(p+q))=2g(X),$$ and thus we compute $h^{0}(\Omega_{X}(p+q))=g+1 > g = h^{0}(\Omega_{X})$. Taking a non-zero element, $$\omega\in H^{0}(X,\Omega_{X}(p+q))/H^{0}(X,\Omega_{X}),$$ we may assume without loss of generality that it is not holomorphic at $p$. It has a pole of order at most one at $p$ and thus a pole of order exactly one. Rescaling we may assume $res_{p}(\omega)=1$; the only other point with potentially non-zero residue is $q$, whence we deduce $res_{q}(\omega)=-1$ by the Residue Theorem, completing the proof. \end{proof} We deduce the following: \begin{corollary} Let $X$ be an affine smooth curve with $\#\partial X>1$. Let $\gamma_{0}$ and $\gamma_{1}$ be non-holomorphic loops into $X$ and let $\overline{\gamma_{0}}$ and $\overline{\gamma_{1}}$ be their extensions to holomorphic loops into $\overline{X}$. Assume that the values at $0\in\mathcal{D}$ of these extensions differ. Then $[\gamma_{0}]\neq [\gamma_{1}]$ in $\pi_{0}X(\mathcal{D}^{*}_{\mathbb{C}})$. Further, if $\gamma_{2}$ is a holomorphic loop into $X$ then its class in $\pi_{0}$ differs from both of the classes of the $\gamma_{0/1}$.\end{corollary} \begin{proof}
The lemma above furnishes a meromorphic $1$-form $\omega$, so that the three residues $res_{\overline{\gamma}_{i}(0)}(\omega)$ all differ. Recalling that these residues correspond to de Rham pull-backs and then combining the lemmas above, it suffices to show that if two loops, $\gamma_{0}$ and $\gamma_{1}$, are in the same connected component of $X(\mathcal{D}^{*}_{\mathbb{C}})$ then there is a connected affine $S$ and an $S$-family of loops interpolating them.\\ \\ Let the connected component containing them be called $S'$. It is ind-affine and thus we find an affine closed sub-scheme containing the loops $\gamma_{0}$ and $\gamma_{1}$. They must lie in the same component of a suitably large such sub-scheme, which we call $S$. The inclusion $S\hookrightarrow X(\mathcal{D}^{*})$ provides the tautological $S$-family, $\gamma: \mathcal{D}^{*}_{S}\rightarrow X$ and we conclude. \end{proof}\begin{remark} The reader will note that this argument breaks down without the assumption that $\#\partial X>1$. Indeed if $\#\partial X=1$ then any holomorphic form, $\omega$, on $X$ will have trivial residue at the puncture of $\overline{X}$.\end{remark} The next step is to identify when two loops, $\gamma_{0}$ and $\gamma_{1}$, both extending to arcs with central value $p\in\overline{X}$, lie in the same connected component. There is an obvious guess, and it is the correct one. We first note that in a purely local situation, the computation of the connected components of a loop space is easy: \begin{lemma} Let $\mathfrak{X}=\mathcal{D}^{*}(\mathcal{D}^{*})$. Then connected components of $\mathfrak{X}(\mathbb{C})$ are labelled by $\mathbb{Z}_{>0}$, corresponding to the degree of the morphism $\mathcal{D}^{*}\rightarrow\mathcal{D}^{*}$.
\end{lemma}\begin{proof} First note that if $\gamma:\mathcal{D}^{*}\rightarrow\mathcal{D}^{*}$ is a loop of order $n$ then $\gamma^{*}(d\log(z))=nd\log(z)$, so that loops of different degree must lie in different connected components according to the lemma above. Writing a loop of order $n$ in the form $\gamma(z)=z^{n}\gamma_{+}(z)$ gives a homeomorphism with $\mathbb{C}^{\times}\times\mathbb{C}^{\infty}$ and thus we conclude the lemma. With more effort we could describe the space $\mathfrak{X}$ in the manner of Contou-Carr\`{e}re (for $\mathbb{G}_{m}$) but this is not needed for this note. \end{proof}\begin{corollary} It follows that after we quotient by the covering action we have just one component, i.e. we have $\pi_{0}^{\mathbb{N}^{\times}}\mathfrak{X}=*.$\end{corollary}\begin{corollary}If two loops, $\gamma_{0}$ and $\gamma_{1}$, both extend to arcs having the same central value, $p\in\overline{X}$, then they lie in the same component of $X(\mathcal{D}^{*}_{\mathbb{C}})$ iff they have the same order at $p$.\end{corollary}\begin{proof} Write $\mathcal{D}^{*}_{p}\hookrightarrow X$ for the inclusion of the formal punctured neighbourhood of $p$. It suffices to note that our loops $\gamma_{i}$ admit unique factorisations through $\mathcal{D}^{*}_{p}$ and then to use the lemma above. \end{proof}\begin{remark} This suffices to prove the main theorem in the case of curves with at least two punctures once we note that the arc space is also connected. In summary we see that in this case we have one component corresponding to the arcs, and one for each pair consisting of a positive integer and a puncture of the curve $X$. After quotienting by the $\mathbb{N}^{\times}$-action, we arrive at the desired claim. \end{remark} \begin{remark} Our methods above, dealing with the case of curves $X$ with $\#\partial X>1$, were all essentially abelian.
Abelian invariants, say $H^{*}_{et}(-,\mathbb{Z}/l)$, will not distinguish $X$ from $\overline{X}$ in the case of $\#\partial X=1$. Indeed the set of etale $\mathbb{Z}/l$-torsors for $X$ and $\overline{X}$ agree as they have the same \emph{abelianised} etale fundamental groups. This suggests that we attempt to distinguish non-holomorphic loops from holomorphic ones via the induced map on $\pi_{1}^{et}$. Note that we know from above that there are \emph{at most} two components (up to covers); our goal below is to show that there are exactly two as soon as $g(\overline{X})>0$. We remind the reader that the Newton-Puiseux theorem implies that we have $\pi_{1}^{et}(\mathcal{D}^{*})=\widehat{\mathbb{Z}}$. Further the Riemann Existence theorem implies that we have $\pi_{1}^{et}(X)=\widehat{\pi_{1}}(X(\mathbb{C}))$, the profinite completion of the topological fundamental group.\end{remark} We begin with the following simple lemma: \begin{lemma} Let $X$ be a smooth curve with $\#\partial X=1$ and $g(\overline{X})>0$, then there exists an etale cover, $Y\rightarrow X,$ which is not pulled back from an etale cover of $\overline{X}$, i.e. does not extend to an etale cover of $\overline{X}$.\end{lemma}\begin{proof} Let us take standard generators for $\pi_{1}(\overline{X}(\mathbb{C}))$ coming from viewing a genus $g$ real surface as a $4g$-gon with appropriate edge identifications. We will call the generators $\{\alpha_{1},\beta_{1},...,\alpha_{g},\beta_{g}\}$. $\pi_{1}(\overline{X}(\mathbb{C}))$ is generated by these elements subject to the one relation $\prod_{i}[\alpha_{i},\beta_{i}]=1$. $\pi_{1}(X(\mathbb{C}))$ is freely generated by them. \\ \\ It suffices, by Riemann Existence, to find a finite group $G$ such that the natural map, $$Hom(\pi_{1}(\overline{X}(\mathbb{C})),G)\rightarrow Hom(\pi_{1}(X(\mathbb{C})),G),$$ is not surjective. We take $G=S_{3}$.
We define an element of the right hand side by sending $\alpha_{1}\mapsto (12), \beta_{1}\mapsto (123)$ and the rest of the generators to the unit. This maps the element $\prod_{i}[\alpha_{i},\beta_{i}]$ to a non-trivial permutation and thus we are done.\end{proof}\begin{corollary} Let $X$ be as above, and fix any non-holomorphic loop, $\gamma\in X(\mathcal{D}^{*}_{\mathbb{C}})$. Then the induced map of etale fundamental groups, $\widehat{\mathbb{Z}}=\pi_{1}^{et}(\mathcal{D}^{*})\rightarrow\pi_{1}^{et}(X)$, is non-zero. \end{corollary}\begin{proof} Let $\infty\in\overline{X}$ denote the puncture point. We may assume the extension, $\overline{\gamma}:\mathcal{D}\rightarrow\overline{X}$, maps $0$ to $\infty$ with order $1$, since otherwise we simply pre-compose with the map induced on $\pi_{1}^{et}(\mathcal{D}^{*})$ by a degree $n$ cover, which is of course non-zero. \\ \\ To show that the induced map, $\widehat{\mathbb{Z}}=\pi_{1}^{et}(\mathcal{D}^{*})\rightarrow\pi_{1}^{et}(X)$, is non-zero it suffices to show that there is a non-trivial etale cover of $X$ which pulls back to a non-trivial etale cover of $\mathcal{D}^{*}$ under $\gamma$. \\ \\ However, let us note that if $\gamma$ pulls back a cover, $Y\rightarrow X$, to a trivial cover of $\mathcal{D}^{*}$, then the pulled-back cover extends over the puncture point $0\in\mathcal{D}$, whence the original cover extends over the puncture $\infty\in\overline{X}$. We have seen above that there exist covers not extendable in this manner, whence the proof is complete. \end{proof} \begin{corollary} Let $\gamma_{0}$ be a holomorphic and $\gamma_{1}$ be a non-holomorphic element of $X(\mathcal{D}^{*}_{\mathbb{C}})$, for $X$ as above, then the classes of $\gamma_{0}$ and $\gamma_{1}$ differ in $\pi_{0}(X(\mathcal{D}^{*}_{\mathbb{C}})).$ Further, two non-holomorphic loops extending to arcs with the same central value lie in the same component iff they have the same degree.
\end{corollary}\begin{proof} Note that a holomorphic loop induces zero on $\pi_{1}^{et}$; further, we can detect the order from the induced map on $\pi_{1}^{et}$. As such, using arguments similar to Lemma 4.1 above, it suffices to show the following: if $$\gamma:\mathcal{D}^{*}_{S}\rightarrow X,$$ is an $S$-family of loops, with $S$ connected, and with $\gamma_{0}$ and $\gamma_{1}$ the fibres at $s_{0}$ and $s_{1}$, then the maps induced on $\pi_{1}^{et}$ by the $\gamma_{i}$ agree. This follows from the results of SGA; specifically it can be deduced from the results of [SGA 1, Exp. X, Cor 2.4, Thm 3.8, Cor. 3.9 \& Section 2 of Exp. XIII], many thanks to Jason Starr for the detailed reference. \end{proof} Putting this all together we deduce the main theorem, which we re-state below: \begin{tcolorbox}\begin{theorem} Let $X$ be a smooth affine curve. If $X=\mathbb{A}^{1}$ then its loop space is contractible. Otherwise the set of connected components of its loop space is naturally identified with $(\mathbb{N}^{\times}\times\partial X)\sqcup \{X(\mathcal{D})\}.$ In particular, the number of connected components of the loop space of such an $X$, up to coverings of the punctured disc, is $1$ plus the number of points needed to compactify $X$.\end{theorem}\end{tcolorbox}
\section{Introduction} Indoor space is likely to be the main workplace of service robots in the near future. In order to work well in an indoor space, a robot should possess the ability of visual scene understanding. To this end, semantic segmentation of indoor scenes is becoming one of the most popular tasks in computer vision. Over the past few years, fully convolutional network (FCN) type architectures have shown great potential on the semantic segmentation task \cite{long2015fully,noh2015learning,badrinarayanan2015segnet,chen2016deeplab,yu2015multi,lin2017refinenet,yu2017dilated}, and have dominated the semantic segmentation task on many datasets \cite{everingham2010pascal,cordts2016cityscapes,song2015sun}. Some of these FCN-type architectures focus on the indoor environment, and usually utilize depth information as complementary information to RGB to improve the segmentation \cite{long2015fully,couprie2013indoor,gupta2014learning,hazirbas2016fusenet}. In general, FCN architectures can be divided into two categories, i.e., encoder-decoder architectures and dilated convolution architectures. The encoder-decoder architectures \cite{long2015fully,noh2015learning,badrinarayanan2015segnet,lin2017refinenet,hazirbas2016fusenet} have a downsample path to extract semantic information from images and an upsample path to recover a full-resolution semantic segmentation map. By contrast, the dilated convolution architectures \cite{chen2016deeplab,yu2015multi,yu2017dilated} employ dilated convolutions such that the convolutional network expands its receptive field exponentially without downsampling. With few or even no downsampling operations, dilated architectures keep the spatial information in the image throughout the whole network, so such an architecture serves as a discriminative model that classifies every pixel in the image.
Encoder-decoder architectures, on the other hand, lose spatial information in the discriminative encoder, and thus some of the networks apply a skip-architecture to recover the spatial information in the generative decoder path. Even though the dilated convolution architectures have the advantage of keeping the spatial information, they generally have higher memory consumption during training, because the spatial resolution of the activation maps is not downsampled as the network proceeds and these maps need to be stored for gradient computation. The high memory consumption therefore prevents the network from having a deeper structure. This is a disadvantage of the method, since convolutional networks learn richer features as the structure gets deeper, which benefits the inference of semantic information. \begin{figure}[!t] \centering \includegraphics[width=0.85\linewidth]{overall_structure.pdf} \caption{Overall structure of the proposed network.} \label{fig:Overall} \end{figure} In this paper, we propose a novel structure named RedNet that employs the encoder-decoder network structure for indoor RGB-D semantic segmentation. In RedNet, the residual block is used as the building module to avoid the model degradation problem \cite{he2016deep}. This allows the performance of the network to improve as the structure goes deeper. Moreover, we apply a fusion structure to incorporate depth information into the network, and use a skip-architecture to bypass the spatial information from encoder to decoder. Further, inspired by the training scheme in \cite{szegedy2015going}, we propose pyramid supervision, which applies supervised learning over different layers of the decoder for better optimization. The overall structure of RedNet is illustrated in Fig. \ref{fig:Overall}. The remainder of this paper is organized in four sections. In section \ref{sec:relatedwork}, the literature on residual networks and indoor RGB-D semantic segmentation is reviewed.
The architecture of RedNet and the idea of pyramid supervision are stated in detail in section \ref{sec:approach}. In section \ref{sec:experiment}, comparative experiments are conducted to evaluate the effectiveness of the model. Finally, we draw a conclusion of this paper in section \ref{sec:conclusion}. Before ending this section, the main contributions of this paper are listed as the following. \begin{enumerate} \item[1.] A novel residual encoder-decoder architecture (termed RedNet) is proposed for indoor RGB-D semantic segmentation, which applies the residual module as the basic building block in both the encoder path and the decoder path. \item[2.] A pyramid supervision training scheme is proposed to optimize the network, which applies supervised learning over different layers on the upsample path of the model. \item[3.] Two comparison experiments are conducted on the SUN RGB-D benchmark to verify the effectiveness of the proposed RedNet architecture and the pyramid supervision training scheme. \end{enumerate} \section{Related Work} \label{sec:relatedwork} \subsection{Residual Networks} The residual network was first proposed by He et al. in \cite{he2016deep}. In their work, they analyzed the problem of model degradation, which presents as saturation and then degradation of accuracy as the network depth increases. They argued that the degradation problem is an optimization problem: as the depth of the network increases, the network becomes harder to train. It was assumed that the desired mapping of a convnet is composed of an identity mapping and a residual mapping. Therefore, a deep residual learning framework was proposed. Instead of letting a convnet learn the desired mapping, it fits the residual mapping and uses a shortcut connection to merge it with the identity input. With this configuration, the residual network becomes easy to optimize and can enjoy accuracy gains from greatly increased depth. Veit et al.
\cite{veit2016residual} presented a complementary explanation of the increased performance of residual networks, i.e., residual networks avoid the vanishing gradient problem by introducing short paths between input and output. Later, He et al. \cite{he2016identity} analyzed the propagation formulations behind the connection mechanisms of residual networks and proposed a new structure of residual unit. In their work, they extended the depth of a deep residual network to 1001 layers. Zagoruyko et al. \cite{zagoruyko2016wide} investigated the memory consumption of residual networks and proposed a novel residual unit that aims to decrease the depth and increase the width of a deep residual network. The idea of residual learning was later adopted in architectures for the semantic segmentation task. Pohlen et al. \cite{pohlen2017full} proposed a fully convolutional network with residual learning for semantic segmentation in street scenes. The network has an encoder-decoder architecture and applies residual modules on the skip-connection structure with the full-resolution residual units (FRRUs). Quan et al. \cite{quan2016fusionnet} presented an FCN architecture, named FusionNet, for connectomics image segmentation. Instead of using residual blocks on the skip-connection structure, FusionNet applies them on each layer in the encoder and decoder paths along with standard convolution, max-pooling, and transposed convolution \cite{noh2015learning}. Similarly, Drozdzal et al. \cite{drozdzal2016importance} studied the importance of skip-connections in biomedical image segmentation, showing that the ``short skip connections'' in residual modules are more effective than the ``long skip connections'' between encoder and decoder for biomedical image analysis. Yu et al. \cite{yu2017dilated} combined the idea of residual networks and dilated convolution to build dilated residual networks for semantic segmentation.
In their paper, they also studied the gridding artifact introduced by dilated convolution and developed a `degridding' method to remove these artifacts. Dai et al. \cite{dai2016instance} used ResNet-101 as the basic network and applied the Multi-task Network Cascades for instance segmentation. Lin et al. \cite{lin2017refinenet} and Lin et al. \cite{lin2017cascaded} also used the ResNet structure as a feature extractor and employed a multi-path refinement network to exploit information along the down-sampling process for full-resolution semantic segmentation. In 2017, Chaurasia et al. \cite{chaurasia2017linknet} proposed an encoder-decoder architecture (named LinkNet) for efficient semantic segmentation. The LinkNet architecture uses ResNet18 as the encoder and applies the bottleneck unit in the decoder for feature upsampling. Under this efficient configuration, the network achieves state-of-the-art accuracy on several urban street datasets \cite{cordts2016cityscapes,brostow2008segmentation}. Inspired by this work, we propose a straightforward encoder-decoder structure that applies residual units on both the downsample path and the upsample path, and employs pyramid supervision to optimize it. \subsection{Indoor RGB-D Semantic Segmentation} Currently, accurate indoor semantic segmentation is still a challenging problem due to the high similarity of color and structure between objects, and the non-uniform illumination in indoor environments. Therefore, some works started utilizing depth information as complementary information to solve the problem. For instance, Koppula et al. \cite{koppula2011semantic} and Huang et al. \cite{huang2014object} used depth information to build 3D point clouds of full indoor scenes, and applied graphical models to capture features and contextual relations of objects in RGB-D data for semantic labeling. Gupta et al. \cite{gupta2013perceptual} proposed a superpixel-based architecture for RGB-D semantic segmentation in indoor scenes.
Their method applies superpixel region extraction on the RGB image and feature extraction of each superpixel on the RGB-D data, then employs a Random Forest (RF) and a Support Vector Machine (SVM) to classify each superpixel and build a full-resolution semantic map. Later, Gupta et al. \cite{gupta2014learning} improved this segmentation model by introducing the HHA encoding for depth information and using a Convolutional Neural Network (CNN) for feature extraction. In the HHA encoding, depth information is encoded into three channels, i.e., horizontal disparity, height above ground, and angle between gravity \& surface normal. This implies that the HHA encoding emphasizes the geocentric discontinuities in the image. After the release of several indoor RGB-D datasets \cite{silberman2011indoor,janoch2011category,silberman2012indoor,song2015sun}, many studies started employing deep learning architectures for indoor semantic segmentation. Couprie et al. \cite{couprie2013indoor} presented a multi-scale convolutional network for indoor semantic segmentation. The study showed that the recognition of object classes with similar depth appearance and location is improved when incorporating the depth information. Long et al. \cite{long2015fully} applied the FCN structure to indoor semantic segmentation and compared different inputs to the network, including three-channel RGB, stacked four-channel RGB-D, and stacked six-channel RGB-HHA. The research further showed that the RGB-HHA input outperforms all other input forms, while the RGB-D input has accuracy similar to the RGB input. Hazirbas et al. \cite{hazirbas2016fusenet} presented a fusion-based encoder-decoder FCN for indoor RGB-D semantic segmentation. Their work shows that the HHA encoding does not hold more information than the depth itself. In order to fully utilize the depth information, they apply two branches of convolutional networks to process the RGB and depth images respectively and apply feature fusion on different layers.
Based on the same depth fusion structure, our previous work \cite{jiang2017incorporating} proposed a DeepLab-type architecture \cite{chen2016deeplab} that applies depth incorporation on a dilated FCN and builds an RGB-D conditional random field (CRF) as the post-processing step. In this work, we will also apply the depth fusion structure on the downsample part of the network, and apply skip-connections to bypass the fused information to the decoder for full-resolution semantic prediction. \section{Approach} \label{sec:approach} \subsection{RedNet Architecture} \begin{figure}[!t] \centering \includegraphics[width=0.70\linewidth]{RedNet.pdf} \caption{Layer configuration of the proposed RedNet (ResNet-50).} \label{fig:rednet} \end{figure} The architecture of RedNet is presented in Fig. \ref{fig:rednet}. For clear illustration, we use blocks with different colors to indicate different kinds of layers. Notice that each convolution operation in RedNet is followed by a batch normalization layer \cite{ioffe2015batch} before the ReLU function, which is omitted in the figure for simplification. The upper half of the figure up to Layer4/Layer4\_d is the encoder of the network; it has two convolutional branches, i.e., the RGB branch and the Depth branch. The structure of both encoder branches can be adopted from one of the five ResNet architectures proposed in \cite{he2016deep}, in which we remove the last two layers of ResNet, i.e., the global average pooling layer and the fully-connected layer. The RGB branch and the Depth branch in the model have the same network configuration, except that the convolution kernel of Conv1\_d on the Depth branch has only one feature channel, since the Depth input is presented as a one-channel gray image. The encoder starts with two downsample operations, which are a \(7 \times 7\) convolution layer with stride two and a \(3 \times 3\) max-pooling layer with stride two.
This max-pooling is the only pooling layer in the whole architecture; all other downsample and upsample operations in the network are implemented with stride-two convolution and transposed convolution. The following layers in the encoder are residual layers with different numbers of residual units. It is worth pointing out that only Layer1 in the encoder does not have a downsample unit; all other ResLayers have one residual unit that downsamples the feature map and increases the feature channels by a factor of 2. The Depth branch ends at Layer4\_d, and its features are fused into the RGB branch on five layers. Here, element-wise summation is performed as the feature fusion method. The lower half of Fig. \ref{fig:rednet}, starting with the Trans1 layer, is the decoder of the network. Here, except for the Final Conv layer, which is a single \(2 \times 2\) transposed convolution layer, all other layers in the decoder are residual layers. The first four layers, i.e., Trans1, Trans2, Trans3, and Trans4, have one upsample residual unit to upsample the feature map by a factor of 2. Different from the bottleneck building block in the encoder, we employ the standard residual building block \cite{he2016deep} in the decoder, which has two consecutive \(3 \times 3\) convolution layers for residual computation. With regard to the upsample operation, we present an upsample residual unit that is shown in Fig. \ref{fig:encoder_decoder_unit}(c). In Fig. \ref{fig:encoder_decoder_unit}, we compare the downsample units in ResNet-50 and ResNet-34, as well as the upsample unit we propose in the decoder. Here, for Conv\([(k,k),s,*/c]\), \((k,k)\) denotes the spatial size of the convolution kernel. Parameter \(s\) is the stride of the convolution, and \(c\) is the increase or decrease factor of the output feature channels. A red block denotes a convolution that changes the spatial size of the input feature map, i.e., downsample or upsample.
For example, a \mbox{\(Conv[(2, 2), 0.5, /2]\)} in red denotes a transposed convolution with a \(2 \times 2\) kernel that upsamples the width and height of the feature map by a factor of 2 and decreases the number of feature channels by a factor of 2. \begin{figure}[!t] \centering \includegraphics[width=0.95\linewidth]{encoder_decoder_unit.pdf} \caption{Downsample and upsample residual units. (a): a downsample residual unit in the (ResNet-50) encoder. (b): a downsample residual unit in the (ResNet-34) encoder. (c): the upsample residual unit we propose for the decoder.} \label{fig:encoder_decoder_unit} \end{figure} \begin{table}[!b] \footnotesize \caption{Encoder (ResNet-50) and Decoder configuration} \label{tab:maps_configuration} \centering \begin{tabular}{ m{3em} m{0.5em} m{2.5em} m{2.5em} m{2.5em} m{2em} m{3em} m{0.5em} m{2.5em} m{2.5em} m{2.5em} } \toprule \multirow{2}{*}{Block} & &\multicolumn{3}{c}{Encoder} & &\multirow{2}{*}{Block} & &\multicolumn{3}{c}{Decoder} \\ \cmidrule{3-5} \cmidrule{9-11} & &\(m\) &\(n\) &\(l_\text{unit}\) & & & & \(m\) &\(n\) &\(l_\text{unit}\) \\ \midrule Layer4 & &1024 &2048 &3 & &Trans1 & &512 &256 &6\\ Layer3 & &512 &1024 &6 & &Trans2 & &256 &128 &4\\ Layer2 & &256 &512 &4 & &Trans3 & &128 &64 &3\\ Layer1 & &64 &256 &3 & &Trans4 & &64 &64 &3\\ Conv1 & &3 &64 &- & &Trans5 & &64 &64 &3\\ \bottomrule \end{tabular} \end{table} Table \ref{tab:maps_configuration} shows the network configuration when using ResNet-50 as the encoder, where \(m\) denotes the number of input feature channels, \(n\) the number of output feature channels, and \(l_\text{unit}\) the number of residual units in that layer. The upsample ResLayers order their residual units differently from the downsample ResLayers: a downsample layer starts with a downsample residual unit followed by several residual units, whereas an upsample layer starts with several residual units and ends with one upsample residual unit.
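As a sanity check on the notation above, the output spatial size of a transposed convolution follows the standard formula \(o = (i-1)s - 2p + k + op\); a minimal Python sketch (the padding values below are illustrative assumptions, not taken from the paper):

```python
def transposed_conv_out_size(in_size, kernel, stride, padding=0, output_padding=0):
    """Per-dimension spatial output size of a 2-D transposed convolution."""
    return (in_size - 1) * stride - 2 * padding + kernel + output_padding

# A Conv[(2, 2), 0.5, /2] block in the paper's notation: 2x2 kernel,
# stride-2 transposed convolution -> doubles height and width.
print(transposed_conv_out_size(30, kernel=2, stride=2))   # 30x40 side output scale -> 60
print(transposed_conv_out_size(240, kernel=2, stride=2))  # 240x320 -> 480
```

With a \(2 \times 2\) kernel and stride 2 the formula gives exactly twice the input size, which matches the factor-of-2 upsampling described above.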
As shown in the table, the outputs of the residual layers in the ResNet-50 encoder have large channel sizes, since ResNet-50 uses channel expansion. Therefore, we employ the \textit{Agent} layers shown in Fig. \ref{fig:rednet}, which are single \(1 \times 1\) convolutional layers with stride one. They are designed to project the feature maps to a lower channel size, allowing the decoder to have a lower memory consumption. Note that the agent layers only exist when ResNet-50 is employed; they are removed when the encoder employs the ResNet-34 structure, because ResNet-34 does not have channel expansion in its residual units. In addition, we also remove the skip-connection between the output of Conv1 and the output of Trans4 in the ResNet-34 encoder setting for better performance. \subsection{Pyramid Supervision} The pyramid supervision training scheme alleviates the gradient vanishing problem by introducing supervised learning over five different layers. As shown in Fig. \ref{fig:rednet}, in addition to the final output, the algorithm computes four intermediate outputs from the feature maps of the four upsample ResLayers; these intermediate outputs are called side outputs. Each side output score map is computed using a convolution layer with \(1 \times 1\) kernel size and stride one. Therefore, all outputs have different spatial resolutions. The final \textit{Output} of RedNet is a full-resolution score map, while the side outputs \textit{Out4}, \textit{Out3}, \textit{Out2}, and \textit{Out1} are downsampled. For instance, \textit{Out1} has 1/16 the height and width of the \textit{Output}. The four side outputs and the final output are then fed into a softmax layer and a cross entropy function to build the loss function. \begin{equation} \label{eq:each_loss} Loss(\mathbf{s}, \mathbf{g}) = \frac{1}{N} \sum_{i} -\log\left(\frac{\exp(s_{i}[g_{i}])}{\sum_{k} \exp(s_{i}[k])}\right) \end{equation} More concretely, the loss function of each output has the same form, shown in Eq. \ref{eq:each_loss}.
Here, \(g_i\) denotes the class index of the groundtruth semantic map at location \(i\), \(s_i \in \mathbb{R}^{N_c}\) denotes the score vector of the network output at location \(i\), with \(N_c\) being the number of classes in the dataset, and \(N\) denotes the spatial resolution of the specific output. When dealing with the loss functions of \(Out1\) to \(Out4\), the groundtruth map \(\mathbf{g}\) is downsampled using nearest-neighbor interpolation. The overall loss is thus the summation of the five cross entropy losses over the five outputs. Notice that, instead of assigning an equal weight to pixels in different outputs, this overall loss configuration assigns more weight to pixels of the downsampled outputs, e.g., \textit{Out1}. In practice, we find that this configuration provides better performance than the equally-weighted loss configuration. \section{Experiment} \label{sec:experiment} In this section, we evaluate the RedNet architectures with ResNet-34 and ResNet-50 as the encoder using the SUN RGB-D indoor scene understanding benchmark suite \cite{song2015sun}. The SUN RGB-D dataset is currently the largest RGB-D indoor scene semantic segmentation dataset. It has 10,335 densely annotated RGB-D images taken from 20 different scenes, at a similar scale to the PASCAL VOC RGB dataset \cite{everingham2015pascal}. It also includes all image data from the NYU Depth v2 dataset \cite{silberman2012indoor}, and selected image data from the Berkeley B3DO \cite{janoch2011category} and SUN3D \cite{xiao2013sun3d} datasets. To improve the quality of the depth maps, the dataset authors propose an algorithm that estimates the 3D structure of the scene from multiple frames to denoise the depth and fill in missing values. Each pixel in the RGB-D images is assigned a semantic label in one of the 37 classes or the `unknown' class.
In the experimental evaluation, we use the default trainval-test split of the dataset, which has 5285 training/validation instances and 5050 testing instances, to evaluate our proposed RedNet architecture. \textbf{Training}~ Images in the SUN RGB-D dataset were captured by four different kinds of sensors with different resolutions and fields of view. In the training step, we resize all RGB images, Depth images, and Groundtruth semantic maps to a spatial resolution of \(480 \times 640\); additionally, the Groundtruth maps are further resized into four downsampled maps with resolutions from \(240 \times 320\) to \(30 \times 40\) for pyramid supervision of the side outputs. Here, the RGB images are resized with bilinear interpolation, while the Depth images and Groundtruth maps are resized with nearest-neighbor interpolation. During training, the input and Groundtruth data are augmented by applying random scaling and cropping, and the input RGB images are further augmented by applying random hue, brightness, and saturation adjustments. In addition, we calculate the mean and standard deviation of the RGB and Depth images over the whole dataset to normalize each input value. The two networks in the experiment, i.e., RedNet (ResNet-34) and RedNet (ResNet-50), share the same training strategy and have identical values for all hyperparameters. We use the PyTorch deep learning framework \cite{paszke2017automatic} for the implementation and training of the architecture\footnote{Our source code will be available at \url{https://github.com/JindongJiang/RedNet}}. The encoder of the network is pretrained on the ImageNet object classification dataset \cite{krizhevsky2012imagenet}, while the parameters of the other layers are initialized by the Xavier initializer \cite{glorot2010understanding}.
Due to the imbalance of pixels of each class in the dataset, we reweight the training loss of each class in the cross-entropy function using the median frequency setting proposed in \cite{eigen2015predicting}. That is, we weight each pixel by a factor of \(\alpha_{c} = median\_prob/prob(c)\), where \(c\) is the groundtruth class of the pixel, \(prob(c)\) is the pixel probability of that class, and \(median\_prob\) is the median of the probabilities over all classes. The network is trained with momentum SGD as the optimization algorithm. The initial learning rate of all layers is set to 0.002 and decays by a factor of 0.8 every 100 epochs. The momentum of the optimizer is set to 0.9, and a weight decay of 0.0004 is applied for regularization. The network is trained on an NVIDIA GeForce GTX 1080 GPU with a batch size of 5, and we stop the training when the loss no longer decreases. \textbf{Evaluation}~ The network is evaluated on the default testing set of the SUN RGB-D dataset. Three criteria for segmentation tasks are used to measure the performance of the network on the 5050 testing instances, i.e., the pixel accuracy, the mean accuracy, and the intersection-over-union (IoU) score.
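All three criteria can be computed from a class confusion matrix; the following is a small sketch of the standard definitions as we understand them, not the authors' evaluation code:

```python
def segmentation_metrics(conf):
    """conf[i][j]: number of pixels with ground-truth class i predicted as class j.

    Returns (pixel accuracy, mean per-class accuracy, mean IoU).
    """
    n = len(conf)
    total = sum(sum(row) for row in conf)
    tp = [conf[i][i] for i in range(n)]                           # correctly labeled pixels
    gt = [sum(conf[i]) for i in range(n)]                         # ground-truth pixels per class
    pred = [sum(conf[i][j] for i in range(n)) for j in range(n)]  # predicted pixels per class
    pixel_acc = sum(tp) / total
    mean_acc = sum(tp[i] / gt[i] for i in range(n)) / n
    mean_iou = sum(tp[i] / (gt[i] + pred[i] - tp[i]) for i in range(n)) / n
    return pixel_acc, mean_acc, mean_iou

# Toy 2-class confusion matrix.
print(segmentation_metrics([[8, 2], [1, 9]]))
```

Mean IoU divides each class's intersection by its union (ground-truth pixels plus predicted pixels minus the intersection), which is why it is the strictest of the three scores.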
\begin{table}[t] \small \caption{Comparison of SUN RGB-D testing results} \label{tab:testing_result} \centering \begin{tabular}{ l r r r } \toprule Model &~Pixel &~~Mean &~~mIoU \\ \midrule FCN-32s \cite{long2015fully} &68.4 &41.1 &29.0 \\ SegNet \cite{badrinarayanan2015segnet} &71.2 &45.9 &30.7 \\ Context-CRF \cite{lin2016exploring} &78.4 &53.4 &42.3 \\ RefineNet-152 \cite{lin2017refinenet} &80.6 &58.5 &45.9 \\ CFN (RefineNet-152) \cite{lin2017cascaded} &- &- &48.1 \\ FuseNet-SF5 \cite{hazirbas2016fusenet} &76.3 &48.3 &37.3 \\ DFCN-DCRF \cite{jiang2017incorporating} &76.6 &50.6 &39.3 \\ \midrule RedNet(ResNet-34) &80.8 &58.3 &46.8 \\ RedNet(ResNet-50) &\textbf{81.3} &\textbf{60.3} &\textbf{47.8} \\ \bottomrule \end{tabular} \end{table} Table \ref{tab:testing_result} shows the comparison of RedNet and other state-of-the-art methods on the SUN RGB-D testing set. As we can see in the table, the proposed RedNet(ResNet-34) and RedNet(ResNet-50) architectures outperform most of the existing methods. Here, the FuseNet-SF5 \cite{hazirbas2016fusenet} and DFCN-DCRF \cite{jiang2017incorporating} networks use the same depth fusion technique as RedNet for depth incorporation. The RefineNet-152 \cite{lin2017refinenet} and CFN (RefineNet-152) \cite{lin2017cascaded} architectures use the same residual networks as RedNet for feature extraction. Notice that these two architectures both use the ResNet-152 structure for feature extraction, while RedNet reaches 47.8\% mIoU using only ResNet-50 as the encoder. It is also worth noting that the RedNet(ResNet-34) and RedNet(ResNet-50) networks share the same decoder structure, and the comparison shows that the deeper encoder of RedNet(ResNet-50) provides better performance.
\begin{table}[t] \small \caption{SUN RGB-D testing results on pyramid supervision} \label{tab:pyramid_result} \centering \begin{tabular}{ l r r r } \toprule Model &~Pixel &~~Mean &~~mIoU \\ \midrule RedNet(ResNet-34) without pyramid &80.3 &55.5 &45.0 \\ RedNet(ResNet-34) &80.8 &58.3 &46.8 \\ RedNet(ResNet-50) without pyramid &80.5 &57.4 &46.0 \\ RedNet(ResNet-50) &81.3 &60.3 &47.8 \\ \bottomrule \end{tabular} \end{table} In addition, to show that the pyramid supervision training scheme effectively improves the performance of the network, an experiment is conducted to compare the performance of the proposed RedNet architectures trained with and without pyramid supervision. The results are shown in Table \ref{tab:pyramid_result}. They show that pyramid supervision improves the performance of the network on all three criteria. Notice that the ResNet-34 encoder RedNet trained with pyramid supervision outperforms the ResNet-50 encoder RedNet trained without it, which demonstrates the effectiveness of pyramid supervision. Testing predictions of the side outputs and the final output are shown in Fig. \ref{fig:outputs}. \begin{figure}[!t] \centering \includegraphics[width=0.7\textwidth]{predicts.pdf} \caption{Prediction of side outputs and final output} \label{fig:outputs} \end{figure} \section{Conclusion} \label{sec:conclusion} In this work, we propose an RGB-D encoder-decoder residual network named RedNet for indoor RGB-D semantic segmentation. RedNet combines the short skip-connections in residual units and the long skip-connections between encoder and decoder for accurate semantic inference. It also applies a fusion structure in the encoder to incorporate the depth information. Moreover, we present the pyramid supervision training scheme, which applies supervised learning over several layers of the decoder to improve the performance of the encoder-decoder network.
The comparative experiments show that the proposed RedNet architecture with pyramid supervision achieves state-of-the-art results on the SUN RGB-D dataset. \bibliographystyle{splncs04}
\section{Experiments} \begin{table}[t] \begin{center} \begin{tabular}{cccc} \toprule[1.5pt] KWDLC & \# snt & \# of dep & \# of zero \\ \midrule Train & 11,558 & 9,227 & 8,216 \\ Dev. & 1,585 & 1,253 & 821 \\ Test & 2,195 & 1,780 & 1,669 \\ \bottomrule[1.5pt] \end{tabular} \\ \caption{KWDLC data statistics.} \vspace{-1em} \label{table:stat} \end{center} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{cccc} \toprule[1.5pt] KWDLC & NOM & ACC & DAT \\ \midrule \# of dep & 7,224 & 1,555 & 448 \\ \# of zero & 6,453 & 515 & 1,248 \\ \bottomrule[1.5pt] \end{tabular} \\ \caption{KWDLC training data statistics for each case. } \label{table:case} \vspace{-1em} \end{center} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{lcc} \toprule[1.5pt] & Case & Zero \\ \midrule Ouchi+ 2015 & 76.5 & 42.1 \\ Shibata+ 2016 & 89.3 & 53.4 \\ \midrule Gen & 91.5 & 56.2 \\ Gen+Adv & \textbf{92.0}$^\ddagger$ & \textbf{58.4}$^\ddagger$ \\ \bottomrule[1.5pt] \end{tabular} \caption{ The results of case analysis (Case) and zero anaphora resolution (Zero). We use F-measure as an evaluation measure. $\ddagger$ denotes that the improvement is statistically significant at $p<0.05$, compared with Gen using paired t-test. } \vspace{-1.5em} \label{table:resultall} \end{center} \end{table} \begin{table*}[t] \begin{center} \begin{tabular}{lcccccc} \toprule[1.5pt] & \multicolumn{3}{c}{Case analysis} & \multicolumn{3}{c}{Zero anaphora resolution} \\ \cmidrule(r{4pt}){2-4} \cmidrule(l){5-7} Model & NOM & ACC & DAT & NOM & ACC & DAT \\ \midrule Ouchi+ 2015 & 87.4 & 40.2 & 27.6 & 48.8 & 0.0 & 10.7 \\ Shibata+ 2016 & 94.1 & 75.6 & 30.0 & 57.7 & 17.3 & 37.8 \\ Gen & \textbf{95.3} & 83.6 & 39.7 & 60.7 & 30.4 & 41.2 \\ Gen+Adv & \textbf{95.3} & \textbf{85.4} & \textbf{51.5} & \textbf{62.3} & \textbf{31.1} & \textbf{44.6} \\ \bottomrule[1.5pt] \end{tabular} \caption{ The detailed results of case analysis and zero anaphora resolution for the NOM, ACC and DAT cases. 
Our models outperform the existing models in all cases. All values are evaluated with F-measure. } \vspace{-1em} \label{table:result} \end{center} \end{table*} \begin{table}[t] \begin{center} \begin{tabular}{lcc} \toprule[1.5pt] & Case & Zero \\ \midrule Gen & 91.5 & 56.2 \\ Gen+Aug & 91.2 & 57.0 \\ \midrule Gen+Adv & \textbf{92.0}$^\ddagger$ & \textbf{58.4}$^\ddagger$ \\ \bottomrule[1.5pt] \end{tabular} \caption{ The comparisons of Gen+Adv with Gen and the data augmentation model (Gen+Aug). $\ddagger$ denotes that the improvement is statistically significant at $p<0.05$, compared with Gen+Aug. } \vspace{-0.5em} \label{table:resultaug} \end{center} \end{table} \subsection{Experimental Settings} Following \newcite{shibata2016}, we use the KWDLC (Kyoto University Web Document Leads Corpus) \cite{hangyo2012} for our experiments.\footnote{ The KWDLC corpus is available at \url{http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KWDLC}} This corpus contains various Web documents, such as news articles, personal blogs, and commerce sites. In KWDLC, the first three sentences of each document are annotated with PAS structures including zero pronouns. As a raw corpus, we use a Japanese web corpus created by \newcite{hangyo2012}, which shares no sentences with KWDLC. This raw corpus is automatically parsed by the Japanese dependency parser KNP. We focus on intra-sentential anaphora resolution, and therefore apply a preprocessing step to KWDLC: we regard anaphors whose antecedents are in the preceding sentences as {\usefont{T1}{pcr}{m}{n} NULL}, in the same way as \newcite{ouchi2015,shibata2016}. Tables \ref{table:stat} and \ref{table:case} list the statistics of KWDLC. We use the exophora entities, i.e., an author and a reader, following the annotations in KWDLC.
We also assign author/reader labels to the following expressions in the same way as \newcite{hangyo2013,shibata2016}: \begin{description} \item[author] {\small ``\begin{CJK*}{UTF8}{ipxm}私\end{CJK*}''} (I), {\small ``\begin{CJK*}{UTF8}{ipxm}僕\end{CJK*}''} (I), {\small ``\begin{CJK*}{UTF8}{ipxm}我々\end{CJK*}''} (we), {\small ``\begin{CJK*}{UTF8}{ipxm}弊社\end{CJK*}''} (our company) \item[reader] {\small ``\begin{CJK*}{UTF8}{ipxm}あなた\end{CJK*}''} (you), {\small ``\begin{CJK*}{UTF8}{ipxm}君\end{CJK*}''} (you), {\small ``\begin{CJK*}{UTF8}{ipxm}客\end{CJK*}''} (customer), {\small ``\begin{CJK*}{UTF8}{ipxm}皆様\end{CJK*}''} (you all) \end{description} Following \newcite{ouchi2015} and \newcite{shibata2016}, we conduct two kinds of analysis: (1) case analysis and (2) zero anaphora resolution. Case analysis is the task of determining the correct case labels when predicates and their arguments have direct dependencies but their case markers are hidden by surface markers, such as topic markers. Zero anaphora resolution is the task of finding case arguments that do not have direct dependencies to their predicates in the sentence. Following \newcite{shibata2016}, we exclude predicates for which the same argument fills multiple cases. This is relatively uncommon, and only 1.5\% of the whole corpus is excluded. Predicates are marked in the gold dependency parses. Candidate arguments are all tokens other than predicates. This setting is also the same as \newcite{shibata2016}. All performances are evaluated with micro-averaged F-measure \cite{shibata2016}. \subsection{Experimental Results} We compare two models: the supervised generator model (Gen) and the proposed semi-supervised model with adversarial training (Gen+Adv). We also compare our models with two previous models: \newcite{ouchi2015} and \newcite{shibata2016}, whose performance on the KWDLC corpus is reported. Table \ref{table:resultall} lists the experimental results.
Our models (Gen and Gen+Adv) outperformed the previous models. Furthermore, the proposed model with adversarial training (Gen+Adv) was significantly better than the supervised model (Gen). \subsection{Comparison with Data Augmentation Model} We also compare our GAN-based approach with data augmentation techniques. A data augmentation approach is used in \newcite{tingliu2017}: they automatically process raw corpora and drop words according to some rules. However, it is difficult to directly apply their approach to Japanese PAS analysis because Japanese zero pronouns depend on dependency trees. If we drop arguments of predicates in sentences, this causes missing nodes in the dependency trees; if we instead prune branches of the dependency trees, this causes a data bias problem. Therefore we use the existing training corpus and word embeddings for data augmentation. First, we randomly choose an argument word $w$ in the training corpus and then swap it with another word $w'$ with probability $p(w,w')$. We choose the top-20 nearest words to the original word $w$ in the pre-trained word embeddings as candidates for swapped words. The probability is defined as $ p(w,w') \propto [v(w)^\top v(w')]^r$, where $r=10$; it is normalized over the top-20 nearest words. We then merge this pseudo data with the original training corpus and train the model in the same way as the Gen model. We conducted several experiments and found that the model trained with the same amount of pseudo data as the training corpus achieved the best result. Table \ref{table:resultaug} shows the results of the data augmentation model and the GAN-based model. Our Gen+Adv model performs better than the data augmentation model. Note that the data augmentation model does not use raw corpora directly.
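The swap distribution $p(w,w') \propto [v(w)^\top v(w')]^r$ over the top-20 neighbors can be sketched as follows (the similarity values below are toy inputs, not from any real embedding):

```python
def swap_probs(sims, r=10):
    """Normalized swap probabilities over the nearest neighbors of a word w.

    sims: dot products v(w)^T v(w') for each candidate neighbor w'.
    """
    scores = [s ** r for s in sims]  # exponent r = 10 sharpens the distribution
    z = sum(scores)
    return [s / z for s in scores]

# The most similar neighbor dominates after sharpening.
print(swap_probs([1.0, 0.8]))
```

Because of the large exponent, near-duplicates of $w$ receive almost all of the swap mass, so the pseudo data mostly substitutes semantically close words.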
\begin{figure*}[t] \centering \begin{subfigure}[b]{0.4\textwidth} \hspace{-4em} \includegraphics[scale=0.70]{graph/disc.pdf} \end{subfigure} % \begin{subfigure}[b]{0.4\textwidth} \includegraphics[scale=0.70]{graph/gen.pdf} \end{subfigure} \caption{ Left: validator scores on the development set during adversarial training epochs. Right: generator scores for Zero on the development set during adversarial training epochs. } \label{fig:disc} \end{figure*} \subsection{Discussion} \subsubsection{Result Analysis} We report the detailed performance for each case in Table \ref{table:result}. Among the three cases, zero anaphora resolution of the ACC and DAT cases is notoriously difficult. This is attributed to the fact that the ACC and DAT cases are much rarer than the NOM case in the corpus, as shown in Table \ref{table:case}. However, we can see that our proposed model, Gen+Adv, performs much better than the previous models, especially for the ACC and DAT cases. Although the number of training instances of ACC and DAT is much smaller than that of NOM, our semi-supervised model can learn PAS for all three cases using a raw corpus. This indicates that our model can work well in resource-poor cases. We analyzed the results of Gen+Adv by comparing them with Gen and the model of \newcite{shibata2016}. Here, we focus on the ACC and DAT cases because their improvements are notable. \begin{itemize} \item {\small ``\begin{CJK*}{UTF8}{ipxm}パック\underline{は}~~洗って、~~分別して~~リサイクルに~~出さなきゃいけないので~~手間がかかる。\end{CJK*}''} \\ It is bothersome to wash, classify and recycle spent packs. \end{itemize} In this sentence, the predicates {\small ``\begin{CJK*}{UTF8}{ipxm}洗って\end{CJK*}''} (wash), {\small ``\begin{CJK*}{UTF8}{ipxm}分別して\end{CJK*}''} (classify), and {\small ``\begin{CJK*}{UTF8}{ipxm}(リサイクルに)~出す\end{CJK*}''} (recycle) take the same ACC argument, {\small ``\begin{CJK*}{UTF8}{ipxm}パック\end{CJK*}''} (pack).
This is not easy for Japanese PAS analysis because the actual ACC case marker {\small ``\begin{CJK*}{UTF8}{ipxm}\underline{を}\end{CJK*}''} (wo) of {\small ``\begin{CJK*}{UTF8}{ipxm}パック\end{CJK*}''} (pack) is hidden by the topic marker {\small ``\begin{CJK*}{UTF8}{ipxm}\underline{は}\end{CJK*}''} (wa). The Gen+Adv model detects the correct argument, while the model of \newcite{shibata2016} fails. In the Gen+Adv model, each predicate gives a high probability to {\small ``\begin{CJK*}{UTF8}{ipxm}パック\end{CJK*}''} (pack) as an ACC argument and finally chooses it. We found many examples similar to this one and speculate that our model captures a kind of selectional preference. The next example is an error in the DAT case by the Gen+Adv model. \begin{itemize} \item {\small``\begin{CJK*}{UTF8}{ipxm}各専門分野も~~~お任せ下さい。\end{CJK*}''} \\ please leave every professional field (to $\phi$) \end{itemize} The gold label of this DAT case (to $\phi$) is NULL because the argument is not written in the sentence. However, the Gen+Adv model judged the DAT argument to be ``author''. Although we cannot identify $\phi$ as ``author'' from this sentence alone, ``author'' is a possible argument depending on the context. \subsubsection{Validator Analysis} We also evaluate the performance of the validator during adversarial training with raw corpora. Figure \ref{fig:disc} shows the validator performance and the generator performance on Zero on the development set. The validator score is evaluated on the outputs of the generator. We notice that the NOM case and the other two cases have different curves in both graphs. This can be explained by the special nature of the NOM case: it has many more author/reader expressions than the other cases, and the prediction of author/reader expressions depends not only on the selectional preferences of predicates and arguments but on the sentence as a whole.
Therefore the validator, which relies only on predicate and argument representations, cannot predict author/reader expressions well. In the ACC and DAT cases, the scores of the generator and the validator increase in the first epochs. This suggests that the validator learns the weaknesses of the generator and vice versa. However, in later epochs, the scores of the generator increase with fluctuation, while the scores of the validator saturate. This suggests that the generator gradually becomes stronger than the validator. \section{Introduction} In pro-drop languages, such as Japanese and Chinese, pronouns are frequently omitted when they are inferable from their contexts and background knowledge. The natural language processing (NLP) task of detecting such omitted pronouns and searching for their antecedents is called zero anaphora resolution. This task is essential for downstream NLP tasks, such as information extraction and summarization. For Japanese, zero anaphora resolution is usually conducted within predicate-argument structure (PAS) analysis as the task of finding an omitted argument of a predicate. PAS analysis is the task of finding an argument for each case of a predicate. For Japanese PAS analysis, the \textit{ga} (nominative, NOM), \textit{wo} (accusative, ACC) and \textit{ni} (dative, DAT) cases are generally handled. To develop models for Japanese PAS analysis, supervised learning methods using annotated corpora have been applied on the basis of morpho-syntactic clues. However, omitted pronouns have few such clues, and thus these models must learn relations between a predicate and its (omitted) argument from the annotated corpora. The annotated corpora consist of several tens of thousands of sentences, and it is difficult to learn predicate-argument relations or selectional preferences from such small-scale corpora. The state-of-the-art models for Japanese PAS analysis achieve an accuracy of around 50\% for zero pronouns \cite{ouchi2015,shibata2016,iida2016,ouchi2017,matsubayashi2017}. A promising way to solve this data scarcity problem is to enhance models with a large amount of raw text. There are two major approaches to using raw corpora: extracting knowledge from raw corpora beforehand \cite{sasano2011,shibata2016} and using raw corpora for data augmentation \cite{tingliu2017}. In traditional studies on Japanese PAS analysis, selectional preferences are extracted from raw corpora beforehand and used in PAS analysis models. For example, \newcite{sasano2011} propose a supervised model for Japanese PAS analysis based on case frames, which are automatically acquired from a raw corpus by clustering predicate-argument structures. However, case frames are not based on distributed representations of words and suffer from data sparseness even if a large raw corpus is employed.
Some recent approaches to Japanese PAS analysis combine neural network models with knowledge extraction from raw corpora. \newcite{shibata2016} extract selectional preferences with an unsupervised method that is similar to negative sampling \cite{mikolov2013}. They then use the pre-extracted selectional preferences as one of the features of their PAS analysis model. The PAS analysis model is trained with a supervised method, and the selectional preference representations are fixed during training. Using pre-trained external knowledge in the form of word embeddings has also become ubiquitous. However, such external knowledge is overwritten during task-specific training. The other approach to using raw corpora for PAS analysis is data augmentation. \newcite{tingliu2017} generate pseudo training data from a raw corpus and use them for their zero pronoun resolution model. They generate the pseudo training data by dropping certain words or pronouns in a raw corpus and treating them as correct antecedents. After generating the pseudo training data, they rely on ordinary supervised training based on neural networks. In this paper, we propose a neural semi-supervised model for Japanese PAS analysis. We adopt neural adversarial training to directly exploit the advantages of using a raw corpus. Our model consists of two neural networks: a generator model for Japanese PAS analysis and a so-called ``validator'' model for the generator's predictions. The generator neural network predicts probabilities of candidate arguments of each predicate using RNN-based features and a head-selection model \cite{zhang-cheng-lapata2017}. The validator neural network takes inputs from the generator and scores them. This validator can score the generator's predictions even when PAS gold labels are not available. We apply supervised learning to the generator and unsupervised learning to the entire network using a raw corpus.
Our contributions are summarized as follows: (1) a novel adversarial training model for PAS analysis; (2) learning from a raw corpus as a source of external knowledge; and (3) as a result, state-of-the-art performance on Japanese PAS analysis. \section{Conclusion} We proposed a novel Japanese PAS analysis model that exploits semi-supervised adversarial training. The generator neural network learns Japanese PAS and selectional preferences, while the validator is trained against the generator's errors. This validator enables the generator to be trained on raw corpora and enhanced with external knowledge. In the future, we will apply this semi-supervised training method to other NLP tasks.
\section*{Acknowledgment} This work was supported by JST CREST Grant Number JPMJCR1301, Japan and JST ACT-I Grant Number JPMJPR17U8, Japan. \subsection{Generative Adversarial Networks} Generative adversarial networks were originally proposed for image generation tasks \cite{goodfellow2014,salimans2016,springenberg2015}. The original model of \newcite{goodfellow2014} consists of a generator $G$ and a discriminator $D$. The discriminator $D$ is trained to distinguish the real data distribution $p_{data}(\mathbf{x})$ from images generated from noise samples $\mathbf{z}^{(i)} \in \mathcal{D}_{\mathbf{z}}$ drawn from the noise prior $p(\mathbf{z})$. The discriminator loss is \begin{align} \mathcal{L}_{D}=-\bigl( \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})}[\log D(\mathbf{x})] \nonumber \\ + \mathbb{E}_{\mathbf{z} \sim p_z(\mathbf{z})}[\log(1- D(G(\mathbf{z})))] \bigr)~~, \end{align} and the discriminator is trained by minimizing this loss while fixing the generator $G$. Similarly, the generator $G$ is trained by minimizing \begin{align} \mathcal{L}_{G}=\frac{1}{|\mathcal{D}_{\mathbf{z}}|} \sum_{i} \Bigl[ \log\left(1-D(G(\mathbf{z}^{(i)}))\right) \Bigr]~~, \end{align} while fixing the discriminator $D$. In this way, the discriminator tries to discriminate the generated images from real images, while the generator tries to generate images that can deceive the discriminator. This training scheme has been applied to many generative tasks including sentence generation \cite{subramanian2017}, machine translation \cite{britz-le-pryzant:2017:WMT}, dialog generation \cite{li-EtAl2017}, and text classification \cite{liuqiuhuang2017}. \subsection{Proposed Adversarial Training Using Raw Corpus} Japanese PAS analysis and many other syntactic analyses in NLP are not purely generative, and we can make use of a raw corpus instead of the numerical noise distribution $p(\mathbf{z})$.
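For concreteness, the two standard GAN losses recalled in the previous subsection can be evaluated on toy scalar discriminator outputs; a minimal sketch (estimating the expectations by batch averages is our assumption):

```python
import math

def discriminator_loss(d_real, d_fake):
    """L_D: pushes D(x) toward 1 on real samples and D(G(z)) toward 0 on fakes."""
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1 - p) for p in d_fake) / len(d_fake)
    return -(real_term + fake_term)

def generator_loss(d_fake):
    """L_G: the generator lowers this by making D(G(z)) approach 1."""
    return sum(math.log(1 - p) for p in d_fake) / len(d_fake)

# When D is maximally confused (outputs 0.5 everywhere), L_D = 2 log 2.
print(discriminator_loss([0.5], [0.5]))
```

Note that better fakes (larger $D(G(\mathbf{z}))$) make the generator loss smaller, which is the adversarial pressure on $G$.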
In this work, we use an adversarial training method based on a raw corpus, combined with ordinary supervised learning on an annotated corpus. Let $\mathbf{x}_l \in \mathcal{D}_{l}$ denote labeled data and $p(\mathbf{x}_l)$ their label distribution. We also use unlabeled data $\mathbf{x}_{ul} \in \mathcal{D}_{ul}$ later. Our generator $G$ can be trained with the cross-entropy loss on labeled data: \begin{align} \mathcal{L}_{G/SL}=-\mathbb{E}_{ \mathbf{x}_l, y \sim p(\mathbf{x}_{l})} \bigl[\log G(\mathbf{x}_l) \bigr]~~. \label{eq:cc} \end{align} Supervised training of the generator minimizes this loss. Note that we follow the notation of \newcite{subramanian2017} in this subsection. In addition, we train a so-called \textit{validator} against the generator's errors. We use the term ``validator'' instead of ``discriminator'' for our adversarial training: unlike a discriminator, which separates generated images from real images, our validator scores the generator's outputs. Assume that $\mathbf{y}_l$ are the true labels and $G(\mathbf{x}_l)$ is the predicted label distribution for data $\mathbf{x}_l$ from the generator. We define the labels of the generator errors as: \begin{align} q(G(\mathbf{x}_l),\mathbf{y}_l)=\delta_{\argmax[G(\mathbf{x}_l)],~\mathbf{y}_l}~~, \label{eq:error} \end{align} where $\delta_{i,j}=1$ only if $i=j$, otherwise $\delta_{i,j}=0$. That is, $q$ equals 1 if the argument that the generator predicts is correct, and 0 otherwise. We use this generator error as the training label for the validator. The inputs of the validator are both the generator outputs $G(\mathbf{x})$ and the data $\mathbf{x} \in \mathcal{D}$, so the validator can be written as $V(G(\mathbf{x}))$. 
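The error label $q$ in Equation (\ref{eq:error}) reduces to a simple argmax comparison. A minimal sketch (function and variable names are ours, not from any released implementation):

```python
def error_label(pred_probs, gold_index):
    """q(G(x), y): 1 iff the generator's argmax prediction equals the
    gold argument index, else 0; used as the validator's training label."""
    argmax_index = max(range(len(pred_probs)), key=pred_probs.__getitem__)
    return 1 if argmax_index == gold_index else 0

print(error_label([0.1, 0.7, 0.2], gold_index=1))  # correct prediction -> 1
print(error_label([0.1, 0.7, 0.2], gold_index=0))  # wrong prediction   -> 0
```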
The validator $V$ is trained with labeled data $\mathbf{x}_l$ by \begin{align} \mathcal{L}_{V/SL}=-\mathbb{E}_{\mathbf{x}_l, y \sim q(G(\mathbf{x}_l),\mathbf{y}_l)} \bigl[\log V(G(\mathbf{x}_l))\bigr]~~, \label{eqn:disc} \end{align} while fixing the generator $G$. That is, the validator is trained with the generator-error labels $q(G(\mathbf{x}_l),\mathbf{y}_l)$. Once the validator is trained, we train the generator in an unsupervised manner. The generator $G$ is trained with unlabeled data $\mathbf{x}_{ul} \in \mathcal{D}_{ul}$ by minimizing the loss \begin{align} \mathcal{L}_{G/UL}=- \frac{1}{|\mathcal{D}_{ul}|}\sum_{i} \bigl[\log V(G(\mathbf{x}_{ul}^{(i)}))\bigr]~~, \label{eqn:raw} \end{align} while fixing the validator $V$. This generator training loss can be explained as follows. The generator tries to push the validator scores toward 1 while the validator is fixed. If the validator is well trained, it returns scores close to 1 for correct PAS labels output by the generator, and close to 0 for wrong labels. Therefore, in Equation (\ref{eqn:raw}), the generator tries to predict correct labels in order to increase the scores of the fixed validator. Note that the validator applies a sigmoid function to its outputs, so its scores lie in $\left[0,1\right]$. We first conduct supervised training of the generator network with Equation (\ref{eq:cc}). After this, following \newcite{goodfellow2014}, we use $k$ steps of validator training per step of unsupervised generator training. We also alternately conduct $l$ steps of supervised generator training. The entire loss function of this adversarial training is \begin{align} \mathcal{L}=\mathcal{L}_{G/SL}+\mathcal{L}_{V/SL}+\mathcal{L}_{G/UL}~~. \end{align} Our contribution is that we propose the validator and train it against the generator errors, instead of discriminating generated data from real data. 
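The alternating schedule just described can be sketched as follows. The step functions are placeholders that only record which loss would be optimized; in the real model each step updates the corresponding network's weights:

```python
# Hedged sketch of the alternating adversarial schedule; step functions
# are stand-ins that log which loss is optimized at each step.
log = []

def step_G_supervised():
    log.append("L_G/SL")   # cross-entropy loss on annotated data
def step_V_supervised():
    log.append("L_V/SL")   # validator trained on generator-error labels
def step_G_unsupervised():
    log.append("L_G/UL")   # generator trained against the fixed validator

k, l = 2, 3            # illustrative values; the paper uses k = 16, l = 4
for _ in range(2):     # adversarial rounds
    step_G_unsupervised()
    for _ in range(k):
        step_V_supervised()
    for _ in range(l):
        step_G_supervised()
```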
\newcite{salimans2016} explore semi-supervised learning with adversarial training for $K$-class image classification tasks. They add a new class for images produced by the generator and classify them. \newcite{miyato2016} propose virtual adversarial training for semi-supervised learning. They exploit unlabeled data for continuous smoothing of data distributions based on the adversarial perturbation of \newcite{goodfellow2015}. These studies, however, do not use a counterpart neural network for learning the structure of unlabeled data. In our Japanese PAS analysis model, the generator corresponds to the head selection-based neural network for Japanese anaphora resolution. Figure \ref{fig_overall} shows the entire model. The labeled data correspond to the annotated corpora and the labels correspond to the PAS argument labels. The unlabeled data correspond to raw corpora. We explain the details of the generator and the validator neural networks in Sections \ref{sec:gen} and \ref{sec:val} in turn. \subsection{Generator of PAS Analysis} \label{sec:gen} \begin{figure}[t] \hspace{-1.0em} \includegraphics[scale=0.9,clip]{fig/gen3.pdf} \vspace{-2.0em} \caption{ The generator of PAS analysis. The sentence encoder is a three-layer bi-LSTM that computes the distributed representations of a predicate and its arguments: $h_{\mathrm{pred}_i}$ and $h_{\mathrm{arg}_i}$. The argument selection model consists of two-layer feedforward neural networks that compute the scores, $s_{\mathrm{arg}_i,\mathrm{pred}_j}^{\mathrm{case}_k}$, of candidate arguments for each case of a predicate. } \label{fig:gen1} \end{figure} The generator predicts the probabilities of arguments for each of the NOM, ACC and DAT cases of a predicate. As shown in Figure \ref{fig:gen1}, the generator consists of a sentence encoder and an argument selection model. 
In the sentence encoder, we use a three-layer bidirectional LSTM (bi-LSTM) to read the whole sentence and extract both global and local features as distributed representations. The argument selection model consists of a two-layer feedforward neural network (FNN) and a softmax function. The inputs to the sentence encoder are given as a sequence of embeddings $v(x)$, each of which consists of the embeddings of a word $x$, its inflection form, POS and detailed POS. These are concatenated and fed into the bi-LSTM layers. The bi-LSTM layers read these embeddings in forward and backward order and output the distributed representations of a predicate and a candidate argument: $h_{\mathrm{pred}_j}$ and $h_{\mathrm{arg}_i}$. Note that we also use the exophora entities, i.e., an author and a reader, as argument candidates, and we therefore use specific embeddings for them. These embeddings are not generated by the bi-LSTM layers but are directly used in the argument selection model. We also use path embeddings to capture the dependency relation between a predicate and its candidate argument, as in \newcite{roth2016}. Although \newcite{roth2016} use a one-way LSTM layer to represent the dependency path from a predicate to its potential argument, we use a bi-LSTM layer for this purpose. We feed the embeddings of words and POS tags to the bi-LSTM layer. In this way, the resulting path embedding represents both predicate-to-argument and argument-to-predicate paths. We concatenate the bidirectional path embeddings to generate $h_{\mathrm{path}_{ij}}$, which represents the dependency relation between the predicate $j$ and its candidate argument $i$. For the argument selection model, we apply the model of \cite{zhang-cheng-lapata2017} to evaluate the relation between a predicate and its potential argument for each argument case. 
In the argument selection model, a single FNN is repeatedly used to calculate scores for a child word and its head candidate word, and then a softmax function calculates normalized probabilities of the candidate heads. We use three different FNNs that correspond to the NOM, ACC and DAT cases. These three FNNs take the same inputs: the distributed representations of the $j$-th predicate $h_{\mathrm{pred}_j}$ and the $i$-th candidate argument $h_{\mathrm{arg}_i}$, and the path embedding $h_{\mathrm{path}_{ij}}$ between the predicate $j$ and candidate argument $i$. The FNNs for NOM, ACC and DAT compute the argument scores $s_{\mathrm{arg}_i,\mathrm{pred}_j}^{\mathrm{case}_k}$, where $\mathrm{case}_k \in \{\mathrm{NOM}, \mathrm{ACC}, \mathrm{DAT}\}$. Finally, the softmax function computes the probability $p({\scriptstyle\mathrm{arg}_i}|{\scriptstyle\mathrm{pred}_j,\mathrm{case}_k})$ of candidate argument $i$ for case $k$ of the $j$-th predicate as: \begin{align} p({\scriptstyle\mathrm{arg}_i}|{\scriptstyle\mathrm{pred}_j,\mathrm{case}_k}) = \frac{\exp \left(s_{\mathrm{arg}_i,\mathrm{pred}_j}^{\mathrm{case}_k} \right)} {\displaystyle \sum_{\mathrm{arg}_i} \exp \left(s_{\mathrm{arg}_i,\mathrm{pred}_j}^{\mathrm{case}_k} \right)} . \label{eqn:gen} \end{align} Our argument selection model is similar to the neural network structure of \newcite{matsubayashi2017}. However, \newcite{matsubayashi2017} do not use RNNs to read the whole sentence. Their model is also designed to choose a case label for a pair of a predicate and its argument candidate. In other words, their model can assign the same case label to multiple arguments by itself, while ours cannot. Since case arguments are almost unique for each case of a predicate in Japanese, \newcite{matsubayashi2017} select the argument that has the highest probability for each case, even though the probabilities of case arguments are not normalized over argument candidates. The model of \newcite{ouchi2017} has the same problem. 
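Equation (\ref{eqn:gen}) normalizes scores over candidate arguments independently for each case. A self-contained sketch with hypothetical raw scores from the three case-specific FNNs:

```python
import math

def softmax(scores):
    m = max(scores)                        # max-shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical raw scores s_{arg_i, pred_j}^{case_k} for 4 candidate
# arguments of one predicate (illustrative values, not model outputs).
scores_per_case = {
    "NOM": [2.0, 0.5, -1.0, 0.1],
    "ACC": [-0.3, 1.2, 0.4, 0.0],
    "DAT": [0.0, 0.0, 3.0, -2.0],
}

# Normalize over candidate arguments independently for each case;
# the probabilities for each case sum to 1.
probs_per_case = {case: softmax(s) for case, s in scores_per_case.items()}
predicted = {case: p.index(max(p)) for case, p in probs_per_case.items()}
```

Note that, unlike a case-label classifier, this normalization enforces a single probability distribution over candidates per case, so at most one argument per case is strongly preferred.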
\subsection{Validator} \label{sec:val} We exploit a validator to train the generator with a raw corpus. It consists of a two-layer FNN to which the embeddings of a predicate and its arguments are fed. For predicate $j$, the input of the FNN is the representation of the predicate $h_{\mathrm{pred}_j}^{\prime}$ and the three argument representations $\left\{h_{\mathrm{pred}_j}^{\prime~\mathrm{NOM}}, h_{\mathrm{pred}_j}^{\prime~\mathrm{ACC}}, h_{\mathrm{pred}_j}^{\prime~\mathrm{DAT}}\right\}$ inferred by the generator. The two-layer FNN outputs three values, and then three sigmoid functions compute scalar scores in the range $\left[0,1\right]$ for the NOM, ACC and DAT cases: $\left\{s_{\mathrm{pred}_j}^{\prime~\mathrm{NOM}}, s_{\mathrm{pred}_j}^{\prime~\mathrm{ACC}}, s_{\mathrm{pred}_j}^{\prime~\mathrm{DAT}}\right\}$. These scores are the outputs of the validator $V(x)$. We apply dropout of 0.5 to the FNN input and hidden layers. The generator and validator networks are coupled by an attention mechanism, i.e., a weighted sum of the validator embeddings. As shown in Equation (\ref{eqn:gen}), we compute a probability distribution over candidate arguments. We use the weighted sum of the embeddings $v'(x)$ of candidate arguments to compute the input representations of the validator: \begin{align} h_{\mathrm{pred}_j}^{\prime~\mathrm{case}_k} &= E_{\mathbf{x} \sim p(\mathrm{arg}_i)}[v'(\mathbf{x})] \nonumber \\ &= \sum_{{\scriptstyle\mathrm{arg}_i}} p({\scriptstyle\mathrm{arg}_i}|{\scriptstyle\mathrm{pred}_j,\mathrm{case}_k})v'({\scriptstyle\mathrm{arg}_i}) . \nonumber \end{align} This summation is taken over the candidate arguments in the sentence and the exophora entities. Note that the validator embeddings $v'(x)$ are different from the generator embeddings $v(x)$, in order to separate the computation graphs of the generator and the validator neural networks except at the joint part. 
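The coupling above is simply the expectation of validator-side embeddings under the generator's softmax distribution. A toy sketch (the probabilities and embeddings are made up for illustration):

```python
# Hedged sketch: the validator input for one case of one predicate is the
# expectation of the separate validator-side embeddings v' under the
# generator's distribution p(arg_i | pred_j, case_k).
def attention_input(probs, validator_embeddings):
    dim = len(validator_embeddings[0])
    h = [0.0] * dim
    for p, v in zip(probs, validator_embeddings):
        for i in range(dim):
            h[i] += p * v[i]
    return h

probs = [0.7, 0.2, 0.1]                         # p(arg_i | pred_j, case_k)
v_prime = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # toy embeddings v'(arg_i)
h_case = attention_input(probs, v_prime)        # ~ [0.8, 0.3]
```

Because the mixing weights are softmax probabilities rather than a hard argmax, this expectation is differentiable, which is what lets gradients flow from the validator back into the generator.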
We use this weighted sum over the softmax outputs instead of the argmax function, which allows backpropagation through this joint. We also feed the embedding of the predicate to the validator: \begin{align} h_{\mathrm{pred}_j}^{\prime} &= v'({\scriptstyle\mathrm{pred}_j}). \end{align} Note that the validator is a simple neural network compared with the generator: its inputs are limited to the predicates and arguments, with no inputs from other words in the sentence. This allows the generator to overwhelm the validator during the adversarial training. \subsection{Implementation Details} \label{sec:imple} The neural networks are trained using backpropagation, which is propagated down to the word and POS tag embeddings. We use Adam \cite{kingma2015adam} as the gradient learning rule for the initial training of the generator network. For adversarial learning, we use Adagrad \cite{duchi2010adagrad} because of its training stability. We use word embeddings pre-trained with word2vec \cite{mikolov2013} on 100M sentences from a Japanese web corpus. Other embeddings and hidden weights of the neural networks are randomly initialized. \begin{table}[tb] \begin{center} \footnotesize\begin{tabular}{lc} \toprule[1.5pt] Type & Value \\ \midrule Size of hidden layers of FNNs & 1,000 \\ Size of Bi-LSTMs & 256 \\ Dim. of word embedding & 100 \\ Dim. of POS, detailed POS, inflection form tags & 10,~10,~9 \\ Minibatch size for the generator and validator & 16,~1 \\ \bottomrule[1.5pt] \end{tabular} \caption{ Parameters for neural network structure and training. } \label{table:params} \end{center} \end{table} For adversarial training, we first train the generator for two epochs with the supervised method, and then train the validator for another epoch while fixing the generator. This is because starting validator training before the generator training makes the validator results worse. 
After this, we alternate the unsupervised training of the generator ($L_{G/UL}$), $k$ steps of supervised training of the validator ($L_{V/SL}$) and $l$ steps of supervised training of the generator ($L_{G/SL}$). We set $N(L_{G/UL})/N(L_{G/SL})=1/4$ and $N(L_{V/SL})/N(L_{G/SL})=1/4$, where $N(\cdot)$ denotes the number of sentences used for training. We also use minibatches of 16 sentences for both supervised and unsupervised training of the generator, while we do not use minibatches for validator training. Therefore, we use $k=16$ and $l=4$. Other parameters are summarized in Table \ref{table:params}. \section{Model} \input{model_sec1.tex} \input{model_sec2.tex} \input{model_sec3.tex} \input{model_sec4.tex} \input{model_sec5.tex} \section{Related Work} \label{sec:related} \newcite{shibata2016} proposed a neural network-based PAS analysis model using local and global features, based on the non-neural model of \newcite{ouchi2015}. They achieved state-of-the-art results on case analysis and zero anaphora resolution using the KWDLC corpus, and they use an external resource to extract selectional preferences. Since our model also uses an external resource, we compare it with the models of \newcite{shibata2016} and \newcite{ouchi2015}. \newcite{ouchi2017} proposed a semantic role labeling-based PAS analysis model using Grid-RNNs. \newcite{matsubayashi2017} proposed a case label selection model with feature-based neural networks. They conducted their experiments on the NAIST Text Corpus (NTC) \cite{iida2007,iida2016}. NTC consists of newspaper articles and does not include annotations of the author/reader expressions that are common in Japanese natural sentences. \begin{comment} These previous studies conducted experiments on NAIST Text Corpus (NTC) \cite{iida2007}. There are several major differences between NTC and KWDLC. (1) The granularity of words and tags are different. 
This causes several problems when someone tries to train a model and export it to the other dataset. (2) NTC consists of newspaper articles, while KWDLC covers wide varieties of documents collected from the web. Therefore, KWDLC might be suitable for general purposes. (3) NTC does not include author/reader expressions, while KWDLC includes the annotations of frequent drops of these. Actually the drops of author/reader expressions are ubiquitous in Japanese natural sentences. NTC does not include these because sentences in newspaper articles tend to obey specific formats.\footnote{ In many cases, using many pronouns of authors and readers sound strange or even rude in Japanese. However, in some domains including newspaper articles and legal documents, sentences are written with specific forms and pronouns are rarely dropped. } (4) NTC contains more annotation errors than the creators of the corpus expected as described in \cite{iida2016}. Therefore, the scores of NTC are not comparable with those of KWDLC. \end{comment} \section{Task Description} \begin{figure*}[t] \includegraphics[scale=0.9,clip]{fig/overall1.pdf} \vspace{-2.5em} \caption{ The overall model of adversarial training with a raw corpus, consisting of the PAS generator $G(x)$ and the validator $V(x)$. The validator takes inputs from the generator in the form of an attention mechanism. The validator itself is a simple feed-forward network with inputs of the $j$-th predicate and its argument representations: $\{h_{\mathrm{pred}_j}^{\prime}, h_{\mathrm{pred}_j}^{\prime \mathrm{case}_k}\}$. The validator returns scores for the three cases, which are used for both the supervised training of the validator and the unsupervised training of the generator. The supervised training of the generator is not included in this figure. } \label{fig_overall} \end{figure*} Japanese PAS analysis determines the essential case roles of words for each predicate: \textit{who} did \textit{what} to \textit{whom}. 
In many languages, such as English, case roles are mainly determined by word order; in Japanese, however, word order is highly flexible. The major case roles in Japanese are the nominative case (NOM), the accusative case (ACC) and the dative case (DAT), which roughly correspond to the Japanese surface case markers {\small \begin{CJK*}{UTF8}{ipxm}\underline{が}\end{CJK*}(ga)}, {\small \begin{CJK*}{UTF8}{ipxm}\underline{を}\end{CJK*}(wo)}, and {\small \begin{CJK*}{UTF8}{ipxm}\underline{に}\end{CJK*}(ni)}. These case markers are often hidden by topic markers, and case arguments are also often omitted. We explain two subtasks of PAS analysis: case analysis and zero anaphora resolution. In Table \ref{table:examples}, we show four example Japanese sentences and their PAS labels. PAS labels are attached to the nominative, accusative and dative cases of each predicate. Sentence (1) has surface case markers that correspond to the argument cases. Sentence (2) is an example sentence for case analysis, the task of finding hidden case markers of arguments that have direct dependencies on their predicates. Sentence (2) does not have the nominative case marker {\small \begin{CJK*}{UTF8}{ipxm}\underline{が}\end{CJK*}(ga)}; it is hidden by the topic marker {\small \begin{CJK*}{UTF8}{ipxm}\underline{は}\end{CJK*}(wa)}. Therefore, a case analysis model has to find the correct NOM case argument {\small \begin{CJK*}{UTF8}{ipxm}列車\end{CJK*}(train)}. Sentence (3) is an example sentence for zero anaphora resolution, the task of finding arguments that have no direct dependencies on their predicates. For the second predicate {\small``\begin{CJK*}{UTF8}{ipxm}巻き込まれた\end{CJK*}''(was involved)}, the correct nominative argument is {\small``\begin{CJK*}{UTF8}{ipxm}タクシー\end{CJK*}''(taxi)}, although it has no direct dependency on the second predicate. 
A zero anaphora resolution model has to find {\small``\begin{CJK*}{UTF8}{ipxm}タクシー\end{CJK*}''(taxi)} in the sentence and assign it to the NOM case of the second predicate. In the zero anaphora resolution task, some correct arguments are not specified in the article. This is called \textit{exophora}. We consider ``author'' and ``reader'' arguments as exophora \cite{hangyo2013}; they are frequently dropped from Japanese natural sentences. Sentence (4) is an example of a dropped nominative argument. In this sentence, the nominative argument is {\small``\begin{CJK*}{UTF8}{ipxm}あなた\end{CJK*}''} (you), but it does not appear in the sentence. This case is also included in zero anaphora resolution. Except for these special exophora arguments, we focus on intra-sentential anaphora resolution in the same way as \cite{shibata2016,iida2016,ouchi2017,matsubayashi2017}. We also attach {\usefont{T1}{pcr}{m}{n} NULL} labels to the cases that a predicate does not have.
\section{Introduction} \label{sec:1} Let $k$ be a perfect field of characteristic $p > 2$, and let $W(k)$ be its ring of Witt vectors. Let $K$ be a finite totally ramified extension of $W(k)[\frac{1}{p}]$, and denote by $\mathcal{O}_K$ its ring of integers. Let $R_0$ be an unramified relative base ring over $W(k)\langle X_1^{\pm 1}, \ldots, X_d^{\pm 1}\rangle$, the $p$-adic completion of $W(k)[X_1^{\pm 1}, \ldots, X_d^{\pm 1}]$, and let $R = R_0\otimes_{W(k)}\mathcal{O}_K$ (cf. Section \ref{sec:2.1}). Examples of such $R$ include $\mathcal{O}_K\langle X_1^{\pm 1}, \ldots, X_d^{\pm 1}\rangle$ and the formal power series ring $\mathcal{O}_K[\![Y_1, \ldots, Y_d]\!]$ with $Y_i = X_i-1$. Brinon developed $p$-adic Hodge theory in the relative case in \cite{brinon-relative}, which was studied further by Scholze in \cite{scholze-p-adic-hodge} and by Kedlaya-Liu in \cite{kedlaya-liu-relative-padichodge}. Let $\overline{R}$ denote the union of the finite $R$-subalgebras $R'$ of a fixed separable closure of $\mathrm{Frac}(R)$ such that $R'[\frac{1}{p}]$ is \'{e}tale over $R[\frac{1}{p}]$. Then $\mathrm{Spec}\,\overline{R}[\frac{1}{p}]$ is a pro-universal covering of $\mathrm{Spec}\,R[\frac{1}{p}]$, and $\overline{R}$ is the integral closure of $R$ in $\overline{R}[\frac{1}{p}]$. Let $\mathcal{G}_R \coloneqq \mathrm{Gal}(\overline{R}[\frac{1}{p}]/R[\frac{1}{p}]) = \pi_1^{\text{\'{e}t}}(\mathrm{Spec}\,R[\frac{1}{p}])$. In \cite{brinon-relative}, the relative crystalline period ring $B_{\mathrm{cris}}(R)$ is constructed, and the notions of \emph{crystalline} representations of $\mathcal{G}_R$ and \emph{filtered} $(\varphi, \nabla)$-\emph{modules over} $R_0[\frac{1}{p}]$ are defined, generalizing those over a $p$-adic field. In \emph{loc. cit.}, \emph{punctually weakly admissible} modules and \emph{weakly admissible} modules are also defined to generalize weakly admissible modules over a $p$-adic field. 
A fundamental open question concerning these objects is the following: \begin{ques} \label{ques:1.1} Which filtered $(\varphi, \nabla)$-modules over $R_0[\frac{1}{p}]$ arise from crystalline representations of $\mathcal{G}_R$? \end{ques} A filtered $(\varphi, \nabla)$-module over $R_0[\frac{1}{p}]$ is said to be \textit{admissible} if it arises from a crystalline representation of $\mathcal{G}_R$. When the base is a $p$-adic field $K$, it is proved in \cite{colmez-fontaine} that a filtered $\varphi$-module over $K$ with zero monodromy is admissible if and only if it is weakly admissible. Another interesting question in relative $p$-adic Hodge theory concerns representations arising from $p$-divisible groups. For a $p$-divisible group $G_R$ over $\mathrm{Spec}\,R$, let $T_p(G_R) \coloneqq \mathrm{Hom}_{\overline{R}}(\mathbf{Q}_p/\mathbf{Z}_p, ~G_R\times_R \overline{R})$ be the associated Tate module. Then by \cite[Corollary 5.4.2]{kim-groupscheme-relative}, $(T_p(G_R)\otimes_{\mathbf{Z}_p}\mathbf{Q}_p)^{\vee}$ is a crystalline $\mathcal{G}_R$-representation whose Hodge-Tate weights lie in $[0, 1]$ (in this paper, we use the covariant version of the functor $D_{\mathrm{dR}}(\cdot)$ to define Hodge-Tate weights, which differs from the convention using the contravariant one). This raises the following natural question. \begin{ques} \label{ques:1.2} Which crystalline representations of $\mathcal{G}_R$ whose Hodge-Tate weights lie in $[0, 1]$ arise from $p$-divisible groups over $\mathrm{Spec}\,R$? \end{ques} Kim showed in \cite[Theorem 3.5]{kim-groupscheme-relative} that the category of $p$-divisible groups over $R$ is anti-equivalent to the category of relative Breuil modules, which characterize the linear algebraic structure of the corresponding weakly admissible modules. Hence, for crystalline representations with Hodge-Tate weights in $[0, 1]$, Question \ref{ques:1.2} is closely related to Question \ref{ques:1.1}. 
When the base is a $p$-adic field $K$, Kisin proved in \cite[Corollary 2.2.6]{kisin-crystalline} that every crystalline $\mathrm{Gal}(\overline{K}/K)$-representation whose Hodge-Tate weights lie in $[0, 1]$ arises from a $p$-divisible group over $\mathcal{O}_K$. In this paper, our objects of study center around Questions \ref{ques:1.1} and \ref{ques:1.2} in some special cases. We define the category of $B$-pairs in the relative case and study its relations with $\mathbf{Q}_p$-representations and weakly admissible modules. As an application, when $R = \mathcal{O}_K[\![Y]\!]$ with $k = \overline{k}$, we compute the $B$-pairs corresponding to certain weakly admissible modules and show the following theorem. \begin{thm} \label{thm:1.3} Let $R = \mathcal{O}_K[\![Y]\!]$ and suppose $k$ is algebraically closed. Let $V$ be a horizontal crystalline $\mathcal{G}_R$-representation of rank $2$ over $\mathbf{Q}_p$ with Hodge-Tate weights in $[0, 1]$ such that its associated isocrystal is reducible. Then there exists a $p$-divisible group $G_R$ over $R$ such that $(T_p(G_R)[\frac{1}{p}])^{\vee} \cong V$ as $\mathcal{G}_R$-representations. Furthermore, there exists a $B$-pair which arises from a weakly admissible $R_0[\frac{1}{p}]$-module but does not arise from a $\mathbf{Q}_p$-representation. \end{thm} In particular, the last statement of Theorem \ref{thm:1.3} shows that the relative case differs from the case of a $p$-adic field base, where every semi-stable $B$-pair of slope $0$ arises from a $\mathbf{Q}_p$-representation. It also answers in the negative the question raised in \cite{brinon-relative} of whether weak admissibility implies admissibility in the relative case. We remark that it is proved in \cite{liu-moon-rel-cryst}, using a completely different method, that when $R$ has Krull dimension $2$ with ramification index $e < p-1$, every crystalline representation with Hodge-Tate weights in $[0, 1]$ comes from a $p$-divisible group over $R$. 
However, the argument in \cite{liu-moon-rel-cryst} relies crucially on the assumption that the ramification is small, whereas the result in Theorem \ref{thm:1.3} holds for any ramification. \section*{Acknowledgement} I would like to express my sincere gratitude to Tong Liu for many helpful discussions and suggestions on this topic. \section{$p$-adic Hodge Theory in the relative case} \label{sec:2} \subsection{Crystalline and de Rham period rings} \label{sec:2.1} We follow the notation as in the Introduction. We first recall the constructions and results of relative $p$-adic Hodge theory developed in \cite{brinon-relative}, using the same terminology, such as \emph{punctually weakly admissible modules} and \emph{weakly admissible modules}. Denote by $W(k)\langle X_1^{\pm 1}, \ldots, X_d^{\pm 1}\rangle$ the $p$-adic completion of the polynomial ring $W(k)[X_1^{\pm 1}, \ldots, X_d^{\pm 1}]$. Let $R_0$ be a ring obtained from $W(k)\langle X_1^{\pm 1}, \ldots, X_d^{\pm 1}\rangle$ by a finite number of iterations of the following operations: \begin{itemize} \item $p$-adic completion of an \'{e}tale extension; \item $p$-adic completion of a localization; \item completion with respect to an ideal containing $p$. \end{itemize} \noindent We further assume that either $W(k)\langle X_1^{\pm 1}, \ldots, X_d^{\pm 1}\rangle \rightarrow R_0$ has geometrically regular fibers or $R_0$ has Krull dimension less than $2$, and that $k \rightarrow R_0/pR_0$ is geometrically integral and $R_0$ is an integral domain. The quotient $R_0/pR_0$ has a finite $p$-basis given by $X_1, \ldots, X_d$. The Witt vector Frobenius on $W(k)$ extends (not necessarily uniquely) to $R_0$, and we fix such a Frobenius endomorphism $\varphi: R_0 \rightarrow R_0$. Let $\widehat{\Omega}_{R_0} = \varprojlim_{n} \Omega_{(R_0/p^n)/W(k)}$ be the module of $p$-adically continuous K\"{a}hler differentials. Then $\widehat{\Omega}_{R_0} \cong \bigoplus_{i=1}^d R_0 \cdot d{X_i}$ by \cite[Proposition 2.0.2]{brinon-relative}. 
If $\nabla: R_0[\frac{1}{p}] \rightarrow R_0[\frac{1}{p}]\otimes_{R_0} \widehat{\Omega}_{R_0}$ is the universal continuous derivation, then $(R_0[\frac{1}{p}])^{\nabla = 0} = W(k)[\frac{1}{p}]$. We work over the base ring $R$ given by $R \coloneqq R_0\otimes_{W(k)}\mathcal{O}_K$. The relative de Rham period ring and the crystalline period ring are constructed as follows. Let $\displaystyle \overline{R}^{\flat} = \varprojlim_{\varphi} \overline{R}/p\overline{R}$. There exists a natural $W(k)$-linear surjective map $\theta: W(\overline{R}^{\flat}) \rightarrow \widehat{\overline{R}}$ which lifts the projection onto the first factor. Here, $\widehat{\overline{R}}$ denotes the $p$-adic completion of $\overline{R}$. Define $B_{\mathrm{dR}}^{\nabla +}(R) \coloneqq \varprojlim_{n} W(\overline{R}^{\flat})[\frac{1}{p}]/(\mathrm{ker}(\theta))^n$. Choose compatibly $\epsilon_n \in \overline{R}$ such that $\epsilon_0 =1, ~\epsilon_n = \epsilon_{n+1}^p$ with $\epsilon_1 \neq 1$, and let $\widetilde{\epsilon} = (\epsilon_n)_{n \geq 0} \in \overline{R}^{\flat}$. Then $t \coloneqq \log{[\widetilde{\epsilon}]} \in B_{\mathrm{dR}}^{\nabla +}(R)$ and $B_{\mathrm{dR}}^{\nabla +}(R)$ is $t$-torsion free. The horizontal de Rham period ring is defined to be $\displaystyle B_{\mathrm{dR}}^{\nabla}(R) = B_{\mathrm{dR}}^{\nabla +}(R)[\frac{1}{t}]$, equipped with the filtration $\mathrm{Fil}^j B_{\mathrm{dR}}^{\nabla}(R) = t^j B_{\mathrm{dR}}^{\nabla +}(R)$ for $j \in \mathbf{Z}$. The $\mathcal{G}_R$-action on $W(\overline{R}^{\flat})$ extends uniquely to $B_{\mathrm{dR}}^{\nabla}(R)$. Let $\theta_R: R\otimes_{W(k)}W(\overline{R}^{\flat}) \rightarrow \widehat{\overline{R}}$ be the $R$-linear extension of $\theta$, and denote by $A_{\mathrm{inf}}(\widehat{\overline{R}}/R)$ the completion of $R\otimes_{W(k)}W(\overline{R}^{\flat})$ for the topology given by the ideal $\theta_R^{-1}(p\widehat{\overline{R}})$. 
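Concretely, the element $t$ introduced above is given by the usual logarithm series (a standard fact, recalled here for the reader's convenience):
\[
t = \log [\widetilde{\epsilon}] = \sum_{n \geq 1} \frac{(-1)^{n-1}}{n}\bigl([\widetilde{\epsilon}]-1\bigr)^n,
\]
which converges in $B_{\mathrm{dR}}^{\nabla +}(R)$ for its inverse-limit topology since $\theta([\widetilde{\epsilon}]) = \epsilon_0 = 1$, so that $[\widetilde{\epsilon}]-1 \in \mathrm{ker}(\theta)$.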
Let $B_{\mathrm{dR}}^+(R) = \varprojlim_n A_{\mathrm{inf}}(\widehat{\overline{R}}/R)[\frac{1}{p}]/(\mathrm{ker}(\theta_R))^n$. Define the de Rham period ring to be $B_{\mathrm{dR}}(R) = B_{\mathrm{dR}}^+(R)[\frac{1}{t}]$. For $j \geq 0$, we let $\mathrm{Fil}^j B_{\mathrm{dR}}^+(R) = (\mathrm{ker}(\theta_R))^j$ and $\displaystyle\mathrm{Fil}^0 B_{\mathrm{dR}}(R) = \sum_{n = 0}^{\infty} \frac{1}{t^n}\mathrm{Fil}^n B_{\mathrm{dR}}^+(R)$. For $j \in \mathbf{Z}$, let $\mathrm{Fil}^j B_{\mathrm{dR}}(R) = t^j \mathrm{Fil}^0 B_{\mathrm{dR}}(R)$. $B_{\mathrm{dR}}(R)$ is equipped with the connection $\nabla: B_{\mathrm{dR}}(R) \rightarrow B_{\mathrm{dR}}(R)\otimes_{R_0} \widehat{\Omega}_{R_0}$ which is $W(\overline{R}^{\flat})$-linear and extends the universal continuous derivation of $R$. The connection $\nabla$ satisfies Griffiths transversality. The $\mathcal{G}_R$-action on $R\otimes_{W(k)}W(\overline{R}^{\flat})$ extends uniquely to $B_{\mathrm{dR}}(R)$, and commutes with $\nabla$. We have a natural embedding $B_{\mathrm{dR}}^{\nabla}(R) \hookrightarrow B_{\mathrm{dR}}(R)$ compatible with the filtrations and $\mathcal{G}_R$-actions, and $B_{\mathrm{dR}}^{\nabla}(R) = (B_{\mathrm{dR}}(R))^{\nabla = 0}$. Furthermore, $B_{\mathrm{dR}}(R)^{\mathcal{G}_R} = R[\frac{1}{p}]$ and $(B_{\mathrm{dR}}^{\nabla}(R))^{\mathcal{G}_R} = K$. For $i = 1, \ldots, d$, choose compatibly $X_{i, n} \in \overline{R}$ such that $X_{i, 0} = X_i$ and $X_{i, n} = X_{i, n+1}^p$, and let $\widetilde{X_i} = (X_{i, n})_{n \geq 0} \in \overline{R}^{\flat}$. Let $u_i = X_i\otimes 1-1\otimes [\widetilde{X_i}] \in R\otimes_{W(k)} W(\overline{R}^{\flat})$. The following proposition is proved in \cite{brinon-relative}. \begin{prop} \label{prop:2.1} \emph{(cf. \cite[Proposition 5.1.4, 5.2.2, 5.2.5]{brinon-relative})} The natural embedding \[ B_{\mathrm{dR}}^{\nabla +}(R)[\![u_1, \ldots, u_d]\!] \rightarrow B_{\mathrm{dR}}^{+}(R) \] is an isomorphism. 
Furthermore, $\displaystyle \mathrm{Fil}^0 B_{\mathrm{dR}}(R) = B_{\mathrm{dR}}^+(R)[\frac{u_1}{t}, \ldots, \frac{u_d}{t}]$. \end{prop} For the horizontal crystalline and crystalline period rings, we first construct the integral ones. Let $A_{\mathrm{cris}}^{\nabla}(R)$ be the $p$-adic completion of the divided power envelope of $W(\overline{R}^{\flat})$ with respect to $\mathrm{ker}(\theta)$. The Witt vector Frobenius and $\mathcal{G}_R$-action on $W(\overline{R}^{\flat})$ extend uniquely to $A_{\mathrm{cris}}^{\nabla}(R)$. Let $\theta_{R_0}: R_0\otimes_{W(k)}W(\overline{R}^{\flat}) \rightarrow \widehat{\overline{R}}$ be the $R_0$-linear extension of $\theta$, and define $A_{\mathrm{cris}}(R)$ to be the $p$-adic completion of the divided power envelope of $R_0\otimes_{W(k)}W(\overline{R}^{\flat})$ with respect to $\mathrm{ker}(\theta_{R_0})$. The $\mathcal{G}_R$-action on $R_0\otimes_{W(k)}W(\overline{R}^{\flat})$ extends uniquely to $A_{\mathrm{cris}}(R)$. The Frobenius endomorphism on $R_0\otimes_{W(k)}W(\overline{R}^{\flat})$ given by $\varphi$ on $R_0$ and the Witt vector Frobenius on $W(\overline{R}^{\flat})$ extends uniquely to $A_{\mathrm{cris}}(R)$. We have the connection $\nabla: A_{\mathrm{cris}}(R) \rightarrow A_{\mathrm{cris}}(R)\otimes_{R_0} \widehat{\Omega}_{R_0}$ which is $W(\overline{R}^\flat)$-linear and extends the universal continuous derivation of $R_0$. The Frobenius on $A_{\mathrm{cris}}(R)$ is horizontal. We have a natural $\mathcal{G}_R$-equivariant embedding $A_{\mathrm{cris}}^{\nabla}(R) \hookrightarrow A_{\mathrm{cris}}(R)$, and $A_{\mathrm{cris}}(R)^{\nabla = 0} = A_{\mathrm{cris}}^{\nabla}(R)$. Moreover, $(A_{\mathrm{cris}}^{\nabla}(R))^{\mathcal{G}_R} = W(k)$ and $A_{\mathrm{cris}}(R)^{\mathcal{G}_R} = R_0$. Note that $t \in A_{\mathrm{cris}}(\mathbf{Z}_p)$ and $p$ divides $t^{p-1}$ in $A_{\mathrm{cris}}(\mathbf{Z}_p)$. 
$A_{\mathrm{cris}}(R)$ is $t$-torsion free, and we define $B_{\mathrm{cris}}^{\nabla}(R) = A_{\mathrm{cris}}^{\nabla}(R)[\frac{1}{t}]$ and $B_{\mathrm{cris}}(R) = A_{\mathrm{cris}}(R)[\frac{1}{t}]$, equipped with the Frobenius and $\mathcal{G}_R$-action extending those on $A_{\mathrm{cris}}(R)$. We extend the connection on $A_{\mathrm{cris}}(R)$ to $B_{\mathrm{cris}}(R)$ $t$-linearly. $B_{\mathrm{cris}}(R)\otimes_{R_0[\frac{1}{p}]} R[\frac{1}{p}]$ naturally embeds into $B_{\mathrm{dR}}(R)$ compatibly with the connections and $\mathcal{G}_R$-actions. Let $U_1 = \{x \in B_{\mathrm{cris}}^{\nabla}(R) \cap B_{\mathrm{dR}}^{\nabla +}(R), ~\varphi(x) = px\}$. The following proposition is shown in \cite{brinon-relative}. \begin{prop} \label{prop:2.2} \emph{(cf. \cite[Lemma 6.2.22, Proposition 6.2.23]{brinon-relative})} The following sequences are exact: \[ 0 \rightarrow \mathbf{Q}_p\cdot t \rightarrow U_1 \stackrel{\theta}{\rightarrow} B_{\mathrm{dR}}^{\nabla +}(R)/\mathrm{Fil}^1 B_{\mathrm{dR}}^{\nabla +}(R) \rightarrow 0, \] \[ 0 \rightarrow \mathbf{Q}_p \rightarrow (B_{\mathrm{cris}}^{\nabla}(R))^{\varphi = 1} \rightarrow B_{\mathrm{dR}}^{\nabla}(R)/B_{\mathrm{dR}}^{\nabla +}(R) \rightarrow 0. \] \end{prop} For a continuous $\mathcal{G}_R$-representation $V$ over $\mathbf{Q}_p$, we denote $D_{\mathrm{cris}}^{\nabla}(V) = (V \otimes_{\mathbf{Q}_p}B_{\mathrm{cris}}^{\nabla}(R))^{\mathcal{G}_R}$, $D_{\mathrm{cris}}(V) = (V \otimes_{\mathbf{Q}_p}B_{\mathrm{cris}}(R))^{\mathcal{G}_R}$ and $D_{\mathrm{dR}}(V) = (V \otimes_{\mathbf{Q}_p}B_{\mathrm{dR}}(R))^{\mathcal{G}_R}$. 
The natural morphisms \[ \begin{split} &\alpha_{\mathrm{cris}}^{\nabla}: D_{\mathrm{cris}}^{\nabla}(V)\otimes_{W(k)[\frac{1}{p}]} B_{\mathrm{cris}}^{\nabla}(R) \rightarrow V\otimes_{\mathbf{Q}_p} B_{\mathrm{cris}}^{\nabla}(R),\\ &\alpha_{\mathrm{cris}}: D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]} B_{\mathrm{cris}}(R) \rightarrow V\otimes_{\mathbf{Q}_p} B_{\mathrm{cris}}(R),\\ &\alpha_{\mathrm{dR}}: D_{\mathrm{dR}}(V)\otimes_{R[\frac{1}{p}]} B_{\mathrm{dR}}(R) \rightarrow V\otimes_{\mathbf{Q}_p} B_{\mathrm{dR}}(R) \end{split} \] are injective. We say $V$ is \emph{horizontal crystalline} (resp. \emph{crystalline}, \emph{de Rham}) if $\alpha_{\mathrm{cris}}^{\nabla}$ (resp. $\alpha_{\mathrm{cris}}$, $\alpha_{\mathrm{dR}}$) is an isomorphism. For any $\mathbf{Q}_p$-representation $V$, we have natural embeddings $D_{\mathrm{cris}}^{\nabla}(V)\otimes_{W(k)[\frac{1}{p}]}R_0[\frac{1}{p}] \hookrightarrow D_{\mathrm{cris}}(V)$ and $D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]} R[\frac{1}{p}] \hookrightarrow D_{\mathrm{dR}}(V)$. If $V$ is horizontal crystalline, then $V$ is crystalline and the map $D_{\mathrm{cris}}^{\nabla}(V)\otimes_{W(k)[\frac{1}{p}]}R_0[\frac{1}{p}] \rightarrow D_{\mathrm{cris}}(V)$ is an isomorphism. If $V$ is crystalline, then $V$ is de Rham and the map $D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]} R[\frac{1}{p}] \rightarrow D_{\mathrm{dR}}(V)$ is an isomorphism. We study the linear algebraic structure of $D_{\mathrm{cris}}(V)$ in the following way. 
A \emph{filtered} $(\varphi, \nabla)$-\emph{module over} $R_0[\frac{1}{p}]$ is defined to be a tuple $(D, \varphi_D, \nabla_D, \mathrm{Fil}^j D)$ such that \begin{itemize} \item $D$ is a finite projective $R_0[\frac{1}{p}]$-module; \item $\varphi_D: D \rightarrow D$ is a $\varphi$-semilinear endomorphism such that $1\otimes \varphi_D$ is an isomorphism; \item $\nabla_D: D \rightarrow D\otimes_{R_0} \widehat{\Omega}_{R_0}$ is an integrable connection which is topologically quasi-nilpotent, i.e., there exists a finitely generated $R_0$-submodule $M \subset D$ stable under $\nabla_D$ such that $M[\frac{1}{p}] = D$ and the induced connection on $M/pM$ is nilpotent. The Frobenius $\varphi_D$ is horizontal with respect to $\nabla_D$; \item $\mathrm{Fil}^j D_R$ is a decreasing separated and exhaustive filtration by $R[\frac{1}{p}]$-submodules of $D_R\coloneqq D\otimes_{R_0[\frac{1}{p}]}R[\frac{1}{p}]$ such that the graded module $\mathrm{gr}^{\bullet}D_R$ is projective over $R[\frac{1}{p}]$. Furthermore, Griffiths transversality holds for the induced connection: $\nabla_D(\mathrm{Fil}^j D_R) \subset \mathrm{Fil}^{j-1} D_R\otimes_{R_0} \widehat{\Omega}_{R_0}$. \end{itemize} Denote by $\mathrm{MF}(R)$ the category of filtered $(\varphi, \nabla)$-modules over $R_0[\frac{1}{p}]$, whose morphisms are $R_0[\frac{1}{p}]$-module morphisms compatible with all structures. It is equipped with tensor product and duality structures as in \cite[Section 7]{brinon-relative}. For $D \in \mathrm{MF}(R)$, its Hodge-Tate weights are defined to be the integers $w \in \mathbf{Z}$ such that $\mathrm{gr}^w D_R \neq 0$. We define its Hodge number \[ t_H(D) \coloneqq \sum_{j \in \mathbf{Z}} j \cdot \mathrm{rank}_{R[\frac{1}{p}]}(\mathrm{gr}^j D_R). \] Let $\mathfrak{p} \in \mathrm{Spec} R_0/pR_0$, and let $\kappa_{\mathfrak{p}}$ be the perfect closure $\varinjlim_{\varphi} (R_0/pR_0)/\mathfrak{p}$.
By the universal property of $p$-adic Witt vectors, there exists a unique map $b_{\mathfrak{p}}: R_0 \rightarrow W(\kappa_{\mathfrak{p}})$ lifting $R_0 \rightarrow \kappa_{\mathfrak{p}}$ which is compatible with Frobenius (with the Witt vector Frobenius on $W(\kappa_{\mathfrak{p}})$). Then $D_{\mathfrak{p}} \coloneqq D\otimes_{R_0, b_\mathfrak{p}} W(\kappa_{\mathfrak{p}})$ is a filtered $\varphi$-module over $W(\kappa_{\mathfrak{p}})[\frac{1}{p}]$ with the induced filtration and Frobenius. We define the \emph{Newton number of} $D$ \emph{at} $\mathfrak{p}$ to be the Newton number of $D_{\mathfrak{p}}$, and denote it by $t_N(D, \mathfrak{p})$. We say $D$ is \emph{punctually weakly admissible} if for all $\mathfrak{p} \in \mathrm{Spec}(R_0/pR_0)$, the following conditions hold: \begin{itemize} \item $t_H(D) = t_N(D, \mathfrak{p})$; \item For each sub-object $D'$ of $D$ in $\mathrm{MF}(R)$, $t_H(D') \leq t_N(D', \mathfrak{p})$. \end{itemize} \noindent Denote by $\mathrm{MF}^{\mathrm{pwa}}(R)$ the full subcategory of $\mathrm{MF}(R)$ consisting of punctually weakly admissible modules. For a $\mathcal{G}_R$-representation $V$ over $\mathbf{Q}_p$, we equip $D_{\mathrm{cris}}^{\nabla}(V)$ with the Frobenius induced from $B_{\mathrm{cris}}^{\nabla}(R)$, $D_{\mathrm{cris}}(V)$ with the Frobenius and connection induced from $B_{\mathrm{cris}}(R)$, and $D_{\mathrm{dR}}(V)$ with the filtration and connection induced from $B_{\mathrm{dR}}(R)$. Then $D_{\mathrm{cris}}(V)^{\nabla = 0} = D_{\mathrm{cris}}^{\nabla}(V)$ and the map $D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]}R[\frac{1}{p}] \rightarrow D_{\mathrm{dR}}(V)$ is compatible with connections. If $V$ is horizontal crystalline, then the isomorphism $D_{\mathrm{cris}}^{\nabla}(V)\otimes_{W(k)[\frac{1}{p}]}R_0[\frac{1}{p}] \rightarrow D_{\mathrm{cris}}(V)$ is compatible with Frobenius.
Note that if $V$ is crystalline, then it is horizontal crystalline if and only if the map $D_{\mathrm{cris}}^{\nabla}(V)\otimes_{W(k)[\frac{1}{p}]}R_0[\frac{1}{p}] \rightarrow D_{\mathrm{cris}}(V)$ is an isomorphism, i.e., if and only if $D_{\mathrm{cris}}(V)$ is generated by its parallel elements. If $V$ is crystalline, we further equip $D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]}R[\frac{1}{p}]$ with the filtration induced by $D_{\mathrm{dR}}(V)$. Then we have $D_{\mathrm{cris}}(V) \in \mathrm{MF}^{\mathrm{pwa}}(R)$ by \cite[Proposition 8.3.4]{brinon-relative}. For $D \in \mathrm{MF}(R)$, define \[ V_{\mathrm{cris}}(D) \coloneqq (D\otimes_{R_0[\frac{1}{p}]}B_{\mathrm{cris}}(R))^{\nabla = 0, \varphi = 1} \cap \mathrm{Fil}^0((D_R\otimes_{R[\frac{1}{p}]}B_{\mathrm{dR}}(R))^{\nabla = 0}) \] where $D\otimes_{R_0[\frac{1}{p}]}B_{\mathrm{cris}}(R)$ is equipped with the Frobenius and connection given by the tensor product, and $D_R\otimes_{R[\frac{1}{p}]}B_{\mathrm{dR}}(R)$ is equipped with the filtration and connection given by the tensor product. Then $V_{\mathrm{cris}}(D)$ is a continuous $\mathbf{Q}_p$-representation of $\mathcal{G}_R$. We say $D$ is \emph{admissible} if there exists a crystalline representation $V$ such that $D \cong D_{\mathrm{cris}}(V)$ in $\mathrm{MF}(R)$, and denote by $\mathrm{MF}^{\mathrm{a}}(R)$ the full subcategory of $\mathrm{MF}^{\mathrm{pwa}}(R)$ consisting of admissible modules. Then by \cite[Theorem 8.5.2]{brinon-relative}, $D_{\mathrm{cris}}$ and $V_{\mathrm{cris}}$ are quasi-inverse equivalences of Tannakian categories between the category of crystalline representations and $\mathrm{MF}^{\mathrm{a}}(R)$. It is not known precisely which punctually weakly admissible modules are admissible. 
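As a concrete illustration of the numerical conditions above, consider the following rank-$2$ toy example (our own, built in the shape that reappears in Section~\ref{sec:3}): $D$ free over $R_0[\frac{1}{p}]$ with basis $(e_1, e_2)$, $\varphi(e_1) = pe_1$, $\varphi(e_2) = e_2$, $\nabla(e_1) = \nabla(e_2) = 0$, and a single filtration jump in degree $1$.

```latex
% Filtration: one jump in degree 1, with g(Y) in R[1/p] arbitrary.
\[
\mathrm{Fil}^0 D_R = D_R, \qquad
\mathrm{Fil}^1 D_R = (e_1 + g(Y)e_2)\cdot R[\tfrac{1}{p}], \qquad
\mathrm{Fil}^2 D_R = 0.
\]
% Hodge number: gr^0 D_R and gr^1 D_R each have rank 1, so
\[
t_H(D) = 0 \cdot 1 + 1 \cdot 1 = 1.
\]
% Newton number: at every prime p of Spec(R_0/pR_0), the specialization D_p
% has phi-slopes 0 and 1 (from phi(e_2) = e_2 and phi(e_1) = p e_1), so
\[
t_N(D, \mathfrak{p}) = 0 + 1 = 1 = t_H(D).
\]
```

The sub-object condition must be checked as well; for instance, $D' = e_2\cdot R_0[\frac{1}{p}]$ has $t_N(D', \mathfrak{p}) = 0$ and $t_H(D') = 0$ (since $\mathrm{Fil}^1 D_R \cap D'_R = 0$), so it causes no violation.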
We say $D \in \mathrm{MF}(R)$ is \emph{weakly admissible} if $D$ is punctually weakly admissible and there exists a finite \'{e}tale extension $R_0'$ of $R_0$ such that $D\otimes_{R_0[\frac{1}{p}]}R_0'[\frac{1}{p}]$ is free over $R_0'[\frac{1}{p}]$. For characters, $D_{\mathrm{cris}}$ induces an equivalence between the category of crystalline characters of $\mathcal{G}_R$ and the category of weakly admissible $R_0[\frac{1}{p}]$-modules of rank $1$. However, it is not known whether $D_{\mathrm{cris}}(V)$ is weakly admissible for every crystalline representation $V$. \subsection{Relative $B$-pairs} \label{sec:2.2} In \cite{berger-B-pair}, $B$-pairs are studied when the base is a $p$-adic field. There is a natural fully faithful functor from the category of $\mathbf{Q}_p$-representations to the category of $B$-pairs, and the category of $B$-pairs is equivalent to that of $(\varphi, \Gamma)$-modules over the Robba ring. We define the category of $B$-pairs in the relative case and study its relations to $\mathbf{Q}_p$-representations and admissible modules. Let $B_e(R) \coloneqq (B_{\mathrm{cris}}^{\nabla}(R))^{\varphi = 1}$. A $B$-\emph{pair} $W = (W_e, W_{\mathrm{dR}}^{\nabla +})$ is given by a finite free $B_e(R)$-module $W_e$ equipped with a semi-linear $\mathcal{G}_R$-action and a finite free $B_{\mathrm{dR}}^{\nabla +}(R)$-module $W_{\mathrm{dR}}^{\nabla +}$ equipped with a semi-linear $\mathcal{G}_R$-action such that \[ W_e\otimes_{B_e(R)} B_{\mathrm{dR}}^{\nabla}(R) \cong W_{\mathrm{dR}}^{\nabla +}\otimes_{B_{\mathrm{dR}}^{\nabla +}(R)} B_{\mathrm{dR}}^{\nabla}(R) \] as $B_{\mathrm{dR}}^{\nabla}(R)$-modules compatible with $\mathcal{G}_R$-actions. Denote by $B\mathrm{-Pair}(R)$ the category of $B$-pairs whose morphisms are pairs of $B_e(R)$-module and $B_{\mathrm{dR}}^{\nabla +}(R)$-module morphisms compatible with $\mathcal{G}_R$-actions and with the isomorphisms over $B_{\mathrm{dR}}^{\nabla}(R)$.
We have a natural functor $W_R$ from the category of $\mathbf{Q}_p$-representations of $\mathcal{G}_R$ to $B\mathrm{-Pair}(R)$ given by $W_R(V) = (V\otimes_{\mathbf{Q}_p}B_e(R), ~V\otimes_{\mathbf{Q}_p}B_{\mathrm{dR}}^{\nabla +}(R))$. \begin{prop} \label{prop:2.3} The functor $W_R$ is fully faithful. Furthermore, if $V$ is a crystalline $\mathcal{G}_R$-representation, then \[ W_R(V) \cong ((D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]}B_{\mathrm{cris}}(R))^{\nabla = 0, \varphi = 1}, ~\mathrm{Fil}^0(D_{\mathrm{dR}}(V)\otimes_{R[\frac{1}{p}]}B_{\mathrm{dR}}(R))^{\nabla = 0}) \] as $B$-pairs. \end{prop} \begin{proof} We have $B_e(R) \cap B_{\mathrm{dR}}^{\nabla +}(R) = \mathbf{Q}_p$ by Proposition \ref{prop:2.2}, so $W_R$ is fully faithful. If $V$ is crystalline, then the maps \[ \alpha_{\mathrm{cris}}: D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]} B_{\mathrm{cris}}(R) \rightarrow V\otimes_{\mathbf{Q}_p} B_{\mathrm{cris}}(R) \] and \[ \alpha_{\mathrm{dR}}: D_{\mathrm{dR}}(V)\otimes_{R[\frac{1}{p}]} B_{\mathrm{dR}}(R) \rightarrow V\otimes_{\mathbf{Q}_p} B_{\mathrm{dR}}(R) \] are isomorphisms. Thus, \[ (V\otimes_{\mathbf{Q}_p} B_{\mathrm{cris}}(R))^{\nabla = 0, \varphi = 1} = V\otimes_{\mathbf{Q}_p}B_e(R) \cong (D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]}B_{\mathrm{cris}}(R))^{\nabla = 0, \varphi = 1} \] and \[ \mathrm{Fil}^0(V\otimes_{\mathbf{Q}_p} B_{\mathrm{dR}}(R))^{\nabla = 0} = V\otimes_{\mathbf{Q}_p}B_{\mathrm{dR}}^{\nabla +}(R) \cong \mathrm{Fil}^0(D_{\mathrm{dR}}(V)\otimes_{R[\frac{1}{p}]}B_{\mathrm{dR}}(R))^{\nabla = 0}. \] Furthermore, the diagram connecting $\alpha_{\mathrm{cris}}$ and $\alpha_{\mathrm{dR}}$ induced by $D_{\mathrm{cris}}(V)\otimes_{R_0[\frac{1}{p}]}R[\frac{1}{p}] \cong D_{\mathrm{dR}}(V)$ and the embedding $B_{\mathrm{cris}}(R)\otimes_{R_0[\frac{1}{p}]}R[\frac{1}{p}] \rightarrow B_{\mathrm{dR}}(R)$ is commutative. This proves the second statement.
\end{proof} \noindent If we denote by $B\mathrm{-Pair}^{\mathrm{rep}}(R)$ the full subcategory of $B\mathrm{-Pair}(R)$ given by the essential image of $W_R$, then the functor $(W_e, W_{\mathrm{dR}}^{\nabla +}) \mapsto W_e \cap W_{\mathrm{dR}}^{\nabla +}$ from $B\mathrm{-Pair}^{\mathrm{rep}}(R)$ to the category of $\mathbf{Q}_p$-representations is a quasi-inverse to $W_R$ by Proposition \ref{prop:2.2}. For any weakly admissible $R_0[\frac{1}{p}]$-module $D$, we denote $W_e(D) \coloneqq (D\otimes_{R_0[\frac{1}{p}]}B_{\mathrm{cris}}(R))^{\nabla = 0, \varphi = 1}$ and $W_{\mathrm{dR}}^{\nabla +}(D) \coloneqq \mathrm{Fil}^0(D_R\otimes_{R[\frac{1}{p}]}B_{\mathrm{dR}}(R))^{\nabla = 0}$. \section{Horizontal crystalline representations of rank $2$ when $R = \mathcal{O}_K[\![Y]\!]$ and $k = \overline{k}$} \label{sec:3} In this section, we consider the case when $R_0 = W(k)[\![Y]\!]$ with algebraically closed residue field $k$ and prove Theorem \ref{thm:1.3}. Let $R = R_0\otimes_{W(k)}\mathcal{O}_K$ as above, and choose a uniformizer $\varpi \in \mathcal{O}_K$. Equip $R_0$ with the Frobenius given by $Y \mapsto Y^p$. Note that $R_0$ is isomorphic to the completion of $W(k)\langle X^{\pm 1} \rangle$ with respect to the ideal $(p, X-1)$ via $X \mapsto Y+1$. Thus, by Proposition \ref{prop:2.1}, if we let $u = (Y+1)\otimes 1-1\otimes [\widetilde{Y+1}] \in R\otimes_{W(k)} W(\overline{R}^\flat)$, then \[ B_{\mathrm{dR}}^{\nabla +}(R)[\![u]\!] = B_{\mathrm{dR}}^+(R), ~~\mathrm{Fil}^0 B_{\mathrm{dR}}(R) = B_{\mathrm{dR}}^+(R)[\frac{u}{t}]. \] Let $V$ be a horizontal crystalline $\mathcal{G}_R$-representation of rank $2$ whose Hodge-Tate weights lie in $[0, 1]$. $D_{\mathrm{cris}}^{\nabla}(V)$ is an isocrystal over $W(k)[\frac{1}{p}]$, and we have a $\varphi$-equivariant isomorphism $D_{\mathrm{cris}}^{\nabla}(V)\otimes_{W(k)[\frac{1}{p}]}R_0[\frac{1}{p}] \cong D_{\mathrm{cris}}(V)$. Denote $D = D_{\mathrm{cris}}(V)$ and $D_R^1 = \mathrm{Fil}^1 D_R$. We say $D$ is \'{e}tale (resp. 
multiplicative) if $D_R^1 = D_R$ (resp. $D_R^1 = 0$). If $D$ is \'{e}tale (resp. multiplicative), then it is induced from an \'{e}tale (resp. a multiplicative) filtered $\varphi$-module $D_{\mathrm{cris}}^{\nabla}(V)$. In particular, $V$ arises from a $p$-divisible group over $R$ in both the \'{e}tale and multiplicative cases. Now, assume $D_R^1$ has rank $1$ over $R[\frac{1}{p}]$. Then $t_H(D) = 1$, and both $D_R^1$ and $D_R/D_R^1$ are free over $R[\frac{1}{p}]$ since $R[\frac{1}{p}]$ is a principal ideal domain. Suppose further that $D_{\mathrm{cris}}^{\nabla}(V)$ is reducible as an isocrystal over $W(k)[\frac{1}{p}]$. Since $k = \overline{k}$, we can apply the Dieudonn\'{e}-Manin classification. Note that $D$ is weakly admissible and thus the slopes of all isoclinic subobjects of $D_{\mathrm{cris}}^{\nabla}(V)$ are non-negative, since each isoclinic subobject of $D_{\mathrm{cris}}^{\nabla}(V)$ induces a subobject of $D$. Hence, we can choose a $W(k)[\frac{1}{p}]$-basis $(e_1, e_2)$ of $D_{\mathrm{cris}}^{\nabla}(V)$ such that \[ \begin{split} \varphi(e_1) &= pe_1,\\ \varphi(e_2) &= e_2. \end{split} \] Then \[ W_e(D) = (D\otimes_{R_0[\frac{1}{p}]} B_{\mathrm{cris}}(R))^{\nabla = 0, \varphi = 1} = \frac{1}{t}e_1\cdot B_e(R) \oplus e_2\cdot B_e(R). \] On the other hand, since $D_R^1$ and $D_R/D_R^1$ are free of rank $1$ over $R[\frac{1}{p}]$, we have \[ D_R^1 = (f(Y)e_1+g(Y)e_2)\cdot R[\frac{1}{p}] \] for some $f(Y), g(Y) \in \mathcal{O}_K[\![Y]\!]$ such that either $\varpi \nmid f(Y)$ or $\varpi \nmid g(Y)$ in $\mathcal{O}_K[\![Y]\!]$, and there exist $h(Y), r(Y) \in \mathcal{O}_K[\![Y]\!]$ such that $f(Y)r(Y)-g(Y)h(Y)$ is a unit in $R[\frac{1}{p}]$.
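The unit condition can be recorded in matrix form; this is what makes the $2\times 2$ linear systems over $B_{\mathrm{dR}}^+(R)$ appearing below uniquely solvable:

```latex
% f(Y)r(Y) - g(Y)h(Y) is a unit in R[1/p], and R[1/p] maps to B_dR^+(R)
% (carrying units to units), so the coefficient matrix is invertible there:
\[
M \coloneqq \begin{pmatrix} f(Y) & h(Y) \\ g(Y) & r(Y) \end{pmatrix}
\in \mathrm{GL}_2\bigl(B_{\mathrm{dR}}^{+}(R)\bigr),
\qquad
M^{-1} = \frac{1}{f(Y)r(Y)-g(Y)h(Y)}
\begin{pmatrix} r(Y) & -h(Y) \\ -g(Y) & f(Y) \end{pmatrix}.
\]
```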
Then, \[ D_R \cong (f(Y)e_1+g(Y)e_2)\cdot R[\frac{1}{p}] \oplus (h(Y)e_1+r(Y)e_2)\cdot R[\frac{1}{p}] \] as $R[\frac{1}{p}]$-modules, and \[ \mathrm{Fil}^0 (D_R\otimes_{R[\frac{1}{p}]} B_{\mathrm{dR}}(R)) = \frac{f(Y)e_1+g(Y)e_2}{t}\cdot \mathrm{Fil}^0 B_{\mathrm{dR}}(R)\oplus (h(Y)e_1+r(Y)e_2)\cdot \mathrm{Fil}^0 B_{\mathrm{dR}}(R). \] Denote $c = [\widetilde{Y+1}]-1$, so that $Y = u+c$. We can write $f(Y) = f(u+c) = f(c)+uf_1(u)$ with $f(c) \in B_{\mathrm{dR}}^{\nabla +}(R)$ and $f_1(u) \in B_{\mathrm{dR}}^+(R) = B_{\mathrm{dR}}^{\nabla +}(R)[\![u]\!]$, and similarly for $g(Y), h(Y), r(Y)$. For any $a(u), b(u) \in B_{\mathrm{dR}}^+(R)$, we have \[ \begin{split} &\frac{f(Y)e_1+g(Y)e_2}{t}(1+ua(u))+(h(Y)e_1+r(Y)e_2)\frac{u}{t}b(u) = \\ &\frac{f(c)e_1+g(c)e_2}{t}+\frac{u}{t}((f(Y)a(u)+h(Y)b(u)+f_1(u))e_1+(g(Y)a(u)+r(Y)b(u)+g_1(u))e_2). \end{split} \] The system of equations \[ \begin{split} f(Y)a(u)+h(Y)b(u) &= -f_1(u),\\ g(Y)a(u)+r(Y)b(u) &= -g_1(u) \end{split} \] has a unique solution $a(u), b(u) \in B_{\mathrm{dR}}^+(R)$, since $f(Y)r(Y)-g(Y)h(Y)$ is a unit in $R[\frac{1}{p}]$. Thus, \[ \frac{f(c)e_1+g(c)e_2}{t} \in W_{\mathrm{dR}}^{\nabla +}(D) = \mathrm{Fil}^0 (D_R\otimes_{R[\frac{1}{p}]} B_{\mathrm{dR}}(R))^{\nabla = 0}. \] Similarly, for any $a(u), b(u) \in B_{\mathrm{dR}}^+(R)$, \[ \begin{split} &\frac{f(Y)e_1+g(Y)e_2}{t}\cdot tua(u)+(h(Y)e_1+r(Y)e_2)(1+ub(u)) =\\ & h(c)e_1+r(c)e_2+u((f(Y)a(u)+h(Y)b(u)+h_1(u))e_1+(g(Y)a(u)+r(Y)b(u)+r_1(u))e_2), \end{split} \] and the system \[ \begin{split} f(Y)a(u)+h(Y)b(u) &= -h_1(u),\\ g(Y)a(u)+r(Y)b(u) &= -r_1(u) \end{split} \] has a unique solution $a(u), b(u) \in B_{\mathrm{dR}}^+(R)$. Thus, $h(c)e_1+r(c)e_2 \in W_{\mathrm{dR}}^{\nabla +}(D)$. We then have \[ W_{\mathrm{dR}}^{\nabla +}(D) = \frac{f(c)e_1+g(c)e_2}{t}\cdot B_{\mathrm{dR}}^{\nabla +}(R) \oplus (h(c)e_1+r(c)e_2)\cdot B_{\mathrm{dR}}^{\nabla +}(R).
\] Note that $W_e(D)\otimes_{B_e(R)}B_{\mathrm{dR}}^{\nabla}(R) = W_{\mathrm{dR}}^{\nabla +}(D)\otimes_{B_{\mathrm{dR}}^{\nabla +}(R)} B_{\mathrm{dR}}^{\nabla}(R) = e_1\cdot B_{\mathrm{dR}}^{\nabla}(R) \oplus e_2\cdot B_{\mathrm{dR}}^{\nabla}(R)$, since $f(c)r(c)-g(c)h(c)$ is a unit in $B_{\mathrm{dR}}^{\nabla +}(R)$. In particular, $(W_e(D), W_{\mathrm{dR}}^{\nabla +}(D))$ is a $B$-pair. The intersection $W_e(D) \cap W_{\mathrm{dR}}^{\nabla +}(D)$ is given by the set of solutions $(x, y, s, z)$ with $x, y \in B_e(R), ~s, z \in B_{\mathrm{dR}}^{\nabla +}(R)$ satisfying \[ \begin{split} \frac{x}{t} &= \frac{f(c)}{t}s+h(c)z,\\ y &= \frac{g(c)}{t}s+r(c)z. \end{split} \] Then $x = f(c)s+th(c)z \in B_e(R) \cap B_{\mathrm{dR}}^{\nabla +}(R) = \mathbf{Q}_p$ by Proposition \ref{prop:2.2}, and $\displaystyle y = \frac{y_1}{t}$ with $y_1 \in U_1$. \begin{prop} \label{prop:3.1} For $D$ as above, $f(Y)$ is a unit in $R[\frac{1}{p}]$, and $V_{\mathrm{cris}}(D) = W_e(D) \cap W_{\mathrm{dR}}^{\nabla +}(D)$ has rank $2$ over $\mathbf{Q}_p$. \end{prop} \begin{proof} Suppose $f(Y)$ is not a unit in $R[\frac{1}{p}]$. Since $\theta(c) = Y$, by applying $\theta$ to the equations \[ \begin{split} x &= f(c)s+th(c)z,\\ y_1 &= g(c)s+tr(c)z, \end{split} \] we obtain \[ \begin{split} x &= f(Y)\theta(s),\\ \theta(y_1) &= g(Y)\theta(s). \end{split} \] Since $x \in \mathbf{Q}_p$ and $\theta(s) \in \widehat{\overline{R}}[\frac{1}{p}]$, we have $x = 0 = \theta(s)$. Then $\theta(y_1) = 0$, and by Proposition \ref{prop:2.2}, the $\mathbf{Q}_p$-module $W_e(D) \cap W_{\mathrm{dR}}^{\nabla +}(D)$ has rank $1$. This contradicts the admissibility of $D$. Since $f(Y)$ is a unit in $R[\frac{1}{p}]$, we can choose $h(Y) = 0$ and $r(Y) = 1$. Then $x \in \mathbf{Q}_p$ as above and $s = f(c)^{-1}x \in B_{\mathrm{dR}}^{\nabla +}(R)$. We have $\theta(y_1) = g(Y)\theta(s) = g(Y)f(Y)^{-1}x$ and $\displaystyle z = \frac{y_1-g(c)s}{t} \in B_{\mathrm{dR}}^{\nabla +}(R)$.
Hence, $V_{\mathrm{cris}}(D)$ has rank $2$ over $\mathbf{Q}_p$ by Proposition \ref{prop:2.2}. \end{proof} By Proposition \ref{prop:3.1}, we can write $D_R^1 = (e_1+g(Y)e_2)\cdot R[\frac{1}{p}]$ for some $g(Y) \in R[\frac{1}{p}]$. By replacing $e_2$ by $\frac{1}{p^n}e_2$ for some non-negative integer $n$ if necessary, we can further assume $g(Y) = pg_1(Y)$ for some $g_1(Y) \in R = \mathcal{O}_K[\![Y]\!]$. We now show that such a $D$ arises from a $p$-divisible group over $R$ by constructing the associated relative Breuil module. Let $E(u)$ be the Eisenstein polynomial for $\varpi$ over $R_0$, and let $\mathfrak{S} = R_0[\![u]\!]$ equipped with the Frobenius extending that on $R_0$ by $u \mapsto u^p$. Let $S$ be the $p$-adic completion of the divided power envelope of $\mathfrak{S}$ with respect to the ideal $(E(u))$. Explicitly, the elements of $S$ can be described as \[ S = \{\sum_{n \geq 0} a_n\frac{u^n}{\lfloor n/e\rfloor!}~|~ a_n \in R_0, ~a_n \rightarrow 0 ~p\mbox{-adically}\} \] where $e$ is the degree of $E(u)$. Note that $S/(E(u)) \cong R$, and the Frobenius on $\mathfrak{S}$ extends uniquely to $S$. Let $\mathrm{Fil}^1 S \subset S$ be the $p$-adically completed ideal generated by the divided powers $\displaystyle \frac{E(u)^n}{n!}, ~n \geq 1$. Since $\displaystyle\frac{\varphi(E(u))}{p}$ is a unit in $S$, we have $\varphi(\mathrm{Fil}^1 S) = pS$ as $S$-modules. Denote by $d_S^u: S \rightarrow S\otimes_{R_0} \widehat{\Omega}_{R_0}$ the connection given by \[ d_S^u(\sum_{n \geq 0} a_n\frac{u^n}{\lfloor n/e\rfloor!}) = \sum_{n \geq 0} \frac{u^n}{\lfloor n/e\rfloor!}d_{R_0}(a_n), \] where $d_{R_0}: R_0 \rightarrow \widehat{\Omega}_{R_0}$ is the universal continuous derivation and the $a_n \in R_0$ are such that $a_n \rightarrow 0$ in the $p$-adic topology. Let $\mathfrak{g}_1 \in S$ be any preimage of $g_1(Y)$ under the map $S \twoheadrightarrow S/\mathrm{Fil}^1 S \cong R$.
Let $\mathcal{M} = e_1\cdot S \oplus e_2\cdot S$, equipped with the filtration \[ \mathrm{Fil}^1 \mathcal{M} = \mathrm{Fil}^1 S\cdot \mathcal{M}+(e_1+p\mathfrak{g}_1e_2)\cdot S. \] Note that $\mathcal{M}/\mathrm{Fil}^1 \mathcal{M} \cong D_R/D_R^1$ as $R$-modules. Equip $\mathcal{M}$ with the Frobenius given by $\varphi(e_1) = pe_1, ~\varphi(e_2) = e_2$ as above. Then $\varphi(\mathrm{Fil}^1 \mathcal{M}) = p\mathcal{M}$ as $S$-modules. Let $\nabla: \mathcal{M} \rightarrow \mathcal{M}\otimes_{R_0} \widehat{\Omega}_{R_0}$ be the connection over $d_S^u$ given by $\nabla(e_1) = \nabla(e_2) = 0$. Then $\nabla$ is a topologically quasi-nilpotent integrable connection such that the Frobenius is horizontal. Hence, $\mathcal{M}$ gives a Breuil module over $S$ (as defined in \cite[Section 3]{kim-groupscheme-relative}), and by \cite[Theorem 3.5]{kim-groupscheme-relative}, there exists a $p$-divisible group $G_R$ over $R$ such that $\mathcal{M}^*(G_R) \cong \mathcal{M}$ as Breuil modules (cf. \cite{kim-groupscheme-relative} for the definition of the functor $\mathcal{M}^*(\cdot)$ from the category of $p$-divisible groups over $R$ to the category of Breuil modules over $S$). For integers $n \geq 0$, we choose compatibly $\varpi_n \in \overline{K}$ such that $\varpi_0 = \varpi$ and $\varpi_{n+1}^p = \varpi_n$, and let $R_{\infty}$ be the $p$-adic completion of $\bigcup_{n \geq 0} R(\varpi_n)$. Then $R_{\infty} \subset \overline{R}$, and let $\mathcal{G}_{R_{\infty}}\coloneqq \mathrm{Gal}(\overline{R}[\frac{1}{p}]/R_{\infty}[\frac{1}{p}])$ be the corresponding sub-Galois group of $\mathcal{G}_R$. Let $[\underline{\varpi}] \in W(\overline{R}^\flat)$ be the Teichm\"uller lift of $\underline{\varpi} = (\varpi_n) \in \overline{R}^\flat$. 
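The assignment $u \mapsto [\underline{\varpi}]$, which is about to be used, sends $E(u)$ into $\mathrm{ker}(\theta_{R_0})$; here is the one-line check (standard, included for convenience):

```latex
% theta_{R_0} is R_0-linear, theta sends the Teichmuller lift [varpi] to
% varpi_0 = varpi, and E is the Eisenstein polynomial of varpi over R_0, so
\[
\theta_{R_0}\bigl(E([\underline{\varpi}])\bigr)
  = E\bigl(\theta([\underline{\varpi}])\bigr)
  = E(\varpi) = 0,
\qquad\text{i.e.}\quad E([\underline{\varpi}]) \in \mathrm{ker}(\theta_{R_0}).
\]
% Hence the divided powers E(u)^n/n! generating Fil^1 S land in the divided
% power envelope A_cris(R), so S -> A_cris(R) respects the filtrations.
```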
The $R_0$-algebra map $\mathfrak{S} \rightarrow R_0\otimes_{W(k)}W(\overline{R}^\flat)$ given by $u \mapsto [\underline{\varpi}]$ extends uniquely to $S \rightarrow A_{\mathrm{cris}}(R)$ compatibly with $\mathcal{G}_{R_\infty}$-actions, Frobenius and connection. Let $\mathrm{Fil}^1 A_{\mathrm{cris}}(R) \subset A_{\mathrm{cris}}(R)$ be the $p$-adically completed ideal generated by the divided powers of $\mathrm{ker}(\theta_{R_0})$. Then $S \rightarrow A_{\mathrm{cris}}(R)$ is also compatible with filtration. Define \[ T(\mathcal{M}) \coloneqq \mathrm{Hom}_{S, \mathrm{Fil}^1, \varphi, \nabla}(\mathcal{M}, A_{\mathrm{cris}}(R)). \] By \cite[Corollary 5.4.2]{kim-groupscheme-relative}, we have a natural isomorphism $T_p(G_R) \cong T(\mathcal{M})$ as $\mathcal{G}_{R_\infty}$-representations. To study $\mathcal{G}_R$-actions, let $N: S \rightarrow S$ be a derivation given by $N \coloneqq -u\frac{\partial}{\partial u}$. Let $N_{\mathcal{M}}: \mathcal{M} \rightarrow \mathcal{M}$ be the derivation over $N$ given by $N_{\mathcal{M}}(e_1) = N_{\mathcal{M}}(e_2) = 0$. For each integer $n \geq 0$, define a cocycle $\epsilon^{(n)}: \mathcal{G}_R \rightarrow \widehat{\overline{R}}^{\times}$ by \[ \epsilon^{(n)}(g) = g\cdot \varpi_n/\varpi_n \] for $g \in \mathcal{G}_R$. Let $\epsilon(g) = (\epsilon^{(n)}(g))_{n \geq 0} \in \overline{R}^\flat$, and $t(g) \coloneqq \log{[\epsilon(g)]} \in A_{\mathrm{cris}}^{\nabla}(R)$. Note that for any $g \in \mathcal{G}_R$, $t(g)$ is a $\mathbf{Z}_p$-multiple of $t$, and $t(g) = 0$ if and only if $g \in \mathcal{G}_{R_{\infty}}$. Define the $\mathcal{G}_R$-action on $\mathcal{M}\otimes_{S}A_{\mathrm{cris}}(R)$ by \[ g\cdot (x\otimes a) \coloneqq g(a)\sum_{i = 0}^{\infty} N_{\mathcal{M}}^i(x)\otimes \frac{t(g)^i}{i!}. 
\] Then by \cite[Section 5.5]{kim-groupscheme-relative}, this gives a well-defined $\mathcal{G}_R$-action on $\mathcal{M}\otimes_{S}A_{\mathrm{cris}}(R)$ which recovers the natural $\mathcal{G}_{R_{\infty}}$-action and is compatible with Frobenius and filtration. This induces a $\mathcal{G}_R$-action on $T(\mathcal{M}) \cong \mathrm{Hom}_{A_{\mathrm{cris}}(R), \mathrm{Fil}^1, \varphi, \nabla}(\mathcal{M}\otimes_S A_{\mathrm{cris}}(R), ~A_{\mathrm{cris}}(R))$, and the natural isomorphism $T_p(G_R) \cong T(\mathcal{M})$ is $\mathcal{G}_R$-equivariant. Consider the map of $R_0[\frac{1}{p}]$-modules $f: D \rightarrow \mathcal{M}[\frac{1}{p}]$ given by $e_i \mapsto e_i$. Then $f$ is compatible with Frobenius and connection. Thus, $f$ induces the map \[ \tilde{f}: T(\mathcal{M})[\frac{1}{p}] \rightarrow \mathrm{Hom}_{R_0[\frac{1}{p}], \varphi, \nabla}(D, B_{\mathrm{cris}}(R)) \] which is compatible with $\mathcal{G}_{R_{\infty}}$-actions. Note that $\tilde{f}$ is injective. Furthermore, since $f(D)$ lies in the kernel of $N_{\mathcal{M}}$, $\tilde{f}$ is compatible with $\mathcal{G}_R$-actions. On the other hand, since $\varpi - [\underline{\varpi}] \in R\otimes_{W(k)}W(\overline{R}^\flat)$ lies in $\mathrm{ker}(\theta_R)$ and $\nabla(\varpi - [\underline{\varpi}]) = 0$, the image of $\tilde{f}$ lies in \[ \mathrm{Hom}_{R_0[\frac{1}{p}], \varphi, \nabla}(D, B_{\mathrm{cris}}(R)) \cap \mathrm{Hom}_{R[\frac{1}{p}], \mathrm{Fil}, \nabla}(D_R, B_{\mathrm{dR}}(R)) \cong V_{\mathrm{cris}}(D)^{\vee}, \] where $V_{\mathrm{cris}}(D)^{\vee}$ denotes the dual representation of $V_{\mathrm{cris}}(D)$. So $\tilde{f}$ induces an injective map $\tilde{f}: T(\mathcal{M})[\frac{1}{p}] \hookrightarrow V_{\mathrm{cris}}(D)^{\vee}$ of $\mathbf{Q}_p$-vector spaces, and $V_{\mathrm{cris}}(D)^{\vee}$ has rank $2$ over $\mathbf{Q}_p$ by Proposition \ref{prop:3.1}.
Thus, it is an isomorphism, and we have an isomorphism of $\mathcal{G}_R$-representations \[ V_{\mathrm{cris}}(D) \cong (T_p(G_R)\otimes_{\mathbf{Z}_p}\mathbf{Q}_p)^{\vee}. \] Using Proposition \ref{prop:3.1}, we can also construct $B$-pairs which are induced from weakly admissible $R_0[\frac{1}{p}]$-modules but do not arise from $\mathbf{Q}_p$-representations. For example, consider the case $K = W(k)[\frac{1}{p}]$ (so that $R = R_0$ is unramified), and let $D = e_1\cdot R_0[\frac{1}{p}]\oplus e_2\cdot R_0[\frac{1}{p}]$ equipped with the filtration $\mathrm{Fil}^0 D_R = D_R, ~\mathrm{Fil}^1 D_R = ((Y+p)e_1+e_2)\cdot R[\frac{1}{p}]$ and $\mathrm{Fil}^2 D_R = 0$. Equip $D$ with the Frobenius endomorphism given by \[ \begin{split} \varphi(e_1) &= pe_1,\\ \varphi(e_2) &= e_2, \end{split} \] and with the connection given by $\nabla(e_1) = \nabla(e_2) = 0$. We have $t_H(D) = 1$ and $t_N(D, \mathfrak{p}) = 1$ for any $\mathfrak{p} \in \mathrm{Spec} R_0/pR_0$. Consider the $\varphi$-equivariant base change maps $b_0: R_0 = W(k)[\![Y]\!] \rightarrow W(k)$ given by $Y \mapsto 0$ and $b_g: R_0 \rightarrow R_{0, g}$, where $R_{0, g}$ is the $p$-adic completion of $\varinjlim_{\varphi}R_{0, (p)}$. Note that by the universal property of $p$-adic Witt vectors, we have a $\varphi$-equivariant isomorphism $R_{0, g} \cong W(k_g)$, where $k_g$ is the perfect closure $\varinjlim_{\varphi}\mathrm{Frac}(R_0/pR_0)$ of $\mathrm{Frac}(R_0/pR_0)$. To check that $D$ is weakly admissible, it suffices to show that the induced filtered $\varphi$-modules $D_0 \coloneqq D\otimes_{R_0, b_0} W(k)$ and $D_g \coloneqq D\otimes_{R_0, b_g} W(k_g)$ are weakly admissible. We have $D_0 = \overline{e_1}\cdot W(k)[\frac{1}{p}] \oplus \overline{e_2}\cdot W(k)[\frac{1}{p}]$ with $\mathrm{Fil}^1 D_0 = (p\overline{e_1}+\overline{e_2})\cdot W(k)[\frac{1}{p}]$.
It admits a strongly divisible $W(k)$-lattice $M_0 = p\overline{e_1}\cdot W(k)\oplus \frac{\overline{e_2}}{p}\cdot W(k)$ with $\mathrm{Fil}^1 M_0 = (p\overline{e_1}+\overline{e_2})\cdot W(k)$, so $D_0$ is weakly admissible. On the other hand, $D_g = e_1\cdot W(k_g)[\frac{1}{p}] \oplus e_2\cdot W(k_g)[\frac{1}{p}]$ with $\mathrm{Fil}^1 D_g = ((Y+p)e_1+e_2)\cdot W(k_g)[\frac{1}{p}]$. It admits a strongly divisible $W(k_g)$-lattice $M_g = e_1\cdot W(k_g)\oplus \frac{e_2}{p}\cdot W(k_g)$ with $\mathrm{Fil}^1 M_g = ((Y+p)e_1+e_2)\cdot W(k_g)$, so $D_g$ is weakly admissible. Hence, $D$ is a weakly admissible $R_0[\frac{1}{p}]$-module. However, the above computations and Proposition \ref{prop:3.1} show that $(W_e(D), W_{\mathrm{dR}}^{\nabla +}(D))$ is a $B$-pair which does not arise from a $\mathbf{Q}_p$-representation, since $Y+p$ is not a unit in $R[\frac{1}{p}]$. Thus, the relative case differs from the case where the base ring is a $p$-adic field, in which every $B$-pair that is semi-stable of slope $0$ arises from a $\mathbf{Q}_p$-representation. In particular, this answers in the negative the question raised in \cite[Section 8]{brinon-relative} of whether weak admissibility implies admissibility in the relative case. We summarize the above results in the following theorem. \begin{thm} \label{thm:3.2} Let $R = \mathcal{O}_K[\![Y]\!]$, where the residue field $k$ is algebraically closed. Let $V$ be a horizontal crystalline $\mathcal{G}_R$-representation of rank $2$ over $\mathbf{Q}_p$ with Hodge-Tate weights in $[0, 1]$ such that its associated isocrystal is reducible. Then $V$ arises from a $p$-divisible group over $R$. Moreover, there exists a $B$-pair which arises from a weakly admissible $R_0[\frac{1}{p}]$-module but does not arise from a $\mathbf{Q}_p$-representation.
\end{thm} \begin{rem} \label{rem:3.3} When $R_0$ is a general relative base ring over $W(k)\langle X^{\pm 1}\rangle$ with Krull dimension $2$ and $R = R_0\otimes_{W(k)}\mathcal{O}_K$ with ramification index $e < p-1$, it is proved in \cite{liu-moon-rel-cryst}, using a completely different method, that every crystalline $\mathcal{G}_R$-representation with Hodge-Tate weights in $[0, 1]$ arises from a $p$-divisible group over $R$. The argument in \cite{liu-moon-rel-cryst} crucially relies on the assumption that $e$ is small, even for the case $R = \mathcal{O}_K[\![Y]\!]$. \end{rem} \bibliographystyle{amsalpha}
\section{Introduction}\label{sec:introduction} An \emph{axiomatic theory} (\emph{theory} for short) is a set of axioms that specifies a set of mathematical structures. For example, the usual axioms for a group specify the mathematical structures that contain an associative binary function with an identity element and an inverse operation. Another example is the axioms of a complete ordered field that uniquely specify the real numbers (up to isomorphism). Theories serve as modular units of mathematical knowledge. Theories can be constructed by combining smaller theories. For example, a theory of fields is a combination of two copies of a theory of groups. Theories can also be connected to each other via meaning-preserving mappings called \emph{theory morphisms}\footnote{Theory morphisms are also known as \emph{immersions}, \emph{realizations}, \emph{theory interpretations}, \emph{translations}, and \emph{views}.}. (See~\cite{Farmer93} for some illustrative examples of theory morphisms.) A theory morphism from a theory $T_1$ to a theory $T_2$ maps the formulas valid in $T_1$ to formulas valid in $T_2$. For example, there are two natural theory morphisms from a theory of groups to a theory of fields. Theory morphisms serve as information conduits that enable theory components such as definitions and theorems to be transported from an abstract theory to a more concrete theory or an equally abstract theory~\cite{BarwiseSeligman97}. A network of theories connected by theory morphisms is called a \emph{theory graph}. The theories are the nodes of the graph, while the theory morphisms are the directed edges. We will argue in the next section that the architecture of a theory graph is well suited for organizing bodies of mathematical knowledge. A theory morphism connects concepts and facts in one theory with concepts and facts in another theory that might be formulated very differently from the first theory.
The theory morphisms thus make explicit part of the great interconnectedness of mathematical knowledge. Neither the traditional proofs given in mathematics papers nor the formal proofs produced by proof assistants (such as Coq~\cite{Coq8.8.2}, Isabelle~\cite{IsabelleWebSite}, and Mizar~\cite{MizarWebSite}) fulfill all the purposes that mathematical proofs have. Moreover, they also do not exploit the kind of connections exhibited within a theory graph. This paper introduces a new style of mathematical proof that fulfills the principal purposes of a mathematical proof as well as capitalizes on the connections provided by the theory morphisms in a theory graph. This new style of proof combines the strengths of traditional proofs with the strengths of formal proofs. The rest of the paper is organized as follows. Section~\ref{sec:theory-graphs} gives an introduction to \emph{theory graphs}. Section~\ref{sec:styles} discusses the different \emph{styles of proof}, focusing in particular on the traditional and formal styles. The notion of a \emph{cross check} that compares a new result with previous results is presented in section~\ref{sec:cross-checks}. The \emph{purposes of a mathematical proof} and how well traditional and formal proofs fulfill them are discussed in section~\ref{sec:purposes}. A \emph{new style of proof} in the context of a theory graph is introduced in section~\ref{sec:new-style}. And the paper ends with some concluding remarks in section~\ref{sec:conclusion}. The major contributions of the work presented in this paper are: \begin{enumerate} \item We discuss the importance in mathematics of cross checks. \item We describe eight purposes of a mathematical proof and compare how well traditional and formal proofs fulfill them. \item We introduce a new style of mathematical proof in the context of theory graphs that fulfills the purposes of a proof better than traditional and formal proofs do. 
\end{enumerate} \section{Theory Graphs}\label{sec:theory-graphs} A \emph{theory} is a triple $T=(L,\Sigma,\Gamma)$ where $L$ is a logic, $\Sigma$ is a \emph{language} of $L$, and $\Gamma$ is a set of formulas of $\Sigma$ called the \emph{axioms} of $T$. A \emph{model} for $T$ is an interpretation of $\Sigma$ in $L$ that satisfies all the members of $\Gamma$. A model of $T$ can be represented by a theory whose language has a symbol for each value in the model. Let $T_i = (L_i,\Sigma_i,\Gamma_i)$ be theories for $i=1,2$. A \emph{theory morphism from $T_1$ to $T_2$} is a triple $\Phi = (T_1,T_2,\phi)$ where $\phi$ is a mapping of the expressions in $\Sigma_1$ to the expressions in $\Sigma_2$ such that, if $A$ is a formula in $\Sigma_1$ that is a logical consequence of $\Gamma_1$ in $L_1$, then $\phi(A)$ is a formula in $\Sigma_2$ that is a logical consequence of $\Gamma_2$ in $L_2$. Roughly speaking, a theory morphism is a meaning-preserving syntactic mapping from one theory to another. The logics $L_1$ and $L_2$ may be different and the languages $\Sigma_1$ and $\Sigma_2$ may involve totally different vocabulary. $T_1$ and $T_2$ are called the \emph{source theory} and \emph{target theory} of $\Phi$, respectively. An \emph{instance} of $T_1$ is the target theory of any theory morphism whose source theory is $T_1$. An \emph{instance} of an expression $E_1$ in $\Sigma_1$ is $\phi(E_1)$ for any theory morphism of the form $(T_1,T_2,\phi)$. An \emph{inclusion} is a theory morphism $\Phi = (T_1,T_2,\phi)$ where $T_2$ is an extension of $T_1$ and $\phi$ is the identity mapping. Theory morphisms can be composed: the \emph{composition} of two theory morphisms $\Phi_1 = (T_1,T_2,\phi_1)$ and $\Phi_2 = (T_2,T_3,\phi_2)$ is the theory morphism $\Phi_1 \circ \Phi_2 = (T_1,T_3,\phi_1 \circ \phi_2)$. A \emph{theory graph}~\cite{Kohlhase14} is a directed graph whose nodes are theories and whose edges are theory morphisms.
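These definitions can be mirrored in a short executable sketch. The following Python fragment is our own illustration (the names \texttt{Theory}, \texttt{Morphism}, and \texttt{compose} are hypothetical and do not come from any existing theory-graph system); it models a theory as a triple and composes two morphisms in the sense defined above:

```python
# Hypothetical model of theories and theory morphisms as defined in the text:
# a theory is a triple (logic, language, axioms), and a morphism carries a
# syntactic mapping phi on expressions of the source language.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Theory:
    logic: str
    language: frozenset
    axioms: frozenset

@dataclass
class Morphism:
    source: Theory
    target: Theory
    phi: Callable[[str], str]

def compose(m1: Morphism, m2: Morphism) -> Morphism:
    """Composition Phi_1 o Phi_2 of m1 : T1 -> T2 and m2 : T2 -> T3."""
    assert m1.target == m2.source, "morphisms must be composable"
    return Morphism(m1.source, m2.target, lambda e: m2.phi(m1.phi(e)))

# Toy example: an abstract monoid mapped into additive notation.
monoid = Theory("FOL", frozenset({"op", "e"}),
                frozenset({"op(op(x, y), z) = op(x, op(y, z))",
                           "op(e, x) = x"}))
additive = Theory("FOL", frozenset({"+", "0"}), frozenset())  # axioms elided

to_additive = Morphism(monoid, additive,
                       lambda expr: expr.replace("op", "+").replace("e", "0"))
identity = Morphism(additive, additive, lambda expr: expr)

composed = compose(to_additive, identity)
print(composed.phi("op(e, x) = x"))  # +(0, x) = x
```

A naive string substitution stands in for a real expression translation here; the point is only the shape of the data, namely that a composed morphism translates through both mappings in sequence.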
In a theory graph, mathematical knowledge is distributed over the set of theories in the graph. A theory graph provides an advantageous architecture for a \emph{digital mathematics library (DML)}~\cite{Bouche10} for the following reasons: \begin{enumerate} \item In accordance with the \emph{little theories method}~\cite{FarmerEtAl92b}, the development of a mathematical topic can be done in the theory in a theory graph that has the most convenient underlying logic, the most convenient level of abstraction, and the most convenient vocabulary, and then concepts and facts produced in the development can be transported to many other contexts via the theory morphisms in the theory graph. For example, concepts and facts about a group can be developed in a theory of an abstract group and then transported to both the additive and multiplicative contexts in a theory of fields. The little theories method thus enables a large body of mathematical knowledge to be developed with a minimal amount of redundancy. \item The results developed in a theory $T_1$ can be shared with another theory $T_2$ that has a different underlying logic, vocabulary, or axiomatization as long as $T_2$ exhibits the same conceptual structure as $T_1$. This allows the mathematical knowledge in a theory graph to be highly distributed. \item Two theories $T_1$ and $T_2$ representing different developments of the same mathematical topic based on different axiomatizations of the topic can be shown to be equivalent by producing a theory morphism from $T_1$ to $T_2$ and another from $T_2$ to $T_1$. The set of equivalent theories can be consolidated into a single structure called a \emph{realm}~\cite{CaretteEtAl14}. \item The theories in a theory graph can be developed independently in parallel and then later integrated with each other by defining appropriate theory morphisms between them. This allows a theory graph to be built by multiple independent teams working in parallel.
\item Concepts and facts in a theory graph that are relevant to the development in a theory $T$ can be found by following theory morphisms to $T$ backwards to their source theories. Concepts and facts several ``steps'' away from $T$ can be found by following compositions of theory morphisms backwards. The concepts and facts found in this way could reside in theories that are quite different from $T$ and possibly even unknown to the developers of $T$. \end{enumerate} Substantial software support is needed to realize the benefits of a theory graph. Although significant progress has been made in developing software based on and inspired by theory graphs, there is not yet a full system for developing and organizing a DML as a theory graph. The {\mbox{\sc imps}} theorem proving system~\cite{FarmerEtAl93,FarmerEtAl96} represents mathematical knowledge as a theory graph. However, all the theories in the {\mbox{\sc imps}} theory library employ the same underlying logic, {\mbox{\sc lutins}}~\cite{Farmer90,Farmer93b,Farmer94}, a version of Church's type theory with undefinedness and partial functions. In {\mbox{\sc imps}}, theory morphisms are used to transport definitions and theorems from one theory to another. They are also used to find, for the user, relevant theorems that reside outside of the theory in which the user is working. {\mbox{\sc mmt}}~\cite{RabeKohlhase13} is a foundation-independent framework for representing mathematical knowledge as a theory graph developed largely by Florian Rabe within the KWARC research group~\cite{KWARCWebSite} at Friedrich-Alexander University. It does not currently include the kind of tools for developing mathematical knowledge that proof assistants provide, but it is a major step towards a software system that can be used to build a theory-graph-based DML.
Two other notable KWARC projects that build on the notion of a theory graph are the OAF project~\cite{OAFWebSite} on integrating formal libraries and the LATIN project~\cite{LATINWebSite} on formalizing logics and logic translations. Theory morphisms are used in other proof assistants, for example, Isabelle~\cite{Ballarin14} and PVS~\cite{OwreShankar01}. \section{Styles of Mathematical Proof}\label{sec:styles} A \emph{proof} is a deductive argument intended to show that a mathematical statement is a logical consequence of a set of premises. There are many styles of proof. Some proofs \emph{describe} a deduction of the statement from the premises, while other proofs \emph{prescribe} the steps needed to produce the deduction. Many proofs are presented in a \emph{two-column format} where each line in the left column is an intermediate result in a deduction and the corresponding line in the right column explains how the result is obtained. Some proofs contain \emph{computations} (e.g., numeric or algebraic simplifications) or \emph{constructions} (e.g., via straightedge and compass). Some are fully \emph{constructive} in the sense that they strictly adhere to the principles of constructive logic. \emph{Geometry proofs} are deductions guided by a geometric drawing. \emph{Visual proofs} are presented by a series of diagrams or an animation. The proofs presented in mathematical books and articles usually exhibit a particular style that we call the \emph{traditional proof style}. Proofs of this style are arguments written in a stylized form of natural language with a heavy use of special symbols. In traditional proofs the terminology and notation may be ambiguous, assumptions may be unstated, and the argument may contain logical gaps. However, the reader is expected to be able to resolve the ambiguities, identify the unstated assumptions, and fill in the gaps in the argument. 
The writer --- whose purpose is to serve some particular community of readers --- has the freedom to express the argument in whatever manner is deemed most effective. This includes exhibiting other styles of proof within the traditional style. The \emph{formal proof style} is to present a proof as a derivation in a proof system for a formal logic. Formal proofs can be interactively developed and mechanically checked using proof assistants. This style of proof is highly constrained by the logic, proof system, and the fact that every detail must be verified. On the other hand, there is a very high level of assurance that the statement proved is indeed a theorem of the proof system. Although the traditional proof style dominates mathematics, the formal proof style is beginning to make some modest inroads in mathematical practice. For example, see the special issue of the Notices of the AMS on formal proof~\cite{HalesEtAl08}. \section{Cross Checks}\label{sec:cross-checks} A proof by itself does not establish that the theorem it proves is correct since there is always the possibility of error. Error is even possible if the proof is machine checked because the proof may be valid but the theorem may not be correctly stated. For example, one can conjecture that a mathematical object has a certain property, prove the conjecture, and then conclude that the object does indeed possess the property. But the property may have been expressed incorrectly in a way that is not easily noticed. Since proofs may be incorrect and theorems may be misstated, mathematicians are usually reluctant to accept a theorem on the basis of its proof alone. Georg Kreisel has noted in several of his papers, e.g., in \cite[p.~126]{Kreisel77} and \cite[p.~145]{Kreisel85}, that a better way to avoid error than carefully checking a proof is to use \emph{cross checks} to compare the result with known facts.
For example, the proof can be checked against similarly structured proofs and the theorem can be compared with consequences of the theorem or related versions of the theorem that have been independently proved. Although cross checks are very important, they are rarely written down and are not considered as part of either a traditional or a formal proof. Here are some examples of cross checks: \begin{enumerate} \item Let $P$ be a proof of a theorem $A$. A cross check would be to verify that a proof similar to $P$ proves a theorem similar to $A$. \item Let $A$ be a theorem asserting that each member of a set $S$ of objects satisfies a certain property $P$. A commonly employed cross check would be to verify independently from the proof of $A$ that $P$ is satisfied by certain special members of $S$ like the empty set, the empty function, constant functions, etc. \item Let $A$ be a theorem and $B$ be a statement that is the ``dual'' of $A$ in some sense and that is expected to hold if $A$ holds. For instance, if $A$ is a statement involving universal quantification, $B$ could be the dual statement involving existential instead of universal quantification. A cross check would be to verify $B$ independently from the proof of $A$. \item Let $A$ be a theorem expressed as an algebraic statement. A cross check would be to verify a geometric analog of $A$ independently of the proof of $A$. \end{enumerate} In the context of a theory graph, there are two main ways of representing a cross check. The first is as a tuple $(P_1,T_1,P_2,T_2)$ where $P_i$ is a proof in a theory $T_i$ for $i=1,2$ and $P_1$ and $P_2$ have a similar structure. This cross check succeeds if $P_1$ and $P_2$ prove similar theorems and fails otherwise. The second is as a tuple $(A_1,T_1,A_2,T_2,\Phi)$ where: \begin{enumerate} \item $A_i$ is a theorem of a theory $T_i$ for $i=1,2$. \item $\Phi$ is a theory morphism $(T_1,T_2,\phi)$. \item $A_2$ is expected to follow from $\phi(A_1)$ in $T_2$.
\end{enumerate} \noindent $A_2$ could be, for example, a formulation of $A_1$ in a theory $T_2$ that is a more concrete setting than $T_1$ or the dual of $A_1$ under some notion of duality captured by $\Phi$. Notice that, if $\Phi$ is an inclusion, then $A_2$ is actually expected to follow from $A_1$ in $T_2$. In this case, $A_2$ could be a special case of $A_1$ or a corollary of $A_1$. This cross check succeeds if $A_2$ indeed follows from $\phi(A_1)$ in $T_2$ and otherwise fails. A failed cross check could indicate that a mistake has been made or that something is not adequately understood. Thus failed cross checks are valuable because they can lead to finding hidden mistakes and making new discoveries. Also, if the proof or statement of a fully verified theorem with cross checks is ever modified in the future, the cross checks can be used to discover errors that are introduced by the modification. \section{Purposes of a Mathematical Proof}\label{sec:purposes} Mathematical proofs serve several purposes. So what are they? Various purposes have been discussed in the mathematics literature~\cite{Bell76,CadwalladerOlsker11,deVilliers90,deVilliers99,Hanna00,Hersh93,LofwallHemmi09}. Michael de Villiers presents in~\cite{deVilliers99} a list of the following six purposes: \begin{enumerate} \item \emph{Verification} (concerned with the truth of a statement). \item \emph{Explanation} (providing insight into why it is true). \item \emph{Systematization} (the organisation of various results into a deductive system of axioms, major concepts and theorems). \item \emph{Discovery} (the discovery or invention of new results). \item \emph{Communication} (the transmission of mathematical knowledge). \item \emph{Intellectual challenge} (the self-realization/fulfillment derived from constructing a proof). \end{enumerate} We claim that mathematical proofs serve eight principal purposes, four of which are not on de Villiers' list. 
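The second form of cross check can also be rendered as a small executable sketch. The fragment below is purely illustrative (the names \texttt{cross\_check} and \texttt{entails} are our own, and the trivial syntactic \texttt{entails} merely stands in for a real prover in the target theory):

```python
# Hypothetical sketch of a cross check (A1, T1, A2, T2, Phi): it succeeds
# when A2 follows from phi(A1) in the target theory. A real system would
# invoke a prover here; `entails` is a trivial syntactic stand-in.

def cross_check(a1: str, a2: str, phi, entails) -> bool:
    return entails(phi(a1), a2)

# Phi maps abstract-group notation to additive notation.
phi = lambda expr: expr.replace("op", "+").replace("inv", "-")

# Stand-in entailment: only recognizes literal identity.
entails = lambda premise, goal: premise == goal

print(cross_check("op(x, inv(x)) = e", "+(x, -(x)) = e", phi, entails))      # True
print(cross_check("op(x, y) = op(y, x)", "+(x, y) = +(y, y)", phi, entails)) # False
```

A failed check, as in the second call, signals either a mistake or a misunderstood relationship between the two theories, which is exactly the diagnostic value discussed above.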
For each of the eight, we describe what the purpose is and compare how well traditional and formal proofs fulfill the purpose. \subsection*{Purpose 1: Communication} The main purpose of a proof given in a textbook or scientific article is to \emph{communicate} to the reader why a mathematical statement follows from a set of premises. Proofs constructed for communication are used to convey insight and to build intuition. The highly flexible style of traditional proofs is usually a much better vehicle for communication than the highly constrained style of formal proofs. This is especially true when the writer is more concerned about high-level ideas than low-level details (that often can be mechanically checked by computation). However, formal proofs can be much more effective at presenting intricate syntactic manipulations than traditional proofs. (This purpose combines de Villiers' \emph{explanation} and \emph{communication} purposes.) \subsection*{Purpose 2: Certification} Another important purpose of a proof is to \emph{certify} that a mathematical statement follows from a set of premises. Such a proof serves as a certificate that can be independently checked. Since a traditional proof is written for a particular audience, it may not be easily checked by someone outside of this audience. Moreover, a traditional proof may contain mistakes that are not easily noticed by a reader, even a reader in the intended audience. In contrast, a formal proof can be mechanically checked by software alone. A formal proof thus offers the highest level of certification. (This purpose includes de Villiers' \emph{verification} purpose.) \subsection*{Purpose 3: Organization} Mathematical knowledge is usually \emph{organized} as a deductive edifice composed of axioms, definitions, theorems, and proofs. The proofs are the threads that hold the edifice together. 
Any body of mathematical knowledge built without proofs will almost certainly contain falsehoods and contradictions that compromise its deductive structure. Both traditional and formal proofs are very effective tools for organizing mathematical knowledge as a deductive structure, but formal proofs are somewhat better since their correctness can be machine checked. (This purpose is the same as de Villiers' \emph{systematization} purpose.) \subsection*{Purpose 4: Discovery} A proof is often formulated to be a provisional argument that a mathematician can use to \emph{discover} new theorems. This idea is brilliantly expressed in \emph{Proofs and Refutations} by Imre Lakatos~\cite{Lakatos76}. See also Yehuda Rav, ``Why Do We Prove Theorems?''~\cite{Rav99}. Traditional proofs are well suited for expressing provisional arguments that can be analyzed by humans. Formal proofs are too rigid to express provisional arguments and thus are poorly suited for this task. On the other hand, machines can be used to discover various kinds of structure embodied in a formal proof, but it is much more difficult to analyze traditional proofs in this way. (This purpose is the same as de Villiers' \emph{discovery} purpose.) \subsection*{Purpose 5: Learning} The most effective way to \emph{learn} mathematics is to read and write proofs. Traditional proofs are today generally much easier to read and write than formal proofs. However, a reader of a traditional proof may have to work hard to resolve ambiguities, identify unstated assumptions, and fill in the gaps in the argument, and a writer may have to work hard to verify that each step of the argument is valid. With effective software support, reading and writing formal proofs could become almost as easy as reading and writing traditional proofs. (This purpose is not explicitly included in de Villiers' list of purposes.) 
\subsection*{Purpose 6: Universality} A proof is \emph{universal} if it is expressed without any superfluous ideas and can thus be applied in every context in which the conditions of the proof hold. Universality is not absolute; it depends on audience and context. A proof can be universal with respect to one audience and context but not with respect to another. Traditional proofs can be expressed in a universal manner, but the underlying mathematical foundation is usually implicit. Traditional proofs are thus untethered; they do not have a precise mathematical home. Formal proofs have a precise mathematical home, but the home is usually not connected to many other contexts in which the proof can be applied. Hence both traditional and formal proofs fall short in achieving universality. (This purpose is not included in de Villiers' list of purposes.) \subsection*{Purpose 7: Coherency} A theorem is \emph{coherent} with a body of mathematical knowledge if it properly fits into the body without any contradictions or unexpected relationships. A traditional or formal proof by itself does not establish that the theorem it proves is coherent with other mathematical knowledge. Coherency is established by cross checks. Although cross checks are very important, they are rarely written down and are not considered as part of either a traditional or a formal proof. (This purpose is not explicitly included in de Villiers' list of purposes.) \subsection*{Purpose 8: Beauty} Mathematics is a utilitarian art form like architecture or industrial design. The desire to create \emph{beauty} (what mathematicians call \emph{elegance}) is one of the strongest driving forces in mathematics. Mathematicians seek to develop proofs that are beautiful as well as correct. Indeed some mathematicians will not accept a theorem until an elegant proof of the theorem has been found. 
It is safe to say that most mathematicians find it easier to write beautiful proofs in the highly flexible traditional proof style than in the highly constrained formal proof style. (This purpose is not included in de Villiers' list of purposes.) \subsection*{Summary} Table~\ref{tab:summary} summarizes the differences between traditional and formal proofs. As can be seen, neither traditional proofs nor formal proofs fulfill the eight purposes that we claim mathematical proofs have. Furthermore, both styles lack the capacity to fully achieve universality and coherency. \begin{table}[ht] \begin{center} \begin{tabular}{|l|c|c|} \hline & \textbf{Traditional Proofs} &\textbf{Formal Proofs}\\ \hline \textbf{Communication} & {\rotatebox[origin=c]{90}{\CIRCLE}} & {\rotatebox[origin=c]{-90}{\RIGHTcircle}}\\ \hline \textbf{Certification} & {\rotatebox[origin=c]{-90}{\RIGHTcircle}} & {\rotatebox[origin=c]{90}{\CIRCLE}}\\ \hline \textbf{Organization} & {\rotatebox[origin=c]{90}{\RIGHTcircle}} & {\rotatebox[origin=c]{90}{\CIRCLE}}\\ \hline \textbf{Discovery (Human)} & {\rotatebox[origin=c]{90}{\CIRCLE}} & {\rotatebox[origin=c]{90}{\Circle}}\\ \hline \textbf{Discovery (Machine)} & {\rotatebox[origin=c]{90}{\Circle}} & {\rotatebox[origin=c]{90}{\CIRCLE}}\\ \hline \textbf{Learning (Reading)} & {\rotatebox[origin=c]{90}{\RIGHTcircle}} & {\rotatebox[origin=c]{-90}{\RIGHTcircle}}\\ \hline \textbf{Learning (Writing)} & {\rotatebox[origin=c]{90}{\RIGHTcircle}} & {\rotatebox[origin=c]{-90}{\RIGHTcircle}}\\ \hline \textbf{Universality} & {\rotatebox[origin=c]{-90}{\RIGHTcircle}} & {\rotatebox[origin=c]{-90}{\RIGHTcircle}}\\ \hline \textbf{Coherency} & {\rotatebox[origin=c]{90}{\Circle}} & {\rotatebox[origin=c]{90}{\Circle}}\\ \hline \textbf{Beauty} & {\rotatebox[origin=c]{90}{\CIRCLE}} & {\rotatebox[origin=c]{90}{\Circle}}\\ \hline \end{tabular} \medskip $\rotatebox[origin=c]{90}{\CIRCLE}$ : high; $\rotatebox[origin=c]{90}{\RIGHTcircle}$ : medium high; 
$\rotatebox[origin=c]{-90}{\RIGHTcircle}$ : medium low; $\rotatebox[origin=c]{90}{\Circle}$ : low. \end{center} \caption{Traditional vs.~Formal Proofs} \label{tab:summary} \end{table} \section{A New Style of Proof}\label{sec:new-style} Since traditional and formal proofs do not adequately achieve universality and coherency, they are not adequate for building theory graphs. We therefore propose a new style of proof that fulfills universality and coherency as well as the other six purposes described in the previous section. Let \mname{TG} be a theory graph. A proof in \mname{TG} of this new proof style has four components: \begin{enumerate} \item A \emph{home theory} $\mname{HT} = (\mname{Log}, \mname{Lang}, \mname{Axms})$ where \mname{Log} is a formal logic, \mname{Lang} is a language in \mname{Log}, and \mname{Axms} is a set of formulas in \mname{Lang}. \item A \emph{theorem} \mname{Thm} that is a formula in \mname{Lang} purported to be a logical consequence of \mname{Axms}. \item An \emph{argument} \mname{Arg} that shows \mname{Thm} is a logical consequence of \mname{Axms}. \item A set \mname{CC} of \emph{cross checks} of the two forms mentioned in section~\ref{sec:cross-checks} that compare \mname{Arg} with similar arguments in \mname{TG} and \mname{Thm} with related theorems in \mname{HT} or in other theories in \mname{TG}. \end{enumerate} The home theory \mname{HT} is a node in \mname{TG} and a formal context for the proof. It is connected via theory morphisms to other theories in \mname{TG}. Ideally, the home theory is at the optimal level of abstraction for the proof and contains only the concepts and assumptions needed to express the proof's argument and theorem. The theorem \mname{Thm} is a formal statement of what the proof's argument shows. It can be transported via appropriate theory morphisms to other theories in which the conditions of the proof hold.
\mname{HT} and \mname{Thm} together thus serve as a specification of the set of theories $T$ and formulas $A$ in \mname{TG} such that $T$ is an instance of \mname{HT} and $A$ is an instance of \mname{Thm} under some theory morphism. In this way, the proof fulfills the purpose of universality. The argument \mname{Arg} has both a traditional component for communication, organization, human-oriented discovery, learning, and beauty and a formal component for certification, organization, learning, and machine-oriented discovery. The two components are tightly integrated so that, for example, a reader of the traditional component can switch, if desired, to the formal component when a gap in the argument is reached. It is not necessary that the formal component be a complete formal proof of the theorem. The formal component can even be totally absent. Thus the proof is \emph{flexiformal}~\cite{Kohlhase13} in the sense that it is a mixture of formal and informal components. The set of cross checks should be carefully chosen to show that the theorem is coherent with the web of previously established facts in \mname{TG}. With the set \mname{CC}, the proof thus fulfills the purpose of coherency. In summary, the new style of proof we propose is a mixture of the traditional and formal proof styles in which the context of the proof and the statement proved are formal, the argument of the proof is expressed in a traditional style, and parts of the argument may be integrated with formal derivations. The home theory of the proof is a node in the theory graph \mname{TG} that is an optimal expression of the context of the proof. And the cross checks of the proof connect the proof and the theorem to similar proofs and related theorems in the theory graph. \section{Conclusion}\label{sec:conclusion} We have shown that a theory graph is a network of theories connected by theory morphisms in which mathematical knowledge is distributed across the network of theories.
The underlying logics of the theories can be different and the languages of the theories can vary greatly. The theories can be organized according to the little theories method and can be developed independently in parallel. The theory morphisms capture many of the connections in the mathematical knowledge represented in the theory graph. As a result, they can be used to find concepts and facts relevant to a theory $T$ that reside outside of $T$ in other, possibly quite different, theories. With these attributes, a theory graph is well suited to be the architecture for a large-scale, multifoundational, highly connected, and highly distributed DML. This is particularly true for a DML whose mathematical knowledge is intended to be formal or flexiformal. The obvious example of such a DML is the \emph{Global Digital Mathematics Library (GDML)}~\cite{IonWatt17} proposed by the International Mathematical Union (IMU)~\cite{IMUWebSite}. Proofs have a crucial role to play in building a GDML. However, traditional and formal proofs do not adequately fulfill all the purposes of a proof that we presented in section~\ref{sec:purposes}. Traditional proofs are good for communication, organization, human discovery, learning, and beauty, while formal proofs are good for certification, organization, and machine discovery. But neither traditional nor formal proofs are especially good for universality and coherency. To capitalize on the structure offered by a theory graph, we have proposed a new style of proof that merges the traditional and formal styles of proof, achieves universality using the little theories method, and incorporates cross checks to establish coherency. We believe this proof style will promote the development of highly structured DMLs while preserving the benefits of both traditional and formal proofs. \section*{Acknowledgments} The author would like to thank the referees for their comments. This research was supported by NSERC.
{ "timestamp": "2018-12-04T02:09:53", "yymm": "1806", "arxiv_id": "1806.00810", "language": "en", "url": "https://arxiv.org/abs/1806.00810" }
\section{Introduction} Measuring the magnitude of quantum chaos is an important fundamental problem which has several applications \cite{qsr_FrischNature2000,qsr_PokharelSciRep2018,qsr_ChenarXiv2018}. It is well known that certain quantum mechanical systems are dynamically equivalent to black holes in quantum gravity \cite{qsr_CameliaNature1998,qsr_GiddingsPRD2013,qsr_NomuraPRD2013} which are important theoretical objects studied by physicists \cite{qsr_AharonyPhysRep2000,qsr_BanksPRD1997}. Quantum chaos also has important implications in the fields of quantum information processing and many body quantum systems \cite{qsr_UllmoRPP2008,qsr_KosarXiv2017,qsr_AlessioAP2016}. The scrambling of quantum information has been identified as a good diagnostic tool to measure the magnitude of chaos in a system. Scrambling \cite{qsr_HosurJHEP2016,qsr_SekinoJHEP2008} is a process where a local perturbation spreads over the degrees of freedom of a quantum many body system. Scrambling implies the delocalization of quantum information and once a system has reached a scrambled state, it is impossible to learn about initial perturbations by performing any measurement on the final state. Initially commuting operators grow under time evolution to have large commutators with each other and other operators. Scrambling time refers to the time when the state of the system becomes scrambled. This is a quantum mechanical interpretation of the butterfly effect \cite{qsr_ShenkerJHEP2014}. An object known as the out-of-time-ordered correlator (OTOC) \cite{qsr_LiPRX2017,qsr_MaldacenaJHEP2016,qsr_KitaevFPPS2014} has been proposed to measure the scrambling time. In this work, we present a general theoretical protocol to measure the decay of the OTOC and subsequently measure the scrambling of information in an Ising spin model simulated for Rydberg atoms. While this protocol has been attempted experimentally, it has been performed under certain assumptions. 
We present a general method to simulate this protocol theoretically which could lead to further probing in the field. It can be applied on strongly correlated quantum many body systems, which have been difficult to study experimentally \cite{qsr_BlochPSSB2010}. We attempt to test the protocol by using the IBM quantum processor, `ibmqx4', by simulating a two-spin Ising model for Rydberg atoms \cite{qsr_KimIEEE2017,qsr_CortinasIEEE2017,qsr_SchaussQST2018}. The IBM Quantum Experience allows the quantum computing community to access multiple quantum processors and a large number of experiments have been done using their platform \cite{qsr_SrinivasanarXiv2018,qsr_GangopadhyayQIP2018,qsr_AlsinaPRA2016,qsr_GhoshQIP2018,qsr_BeheraarXiv542017,qsr_DasharXiv2017,qsr_GurnaniarXiv2017,qsr_SatyajitarXiv2017,qsr_KandalaNat2017,qsr_VishnuPKarXiv2017,qsr_SisodiaQIP2017,qsr_KalraarXiv2017,qsr_RoyarXiv2017,qsr_ViyuelaNQI2018,qsr_BeheraQIP3122017,qsr_HegadearXiv2017,qsr_DasharXiv782018,qsr_SrinivasanarXiv10928,qsr_Beheraarxiv06530}. As the Ising model is a two-level spin model, it is well suited to study with quantum computation techniques using qubits. The scrambling time is measured through the decay of the out-of-time-ordered correlation (OTOC) function. \begin{equation} F(t) = \big \langle W_t^\dagger V^\dagger W_t V \big \rangle \end{equation} where V and W are unitary operators which commute at time $t = 0$. $W_t$ and $U(t)$ are the Heisenberg and time evolution operators defined as $W_t = U(-t)WU(t)$ and $U(t) = e^{-i\mathcal{H} t}$ respectively. Here, $\mathcal{H}$ represents the time-independent dynamic Hamiltonian of the system. F(t) enables us to measure the time when the initially commuting operators, V and W, fail to commute. The relation between the OTOC function and the unitary operators is expressed as follows.
\begin{equation} \big \langle \big |\big[ W_t,V \big] \big | ^2 \big \rangle = 2(1 - Re[F]) \end{equation} Consider a system with $N$ spins. For the system to reach a scrambled state, the information in one spin must spread to all the spins \cite{qsr_SwinglePRA2016}. The smallest perturbation in the system involves at least two spins. If unitary operations are performed on any two spins at a regular time interval $\delta t$, the scrambling time is \begin{equation} t^* = \delta t \log_2 N \end{equation} $\big \langle \big | \big[ W_t,V \big] \big | ^2 \big \rangle$ is of order unity at timescales around the scrambling time. Hence Re[F] is an appropriate quantity for assessing scrambling. \section{Results} \textbf{Interferometric Protocol}: It has been shown that F(t) can be measured via many-body interferometry \cite{qsr_ViyuelaNQI2018,qsr_MullerPRL2009,qsr_AbaninPRL2012,qsr_PedernalesPRL2014,qsr_BohrdtNJP2017}. The interferometric protocol, discussed by Swingle \emph{et al.} \cite{qsr_SwinglePRA2016,qsr_SwinglearXiv2018}, lays out a method to measure the OTOC function. However, in experimental realizations of this protocol, some assumptions have always been made owing to the extreme challenges of the experimental setup: the protocol involves backward and forward evolution in time under an identical Hamiltonian, which is difficult to realize in practice without dissipative effects. We present a generic quantum circuit to simulate this protocol theoretically \cite{qsr_SwinglePRA2016,qsr_SwinglearXiv2018}. Working with a simulation avoids the inherent experimental challenges \cite{qsr_SwinglePRA2016} and allows us to measure the OTOC function without considering dissipative effects.
\par Consider an $n$-qubit quantum system initially in the state $| \psi \rangle_s$ and a control qubit ($\ket{\psi_c}$) initially in the state $\ket{0}$. By using the control qubit, we can prepare multiple branches of the system by applying appropriate controlled operations. We apply a Hadamard gate to transform the control qubit into the state $\frac{|0 \rangle + |1 \rangle}{\sqrt{2}}$. We then prepare a final state in which one branch undergoes the operation $V W_t$ and the other branch undergoes the operation $W_t V$. The OTOC function F(t) measures the overlap between these two branches. This is achieved by preparing the resultant state \begin{equation} | \psi _f \rangle = \frac{(V W_t |\psi\rangle_s ) |0\rangle_c + (W_t V |\psi\rangle _s)|1 \rangle _c}{\sqrt{2}} \end{equation} The control qubit is then measured in the X-basis, and its expectation value $\big \langle X \big \rangle$ is equal to Re[F]. This can be obtained via the generic quantum circuit depicted in Fig. \ref{qsr_fig1}. \begin{figure}[H] \includegraphics[scale=0.95]{qsr_fig1.pdf} \caption{\textbf{Circuit implementing the general interferometric protocol}. Here $\ket{\psi_s}$ and $\ket{\psi_c}$ represent the states of the system and the control qubit respectively. V and W are the initially commuting unitary operators, and U(t) is the time-evolution operator. The controlled-V, W and U(t) operations are decomposed into products of single-qubit and CNOT gates to design the quantum circuit on the IBM quantum processor, ibmqx4. The measurement of the control qubit is performed in the X-basis.} \label{qsr_fig1} \end{figure} Coupling of neutral atoms to highly excited Rydberg states has been shown to be a promising way to simulate the Ising spin model \cite{qsr_KimIEEE2017,qsr_SchaussQST2018,qsr_LabuhnNature2016,qsr_NguyenPRX2018,qsr_AugerPRA2018}.
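Before specializing to the Rydberg model, the protocol identity $\big\langle X \big\rangle = $ Re[F] can be checked numerically on a minimal single-qubit example. The Hamiltonian and operators below are illustrative assumptions, not the model studied in this paper; the sketch only verifies the logic of the branch construction.

```python
import numpy as np

# Toy check of the protocol identity <X>_control = Re F(t).
# Hsys, V and W here are illustrative assumptions only.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_herm(H, t):
    """U(t) = exp(-i H t) for a Hermitian H, via eigendecomposition."""
    w, P = np.linalg.eigh(H)
    return P @ np.diag(np.exp(-1j * w * t)) @ P.conj().T

Hsys = X                        # toy single-qubit Hamiltonian (assumption)
V = expm_herm(Z, np.pi / 4)     # V = exp(-i pi/4 sigma_z)
W = V.copy()                    # W = V, so [V, W] = 0 at t = 0

t = 0.7
U = expm_herm(Hsys, t)
Wt = U.conj().T @ W @ U         # Heisenberg operator W_t = U(-t) W U(t)

psi = np.array([1, 0], dtype=complex)   # system in |0>
F_direct = psi.conj() @ Wt.conj().T @ V.conj().T @ Wt @ V @ psi

# |psi_f> = (V W_t |psi>|0>_c + W_t V |psi>|1>_c) / sqrt(2)
branch0 = V @ Wt @ psi
branch1 = Wt @ V @ psi
psi_f = (np.kron(branch0, [1, 0]) + np.kron(branch1, [0, 1])) / np.sqrt(2)

# Measuring the control qubit in the X basis reproduces Re F(t)
X_ctrl = np.kron(I2, X)
expX = np.real(psi_f.conj() @ X_ctrl @ psi_f)
```

Because the two branches are attached to orthogonal control states, $\langle X\rangle$ on the control extracts exactly the real part of the overlap $\langle\psi| W_t^\dagger V^\dagger W_t V|\psi\rangle$, which is the statement used in the circuit of Fig. \ref{qsr_fig1}.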
Consider a driving resonant laser with Rabi frequency $\Omega$ that couples atoms in a ground state $\ket{g_i}$ to a highly excited Rydberg state $\ket{r_i}$. The resulting Rydberg atom pairs experience strong, repulsive van der Waals interactions of the type $V_{ij} = \frac{C}{R_{ij}^{6}}$, where $R_{ij}$ is the distance between the Rydberg atom pair $(i, j)$ and $C > 0$. The general Hamiltonian of an Ising spin model for Rydberg atoms \cite{qsr_KimIEEE2017} can be represented by \begin{equation} \mathcal{H} = \hbar \Omega \sum_{i} \hat{\sigma_x}^i + \sum_{i<j} V_{ij} \hat{n_i}\hat{n_j} , \end{equation} where $\hbar$ is the reduced Planck constant, $\hat{\sigma_x}^i$ is the Pauli-X operator acting on the $i^{th}$ spin, and $\hat{n_i}$ is the Rydberg number operator of the $i^{th}$ atom. The total spin operator is given as \begin{equation} S_Z = \sum_{i} \hat{\sigma_z}^i \end{equation} where $\hat{\sigma_z}^i$ is the Pauli-Z operator acting on the $i^{th}$ spin. The unitary operators V and W are chosen such that they commute at time t = 0 and satisfy the conditions required for the interferometric protocol: $V = W = e^{-i \phi S_Z}$, where $\phi = \pi/4$. The time-evolution operator U(t) and the commuting operators V and W are decomposed into unitary gates which can be applied directly to qubits. Without loss of generality, we take $\ket{0}$ and $\ket{1}$ to represent the ground state and the excited state of the Rydberg atom respectively, which allows us to perform the simulation on a quantum computer. We consider two different initial states to illustrate the power of this technique. For the first set of data, we consider an initial state where both qubits are in the ground state, i.e., $\ket{\psi _s} = \ket{00}$. In this case, the interactions between the qubits develop as a result of the dynamics of the system and are expressed through the time-evolution operator U(t).
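For two spins, the operators above are small enough to build explicitly and the commutator identity $\big\langle |[W_t, V]|^2 \big\rangle = 2(1 - $ Re[F]$)$ can be checked exactly. The sketch below assumes $\hbar\Omega = 1$ and $V_{12} = 1$, and uses the $\hat{\sigma_z}\hat{\sigma_z}$ form of the interaction given in Methods; these parameter choices are illustrative, not the values used for the figures.

```python
import numpy as np

# Exact OTOC for the two-spin model, assuming hbar*Omega = 1 and
# V12 = 1, with the sigma_z sigma_z interaction form from Methods.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

V12 = 1.0
H = np.kron(sx, I2) + np.kron(I2, sx) + V12 * np.kron(sz, sz)
SZ = np.kron(sz, I2) + np.kron(I2, sz)           # total spin S_Z

def expm_herm(A, t):
    w, P = np.linalg.eigh(A)
    return P @ np.diag(np.exp(-1j * w * t)) @ P.conj().T

Vop = expm_herm(SZ, np.pi / 4)                   # V = exp(-i pi/4 S_Z)
Wop = Vop.copy()                                 # W = V

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0                                     # initial state |00>

def otoc(t):
    U = expm_herm(H, t)
    Wt = U.conj().T @ Wop @ U                    # W_t = U(-t) W U(t)
    return psi.conj() @ Wt.conj().T @ Vop.conj().T @ Wt @ Vop @ psi

F0 = otoc(0.0)                                   # = 1: V, W commute at t = 0

# Check <|[W_t, V]|^2> = 2 (1 - Re F) at an arbitrary later time
t = 2.0
U = expm_herm(H, t)
Wt = U.conj().T @ Wop @ U
C = Wt @ Vop - Vop @ Wt
lhs = np.real(psi.conj() @ C.conj().T @ C @ psi)
rhs = 2.0 * (1.0 - otoc(t).real)
```

At $t=0$ both operators are diagonal in the computational basis, so $F(0)=1$ and the commutator vanishes; at later times the $\hat{\sigma_x}$ terms rotate $W_t$ out of the diagonal and the OTOC decays.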
For the second set of data, we consider an initial state in which the qubits are entangled. The qubits form Rydberg atom pairs under the Rydberg blockade condition \cite{qsr_UrbanNatPhys2009,qsr_GaetanNatPhys2009,qsr_WilkPRL2010,qsr_ZengPRL2017,qsr_ZhaoNature2017}, which implies that both qubits cannot be simultaneously excited to Rydberg states. This state is prepared by first initializing both qubits in the $\ket{00}$ state. A Pauli-X gate and a Hadamard gate are then applied to the first and second qubit respectively, followed by a CNOT gate with the second qubit as the control and the first qubit as the target. After the operation of these gates, the resultant state is $\ket{\psi_s} = \frac{\ket{01} + \ket{10}}{\sqrt{2}}$. In this state the two qubits are never simultaneously excited, consistent with the Rydberg blockade. Owing to the entanglement of the system, we expect results different from the first case: the interaction in this system arises not only from the time evolution but also from the entanglement correlations. To showcase the general technique described above, we simulate this model on the IBM Quantum Experience platform. \textbf{Observations}: \begin{figure}[H] \includegraphics[scale = 0.40]{qsr_fig2.pdf} \caption{\textbf{Out-of-time-ordered correlator (OTOC) behaviour (simulation results)}: \textbf{(a)} OTOC (Re[F]) as a function of time for a two-spin (N = 2) Ising model of Rydberg atoms when the initial state is $|\psi_s\rangle = |00\rangle$, i.e., both Rydberg atoms start in the ground state. The graph is obtained by simulating the quantum circuit on the `ibmqx4' quantum processor.
It should be noted that these data were not taken on the real quantum computer, so dissipation and noise can be ignored. After an initial rise, the OTOC decays from $V_{12}t = 1$ to $V_{12}t = 3$, after which it undergoes oscillations of large magnitude and appears to follow a periodic pattern. From the graph, it is observed that quantum information travels periodically from one end of the system to the other. It does not reach an equilibrium value within the time window of the experiment ($V_{12} t$ = 0 to $V_{12} t$ = 8). \textbf{(b)} OTOC (Re[F]) as a function of time for a two-spin (N = 2) Ising model of Rydberg atoms when the initial state is $|\psi_s\rangle = \frac{\ket{01} + \ket{10}}{\sqrt{2}}$, i.e., the qubits form a Rydberg atom pair and exist in an entangled state. The graph is plotted after simulating on the `ibmqx4' quantum processor. Here the OTOC does not follow any regular pattern, although quantum information again travels from one end of the system to the other. Overall, the magnitude of the OTOC is much smaller than in the product-state case (\textbf{Case (a)}), which may have important implications in the future.} \label{qsr_fig2} \end{figure} \begin{figure}[H] \includegraphics[scale=0.44]{qsr_fig3.pdf} \caption{\textbf{Out-of-time-ordered correlator (OTOC) behaviour (experimental results)}: \textbf{(a)} OTOC (Re[F]) as a function of time for a two-spin (N = 2) Ising model of Rydberg atoms when the initial state is $|\psi_s\rangle = |00\rangle$, i.e., both Rydberg atoms start in the ground state. The graph is obtained by running the quantum circuit on the `ibmqx4' quantum processor, i.e., the results are collected after performing the experiment on the real quantum chip.
In this case, the effects of dissipation and noise cannot be ignored and are clearly visible. It can be observed that the OTOC does not follow a regular pattern. It does not reach an equilibrium value within the time window of the experiment ($V_{12} t$ = 0 to $V_{12} t$ = 8). \textbf{(b)} OTOC (Re[F]) as a function of time for a two-spin (N = 2) Ising model of Rydberg atoms when the initial state is $|\psi_s\rangle = \frac{\ket{01} + \ket{10}}{\sqrt{2}}$, i.e., the qubits form a Rydberg atom pair and exist in an entangled state. The graph is obtained by running the quantum circuit on the `ibmqx4' quantum processor, i.e., the results are compiled after execution of the quantum circuit on the real quantum chip. Here the OTOC does not follow any regular pattern, and quantum information travels from one end of the system to the other. Overall, the magnitude of the OTOC is \textit{larger} than in the product-state case, in direct contrast to the simulation results. From these results it is evident that environmental noise and dissipation greatly affect OTOC measurements.} \label{qsr_fig3} \end{figure} \section{Discussion} These results shed light on the scrambling of quantum information in the chosen system. The results have been collected through simulation on the `ibmqx4' quantum processor, using two system qubits and one control qubit. In Fig. \ref{qsr_fig2} (a), the initial state was taken as $\ket{00}$ and the qubits were not entangled, so this represents the simplest case of the chosen model, with both qubits in the ground state. It has recently been observed that the OTOC decays quickly in large many-body systems \cite{qsr_SwinglePRA2016}. In our case, despite having a relatively small system, a considerable decay of the OTOC is observed. In Fig. \ref{qsr_fig2} (b), the initial state is taken as $\frac{\ket{01} + \ket{10}}{\sqrt{2}}$, which is an entangled state.
The magnitude of the OTOC is observed to be considerably smaller than in Fig. \ref{qsr_fig2} (a). However, in this case no pattern in the OTOC is recognized, and the decay is not well defined. In Fig. \ref{qsr_fig3}, it is difficult to draw conclusions owing to the prevalence of environmental noise. To combat this challenge, a renormalization procedure has been proposed by Swingle and Halpern \cite{qsr_SwinglearXiv2018}, which makes scrambling measurements resilient to environmental noise. The present procedure can be readily extended to measure the scrambling of quantum information in much larger systems with varying degrees of interaction \cite{qsr_BernienNature2017}. Modifying the circuit for larger qubit systems, although non-trivial, can be done by appropriately simulating the Hamiltonian and operators while keeping in mind the universality of quantum gates. \section{Outlook} As more sophisticated and larger quantum processors are developed, it will become possible to use this technique to study more complicated systems \cite{qsr_SwinglePRA2016,qsr_BernienNature2017}. We believe that important fundamental breakthroughs can be made in the field of quantum chaos \cite{qsr_ChenarXiv2018,qsr_IyodaPRA2018} by simulating classically chaotic models using techniques similar to the one presented in this paper. Further probing might also reveal important links between quantum scrambling, the dissemination of quantum information, and faster computation \cite{qsr_BrownPRL2016}. Developing a quantum system with interactions similar to those in a black hole is an ongoing challenge. If such a strongly correlated quantum mechanical model were to be developed, one could measure the scrambling of information in it, which could lead to verification of the fast scrambling hypothesis \cite{qsr_SekinoJHEP2008,qsr_LashkariJHEP2013}. Links between the hiding of quantum information and quantum chaos could also be elucidated \cite{qsr_HosurJHEP2016}.
\section{Methods} \textbf{Quantum Circuit for simulating operators V and W} The unitary operator V is defined as \begin{equation} V = e^{ -i \frac{\pi}{4} S_Z } \end{equation} where $S_Z$ is calculated as \begin{equation} S_Z = \hat{\sigma_z}^1 \otimes I + I \otimes\hat{\sigma_z}^2 \end{equation} V may be decomposed into a product of two-level unitary matrices, each of which acts non-trivially on only two components of vectors in the orthonormal basis. Our immediate goal is to construct a circuit implementing V. Owing to the universality of quantum computation, this circuit can be built using only single-qubit and CNOT gates. To achieve this, we construct a Gray code sequence which links the two-level unitary matrices to ultimately implement V. A Gray code is a sequence of binary numbers linking an initial state to a final state such that any two neighbouring numbers differ in only a single bit. With the help of Gray codes we rotate a multi-qubit system into a state on which the corresponding non-trivial, single-qubit unitary transformation can be applied directly \cite{qsr_sup_Nielsen2002,qsr_sup_LiarXiv2012}. \begin{figure}[H] \includegraphics[scale = 1.0]{qsr_sup_fig1.pdf} \caption{Here $U_1$ is a physical gate provided by the IBM Quantum Experience which represents a rotation on the Bloch sphere and has one controllable parameter. This is the circuit for the unitary operator V after decomposition into a product of two-level unitary matrices, designed with CNOT gates and single-qubit gates. The CNOT operation flips the target qubit when the control qubit is 1; when the control qubit is 0, the target qubit remains unchanged.
The control qubit remains unchanged in both cases.} \label{qsr_sup_Fig1} \end{figure} \textbf{Simulation of Hamiltonian} To simulate the Hamiltonian we employ a first-order Trotter decomposition \cite{qsr_sup_Nielsen2002,qsr_sup_HegadearXiv2017}, \begin{equation*} e^{-i\mathcal{H} t} = e^{-i \mathcal{H}_1 t}e^{-i \mathcal{H}_2 t}...e^{-i \mathcal{H}_n t} +O\big({t^2}\big) \end{equation*} where $\mathcal{H}_1, \mathcal{H}_2,...,\mathcal{H}_n$ are Hamiltonians acting on local subsystems involving $k$ qubits of an $n$-qubit system, and the system Hamiltonian is $\mathcal{H} = \sum_{k=1} ^ {n} \mathcal{H}_k$. The Hamiltonian is thereby decomposed into a sequence of unitary transformations which can be implemented through any set of universal quantum gates. In the model chosen above, the Hamiltonian is \begin{equation*} \mathcal{H} = \hbar\Omega\hat{\sigma_x}^1 + \hbar\Omega\hat{\sigma_x}^2 + V_{12}\hat{\sigma_z}^1\hat{\sigma_z}^2 \end{equation*} To implement the Trotter decomposition, we write $\mathcal{H} = \mathcal{H}_1 + \mathcal{H}_2 + \mathcal{H}_3$. Without loss of generality, we can choose a system of units such that $\hbar\Omega = 1$ to simplify the calculations. \begin{equation*} \mathcal{H}_1 = \hat{\sigma_x} ^1 \otimes I \end{equation*} After exponentiation, we get $ \begin{bmatrix} \cos{t} & -i\sin{t}\\ -i\sin{t} & \cos{t} \end{bmatrix} $ acting on the first qubit and the identity matrix acting on the second. Here $t$ is the time elapsed since the beginning of the experiment. As the IBM Quantum Experience is a static system, taking $t$ as a controllable parameter allows us to effectively simulate the interferometric protocol. $\mathcal{H}_2$ and $\mathcal{H}_3$ can be implemented in a similar way \cite{qsr_sup_Whitfield2010}. To produce the net evolution, we simply multiply $e^{-i\mathcal{H}_1t}$, $e^{-i\mathcal{H}_2t}$ and $e^{-i\mathcal{H}_3t}$. The corresponding circuit is given as follows.
\begin{figure}[H] \includegraphics[scale = 0.83]{qsr_sup_fig5.pdf} \caption{Here $U_3$ is a physical gate provided by the IBM Quantum Experience which represents a general rotation on the Bloch sphere and has three controllable parameters. As the IBM Quantum Experience is a static system and we needed to study the dynamics of the system, we took the time `t' as a controllable parameter. The values $\frac{3\pi}{2}$ and $\frac{\pi}{2}$ come from the appropriate decomposition of the U(t) transformation into two-level unitary matrices. $U_1$ is also a physical gate provided by the IBM Quantum Experience and has one controllable parameter; the values of these parameters are also shown here.} \label{qsr_sup_Fig2} \end{figure} \textbf{Experimental Architecture} The experimental device parameters of the `ibmqx4' chip are listed in Table \ref{qsr_sup_tab1}. The readout resonator's resonance frequency, qubit frequency, anharmonicity, qubit-cavity coupling strength, relaxation time and coherence time are denoted by $\omega^{R}_{i}$, $\omega_{i}$, $\delta_{i}$, $\chi$, $T_1$ and $T_2$ respectively. The connection and control of the five superconducting qubits (q[0], q[1], q[2], q[3] and q[4]) is shown in Fig. \ref{qsr_sup_Fig6} \textbf{(b)}. Single-qubit and two-qubit controls are provided by coplanar waveguide (CPW) resonators, with the black and white lines denoting the control and readout lines respectively. The qubits q[2], q[3], q[4] and q[0], q[1], q[2] are coupled via two superconducting CPWs, with resonator frequencies of 6.6 GHz and 7.0 GHz respectively. All the qubits are controlled and read out by different CPWs. The quantum chip `ibmqx4' is cooled in a dilution refrigerator to a temperature of 0.021 K. The single-qubit gate error is of the order $10^{-3}$, while the multi-qubit and readout errors are of the order $10^{-2}$. Randomized benchmarking is used to measure the gate errors.
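The accuracy of the first-order Trotter step described above can be checked directly: the single-step error scales as $t^2$, so halving $t$ should reduce it roughly fourfold. A minimal numerical sketch (assuming $\hbar\Omega = 1$ and $V_{12} = 1$, values chosen for illustration only):

```python
import numpy as np

# First-order Trotter check for H = H1 + H2 + H3,
# assuming hbar*Omega = 1 and V12 = 1 (illustrative values).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def expm_herm(A, t):
    w, P = np.linalg.eigh(A)
    return P @ np.diag(np.exp(-1j * w * t)) @ P.conj().T

H1 = np.kron(sx, I2)          # sigma_x on qubit 1
H2 = np.kron(I2, sx)          # sigma_x on qubit 2
H3 = np.kron(sz, sz)          # V12 sigma_z sigma_z
H = H1 + H2 + H3

def trotter_error(t):
    exact = expm_herm(H, t)
    approx = expm_herm(H1, t) @ expm_herm(H2, t) @ expm_herm(H3, t)
    return np.linalg.norm(exact - approx, 2)

# exp(-i H1 t) acts on qubit 1 as [[cos t, -i sin t], [-i sin t, cos t]]
U1 = expm_herm(sx, 0.3)

# The single-step error is O(t^2): halving t reduces it roughly fourfold
e1 = trotter_error(0.1)
e2 = trotter_error(0.05)
ratio = e1 / e2
```

Only $\mathcal{H}_3$ fails to commute with $\mathcal{H}_1 + \mathcal{H}_2$, so the residual error is governed by that commutator; the exponentiated $\mathcal{H}_1$ block reproduces the rotation matrix quoted in the text.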
\begin{figure}[H] \centering \includegraphics[scale=0.4]{qsr_sup_fig4.pdf} \caption{\textbf{(a)} A schematic diagram of the chip layout of the 5-qubit quantum processor `ibmqx4'. The chip is maintained in a dilution refrigerator at a temperature of 0.021 K. All 5 transmon qubits are connected to the two coplanar waveguide (CPW) resonators as shown. q[2], q[3], q[4] and q[0], q[1], q[2] are coupled with the two CPWs with resonance frequencies 6.6 GHz and 7.0 GHz respectively. Individual qubits in the chip are controlled and read out by particular CPWs. \textbf{(b)} The CNOT coupling map is as follows: $\{q[1] \rightarrow (q[0]), q[2] \rightarrow (q[0], q[1], q[4]), q[3] \rightarrow (q[2], q[4])\}$, where $i \rightarrow (j)$ means that $i$ and $j$ denote the control and the target qubit respectively for implementing a CNOT gate. The errors in gates and readout are of the order $10^{-2}$ to $10^{-3}$.} \label{qsr_sup_Fig6} \end{figure} \textbf{Data availability.} The data that support the findings of this study are available from the corresponding author upon reasonable request. \begin{table}[H] \centering \begin{tabular}{ c c c c c c } \hline \hline Qubits & $\omega^{\star}$ (GHz) & $T^{||}_{1}$ ($\mu s$) & $T^{\perp}_{2}$ ($\mu s$) & GE$^{\dagger}$ & RE$^{\ddagger}$ \\ \hline q[0] & 5.24 & 48.80 & 14.70 & 0.86 & 7.00 \\ q[1] & 5.31 & 49.60 & 55.00 & 1.29 & 5.80 \\ q[2] & 5.35 & 48.00 & 32.60 & 1.20 & 8.60 \\ q[3] & 5.41 & 35.60 & 23.60 & 3.78 & 3.70 \\ q[4] & 5.19 & 55.20 & 31.90 & 1.03 & 5.80 \\ \hline \hline \end{tabular}\\ $\star$ Frequency, $||$ Relaxation time, $\perp$ Coherence time, $\dagger$ Gate Error, $\ddagger$ Readout Error \\ \caption{\textbf{Experimental parameters of the device `ibmqx4'.}} \label{qsr_sup_tab1} \end{table} \bibliographystyle{naturemag}
\section{Introduction} In general, the superconducting order parameter is a function of two coordinates and two spin indices $\Delta_{\alpha,\beta}({\bf r}, {\bf r}')$. Conventional low-$T_c$ superconductors have a singlet order parameter with $s$-wave symmetry which can be described by a complex field $\Delta_{s} ({\bf r}) = \Delta({\bf r}, {\bf r})$. Qualitatively, this describes the Bose condensation of Cooper pairs into a zero-orbital-momentum state. The propagation amplitude of a Cooper pair between two spatial points can be written as a sum of positive partial amplitudes corresponding to different Feynman paths. In the presence of a magnetic field, these amplitudes acquire phases and partially cancel one another. As a result, $s$-wave superconductivity is suppressed by the magnetic field. This qualitative picture is consistent with the corresponding solution of the Gor'kov equations \cite{AGD}. \begin{figure*} \begin{center} \includegraphics[width=2\columnwidth]{Joint_Single_Composite} \caption{Left: Pictorial representation of a $d$-wave superconducting grain and its normal-metallic environment. The grain, colored gray, hosts a nonzero order parameter having the form indicated above the grain. The wave vector dependence of the order parameter is represented by the red and blue rosette on the grain: red corresponds to $\Delta(\textbf{k}) > 0$, and blue corresponds to $\Delta(\textbf{k}) < 0$. The shading outside the grain represents the sign of the anomalous Green's function produced by the proximity effect: red is again positive, and blue is negative. Right: A granular $d$-wave superconductor sandwiched between two homogenous $s$-wave superconductors, shaded gray. Each individual grain has a randomly oriented order parameter owing to the random orientation of its crystalline axes. 
$\Gamma_1$ and $\Gamma_2$ indicate two directed paths across the granular system.} \label{fig:sc_grains} \end{center} \end{figure*} Over the last decades a number of superconductors have been discovered in which the order parameter changes sign under rotation. The primary examples are the high-$T_c$ superconductors for which the order parameter has singlet $d$-wave symmetry (see, \emph{e.g.}, \cite{dwave,dwave1}): $\Delta({\bf r}, {\bf r}')$ changes sign under rotation by $\pi /2$, and consequently, $\Delta({\bf r},{\bf r})= 0$. This means that the Fourier transform $\Delta({\bf k})$ changes sign under a $\pi /2$ rotation as well, as shown schematically in Fig.~\ref{fig:sc_grains}. Still, the solution of the Gor'kov equation in crystalline materials demonstrates that the application of a magnetic field suppresses superconductivity. In this paper, we study the magnetic properties of a composite of randomly shaped and randomly oriented $d$-wave superconducting grains embedded in a metallic matrix (see Fig.~\ref{fig:sc_grains}). In such systems, the nodes of the order parameter $\Delta({\bf k})$ are locked to the crystalline axes of each grain. It is known that the macroscopic properties of such granular materials are distinct from both $s$- and $d$-wave superconductors \cite{oreto,oreto1,KivelsonSpivak}. Below we show that the application of a magnetic field enhances the superfluid stiffness $N_s$ and the critical temperature $T_{c}$ of such materials in certain parameter regimes. Granular composites are characterized by the following lengths: the typical superconducting grain size $R$, the intergrain distance $r_{G}$, the elastic electron mean free path in the metal $\ell$, the zero-temperature coherence length of the bulk superconductor $\xi_{0}$, and the coherence length of the normal metal $L_{T}=\sqrt{D/T}$. Here, $D=\ell v_{F}/3$ is the diffusion coefficient, and $v_{F}$ is the Fermi velocity in the metal. 
In the regime where $R, r_G > \xi_0$ and the temperature is much smaller than the critical temperature of the bulk superconductor, $T \ll T_c^b$, one can neglect the fluctuations of the modulus of the order parameter and reduce the Hamiltonian to that of a system of Josephson junctions, \begin{equation}\label{eq:hamiltonian} H = \frac{\hbar}{2e}\textrm{Re} \sum_{i \neq j} J_{ij} e^{i(\theta_i - \theta_j)}. \end{equation} Here, $J_{ij}$ is the Josephson coupling between grains $i$ and $j$, and $\theta_{i}$ is the phase of the order parameter in the $i$th grain. Generally, $J_{ij}$ are complex numbers; however, in the absence of magnetic field they may be chosen to be real but not necessarily positive. Since in random media all spatial symmetries are broken, the anomalous Green's function $F({\bf r}, {\bf r}')$ is an admixture of $s$, $d$, and higher angular momentum components of the spin-singlet state. In the metallic matrix, at distances from the nearest grain greater than $\ell$, only the $s$-wave component survives. Thus, in the simplest case where the intergrain distance $r_G \gg \ell$, the $s$-wave component controls the value of the Josephson couplings $J_{ij}$. In this diffusive regime and within the mean-field approximation, the $s$ components of the normal $G$ and anomalous $F$ Green's functions satisfy the Usadel equation \cite{Usadel}, \begin{gather} \label{usadel} \epsilon F_{\epsilon} - \frac{D}{2} \mathbf{\hat{\nabla}} \left(G_\epsilon \mathbf{\hat{\nabla}} F_\epsilon - F_\epsilon \mathbf{\nabla} G_\epsilon \right) = 0,\\ |G_{\epsilon}|^{2}+|F_{\epsilon}|^{2} = 1,\nonumber \end{gather} where $\mathbf{\hat{\nabla}} = \mathbf{\nabla} + 2 e i {\bf A}$ is the covariant derivative, ${\bf A}$ is the vector potential, and $F_{\epsilon}( {\bf r})$ and $G_{\epsilon}( {\bf r})$ are Fourier transforms of the Matsubara Green's functions $F({\bf r},{\bf r}, (t-t'))$ and $G({\bf r},{\bf r}, (t-t'))$.
In the case where $T\ll T_c^b$, the size of the grain is larger than $\xi_{0}$, and the Andreev reflection from its boundary is effective, the boundary conditions for Eq.~(\ref{usadel}) at the $d$-$n$ boundary were derived in Ref.~\cite{Tanaka}. Since the relevant energy for computing the Josephson coupling, $\epsilon \sim D/r_G^{2}$, is much smaller than the value of the order parameter in the puddles, the boundary condition for $F({\bf r},\epsilon)$ is independent of $\epsilon$ and depends only on the angle between the unit vector parallel to the direction of a gap node ${\bf n}_{\Delta}$ and the unit vector ${\bf n}({\bf r})$ normal to the boundary at the point ${\bf r}$ on the surface: $F(\epsilon, {\bf r}) = f\{[{\bf n}_{\Delta} \cdot {\bf n}({\bf r})]^2\}$. Here, $f(x)$ is a smooth function, which grows from $f(0) = 0$ to $f(1) \sim 1$. In the absence of magnetic field ${\bf H}$, a typical spatial distribution of the solution of Eq.~(\ref{usadel}) for the anomalous Green's function $F({\bf r}, \epsilon \to 0)$ due to an isolated grain is shown in Fig.~\ref{fig:sc_grains}. Red and blue are used to indicate the regions where $F({\bf r},\epsilon \to 0)$ has positive and negative signs, respectively. The lines where $F=0$ will be of particular interest to us. At ${\bf H}=0$, the phase diagram of the system of $d$-wave droplets embedded in a metal was studied in Refs.~\cite{oreto,oreto1,KivelsonSpivak}. It has been shown that in the case where the droplets are randomly oriented, the Josephson couplings $J_{ij}$ in Eq.~(\ref{eq:hamiltonian}) are real quantities which can be decomposed as \begin{equation}\label{J} J_{ij} = \eta_{i}\eta_{j} I^{(0)}_{ij} + \eta_{ij} I^{(1)}_{ij}. \end{equation} Here, \begin{equation}\label{eta} \eta_{i}=\textrm{sgn }(\int_{s_i} F({\bf r}) d {\bf r})=\pm 1, \end{equation} $\eta_{ij}$ are random signs, and the integral in Eq.~\eqref{eta} is taken over the surface $s_i$ of grain $i$.
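The first term of Eq.~\eqref{J} is unfrustrated: the random signs $\eta_i\eta_j$ can be absorbed by shifting $\theta_i \to \theta_i + \pi(1-\eta_i)/2$, the standard Mattis gauge transformation. A minimal numerical sketch on an arbitrary random instance (sizes and distributions chosen purely for illustration):

```python
import numpy as np

# Mattis gauge: for J_ij = eta_i eta_j I_ij with I_ij > 0, shifting
# theta_i -> theta_i + pi (1 - eta_i)/2 absorbs the random signs,
# leaving a Hamiltonian with all-positive couplings.
rng = np.random.default_rng(0)
n = 6
eta = rng.choice([-1, 1], size=n)              # random shape signs eta_i
Iij = rng.random((n, n))
Iij = (Iij + Iij.T) / 2
np.fill_diagonal(Iij, 0.0)                     # positive couplings I_ij

theta = rng.uniform(0, 2 * np.pi, size=n)      # arbitrary phase configuration

def energy(J, th):
    # H = Re sum_{i != j} J_ij exp[i(theta_i - theta_j)]
    phase = np.exp(1j * (th[:, None] - th[None, :]))
    return np.real(np.sum(J * phase))

J_mattis = np.outer(eta, eta) * Iij            # J_ij = eta_i eta_j I_ij
theta_gauged = theta + np.pi * (1 - eta) / 2   # Mattis gauge transformation

e_signed = energy(J_mattis, theta)
e_gauged = energy(Iij, theta_gauged)           # same energy, positive couplings
```

Since $e^{i\theta_i'} = \eta_i e^{i\theta_i}$ under the shift, every term $\eta_i\eta_j I_{ij} e^{i(\theta_i-\theta_j)}$ maps onto $I_{ij} e^{i(\theta_i'-\theta_j')}$, so the energy landscape is identical to that of an unfrustrated ferromagnetic XY model.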
The positive quantities $I_{ij}^{(0),(1)} $ are randomly distributed on the scales \begin{equation} \label{eq:coupling_scale} I^{(0)}_{ij} \propto \frac{GD}{R^{2}} \frac{R^{d}}{r_{ij}^{d}}\exp(-r_{ij}/L_{T}), \qquad I^{(1)}_{ij} \propto \frac{R^{2}}{r_{ij}^{2}}I^{(0)}_{ij}, \end{equation} where $G$ is the conductance of a block of the metal of linear size $R$. Note that the two terms in Eq.~\eqref{J} have different characters. The first has its sign determined by a product of quantities that depend on the properties of each grain separately, roughly related to the shape of the grains. Conversely, the sign of the second term is determined by a joint property of the pair of grains $i$ and $j$ (related to the relative orientation of their crystalline axes). At large grain concentration where typically $I^{(0)} \ll I^{(1)}$, this problem is a version of the standard model of an XY spin glass~\cite{glass}, while in the opposite limit, the system reduces to the well-known Mattis model~\cite{Mattis}. In the presence of a magnetic field the Josephson couplings $J_{ij}$ in Eq.~\eqref{eq:hamiltonian} become complex. We can generally represent the Josephson coupling at finite $H$ by \begin{equation} \label{eq:josephson_couplings} J_{ij}({\bf H}) = \pm e^{i \zeta_{ij}} \big| A_{ij} - B_{ij} e^{i \chi_{ij}} \big| I_{ij}. \end{equation} Each factor requires some explanation. The overall scale of the coupling is set by $I_{ij}$ and depends on $r_{ij} / R$, while the sign depends on the specific arrangement of grains $i$ and $j$. Together, these factors should be thought of as a rewriting of Eq.~\eqref{J}, with $I_{ij}$ being the modulus and the $\pm$ symbol being the sign. In the limit $r_{ij} \gg R$, $I_{ij}$ maps onto $I_{ij}^{(0)}$, and the $\pm$ sign becomes $\eta_i \eta_j$. In the limit $r_{ij} \ll R$, $I_{ij}$ maps onto $I_{ij}^{(1)}$, and the $\pm$ sign becomes $\eta_{ij}$. The remaining factors indicate the effects of a magnetic field. 
$\zeta_{ij} = {\bf A}({\bf r}) \cdot {\bf r}_{ij}$, where ${\bf r}_{ij}$ is a vector connecting the centers of grains $i$ and $j$. The factor $\big| A_{ij} - B_{ij} e^{i \chi_{ij}} \big|$ represents the geometry-dependent proportionality constant from Eq.~\eqref{J}. $A_{ij}$ captures the positive-weight diffusion paths, and $B_{ij}$ captures the negative-weight paths. $\chi_{ij} = (H S_{ij}/\Phi_{0})$, where $\Phi_{0}$ is the flux quantum and $S_{ij}$ is the area associated with the diffusion paths, which accounts for the relative phase between positive and negative paths in the field. We will show that the magnetic field corrections to physical quantities of the system associated with Eq.~\eqref{eq:josephson_couplings} are asymptotically larger than $H^2$ for small ${\bf H}$. This is the reason why we neglect the quadratic-in-${\bf H}$ suppression of $A_{ij}$ and $B_{ij}$ in Eq.~\eqref{eq:josephson_couplings}. The value of the area $S_{ij}$ in Eq.~\eqref{eq:josephson_couplings} is also random. Its characteristic value $S$ is not universal. For example, if the diffusion coefficient of the metal in Eq.~\eqref{usadel} does not exhibit spatial fluctuations, $S\sim R^{2}$. \section{Magnetoenhancement of superconductivity in one dimension} \label{sec:one_dim} To illustrate the physical origin of the magnetic field enhancement of superconductivity, let us first consider a quasi-one-dimensional case where the droplets are embedded in a metallic wire. In the absence of magnetic field the ground state of the system corresponds to $(\theta_{i}-\theta_{j})=0$ if $J_{ij}>0$ and $(\theta_{i}-\theta_{j})=\pi$ if $J_{ij}<0$. To calculate the macroscopic superfluid stiffness of the system $\langle N_{S} \rangle$, we expand Eq.~\eqref{eq:hamiltonian} up to quadratic terms in $\theta_i - \theta_j$ near the ground state.
(We define the superfluid stiffness by the usual equation $\langle {\bf j} \rangle=\langle N_{s} \rangle {\bf \nabla} \theta$, with $\langle {\bf j} \rangle$ being the current density coarse grained on a macroscopic scale.) As a result, we get the expression \begin{align} \label{Ns} \langle N_{s}(H) \rangle &= \lim_{L \to \infty} \left\langle L\left( \sum \frac{1}{|J_{ij}|}\right)^{-1} \right\rangle \nonumber\\ &=r_{G}\left[\int \frac{p(|J|)}{|J|}d |J|\right]^{-1}, \end{align} where the sum is taken over neighbor grains, $L$ is the length of the wire, and brackets $\langle\cdot\rangle$ represent averaging over a random distribution of $J_{ij}$ . At $H=0$, the probability density $p(|J_{ij}|)$ for the random quantity $|J_{ij}|$ is finite at $|J_{ij}| = 0$. As a result, the integral in Eq.~\eqref{Ns} diverges logarithmically, and the superfluid stiffness is zero. Physically, this follows from the presence of arbitrarily weak links in the long wire. At $H\neq 0$, the cancellations which produce small $|J_{ij}|$ are less effective because they must cancel in the complex plane. The upshot is that $p(|J_{ij}| = 0) = 0$ at finite $H$. This cuts off the logarithmic divergence in Eq.~\eqref{Ns}, and we obtain \begin{equation} \label{1dNsEnhancement} \langle N_{s}(H) \rangle \sim \frac{N_{s}^{(0)}}{|\ln(\phi^{2})|}, \end{equation} where $N_{s}^{(0)}= \langle |J_{ij}| \rangle$ and $\phi=(H S / \Phi_0)$ is a dimensionless measure of the characteristic flux between grains. According to Eq.~\eqref{1dNsEnhancement}, the magnetic field enhancement of the superfluid density is nonanalytic, which justifies our neglect of the quadratic-in-$H$ corrections to $J_{ij}$: physically, the magnetic field suppresses the density of weak links in the long wire. \section{Magnetoenhancement of superconductivity in $d>1$ dimensions} \label{sec:dgreater1} In higher dimensions, the disordered $d$-wave composite superconductor can be frustrated and form a superconducting glass. 
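The weak-link mechanism behind Eq.~\eqref{1dNsEnhancement} can be illustrated with a short numerical sketch. The snippet below samples nearest-neighbor couplings of the toy form $J = A - (1-A)e^{i\chi}$, standing in for Eq.~\eqref{eq:josephson_couplings} with $A_{ij}$ uniform in $[0,1]$ and $\chi_{ij}$ uniform in $[-\pi\phi,\pi\phi]$, and evaluates the harmonic mean of Eq.~\eqref{Ns}; the distribution and the sample size are illustrative assumptions, not a quantitative calculation:

```python
import numpy as np

def stiffness_1d(phi, n_links=200_000, seed=0):
    """Harmonic-mean superfluid stiffness of a 1D chain of Josephson links.

    Toy model assumption: J = A - (1-A)*exp(i*chi), with A ~ U[0,1] and
    chi ~ U[-pi*phi, pi*phi], so |J| can become arbitrarily small only
    when the positive- and negative-weight contributions cancel.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, n_links)
    chi = rng.uniform(-np.pi * phi, np.pi * phi, n_links)
    J = np.abs(A - (1.0 - A) * np.exp(1j * chi))
    # N_s = L * (sum 1/|J|)^(-1): the harmonic mean of the link couplings.
    return 1.0 / np.mean(1.0 / J)

# At small phi the harmonic mean is logarithmically suppressed by weak
# links; a larger flux cuts them off and enhances the stiffness.
assert stiffness_1d(phi=0.1) > stiffness_1d(phi=0.001) > 0.0
```

At fixed random seed the enhancement is pointwise: each $|J|$ grows monotonically with $|\chi|$, so the cutoff of the weak-link tail is visible already in a single disorder realization.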
Such frustration complicates the theoretical analysis. Below we discuss several cases where we can nonetheless prove the existence of the magnetoenhancement of superconductivity. The suppression of the probability for small couplings $|J_{ij}|$ by a magnetic field is general and independent of dimension, although its effect on the macroscopic superfluid density is dimension dependent. As we will show, in two and three dimensions the magnetoenhancement is smaller than in one dimension but remains nonanalytic in $H$ (namely, $|H|$). Accordingly, we may neglect all quadratic and higher-order contributions. \subsection{Magnetoenhancement of superfluid stiffness in the Mattis regime} \label{sub:mattis} If the typical intergrain distance is larger than the grain size and the normal metal coherence length, $r_{G}\gg R, L_T$, the second term in Eq.~\eqref{J} can be neglected. In the absence of magnetic field, the Hamiltonian~\eqref{eq:hamiltonian} reduces to a Mattis model, for which the random factors $\eta_{i}$ can be gauged out~\cite{oreto,oreto1,KivelsonSpivak}, and accordingly, in Eq.~\eqref{eq:josephson_couplings} the $\pm$ sign may be taken to be positive. In this regime, the phases $\chi_{ij}(H)$ and $\zeta_{ij}(H)$ play different roles. We will show that the factors $\chi_{ij}(H)$ inside the modulus in Eq.~\eqref{eq:josephson_couplings} lead to a linear-in-$|H|$ enhancement of the superfluid stiffness $N_s(H)$ and critical temperature $T_c(H)$. On the other hand, the $\zeta_{ij}(H)$ phases produce quadratic-in-$H$ corrections to physical quantities, and so we neglect them in the following analysis. Thus, in this section we take for $J_{ij}(\textbf{H})$ the simpler expression \begin{equation} \label{eq:Mattis_Josephson} J_{ij}({\bf H}) = \big| A_{ij} - B_{ij} e^{i \chi_{ij}} \big| J_0 e^{-r_{ij} / L_T}. \end{equation} We take $A_{ij} + B_{ij} = 1$, with $A_{ij}$ being uniformly distributed in $[0, 1]$ and $\chi_{ij}$ uniformly distributed in $[-\pi |\phi|, \pi |\phi|]$.
Finally, $J_0$ is the characteristic energy scale of the nonexponential prefactors in Eq.~\eqref{eq:coupling_scale}. Neglecting the variation in $J_0$ is a valid approximation because the disorder in the prefactors is subleading compared to that in the exponent. It is convenient to represent the Josephson couplings in logarithmic variables, \begin{equation} J_{ij}=J_{0}\exp(-\xi_{ij}), \end{equation} where $\xi_{ij} = \xi^{(0)}_{ij} + \delta \xi_{ij}$, with \begin{equation} \xi_{ij}^{(0)} = r_{ij}/L_T, \qquad \delta \xi_{ij} = - \ln \big| A_{ij}-B_{ij} e^{i \chi_{ij}} \big| . \end{equation} This decomposition highlights that the distribution of $\delta \xi_{ij}$ is much narrower than that of $\xi_{ij}^{(0)}$ in the $r_G \gg L_T$ limit. To calculate the superfluid stiffness of the system $N_s$ at $H=0$, we expand the Mattis Hamiltonian, Eq.~\eqref{eq:hamiltonian}, up to quadratic terms in $\theta_{i}$. Calculating the superfluid stiffness is then equivalent to calculating the macroscopic conductance of a random resistor network where $\theta_i$ and $|J_{ij}|$ are analogs of the voltages and conductances, respectively. In the $r_G \gg L_T$ regime, the $|J_{ij}|$ are broadly distributed, and we can estimate $N_s$ using percolation theory, as is well known in the context of hopping conductivity~\cite{EfrosShklovskii}. In this approach, we consider switching on couplings $J_{ij}$ from strongest to weakest, until at a critical value $J_c \equiv J_0 \exp(-\xi_c)$ the network of bonds percolates. If the couplings are broadly distributed, then the superfluid stiffness $N_s$ is essentially given by $J_c$, analogous to the global conductance of a resistor network being set by the bottleneck with the lowest individual conductance. Reference~\cite{EfrosShklovskii} gives a more detailed discussion.
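The bond-switching construction just described can be sketched with a union--find routine: bonds are activated in order of increasing $\xi_{ij}$ (decreasing $J_{ij}$) until a cluster spans the sample. The square lattice, the uniform distribution of $\xi_{ij}$, and the system size below are illustrative assumptions:

```python
import numpy as np

def percolation_threshold_xi(L=40, W=8.0, seed=0):
    """Percolation exponent xi_c: switch on bonds in order of decreasing
    J_ij (increasing xi_ij) until a cluster spans the lattice from the
    left column to the right column.

    Assumes xi_ij ~ U[0, W] on the bonds of an L x L square lattice,
    a stand-in for xi_ij = r_ij / L_T with random intergrain distances.
    """
    rng = np.random.default_rng(seed)
    parent = list(range(L * L))

    def find(a):                      # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    bonds = []
    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L:
                bonds.append((rng.uniform(0, W), i, (x + 1) * L + y))
            if y + 1 < L:
                bonds.append((rng.uniform(0, W), i, x * L + (y + 1)))
    bonds.sort()                      # smallest xi = strongest coupling first

    left = list(range(L))             # x = 0 column
    right = [(L - 1) * L + y for y in range(L)]
    for xi, a, b in bonds:
        union(a, b)
        roots_left = {find(s) for s in left}
        if any(find(s) in roots_left for s in right):
            return xi   # xi_c: the weakest bond needed to percolate
    return W
```

For bond percolation on the square lattice the threshold is $p_c = 1/2$, so with $\xi_{ij}$ uniform on $[0,W]$ the routine returns $\xi_c \approx W/2$ up to finite-size corrections.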
In the zeroth approximation, where $\delta \xi_{ij} = 0$, we obtain \begin{align} \label{eq:unperturbed_superfluid_density} \langle N_s^{(0)} \rangle = J_0 r_G^{2-d} \left( \frac{L_T}{r_c^{(0)}} \right) ^{\nu} e^{-\frac{r_c^{(0)}}{L_T}} \end{align} where $r_c^{(0)} \equiv L_{T} \xi_{c}^{(0)}$ is the value of $r_{ij}$ at which the network percolates and $\nu$ is the exponent governing the correlation radius of the percolating cluster (e.g., $\nu = 4/3$ in two dimensions, $\nu \approx 0.9$ in three dimensions~\cite{Levinshtein,Sykes}). See Sec.~5.6 of Ref.~\cite{EfrosShklovskii} for details. To calculate the magnetic field correction to the superfluid density we use the perturbation theory of percolation theory developed in Ref.~\cite{EfrosShklovskii} (see Sec.~8.3): the first-order correction $\delta \xi_c$ to the percolation threshold $\xi_c$ for typical $\delta \xi_{ij} \ll \xi_c^{(0)}$ is given by the average perturbation, $\delta \xi_c = \langle \delta \xi_{ij} \rangle$. Thus, \begin{align} \label{eq:exponent_perturbation} \delta \xi_c(H) &= - \left\langle \ln{\big| A_{ij} - B_{ij} e^{i \chi_{ij}} \big| } \right\rangle \nonumber\\ &= - \int_{-\pi \phi}^{\pi \phi} \frac{d\chi}{2\pi \phi} \int_0^1 dA \ln{\left|A - (1 - A)e^{i \chi}\right| } \nonumber \\ &\sim 1-\frac{\pi^2}{8} |\phi|. \end{align} Thus the superfluid density, which is proportional to $\exp[-\xi_c^{(0)} - \delta \xi_c(H)]$, is enhanced in small magnetic field $\phi \ll 1$: \begin{align} \label{eq:superfluid_density_enhancement} \frac{\langle \Delta N_s(H)\rangle}{\langle N_s(0) \rangle} \equiv \frac{\langle N_s(H)\rangle - \langle N_s(0)\rangle}{\langle N_s(0)\rangle} \sim \frac{\pi^2}{8}\frac{|H|S}{\Phi_0}. \end{align} Note that Eq.~\eqref{eq:superfluid_density_enhancement} does not depend on any details of the percolating cluster such as $\nu$, $\xi_c^{(0)}$, or even dimensionality. 
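The double integral in Eq.~\eqref{eq:exponent_perturbation} is easy to check by direct Monte Carlo sampling; the snippet below is only a numerical verification of the stated small-$\phi$ asymptotics:

```python
import numpy as np

def delta_xi_c(phi, n=2_000_000, seed=1):
    """Monte Carlo estimate of -<ln|A - (1-A) e^{i chi}|>,
    with A ~ U[0,1] and chi ~ U[-pi*phi, pi*phi]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, n)
    chi = rng.uniform(-np.pi * phi, np.pi * phi, n)
    return -np.mean(np.log(np.abs(A - (1.0 - A) * np.exp(1j * chi))))

# Small-phi prediction of Eq. (exponent_perturbation): 1 - (pi^2/8)|phi|.
for phi in (0.02, 0.05, 0.1):
    assert abs(delta_xi_c(phi) - (1.0 - (np.pi**2 / 8.0) * phi)) < 0.02
```

At $\phi = 0$ the sampled average reproduces $-\int_0^1 \ln|2A-1|\,dA = 1$, and the leading deviation is linear in $|\phi|$ with slope $\pi^2/8$; quadratic corrections become visible only for $\phi \gtrsim 0.2$.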
Equation~\eqref{eq:superfluid_density_enhancement} depends only on having a nonzero probability density for $J_{ij} = 0$, which comes from the fact that the $d$-wave order parameter changes sign as a function of momentum. The perturbative treatment of the problem which leads to Eq.~\eqref{eq:superfluid_density_enhancement} is valid when the relevant $\delta \xi_{ij}\ll \xi_{c}$. On the other hand, as $\phi\to 0$, the main contribution to Eqs.~\eqref{eq:exponent_perturbation} and \eqref{eq:superfluid_density_enhancement} comes from intergrain couplings with $|A_{ij}-B_{ij} |\to 0$, for which $\delta \xi_{ij}$ diverges logarithmically. The magnetic field suppresses the probability of such events. This means that Eqs.~\eqref{eq:exponent_perturbation} and \eqref{eq:superfluid_density_enhancement} are valid if $\phi > \exp(- \xi_{c})$. In the opposite limit, at very small magnetic field, the correction to the superfluid stiffness is quadratic, $ \langle \Delta N_{s}(H)\rangle / \langle N_{s}(0)\rangle \sim c \phi^{2}>0$. However, even in this regime, we expect the magnetic field correction to the stiffness to remain positive. Indeed, at $\phi_c \sim e^{-\xi_{c}}$ the linear and quadratic dependences should match. This gives us an estimate for the coefficient, \begin{align} \label{eq:quadratic_coef} c\sim e^{\xi_{c}}\gg 1. \end{align} On the other hand, the conventional negative contributions to the magnetic field dependence of the superfluid density scale as $\langle \Delta N_{s}(H)\rangle/ \langle N_{s}(0)\rangle \sim -a \phi^{2} $, with a coefficient $a$ of order 1. Therefore, they are dominated by the magnetoenhancement we discuss even for $\phi < \phi_c \sim e^{-\xi_c}$. \subsection{Numerical simulations of magnetoenhancement in the Mattis regime} \label{sub:numerics_mattis} In order to verify the applicability of the perturbative analysis, we simulate the model of Eq.~\eqref{eq:hamiltonian} numerically in the Mattis regime.
We carry out simulations on a regular square lattice of $L \times L$ Josephson coupled grains as in Fig.~\ref{fig:network}. At the two boundaries in the $x$ direction, the system is put in contact with a large superconducting reservoir at fixed phase, while the $y$ direction is periodic. Since each reservoir is modeled as a single site in contact with all sites on the corresponding boundary, the system has $L^2+2$ sites. In our simulations, we sample couplings according to the form of Eq.~\eqref{eq:Mattis_Josephson} with $\xi_{ij} \equiv r_{ij}/L_T \in [-W,W]$ uniformly distributed (in units where $J_0 = 1$). The parameter $W$ thus represents the typical distance between puddles, in units of $L_T$. \begin{figure} \includegraphics[width=0.7\columnwidth]{Network3} \caption{The numerical simulations of the Mattis regime are carried out on a square lattice of superconducting grains with random couplings $J_{ij}$. Two large superconducting leads are placed at either end, with $\theta_L = 0, \theta_R = \Delta\theta$, while the system is periodic in the transverse direction.} \label{fig:network} \end{figure} To compute the enhancement in the superfluid stiffness $N_S$ as a function of the dimensionless magnetic flux $\phi$, we consider the change in energy due to a small phase difference $\Delta \theta$ (numerically, $\Delta \theta = 1$) applied between the two reservoirs. To leading order, the phases $\theta_i$ at each site $i$ minimize the energy \begin{equation} H=\frac{1}{2}\sum_{ij}J_{ij}(\phi) \left(\theta_i-\theta_j\right)^2. \end{equation} We find the minimal energy $H^*$ using a quadratic optimization algorithm. The superfluid stiffness is simply given by \begin{equation} N_S\propto H^*/\Delta\theta^2. \end{equation} We work at disorder strength $0\leq W\leq 8$ and average each measurement of $N_S$ over $N=1000$ random samples. 
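A minimal version of this computation can be sketched as follows; pinned boundary columns stand in for the reservoirs, dense linear algebra replaces a general-purpose quadratic optimizer, and the lattice size and disorder parameters are illustrative:

```python
import numpy as np

def stiffness_2d(phi, L=16, W=4.0, seed=0):
    """Superfluid stiffness of an L x L Josephson array, Mattis regime.

    Couplings J_ij = |A - (1-A) e^{i chi}| * exp(-xi) (a sketch of
    Eq. (Mattis_Josephson) with J_0 = 1), with xi ~ U[-W, W],
    A ~ U[0,1], chi ~ U[-pi*phi, pi*phi]. The left column is pinned at
    theta = 0, the right column at theta = dtheta; periodic in y.
    The phases minimize H = (1/2) sum_ij J_ij (theta_i - theta_j)^2.
    """
    rng = np.random.default_rng(seed)

    def coupling():
        A = rng.uniform(0.0, 1.0)
        chi = rng.uniform(-np.pi * phi, np.pi * phi)
        xi = rng.uniform(-W, W)
        return np.abs(A - (1.0 - A) * np.exp(1j * chi)) * np.exp(-xi)

    n = L * L
    idx = lambda x, y: x * L + (y % L)
    K = np.zeros((n, n))                      # weighted graph Laplacian
    for x in range(L):
        for y in range(L):
            i = idx(x, y)
            neigh = [idx(x, y + 1)]           # periodic y bond
            if x + 1 < L:
                neigh.append(idx(x + 1, y))   # open x bond
            for j in neigh:
                J = coupling()
                K[i, i] += J; K[j, j] += J
                K[i, j] -= J; K[j, i] -= J

    dtheta = 1.0
    theta = np.zeros(n)
    theta[(L - 1) * L:] = dtheta              # right column fixed
    fixed = np.zeros(n, bool)
    fixed[:L] = True                          # left column (x = 0)
    fixed[(L - 1) * L:] = True                # right column (x = L-1)
    free = ~fixed
    # Minimize (1/2) theta^T K theta over the free phases:
    theta[free] = np.linalg.solve(K[np.ix_(free, free)],
                                  -K[np.ix_(free, fixed)] @ theta[fixed])
    return 0.5 * theta @ K @ theta / dtheta**2  # N_S up to a constant

assert stiffness_2d(0.2) > stiffness_2d(0.0) > 0.0   # magnetoenhancement
```

With a fixed seed the comparison is pointwise: each $|A - (1-A)e^{i\chi}|$ grows with $|\chi|$, and by Rayleigh monotonicity the network stiffness can only increase when every bond strengthens.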
\begin{figure} \includegraphics[width=\columnwidth]{MultiW3} \caption{ Magnetoenhancement of superfluid stiffness in the Mattis regime of a disordered Josephson network. Square lattice of linear dimension $L=60$ with $N = 1000$ samples per data point; data for smaller sizes are indistinguishable. The main panel shows the relative enhancement of superconductivity $\Delta N_S/N_S$ as a function of dimensionless flux $\phi$ for several disorder strengths $W$. The line $\frac{\pi^2}{8}|\phi|$ is the perturbative prediction of Eq.~\eqref{eq:superfluid_density_enhancement}, which should hold at large $W$ over the range $\phi_c \sim e^{-\xi_c} < \phi < O(1)$. At smaller fields $\phi$, the crossover to quadratic behavior is visible. The crossover point $\phi_c(W)$ is marked by vertical ticks. The inset shows the numerically extracted crossover point $\phi_c$ as a function of $W$. The straight-line fit shows the exponential dependence expected at large $W$.} \label{fig:2d} \end{figure} The main panel of Fig.~\ref{fig:2d} shows the dependence of the relative increase of the superfluid stiffness $\frac{\Delta N_s(H)}{N_s(0)}$ with $\phi$ for several disorder strengths $W$. Two regimes are clearly visible: for $\phi>\phi_c \sim e^{-\xi_c}$, the behavior is linear and matches the prediction from the perturbative treatment of the percolation theory, $\Delta N_S/N_S=\phi\pi^2/8$. For $\phi\ll \phi_c$, however, the curves cross over toward quadratic behavior, as expected. The inset shows the dependence of the crossover point $\phi_c$, at which $\Delta N_S$ becomes linear, on $W$. To extract the crossover point numerically, we evaluate the derivative with respect to $\phi$ of the data. Such a derivative grows for small $\phi$ and then saturates to a finite value. We estimate $\phi_c$ as the point at which the derivative stops growing. The error bars indicate the spacing $\delta\phi$ between two consecutive values of $\phi$. 
The inset compares the numerical data to an exponential fit function $\phi_c\propto\exp(-pW)$, with $p=0.4\pm0.1$. \subsection{Magnetoenhancement of the critical temperature in the Mattis regime} One can similarly estimate the change in $T_c$ in a magnetic field. At the mean-field level, all couplings $J_{ij}$ greater than $T$ are ``rigid,'' so that the phase on the grains connected by such couplings is locked. Therefore, the critical temperature may be found by determining when the set of rigid couplings defined by the condition \begin{equation} T_{c}(H)<\frac{\hbar}{2e}|J_{ij}| \end{equation} percolates. A similar procedure was applied previously to calculate the critical temperature of disordered ferromagnets~\cite{Korenblit,Kaminski}. The difference is that in the present case $\xi_c \equiv r_c/L_T$ depends on temperature, so that $T_c$ is determined by the equation \begin{equation} \label{eq:T_c_unperturbed_equation} T_c = \frac{\hbar}{2e}J_0 \exp{(-r_c/L_{T_c})}. \end{equation} In the absence of $\delta \xi_{ij}$, $r_c = r_c^{(0)}$ is independent of temperature, as previously discussed in Sec.~\ref{sub:mattis}. Including $\delta \xi_{ij}$ then shifts $r_c / L_T$ according to Eq.~\eqref{eq:exponent_perturbation}; thus, the equation determining $T_c$ is written \begin{equation} \label{eq:T_c_full_equation} T_c = \frac{\hbar}{2e} J_0 \exp{(-r_c^{(0)}/L_{T_c} - 1 + \frac{\pi^2}{8} \big| \phi \big| )}. \end{equation} Expanding with respect to small $\phi$, we obtain \begin{equation} \label{eq:T_c_change} \frac{T_c(H) - T_c(0)}{T_c(0)} \sim \frac{\pi^2}{4} \frac{L_{T_c(0)}}{r_c^{(0)}} \frac{\big| HS \big|}{\Phi_{0}} . \end{equation} The above analysis has a mean-field character in that it neglects fluctuations of the phase between rigid couplings. However, we expect the conclusions to be correct in the strong-disorder limit ($W \gg L_T$), where all but a vanishing fraction of couplings in the percolating network are much stronger than the putative $T_c$. 
Indeed, the authors of Ref.~\cite{Kaminski} checked the validity of the percolation theory via Monte Carlo simulations and found that $T_c$ is given by Eq.~\eqref{eq:T_c_unperturbed_equation} up to a factor of order 1. The magnetoenhancement of $T_c$ behaves analogously to the magnetoenhancement of $N_s$. Namely, the linear dependence on $|H|$ applies for fields larger than the previously mentioned exponentially small cutoff $\phi > e^{-\xi_c}$. \section{Magnetoenhancement in the superconducting glass regime at high temperature} \label{sec:glass_regime} In the superconducting glass regime ($r_G \lesssim R$), the couplings $J_{ij}$ in Eq.~\eqref{eq:hamiltonian} have random signs in the absence of magnetic field. The frustration this induces at low temperature makes this theoretical problem difficult. As with spin glasses, most physical properties are out of equilibrium and time dependent. It is not even clear how to define the superfluid density in general. Therefore, in this section we restrict ourselves to the case of high temperatures $T \gg (\hbar/2e)|J_{ij}|$, where the system is in the normal state, and show that the superconducting correlation function, \begin{align} \label{eq:corr_function} A_{kl} = \big< e^{i(\theta_k - \theta_l)} \big> = \textrm{Tr} \left[ e^{i(\theta_k - \theta_l)} \frac{e^{-\beta H}}{Z} \right] , \end{align} is enhanced by a magnetic field. Here, $H$ is given by Eq.~\eqref{eq:hamiltonian}, $\beta = 1/T$, and $Z$ is the partition function. This correlation function controls the critical current in a junction composed of two $s$-wave bulk superconductors forming a sandwich around a granular $d$-wave composite (see Fig.~\ref{fig:sc_grains}) in the regime where the temperature is below the critical temperature of the $s$-wave leads. The sign of the coupling $J_{ij}$ in the glass regime depends on the relative orientation of the order parameter between the two grains.
We model this dependence by including a factor $\cos{2(\Theta_i - \Theta_j)}$ in Eq.~\eqref{eq:josephson_couplings}, where $\Theta_i$ is the orientation of the positive node of the order parameter on grain $i$. This factor respects the $d$-wave symmetry of the grains: it retains its sign if either grain rotates by $\pi$ and changes sign if either grain rotates by $\pi/2$. Furthermore, since the enhancement of the correlation function relies on long-distance universal behavior, as discussed below, we neglect the variation in all other quantities affecting $J_{ij}$ for simplicity. This includes the relative phases $\chi_{ij}$ in Eq.~\eqref{eq:josephson_couplings}. Thus, we model the Josephson couplings as \begin{equation} \label{eq:nematic_coupling_form} J_{ij} = J_0 \, e^{i \zeta_{ij}} \cos{2 \big( \Theta_i - \Theta_j \big)}, \end{equation} where $\zeta_{ij} = {\bf A}({\bf r}) \cdot {\bf r}_{ij}$ and $\Theta_i$ is uniformly distributed in $[-\pi, \pi]$. The standard high-temperature expansion of Eq.~\eqref{eq:corr_function} gives the correlation function as a sum over paths $\Gamma$ from grain $k$ to grain $l$: \begin{equation} \label{eq:corr_function_path_sum} A_{kl} = \sum_{\Gamma} A_{\Gamma} , \quad A_{\Gamma} \equiv \prod_{\langle ij \rangle \in \Gamma} \big( \frac{\pi \hbar \beta}{2e} J_{ij} \big) . \end{equation} The product over $\langle ij \rangle \in \Gamma$ runs over all links along path $\Gamma$. Furthermore, since $(\hbar \beta/e) |J_{ij}| \ll 1$, the leading-order terms in the path sum~\eqref{eq:corr_function_path_sum} correspond to \textit{directed} paths. See Fig.~\ref{fig:sc_grains} for a qualitative example of such directed paths. In the high-temperature regime, the correlation function decays exponentially at large distance: $\langle \ln{|A_{kl}|} \rangle \sim -r/\Xi(H)$. 
It follows from Eq.~\eqref{eq:corr_function_path_sum} that \begin{widetext} \begin{equation} \label{eq:corr_length_expression} \frac{1}{\Xi(H)} = \ln{\frac{2e}{\pi \hbar \beta J_0}} - \lim_{r \rightarrow \infty} \frac{1}{r} \ln{\Big| \sum_{\Gamma} \prod_{\langle ij \rangle \in \Gamma} e^{i \zeta_{ij}} \cos{2 \big( \Theta_i - \Theta_j \big)} \Big| }. \end{equation} \end{widetext} We have evaluated Eq.~\eqref{eq:corr_length_expression} using numerical simulations of this model on a 2D square lattice in a uniform perpendicular magnetic field. The average change in correlation length, $\mathbb{E} [\Xi(H)] - \mathbb{E} [\Xi(0)]$, is plotted as a function of $H$ in Fig.~\ref{fig:high_T_magnetoenhancement}. At low magnetic field, $\Xi$ increases in a nonanalytic way: \begin{equation} \label{eq:localization_length} \frac{\Xi(H) - \Xi(0)}{\Xi(0)} \sim \left( \frac{\Xi(0)^2 \, |H|}{\Phi_0} \right) ^{\alpha}, \end{equation} with $\alpha = 0.59 \pm 0.03$. This nonanalyticity derives from the statistics of directed paths in disordered media. Directed path sums have a long history (see Ref.~\cite{HalpinHealy} and references therein), and it is well known that the governing exponents are universal. Different microscopic models for the couplings, as long as they include fluctuations, give the same long-distance behavior upon coarse graining. Thus, we are justified in using the simple Eq.~\eqref{eq:nematic_coupling_form} for $J_{ij}$, and Eq.~\eqref{eq:localization_length} holds. Indeed, the model defined by Eqs.~(\ref{eq:nematic_coupling_form}) and~(\ref{eq:corr_function_path_sum}) belongs to the same universality class as that used to describe negative magnetoresistance in hopping conductivity~\cite{Shklovskii2,Zhao,Shklovskii3,Ioffe,Baldwin}, so the exponents at small field are the same. 
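Equation~\eqref{eq:corr_length_expression} lends itself to a simple dynamic-programming evaluation: directed paths on the square lattice can be summed exactly by a transfer recursion. The snippet below computes $\ln\big|\sum_\Gamma \prod e^{i\zeta_{ij}} \cos 2(\Theta_i - \Theta_j)\big|$ for one disorder realization; the Landau-gauge Peierls phases and corner-to-corner geometry are choices of this sketch:

```python
import numpy as np
from math import comb, log

def ln_abs_path_sum(L=40, f=0.0, seed=None, clean=False):
    """ln|sum over directed paths| for the model of
    Eq. (corr_function_path_sum), omitting the constant first term of
    Eq. (corr_length_expression).

    Paths go from (0,0) to (L,L) with steps +x, +y. Bond weights are
    cos(2(Theta_i - Theta_j)) times a Landau-gauge Peierls phase
    exp(2*pi*i*f*y) on +x bonds, with f the flux per plaquette in units
    of Phi_0. clean=True sets all Theta_i equal (no disorder).
    """
    rng = np.random.default_rng(seed)
    Theta = np.zeros((L + 1, L + 1)) if clean else \
        rng.uniform(-np.pi, np.pi, (L + 1, L + 1))
    Z = np.zeros((L + 1, L + 1), complex)
    Z[0, 0] = 1.0
    for x in range(L + 1):
        for y in range(L + 1):
            if x > 0:   # +x step onto (x, y); Peierls phase ~ row index y
                w = np.cos(2 * (Theta[x - 1, y] - Theta[x, y]))
                Z[x, y] += Z[x - 1, y] * w * np.exp(2j * np.pi * f * y)
            if y > 0:   # +y step onto (x, y)
                w = np.cos(2 * (Theta[x, y - 1] - Theta[x, y]))
                Z[x, y] += Z[x, y - 1] * w
    return np.log(np.abs(Z[L, L]))

# Clean, zero-field check: all weights are 1, so Z counts the C(2L, L)
# directed paths exactly.
assert abs(ln_abs_path_sum(L=20, clean=True) - log(comb(40, 20))) < 1e-9
# Orientation disorder makes the interference partly destructive:
assert ln_abs_path_sum(L=20, seed=0) < log(comb(40, 20))
```

Averaging $\ln|Z|$ over many disorder realizations at fixed small $f$ can be used to probe the field dependence of $\Xi(H)$; extracting the exponent $\alpha$ requires larger lattices and extensive disorder averaging, as in Fig.~\ref{fig:high_T_magnetoenhancement}.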
However, at short distances, the model~\eqref{eq:nematic_coupling_form} has much more constructive interference than that in hopping conductivity because the signs of the paths arriving at grain $i$ are all correlated with the orientation $\Theta_i$. As a result, at large field, where the magnetic length becomes comparable to the ``sign disordering length'' of Eq.~\eqref{eq:corr_function_path_sum} at $H=0$, the magneto-correction to $\Xi$ becomes negative, as observed in Fig.~\ref{fig:high_T_magnetoenhancement}. \begin{figure} \begin{center} \includegraphics[scale=0.6]{High_T_Magnetoenhancement.pdf} \caption{The disorder-averaged change in correlation length (in units of lattice spacing) as a function of magnetic field $H$ (in units of flux quantum per lattice plaquette). The sum in Eq.~\eqref{eq:corr_length_expression} is evaluated between opposite corners of a 1000 $\times$ 1000 square lattice. Data for smaller sizes are indistinguishable.} \label{fig:high_T_magnetoenhancement} \end{center} \end{figure} \section{Conclusion} We have shown that in certain parametric regimes, the application of a magnetic field leads to nonanalytic enhancement of both the superfluid stiffness and the critical temperature in disordered composites of $d$-wave grains embedded in a metallic matrix. Heuristically, the magnetoenhancement stems from the suppression of the destructive interference between Cooper pairs carrying positive and negative amplitudes in the absence of the field, although the length scale on which this suppression takes place varies between the cases we have considered. Specifically, we have considered three cases where analytic control is possible. First, in quasi-one-dimensional wires the macroscopic superfluid stiffness can be inverse-logarithmically enhanced from zero by the application of the field. The strength of this effect follows from the field-induced suppression of the density of weak nearest-neighbor Josephson couplings.
Second, at $d>1$, where the intergrain distance is much larger than the typical grain size and normal metal coherence length ($r_G \gg R, L_T$) , frustration in the effective system of Josephson couplings is suppressed, and we find that the superfluid stiffness $N_s$ and critical temperature $T_c$ are both enhanced linearly in $|H|$ by mapping onto percolation theory. Third, in the geometrically frustrated regime ($r_G \sim R$) but at sufficiently high temperature that the Josephson network is disordered, we find that the superconducting correlation length is enhanced with a nontrivial power law $|H|^{\alpha}$, $\alpha < 1$. We view our results as a proof of principle for magnetoenhancement of superconductivity. In all of the cases we have presented, our analysis is possible because the system is essentially unfrustrated at $H=0$ and we can neglect the effects of glassiness and metastability to leading order. It is an interesting future direction to treat the intermediate, frustrated regimes of this problem directly using more sophisticated numerical techniques. \begin{acknowledgements} The authors would like to especially thank S. A. Kivelson for discussions as well as A. G. Abanov, Y. Cao, B. Gregor, D. Huse, and S. Gopalakrishnan. C.R.L. acknowledges support from the Sloan Foundation through a Sloan Research Fellowship and the NSF through Grant No. PHY-1656234. C.L.B. acknowledges the support of the NSF through a Graduate Research Fellowship, Grant No. DGE-1256082. \end{acknowledgements}
\section{Introduction} Hundreds of thousands of studies, case reports, and patient records capture observations in human neuroscience, basic or clinical. Statistical analysis of this large amount of data could provide new insights. Unfortunately, most of the spatial information that these data contain is difficult to extract \emph{automatically}, because it is hidden in unstructured text, in sentences such as: ``[...] in the \underline{anterolateral temporal cortex}, especially the \underline{temporal pole} and \underline{inferior} and \underline{middle temporal gyri}'' \cite{mummery2000voxel}. Such data cannot be processed easily by a machine, as a machine does not know where the temporal cortex is. As we will show, simply looking up such terms in atlases does not suffice. Indeed, even atlases disagree \cite{bohland2009brain}. Furthermore, joint processing of many reports faces varying terminologies, with regions represented in different atlases that differ and overlap. Finally, not all terms in a report carry the same importance, and practitioners use terms that are not the exact labels of any atlas. Coordinate-based meta-analyses capture the spatial distribution of a term from the literature \cite{laird2005ale,yarkoni2011large}, but they also lack a model to combine terms. Here, we propose to map case reports automatically to the brain locations that they discuss: we learn mappings of anatomical terms to brain regions from medical publications. We propose a new learning framework for translating anatomical terms to brain images -- a process that we call ``encoding''. We learn such a mapping, quantify its performance, and compare possible choices of representation of spatial data. We then show in a proof of concept that our model can predict the brain area for textual case reports.
\section{Methods: formalizing text-to-brain-map translation}\label{sec:methods} \subsection{Problem setting: from text to spatial distributions} We want to predict the likelihood of the location of relevant brain structures described in a document. For this purpose, we perform supervised learning on a corpus of brain-imaging studies, each containing: (i) a text, and (ii) the locations -- \textit{i.e.} the stereotactic coordinates -- of its observations.
Indeed, \ac{fMRI} studies report the coordinates of activation peaks (\textit{e.g.,} \cite[Table 1]{van2011sound}), and \ac{VBM} analyses report the location of differences in gray matter density (\textit{e.g.,} \cite[Table 2]{mummery2000voxel}). Following neuroimaging meta-analyses \cite{laird2005ale}, we frame the problem in terms of spatial distributions of observations in the brain. In a document, observed locations $\mathcal{L} = \{l_a \in \mathbb{R}^3, a = 1 \dots c\}$ are sampled from a \ac{pdf}~$p$ over the brain. \textbf{Our goal is to predict this \ac{pdf} $p$ from the text $\mathcal{T}$}. We denote $q$ our predicted \ac{pdf}. A predicted \ac{pdf} $q$ should be close to $p$, or take high values at the coordinates actually reported in the study: $\prod_{l \in \mathcal{L}} q(l)$ must be large. In a supervised learning setting, we start from a collection of studies $\mathcal{S} = (\mathcal{T}, \mathcal{L})$, with $\mathcal{T}$ the text and $\mathcal{L}$ the locations. Building the prediction engine then entails the choice of a model relating the predicted \ac{pdf} $q$ to the text $\mathcal{T}$, the choice of a loss, or data-fit term, and some regularization on the model parameters. We now detail how we make each of these choices to construct a prediction. \paragraph{Model.} We start by modelling the dependency of our spatial \ac{pdf} $q$ on the study text $\mathcal{T}$. This entails both choosing a representation for $q$ and writing it as a function of the text. While $q$ is defined on a subvolume of $\mathbb{R}^3$, the brain volume, we build it using a partition to work on a finite probability space: this can be either a regular grid of voxels or a set of anatomical regions (i.e. an atlas) $\mathcal{R} = \{\mathcal{R}_k, k=0\dots m\}$.
As such a partitioning imposes on each region to be homogeneous, $q$ is then formally written on $\mathbb{R}^3$ in terms of the indicator functions of the parts\footnote{$\mathcal{R}_0$ denotes the volume outside of the brain, or background, on which $q$ is 0.}: $\{r_k = \frac{\mathbb{I}_k}{\|\mathbb{I}_k\|_1}, ~ k = 1 \dots m\}$. Importantly, the volume of each part $\|\mathbb{I}_k\|_1$ appears as a normalization constant. To link $q$ to the text $\mathcal{T}$ of the study, we start by building a term-frequency vector representation of $\mathcal{T}$, which we denote $\bm{x} \in \mathbb{R}^d$. $d$ is the size of our vocabulary of English words $\mathcal{W} = \{w_t\}$, and $\bm{x}_t$ is the frequency of word $w_t$ in the text. We assign to each atlas region a weight that depends linearly on $\bm{x}$: \begin{equation} q(z) = \sum_{t=1}^d \sum_{k=1}^m \bm{x}_t \bm{\beta}_{t,k}r_k(z) \quad \forall z \in \mathbb{R}^3 \label{eq:search-space} \end{equation} where $\bm{\beta} \in \mathbb{R}^{d \times m}$ are model parameters, which we will learn. Using an atlas is a form of regularization: constraining the prediction to be in the span of $\{r_k\}$ reduces the size of the search space. Fine partitions, \emph{e.g.} atlases with many regions or voxel grids, yield models with more expressive power, but more likely to overfit. Choosing an atlas thus amounts to a bias-variance tradeoff. \paragraph{Label-constrained encoder.} A simple heuristic to turn a text into a brain map is to use atlas labels and ignore interactions between terms. The probability of a region is taken to be proportional to the frequency of its label in the text. The vocabulary is then the set of labels: $d = m$. As the word $w_k$ is the label of $\mathcal{R}_k$, $\bm{\beta}$ is diagonal. For example, for a region $\mathcal{R}_k$ in the atlas labelled ``parietal lobe'', the probability on $\mathcal{R}_k$ depends only on the frequency of the phrase ``parietal lobe'' in the text. 
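A toy implementation of this label-matching heuristic could look like the following; the single-word labels and the region volumes are invented for illustration, and a real atlas would require multiword phrase matching:

```python
import numpy as np

def label_constrained_encode(text, labels, volumes):
    """Label-constrained encoder: region k's probability mass is
    proportional to the frequency of its label in the text, i.e.
    Eq. (search-space) with a diagonal beta. The pdf value on region k
    is the mass divided by the region volume (toy placeholders here,
    not a real atlas)."""
    words = text.lower().split()
    counts = np.array([sum(1 for w in words if w == lab) for lab in labels],
                      float)
    if counts.sum() == 0:
        return counts, counts
    mass = counts / counts.sum()           # probability mass per region
    density = mass / np.asarray(volumes)   # pdf value, constant on region
    return mass, density

labels = ["precuneus", "hippocampus", "insula"]
volumes = [12.0, 4.0, 6.0]                 # arbitrary region volumes
mass, density = label_constrained_encode(
    "atrophy of the hippocampus and adjacent hippocampus regions",
    labels, volumes)
assert mass[1] == 1.0 and abs(density[1] - 0.25) < 1e-12
```

Dividing the mass by the region volume implements the normalization $r_k = \mathbb{I}_k / \|\mathbb{I}_k\|_1$, so the resulting $q$ integrates to one over the brain.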
We call this model \emph{label-constrained encoder}. \subsection{Loss function: measuring errors on spatial distributions} \paragraph{Strategy.} We will fit the coefficients $\bm{\beta}$ of our model, see \cref{eq:search-space}, by minimizing a risk $\mathcal{E}(p, q)$: the expectation of a distance between $p$ and $q$. \paragraph{A plugin estimator of $p$.} We do not have access to the true \ac{pdf}, $p$; we need a plugin estimator, which we denote $\hat{p}$. By construction of our prediction $q$, the best approximation of $p$ we can hope for belongs to the span of our regions $\{r_k\}$. Hence, we build our estimator $\hat{p}$ in this space, setting the probability of a region to be proportional to the number of coordinates that fell inside it: \begin{equation} \hat{p} = \sum_{k=1}^m \frac{|\{a, \mathbb{I}_k(l_a) = 1\}|}{c} r_k ~ = ~ \sum_{k=1}^m \frac{1}{c} \sum_{a=1}^c\mathbb{I}_k(l_a) r_k ~ \triangleq ~ \sum_{k=1}^m \hat{\bm{y}}_k r_k \quad . \end{equation} When regions are voxels, there are too many regions and too few coordinates. Hence we use Gaussian \ac{KDE} to smooth the estimated \ac{pdf}\footnote{Using an atlas is also a form of KDE, with kernel $(z, z') \mapsto 1 / \|\mathbb{I}_k\|_1$ if $z$ and $z'$ belong to the same region $\mathcal{R}_k, k \in \{1,\dots m\}$, 0 otherwise.}. Our supplementary material details a fast \ac{KDE} implementation. \paragraph{Choice of $\mathcal{E}$.} We use two common distance functions for our loss. The first is \ac{TV}, a common distance for distributions. Note that $p$ defines a probability measure on the finite sample space $\mathcal{R}$, $\mathcal{P}(\mathcal{R}_k)= \int_{\mathcal{R}_k} p(z) dz$, where $\mathcal{R} = \{\mathcal{R}_k, k=1 \ldots m\}$ and $\mathcal{R}_k = \supp(r_k)$. $q$ defines $Q$ in the same way. Then, \begin{equation} \tv(\mathcal{P}, \mathcal{Q}) = \sup_{\mathcal{A} \subset \mathcal{R}}|\mathcal{P}(\mathcal{A}) - \mathcal{Q}(\mathcal{A})| \quad . 
\end{equation} Since $\mathcal{R}$ is finite, a classical result (see \cite{gibbs2002choosing}) shows that this supremum is attained by taking $\mathcal{A} = \{\mathcal{R}_k | \mathcal{P}(\mathcal{R}_k) > \mathcal{Q}(\mathcal{R}_k)\} $ (or its complement) and: \begin{equation} \tv(\mathcal{P}, \mathcal{Q}) = \frac{1}{2}\sum_{k=1}^m|\mathcal{P}(\mathcal{R}_k) - \mathcal{Q}(\mathcal{R}_k)| = \frac{1}{2}\int_{\mathbb{R}^3}|p(z) - q(z)| dz \quad . \end{equation} The \ac{TV} is half of the $\ell_1$ distance between the \ac{pdf}s. $\|\hat{p} - q\|_1$ is therefore a natural choice for our loss. The second choice is $\|\hat{p} - q\|_2^2$, which is a popular distance and has the appeal of being differentiable everywhere. \paragraph{Factorizing the loss.} Let us call $v_k$ the volume of $r_k$, i.e. the size of its support: $v_k \triangleq \|\mathbb{I}_k\|_1, ~ k = 1 \dots m$. Remember that $r_k = \frac{1}{v_k}\mathbb{I}_k$. Our loss can now be factorized (see supplementary material for details): \begin{IEEEeqnarray}{rCl} \int_{\mathbb{R}^3} \delta(\hat{p}(z) - q(z))dz & = &% \sum_{k=1}^m v_k \delta \left(\frac{\hat{\bm{y}}_k}{v_k} - \frac{\sum_{t=1}^d\bm{x}_{t}\bm{\beta}_{t,k}}{v_k} \right) \end{IEEEeqnarray} Here, $\delta$ is either the absolute value of the difference or the squared difference. \subsection{Training the model: efficient minimization approaches} To set the model parameters $\bm{\beta}$, we use $n$ example studies $\{\mathcal{S}_i = (\mathcal{T}_i, \mathcal{L}_i), ~ i = 1 \dots n \}$. We learn $\bm{\beta}$ by minimizing the empirical risk on $\{S_i\}$ and an $\ell_2$ penalty on $\bm{\beta}$. We add to the previous notations the index~$i$ of each example: $p_i$, $q_i$, $\hat{\bm{y}}_i$, $\bm{x}_i$. $\hat{\bm{Y}} \in \mathbb{R}^{n \times m}$ is the matrix such that $\hat{\bm{Y}}_{i} = \hat{\bm{y}}_i$, and $\bm{X} \in \mathbb{R}^{n \times d}$ such that $\bm{X}_i = \bm{x}_i$.
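For concreteness, the target vector $\hat{\bm{y}}$ of a study is simply the normalized histogram of its reported coordinates over the atlas regions. The following sketch is illustrative only; it assumes the atlas lookup mapping each coordinate $l_a$ to a region id in $\{0, \dots, m\}$ (with $0$ the background) has been done upstream.

```python
import numpy as np

def plugin_y(region_ids, m):
    """Empirical region weights for one study: the fraction of its c
    reported coordinates falling inside each of the m atlas regions
    (region id 0 is the background, outside the brain)."""
    region_ids = np.asarray(region_ids)
    c = len(region_ids)
    counts = np.bincount(region_ids, minlength=m + 1)[1:]  # drop background
    return counts / c

# A study reporting c = 4 peaks in an atlas with m = 3 regions:
# three peaks fall in region 2, one in region 1.
y_hat = plugin_y([2, 2, 1, 2], m=3)   # -> [0.25, 0.75, 0.0]
```

Stacking one such vector per study gives the target matrix $\hat{\bm{Y}}$ used below.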
\paragraph{Case $\delta = \ell_2^2$.} The empirical risk is \begin{equation} \sum_{i=1}^n \sum_{k=1}^m \left(\frac{\hat{\bm{Y}}_{i,k}}{\sqrt{v_k}} - \sum_{t=1}^d\frac{1}{\sqrt{v_k}}\bm{X}_{i,t}\bm{\beta}_{t,k} \right)^2. \end{equation} Defining $\bm{Y}'_{:,k} = \frac{\hat{\bm{Y}}_{:,k}}{\sqrt{v_k}}$ and $\bm{\beta}'_{:,k} = \frac{\bm{\beta}_{:,k}}{\sqrt{v_k}}$, with an $\ell_2$ penalty, the problem is: \begin{equation} \argmin_{\bm{\beta}'}\left(\|\bm{Y}' - \bm{X}\bm{\beta}'\|_2^2 + \lambda \|\bm{\beta}'\|_2^2 \right) \end{equation} where $\lambda \in \mathbb{R}_+$. This is the least-squares ridge regression predicting $\hat{p}$ expressed in the orthonormal basis of our search space $\{\frac{r_k}{\|r_k\|_2}\}$. \paragraph{Case $\delta = \ell_1$.} The empirical risk becomes \begin{equation} \sum_{i=1}^n \sum_{k=1}^m|\hat{\bm{Y}}_{i,k} - \sum_{t=1}^d\bm{X}_{i,t}\bm{\beta}_{t,k}| \end{equation} This problem is also known as a least-deviations regression, a particular case of quantile regression \cite{koenker1978regression,chen2005computational}. Unlike $\ell_2$ regression, which provides an estimate of the conditional mean of the target variable, $\ell_1$ provides an estimate of the median. Quantile regression has been studied (e.g. by economists), as it is more robust to outliers and better-suited than least-squares when the noise is not normally distributed \cite{koenker1978regression}. Adding an $\ell_2$ penalty, we have the minimization problem: \begin{equation} \hat{\bm{\beta}} = \argmin_{\bm{\beta}}\left( \|\hat{\bm{Y}} - \bm{X}\bm{\beta}\|_1 + \lambda \|\bm{\beta}\|_2^2 \right) \label{eq:quantile_regression} \end{equation} Unpenalized quantile regression is often written as a linear program and solved with the simplex algorithm \cite{koenker1994remark}, iteratively reweighted least squares, or interior point methods \cite{portnoy1997gaussian}.
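To illustrate the linear-programming view just mentioned, the unpenalized least-deviations fit of a single target column can be written with split residuals $r^+, r^- \geq 0$ and handed to a generic LP solver. This is only a sketch of the textbook reformulation (using \texttt{scipy.optimize.linprog}), not the penalized multi-output solver used here:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least absolute deviations min_b ||y - X b||_1 as a linear program:
    min 1'(r+ + r-)  s.t.  X b + r+ - r- = y,  r+, r- >= 0,  b free."""
    n, d = X.shape
    cost = np.concatenate([np.zeros(d), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * d + [(0, None)] * (2 * n)
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:d]

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
beta_true = np.array([2.0, -1.0])
beta = lad_fit(X, X @ beta_true)  # noise-free, so the LAD fit recovers beta_true
```

With $n \times m$ targets and the $\ell_2$ penalty of \cref{eq:quantile_regression}, this LP view no longer applies directly, which motivates the dual approach described next.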
\cite{yi2017semismooth} uses coordinate descent to solve a differentiable approximation of the quantile loss (the Huber loss) with elastic-net penalty. Here, we minimize \cref{eq:quantile_regression} via its dual formulation (cf.\ supplementary material): \begin{align} \hat{\bm{\nu}} = \argmax_{\bm{\nu}}\left( \mathrm{Tr}(\bm{\nu}^T\hat{\bm{Y}} - \frac{1}{4\lambda}\bm{\nu}^T\bm{X}\bm{X}^T\bm{\nu}) \right) \quad \text{ s.t. } \|\bm{\nu}\|_\infty \leq 1, \end{align} where $\bm{\nu} \in \mathbb{R}^{n \times m}$. The primal solution is given by $\hat{\bm{\beta}} = \frac{\bm{X}^T\hat{\bm{\nu}}}{2\lambda}$. As the dual objective $g$ is differentiable and the constraints are simple \emph{bound} constraints, we can use an efficient quasi-Newton method (L-BFGS, \cite{byrd1995limited}). $g$ and its gradient are fast to compute as $\bm{X}$ is sparse. $\lambda$ is set by cross-validation on the training set. We use warm-start on the regularization path (decreasing values for $\lambda$) to initialize each problem. \paragraph{Training the label-constrained encoder.} The columns of $\bm{\beta}$ can be fitted independently from each other. If we want $\bm{\beta}$ to be diagonal, we only include one feature in each regression: we fit $m$ univariate regressions $\bm{\hat{y}}_{:,k} \simeq \bm{X}_{:,k}\bm{\beta}_{k,k}$. \subsection{Evaluation: a natural model-comparison metric} Our metric is the mean log-likelihood of an article's coordinates in the predicted distribution, which diverges if a coordinate falls where $q = 0$.
We therefore add a uniform background to the prediction, to ensure that it is non-zero everywhere: \begin{flalign} \text{the predicted \ac{pdf} is written} && q' = \frac{1}{2} (\sum_{k=1}^m\frac{\mathbb{I}_k}{v_k} + q) && \\ \text{the score for a study $\mathcal{S}_i=(\mathcal{T}_i, \mathcal{L}_i), \mathcal{L}_i = \{l_{i,a}\}$ is} && \frac{1}{c_i}\sum_{a=1}^{c_i}\log(q'_i(l_{i, a})) &&\label{eq:pseudo-ll} \end{flalign} \section{Empirical study} \subsection{Data: mining neuroimaging publications} We downloaded roughly 140K neuroimaging articles from online sources including Pubmed Central and commercial publishers. About 14K of these contain coordinates, which we extracted, as in \cite{yarkoni2011large}. We built a vocabulary of around 1000 anatomical region names by grouping the labels of several atlases and the Wikipedia page ``List of regions in the human brain''\footnote{\url{https://en.wikipedia.org/wiki/List_of_regions_in_the_human_brain}}. So in practice, $n \approx 14\cdot 10^3$ and $d \approx 1000$. $m$ depends on the atlas (or voxel grid) and ranges from 20 to 30K. \subsection{Text-to-brain encoding performance}\label{sec:encoding-performance} \begin{figure}[t!] \begin{minipage}[c]{0.67\textwidth} \includegraphics[width=1.\textwidth]{encoding_box_plot_cropped.pdf} \end{minipage}\hfill \begin{minipage}[c]{0.32\textwidth} \caption{\textbf{Log-Likelihood of coordinates reported by left-out articles in the predicted distribution (\cref{eq:pseudo-ll})}. The vertical line represents the test log-likelihood given a uniform distribution over the brain. Voxel-wise encoding is better than relying on any atlas. In this setting, $\ell_1$ regression significantly outperforms least squares. } \label{fig:box-plot} \end{minipage} \end{figure} \paragraph{Comparison of atlases and models.} We perform 100 folds of shuffle-split cross-validation (10\% in test set). As choices of $\{\mathcal{R}_k\}$, we compare several atlases and a grid of cubic 4-mm voxels.
We also compare $\ell_1$ and $\ell_2$ regression, and label-constrained $\ell_2$. The label-constrained encoder is not used for the voxel grid, as it does not have labels. As a baseline, we include a prediction based on the average of the brain maps seen during training (i.e. independent of the text). \cref{fig:box-plot} gives the results: for all models, voxel-wise encoding performs better than any atlas. Large atlas regions regularize too much. Despite its higher dimensionality, voxel-wise encoding learns better representations of anatomical terms. The label-constrained model performs poorly, sometimes below chance, as the labels of a single atlas do not cover enough words and interactions between terms are important. For voxel-wise encoding, $\ell_1$ regression outperforms $\ell_2$. The best encoder is therefore learned using an $\ell_1$ loss and a voxel partition. \paragraph{Prediction examples.} \cref{fig:predictions} shows the true \ac{pdf} (estimated with \ac{KDE}) and the prediction for the articles that obtained the best and the first-quartile scores, respectively. The median is shown in the supplementary material. \begin{figure}[b!]
\begin{minipage}{.25\textwidth} \includegraphics[width=\textwidth]{encoding_cv_results_2018-02-25T02-42-06_true_y_pmid_20965262.pdf} \end{minipage}% \begin{minipage}{.25\textwidth} \includegraphics[width=\textwidth]{encoding_cv_results_2018-02-25T02-42-06_prediction_pmid_20965262.pdf} \end{minipage} \begin{minipage}{.5\textwidth} \textbf{Best prediction: } ``Where sound position influences sound object representations: a 7-T fMRI study'' \end{minipage} \begin{minipage}{.25\textwidth} \includegraphics[width=\textwidth]{encoding_cv_results_2018-02-25T02-42-06_true_y_pmid_22832732.pdf} \end{minipage}% \begin{minipage}{.25\textwidth} \includegraphics[width=\textwidth]{encoding_cv_results_2018-02-25T02-42-06_prediction_pmid_22832732.pdf} \end{minipage} \begin{minipage}{.5\textwidth} \textbf{First quartile: } ``Interaction of catechol O-methyltransferase and serotonin transporter genes modulates effective connectivity in a facial emotion-processing circuitry.'' \end{minipage} \caption{True map (left) and prediction (right) for best prediction and $1^{st}$ quartile} \label{fig:predictions} \end{figure} \paragraph{Examples of coefficients learned by the linear regression.} The coefficients of the linear regression (rows of $\bm{\beta}$) are the brain maps that the model associates with each anatomical term. For frequent terms, they are close to what experts would expect (see for example \cref{fig:amygdala,fig:anterior-cingulate}). \begin{figure}[t!] 
\begin{minipage}{0.44\textwidth} \includegraphics[width=0.46\textwidth]{coef_anterior_cingul_left_cropped.pdf} \includegraphics[width=0.46\textwidth]{coef_anterior_cingul_right_cropped.pdf} \caption{regression coefficient for ``anterior cingulate''} \label{fig:anterior-cingulate} \end{minipage}% \hfill% \begin{minipage}{0.5\textwidth} \includegraphics[width=0.29\textwidth]{coef_left_amygdala_stat_map_y.pdf}% \hfill% \includegraphics[width=0.29\textwidth]{coef_amygdala_stat_map_y.pdf} \hfill% \includegraphics[width=0.29\textwidth]{coef_right_amygdala_stat_map_y.pdf} \caption{regression coefficients for ``left amygdala'', ``amygdala'', and ``right amygdala''% \label{fig:amygdala}} \end{minipage} \end{figure} \subsection{Leveraging text without coordinates: neurological examples} \begin{figure}[b!] \begin{minipage}{.5\textwidth} \includegraphics[width=.5\textwidth]{huntington_weighted_encoding_contrast_stat_map_y.pdf}% \includegraphics[width=.5\textwidth]{huntington_weighted_encoding_contrast_stat_map_z.pdf}\llap{% \fboxsep1pt% \raisebox{.25\linewidth}{\colorbox{white}{\sffamily\small Huntington's disease}}\hspace*{.2\linewidth}}% \llap{\rule{1pt}{.27\linewidth}}% \end{minipage}% \hspace*{2pt}% \begin{minipage}{.5\textwidth} \includegraphics[width=.5\textwidth]{parkinson_weighted_encoding_contrast_stat_map_y.pdf}% \includegraphics[width=.5\textwidth]{parkinson_weighted_encoding_contrast_stat_map_z.pdf}\llap{% \fboxsep1pt% \raisebox{.25\linewidth}{\colorbox{white}{\sffamily\small Parkinson's disease}}\hspace*{.2\linewidth}}% \hspace*{-.1ex}% \end{minipage}% \caption{\textbf{Predicted density for Huntington's and Parkinson's.} In agreement with Huntington's physiopathology~\cite{walker2007huntington}, our method highlights the putamen, and the caudate nucleus. Also, in the case of Parkinson's~\cite{davie2008review}, the brain stem, the thalamus, and the motor cortex are highlighted. 
\label{fig:huntington-parkinson-slices}} \centering \begin{minipage}{.3\textwidth} \includegraphics[width=.5\textwidth]{aphasia_weighted_encoding_contrast_inflated_left_cropped.pdf}% \includegraphics[width=.5\textwidth]{aphasia_weighted_encoding_contrast_inflated_right_cropped.pdf} \end{minipage}% \hfill% \begin{minipage}{.25\textwidth} \includegraphics[width=\textwidth]{aphasia_weighted_encoding_contrast_stat_map_yz.pdf} \end{minipage}% \hfill% \hfill% \begin{minipage}{.4\textwidth} \caption{\textbf{Predicted density for aphasia}, centered on Broca's and Wernicke's areas, in agreement with the literature \cite{damasio1992aphasia}.} \label{fig:aphasia} \end{minipage} \end{figure} Our framework can leverage unstructured spatial information contained in a large corpus of unannotated text. To showcase this, assume that we want to know which parts of the brain are associated with Huntington's disease. Our labelled corpus by itself is insufficient: only 21 documents mention the term ``huntington''. But we use it to learn associations between anatomical terms and locations in the brain (\cref{sec:methods}). This gives us access to the spatial information contained in the unlabelled corpus, which was out of reach before (\cref{sec:encoding-performance}). We contrast the mean encoding of articles which mention ``huntington'' against the mean distribution (taking the difference of their logarithms). Since the large corpus contains more information about Huntington's disease (over 400 articles mention it), this is sufficient to see the striatum highlighted in the resulting map (\cref{fig:huntington-parkinson-slices}, left). \cref{fig:huntington-parkinson-slices} (right) shows the same experiment for Parkinson's disease, and \cref{fig:aphasia} for aphasia. \section{Conclusion} We have introduced a theoretical framework to translate textual descriptions of studies into spatial distributions over the brain.
Such a translation enables pooling together many studies which only provide text (no images or coordinates), for statistical analysis of their results in brain space. The statistical model gives a natural metric for validation. This metric enables comparing representations, showing that voxel-wise encoding is a better approach than relying on atlases. Building prediction models tailored to our task leads to a linear regression with an $\ell_1$ loss (least absolute deviations), which corresponds to the total-variation distance between the true and the predicted spatial distributions. Such a model can be trained efficiently on tens of thousands of data points and outperforms simpler approaches. Applied to descriptions of pathologies that lack spatial information, our model synthesizes accurate brain maps that reflect the domain knowledge. Predicting spatial distributions of medical observations from text opens new avenues for clinical research from patient health records and case reports. \noindent \textbf{Acknowledgements} This project received funding from: the European Union’s H2020 Research Programme under Grant Agreement No. 785907 (HBP SGA2), the Metacog Digiteo project, the MetaMRI associate team, and ERC NeuroLang. \afterpage{\clearpage} \bibliographystyle{splncs}
\section{Introduction} \label{sec:introduction} The quest for modules inside a network is a well-known and deeply studied problem in network analysis, with several applications in different fields, such as computational biology or social network analysis. A highly investigated problem is that of finding cohesive subgroups inside a network, which in graph theory translates into highly connected subgraphs. A common approach is to look for cliques (i.e. complete graphs), and several combinatorial problems have been considered, notable examples being the {\sf Maximum Clique} problem (\cite[GT19]{garey}), the {\sf Minimum Clique Cover} problem (\cite[GT17]{garey}), and the {\sf Minimum Clique Partition} problem (\cite[GT15]{garey}). The last of these is a classical problem in theoretical computer science, whose goal is to partition the vertices of a graph into the minimum number of cliques. The {\sf Minimum Clique Partition} problem has been deeply studied since the seminal paper of Karp~\cite{DBLP:conf/coco/Karp72}, and its complexity has been investigated in several graph classes~\cite{DBLP:journals/dam/CerioliFFMPR08,DBLP:journals/ita/CerioliFFP11,DBLP:journals/algorithmica/PirwaniS12,DBLP:journals/gc/DumitrescuP11}. In some cases, asking for a complete subgraph is too restrictive, as interesting highly connected subgraphs may have some missing edges due to noise in the data considered or because some pair may not be directly connected by an edge in the subgraph of interest. To overcome this limitation of the clique approach, alternative definitions of highly connected graphs have been proposed, leading to the concept of \emph{relaxed clique}~\cite{DBLP:journals/algorithms/Komusiewicz16}. A relaxed clique is a graph $G=(V,E)$ whose vertices satisfy a property which is a relaxation of the clique property. Indeed, a clique is a subgraph whose vertices are all at distance one from each other and have the same degree (the size of the clique minus one).
Different definitions of relaxed clique are obtained by modifying one of the properties of clique, thus leading to distance-based relaxed cliques, degree-based relaxed cliques, and so on (see for example~\cite{DBLP:journals/algorithms/Komusiewicz16}). In this paper, we focus on a distance-based relaxation. In a clique all the vertices are required to be at distance at most one from each other. Here this constraint is relaxed, so that the vertices have to be at distance at most $s$, for an integer $s \geq 1$. A subgraph whose vertices are all at distance at most $s$ from each other is called an \emph{$s$-club} (notice that, when $s=1$, an $s$-club is exactly a clique). The identification of $s$-clubs inside a network has been applied to social networks~\cite{Mokken79,SociometricClique,DBLP:journals/snam/LaanMM16,DBLP:journals/snam/MokkenHL16,DBLP:conf/biostec/ZoppisDSCSM18}, and biological networks~\cite{DBLP:journals/jco/BalasundaramBT05}. Interesting recent studies have shown the relevance of finding $s$-clubs in a network~\cite{DBLP:journals/snam/LaanMM16,DBLP:journals/snam/MokkenHL16}, in particular focusing on finding $2$-clubs in real networks like DBLP or a European corporate network. Contributions to the study of $s$-clubs mainly focus on the {\sf Maximum s-Club} problem, that is the problem of finding an $s$-club of maximum size. {\sf Maximum s-Club} is known to be NP-hard, for each $s\geq 1$~\cite{DBLP:journals/eor/BourjollyLP02}. Even deciding whether there exists an $s$-club larger than a given size in a graph of diameter $s+1$ is NP-complete, for each $s\geq 1$~\cite{DBLP:journals/jco/BalasundaramBT05}. The {\sf Maximum s-Club} problem has been studied also in the approximability and parameterized complexity frameworks. A polynomial-time approximation algorithm with factor $|V|^{1/2}$ for every $s \geq 2$ on an input graph $G=(V,E)$ has been designed~\cite{Asahiro2017}.
This is optimal, since the problem is not approximable within factor $|V|^{1/2 - \varepsilon}$, on an input graph $G=(V,E)$, for each $\varepsilon > 0$ and $s \geq 2$~\cite{Asahiro2017}. As for the parameterized complexity framework, the problem is known to be fixed-parameter tractable, when parameterized by the size of an $s$-club~\cite{DBLP:journals/ol/SchaferKMN12,DBLP:journals/dam/KomusiewiczS15,DBLP:journals/computing/ChangHLS13}. The {\sf Maximum s-Club} problem has been investigated also for structural parameters and specific graph classes~\cite{DBLP:journals/jgaa/HartungKN15,DBLP:journals/dam/GolovachHKR14}. In this paper, we consider a different combinatorial problem, where we aim at covering the vertices of a network with a set of subgraphs. Similarly to {\sf Minimum Clique Partition}, we consider the problem of covering a graph with the minimum number of $s$-clubs such that each vertex belongs to an $s$-club. We denote this problem by $\MinCov{s}$, and we focus in particular on the cases $s=2$ and $s=3$. We show some analogies and differences between $\MinCov{s}$ and {\sf Minimum Clique Partition}. We start in Section~\ref{sec:complexity} by considering the computational complexity of the problem of covering a graph with two or three $s$-clubs. This is motivated by the fact that {\sf Clique Partition} is known to be in P when we ask whether there exists a partition of the graph consisting of two cliques, while it is NP-hard to decide whether there exists a partition of the graph consisting of three cliques~\cite{DBLP:journals/tcs/GareyJS76}. As for {\sf Clique Partition}, we show that it is NP-complete to decide whether there exist three $2$-clubs that cover a graph. On the other hand, we show that, unlike {\sf Clique Partition}, it is NP-complete to decide whether there exist two $3$-clubs that cover a graph. These two results imply also that $\MinCov{2}$ and $\MinCov{3}$ do not belong to the class XP for the parameter ``number of clubs'' in a cover, unless $P = NP$.
Then, we consider the approximation complexity of $\MinCov{2}$ and $\MinCov{3}$. We recall that, given an input graph $G=(V,E)$, {\sf Minimum Clique Partition} is not approximable within factor $O(|V|^{1-\varepsilon})$, for any $\varepsilon > 0$, unless $P=NP$~\cite{DBLP:journals/toc/Zuckerman07}. Here we show that $\MinCov{2}$ has a slightly different behavior, while $\MinCov{3}$ is similar to \ensuremath{\mathsf{Clique~Partition}}. Indeed, in Section~\ref{sec:HardApprox} we prove that $\MinCov{2}$ is not approximable within factor $O(|V|^{1/2 -\varepsilon})$, for any $\varepsilon>0$, unless $P=NP$, while $\MinCov{3}$ is not approximable within factor $O(|V|^{1 -\varepsilon})$, for any $\varepsilon>0$, unless $P=NP$. In Section~\ref{sec:ApproxAlgo}, we present a greedy approximation algorithm that has factor $2|V|^{1/2}\log^{3/2} |V|$ for $\MinCov{2}$, which almost matches the inapproximability result for the problem. We start the paper by giving in Section~\ref{sec:preliminaries} some definitions and by formally defining the problem we are interested in. \section{Preliminaries} \label{sec:preliminaries} Given a graph $G=(V,E)$ and a subset $V' \subseteq V$, we denote by $G[V']$ the subgraph of $G$ induced by $V'$. Given two vertices $u, v \in V$, the distance between $u$ and $v$ in $G$, denoted by $d_G(u,v)$, is the length of a shortest path from $u$ to $v$. The diameter of a graph $G=(V,E)$ is the maximum distance between two vertices of $V$. Given a graph $G=(V,E)$ and a vertex $v \in V$, we denote by $N_G(v)$ the set of neighbors of $v$, that is $N_G(v)= \{u: \{v,u\} \in E \}$. We denote by $N_G[v]$ the closed neighborhood of $v$, that is $N_G[v]= N_G(v) \cup \{v\}$. Define $N_G^l(v)= \{u: \text{ $u$ has distance at most $l$ from $v$} \}$, with $1 \leq l \leq 2$. Given a set of vertices $X \subseteq V$ and $l$, with $1 \leq l \leq 2$, define $N^l_G(X)= \bigcup_{u \in X} N_G^l(u)$. We may omit the subscript $G$ when it is clear from the context.
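These distance and neighborhood notions translate directly into breadth-first search. The following sketch (plain Python with adjacency-list dictionaries; illustrative only, not part of the paper) computes $d_G(v,\cdot)$ and $N_G^l(v)$, and checks whether an induced subgraph has diameter at most $s$:

```python
from collections import deque

def bfs_dist(adj, v):
    """BFS distances d_G(v, .) in an undirected graph given as adjacency lists."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def neighborhood(adj, v, l):
    """N_G^l(v): vertices at distance at most l from v (v itself included)."""
    return {u for u, d in bfs_dist(adj, v).items() if d <= l}

def has_diameter_at_most(adj, nodes, s):
    """True iff the subgraph induced by `nodes` is connected and has
    diameter at most s (the defining condition of an s-club)."""
    sub = {v: [w for w in adj[v] if w in nodes] for v in nodes}
    for v in nodes:
        d = bfs_dist(sub, v)
        if len(d) < len(nodes) or max(d.values()) > s:
            return False
    return True

# A star K_{1,3} has diameter 2, but removing its center leaves an
# edgeless, disconnected subgraph.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
```

With $s = 1$ the same check recognizes cliques, matching the observation that a $1$-club is exactly a clique.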
Now, we give the definition of $s$-club, which is fundamental for the paper. \begin{definition} \label{def:2-club} Given a graph $G=(V,E)$, and a subset $V' \subseteq V$, $G[V']$ is an $s$-club if it has diameter at most $s$. \end{definition} Notice that an $s$-club must be a connected graph. We present now the formal definition of the \ensuremath{\mathsf{Minimum~s\mhyphen Club~Cover}} problem we are interested in. \smallskip \noindent\ensuremath{\mathsf{Minimum~s\mhyphen Club~Cover}} ($\MinCov{s}$)\\ \textbf{Input:} a graph $G=(V,E)$ and an integer $s \geq 2$.\\ \textbf{Output:} a minimum cardinality collection $\mathcal{S}= \{ V_1, \dots, V_h \}$ such that, for each $i$ with $1 \leq i \leq h$, $V_i \subseteq V$, $G[V_i]$ is an $s$-club, and, for each vertex $v \in V$, there exists a set $V_j$, with $1 \leq j \leq h$, such that $v \in V_j$. We denote by $\Cover{s}{h}$, with $1 \leq h \leq |V|$, the decision version of $\MinCov{s}$ that asks whether there exists a cover of $G$ consisting of at most $h$ $s$-clubs. Notice that while in {\sf Minimum Clique Partition} we can assume that the cliques that cover a graph $G=(V,E)$ partition $V$, hence the cliques are vertex disjoint, we cannot make this assumption for $\MinCov{s}$. Indeed, in a solution of $\MinCov{s}$, a vertex may be covered by more than one $s$-club, in order to have a cover consisting of the minimum number of $s$-clubs. Consider the example of Fig.~\ref{fig:ExCover}. The two $2$-clubs induced by $\{v_1,v_2,v_3,v_4,v_5 \}$ and $\{v_1,v_6,v_7,v_8,v_9 \}$ cover $G$, and both these $2$-clubs contain vertex $v_1$. However, if we ask for a partition of $G$, we need at least three $2$-clubs. This difference between {\sf Minimum~Clique~Partition} and $\MinCov{s}$ is due to the fact that, while being a clique is a hereditary property, this is not the case for being an $s$-club. 
If a graph $G$ is an $s$-club, then a subgraph of $G$ may not be an $s$-club (for example a star is a $2$-club, but the subgraph obtained by removing its center is no longer a $2$-club). \begin{figure} \centering \svg[0.31]{ExCover_tex} \caption{A graph $G$ and a cover consisting of two $2$-clubs (induced by the vertices in the ovals). Notice that the $2$-clubs of this cover must both contain vertex $v_1$. } \label{fig:ExCover} \end{figure} \section{Computational Complexity} \label{sec:complexity} In this section we investigate the computational complexity of $\Cov{2}$ and $\Cov{3}$ and we show that $\Cover{2}{3}$, that is deciding whether there exists a cover of a graph $G$ with three $2$-clubs, and $\Cover{3}{2}$, that is deciding whether there exists a cover of a graph $G$ with two $3$-clubs, are NP-complete. \subsection{$\Cover{2}{3}$ is NP-complete} \label{sec:compl:2-3} In this section we show that $\Cover{2}{3}$ is NP-complete by giving a reduction from the $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ problem, that is the problem of deciding whether there exists a partition of a graph $G^p=(V^p,E^p)$ in three cliques. Given an instance $G^p=(V^p,E^p)$ of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$, we construct an instance $G=(V,E)$ of $\Cover{2}{3}$ (see Fig.~\ref{fig:3-club(2)}). The vertex set $V$ is defined as follows: \[ V = \{ w_i: v_i \in V^p \} \cup \{ w_{i,j}: \{ v_i,v_j \} \in E^p \wedge i < j \} \] The set $E$ of edges is defined as follows: \begin{equation*} \begin{split} E = \{ \{w_i, w_{i,j} \}, \{w_i, w_{h,i} \}: v_i \in V^p, w_i, w_{i,j}, w_{h,i} \in V \} \cup \\ \{ \{w_{i,j},w_{i,l}\}, \{w_{i,j},w_{h,i}\}, \{w_{h,i}, w_{z,i}\}: w_{i,j}, w_{i,l}, w_{h,i}, w_{z,i} \in V \} \end{split} \end{equation*} Before giving the main results of this section, we prove a property of $G$.
\begin{lemma} \label{lem:cover2-3-Prop1} Let $G^p=(V^p,E^p)$ be an instance of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\Cover{2}{3}$. Then, given two vertices $v_i, v_j \in V^p$ and the corresponding vertices $w_i, w_j \in V$: \begin{itemize} \item if $\{ v_i,v_j \} \in E^p$, then $d_G(w_i,w_j) = 2$ \item if $\{ v_i,v_j \} \notin E^p$, then $d_G(w_i,w_j) \geq 3$ \end{itemize} \end{lemma} \begin{proof} Notice that $N_G(w_i)= \{w_{i,z}: \{ v_i,v_z \} \in E^p \wedge i<z \} \cup \{w_{h,i}: \{ v_i,v_h \} \in E^p \wedge h<i \}$. It follows that $w_j \in N^2_G(w_i)$ if and only if there exists a vertex $w_{i,j}$ (or $w_{j,i}$) adjacent to both $w_i$ and $w_j$. But then, by construction, $w_j \in N^2_G(w_i)$ if and only if $\{ v_i,v_j \} \in E^p$. \qed\end{proof} We are now able to prove the main properties of the reduction. \begin{lemma} \label{lem:cover2-3-V1} Let $G^p=(V^p,E^p)$ be an instance of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\Cover{2}{3}$. Then, given a solution of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ on $G^p=(V^p,E^p)$, we can compute in polynomial time a solution of $\Cover{2}{3}$ on $G=(V,E)$. \end{lemma} \begin{proof} Consider a solution of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ on $G^p=(V^p,E^p)$, and let $V^p_{1}$, $V^p_{2}$, $V^p_{3} \subseteq V^p$ be the sets of vertices of $G^p$ that partition $V^p$. We define a solution of $\Cover{2}{3}$ on $G=(V,E)$ as follows. For each $d$, with $1 \leq d \leq 3$, define \[ V_d = \{ w_j \in V: v_j \in V^p_{d} \} \cup \{ w_{i,j}: v_i \in V^p_{d} \} \] We show that each $G[V_d]$, with $1 \leq d \leq 3$, is a $2$-club. Consider two vertices $w_i, w_j \in V_d$, with $1 \leq i < j \leq |V|$. Since they correspond to two vertices $v_i, v_j \in V^p$ that belong to a clique of $G^p$, it follows that $\{ v_i,v_j \} \in E^p$ and $w_{i,j} \in V_d$.
Thus $d_{G[V_d]}(w_i,w_j) = 2$. Now, consider the vertices $w_i \in V_d$, with $1 \leq i \leq |V|$, and $w_{h,z} \in V_d$, with $1 \leq h < z \leq |V|$. If $i = h$ or $i=z$ (assume w.l.o.g. $i=h$), then by construction $d_{G[V_d]}(w_i,w_{i,z}) = 1$. Assume that $i \neq h$ and $i \neq z$ (assume w.l.o.g. that $i <h <z$); since $w_{h,z} \in V_d$, it follows that $w_h \in V_d$. Since $w_i, w_h \in V_d$, it follows that $w_{i,h} \in V_d$. By construction, there exist edges $\{ w_{i,h}, w_{h,z} \}$, $\{ w_i, w_{i,h} \}$ in $E$, thus implying that $d_{G[V_d]}(w_i,w_{h,z}) = 2$. Finally, consider two vertices $w_{i,j}, w_{h,z} \in V_d$, with $1 \leq i<j \leq |V|$ and $1 \leq h < z \leq |V|$. Then, by construction, $w_i \in V_d$ and $w_h \in V_d$. But then, $w_{i,h}$ belongs to $V_d$, and, by construction, $\{w_{i,j},w_{i,h} \} \in E$ and $\{w_{h,z},w_{i,h} \} \in E$. It follows that $d_{G[V_d]}(w_{i,j},w_{h,z}) = 2$. We conclude the proof by observing that, by construction, since $V^p_1, V^p_2, V^p_3$ partition $V^p$, it holds that $V = V_1 \cup V_2 \cup V_3$, thus $G[V_1]$, $G[V_2]$, $G[V_3]$ covers $G$. \qed\end{proof} \begin{figure} \centering \svg[0.55]{2-club_3Newtex} \caption{An example of a graph $G^p$ input of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ and the corresponding graph $G$ input of $\Cover{2}{3}$.} \label{fig:3-club(2)} \end{figure} Based on Lemma~\ref{lem:cover2-3-Prop1}, we can prove the following result. \begin{lemma} \label{lem:cover2-3-V2} Let $G^p=(V^p,E^p)$ be an instance of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\Cover{2}{3}$. Then, given a solution of $\Cover{2}{3}$ on $G=(V,E)$, we can compute in polynomial time a solution of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ on $G^p=(V^p,E^p)$. \end{lemma} \begin{proof} Consider a solution of $\Cover{2}{3}$ on $G=(V,E)$ consisting of three $2$-clubs $G[V_1]$, $G[V_2]$, $G[V_3]$.
Consider a $2$-club $G[V_d]$, with $1 \leq d \leq 3$. Since $d_G(w_i,w_j) \leq d_{G[V_d]}(w_i,w_j) \leq 2$ for each $w_i, w_j \in V_d$, by Lemma~\ref{lem:cover2-3-Prop1} it follows that $\{ v_i,v_j \} \in E^p$. As a consequence, we can define three cliques $G^p[V^p_1]$, $G^p[V^p_2]$, $G^p[V^p_3]$ in $G^p$ as follows. For each $d$, with $1 \leq d \leq 3$, $V^p_d$ is defined as: \[ V^p_d=\{ v_i: w_i \in V_d \} \] Next, we show that $G^p[V^p_d]$, with $1 \leq d \leq 3$, is indeed a clique. By Lemma~\ref{lem:cover2-3-Prop1}, if $w_i, w_j \in V_d$, then it holds $\{ v_i,v_j \} \in E^p$, thus $G^p[V^p_d]$ is a clique in $G^p$. Moreover, since $V_1 \cup V_2 \cup V_3 = V$, then $V^p_1 \cup V^p_2 \cup V^p_3 =V^p$. Notice that $V^p_1$, $V^p_2$, $V^p_3$ may not be disjoint, but, starting from $V^p_1$, $V^p_2$, $V^p_3$, it is easy to compute in polynomial time a partition of $G^p$ into three cliques. \qed\end{proof} Now, we can prove the main result of this section. \begin{theorem} \label{teo:cover2-3} $\Cover{2}{3}$ is NP-complete. \end{theorem} \begin{proof} By Lemma~\ref{lem:cover2-3-V1} and Lemma~\ref{lem:cover2-3-V2}, and from the NP-hardness of $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$~\cite{DBLP:conf/coco/Karp72}, it follows that $\Cover{2}{3}$ is NP-hard. The membership in NP follows easily from the fact that, given three subsets of vertices of $G$, it can be checked in polynomial time whether they induce $2$-clubs and cover all vertices of $G$. \qed\end{proof} \subsection{$\Cover{3}{2}$ is NP-complete} In this section we show that $\Cover{3}{2}$ is NP-complete by giving a reduction from a variant of $\ensuremath{\mathsf{Sat}}$ called $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$. Recall that a literal is positive if it is a non-negated variable, while it is negative if it is a negated variable.
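Before turning to $\Cover{3}{2}$, we note that the reduction from $\ensuremath{\mathsf{3 \mhyphen Clique~Partition}}$ above can also be checked mechanically. The following is an illustrative sketch, not part of the formal construction: the representation (an adjacency dictionary, with tags `"w"` for vertex-vertices and `"e"` for edge-vertices) and all function names are ours; following the proofs of Lemma~\ref{lem:cover2-3-Prop1} and Lemma~\ref{lem:cover2-3-V1}, two edge vertices sharing an endpoint are made adjacent.

```python
from collections import deque
from itertools import combinations

def reduce_3cp_to_cover23(n, edges):
    """Build G=(V,E) from a 3-Clique Partition instance G^p on vertices
    0..n-1: a vertex w_i per v_i, a vertex w_{i,j} per edge {v_i,v_j}
    (i<j) adjacent to w_i and w_j; edge vertices that share an endpoint
    are also adjacent, as used in the proof of Lemma cover2-3-V1."""
    adj = {("w", i): set() for i in range(n)}
    for (i, j) in edges:
        adj[("e", i, j)] = {("w", i), ("w", j)}
        adj[("w", i)].add(("e", i, j))
        adj[("w", j)].add(("e", i, j))
    edge_vertices = [v for v in adj if v[0] == "e"]
    for e1, e2 in combinations(edge_vertices, 2):
        if set(e1[1:]) & set(e2[1:]):  # shared endpoint
            adj[e1].add(e2)
            adj[e2].add(e1)
    return adj

def dist(adj, s, t):
    """Plain BFS distance in G; returns None if t is unreachable."""
    d, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return d[u]
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return None
```

On a triangle $\{v_0,v_1,v_2\}$ plus the edge $\{v_2,v_3\}$, both cases of Lemma~\ref{lem:cover2-3-Prop1} can be observed directly: $d_G(w_0,w_1)=2$, while $d_G(w_0,w_3) \geq 3$.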
Given a collection of clauses $\mathcal{C}= \{ C_1, \dots, C_p \}$ over the set of variables $X=\{ x_1, \dots, x_q \}$, where each $C_i \in \mathcal{C}$, with $1 \leq i \leq p$, contains exactly five literals and does not contain both a variable and its negation, $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ asks for a truth assignment to the variables in $X$ such that each clause $C_i$, with $1 \leq i \leq p$, is \emph{double-satisfied}. A clause $C_i$ is double-satisfied by a truth assignment $f$ to the variables in $X$ if there exist a positive literal and a negative literal in $C_i$ that are both satisfied by $f$. Notice that we assume that there exist at least one positive literal and at least one negative literal in each clause $C_i$, with $1 \leq i \leq p$, otherwise $C_i$ cannot be double-satisfied. Moreover, we assume that each variable in an instance of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ appears both as a positive literal and as a negative literal in the instance. If this is not the case, for example if a variable appears only as a positive literal, we can assign value true to that variable, since assigning it value false cannot contribute to double-satisfying any clause. First, we show that $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ is NP-complete, which may be of independent interest. \begin{theorem} \label{teo:5DoubleSatHard} $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ is NP-complete. \end{theorem} \begin{proof} We reduce from $\ensuremath{\mathsf{3 \mhyphen Sat}}$, where, given a set $X_3$ of variables and a set $\mathcal{C}_3$ of clauses, each of which is a disjunction of $3$ literals (a variable or the negation of a variable), we want to find an assignment to the variables such that all clauses are satisfied. Moreover, we assume that each clause in $\mathcal{C}_3$ does not contain both a variable $x$ and its negation $\overline{x}$, since such a clause is obviously satisfied by any assignment.
The same property holds also for the instance of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ we construct. Given an instance $(X_3,\mathcal{C}_3)$ of $\ensuremath{\mathsf{3 \mhyphen Sat}}$, we construct an instance $(X, \mathcal{C})$ of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ as follows. Define $X= X_3 \cup X_N$, where $X_3 \cap X_N = \emptyset$ and $X_N$ is defined as follows: \[ X_N = \{ x_{C,i,1}, x_{C,i,2}:C_i \in \mathcal{C}_3 \} \] The set $\mathcal{C}$ of clauses is defined as follows: \[ \mathcal{C}= \{ C_{i,1}, C_{i,2}: C_i \in \mathcal{C}_3 \} \] where $C_{i,1}$, $C_{i,2}$ are defined as follows. Consider $C_i = (l_{i,1} \vee l_{i,2} \vee l_{i,3}) \in \mathcal{C}_3$, where each $l_{i,p}$, with $1 \leq p \leq 3$, is a literal, that is, a variable (a positive literal) or a negated variable (a negative literal). The two clauses $C_{i,1}$ and $C_{i,2}$ are defined as follows: \begin{itemize} \item $C_{i,1} = l_{i,1} \vee l_{i,2} \vee l_{i,3} \vee x_{C,i,1} \vee \overline{x_{C,i,2}}$ \item $C_{i,2} = l_{i,1} \vee l_{i,2} \vee l_{i,3} \vee \overline{x_{C,i,1}} \vee x_{C,i,2}$ \end{itemize} We claim that $(X_3,\mathcal{C}_3)$ is satisfiable if and only if $(X, \mathcal{C})$ is double-satisfiable. Assume that $(X_3,\mathcal{C}_3)$ is satisfiable and let $f$ be an assignment to the variables in $X_3$ that satisfies $\mathcal{C}_3$. Consider a clause $C_i$ in $\mathcal{C}_3$, with $1 \leq i \leq |\mathcal{C}_3|$. Since it is satisfied by $f$, it follows that there exists a literal $l_{i,p}$ of $C_i$, with $1 \leq p \leq 3$, that is satisfied by $f$. Define an assignment $f'$ on $X$ that is identical to $f$ on $X_3$ and, for each $C_i \in \mathcal{C}_3$: if $l_{i,p}$ is positive, assigns value false to both $x_{C,i,1}$ and $x_{C,i,2}$; if $l_{i,p}$ is negative, assigns value true to both $x_{C,i,1}$ and $x_{C,i,2}$. In the former case the negative literals $\overline{x_{C,i,2}}$ and $\overline{x_{C,i,1}}$ are satisfied in $C_{i,1}$ and $C_{i,2}$, respectively; in the latter case the positive literals $x_{C,i,1}$ and $x_{C,i,2}$ are satisfied. It follows that both $C_{i,1}$ and $C_{i,2}$ are double-satisfied by $f'$. Assume that $(X,\mathcal{C})$ is double-satisfied by an assignment $f'$.
Consider two clauses $C_{i,1}$ and $C_{i,2}$, with $1 \leq i \leq |\mathcal{C}_3|$, that are double-satisfied by $f'$; we claim that at least one literal of $C_{i,1}$ and $C_{i,2}$ over a variable not in $X_N$ is satisfied by $f'$. Assume this is not the case; then, since $C_{i,1}$ is double-satisfied, it follows that $x_{C,i,1}$ is true and $x_{C,i,2}$ is false, thus implying that $C_{i,2}$ is not double-satisfied, a contradiction. Then, an assignment $f$ that is identical to $f'$ restricted to $X_3$ satisfies each clause in $\mathcal{C}_3$. Now, since $\ensuremath{\mathsf{3 \mhyphen Sat}}$ is NP-complete~\cite{DBLP:conf/coco/Karp72}, it follows that $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ is NP-hard. The membership in NP follows from the observation that, given an assignment to the variables in $X$, we can check in polynomial time whether each clause in $\mathcal{C}$ is double-satisfied or not. \qed\end{proof} Let us now give the construction of the reduction from $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ to $\Cover{3}{2}$. Consider an instance of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ consisting of a set $\mathcal{C}$ of clauses $C_1, \dots, C_p$ over a set $X=\{x_1, \dots, x_q\}$ of variables. We assume that it is not possible to double-satisfy all the clauses by setting at most two variables to true or to false (this can be easily checked in polynomial time). Before giving the details, we present an overview of the reduction. Given an instance $(X,\mathcal{C})$ of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$, for each positive literal $x_i$, with $1 \leq i \leq q$, we define vertices $x_{i,1}^T$, $x_{i,2}^T$ and for each negative literal $\overline{x_i}$, with $1 \leq i \leq q$, we define a vertex $x_i^F$. Moreover, for each clause $C_j \in \mathcal{C}$, with $1 \leq j \leq p$, we define a vertex $v_{C,j}$.
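The clause-doubling construction used in the proof of Theorem~\ref{teo:5DoubleSatHard} is simple enough to state programmatically. The following sketch is ours and only illustrative: literals are encoded DIMACS-style as nonzero integers ($+v$ for a variable, $-v$ for its negation), and the function names are assumptions, not part of the paper.

```python
def to_5_double_sat(num_vars, clauses):
    """For each 3-clause C_i, introduce two fresh variables
    a_i = x_{C,i,1} and b_i = x_{C,i,2} and emit the two 5-clauses
    C_{i,1} = C_i v a_i v -b_i and C_{i,2} = C_i v -a_i v b_i."""
    out, next_var = [], num_vars
    for c in clauses:
        a, b = next_var + 1, next_var + 2
        next_var += 2
        out.append(c + [a, -b])   # C_{i,1}
        out.append(c + [-a, b])   # C_{i,2}
    return next_var, out

def double_satisfied(clause, assignment):
    """A clause is double-satisfied when some positive literal and some
    negative literal are both true under `assignment` (dict var -> bool)."""
    pos = any(l > 0 and assignment[l] for l in clause)
    neg = any(l < 0 and not assignment[-l] for l in clause)
    return pos and neg
```

For instance, the single clause $(x_1 \vee x_2 \vee x_3)$ with $x_1$ true and both fresh variables set to false (the "positive literal" case of the proof) yields two double-satisfied $5$-clauses.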
We define other vertices to ensure that some vertices have distance not greater than three and to force membership in one of the two $3$-clubs of the solution (see Lemma~\ref{lem:3-club(2)Prel1}). The construction implies that, for each $i$ with $1 \leq i \leq q$, $x_{i,1}^T$ and $x_i^F$ belong to different $3$-clubs (see Lemma~\ref{lem:3-club(2)Prel}); this corresponds to a truth assignment to the variables in $X$. Then, we are able to show that each vertex $v_{C,j}$ belongs to the same $3$-club as a vertex $x_{i,1}^T$, with $1 \leq i \leq q$, and a vertex $x_h^F$, with $1 \leq h \leq q$, both adjacent to $v_{C,j}$ (see Lemma~\ref{lem:3-club(2)2}); these vertices correspond to a positive literal $x_i$ and a negative literal $\overline{x_h}$, respectively, that are satisfied by a truth assignment, hence $C_j$ is double-satisfied. Now, we give the details of the reduction. Let $(X,\mathcal{C})$ be an instance of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$; we construct an instance $G=(V,E)$ of $\Cover{3}{2}$ as follows (see Fig.~\ref{fig:3club-2}).
The vertex set $V$ is defined as follows: \[ V = \{ r, r', r_T, r'_T, r^*_T, r_F, r'_F \} \cup \{ x_{i,1}^T, x_{i,2}^T, x_{i}^F: x_i \in X \} \cup \{v_{C,j}: C_j \in \mathcal{C} \} \cup \{y_1, y_2, y \} \] The edge set $E$ is defined as follows: \begin{equation*} \begin{split} E = \{ \{ r,r'\}, \{ r', r_T\} , \{ r', r^*_T\}, \{r', r_F \} \} \cup \{ \{r_T, x_{i,1}^T\}: x_i \in X \} \\ \cup \{ \{r_F, x_{i}^F\}: x_i \in X \} \cup \{ \{r'_T, x_{i,1}^T\}: x_i \in X \} \cup \{ \{r'_F, x_{i}^F\}: x_i \in X \} \cup \\ \{ \{ x_{i,1}^T, x_{i,2}^T\}: x_i \in X \} \cup \{ \{ r^*_T, x_{i,2}^T\}, \{ y_1, x_{i,2}^T\} :x_i \in X \} \cup \\ \{ \{ x_{i,2}^T, x_j^F\}: x_i,x_j \in X, i \neq j \} \cup \{ \{ x_{i,1}^T, v_{C,j} \}: x_i \in C_j \} \cup \{ \{ x_{i}^F, v_{C,j} \}: \overline{x_i} \in C_j \} \cup \\ \{ \{ v_{C,j},y \}: C_j \in \mathcal{C} \} \cup \{ \{ y,y_2 \}, \{ y_1,y_2 \}, \{y_1,r'_T \}, \{y_1,r'_F \} \} \end{split} \end{equation*} We start by proving some properties of the graph $G$. \begin{lemma} \label{lem:3-club(2)Prel1} Consider an instance $(\mathcal{C}, X)$ of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ and let $G=(V,E)$ be the corresponding instance of $\Cover{3}{2}$. Then, (1) $d_G(r',y)>3$, (2) $d_G(r,y)>3$, (3) $d_G(r,v_{C,j})>3$, for each $j$ with $1 \leq j \leq p$, and (4) $d_G(r,r'_F)>3$, $d_G(r,r'_T)>3$. \end{lemma} \begin{proof} We start by proving (1). Notice that any path from $r'$ to $y$ must pass through $r_T$, $r^*_T$ or $r_F$. Besides $r'$, $r_T$ is adjacent only to vertices $x_{i,1}^T$, $r^*_T$ only to vertices $x_{i,2}^T$, and $r_F$ only to vertices $x_i^F$, with $1 \leq i \leq q$, and none of these vertices is adjacent to $y$, thus concluding that $d_G(r',y)>3$. Moreover, observe that for each vertex $v_{C,j}$, with $1 \leq j \leq p$, there exists a vertex $x_{i,1}^T$, with $1 \leq i \leq q$, or $x_h^F$, with $1 \leq h \leq q$, that is adjacent to $v_{C,j}$, thus $d_G(r',v_{C,j})=3$, for each $j$ with $1 \leq j \leq p$.
As a consequence of (1), since $N_G(r)=\{ r' \}$, it follows that (2) holds, that is $d_G(r,y)>3$. Since $d_G(r',v_{C,j})=3$, for each $j$ with $1 \leq j \leq p$, it holds (3) $d_G(r,v_{C,j})>3$. Finally, we prove (4). Notice that $N^2_G(r) = \{ r', r^*_T, r_T, r_F \}$ and that none of the vertices in $N^2_G(r)$ is adjacent to $r'_F$ or $r'_T$, thus $d_G(r,r'_F)>3$ and $d_G(r,r'_T)>3$. \qed\end{proof} \begin{figure} \centering \resizebox{.6\textwidth}{!} { \begin{tikzpicture} \tikzstyle{vertex}=[circle, draw, inner sep=0pt, minimum size=4pt, fill=black] \node[vertex, label=$r$] (r) at (0,0) {}; \node[vertex, label=$r'~$, right of=r] (r') {}; \node[vertex, label=$r^*_T$, right of=r'] (rT) {}; \node[vertex, label=$r_T$, above of=rT, node distance=3.5cm] (r*) {}; \node[vertex, label=$r_F$, below of=rT, node distance=3.5cm] (rF) {}; \node[draw,right of=rT] (X1T1){}; \node[draw,below of=X1T1] (X1T2){}; \node[draw,above of=X1T1] (X1T3){}; \node[draw,right of=X1T3] (X1T4){}; \node[draw,right of=X1T2] (X1T5){}; \node[draw,right of=X1T1] (X1T6){}; \node[draw,right of=X1T3,node distance=0.5cm] (X1T7){}; \node[draw,right of=X1T2,node distance=0.5cm] (X1T8){}; \node[right of=r*] (X2T1){}; \node[below of=X2T1] (X2T2){}; \node[above of=X2T1] (X2T3){}; \node[right of=X2T3] (X2T4){}; \node[right of=X2T2] (X2T5){}; \node[right of=X2T1] (X2T6){}; \node[draw,right of=X2T3,node distance=0.5cm] (X2T7){}; \node[draw,right of=X2T2,node distance=0.5cm] (X2T8){}; \node[draw,right of=rF] (XF1){}; \node[draw,below of=XF1] (XF2){}; \node[draw,above of=XF1] (XF3){}; \node[draw,right of=XF3] (XF4){}; \node[draw,right of=XF2] (XF5){}; \node[draw,right of=XF1] (XF6){}; \node[draw,right of=XF3,node distance=0.5cm] (XF7){}; \node[draw,right of=XF2,node distance=0.5cm] (XF8){}; \node[draw,right of=X1T6,node distance=3.5cm] (C1){}; \node[draw,below of=C1,node distance=0.5cm] (C2){}; \node[draw,above of=C1,node distance=0.5cm] (C3){}; \node[draw,right of=C1,node distance=0.5cm] (C4){}; \node[draw,right of=C2,node distance=0.5cm] (C5){};
\node[draw,right of=C3,node distance=0.5cm] (C6){}; \node[vertex, label=$y_1$, right of=X1T6] (r'T) {}; \node[vertex, label=$r'_T$, right of=X2T6] (y1) {}; \node[vertex, label=$r'_F$, right of=XF6] (r'F) {}; \node[vertex, label=$y$, right of=C4] (y) {}; \draw (r) -- (r'); \draw (rT) -- (r'); \draw (r*) -- (r'); \draw (rF) -- (r'); \draw (rT) -- (X1T3) ; \draw (rT) -- (X1T1) ; \draw (rT) -- (X1T2) ; \draw (rT) -- (X1T4) ; \draw (rT) -- (X1T5) ; \draw (rT) -- (X1T6) ; \draw (r'T) -- (X1T3) ; \draw (r'T) -- (X1T1) ; \draw (r'T) -- (X1T2) ; \draw (r'T) -- (X1T4) ; \draw (r'T) -- (X1T5) ; \draw (r'T) -- (X1T6) ; \draw (r*) -- (X2T3); \draw (r*) -- (X2T2); \draw (r*) -- (X2T1); \draw (r*) -- (X2T4); \draw (r*) -- (X2T5); \draw (r*) -- (X2T6); \draw (y1) -- (X2T3); \draw (y1) -- (X2T2); \draw (y1) -- (X2T1); \draw (y1) -- (X2T4); \draw (y1) -- (X2T5); \draw (y1) -- (X2T6); \draw (rF) -- (XF3); \draw (rF) -- (XF2); \draw (rF) -- (XF1); \draw (rF) -- (XF4); \draw (rF) -- (XF5); \draw (rF) -- (XF6); \draw (r'F) -- (XF3); \draw (r'F) -- (XF2); \draw (r'F) -- (XF1); \draw (r'F) -- (XF4); \draw (r'F) -- (XF5); \draw (r'F) -- (XF6); \draw (X1T7) -- (X2T8); \draw (X1T3) -- (X2T3); \draw (X1T4) -- (X2T4); \draw (C3) -- (X2T1); \draw (C3) -- (X2T3); \draw (C3) -- (XF4); \draw (C3) -- (X2T5); \draw (C3) -- (XF6); \draw (XF3) edge[dashed] (X1T2); \draw (XF3) -- (X1T8); \draw (XF3) -- (X1T5); \draw (XF7) -- (X1T2); \draw (XF7) -- (X1T5); \draw (XF4) -- (X1T2); \draw (XF7) edge[dashed] (X1T8); \draw (XF4) -- (X1T8); \draw (XF4) edge[dashed] (X1T5); \draw (y) -- (C1); \draw (y) -- (C2); \draw (y) -- (C3); \draw (y) -- (C4); \draw (y) -- (C5); \draw (y) -- (C6); \node[vertex, label=$y_2$, below right of=r'T, node distance=3cm] (y2) {}; \draw (r'T) edge[bend right=25] (y1); \draw (r'T) -- (y2) -- (y); \draw (r'F) edge[bend right=25] (r'T); \node[draw,fill=white,rectangle,rounded corners,fit= (X1T1) (X1T3) (X1T2) (X1T4)] (X1T) {$x_{i,2}^T$}; \node[draw,fill=white,rectangle,rounded 
corners,fit= (X2T1) (X2T3) (X2T2) (X2T4)] (X2T) {$x_{i,1}^T$}; \node[draw,fill=white,rectangle,rounded corners,fit= (XF1) (XF3) (XF2) (XF4)] (XF) {$x_{i}^F$}; \node[draw,fill=white,rectangle,rounded corners,fit= (C1) (C2) (C3) (C4) (C5) (C6)] (C) {$v_{C,j}$}; \end{tikzpicture} } \caption{Schematic construction for the reduction from $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ to $\Cover{3}{2}$. \label{fig:3club-2} } \end{figure} Consider two sets $V_1 \subseteq V$ and $V_2 \subseteq V$, such that $G[V_1]$ and $G[V_2]$ are two $3$-clubs of $G$ that cover $G$. As a consequence of Lemma~\ref{lem:3-club(2)Prel1}, it follows that $r$ and $r'$ are in exactly one of $G[V_1]$, $G[V_2]$, w.l.o.g. $G[V_1]$, while $r'_T$, $r'_F$, $y$ and $v_{C,j}$, for each $j$ with $1 \leq j \leq p$, belong to $G[V_2]$ and not to $G[V_1]$. Next, we show a crucial property of the graph $G$ built by the reduction. \begin{lemma} \label{lem:3-club(2)Prel} Given an instance $(\mathcal{C}, X)$ of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$, let $G=(V,E)$ be the corresponding instance of $\Cover{3}{2}$. Then, for each $i$ with $1 \leq i \leq q$, $d_G(x_{i,1}^T,x_i^F)>3$. \end{lemma} \begin{proof} Consider a path $\pi$ of minimum length that connects $x_{i,1}^T$ and $x_i^F$, with $1 \leq i \leq q$. First, notice that, by construction, the path $\pi$ after $x_{i,1}^T$ must pass through one of these vertices: $r_T$, $r'_T$, $x_{i,2}^T$ or $v_{C,j}$, with $1 \leq j \leq p$. We consider the first case, that is the path $\pi$ after $x_{i,1}^T$ passes through $r_T$. Now, the next vertex in $\pi$ is either $r'$ or $x_{h,1}^T$, with $1 \leq h \leq q$. Since both $r'$ and $x_{h,1}^T$ are not adjacent to $x_i^F$, it follows that in this case the path $\pi$ has length greater than three. We consider the second case, that is the path $\pi$ after $x_{i,1}^T$ passes through $r'_T$. Now, after $r'_T$, $\pi$ passes through either $y_1$ or $x_{h,1}^T$, with $1 \leq h \leq q$. 
Since neither $y_1$ nor $x_{h,1}^T$ is adjacent to $x_i^F$, it follows that in this case the path $\pi$ has length greater than three. We consider the third case, that is, the path $\pi$ after $x_{i,1}^T$ passes through $x_{i,2}^T$. Now, the next vertex of $\pi$ is either $r^*_T$ or $y_1$ or $x_h^F$, with $1 \leq h \leq q$ and $h \neq i$. Since none of $r^*_T$, $y_1$ and $x_{h}^F$ is adjacent to $x_i^F$, it follows that in this case the path $\pi$ has length greater than three. We consider the last case, that is, the path $\pi$ after $x_{i,1}^T$ passes through $v_{C,j}$, with $1 \leq j \leq p$. We have assumed that $x_i$ and $\overline{x_i}$ do not belong to the same clause, thus by construction $x_{i}^F$ is not adjacent to $v_{C,j}$. It follows that after $v_{C,j}$ the path $\pi$ must pass through either $y$ or $x_{h,1}^T$, with $1 \leq h \leq q$, or $x_{z}^F$, with $1 \leq z \leq q$ and $z \neq i$. Once again, since none of $y$, $x_{h,1}^T$ and $x_{z}^F$ is adjacent to $x_i^F$, it follows that also in this case the path $\pi$ has length greater than three, thus concluding the proof. \qed\end{proof} Now, we are able to prove the main results of this section. \begin{lemma} \label{lem:3-club(2)1} Given an instance $(\mathcal{C}, X)$ of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$, let $G=(V,E)$ be the corresponding instance of $\Cover{3}{2}$. Then, given a truth assignment that double-satisfies $\mathcal{C}$, we can compute in polynomial time two $3$-clubs that cover $G$. \end{lemma} \begin{proof} Consider a truth assignment $f$ on the set $X$ of variables that double-satisfies $\mathcal{C}$. In the following we construct two $3$-clubs $G[V_1]$ and $G[V_2]$ that cover $G$.
The two sets $V_1$, $V_2$ are defined as follows: \[ V_1 = \{ r,r',r_T, r^*_T, r_F \} \cup \{ x_{i,1}^T, x_{i,2}^T:f(x_i)=false \} \cup \{ x_{i}^F:f(x_i)= true \} \] \[ V_2 = \{r'_T, r'_F, y, y_1, y_2\} \cup \{ x_{i,1}^T, x_{i,2}^T:f(x_i) = true \} \cup \{ x_{i}^F:f(x_i)= false \} \cup \{ v_{C,j}: 1 \leq j \leq p \} \] Next, we show that $G[V_1]$ and $G[V_2]$ are indeed two $3$-clubs that cover $G$. First, notice that $V_1 \cup V_2 =V$, hence $G[V_1]$ and $G[V_2]$ cover $G$. Next, we show that both $G[V_1]$ and $G[V_2]$ are indeed $3$-clubs. Let us first consider $G[V_1]$. By construction, $d_{G[V_1]}(r,x_{i,1}^T) = 3$ and $d_{G[V_1]}(r,x_{i,2}^T) = 3$, for each $x_{i,1}^T, x_{i,2}^T \in V_1$, and $d_{G[V_1]}(r,x_{i}^F) = 3$, for each $x_i^F \in V_1$. Moreover, $d_{G[V_1]}(r',x_{i,1}^T) = 2$ and $d_{G[V_1]}(r',x_{i,2}^T) = 2$, for each $x_{i,1}^T, x_{i,2}^T \in V_1$, and $d_{G[V_1]}(r',x_{i}^F) = 2$, for each $x_i^F \in V_1$. As a consequence, it holds that $r_T$, $r^*_T$ and $r_F$ have distance at most three in $G[V_1]$ from each vertex $x_{i,1}^T$, from each vertex $x_{i,2}^T$, and from each vertex $x_{i}^F$ in $V_1$. Since $r$, $r_T$, $r^*_T$ and $r_F$ are in $N(r')$, it follows that $r$, $r'$, $r_T$, $r^*_T$ and $r_F$ are at distance at most $2$ in $G[V_1]$. Hence, we focus on vertices $x_{i,1}^T$, with $1 \leq i \leq q$, $x_{h,2}^T$, with $1 \leq h \leq q$, and $x_{j}^F$, with $1 \leq j \leq q$, in $V_1$. Since there exists a path that passes through $x_{i,1}^T$, $r_T$, $x_{h,1}^T$ and $x_{h,2}^T$, vertices $x_{i,1}^T$, $x_{h,1}^T$ are at distance at most two in $G[V_1]$, while $x_{i,1}^T$, $x_{h,2}^T$ are at distance at most three in $G[V_1]$ (if $i=h$ they are at distance one). Vertices $x_{h,2}^T$ and $x_{j}^F$ are at distance one in $G[V_1]$, since $h \neq j$ and $\{ x_{h,2}^T,x_j^F \} \in E$ by construction.
Finally, $x_{i,1}^T$ and $x_{j}^F$ are at distance two in $G[V_1]$, since there exists a path that passes through $x_{i,1}^T$, $x_{i,2}^T$ and $x_{j}^F$ in $G[V_1]$, as $i \neq j$. It follows that $G[V_1]$ is a $3$-club. We now consider $G[V_2]$. We recall that, for each $i$ with $1 \leq i \leq q$, if $x_{i,1}^T$, $x_{i,2}^T \in V_2$, then $x_i^F \in V_1$. Furthermore, we recall that we assume that each $x_i$ appears as a positive and a negative literal in the instance of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$, thus each vertex $x_{i,1}^T$, with $1 \leq i \leq q$, and each vertex $x_{h}^F$, with $1 \leq h \leq q$, is adjacent to some $v_{C,j}$, with $1 \leq j \leq p$. First, notice that vertex $y$ is at distance at most three in $G[V_2]$ from each vertex of $V_2$, since it has distance one in $G[V_2]$ from each vertex $v_{C,j}$, with $1 \leq j \leq p$, thus distance two from $x_{i,1}^T$, with $1 \leq i \leq q$, and $x_{h}^F$, with $1 \leq h \leq q$, and three from $x_{i,2}^T$, with $1 \leq i \leq q$, $r'_T$ and $r'_F$. Since $y$ is adjacent to $y_2$, it has distance one from $y_2$ and two from $y_1$. Now, consider a vertex $v_{C,j}$, with $1 \leq j \leq p$. Since $f$ double-satisfies $\mathcal{C}$, it follows that there exist two vertices in $V_2$, $x_{i,1}^T$, with $1 \leq i \leq q$, and $x_{z}^F$, with $1 \leq z \leq q$, which are adjacent to $v_{C,j}$. It follows that $v_{C,j}$ has distance $2$ in $G[V_2]$ from $r'_T$ and from $r'_F$, and at most $3$ from each $x_{h,1}^T \in V_2$, with $1 \leq h \leq q$, and from each $x_{z}^F \in V_2$, with $1 \leq z \leq q$. Furthermore, notice that, since $v_{C,j}$ is adjacent to $x_z^F$ and $x_z^F$ is adjacent to each $x_{h,2}^T \in V_2$, with $1 \leq h \leq q$ and $h \neq z$, then $v_{C,j}$ has distance at most two in $G[V_2]$ from each $x_{h,2}^T \in V_2$. Finally, since $v_{C,j}$ is adjacent to $y$, it has distance two and three, respectively, from $y_2$ and $y_1$ in $G[V_2]$.
Consider a vertex $x_{i,1}^T \in V_2$, with $1 \leq i \leq q$. We have already shown that it has distance at most three in $G[V_2]$ from any $v_{C,j}$, with $1 \leq j \leq p$, and two from $y$. Since $x_{i,1}^T$ is adjacent to $r'_T$, it has distance at most two from each other vertex $x_{h,1}^T$, with $1 \leq h \leq q$, and three from each other vertex $x_{h,2}^T$ of $G[V_2]$. Moreover, it has distance two from $y_1$ and three from $y_2$ and $r'_F$. Since $x_{i,2}^T$ is adjacent to every vertex $x_z^F \in V_2$, with $1 \leq z \leq q$, as $z \neq i$, it follows that $x_{i,1}^T$ has distance at most two from every vertex $x_z^F \in V_2$. Consider a vertex $x_{i,2}^T \in V_2$, with $1 \leq i \leq q$. We have already shown that it has distance at most two from each $v_{C,j}$ in $G[V_2]$. Since it is adjacent to $x_{i,1}^T$, it has distance three from $y$ and two from $r'_T$ in $G[V_2]$. Since by construction $x_{i,2}^T$ is adjacent to every vertex $x_z^F \in V_2$, with $1 \leq z \leq q$, it has distance at most two from $r'_F$ in $G[V_2]$. Moreover, $x_{i,2}^T$ has distance two from each vertex $x_{h,2}^T$ in $G[V_2]$, with $1 \leq h \leq q$, since by construction they are both adjacent to $y_1$. Since $x_{i,2}^T$ is adjacent to $y_1$, it has distance at most two from $y_2$ in $G[V_2]$. Consider a vertex $x_h^F \in V_2$, with $1 \leq h \leq q$. It has distance one from $r'_F$ in $G[V_2]$, and thus distance two from $y_1$ and three from $y_2$ in $G[V_2]$. Moreover, $x_h^F$ is adjacent to each $x_{i,2}^T \in V_2$, with $1 \leq i \leq q$, thus it has distance two from each $x_{i,1}^T$ and distance three from $r'_T$ in $G[V_2]$. Since by construction there exists at least one $v_{C,j}$, with $1 \leq j \leq p$, adjacent to $x_h^F$, it follows that $x_h^F$ has distance two from $y$ and at most three from each $v_{C,z}$ in $G[V_2]$. Finally, we consider vertices $r'_T$, $r'_F$, $y_1$ and $y_2$.
Notice that it suffices to show that these vertices have pairwise distance at most three in $G[V_2]$, since we have previously shown that any other vertex of $V_2$ has distance at most three from these vertices in $G[V_2]$. Since $r'_T, r'_F, y_2 \in N(y_1)$, they are all at distance at most two. It follows that $G[V_2]$ is a $3$-club, thus concluding the proof. \qed\end{proof} \begin{lemma} \label{lem:3-club(2)2} Given an instance $(\mathcal{C}, X)$ of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$, let $G=(V,E)$ be the corresponding instance of $\Cover{3}{2}$. Then, given two $3$-clubs that cover $G$, we can compute in polynomial time a truth assignment that double-satisfies $\mathcal{C}$. \end{lemma} \begin{proof} Consider two $3$-clubs $G[V_1]$, $G[V_2]$, with $V_1, V_2 \subseteq V$, that cover $G$. First, notice that by Lemma~\ref{lem:3-club(2)Prel1} we can assume w.l.o.g. that $r,r' \in V_1 \setminus V_2$, while $y,r'_T,r'_F \in V_2 \setminus V_1$ and $v_{C,j} \in V_2 \setminus V_1$, for each $j$ with $1 \leq j \leq p$. Moreover, by Lemma~\ref{lem:3-club(2)Prel} it follows that, for each $i$ with $1 \leq i \leq q$, $x_{i,1}^T$ and $x_{i}^F$ do not belong to the same $3$-club, that is, one belongs to $V_1$ and the other to $V_2$. By construction, each path of length at most three from a vertex $v_{C,j}$, with $1 \leq j \leq p$, to $r'_F$ must pass through some $x_h^F$, with $1 \leq h \leq q$. Similarly, each path of length at most three from a vertex $v_{C,j}$, with $1 \leq j \leq p$, to $r'_T$ must pass through some $x_{i,1}^T$. Assume that $v_{C,j}$, with $1 \leq j \leq p$, is not adjacent to a vertex $x_{i,1}^T \in V_2$, with $1 \leq i \leq q$ ($x_{h}^F \in V_2$, with $1 \leq h \leq q$, respectively). It follows that $v_{C,j}$ is only adjacent to $y$ and to vertices $x_w^F$, with $1 \leq w \leq q$ ($x_{u,1}^T$, with $1 \leq u \leq q$, respectively) in $G[V_2]$.
In the first case, notice that $y$ is adjacent only to $v_{C,z}$, with $1 \leq z \leq p$, and $y_2$, none of which is adjacent to $r'_T$ ($r'_F$, respectively), thus implying that this path from $v_{C,j}$ to $r'_T$ (to $r'_F$, respectively) has length at least $4$. In the second case, $x_w^F$ ($x_{u,1}^T$, respectively) is adjacent to $r'_F$, $r_F$, $v_{C,j}$ and $x_{i,2}^T$ ($r'_T$, $r_T$, $v_{C,j}$, $x_{u,2}^T$, respectively), none of which is adjacent to $r'_T$ ($r'_F$, respectively), implying that also in this case the path from $v_{C,j}$ to $r'_T$ (to $r'_F$, respectively) has length at least $4$. Since $r'_T, r'_F , v_{C,j} \in V_2$, it follows that, for each $v_{C,j}$, the set $V_2$ contains a vertex $x_{i,1}^T$, with $1 \leq i \leq q$, and a vertex $x_h^F$, with $1 \leq h \leq q$, adjacent to $v_{C,j}$. By Lemma~\ref{lem:3-club(2)Prel} exactly one of $x_{i,1}^T$, $x_i^F$ belongs to $V_2$, thus we can construct a truth assignment $f$ as follows: $f(x_i):= \text{true}$ if $x_{i,1}^T \in V_2$, and $f(x_i):= \text{false}$ if $x_{i}^F \in V_2$. The assignment $f$ double-satisfies each clause of $\mathcal{C}$, since each $v_{C,j}$ is adjacent to a vertex $x_{i,1}^T \in V_2$, for some $i$ with $1 \leq i \leq q$, and a vertex $x_{h}^F \in V_2$, for some $h$ with $1 \leq h \leq q$. \qed\end{proof} Based on Lemma~\ref{lem:3-club(2)1} and Lemma~\ref{lem:3-club(2)2}, and on the NP-completeness of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ (see Theorem~\ref{teo:5DoubleSatHard}), we can conclude that $\Cover{3}{2}$ is NP-complete. \begin{theorem} \label{teo:cover3-2} $\Cover{3}{2}$ is NP-complete. \end{theorem} \begin{proof} By Lemma~\ref{lem:3-club(2)1} and Lemma~\ref{lem:3-club(2)2}, and from the NP-hardness of $\ensuremath{\mathsf{5 \mhyphen Double \mhyphen Sat}}$ (see Theorem~\ref{teo:5DoubleSatHard}), it follows that $\Cover{3}{2}$ is NP-hard.
The membership in NP follows easily from the fact that, given two subsets of vertices of $G$, it can be checked in polynomial time whether they induce $3$-clubs and cover all vertices of $G$. \qed\end{proof} \section{Hardness of Approximation} \label{sec:HardApprox} \sloppy In this section we consider the approximation complexity of $\MinCov{2}$ and $\MinCov{3}$: we prove that $\MinCov{2}$ is not approximable within factor $O(|V|^{1/2 - \varepsilon})$, for each $\varepsilon>0$, and that $\MinCov{3}$ is not approximable within factor $O(|V|^{1 - \varepsilon})$, for each $\varepsilon>0$. The proof for $\MinCov{2}$ is obtained with a reduction very similar to that of Section~\ref{sec:compl:2-3}, except that we reduce $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ to $\MinCov{2}$. \begin{corollary} \label{cor:cov2hard} Unless $P = NP$, $\MinCov{2}$ is not approximable within factor $O(|V|^{1/2 - \varepsilon})$, for each $\varepsilon>0$. \end{corollary} \begin{proof} We present a factor-preserving reduction from $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ to $\MinCov{2}$. Let $G^p=(V^p,E^p)$ be a graph input of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$; we compute in polynomial time a corresponding instance $G=(V,E)$ of $\MinCov{2}$ as in Section~\ref{sec:compl:2-3}. We first prove two results that are useful for the reduction. \begin{lemma} \label{appendix-lem:MinCover2HardApprox1} Let $G^p=(V^p,E^p)$ be a graph input of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\MinCov{2}$. Then, given a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$ consisting of $k$ cliques, we can compute in polynomial time a solution of $\MinCov{2}$ on $G=(V,E)$ consisting of $k$ $2$-clubs.
\end{lemma} \begin{proof} Consider a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$ where $\{V^p_1, V^p_2, $ $\dots$ $,V^p_k\}$ is the set of $k$ cliques that partition $V^p$. We define a solution of $\MinCov{2}$ on $G=(V,E)$ consisting of $k$ $2$-clubs as follows. For each $d, 1 \leq d \leq k$, let \[ V_d = \{ w_j \in V: v_j \in V^p_{d} \} \cup \{ w_{i,j}: v_i \in V^p_{d} \wedge i < j \} \] As in the proof of Lemma~\ref{lem:cover2-3-V1}, it follows that, for each $d$, $G[V_d]$ is a $2$-club. Furthermore, $G[V_1], \ldots, G[V_k]$ cover each vertex of $V$, as each $v_i \in V^p$ is covered by one of the cliques $V^p_1, V^p_2, \dots, V^p_k$. \qed\end{proof} \begin{lemma} \label{appendix-lem:MinCover2HardApprox2} Let $G^p=(V^p,E^p)$ be a graph input of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\MinCov{2}$. Then, given a solution of $\MinCov{2}$ on $G=(V,E)$ consisting of $k$ $2$-clubs, we can compute in polynomial time a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$ with $k$ cliques. \end{lemma} \begin{proof} Consider the $2$-clubs $G[V_1], \ldots, G[V_k]$ that cover $G$. As in the proof of Lemma~\ref{lem:cover2-3-V2}, the result follows from the fact that, by Lemma~\ref{lem:cover2-3-Prop1}, given $w_i, w_j \in V_d$, for each $d$ with $1 \leq d \leq k$, it holds that $\{ v_i,v_j \} \in E^p$.
As a consequence, we can define a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$ consisting of $k$ cliques as follows, for each $d, 1 \leq d \leq k$: \[ V^p_d=\{v_i : w_i \in V_d \} \] \qed\end{proof} The inapproximability of $\MinCov{2}$ follows from Lemma~\ref{appendix-lem:MinCover2HardApprox1} and Lemma~\ref{appendix-lem:MinCover2HardApprox2}, and from the inapproximability of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$, which is known to be inapproximable within factor $O(|V^p|^{1-\varepsilon'})$, for each $\varepsilon' > 0$, unless $P = NP$~\cite{DBLP:journals/toc/Zuckerman07} (where $G^p=(V^p,E^p)$ is an instance of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$). Hence $\MinCov{2}$ is not approximable within factor $O(|V^p|^{1-\varepsilon'})$, for each $\varepsilon' > 0$, unless $P = NP$. By the definition of $G=(V,E)$, it holds $|V|=|V^p|+ |E^p| \leq |V^p|^2$, hence, for each $\varepsilon>0$, $\MinCov{2}$ is not approximable within factor $O(|V|^{1/2 - \varepsilon})$, unless $P = NP$. \qed\end{proof} \sloppy Next, we show that $\MinCov{3}$ is not approximable within factor $O(|V|^{1 - \varepsilon})$, for each $\varepsilon>0$, unless $P = NP$, by giving a factor-preserving reduction from $\ensuremath{\mathsf{Minimum~Clique~Partition}}$. Consider an instance $G^p=(V^p,E^p)$ of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$; we construct an instance $G=(V,E)$ of $\MinCov{3}$ by adding a pendant vertex connected to each vertex of $V^p$. Formally, $V = \{ u_i, w_i: v_i \in V^p \}$, $E = \{ \{u_i,w_i\}: 1 \leq i \leq |V^p| \} \cup \{ \{ u_i,u_j \}: \{ v_i,v_j \} \in E^p \}$. We now prove the main properties of the reduction. \begin{lemma} \label{lem:MinCover3HardApprox1} Let $G^p=(V^p,E^p)$ be an instance of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\MinCov{3}$.
Then, given a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$ consisting of $k$ cliques, we can compute in polynomial time a solution of $\MinCov{3}$ on $G=(V,E)$ consisting of $k$ $3$-clubs. \end{lemma} \begin{proof} Consider a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$, consisting of the cliques $G^p[V^p_{1}], G^p[V^p_{2}], \dots, G^p[V^p_{k}]$. Then, for each $h$, with $1 \leq h \leq k$, define the following subset $V_h \subseteq V$: \[ V_h = \{ u_j,w_j \in V: v_j \in V^p_{h} \} \] Since $V^p_{1}, V^p_{2}, \dots, V^p_{k}$ partition $V^p$, it follows that $V_{1}, V_{2}, \dots, V_{k}$ partition $V$ (hence the subgraphs $G[V_1], \dots, G[V_k]$ cover $G$). Now, we show that each $G[V_h]$, with $1 \leq h \leq k$, is a $3$-club. First, notice that since $G^p[V^p_{h}]$ is a clique, the set $\{ u_j: u_j \in V_h \}$ induces a clique in $G$. It follows that, for each $u_i,w_j, w_z \in V_h$, $d_{G[V_h]}(u_i,w_j) \leq 2$ and $d_{G[V_h]}(w_j,w_z) \leq 3$, thus concluding the proof. \qed\end{proof} \begin{lemma} \label{lem:MinCover3HardApprox2} Let $G^p=(V^p,E^p)$ be an instance of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ and let $G=(V,E)$ be the corresponding instance of $\MinCov{3}$. Then, given a solution of $\MinCov{3}$ on $G=(V,E)$ consisting of $k$ $3$-clubs, we can compute in polynomial time a solution of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$ on $G^p=(V^p,E^p)$ consisting of $k$ cliques. \end{lemma} \begin{proof} Consider the $k$ $3$-clubs $G[V_1],\dots , G[V_k]$ that cover $G$. First, we show that, for each $V_h$, $1 \leq h \leq k$, and for all $w_i, w_j \in V_h$, with $1 \leq i,j \leq |V^p|$, it holds that $u_i,u_j \in V_h$. Indeed, notice that $N(w_i)=\{u_i\}$ and $N(w_j)=\{ u_j \}$; since by the definition of a $3$-club we must have $d_{G[V_h]}(w_i,w_j) \leq 3$, it follows that $u_i,u_j \in V_h$. Hence, we can define a set of cliques of $G^p$.
For each $V_h$, with $1 \leq h \leq k$, define a set $V^p_{h}$: \[ V^p_{h}= \{ v_i:w_i \in V_h\} \] Notice that each $V^p_{h}$, $1 \leq h \leq k$, induces a clique in $G^p$, as by construction if $v_i,v_j \in V^p_{h}$, then $w_i, w_j \in V_h$, and this implies $\{v_i,v_j\} \in E^p$. Notice that the cliques $V^p_{1}, \dots, V^p_{k}$ may overlap, but starting from $V^p_{1}, \dots, V^p_{k}$, we can easily compute in polynomial time a clique partition of $G^p$ consisting of at most $k$ cliques. \qed\end{proof} Lemma~\ref{lem:MinCover3HardApprox1} and Lemma~\ref{lem:MinCover3HardApprox2} imply the following result. \begin{theorem} \label{teo:FinalMinCover3HardApprox} $\MinCov{3}$ is not approximable within factor $O(|V|^{1 - \varepsilon})$, for each $\varepsilon>0$, unless $P = NP$. \end{theorem} \begin{proof} The result follows from Lemma~\ref{lem:MinCover3HardApprox1} and Lemma~\ref{lem:MinCover3HardApprox2}, as these results imply that we have defined a factor-preserving reduction, and from the inapproximability of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$, which is known to be inapproximable within factor $O(|V^p|^{1-\varepsilon})$, for each $\varepsilon > 0$, unless $P = NP$~\cite{DBLP:journals/toc/Zuckerman07} (where $G^p=(V^p,E^p)$ is an instance of $\ensuremath{\mathsf{Minimum~Clique~Partition}}$). Thus, $\MinCov{3}$ is not approximable within factor $O(|V^p|^{1-\varepsilon})$, for each $\varepsilon > 0$, unless $P = NP$, and since it holds $|V|=2|V^p|$, $\MinCov{3}$ is not approximable within factor $O(|V|^{1 - \varepsilon})$, unless $P = NP$. \qed\end{proof} \section{An Approximation Algorithm for $\MinCov{2}$} \label{sec:ApproxAlgo} In this section, we present an approximation algorithm for $\MinCov{2}$ that achieves an approximation factor of $2|V|^{1/2}\log^{3/2} |V|$. Notice that, due to the result in Section \ref{sec:HardApprox}, the approximation factor is almost tight. 
We start by describing the approximation algorithm; then we present the analysis of the approximation factor. \begin{algorithm}[H] \SetAlgoLined \KwData{a graph $G$} \KwResult{a cover $\mathcal{S}$ of $G$} $V': = V$; /* $V'$ is the set of uncovered vertices of $G$, initialized to $V$ */\\ $\mathcal{S} := \emptyset$\; \While{$V' \neq \emptyset$}{ Let $v$ be a vertex of $V$ such that $|N[v] \cap V'|$ is maximum\; Add $N[v]$ to $\mathcal{S}$\; $V' := V' \setminus N[v]$\; } \caption{Club-Cover-Approx} \label{algo:approx} \end{algorithm} Club-Cover-Approx is similar to the greedy approximation algorithm for $\mathsf{Minimum~Dominating~Set}$ and $\mathsf{Minimum~Set~Cover}$. While there exists an uncovered vertex of $G$, the Club-Cover-Approx algorithm greedily defines a $2$-club induced by the set $N[v]$ of vertices, with $v \in V$, such that $N[v]$ covers the maximum number of uncovered vertices (notice that some of the vertices of $N[v]$ may already be covered). While for $\mathsf{Minimum~Dominating~Set}$ the greedy choice made at each iteration is optimal, here the choice may be suboptimal, since restricting to closed neighborhoods may miss larger $2$-clubs; notice that, indeed, computing a maximum $2$-club is NP-hard. Clearly, the algorithm returns a feasible solution for $\MinCov{2}$, as each set $N[v]$ picked by the algorithm induces a $2$-club and, by construction, each vertex of $V$ is covered. Next, we show the approximation factor yielded by the Club-Cover-Approx algorithm for $\MinCov{2}$. First, consider the set $V_D$ of vertices $v \in V$ picked by the Club-Cover-Approx algorithm, so that $N[v]$ is added to $\mathcal{S}$. Notice that $|V_D|=|\mathcal{S}|$ and that $V_D$ is a dominating set of $G$, since, at each step, the vertex $v$ picked by the algorithm dominates each vertex in $N[v]$, and each vertex in $V$ is covered by the algorithm, so it belongs to some $N[v]$, with $v \in V_D$. Let $D$ be a minimum dominating set of the input graph $G$.
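Before turning to the analysis, the greedy procedure of Club-Cover-Approx can be sketched in a few lines of Python (an illustrative reimplementation for intuition only; the function name and the adjacency-dictionary representation are our own, not from the paper):

```python
# Sketch of Club-Cover-Approx: repeatedly pick a vertex v maximizing
# |N[v] ∩ V'| over the uncovered vertices V', and add the 2-club
# induced by the closed neighborhood N[v] to the cover.

def club_cover_approx(adj):
    """adj: dict mapping each vertex to the set of its neighbours.
    Returns a list of closed neighborhoods N[v] covering all vertices."""
    uncovered = set(adj)   # V': vertices not yet covered
    cover = []             # the family S of 2-clubs
    while uncovered:
        # choose v maximizing the number of newly covered vertices
        v = max(adj, key=lambda u: len(({u} | adj[u]) & uncovered))
        closed_nbhd = {v} | adj[v]   # N[v] induces a graph of diameter <= 2
        cover.append(closed_nbhd)
        uncovered -= closed_nbhd
    return cover

# Example: a path on 5 vertices.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
clubs = club_cover_approx(path)  # two closed neighborhoods suffice
```

On the path, the first iteration picks a degree-$2$ vertex (its closed neighborhood covers three vertices), and one more neighborhood covers the rest, illustrating that each chosen $N[v]$ may re-cover already covered vertices.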
By the property of the greedy approximation algorithm for $\mathsf{Minimum~Dominating~Set}$, the set $V_D$ has the following property \cite{DBLP:journals/jcss/Johnson74a}: \begin{equation} |V_D| \leq |D| \log |V| \label{eq:approx} \end{equation} The size of a minimum dominating set in graphs of diameter bounded by $2$ (hence $2$-clubs) has been considered in \cite{DBLP:journals/jgt/DesormeauxHHY14}, where the following result is proven. \begin{lemma}[\cite{DBLP:journals/jgt/DesormeauxHHY14}] \label{lem:DominatingSet} Let $H=(V_H,E_H)$ be a $2$-club, then $H$ has a dominating set of size at most $1 + \sqrt{|V_H|+\ln(|V_H|)}$. \end{lemma} The approximation factor $2|V|^{1/2}\log^{3/2} |V|$ for Club-Cover-Approx is obtained by combining Lemma \ref{lem:DominatingSet} and Equation \ref{eq:approx}. \begin{theorem} \label{teo:approx} Let $OPT$ be an optimal solution of $\MinCov{2}$, then Club-Cover-Approx returns a solution having at most $2|V|^{1/2}\log^{3/2} |V| |OPT|$ $2$-clubs. \end{theorem} \begin{proof} Let $D$ be a minimum dominating set of $G$ and let $OPT$ be an optimal solution of $\MinCov{2}$. We start by proving that $|D| \leq 2|OPT| |V|^{1/2} \log^{1/2} |V|$. For each $2$-club $G[C]$, with $C \subseteq V$, that belongs to $OPT$, by Lemma \ref{lem:DominatingSet} there exists a dominating set $D_C$ of $G[C]$ of size at most $1 + \sqrt{|C|+\ln(|C|)} \leq 2 \sqrt{|C|+\ln(|C|)}$. Since $|C| \leq |V|$, it follows that each $2$-club $G[C]$ that belongs to $OPT$ has a dominating set of size at most $2 \sqrt{|V|+\ln(|V|)}$. Consider $D'=\bigcup_{C \in OPT} D_C$. It follows that $D'$ is a dominating set of $G$, since the $2$-clubs in $OPT$ cover $G$. Since $D'$ contains $|OPT|$ sets $D_C$ and $|D_C| \leq 2\sqrt{|V|+\ln(|V|)}$, for each $G[C] \in OPT$, it follows that $|D'| \leq 2 |OPT| \sqrt{|V|+\ln(|V|)}$. Since $D$ is a minimum dominating set, it follows that $|D| \leq |D'| \leq 2|OPT| \sqrt{|V|+\ln(|V|)}$.
By Equation \ref{eq:approx}, it holds that $|V_D| \leq |D| \log |V|$, thus $|V_D| \leq 2|V|^{1/2}\ln^{1/2} |V| \log |V|\, |OPT| \leq 2|V|^{1/2}\log^{3/2} |V|\, |OPT|$. \qed\end{proof} \section{Conclusion} \label{sec:conclusion} There are some interesting directions for the problem of covering a graph with $s$-clubs. From the computational complexity point of view, the main open problem is whether $\Cover{2}{2}$ is NP-complete or is in P. Moreover, it would be interesting to study the computational/parameterized complexity of the problem in specific graph classes, as done for {\sf Minimum Clique Partition} \cite{DBLP:journals/dam/CerioliFFMPR08,DBLP:journals/ita/CerioliFFP11,DBLP:journals/algorithmica/PirwaniS12,DBLP:journals/gc/DumitrescuP11}. \bibliographystyle{splncs03}
\section{Introduction} The problem of classifying constant scalar curvature compact hypersurfaces in Euclidean space was proposed by Yau in the problem section of \cite{yau}. One would like to know, for example, if Alexandrov's theorem \cite{aleksandrov}, which states that spheres are the only compact constant mean curvature hypersurfaces embedded in Euclidean space, still holds when the mean curvature is replaced by the scalar curvature. This question was answered positively by Ros in \cite{ros}, by modifying Reilly's proof of Alexandrov's theorem \cite{reilly}. After this, different proofs and generalizations of Ros's result were given; for example, using the Minkowski formulae \cite{minkowski} and the Heintze-Karcher inequality \cite{HK}, Ros extended his result to any $r$-cur\-va\-ture in \cite{ros2}. This approach also works in hyperbolic space, where the same conclusion holds \cite{montiel2}. Recall that a manifold of the form $M^{n+1}=\mathbb{R}\times_{\rm exp}P^n$, with $P$ a complete Riemannian manifold, is called a {\it pseudo-hyperbolic space} \cite{tashiro}. When $P^n$ is Ricci-flat, $M^{n+1}$ is Einstein with negative Ricci curvature, and when $P^n$ is flat, $M^{n+1}$ is a hyperbolic space form. In \cite{montiel}, among other results, Montiel proves the following Alexandrov-type theorem: if $\Sigma$ is either a constant mean curvature or a constant scalar curvature hypersurface bounding a domain in a pseudo-hyperbolic space $\mathbb{R}\times_{\rm exp}P^n$, with $n \geq 2$ and $P^n$ a compact Ricci-flat manifold, then it is either a geodesic sphere or a slice $\lbrace s \rbrace \times P^n$, $s \in \mathbb{R}$. The aim of this paper is to give a spinorial proof of the previous result.
For this, we will prove a Heintze-Karcher inequality for spin manifolds carrying a nontrivial imaginary Killing spinor: \begin{Th}\label{main} Let $(M,g)$ be an $(n+1)$-dimensional connected Riemannian spin manifold carrying a nontrivial imaginary Killing spinor $\psi$ and let $\Sigma$ be a hypersurface bounding a compact domain $\Omega$ in $M$. Let $V=|\psi|^2$ and suppose the mean curvature $H$ of the hypersurface $\Sigma$ is positive everywhere. Then \begin{equation}\label{ineq1} \int_\Sigma\frac{V}{H}\,d\Sigma+\int_\Sigma\langle\nabla V,N\rangle\,d\Sigma\geq0, \end{equation} where $\nabla$ is the Levi-Civita connection of $(M,g)$ and $N$ is the inward pointing unit vector field normal to $\Sigma$. Moreover, equality holds if and only if $\Sigma$ is totally umbilical. \end{Th} Let $(M,g)$ be as in Theorem \ref{main}, that is, $(M,g)$ is a Riemannian spin manifold carrying a nontrivial imaginary Killing spinor. When $M$ is complete, Baum proved in \cite{baum} that $M$ is a warped product $\mathbb{R}\times_{\rm exp}P$, with the $n$-dimensional manifold $P$ being a complete Riemannian spin manifold admitting a nontrivial parallel spinor. Hence, by Wang's classification \cite{wang}, $P$ is a flat manifold, a Calabi-Yau manifold, a hyper-K\"{a}hler manifold or one of certain eight- or seven-dimensional Riemannian manifolds with special holonomy. Also, as we will see later, the function $V$ satisfies $$ \Delta V = (n+1)V. $$ Thus, Theorem \ref{main} can be rewritten as follows: \begin{Cor}\label{main2} Let $\Sigma$ be a connected compact hypersurface bounding a compact domain in a pseudo-hyperbolic space $M=\mathbb{R}\times_{\rm exp}P$, where $P$ is a complete Riemannian spin manifold admitting a nontrivial parallel spinor. Assume that the mean curvature $H$ of the hypersurface $\Sigma$ is positive everywhere. Then \begin{equation}\label{ineq2} \int_\Sigma\frac{V}{H}\,d\Sigma\geq (n+1)\int_\Omega V\,dvol.
\end{equation} Moreover, equality holds if and only if $\Sigma$ is totally umbilical. \end{Cor} An interesting special case of Corollary \ref{main2} is when $P$ is the Euclidean space $\mathbb{R}^n$, which implies that $M$ is the hyperbolic space $\mathbb{H}^{n+1}$. Note that, in this case, the conclusion ``$\Sigma$ is totally umbilical'' in the equality case can be changed to ``$\Sigma$ is a geodesic sphere''. This case follows from a very general result of Brendle for warped product ambient manifolds \cite{brendle} (see also \cite{Qiu-Xia,ww}). A spinorial proof of this special case was given by Hijazi, Montiel and Raulot in \cite{hss}. They accomplish this by realizing $\mathbb{H}^{n+1}$ as a spacelike hypersurface in Minkowski space $\mathbb{R}^{n+1,1}$ and using spinorial techniques in that ambient space. Our proof, in turn, is completely intrinsic and valid for a large class of ambient spaces; one of its main ingredients is a holographic principle for the existence of imaginary Killing spinors, also due to Hijazi, Montiel and Raulot \cite{hor} (see Theorem \ref{holographic}). As mentioned before, in \cite{montiel}, Montiel showed an Alexandrov-type theorem for hypersurfaces with constant mean curvature or constant scalar curvature in some pseudo-hyperbolic spaces. Using spinorial techniques, Hijazi, Montiel and Roldan gave another proof of the constant mean curvature case \cite{hor2}. Here, we give another proof of the constant scalar curvature case; our proof is spinorial in the sense that it uses inequality (\ref{ineq1}), which was proved using spinorial techniques. \begin{Cor}\label{main3} Let $\Sigma$ be a connected hypersurface bounding a domain in a pseudo-hyperbolic space $\mathbb{R}\times_{\rm exp}P$, where $P$ is a complete Riemannian spin manifold admitting a nontrivial parallel spinor.
If the scalar curvature of $\Sigma$ is constant, then it is either a round geodesic hypersphere (and, in this case, $P$ must be flat) or a slice $\{s\}\times P,\ s\in\mathbb{R}$. \end{Cor} \section{Preliminaries} In this section, we recall some definitions and properties of the spin geometry of hypersurfaces embedded in a spin manifold, as it is done in \cite{hor}. Let $(M,\langle\ ,\ \rangle)$ be an $(n+1)$-dimensional Riemannian spin manifold. We fix a spin structure and let $\mathbb{S}M$ denote the corresponding spinor bundle. We denote by $\overline{\nabla}$ both the Levi-Civita connection of $(M,\langle\ ,\ \rangle)$ and its lift to $\mathbb{S}M$, and by $\overline{\gamma}:\mathbb{C}\ell(M)\to{\rm End}_\mathbb{C}(\mathbb{S}M)$ the Clifford multiplication. On the spinor bundle $\mathbb{S}M$ there exists a natural Hermitian structure (see \cite{lawson}) denoted, as the Riemannian metric on $M$, by $\langle\ ,\ \rangle$. The spinorial Levi-Civita connection and the Hermitian product are compatible with the Clifford multiplication and with each other; that is, for any $X,Y\in\Gamma(TM)$ and any $\psi,\varphi\in\Gamma(\mathbb{S}M)$ the following identities hold: \begin{align} X\langle\psi,\varphi\rangle&=\langle\overline{\nabla}_X\psi,\varphi\rangle+\langle\psi,\overline{\nabla}_X\varphi\rangle\label{c1};\\ \langle\overline{\gamma}(X)\psi,\varphi\rangle&=-\langle\psi,\overline{\gamma}(X)\varphi\rangle\label{c2};\\ \overline{\nabla}_X\left(\overline{\gamma}(Y)\psi\right)&=\overline{\gamma}(\overline{\nabla}_XY)\psi+\overline{\gamma}(Y)\overline{\nabla}_X\psi\label{c3}. \end{align} Also, the Dirac operator $\overline{D}$ on $\mathbb{S}M$ is locally given by \begin{equation} \overline{D}=\sum_{i=1}^{n+1}\overline{\gamma}(e_i)\overline{\nabla}_{e_i}, \end{equation} where $\{e_1,\ldots,e_{n+1}\}$ is a local orthonormal frame of $TM$. Consider an orientable hypersurface $\Sigma$ immersed into $M$.
The Riemannian metric on $M$ induces a Riemannian metric on $\Sigma$, also denoted by $\langle\ ,\ \rangle$, whose Levi-Civita connection $\nabla^\Sigma$ satisfies the Riemannian Gauss formula \begin{equation}\label{gauss} \nabla^\Sigma_XY=\overline{\nabla}_XY-\langle A(X),Y\rangle N, \end{equation} where $X,Y$ are vector fields tangent to the hypersurface $\Sigma$, the vector field $N$ is the inward pointing unit vector field normal to $\Sigma$, and $A$ is the shape operator with respect to $N$, that is, $$\overline{\nabla}_XN=-AX,\quad\forall X\in\Gamma(T\Sigma).$$ Since the normal bundle of $\Sigma$ is trivial, the hypersurface $\Sigma$ inherits a spin structure from the one of the ambient manifold $M$. Thus, $\Sigma$ has a Hermitian spinor bundle $\mathbb{S}\Sigma$ in the sense of \cite{lawson}, i.e., a \textit{Dirac bundle}. We will denote by $\gamma^\Sigma$ and $D^\Sigma$, respectively, the Clifford multiplication and the intrinsic Dirac operator on $\Sigma$. We call this bundle the \textit{intrinsic} spinor bundle. We compare the intrinsic spinor bundle $\mathbb{S}\Sigma$ to the restriction ${\mathbb S}\!\!\!/\,\!\Sigma={\mathbb{S} M}_{|\Sigma}$. This bundle is isomorphic to either ${\mathbb S}\Sigma$ or ${\mathbb S}\Sigma\oplus{\mathbb S}\Sigma$ according as the dimension $n$ of $\Sigma$ is even or odd (\cite{bar,morel}). Since the $n$-dimensional Clifford algebra is the even part of the $(n+1)$-dimensional Clifford algebra, the Clifford multiplication $\gamma\!\!\!/:\mathbb{C}\ell(\Sigma)\to{\rm End}_\mathbb{C}({\mathbb S}\!\!\!/\,\!\Sigma)$ is given by \begin{equation}\label{cliff} \gamma\!\!\!/(X)\psi=\overline{\gamma}(X)\overline{\gamma}(N)\psi, \end{equation} for any $\psi\in\Gamma({\mathbb S}\!\!\!/\,\!\Sigma)$ and any $X\in\Gamma(T\Sigma)$. Consider on ${\mathbb S}\!\!\!/\,\!\Sigma$ the Hermitian metric $\langle\ ,\ \rangle$ induced from that of ${\mathbb{S} M}$.
This metric satisfies the compatibility condition (\ref{c2}) if one considers on $\Sigma$ the Riemannian metric induced from $M$ and the Clifford multiplication $\gamma\!\!\!/$ defined by (\ref{cliff}). The Gauss formula (\ref{gauss}) implies that the spin connection $\nabla\!\!\!\!/\,$ on ${\mathbb S}\!\!\!/\,\!\Sigma$ is given by the following spinorial Gauss formula \begin{equation}\label{nb} \nabla\!\!\!\!/\,_X\psi=\overline{\nabla}_X\psi-\frac{1}{2}\gamma\!\!\!/(AX)\psi \end{equation} for any $\psi\in\Gamma({\mathbb S}\!\!\!/\,\!\Sigma)$ and any $X\in\Gamma(T\Sigma)$. Observe that the compatibility conditions (\ref{c1}), (\ref{c2}) and (\ref{c3}) are satisfied by $({\mathbb S}\!\!\!/\,\!\Sigma,\gamma\!\!\!/,\langle\ ,\ \rangle,\nabla\!\!\!\!/\,)$. The \textit{extrinsic} Dirac operator $D\!\!\!\!/\,=\gamma\!\!\!/\circ\nabla\!\!\!\!/\,$ on $\Sigma$ defines a first order elliptic operator acting on sections of ${\mathbb S}\!\!\!/\,\!\Sigma$. By (\ref{nb}), for any spinor field $\psi\in\Gamma({\mathbb S}\!\!\!/\,\!\Sigma)$ we have \begin{equation}\label{extdirac} D\!\!\!\!/\,\psi=\sum_{i=1}^n\gamma\!\!\!/(e_i)\nabla\!\!\!\!/\,_{e_i}\psi=\frac{n}{2}H\psi-\overline{\gamma}(N)\sum_{i=1}^n\overline{\gamma}(e_i)\overline{\nabla}_{e_i}\psi, \end{equation} and \begin{equation} D\!\!\!\!/\,(\overline{\gamma}(N)\psi)=-\overline{\gamma}(N)D\!\!\!\!/\,\psi, \end{equation} where $\{e_1,\ldots,e_n\}$ is a local orthonormal frame of $T\Sigma$ and $H=\frac{1}{n}\textrm{trace}A$ is the mean curvature of $\Sigma$ in $M$. Thus, we have on $\Sigma$ an intrinsic spinorial structure $({\mathbb S}\Sigma,\nabla^\Sigma,\gamma^\Sigma,D^\Sigma)$ and an extrinsic structure $({\mathbb S}\!\!\!/\,\!\Sigma,\nabla\!\!\!\!/\,,\gamma\!\!\!/,D\!\!\!\!/\,)$. The dimension of $\Sigma$ plays an important role in the identification of these structures.
In fact, if $n$ is even, then \begin{equation}\label{restrict_even} ({\mathbb S}\!\!\!/\,\!\Sigma,\nabla\!\!\!\!/\,,\gamma\!\!\!/,D\!\!\!\!/\,)\equiv({\mathbb S}\Sigma,\nabla^\Sigma,\gamma^\Sigma,D^\Sigma) \end{equation} and, if $n$ is odd, then \begin{equation}\label{restrict_odd} ({\mathbb S}\!\!\!/\,\!\Sigma,\nabla\!\!\!\!/\,,\gamma\!\!\!/,D\!\!\!\!/\,)\equiv({\mathbb S}\Sigma\oplus{\mathbb S}\Sigma,\nabla^\Sigma\oplus\nabla^\Sigma,\gamma^\Sigma\oplus-\gamma^\Sigma,D^\Sigma\oplus-D^\Sigma). \end{equation} Now, we recall the definition of a \textit{chirality operator}. A chirality operator $\omega$ on a Dirac bundle $(\mathcal{E}M,\gamma,\nabla,\langle\ ,\ \rangle)$ is an endomorphism $\omega:\Gamma(\mathcal{E}M)\to\Gamma(\mathcal{E}M)$ such that \begin{align} \omega^2=\textrm{Id}_{\mathcal{E}M},&\qquad \langle\omega\psi,\omega\varphi\rangle=\langle\psi,\varphi\rangle,\label{chi1}\\ \omega(\gamma(X)\psi)=-\gamma(X)\omega\psi,&\qquad\nabla_X(\omega\psi)=\omega(\nabla_X\psi),\label{chi2} \end{align} for any $X\in\Gamma(TM)$ and any $\psi,\varphi\in\Gamma(\mathcal{E}M)$. Now, we set up a new Dirac bundle with a chirality operator. Consider the vector bundle $$\mathcal{E}M:=\left\{ \begin{array}{ll} {\mathbb{S} M}&\textrm{ if $n+1$ is even},\\ {\mathbb{S} M}\oplus{\mathbb{S} M}&\textrm{ if $n+1$ is odd,} \end{array} \right.$$ equipped with a Clifford multiplication $\gamma$ defined by $$\gamma=\left\{ \begin{array}{ll} \overline{\gamma}&\textrm{ if $n+1$ is even},\\ \overline{\gamma}\oplus-\overline{\gamma}&\textrm{ if $n+1$ is odd,} \end{array} \right.$$ and a Levi-Civita connection $$\nabla=\left\{ \begin{array}{ll} \overline{\nabla}&\textrm{ if $n+1$ is even},\\ \overline{\nabla}\oplus\overline{\nabla}&\textrm{ if $n+1$ is odd}.
\end{array} \right.$$ Furthermore, $\langle\ ,\ \rangle$ denotes the Hermitian scalar product given by $\langle\ ,\ \rangle_M$ for $n$ odd and by $$\langle\Psi,\Phi\rangle:=\langle\psi_1,\varphi_1\rangle_M+\langle\psi_2,\varphi_2\rangle_M$$ for $n$ even, for any $\Psi=(\psi_1,\psi_2)$, $\Phi=(\varphi_1,\varphi_2)\in\Gamma(\mathcal{E}M)$. It is straightforward to verify that $(\mathcal{E}M,\nabla,\gamma)$ is a Dirac bundle in the sense of \cite{lawson}. The Dirac-type operator acting on sections of $\mathcal{E}M$ and defined by $D:=\gamma\circ\nabla$ is explicitly given by $$D=\left\{ \begin{array}{ll} \overline{D}&\textrm{ if $n+1$ is even,} \\ \overline{D}\oplus-\overline{D}&\textrm{ if $n+1$ is odd.} \end{array} \right.$$ As it is done in \cite{hor}, let us examine this bundle and its restriction $(\mathcal{E}\!\!\!/,\nabla\!\!\!\!/\,,\gamma\!\!\!/)$ to $\Sigma$. If $n+1$ is even, the operator $\omega:=\gamma(\omega_{n+1}^\mathbb{C})$ defines a chirality operator on ${\mathbb{S} M}$, where $\omega_{n+1}^\mathbb{C}=i^{\left[\frac{n+2}{2}\right]}e_1\cdot\ldots\cdot e_{n+1}$ is the complex volume element. Moreover, the spinor bundle splits into $$\mathcal{E}M={\mathbb{S} M}={\mathbb S}^+M\oplus{\mathbb S}^-M,$$ where ${\mathbb S}^\pm M$ are the $\pm1$-eigenspaces of the endomorphism $\omega$. On the other hand, the restricted spinor bundle $$\mathcal{E}\!\!\!/:=\mathcal{E}M_{|\Sigma}={\mathbb{S} M}_{|\Sigma}={\mathbb S}\!\!\!/\,\!\Sigma$$ can be identified with the intrinsic data of $\Sigma$ as in (\ref{restrict_odd}). If $(n+1)$ is odd, $\mathcal{E}M={\mathbb{S} M}\oplus{\mathbb{S} M}$ and the map $$ \begin{array}{rcl} \omega:\Gamma(\mathcal{E}M)&\longrightarrow&\Gamma(\mathcal{E}M)\\ \begin{pmatrix} \psi_1\\\psi_2 \end{pmatrix}&\longmapsto&\begin{pmatrix} \psi_2\\\psi_1 \end{pmatrix}, \end{array} $$ satisfies the properties (\ref{chi1}) and (\ref{chi2}), so that it defines a chirality operator on $\mathcal{E}M$.
The restriction of $\mathcal{E}M$ to $\Sigma$ is given by $$\mathcal{E}\!\!\!/:=\mathcal{E}M_{|\Sigma}={\mathbb S}\!\!\!/\,\!\Sigma\oplus{\mathbb S}\!\!\!/\,\!\Sigma$$ and can be identified with two copies of the intrinsic spinor bundle of $\Sigma$ as in (\ref{restrict_even}). The extrinsic Dirac operator acting on sections of $\mathcal{E}\!\!\!/$ is defined by $D\!\!\!\!/\,:=\gamma\!\!\!/\circ\nabla\!\!\!\!/\,$. We define the modified Dirac-type operators on $\mathcal{E}M$ and $\mathcal{E}\!\!\!/$, respectively, by \begin{equation}\label{modified_dirac_M} D^{\pm}:=D\mp\frac{n+1}{2}i\textrm{ Id}_{\mathcal{E}M} \end{equation} and \begin{equation}\label{dirac+-} D\!\!\!\!/\,^\pm:=D\!\!\!\!/\,\pm\frac{n}{2}i\gamma(N)\textrm{ Id}_{\mathcal{E}\!\!\!/}. \end{equation} If $M$ admits an imaginary Killing spinor field $\psi_\pm\in\Gamma(\mathcal{E}M)$ with Killing number $\pm\frac{i}{2}$, i.e., $$\nabla_X\psi_{\pm}=\pm\frac{i}{2}\gamma(X)\psi_\pm,$$ for any $X\in\Gamma(TM),$ we can show that \begin{equation} D^\mp\psi_\pm=0 \quad\textrm{and}\quad D\!\!\!\!/\,^\mp\psi_\pm=\frac{nH}{2}\psi_\pm. \end{equation} The previous discussion can be summarized in the following proposition (see \cite{hor}): \begin{Prop}[\cite{hor}] The bundle $(\mathcal{E}M,\gamma,\nabla)$ is a Dirac bundle equipped with a chirality operator $\omega$ whose associated Dirac-type operator $D:=\gamma\circ\nabla$ is a first order elliptic differential operator.
The restricted triplet $(\mathcal{E}\!\!\!/,\gamma\!\!\!/,\nabla\!\!\!\!/\,)$ is also a Dirac bundle for which the spinorial Gauss formula \begin{equation} \nabla\!\!\!\!/\,_X\psi=\nabla_X\psi-\frac{1}{2}\gamma\!\!\!/(AX)\psi \end{equation} holds for all $\psi\in\Gamma(\mathcal{E}\!\!\!/)$ and $X\in\Gamma(T\Sigma)$, and such that \begin{equation} D\!\!\!\!/\,\psi=\frac{n}{2}H\psi-\gamma(N)D\psi-\nabla_N\psi \end{equation} and \begin{equation} D\!\!\!\!/\,(\gamma(N)\psi)=-\gamma(N)D\!\!\!\!/\,\psi, \end{equation} where $D\!\!\!\!/\,:=\gamma\!\!\!/\circ\nabla\!\!\!\!/\,$ is the extrinsic Dirac-type operator on $\mathcal{E}\!\!\!/$. Moreover, the Dirac-type operators $$D\!\!\!\!/\,^\pm:=D\!\!\!\!/\,\pm\frac{n}{2}i\gamma(N)\textrm{Id}_{\mathcal{E}\!\!\!/}$$ are first order differential operators which only depend on the Riemannian and spin structures of $\Sigma$. \end{Prop} Now, consider the operator $$G:=\gamma(N)\omega: \Gamma(\mathcal{E}\!\!\!/)\to\Gamma(\mathcal{E}\!\!\!/).$$ This endomorphism is a self-adjoint involution with respect to the pointwise Hermitian scalar product $\langle\ ,\ \rangle$, where $\omega$ is the chirality operator on $\mathcal{E}M$. It induces an orthogonal decomposition of $\mathcal{E}\!\!\!/$: \begin{equation} \mathcal{E}\!\!\!/=\mathcal{V}^+\oplus\mathcal{V}^-, \end{equation} where $\mathcal{V}^\pm$ are the eigensubbundles over $\Sigma$ corresponding to the $\pm1$-eigenvalues of $G$. Thus, we define the associated projections on $\mathcal{V}^\pm$: \begin{equation} \begin{array}{rcl} P_\pm:L^2(\mathcal{E}M)&\longrightarrow&L^2(\mathcal{V}^\pm)\\ \psi&\longmapsto&P_\pm\psi:=\frac{1}{2}(\textrm{Id}_{\mathcal{E}M}\pm\gamma(N)\omega)\psi, \end{array} \end{equation} where $L^2(\mathcal{E}M)$ and $L^2(\mathcal{V}^\pm)$ denote, respectively, the spaces of $L^2$-integrable sections of $\mathcal{E}M$ and $\mathcal{V}^\pm$.
The projections $P_\pm$ are orthogonal to each other and are self-adjoint with respect to the pointwise Hermitian scalar product $\langle\ ,\ \rangle$. Also, we can check that \begin{equation} D\!\!\!\!/\,^+P_\pm=P_\mp D\!\!\!\!/\,^+. \end{equation} We end this section by stating the following result, due to Hijazi, Montiel and Raulot, which will be a key ingredient in the proof of Theorem \ref{main}. \begin{Th} \label{holographic} Let $\Omega$ be a compact, connected Riemannian spin manifold with smooth boundary $\Sigma$. Assume that the scalar curvature of $\Omega$ satisfies $R\geq -n(n+1)k^2$ for some $k>0$ and the mean curvature $H$ of $\Sigma$ is positive. Then for all $\Phi\in\Gamma(\mathcal{E}\!\!\!/)$, one has \begin{equation}\label{ineq} \int_\Sigma\left(\frac{1}{H}|D\!\!\!\!/\,^+\Phi|^2-\frac{n^2}{4}H|\Phi|^2\right)\,d\Sigma\geq0. \end{equation} Moreover, equality occurs for $\Phi\in\Gamma(\mathcal{E}\!\!\!/)$ if and only if there exist two imaginary Killing spinor fields $\Psi^+,\Psi^-\in\Gamma(\mathcal{E}\!\!\!/)$ with Killing number $-(i/2)$ such that $\mathcal{P}_+\Psi^+=\mathcal{P}_+\Phi$ and $\mathcal{P}_-\Psi^-=\mathcal{P}_-\Phi$. \end{Th} \section{Proof of Theorem~\ref{main}} In this section we present the proof of Theorem~\ref{main}. \begin{proof} Assume $H>0$ on $\Sigma$. Let $\psi$ be an imaginary Killing spinor field with Killing number $i/2$ on $\Omega$ such that $V=|\psi|^2$ (see \cite{baum}). We take the spinor field $\varphi=\psi|_\Sigma$ on $\Sigma$; for such $\varphi$, we have \begin{eqnarray*} D\!\!\!\!/\,\varphi&=&\frac{nH}{2}\psi-\gamma(N)\sum_{i=1}^n\gamma(e_i)\nabla_{e_i}\psi\\ &=&\frac{nH}{2}\varphi+\frac{in}{2}\gamma(N)\psi, \end{eqnarray*} and \begin{equation*} D\!\!\!\!/\,^+\varphi=\frac{nH}{2}\varphi+in\gamma(N)\psi.
\end{equation*} Thus, $$|D\!\!\!\!/\,^+\varphi|^2=\frac{n^2H^2}{4}|\varphi|^2+n^2|\psi|^2+n^2H\Re\langle i\gamma(N)\psi,\psi\rangle.$$ Hence, we get $$\int_\Sigma\frac{1}{H}|D\!\!\!\!/\,^+\varphi|^2\,d\Sigma=\int_\Sigma\frac{n^2H}{4}|\varphi|^2\,d\Sigma+n^2\left(\int_\Sigma\frac{|\psi|^2}{H}\,d\Sigma+\int_\Sigma\Re\langle i\gamma(N)\psi,\psi\rangle\,d\Sigma\right).$$ Now, we apply (\ref{ineq}) to obtain \begin{equation}\label{ineqspin} \int_\Sigma\frac{|\psi|^2}{H}\,d\Sigma+\int_\Sigma\Re\langle i\gamma(N)\psi,\psi\rangle\,d\Sigma\geq0. \end{equation} On the other hand, we have that \begin{equation}\label{lemma} \langle\nabla V,N\rangle=\Re\langle i\gamma(N)\psi,\psi\rangle. \end{equation} Indeed, using the fact that $\nabla_X\psi=\frac{i}{2}\gamma(X)\psi$, for all $X\in\Gamma(TM)$, we get \begin{eqnarray*} \langle\nabla V,N\rangle&=&N|\psi|^2\\ &=&\langle\nabla_N\psi,\psi\rangle+\langle\psi,\nabla_N\psi\rangle\\ &=&2\Re\langle\nabla_N\psi,\psi\rangle\\ &=&\Re\langle i\gamma(N)\psi,\psi\rangle. \end{eqnarray*} Thus, substituting (\ref{lemma}) in (\ref{ineqspin}) and recalling that $V=|\psi|^2$, we obtain $$\int_\Sigma\frac{V}{H}\,d\Sigma+\int_\Sigma\langle\nabla V,N\rangle\,d\Sigma\geq0.$$ The equality holds if and only if we have equality in (\ref{ineq}). In this case, there exist two imaginary Killing spinor fields $\Psi^+,\Psi^-\in\Gamma(\mathcal{E}\!\!\!/)$ with Killing number $-(i/2)$ such that $\mathcal{P}_+\Psi^+=\mathcal{P}_+\varphi$ and $\mathcal{P}_-\Psi^-=\mathcal{P}_-\varphi$ on $\Sigma$. Then $\varphi=\mathcal{P}_+\Psi^++\mathcal{P}_{-}\Psi^-$, and hence \begin{eqnarray*} D\!\!\!\!/\,^+\varphi&=&D\!\!\!\!/\,^+(\mathcal{P}_+\Psi^+)+D\!\!\!\!/\,^+(\mathcal{P}_{-}\Psi^-)\\ &=&\mathcal{P}_{-}(D\!\!\!\!/\,^+\Psi^+)+\mathcal{P}_+(D\!\!\!\!/\,^+\Psi^-)\\ &=&\frac{nH}{2}(\mathcal{P}_{-}\Psi^++\mathcal{P}_+\Psi^-).
\end{eqnarray*} We deduce that $$\frac{2}{nH}D\!\!\!\!/\,^+\varphi+\varphi=\Psi^++\Psi^-=\widetilde{\Psi},$$ that is, $$\frac{2}{nH}\left(\frac{nH}{2}\psi+in\gamma(N)\psi\right)+\psi=\widetilde{\Psi},$$ and hence $$\psi+\frac{i\gamma(N)}{H}\psi =\frac{1}{2}\widetilde{\Psi}.$$ The spinor field $\widetilde{\Psi}$ is imaginary Killing since $\Psi^+$ and $\Psi^-$ are; moreover, $\widetilde{\Psi}$ has Killing number $-i/2$. Hence the spinor field $$\psi+\frac{i\gamma(N)}{H}\psi$$ is the restriction of an imaginary Killing spinor field with Killing number $-i/2$; therefore, for all $X\in\Gamma(T\Sigma)$: $$H\gamma(X)\psi-\gamma(AX)\psi-\frac{1}{H}X(H)\gamma(N)\psi=0.$$ Now we choose $X=X_i\in\Gamma(T\Sigma)$, where $X_i$ is a principal direction of $\Sigma$ with associated principal curvature $\lambda_i$. Taking the scalar product of the last equality with $\gamma(X_i)\psi$, we get $$H|X_i|^2|\psi|^2-\lambda_i|X_i|^2|\psi|^2=0.$$ Since $|\psi|^2=V\geq1$ and at each point $p\in\Sigma$ we can choose a basis $(X_1,\ldots,X_n)$ of $T_p\Sigma$ such that $X_i$ is a direction of principal curvature, we get $\lambda_i=H$ on $\Sigma$ for all $i\in\{1,\ldots,n\}$, so $$A=H\,{\rm Id}.$$ Thus, $\Sigma$ is totally umbilical. \end{proof} \section{Proof of Corollary~\ref{main2}} \begin{proof} Let $\psi$ be an imaginary Killing spinor with Killing number $i/2$ (after a rescaling of the metric). Thus, for each $X\in\Gamma(TM)$ we have \begin{equation}\label{killing} \nabla_X\psi=\frac{i}{2}\gamma(X)\psi. \end{equation} Setting $V=|\psi|^2$, one can check from (\ref{killing}) that $V$ satisfies \begin{equation}\label{hess} {\rm Hess}\,V=V\langle\ ,\ \rangle. \end{equation} Thus, tracing (\ref{hess}), it follows that $$\Delta V=(n+1)V.$$ Integrating this equation over the compact domain $\Omega$ and applying the divergence theorem (recall that $N$ is the inward pointing unit normal), we get $$\int_\Sigma\langle\nabla V,N\rangle\,d\Sigma=-\int_\Omega\Delta V\,dvol=-(n+1)\int_\Omega V\,dvol,$$ so (\ref{ineq1}) yields (\ref{ineq2}). Now, if the equality holds in (\ref{ineq2}), by Theorem \ref{main}, $\Sigma$ must be totally umbilical.
In particular, when $P=\mathbb{R}^n$, the manifold $M$ is isometric to the hyperbolic space $\mathbb{H}^{n+1}$. Thus, $\Sigma$ is a totally umbilical hypersurface of $\mathbb{H}^{n+1}$ and so it is a geodesic sphere. \end{proof} \section{Proof of Corollary~\ref{main3}} We begin this section by recalling some facts about the geometry of hypersurfaces in Riemannian manifolds. On a given hypersurface $\Sigma$ in $M$, we define the $k$-th mean curvature function $$H_k=H_k(\Lambda)=\frac{1}{\binom{n}{k}}\sigma_k(\Lambda),$$ where $\Lambda=(\lambda_1,\cdots,\lambda_n)$ are the principal curvature functions on $\Sigma$ and the homogeneous polynomial $\sigma_k$ of degree $k$ is the $k$-th elementary symmetric function $$\sigma_k(\Lambda)=\sum_{i_1<\cdots <i_k}\lambda_{i_1}\cdots\lambda_{i_k}.$$ The next proposition gives a relation among these curvatures: \begin{Prop}(See \cite{garding,montiel2}) Let $x:\Sigma\to M$ be an isometric immersion between two Riemannian manifolds of dimension $n$ and $(n+1)$ respectively, and assume $\Sigma$ is connected. Suppose that there is a point of $\Sigma$ at which all principal curvatures are positive. If there exists $k\in\{1,\ldots, n\}$ such that $H_k>0$ on $\Sigma$, then \begin{equation}\label{garding} H\geq H_2^{1/2}\geq\cdots\geq H_k^{1/k}\quad{\rm on}\ \Sigma. \end{equation} If $k\geq 2$, equality holds only at umbilical points. \end{Prop} Now, if $\nabla$ denotes the Levi-Civita connection on $M$ and $N$ the unit normal vector field along $\Sigma$ which points to the inner region, we define the shape operator $A$ by $A(X)=-\nabla_XN$. Thus, the classical Newton transformations $T_k:\Gamma(T\Sigma)\to\Gamma(T\Sigma)$ are defined inductively from $A$ by: $$T_0=I,\quad{\rm and}\quad T_k=\sigma_kI-AT_{k-1},\quad 1\leq k\leq n,$$ where $I$ denotes the identity in $\Gamma(T\Sigma)$.
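For readers who want a quick numerical sanity check of the definitions above, the following sketch (our own illustration, not part of the text) evaluates $H_k$ for an arbitrarily chosen set of positive principal curvatures and confirms both the chain $H\geq H_2^{1/2}\geq\cdots\geq H_n^{1/n}$ and the standard trace identity ${\rm tr}\,T_k=(n-k)\sigma_k$ for the Newton transformations.

```python
# Illustrative numerical check (assumed sample data, not from the text):
# for a diagonal shape operator with positive principal curvatures, verify
# the chain H >= H_2^{1/2} >= ... >= H_n^{1/n} and tr(T_k) = (n - k) sigma_k.
from itertools import combinations
from math import comb, prod

import numpy as np

def sigma(k, lams):
    """k-th elementary symmetric function of the principal curvatures."""
    if k == 0:
        return 1.0
    return float(sum(prod(c) for c in combinations(lams, k)))

def H(k, lams):
    """Normalized k-th mean curvature H_k = sigma_k / binom(n, k)."""
    return sigma(k, lams) / comb(len(lams), k)

lams = [0.5, 1.0, 2.0, 3.0]   # sample principal curvatures, all positive
n = len(lams)

# Chain of inequalities: H_1 >= H_2^{1/2} >= ... >= H_n^{1/n}.
chain = [H(k, lams) ** (1.0 / k) for k in range(1, n + 1)]
assert all(chain[i] >= chain[i + 1] - 1e-12 for i in range(n - 1))

# Newton transformations: T_0 = I, T_k = sigma_k I - A T_{k-1},
# with the classical identity tr(T_k) = (n - k) sigma_k.
A = np.diag(lams)
T = np.eye(n)
for k in range(1, n):
    T = sigma(k, lams) * np.eye(n) - A @ T
    assert abs(np.trace(T) - (n - k) * sigma(k, lams)) < 1e-9
```

Here strict inequalities in the chain fail exactly when all $\lambda_i$ coincide, i.e., at umbilical points, in agreement with the equality case of the proposition.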
Associated to each Newton transformation $T_k$ one has the second order linear differential operator $L_k:\mathcal{C}^\infty(\Sigma)\to\mathcal{C}^\infty(\Sigma)$, for $k=0,1,\ldots,n-1$, given by $$L_k(u)=\textrm{tr}(T_k\circ\textrm{Hess }u),$$ where ${\rm Hess}\,u:\Gamma(T\Sigma)\to\Gamma(T\Sigma)$ denotes the symmetric operator defined by $${\rm Hess}\,u(X)=\nabla^\Sigma_X\nabla^\Sigma u,\quad \forall X\in\Gamma(T\Sigma).$$ In particular, $L_0=\Delta$ is the Laplace-Beltrami operator, while $L_1$ is the operator $\square$ introduced by Cheng and Yau \cite{cheng-yau} for the study of hypersurfaces with constant scalar curvature. On the other hand, the divergence of $T_k$ is defined by $${\rm div}_\Sigma T_k=\sum_{i=1}^n\left(\nabla^{\Sigma}_{e_i}T_k\right)(e_i),$$ where $\{e_1,\cdots,e_n\}$ is a local orthonormal frame on $\Sigma$. Thus, we have \begin{equation}\label{Lr} L_k(u)=\textrm{div}_\Sigma(T_k(\nabla^\Sigma u))-\langle\textrm{div}_\Sigma T_k,\nabla^\Sigma u\rangle. \end{equation} From (\ref{Lr}), we conclude that the operator $L_k$ is elliptic if, and only if, $T_k$ is positive definite. Clearly, $L_0=\Delta$ is always elliptic. The ellipticity of $L_1$ is guaranteed by Lemma 3.10 of \cite{elbert} when $H_2>0$. If the ambient space $M$ is equipped with a conformal vector field $Y\in\mathcal{X}(M)$, with conformal function $f$, then it is shown in \cite{alias-lira} that \begin{equation}\label{div_Newton} {\rm div}_{\Sigma}(T_kY^{\top})=\langle{\rm div}_\Sigma T_k,Y\rangle+c_k\left(fH_k+\langle Y,N\rangle H_{k+1}\right), \end{equation} where $$c_k=(k+1)\binom{n}{k+1}.$$ Integrating (\ref{div_Newton}) over $\Sigma$ and making use of the divergence theorem, we obtain \begin{equation}\label{mink} \int_\Sigma\langle{\rm div}_\Sigma T_k,Y\rangle\,d\Sigma+c_k\int_\Sigma\left(fH_k+\langle Y,N\rangle H_{k+1}\right)\,d\Sigma=0.
\end{equation} A useful formula is obtained in \cite{alias-lira} for every tangent field $X\in\Gamma(T\Sigma)$: \begin{equation}\label{div_curv} \langle{\rm div}_\Sigma T_k,X\rangle=\sum_{j=1}^k\sum_{i=1}^n\langle R(N,T_{k-j}e_i)e_i,A^{j-1}X\rangle. \end{equation} In particular, when the ambient space $M$ has constant curvature, we have $\langle R(N,V)W,Z\rangle=0$ for all tangent vector fields $V,W,Z\in\Gamma(T\Sigma)$; hence, from (\ref{div_curv}) and (\ref{div_Newton}) we obtain the classical Minkowski integral identity for spaces with constant curvature: $$\int_\Sigma\left(fH_k+\langle Y,N\rangle H_{k+1}\right)\,d\Sigma=0.$$ On the other hand, when the ambient space is an Einstein manifold, taking (\ref{div_curv}) with $k=1$ we get $$\langle{\rm div}_\Sigma T_1,X\rangle=Ric(N,X)=0.$$ Thus, for a compact hypersurface in an Einstein space the following identity holds: \begin{equation}\label{mink2} \int_\Sigma fH_1\,d\Sigma+\int_\Sigma\langle Y,N\rangle H_{2}\,d\Sigma=0. \end{equation} Since every Riemannian spin manifold admitting an imaginary Killing spinor is an Einstein manifold with Ricci curvature $-n$, taking $Y=\nabla V$, where $V=|\psi|^2$, we have from (\ref{mink2}): \begin{equation}\label{mink3} \int_\Sigma VH_1\,d\Sigma+\int_\Sigma\langle \nabla V,N\rangle H_{2}\,d\Sigma=0. \end{equation} \begin{proof}[Proof of Corollary \ref{main3}] The scalar curvature $S^{\Sigma}$ of a hypersurface can be related to the scalar curvature $S$ of the ambient space by the following formula: $$S^{\Sigma}=S-2Ric(N,N)+n(n-1)H_2.$$ In our case, we have $$S^{\Sigma}=n(n-1)(H_2-1).$$ Thus, the hypothesis of constant scalar curvature is equivalent to $H_2$ being constant. Now, it is easy to verify that, with respect to the normal $-\frac{\partial}{\partial t}$, the slices $\Sigma_s=\{s\}\times P$ are totally umbilical hypersurfaces with constant principal curvatures equal to $1$.
Since $\Sigma$ is compact, there is a point $p\in\Sigma$ at which all principal curvatures of $\Sigma$ are bounded from below by $1$ (it suffices to choose a point $p$ where the projection onto the real line attains its maximum); thus the constant $H_2$ is bounded from below by $1$, and so is $H$ by (\ref{garding}). First, consider the case where $\Sigma$ bounds a compact domain. For this case, the following lemma will be necessary: \begin{Lemma}\label{lemma2} If $H_2$ is constant, we have $$\int_\Sigma \left(\sqrt{H_2}-H\right)\langle\nabla V,N\rangle\,d\Sigma\leq0.$$ Equality holds if and only if $\Sigma$ is totally umbilical. \end{Lemma} \begin{proof} From (\ref{garding}) we have $$\int_\Sigma VH \,d\Sigma\geq\int_\Sigma V\sqrt{H_2}\,d\Sigma=\sqrt{H_2}\int_\Sigma V\,d\Sigma.$$ By (\ref{mink3}), we get $$-\int_\Sigma H_2\langle\nabla V,N\rangle\,d\Sigma\geq\sqrt{H_2}\int_\Sigma V\,d\Sigma.$$ Since the Minkowski formula (\ref{mink}) with $k=0$ gives $\int_\Sigma V\,d\Sigma=-\int_\Sigma H\langle\nabla V,N\rangle\,d\Sigma$, we obtain $$-\int_\Sigma H_2\langle\nabla V,N\rangle\,d\Sigma\geq-\sqrt{H_2}\int_\Sigma \langle\nabla V,N\rangle H\,d\Sigma.$$ Finally, dividing by $\sqrt{H_2}>0$, we obtain $$\int_\Sigma \left(\sqrt{H_2}-H\right)\langle\nabla V,N\rangle\,d\Sigma\leq0,$$ with equality if and only if $H=\sqrt{H_2}$ on $\Sigma$. This is equivalent to $\Sigma$ being totally umbilical. \end{proof} Let $\psi$ be an imaginary Killing spinor field with Killing number $i/2$ and define $\varphi:=(\sqrt{H_2}+i\gamma(N))\psi$ on $\Sigma$. First, by (\ref{ineq1}) and (\ref{lemma}), we get \begin{align*} \int_\Sigma\langle i\gamma(N)\psi,\varphi\rangle\,d\Sigma&=\sqrt{H_2}\int_\Sigma \langle\nabla V,N\rangle\,d\Sigma+\int_\Sigma V\,d\Sigma\\ &\geq-\int_\Sigma\frac{\sqrt{H_2}}{H}V\,d\Sigma+\int_\Sigma V\,d\Sigma\\ &=\int_\Sigma\left(1-\frac{\sqrt{H_2}}{H}\right)V\,d\Sigma\geq0.
\end{align*} On the other hand, Lemma \ref{lemma2} yields \begin{align*} \int_\Sigma\langle i\gamma(N)\psi,\varphi\rangle\,d\Sigma&=\sqrt{H_2}\int_\Sigma \langle\nabla V,N\rangle\,d\Sigma+\int_\Sigma V\,d\Sigma\\ &=\int_{\Sigma}\sqrt{H_2}\langle\nabla V,N\rangle\,d\Sigma-\int_\Sigma\langle\nabla V,N\rangle H\,d\Sigma\\ &=\int_\Sigma\left(\sqrt{H_2}-H\right)\langle\nabla V,N\rangle\,d\Sigma\leq0. \end{align*} Thus, we have $$ \int_\Sigma\langle i\gamma(N)\psi,\varphi\rangle\,d\Sigma=0,$$ and therefore equality must hold in Lemma \ref{lemma2}, so $\Sigma$ is totally umbilical. It remains to consider the case where $\Sigma$ is compact but does not bound a compact domain. Define the height function $h\in\mathcal{C}^\infty(\Sigma)$ by setting $h=\pi_\mathbb{R}\circ f$, where $f:\Sigma\to\mathbb{R}\times_{\textrm{exp}}P$ is the isometric immersion ($h$ is nothing but the projection onto the real line). Since $\Sigma$ is compact, there exist $p,q\in\Sigma$ at which $h$ attains its maximum and its minimum, respectively. If $h(q)=s_1$ and $h(p)=s_2$, then $\Sigma$ is contained in the region $\Omega_{s_1,s_2}$ bounded by the slices $\{s_1\}\times P$ and $\{s_2\}\times P$. As we have mentioned previously, the slices have constant principal curvatures equal to $1$, thus the second mean curvature of $\Sigma$ satisfies $H_2(p)\geq1$ and $H_2(q)\leq1$, hence $H_2\equiv1$. Now, choosing $u=e^h$ in (\ref{Lr}) and recalling that $\mathbb{R}\times_{\textrm{exp}}P$ is Einstein, by (\ref{div_curv}) we can obtain $$L_1(e^h)=n(n-1)e^h(H+\langle N,\partial_t\rangle H_2).$$ But $H_2\equiv1$, $H\geq\sqrt{H_2}=1$, and the Cauchy-Schwarz inequality implies that $H+\langle N,\partial_t\rangle H_2\geq0$. Hence, $L_1(e^h)\geq0$ on the compact manifold $\Sigma$. Thus, since in this case $L_1$ is elliptic, the maximum principle applied to $L_1$ shows that $e^h$ is constant; hence $h$ is constant, which implies that $\Sigma$ is a slice.
Thus, in both cases, $\Sigma$ is a totally umbilical hypersurface with constant $H_2$, which implies that it has constant mean curvature. The result now follows by applying Lemma 4 of \cite{montiel}, where umbilical hypersurfaces with constant mean curvature are classified. \end{proof}
\section{Introduction.} The purpose of the present paper is to explore the norm attainment set and the minimum norm attainment set of a bounded linear operator between Hilbert (Banach) spaces. We would like to remark that such a study was initiated by Carvajal and Neves \cite{C,Ca}, for bounded linear operators between complex Hilbert spaces. However, our study has little intersection with theirs, and, moreover, we also explore the two problems for bounded linear operators between Banach spaces. Without further ado, let us discuss the notations and the terminologies relevant to our study.\\ Let $\mathbb{X},~ \mathbb{Y}$ be normed spaces over the field $\mathbb{K}$, real or complex. We reserve the symbol $ \mathbb{H} $ for Hilbert spaces. Finite-dimensional real Hilbert spaces are called Euclidean spaces. Let $B_{\mathbb{X}} = \{x \in \mathbb{X} \colon \|x\| \leq 1\}$ and $S_{\mathbb{X}} = \{x \in \mathbb{X} \colon \|x\|=1\}$ be the unit ball and the unit sphere of $\mathbb{X}$, respectively. Let $\mathbb{X}^*$ denote the dual space of $\mathbb{X}$. For any $f\in \mathbb{X}^*$, let $ker~f$ denote the null space of $f$. Given any $x\in S_{\mathbb{X}}$, a functional $\psi_x \in S_{\mathbb{X}^*}$ is said to be a support functional at $x$ if $\|\psi_x\|=1= \psi_x (x).$ It is easy to observe that the Hahn-Banach theorem guarantees the existence of at least one support functional at each point of $ S_{\mathbb{X}}. $ We say that $\mathbb{X}$ is smooth if given any $x\in S_{\mathbb{X}}$, there exists a unique support functional at $x$. $ \mathbb{X} $ is said to be strictly convex if for any nonzero $ x,y \in \mathbb{X},~ \| x+y \| = \| x \| + \| y \| $ implies that $ y = kx, $ for some $ k > 0. $ Next, we give the definition of semi-inner-products \cite{G,L} in normed spaces. \begin{definition} Let $\mathbb{X}$ be a normed space.
A function [ , ]: $\mathbb{X}\times \mathbb{X} \rightarrow \mathbb{K}(=\mathbb{C},\mathbb{R})$ is a semi-inner-product if and only if for any $\lambda \in \mathbb{K}$ and for any $x,y,z \in \mathbb{X}$, it satisfies the following properties:\\ (i) $[x+y , z] = [x , z] +[y , z]$,\\ (ii) $[\lambda x ,y] = \lambda [x , y]$,\\ (iii) $[x , x]>0$, whenever $x\neq 0$,\\ (iv) $|[x , y]|^2\leq [x , x][y , y]$,\\ (v) $[x , \lambda y] = \overline \lambda [x , y]$. \end{definition} Let $\mathbb{L}(\mathbb{X},\mathbb{Y})$ denote the normed space of all bounded linear operators from $\mathbb{X}$ to $\mathbb{Y}$, endowed with the usual operator norm. We write $\mathbb{L}(\mathbb{X}, \mathbb{Y})= \mathbb{L}(\mathbb{X})$ if $\mathbb{X}= \mathbb{Y}$. For any two elements $x,y \in {\mathbb{X}}$, $x$ is said to be Birkhoff-James orthogonal to $y$ \cite{B}, written as $x \perp_B y$, if $ \|x+\lambda y\|\geq\|x\|$ for all $ \lambda \in \mathbb{K} (=\mathbb{C},\mathbb{R}). $ For a bounded linear operator $T$ defined on a normed space $\mathbb{X}$, let $M_T$ denote the norm attainment set of $ T $, i.e., $ M_T $ is the collection of all unit vectors in $\mathbb{X}$ at which $T$ attains its norm. To be more precise, \[ M_T= \{ x \in S_ \mathbb{X} \colon\|Tx\| = \|T\| \}. \] Following similar motivations, we define the minimum norm attainment set $ m_T, $ for a bounded linear operator $T$ defined on a normed space $ \mathbb{X}, $ in the following way: \[ m_T= \{ x \in S_ \mathbb{X} \colon\|Tx\| = m(T) \}, \] where $ m(T)= \inf \{ \|Tx\| \colon \|x\|=1 \}. $ For any two elements $ x, y $ in a real normed space $ \mathbb{X}, $ following \cite{Sd} we say that $ y \in x^{+} $ if $ \| x + \lambda y \| \geq \| x \| $ for all $ \lambda \geq 0. $ Similarly, we say that $ y \in x^{-} $ if $ \| x + \lambda y \| \geq \| x \| $ for all $ \lambda \leq 0. $ Let $x^{\perp}=\{y\in \mathbb{X}\colon x\perp_B y\}.$ These notions have been extended to complex normed linear spaces by Paul et al.
\cite{PSMM} in the following way: Let $ x \in \mathbb{X}$ and $ U = \{ \alpha \in \mathbb{C} : | \alpha | = 1, ~\arg \alpha \in [0,\pi) \}.$ For $ \alpha \in U $ define \[ x_\alpha^{+}=\{y \in\mathbb{X}: \|x+\lambda y\|\geq\|x\|~ for ~all~ \lambda = t\alpha, t\geq 0 \}, \] \[ x_\alpha^{-}=\{y \in\mathbb{X}:\|x+\lambda y\|\geq\|x\|~ for~all~ \lambda = t\alpha, t\leq 0 \},\] \[ x_\alpha^{\perp} = \{y \in\mathbb{X}:\|x+\lambda y\|\geq\|x\|~ for~all~ \lambda = t\alpha, t\in\mathbb{R}\}.\] If $ \beta = e^{i\pi} \alpha $ then we define $ x_\beta^{+} = x_\alpha^{-}, ~ x_\beta^{-} = x_\alpha^{+} $, $x_\beta^{\perp} = x_\alpha^{\perp}.$ If $ y \in x_\alpha^{\perp} $ then we write $ x \bot_{\alpha} y.$ The notions of $x^+$, $x^-$ and $x^{\perp}$ \cite{PSMM} are also defined in a complex Banach space in the following way: \[ x^{+} = \bigcap \{ x_\alpha^{+} : \alpha \in U \}, x^{-} = \bigcap \{ x_\alpha^{-} : \alpha\in U \} , x^{\perp} = \bigcap \{ x_\alpha^{\bot} : \alpha\in U \} .\] \smallskip The norm attainment set plays a crucial role in determining the geometry of the space of bounded linear operators \cite{S,Sa,SP}. Recently, Sain \cite{Sb} obtained a complete characterization of the norm attainment set of a bounded linear operator between real normed spaces, by applying the concept of semi-inner-products in normed spaces. In this paper, we extend the result to bounded linear operators between real or complex normed spaces. We also explore the minimum norm attainment set of a bounded linear operator $ T $ between Hilbert spaces and Banach spaces. First, we obtain a complete characterization of $ m_T, $ for $ T \in \mathbb{L}(\mathbb{H}_1,\mathbb{H}_2), $ where $ \mathbb{H}_1,\mathbb{H}_2 $ are Hilbert spaces. We further explore the geometric structure of $ m_T $ for $ T \in \mathbb{L}(\mathbb{H}_1,\mathbb{H}_2), $ and obtain some interesting properties of $ m_T $ which are analogous to the properties of $ M_T.
$ We observe that $ m_T $ must be the unit sphere of some subspace of $ \mathbb{H}_1, $ provided $ m_T $ is non-empty. We next obtain a complete characterization of the minimum norm attainment set of a bounded linear operator between real or complex normed spaces, analogous to the corresponding characterization of the operator norm attainment set. For $ T \in \mathbb{L}(\mathbb{H}_1,\mathbb{H}_2), $ we further study the relative position of $ M_T $ and $ m_T. $ In particular, we prove that if both $ M_T $ and $ m_T $ are non-empty, then either $ M_T = m_T = S_{\mathbb{H}_1} $ or $ M_T $ and $ m_T $ are the unit spheres of two subspaces of $ \mathbb{H}_1, $ which are mutually orthogonal. We would like to remark that in the first case, $ T $ is a scalar multiple of an isometry. On the other hand, as we will see later, the second condition is typical of bounded linear operators, which are not scalar multiples of some isometry, between Hilbert spaces. We prove that for a rank one bounded linear operator $ T $ on a strictly convex reflexive Banach space $ \mathbb{X}, $ it is possible to describe $ M_T $ and $ m_T $ in a particularly convenient way. As an application of this observation, we obtain a complete characterization of reflexive Banach spaces in terms of the norm attainment sets and the minimum norm attainment sets of rank one bounded linear operators on the space. We end this paper with a characterization of Euclidean spaces among all finite-dimensional real Banach spaces, which further illustrates the importance of the study of the operator norm (minimum norm) attainment set. Let us further remark that for the two-dimensional case, we require the additional condition of strict convexity. \\ We would like to remark that unless otherwise stated explicitly, we consider the Banach spaces and the Hilbert spaces to be either real or complex.
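A concrete finite-dimensional illustration of the Hilbert-space picture just described may be helpful; the operator and numbers below are our own assumed example, not taken from the text. For $T={\rm diag}(3,2,1)$ on the Euclidean space $\mathbb{R}^3$, $M_T$ is the unit sphere of ${\rm span}\{e_1\}$ and $m_T$ that of ${\rm span}\{e_3\}$, two mutually orthogonal subspaces; the pointwise identity $\langle Tx,Ty\rangle=m^2(T)\langle x,y\rangle$ for $x\in m_T$ can also be checked directly.

```python
# Assumed example (not from the text): T = diag(3, 2, 1) on Euclidean R^3.
# Then ||T|| = 3 is attained on the unit sphere of span{e1}, m(T) = 1 on
# the unit sphere of span{e3}, and these two subspaces are orthogonal.
import numpy as np

T = np.diag([3.0, 2.0, 1.0])
e1 = np.array([1.0, 0.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])

# e1 lies in M_T and e3 lies in m_T.
assert np.isclose(np.linalg.norm(T @ e1), 3.0)   # = ||T||
assert np.isclose(np.linalg.norm(T @ e3), 1.0)   # = m(T)

# No unit vector does better: sample the unit sphere and compare.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
norms = np.linalg.norm(X @ T.T, axis=1)
assert norms.max() <= 3.0 + 1e-9 and norms.min() >= 1.0 - 1e-9

# M_T and m_T span mutually orthogonal subspaces.
assert np.dot(e1, e3) == 0.0

# Characterization of m_T at x = e3: <Tx, Ty> = m(T)^2 <x, y> for all y.
y = rng.normal(size=3)
assert np.isclose(np.dot(T @ e3, T @ y), 1.0 * np.dot(e3, y))
```

Here $T$ is not a scalar multiple of an isometry, so the second alternative (orthogonal subspaces) occurs, as claimed above.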
\section{Norm attainment set and minimum norm attainment set} In this section, we first obtain a complete characterization of the norm attainment set for a bounded linear operator $T$ between normed linear spaces $ \mathbb{X}$ and $ \mathbb{Y}.$ We would like to remark that our result holds for both real and complex normed spaces and improves on \cite[Th. 2.3]{Sb}. In order to obtain the desired characterization, we need the following lemma, which again improves on \cite[Lemma 2.2]{Sb} in an elegant way. \begin{lemma}\label{lemma:hyperplane1} Let $\mathbb{X}, \mathbb{Y}$ be normed linear spaces and $T\in \mathbb{L}(\mathbb{X}, \mathbb{Y})$. Let $x\in M_T$ and $ y=Tx.$ Then there exist hyperspaces $H_x, H_y$ in $ \mathbb{X} $ and $ \mathbb{Y} $ respectively such that $x\perp_B H_x$ and $y\perp_B H_y$ with $T(H_x)\subseteq H_y$. \end{lemma} \begin{proof} If $T$ is the zero operator then we have nothing to prove. Suppose $T$ is nonzero. Since Birkhoff-James orthogonality is homogeneous and $T$ is nonzero, without any loss of generality we may assume that $\|T\|=1$. Let $x\in M_T.$ For $ y (= Tx), $ there exists a linear functional $g\in S_{\mathbb{Y}^*}$ such that $g(Tx)= \|Tx\|=1$. Let $ker~ g= H_y$. Then by Theorem 2.1 of \cite{Ja}, we have $Tx\perp_B H_y$. Now, $g\circ T\colon \mathbb{X} \rightarrow \mathbb{K}$ is a linear functional with $g\circ T(x)=\|Tx\|=1=\|x\|$ and $\|g\circ T\|\leq \|g\|\|T\|=\|T\|=1$. Let $H_x= ker~(g\circ T)$. Again, by Theorem 2.1 of \cite{Ja}, we have $x\perp_B H_x$. Let $h\in H_x$. Then $g\circ T(h)=0\Rightarrow g(Th)=0\Rightarrow Th\in H_y$. Since this is true for all $h\in H_x,$ we have $T(H_x)\subseteq H_y$. This completes the proof of the lemma. \end{proof} \begin{remark}\label{remark:nonsmooth} The above result may not hold for all hyperspaces, i.e., given $ T\in \mathbb{L}(\mathbb{X}, \mathbb{Y}) $ and $ x \in M_T, $ there may exist a hyperspace $H$ in $ \mathbb{X} $ such that $x \perp_B H$ but $Tx\not\perp_B T(H)$.
Let us illustrate the scenario by furnishing the following example.\\ Consider $T\colon l_\infty(\mathbb{R}^2)\rightarrow l_\infty(\mathbb{R}^2)$ defined by \[ T(1,1)=(0,1),~~T(-1,1)=(-1,0).\] Then $ M_T=\{\pm(1,1),\pm(-1,1)\}$. Here, we have $(1,1)\in M_T$ and $(1,1)\perp_B(0,1)$, but $T(1,1)=(0,1)\not\perp_B T(0,1)=(-\frac{1}{2},\frac{1}{2})$. Therefore, taking $ x = (1,1) $ and $ H $ to be the one-dimensional subspace spanned by $ (0,1), $ we see that $x\perp_B H$ but $Tx\not\perp_B T(H)$.\\ Now, if we replace $\mathbb{R}^2$ by $\mathbb{C}^2$ in this example, then $ M_T=\bigcup_{\theta \in [0, 2\pi)} e^{i\theta}\{\pm(1,1),\pm(-1,1)\}$. Once again, we have $(1,1)\in M_T$ and $(1,1)\perp_B(0,1)$, but $T(1,1)=(0,1)\not\perp_B T(0,1)=(-\frac{1}{2},\frac{1}{2})$. Therefore, with the same choice of $ x $ and $ H, $ it follows that $x\perp_B H$ but $Tx\not\perp_B T(H)$. \end{remark} Let us now apply Lemma \ref{lemma:hyperplane1} towards obtaining a complete characterization of the norm attainment set of a bounded linear operator between normed spaces. \begin{theorem}\label{theorem:characterization M_T} Let $\mathbb{X}, \mathbb{Y}$ be normed linear spaces and $T\in \mathbb{L}(\mathbb{X}, \mathbb{Y})$. Let $x\in S_{\mathbb{X}}$. Then $x\in M_T$ if and only if there exist two s.i.p. $[~,~]_\mathbb{X}$ and $[~,~]_\mathbb{Y}$ on $\mathbb{X}$ and $\mathbb{Y}$ respectively such that for any $z\in \mathbb{X}$,\\ \[[Tz, Tx]_\mathbb{Y}= \|T\|^2[ z, x]_\mathbb{X}.\] \end{theorem} \begin{proof} If $T$ is the zero operator, then the theorem holds trivially. Without loss of generality we may assume that $\|T\| = 1.$ Let us first prove the sufficient part of the theorem. Let $x\in S_{\mathbb{X}}$ be such that for any $z\in \mathbb{X}$, $[Tz, Tx]_\mathbb{Y}= \|T\|^2[ z, x]_\mathbb{X}.$ Taking $z=x$, we obtain $[Tx, Tx]_\mathbb{Y}= \|T\|^2$, i.e., $\|Tx\|=\|T\|$. This proves that $x\in M_T.$ \\ Next we prove the necessary part.
Let $x\in M_T$ and $ y=Tx.$ Then from Lemma \ref{lemma:hyperplane1}, it follows that there exist hyperspaces $H_x, H_y$ in $ \mathbb{X} $ and $ \mathbb{Y} $ respectively such that $x\perp_B H_x$ and $Tx\perp_B H_y$ with $T(H_x)\subseteq H_y$. Since $x\perp_B H_x$, there exists a linear functional $\psi_x\colon \mathbb{X}\rightarrow \mathbb{K}$ such that $\psi_x(x)=\|x\|$ and $ker ~\psi_x=H_x$. Since $T$ is nonzero and $x\in M_T$, we must have that $Tx$ is nonzero in $\mathbb{Y}$. Therefore, there exists a linear functional $\psi_{y}$ on $ \mathbb{Y} $ such that $\psi_{y}(y)=\|y\| = \|Tx\| = 1 $ and $ker~ \psi_{y}=H_y$. It follows from the Hahn-Banach theorem that for each $u \in S_{\mathbb{X}}$ and $ v \in S_{\mathbb{Y}}$ there exist at least one $f_u \in S_{\mathbb{X}^*}$ such that $ f_u(u) = 1$ and at least one $ g_v \in S_{\mathbb{Y}^*} $ such that $g_v(v)=1.$ Let us now define two s.i.p. on $\mathbb{X}$ and $\mathbb{Y}$ in the following way: \smallskip For each $ z, u \in \mathbb{X},$ we define $ [z,u]_{\mathbb{X}} = f_u(z),$ with the additional restriction that if $ u = x$ then we take $ f_u= \psi_x.$ Moreover, for any $ \lambda \in \mathbb{K}, $ we choose $f_{\lambda u} = \bar{\lambda} f_u.$ \smallskip For each $ w, v \in \mathbb{Y},$ we define $ [w,v]_{\mathbb{Y}} = g_v(w),$ with the additional restriction that if $ v = y$ then we take $ g_v= \psi_y.$ Moreover, for any $ \lambda \in \mathbb{K}, $ we choose $g_{\lambda v} = \bar{\lambda} g_v.$ \smallskip Then, following \cite{G}, it is easy to check that $[~,~]_{\mathbb{X}}$ and $ [~,~]_{\mathbb{Y}}$ are indeed s.i.p. on $\mathbb{X}$ and $\mathbb{Y}$ respectively. Let $z\in \mathbb{X}$ be arbitrary. Clearly, $z$ can be written as $z=\alpha x + h$, for some $\alpha\in \mathbb{K}$ and $h\in H_x$.
Moreover, we have \[[z,x]_{\mathbb{X}}=[\alpha x + h, x]_{\mathbb{X}}=\alpha[x,x]_{\mathbb{X}}+[h,x]_{\mathbb{X}}= \alpha \|x\|^2=\alpha.\] Also, \[[Tz,Tx]_{\mathbb{Y}}=[\alpha Tx + Th, Tx]_{\mathbb{Y}}=\alpha[Tx,Tx]_{\mathbb{Y}}+[Th,Tx]_{\mathbb{Y}}= \alpha \|Tx\|^2=\|T\|^2[z,x]_{\mathbb{X}}.\] Since the above relation holds for all $ z \in \mathbb{X},$ this completes the proof of the theorem. \end{proof} Let us now prove an easy but useful necessary condition for the minimum norm attainment of a nonzero bounded operator on a Banach space, at a particular point of the unit sphere. \begin{theorem}\label{theorem:preserve} Let $\mathbb{X}$ and $\mathbb{Y}$ be Banach spaces. Let $T \in \mathbb{L} (\mathbb{X}, \mathbb{Y})$ be non-zero and $x \in m_T$. Then (i) $T(x^{+})\subseteq (Tx)^{+}.$ (ii) $T(x^{-})\subseteq (Tx)^{-}.$ (iii) $T(x^{\perp})\subseteq (Tx)^{\perp}.$ \end{theorem} \begin{proof} $(i)$ Let us assume that both $\mathbb{X}$ and $ \mathbb{Y}$ are complex Banach spaces. Let $y\in x^{+}.$ Then $ y \in x_{\alpha}^+ $ for each $ \alpha \in U, $ so that $ \| x + \lambda y \| \geq \|x \| = 1 $ for all $ \lambda = t\alpha, t\geq 0.$ Since $ x \in m_T, $ it follows that for any $ t \geq 0, $ we must have $ \| Tx \| \leq \| T(\frac{x + t \alpha y}{ \| x + t \alpha y \| }) \| = \frac{ \|Tx + t \alpha Ty \|} { \| x + t \alpha y \|} \leq \| Tx + t \alpha Ty \|.$ This implies that $ Ty \in (Tx)_{\alpha}^+$ and so $T(x_{\alpha}^{+})\subseteq (Tx)_{\alpha}^{+}.$ This holds for each $ \alpha \in U$ and so $T(x^{+})\subseteq (Tx)^{+}.$ \\ For real Banach spaces the result follows by noting that $ x^+ = x_{\alpha}^+ $ with $\alpha=1.$ \\ (ii) and (iii) can be proved similarly. \end{proof} \begin{remark} It is interesting to observe that in Theorem \ref{theorem:preserve} $(iii), $ if we assume that $x\in M_T$ instead of assuming $ x \in m_T, $ then to prove the same result, we additionally require smoothness of $x$ and $Tx$, which follows from Theorem 2.3 of \cite{S}.
Moreover, the example given in Remark \ref{remark:nonsmooth} shows that smoothness is necessary in case $ x \in M_T. $ \end{remark} \begin{cor} Let $\mathbb{X}$ be a finite-dimensional Banach space and $T \in \mathbb{L} (\mathbb{X}).$ Then there exists $x \in S_{\mathbb{X}}$ such that $T$ preserves Birkhoff-James orthogonality at $x$, i.e., $x\perp_B y \Rightarrow Tx\perp_B Ty.$ \end{cor} \begin{proof} Since $\mathbb{X}$ is a finite-dimensional Banach space, there exists a unit vector $x_0$ such that $\|Tx_0\|=m(T)$, where $ m(T)= \inf \{ \|Tx\| \colon \|x\|=1 \}.$ Therefore, $x_0 \in m_T.$ Now, by Theorem \ref{theorem:preserve}, for any $y\in \mathbb{X}$, $x_0\perp_B y\Rightarrow Tx_0\perp_B Ty$, i.e., $T$ preserves Birkhoff-James orthogonality at $x_0.$ \end{proof} An easy application of Theorem \ref{theorem:preserve} yields the following result. \begin{cor}\label{cor:hyperplane2} Let $\mathbb{X}, \mathbb{Y}$ be normed linear spaces and $T\in \mathbb{L}(\mathbb{X}, \mathbb{Y})$. If $x\in m_T$ then for any hyperspace $ H_x $ in $ \mathbb{X}, $ with $x\perp_B H_x, $ there exists a hyperspace $ H_y $ in $ \mathbb{Y} $ such that $y(=Tx)\perp_B H_y$ with $T(H_x)\subseteq H_y$. \end{cor} \begin{proof} Follows from Theorem \ref{theorem:preserve}$(iii).$ \end{proof} As another application of Theorem \ref{theorem:preserve}, it is possible to slightly improve Theorem 2.8 of \cite{S}. Let $|M_T|$ denote the cardinality of $M_T$. It was proved in \cite[Th. 2.8]{S} that if $T\in \mathbb {L}(l_{p}^2)$, where $p\in \mathbb{N}\setminus \{1\}$, is not a scalar multiple of an isometry, then $|M_T|\leq 2(8p-5)$. Combining Theorem \ref{theorem:preserve} of the present paper with this result, we obtain the following theorem. \begin{theorem} Let $\mathbb{X}=l_{p}^2(\mathbb{R}), p\in \mathbb{N}\setminus \{1\}$, and let $T\in \mathbb{L}(\mathbb{X})$ be such that $T$ is not a scalar multiple of an isometry. Then $|M_T|\leq 4(4p-3)$.
\end{theorem} \begin{proof} It follows from the arguments in the proof of \cite[Th. 2.8]{S} that $T$ can preserve Birkhoff-James orthogonality at not more than $2(8p-5)$ points. Since $T$ is not a scalar multiple of an isometry, $M_T\cap m_T=\emptyset$. Furthermore, we have $m_T\neq \emptyset$, as $\mathbb{X}$ is finite-dimensional. Since $m_T$ must contain at least $2$ elements, we must have $|M_T|\leq 2(8p-5)-2=4(4p-3)$. This completes the proof of the theorem. \end{proof} Let us now obtain a complete characterization of the minimum norm attainment set of a bounded linear operator between Hilbert spaces. We would like to remark that the analogous characterization of the norm attainment set of a bounded linear operator between Hilbert spaces has been obtained in \cite{Sa}. \begin{theorem}\label{theorem:characterization} Let $\mathbb{H}_1, \mathbb{H}_2$ be Hilbert spaces and $ T \in \mathbb{L}(\mathbb{H}_1, \mathbb{H}_2).$ Given any $x\in S_{\mathbb{H}_1}$, the following are equivalent:\\ (i) $x\in m_T$.\\ (ii) (a) Given any $ y \in \mathbb{H}_1,~\langle x,y\rangle = 0$ implies that $\langle Tx, Ty\rangle = 0, $\\ (b) $\inf \{\|Ty\|\colon \|y\|=1, \langle x,y\rangle = 0\}\geq\|Tx\|.$\\ (iii) $\langle Tx,Ty\rangle= m^2(T)\langle x,y\rangle,$ for every $y\in \mathbb{H}_1$. \end{theorem} \begin{proof} First, we prove $(i)\Rightarrow(ii)$. Suppose $x\in m_T$. Let $ y \in \mathbb{H}_1 $ with $ \langle x,y\rangle = 0.$ Then from Theorem \ref{theorem:preserve} it follows that $\langle Tx, Ty\rangle = 0$. Thus (a) holds. Again, from the definition of $m(T)$, it follows that $\inf \{\|Ty\|\colon \|y\|=1, \langle x,y\rangle = 0\} \geq m(T)=\|Tx\|.$ Therefore, (b) holds. \smallskip Next, we prove $(ii)\Rightarrow(iii)$.
Let $ y \in \mathbb{H}_1.$ Then $ y = \alpha x + h $ for some scalar $\alpha$ and $ h \in \mathbb{H}_1$ such that $\langle x,h \rangle = 0.$ Then $ \langle Tx, Th \rangle = 0$ and $ \langle Tx, Ty \rangle = \langle Tx, \alpha Tx + Th \rangle = \bar{\alpha} \langle Tx, Tx \rangle = m^2(T) \langle x, \alpha x \rangle = m^2(T) \langle x, y \rangle.$ \smallskip Finally, we prove $(iii)\Rightarrow(i)$. Let $x\in S_{\mathbb{H}_1}$ be such that $\langle Tx, Ty\rangle=m^2(T)\langle x,y\rangle$ for every $y\in \mathbb{H}_1.$ Taking $y=x$, we get $\|Tx\|^2=m^2(T),$ which implies that $x\in m_T.$ This establishes the theorem. \end{proof} \begin{remark} It follows from Theorem 2.2 of \cite{SP} that for a bounded linear operator $ T $ between Hilbert spaces, if $ M_T $ is non-empty then $M_T $ is always the unit sphere of some subspace of the domain space. Applying the parallelogram equality, it is easy to see that the same fact holds for $ m_T. $ In other words, $m_T $ is also the unit sphere of some subspace of the domain space, provided it is non-empty. \end{remark} In Theorem \ref{theorem:characterization}, we proved that for a bounded linear operator $T$ on a Hilbert space $\mathbb{H}_1$, $x\in m_T$ if and only if $\langle Tx,Ty\rangle= m^2(T)\langle x,y\rangle,$ for every $y\in \mathbb{H}_1$. Since $m_T $ is always the unit sphere of some subspace of $ \mathbb{H}_1 $ (when non-empty), using this characterization of $m_T$, we have the following theorem. \begin{theorem}\label{theorem:dimension} Let $\mathbb{H}_1, \mathbb{H}_2$ be Hilbert spaces and $ T \in \mathbb{L}(\mathbb{H}_1, \mathbb{H}_2).$ The dimension of the subspace whose unit sphere is $m_T$ is equal to the geometric multiplicity of the least eigenvalue (which is equal to $m^2(T)$) of $T^*T.$ \end{theorem} \begin{proof} The proof of the theorem can be easily completed by following the same line of arguments as used in \cite[Th. 2.2]{Sa} for $M_T$.
\end{proof} We next obtain a complete characterization of the minimum norm attainment set of a bounded linear operator between any two normed linear spaces. Let us mention that the following result is analogous to Theorem \ref{theorem:characterization M_T}. \begin{theorem}\label{theorem:characterization m_T} Let $\mathbb{X}, \mathbb{Y}$ be normed linear spaces and $T\in \mathbb{L}(\mathbb{X}, \mathbb{Y})$. Let $x\in S_{\mathbb{X}}$. Then $x\in m_T$ if and only if there exist two s.i.p. $[~,~]_\mathbb{X}$ and $[~,~]_\mathbb{Y}$ on $\mathbb{X}$ and $\mathbb{Y}$ respectively such that for any $y\in \mathbb{X}$,\\ \[[Ty, Tx]_\mathbb{Y}= m^2(T)[ y, x]_\mathbb{X}.\] \end{theorem} \begin{proof} Let us first prove the sufficient part. Let $x\in S_{\mathbb{X}}$ be such that for any $y\in \mathbb{X}$, $[Ty, Tx]_\mathbb{Y}= m^2(T)[ y, x]_\mathbb{X}.$ Taking $y=x$, we obtain $[Tx, Tx]_\mathbb{Y}=m^2(T)$, i.e., $\|Tx\|=m(T)$. However, this is clearly equivalent to the fact that $x\in m_T.$ \\ Let us now prove the necessary part. If $T$ is the zero operator, then it is clear that the theorem holds true. Suppose that $T$ is nonzero and let $x\in m_T$. If $m(T)=0$ then $\|Tx\|=0$, so that $Tx=0,$ and the theorem holds true. Suppose $m(T)>0$. Then, applying Corollary \ref{cor:hyperplane2}, we can complete the proof of the theorem by using similar arguments, as in the proof of Theorem \ref{theorem:characterization M_T}. \end{proof} \section{Relation between $M_T$ and $ m_T$ } In this section, we focus on studying the relation between $M_T$ and $m_T,$ both for bounded linear operators between Hilbert spaces as well as Banach spaces. We begin the study with a bounded linear operator $ T $ between Hilbert spaces $ \mathbb{H}_1 $ and $ \mathbb{H}_2. $ We note that in this case, both $M_T$ and $m_T$ are unit spheres of some subspaces of $ \mathbb{H}_1, $ provided they are non-empty.
Indeed, our next theorem implies that these two subspaces are either identical or orthogonal to each other. We note that if $ T \in \mathbb L(\mathbb{H}_1,\mathbb{H}_2) $ is a scalar multiple of an isometry, then $ M_T = m_T = S_{\mathbb{H}_1}. $ \begin{theorem}\label{theorem:subset} Let $\mathbb{H}_1, \mathbb{H}_2$ be Hilbert spaces and let $T\in \mathbb L(\mathbb{H}_1,\mathbb{H}_2)$ be such that $T$ is not a scalar multiple of an isometry. Then $ m_T\subseteq (M_T)^\perp, $ provided both $ M_T $ and $ m_T $ are non-empty. \end{theorem} \begin{proof} Let us observe that since $T$ is not a scalar multiple of an isometry, we must have $\|T\|>m(T).$ Let $y\in m_T$ be arbitrary. Choose an arbitrary $ x \in M_T$ and keep it fixed. Since every Hilbert space is smooth, there exists a unique hyperspace $H_x$ such that $x\perp_B H_x$. It is easy to see that $ y $ can be written as $y=\alpha x + h$, where $h\in H_x$ and $\alpha $ is a scalar. If $\alpha = 0 $ then clearly $ y = h \in (M_T)^\perp$. If possible, suppose that $ \alpha \neq 0.$ Now, $1=\|y\|^2 =\langle\alpha x + h, \alpha x + h\rangle= |\alpha|^2 + \|h\|^2,$ since $\langle x,h\rangle=0.$ Moreover, from Lemma \ref{lemma:hyperplane1}, it follows that $\langle Tx, Th\rangle=0.$ If $h=0$, then $|\alpha|=1$ and $\|Ty\|=\|Tx\|=\|T\|>m(T),$ which contradicts $y\in m_T$; so assume that $h\neq 0$. Now, we have: \begin{eqnarray*} \|Ty\|^2&=&\langle\alpha Tx + Th, \alpha Tx + Th\rangle\\ &=&|\alpha|^2\|Tx\|^2 + \|Th\|^2\\ &=&|\alpha|^2\|T\|^2 + \|h\|^2\|T(\frac{h}{\|h\|})\|^2\\ &>&|\alpha|^2 m^2(T) + \|h\|^2m^2(T)\\ &=&(|\alpha|^2 + \|h\|^2) m^2(T)=m^2(T). \end{eqnarray*} However, this clearly contradicts that $y\in m_T$. Therefore, we must have $\alpha=0$. Thus, for each $ x \in M_T, $ we get $ \langle y, x \rangle = 0 $ and so $ y \in (M_T)^{\bot}.$ As $ y \in m_T$ was chosen arbitrarily, this completes the proof of the theorem.
\end{proof} In particular, for a linear operator $T$ on a finite-dimensional Hilbert space $ \mathbb{H}, $ we have that either $ M_T = m_T = S_{\mathbb{H}} $ or $ M_T\perp_B m_T $ and $ m_T\perp_B M_T. $ However, this is not true in general for a bounded linear operator between Banach spaces. Let us furnish the following two examples to illustrate the scenario. \begin{example} Consider $ (\mathbb {R}^2,\|\cdot\|), $ whose unit sphere is given by the regular hexagon with vertices at $ \pm (1,0), \pm (\frac{1}{2},\frac{\sqrt{3}}{2}),\pm (-\frac{1}{2},\frac{\sqrt{3}}{2}). $ \\ It is quite straightforward to observe that Birkhoff-James orthogonality is symmetric for this Banach space, though it is not an inner product space.\\ Consider the linear operator $ T = $ \( \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \) on this space. \\ It follows immediately that $ \|T\|=1 $ and $ m(T)=0 $. It also follows that $ M_T = \{ \pm (1,0) \} $ and $ m_T = \{\pm (0, \frac{\sqrt{3}}{2}) \}. $ In this case, we indeed have $M_T\perp_B m_T$ and $m_T\perp_B M_T$. \end{example} \begin{example} Consider the same Banach space as in the previous example. Let $ T = $ \( \left( \begin{array}{cc} \frac{3}{4} & -\frac{\sqrt{3}}{4} \\ \frac{\sqrt{3}}{4} & \frac{3}{4} \end{array} \right) \).\\ It follows immediately that $ \|T\|=1 $ and $m(T)=\frac{3}{4}$. It is also easy to check that $ \pm (1,0), \pm (\frac{1}{2},\frac{\sqrt{3}}{2}),\pm (-\frac{1}{2},\frac{\sqrt{3}}{2})\in M_T$ and $\pm (\frac{3}{4},\frac{\sqrt {3}}{4}), \pm (0, \frac{\sqrt{3}}{2}), \pm (-\frac{3}{4},\frac{\sqrt {3}}{4})\in m_T$. Therefore, in this case, $M_T\not\perp_B m_T.$\\ \end{example} In the next theorem, we study the norm (minimum norm) attainment set of a rank one linear operator on a (strictly convex) reflexive Banach space. As we will observe, this will lead us to an interesting characterization of reflexivity in terms of these two sets.
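The claims in the second example can be verified by a direct computation; the gauge formula below is not stated in the text, but follows from elementary plane geometry of the hexagon.

```latex
% Gauge (Minkowski functional) of the regular hexagon with vertices
% \pm(1,0),\ \pm(\tfrac12,\tfrac{\sqrt3}{2}),\ \pm(-\tfrac12,\tfrac{\sqrt3}{2}):
\|(x,y)\| = \max\Big\{ \tfrac{2}{\sqrt3}\,|y|,\ \big|x+\tfrac{y}{\sqrt3}\big|,\ \big|x-\tfrac{y}{\sqrt3}\big| \Big\}.
% The operator of the second example is T = \tfrac{\sqrt3}{2} R_{\pi/6},
% a rotation by \pi/6 scaled by \tfrac{\sqrt3}{2}. Then
T(1,0) = \big(\tfrac34,\tfrac{\sqrt3}{4}\big),
% the midpoint of the edge joining (1,0) and (\tfrac12,\tfrac{\sqrt3}{2}),
% so \|T(1,0)\| = 1 = \|T\| and (1,0) \in M_T; similarly,
T\big(0,\tfrac{\sqrt3}{2}\big) = \big(-\tfrac38,\tfrac{3\sqrt3}{8}\big)
                               = \tfrac34\big(-\tfrac12,\tfrac{\sqrt3}{2}\big),
% a vertex scaled by \tfrac34, so \|T(0,\tfrac{\sqrt3}{2})\| = \tfrac34 = m(T).
```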
\begin{theorem}\label{theorem:rank1} Let $\mathbb{X}$ be a reflexive real Banach space and $\mathbb{Y}$ be a real Banach space. Let $T\in \mathbb{L}(\mathbb{X}, \mathbb{Y})$ be a rank one linear operator. Then $x \in M_T$ for some $ x \in S_{\mathbb{X}} $ and $m_T=H_x\bigcap S_{\mathbb{X}}$, where $H_x$ is a hyperspace of $\mathbb{X}$ such that $x\perp_B H_x$. In addition, if $\mathbb{X}$ is strictly convex, then $M_T=\{\pm x\}.$ \end{theorem} \begin{proof} Without loss of generality, we may and do assume that $ \| T \| = 1. $ Since $T$ is a rank one operator on a reflexive space, $T$ attains its norm at some element $x \in S_{\mathbb{X}}$ (indeed, $T=f(\cdot)\,y_0$ for some $f\in \mathbb{X}^*$ and $y_0\in \mathbb{Y}$, and every bounded linear functional on a reflexive space attains its norm). Let $ y = Tx.$ Then by Lemma \ref{lemma:hyperplane1}, there exist hyperspaces $H_x$ and $H_y$ in $\mathbb{X}$ and $\mathbb{Y}$ respectively such that $ x \bot_B H_x, Tx \bot_B H_y $ and $ T(H_x) \subset H_y.$ We further note that $ Tx \neq 0. $ We claim that $ Tz = 0$ for all $z \in H_x.$ If not, then as $Tx \bot_B Tz,$ the set $\{Tx,Tz\}$ is linearly independent in $\mathbb{Y}.$ However, this implies that the rank of $T$ is more than one, a contradiction to our hypothesis. Thus, $ Tz =0 $ for all $z \in H_x$, so that $m(T)=0$ and $ H_x \cap S_{\mathbb{X}} \subset m_T.$ Next, let $z \in m_T.$ Then $\|Tz\|=m(T)=0$ and $ z = \alpha x + h $ for some scalar $\alpha$ and $ h \in H_x$. Clearly, $ 0 = Tz = \alpha Tx + Th = \alpha Tx,$ so that $\alpha = 0 $ and hence $ z = h \in H_x \cap S_{\mathbb{X}}.$ Thus, $ m_T \subset H_x \cap S_{\mathbb{X}}.$ This proves that $m_T=H_x\bigcap S_{\mathbb{X}}, $ and completes the proof of the first part of the theorem. \smallskip Next, assume that $ \mathbb{X}$ is strictly convex. We show that $M_T=\{\pm x\}$. Clearly, any $w \in S_{\mathbb{X}}$ can be written as $w=\alpha x + h, $ for some scalar $\alpha $ and some $ h\in H_x.$ Since $\mathbb{X}$ is strictly convex and $x\perp_B h$, we have $1=\|w\|=\|\alpha x + h\| \geq |\alpha|$, and $|\alpha|=1$ if and only if $ h=0$.
Now, $\|Tw\|=\|T(\alpha x + h)\|=|\alpha|\|Tx\|=|\alpha|\leq 1$ and equality holds if and only if $h=0$. Therefore, we must have $M_T=\{\pm x\}$. This completes the proof of the theorem. \end{proof} Now, the promised characterization of reflexive Banach spaces: \begin{theorem}\label{theorem:reflexive} Let $\mathbb{X}$ be a real Banach space. Then $\mathbb{X}$ is reflexive if and only if for any closed hyperspace $H$ of $\mathbb{X}$, there exists a rank one linear operator $T\in \mathbb{L}(\mathbb{X}) $ such that \\ (i) $ x \in M_T $, for some $x \in S_{\mathbb{X}}.$\\ (ii) $m_T= H \bigcap S_{\mathbb{X}}.$ \end{theorem} \begin{proof} We first prove the necessary part. Since $\mathbb{X}$ is reflexive, it follows from \cite{Jam} that for any closed hyperspace $H$ of $\mathbb{X},$ there exists a unit vector $x\in \mathbb{X}$ such that $x\perp_B H.$ Clearly, any element $z\in \mathbb{X}$ can be written as $z=\alpha x + h$, where $\alpha \in \mathbb{R},~ h\in H.$ Define $T : \mathbb{X} \longrightarrow \mathbb{X} $ by $T (\alpha x + h) = \alpha y ,$ where $y \in S_{\mathbb{X}} $ is fixed. Clearly, $T$ is well-defined and $ T $ is a rank one linear operator. Since $x\perp_B H$, it is easy to check that $T$ is bounded and $x \in M_T.$ So from Theorem \ref{theorem:rank1} it follows that $m_T= H\bigcap S_{\mathbb{X}}.$ This completes the proof of the necessary part. We next prove the sufficient part. Let $H$ be a closed hyperspace of $\mathbb{X}.$ According to our hypothesis, there exists a rank one linear operator $T\in \mathbb{L}(\mathbb{X})$ such that \\ $(i)~ x \in M_T, $ for some $x \in S_{\mathbb{X}}.$\\ $(ii)~ m_T= H \bigcap S_{\mathbb{X}}.$\\ Since the rank of $ T $ is one, it is immediate that $ m(T) = 0. $ Since $m_T= H \bigcap S_{\mathbb{X}}, $ it follows that $ Th = 0 $ for all $ h \in H. $ In particular, we have that $ x \in M_T $ and $ Tx \perp_{B} Th $ for all $ h \in H. $ Applying Proposition 2.1 of \cite{S}, it now follows that $ x \perp_{B} h $ for all $ h \in H, $ i.e., $ x \perp_{B} H. $ Thus, for each closed hyperspace $H$ of $ \mathbb{X}, $ there exists an element $ x \in S_{\mathbb{X}}$ such that $ x \bot_B H. $ Therefore, it follows from \cite{Jam} that $ \mathbb{X} $ is reflexive. This completes the proof of the sufficient part and establishes the theorem in its entirety. \end{proof} In addition, if we assume that the space $\mathbb{X}$ is strictly convex, then we have the following theorem, whose proof follows directly from the previous theorem and the last part of Theorem \ref{theorem:rank1}. \begin{theorem}\label{theorem:reflexive2} Let $\mathbb{X}$ be a strictly convex real Banach space and $ \mathbb{Y}$ be a real Banach space. Then $\mathbb{X}$ is reflexive if and only if for any closed hyperspace $H$ of $\mathbb{X}$, there exists a rank one linear operator $T\in \mathbb{L}(\mathbb{X}, \mathbb{Y})$ such that \\ (i) $ M_T = \{ \pm x \}$, for some $x \in S_{\mathbb{X}}.$\\ (ii) $m_T= H \bigcap S_{\mathbb{X}}.$ \end{theorem} Our next objective is to characterize Euclidean spaces among finite-dimensional real Banach spaces, in terms of the norm attainment set and the minimum norm attainment set of bounded linear operators on them. We first prove the following result for two-dimensional strictly convex Banach spaces. \begin{theorem}\label{theorem:ips dim 2} A two-dimensional strictly convex real Banach space $\mathbb{X}$ is an inner product space if and only if for any $T \in \mathbb{L}(\mathbb{X})$, either (a) or (b) holds:\\ (a) $M_T = m_T = S_{\mathbb{X}}$. \\ (b) $M_T \perp_B m_T$ and $m_T \perp_B M_T.$ \end{theorem} \begin{proof} Let us first prove the necessary part. Let $ \mathbb{X} $ be the two-dimensional Euclidean space. Let $T \in \mathbb{L}(\mathbb{X}).$ If $T$ is a scalar multiple of an isometry then $M_T = m_T = S_{\mathbb{X}}$.
On the other hand, if $T$ is not a scalar multiple of an isometry then it follows from Theorem \ref{theorem:subset} that $M_T \perp_B m_T$ and $m_T \perp_B M_T.$ This completes the proof of the necessary part of the theorem. Let us now prove the sufficient part. We first claim that for any $T \in \mathbb{L}(\mathbb{X}), $ $M_T= \pm D$, where $D$ is a connected subset of $S_{\mathbb{X}}.$ If (a) holds, i.e., $M_T = S_{\mathbb{X}},$ then our claim is trivially true. Next, suppose that $(b)$ holds. We show that $T$ attains its norm at only one pair of points. If possible, suppose that $x$, $y \in M_T$, where $x\neq \pm y. $ Let $z\in m_T$. Clearly, $ z \neq \pm x,~ \pm y, $ as $ T $ is not a scalar multiple of an isometry. Therefore, $z$ can be written as $z=\alpha x +\beta y$, where $\alpha ,\beta $ are non-zero scalars. We have $x \perp_B z$ and $y \perp_B z$. Since $\mathbb{X} $ is a two-dimensional strictly convex Banach space, it follows from \cite{J} that Birkhoff-James orthogonality is left additive in $ \mathbb{X}. $ Therefore, applying the homogeneity property of Birkhoff-James orthogonality, it follows that $\alpha x +\beta y\perp_B z$, i.e., $z\perp_B z$, which is possible only if $ z=0. $ However, this clearly contradicts that $ z \in m_T \subset S_{\mathbb{X}}. $ This completes the proof of the fact that if $ (b) $ holds then $T$ attains its norm only at one pair of points. Therefore, in any case, $M_T= \pm D$, where $D$ is a connected subset of $S_{\mathbb{X}}.$ It now follows from \cite[Th. 2.2]{SP} that $\mathbb{X}$ is an inner product space. This completes the proof of the sufficient part and thereby establishes the theorem. \\ \end{proof} \begin{remark} Let $\mathbb{X}$ be a two-dimensional real Banach space which is not strictly convex. Then the unit sphere $S_{\mathbb{X}}$ contains a line segment $L$ (say). Let $x\in L.$ It is easy to see that there exists $y\in S_{\mathbb{X}}$ such that every point of $ L $ is Birkhoff-James orthogonal to $ y. $ Let us define a linear operator $T$ on $ \mathbb{X} $ in the following way: $ Tx=x, Ty=0.$ It follows trivially that $M_T = \pm L $ and $ m_T = \{ \pm y \}. $ It is also immediate that $M_T\perp_B m_T$ but $m_T$ may not always be Birkhoff-James orthogonal to $M_T$. In particular, if we further assume $ \mathbb{X} $ to be smooth, then it follows that $ \mathbb{X} $ is not an inner product space. \smallskip For example, consider the linear operator $T$ defined on $ \ell_\infty({\mathbb{R}^2}) $ by $ T(1,0) = (1,0) $ and $ T(0,1) = (0,0).$ Then it is easy to check that $ M_T = \{ (a,b) ~:~ | a | = 1, | b | \leq 1 \} $ and $m_T = \{\pm (0,1) \}.$ Clearly, $ M_T \bot_B m_T $ but $ m_T \not\perp_B M_T.$ \end{remark} If the dimension of $ \mathbb{X} $ is strictly greater than $ 2, $ then we have the following characterization of Euclidean spaces. \begin{theorem}\label{theorem:ips} Let $ \mathbb{X} $ be a finite-dimensional real Banach space having dimension strictly greater than $ 2. $ Then $\mathbb{X}$ is a Euclidean space if and only if for any $T \in \mathbb{L}(\mathbb{X})$, either (a) or (b) holds:\\ (a) $M_T = m_T = S_{\mathbb{X}}$. \\ (b) $M_T \perp_B m_T$ and $m_T \perp_B M_T.$ \end{theorem} \begin{proof} We note that the proof of the necessary part of the theorem follows similarly as that of the necessary part of Theorem \ref{theorem:ips dim 2}. Let us prove the sufficient part. We claim that Birkhoff-James orthogonality is symmetric in $ \mathbb{X}. $ Let $x, y \in S_{\mathbb{X}}$ be such that $x\perp_B y$. Then there exists a hyperplane $H$ containing $y$ such that $x\perp_B H$. Clearly, any element $z\in \mathbb{X}$ can be written as $z=\alpha x + h$, where $\alpha \in \mathbb{R},~ h\in H.$ Define a linear operator $T$ on $\mathbb{X}$ as follows: \[T(\alpha x + h) = \alpha x,~ \text{for each}~ \alpha \in \mathbb{R}~ \text{and for each}~ h\in H.\] Clearly, $T$ is well-defined, linear and bounded.
Since $x\perp_B H$, it is easy to check that $x \in M_T$ and $ y\in m_T.$ Clearly, $M_T\neq S_{\mathbb{X}}$, so $(b)$ must hold. Hence we have $y\perp_B x$. Since $x, y \in S_{\mathbb{X}}$ with $x\perp_B y$ were chosen arbitrarily, it follows from the homogeneity property of Birkhoff-James orthogonality that Birkhoff-James orthogonality is symmetric in $\mathbb{X}.$ Since the dimension of $ \mathbb{X} $ is strictly greater than $ 2, $ it follows from \cite{J} that $\mathbb{X}$ is an inner product space. This establishes the theorem. \end{proof}
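For completeness, here is a direct verification of the Birkhoff-James orthogonality claims in the $\ell_\infty(\mathbb{R}^2)$ example from the remark preceding Theorem \ref{theorem:ips}; the computation below is ours and only spells out what the remark asserts.

```latex
% M_T \perp_B m_T: for any (a,b) \in M_T (so |a| = 1, |b| \le 1) and any t \in \mathbb{R},
\|(a,b) + t(0,1)\|_\infty = \max\{|a|,\,|b+t|\} \ge 1 = \|(a,b)\|_\infty.
% m_T \not\perp_B M_T: taking (1,1) \in M_T and t = -\tfrac12,
\big\|(0,1) - \tfrac12\,(1,1)\big\|_\infty
    = \max\big\{\tfrac12,\,\tfrac12\big\} = \tfrac12 < 1 = \|(0,1)\|_\infty,
% so (0,1) \in m_T is not Birkhoff-James orthogonal to (1,1) \in M_T.
```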
\section{Introduction and main results}\label{intro} \subsection{The Bessel operator} Let $N\in \mathbb{N}$ and $\a = (\a_1, ... , \a_N)$, where $\a_j > -1$ for $j=1,...,N$. Consider the space $X= (0,\infty)^N$ equipped with the Euclidean metric and the measure $d\nu(x) = x^\a dx = x_1^{\a_1}...x_N^{\a_N}\, dx_1 ... dx_N$. It is well-known that $X$ satisfies the doubling property, i.e. \eqx{ \nu(B(x,2r)) \leq C \nu(B(x,r)), \qquad x\in X, \, r>0, } where $B(x,r) = \set{y\in X \ : \ |x-y|<r}$. In other words, there exist $d, C_d>0$ such that \eq{\tag{D} \label{doublingX} \nu(B(x,\gamma r)) \leq C_d (1+\gamma)^d \nu(B(x,r)), \qquad x\in X, \, r, \gamma>0. } We choose the constant $d$ (``homogeneous dimension'') as small as possible. In this case \eq{\label{d} d = \sum_{j=1}^N \max(1,\a_j+1). } The multidimensional Bessel operator is given by $B = B_1 + ... + B_N$, where \eqx{ B_j f(x) = - \partial_j^2 f(x) - \frac{\a_j}{x_j} \partial_j f(x), \qquad x\in X. } The operator $B$, initially defined on, say, $C_c^2((0,\infty)^N)$, extends to a self-adjoint operator on $L^2(X)$. Slightly abusing notation, we shall denote this extension by the same symbol $B$. For a precise definition of $B$ we refer the reader to e.g. \cite[Sec. 2]{BCN} (see also \cite{Muckenhoupt_Stein}). Also, $B$ is the infinitesimal generator of the Bessel semigroup $\Tt_t f(x) = \int_X T_t(x,y) f(y)\, d\nu(y)$, where $T_t(x,y) = T_t^{[1]}(x_1,y_1) \cdot ... \cdot T_t^{[N]}(x_N,y_N)$ and \eq{\label{Kt} T_t^{[j]}(x_j,y_j) = \frac{1}{2t} (x_jy_j)^{-(\a_j-1)/2} I_{(\a_j-1)/2}\left(\frac{x_jy_j}{2t}\right) \exp\left(-\frac{x_j^2+y_j^2}{4t}\right), \quad x_j,y_j, t>0. } Here $I_\tau(x) = \sum_{m=0}^\infty \frac{1}{m!\Gamma(m+\tau+1)} \left(\frac{x}{2}\right)^{2m+\tau}$ is the modified Bessel function of the first kind. The kernel $T_t(x,y)$ satisfies the upper and lower gaussian bounds, i.e.
there exist constants $c_1,c_2, C_1, C_2 > 0$, such that \eq{\tag{G}\label{gauss_bess} C_1 \nu(B(x,\sqrt{t}))^{-1} \exp\left( - \frac{|x-y|^2}{c_1 t}\right) \leq T_t(x,y) \leq C_2 \nu(B(x,\sqrt{t}))^{-1} \exp\left( - \frac{|x-y|^2}{c_2 t}\right). } This fact is well known and follows from the asymptotics for $\nu(B(x,\sqrt{t}))$ and $I_\tau$. For details see e.g. \cite[Lem. 4.2]{DPW_JFAA}. Since $B$ is self-adjoint and nonnegative, for a Borel function $m: (0,\infty) \to \mathbb{C}$ the spectral theorem defines the operator \eqx{ m(B) = \int_0^\infty m(\la) \, dE_B(\la), } where $E_B$ is the spectral resolution of $B$. \subsection{Multiplier theorems for $B$} Multiplier theorems for $B$ and other operators are among the main topics in harmonic analysis. Many authors have investigated assumptions on $m$ that guarantee boundedness of $m(B)$ on various function spaces, such as $L^p(X)$, $H^p(X)$, $L^{p,q}(X)$ and others. For example, in \cite{Gosselin_Stempak} the authors proved weak type (1,1) estimates for $m(B)$ assuming $N=1$, $\a >0$ and \eqx{ \eee{\int_{R/2}^R |m^{(s)}(\la)|^2 d\nu(\la)}^{1/2} \leq C R^{(\a+1)/2-s}, \quad R>0, } where $s=0,...,K$ and $K$ is the least even integer greater than $(\a+1)/2 = d/2$ (see also \cite{Kapelko}). In \cite{DP_Monats}, assuming still $N=1$ and $\a>0$, it is proved that if \eq{\tag{S} \label{Sob} \sup_{t>0} \norm{\eta(\cdot)m(t \cdot )}_{W^{2,\beta}(\mathbb{R})} < \infty } with some $\beta > d/2$, then $m(B)$ is bounded on the Hardy space $H^1(B)$ related to $B$.
Here and thereafter $W^{2,\beta}(\mathbb{R})$ is the $L^2$-Sobolev space on $\mathbb{R}$ and $\eta$ is a fixed nonnegative smooth cut-off function such that $\mathrm{supp} \, \eta \subseteq (2^{-1},2)$. In the multidimensional case $N\geq 1$, in \cite{BCC} the authors prove weak type $(1,1)$ estimates for $m(B)$, where $m$ is of Laplace transform type, i.e. there exists $\phi \in L^\infty (0,\infty)$ such that \eqx{ m(x) = |x|^2 \int_0^\infty e^{-t|x|^2} \phi(t) \, dt, \quad x\in (0,\infty)^N. } Notice that if $m$ is of Laplace transform type, then $m$ is radial and (as a function on $(0,\infty)$) satisfies \eqref{Sob} with any $\beta >0$. Another multidimensional result can be found in \cite{DPW_JFAA}, where it is proved that $m(B)$ is of weak type $(1,1)$ and bounded on the Hardy space $H^1(X)$ provided that $\a_j>1$ for $j=1,...,N$ and $m$ satisfies \eqref{Sob} with $\beta >d/2$. See also e.g. \cite{Garrigos_Seeger, Gasper_Trebels, Wrobel_Hankel} for other multiplier results for the Bessel operator. Our first main goal is to obtain a multiplier theorem for $B$ in the most general case $N\geq 1$ and $\a_j >-1$, $j=1,...,N$. Let us notice that many of the earlier results assumed $\a_j>0$; the case $\a_j<0$ is more difficult and less understood. One reason for this is the singularity at zero of the measure $x^{\a_j} dx_j$ when $\a_j<0$. Also, the so-called ``generalized translation'' operators and the convolution structure for $B$ (see, e.g., \cite[Sec. 2]{BDT_d'Analyse}) do not help when $\a_j<0$. This is strictly related to the fact that the generalized eigenfunctions of $B$ are no longer bounded if $\a_j<0$ for some $j$ and, therefore, the generalized translation is not even bounded on $L^2$.
Let us also notice that we are interested in multiplier results that are sharp in the sense that we assume \eqref{Sob} with $\beta$ as small as possible. Here the expected threshold is $\beta >d/2$ (we shall discuss this in Subsection \ref{impowers} below). To state the multiplier result let us recall that the weak $L^1$ space is given by the semi-norm \eqx{ \norm{f}_{L^{1,\infty}(X)} = \sup_{\la>0} \la \nu \set{ x\in X \ : \ \abs{f(x)}>\la }, } and the Hardy space $H^1(B)$ related to $B$ can be defined by the norm \eqx{ \norm{f}_{H^1(B)} = \norm{\sup_{t>0} \abs{\Tt_t f}}_{L^1(X)}. } In the case $N=1$ the space $H^1(B)$ was studied in \cite{BDT_d'Analyse}, where $H^1(B)$ was characterized by means of atomic decompositions and the Riesz transforms. In the general case $N\geq 1$ and $\a_j>-1$, $j=1,...,N$, the atomic characterization of $H^1(B)$ can be found in \cite{Dziubanski2017} (see also \cite{DPW_JFAA}, \cite{JD_JGA}). We shall recall this characterization in Subsection \ref{sec-hardy-spaces} below. \sthm{main}{ Let $N\geq 1$ and $\a_j >-1$ for $j=1,...,N$. Assume that $m:(0,\infty) \to \mathbb{C}$ satisfies \eqref{Sob} with $\beta >d/2$, see \eqref{d}. Then: \en{ \item $m(B)$ is bounded from $L^{1}(X)$ to $L^{1,\infty}(X)$, \item $m(B)$ is bounded from $H^{1}(B)$ to $H^{1}(B)$, \item $m(B)$ is bounded from $L^{p}(X)$ to $L^{p}(X)$, $1<p<\infty$. } } Part {\bf 1.} of Theorem \ref{main} will be proved by using results of \cite{Sikora_JFA2}. More precisely, we shall check the assumptions of \cite[Th. 3.1]{Sikora_JFA2}. The proof of {\bf 2.} will be given in Section \ref{sec2}. In fact, in the proof we shall only use general properties of $B$, such as e.g. \eqref{doublingX}, \eqref{gauss_bess}, and \eqref{plan} below. Thus, the multiplier result in Section \ref{sec2} will be formulated in a more general context.
This section can be read independently of the rest of the paper and we shall use different notation there. As usual, {\bf 3.} is a consequence of either {\bf 1.} or {\bf 2.} by duality and interpolation, see e.g. \cite{Bernicot}. \subsection{Imaginary powers of $B$} \label{impowers} Another goal of this paper is to study the imaginary powers $B^{ib}$, $b\in\mathbb{R}$, of the Bessel operator and establish lower bounds for these operators on some function spaces. We shall concentrate our attention on the dependence of the lower estimates on $b$ for large $b$. This is related to the sharpness of multiplier theorems and may be of independent interest. To state these estimates let us restrict ourselves to the one-dimensional case $N=1$ ($X=(0,\infty)$, $d\nu(x) = x^\a\, dx$, $\a>-1$). Motivated by the identity \eq{\label{ewew} {B}^{ib} = \Gamma(-ib)^{-1} \int_0^\infty t^{-ib} e^{-t{B}} \frac{dt}{t} } let us define for $x\neq y$ the integral kernel \eq{\label{Kb} K_b(x,y) = \Gamma(-ib)^{-1} \int_0^\infty t^{-ib} {T}_t(x,y) \frac{dt}{t}. } Notice that the integral in \eqref{ewew} is not absolutely convergent, thus we have to explain how the kernel $K_b(x,y)$ is related to the operators $B^{ib}$. Indeed, in Subsection \ref{sec3} we shall prove that for $f\in L^\infty(X)$ with compact support we have \eq{\label{def-im-ker} B^{ib} f (x) = \int_X K_b(x,y) f(y) \, d\nu(y), \qquad x\notin \mathrm{supp}\, f. } One of our goals is to provide lower estimates for $B^{ib}$. \sthm{lower1}{ Assume that $\a>-1$. Then there exist a constant $C>0$ and a function $f$ such that $\norm{f}_{L^1(X)}=1$ and for $|b|$ large enough we have \eqx{ \norm{B^{ib}f}_{L^{1,\infty}(X)} \geq C|b|^{d/2} . } } \sthm{lower2}{ Assume that $\a>0$ and $p\in(1,2)$. Then there exist $C_p>0$ and $f$ such that $\norm{f}_{L^p(X)}=1$ and for $|b|$ large enough we have \eqx{ \norm{B^{ib} f}_{ L^p(X)} \geq C_p |b|^{\frac{d}{2}\frac{(2-p)}{p}}. } } The proofs of Theorems \ref{lower1} and \ref{lower2} are presented in Subsection \ref{sec3}. To prove Theorem \ref{lower1} we shall carefully analyze the kernels $K_b(x,y)$. More precisely, we prove the following lemma. \lem{kernel}{ Assume that $\a>-1$ and $b\in \mathbb{R}$. Then \sp{ \label{kernel_eq} K_b(x,y) = &c_1(b)\eee{{x^2+y^2}}^{-ib-(\a+1)/2}\\ & + c_2(b) (xy)^{-\a/2} |x-y|^{-2ib-1}\chi_{\{y/2<x<2y\}}(x,y)\\ &+ c_3(b) R_b(x,y), } where \eqx{ c_1(b) = \frac{2^{2ib+1}}{\Gamma\eee{(\a+1)/{4}}} \frac{\Gamma\eee{ib + (\a+1)/2}}{\Gamma(-ib)}, \quad c_2(b) = \frac{ 2^{2ib}}{\sqrt{\pi}} \frac{\Gamma\eee{ib+1/2}}{\Gamma\eee{-ib}}, \quad c_3(b) = \Gamma(-ib)^{-1}. } Moreover, there exists $C>0$ that does not depend on $b$, such that \eqx{ |R_b(x,y)| \leq C xy(x+y)^{-\a-3}. } } Notice that the kernel $R_b(x,y)$ is related to an operator that is bounded on every $L^p(X)$, $1\leq p \leq \infty$, uniformly in $b\in \mathbb{R}$. Thus we may think of $R_b(x,y)$ as a kind of ``error term''. However, for $|b| > 1$ the sizes of the constants are as follows: \eq{\label{gamma_constants} |c_1(b)| \simeq |b|^{(\a+1)/2}, \quad |c_2(b)| \simeq |b|^{1/2}, \quad |c_3(b)| \simeq |b|^{1/2} \exp\eee{\frac{\pi |b|}{2}}, } cf. Lemma \ref{gamma}. Thus, $c_3(b)$ grows exponentially as $|b|\to \infty$, while the constants $c_1(b)$ and $c_2(b)$ are much smaller. It appears that the growth of the constant $c_3(b)$ will lead to a~problem in deriving lower estimates for $B^{ib}$ (since our goal is to find the exact dependence on $b$). However, we can overcome this difficulty when analyzing the weak $(1,1)$ norm as in Theorem \ref{lower1}.
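The asymptotics \eqref{gamma_constants} are proved in Lemma \ref{gamma}; as a quick sanity check (a sketch, with constants suppressed), they follow from the classical Stirling-type behaviour $|\Gamma(a+ib)| \simeq |b|^{a-1/2} e^{-\pi|b|/2}$ for fixed $a\in\mathbb{R}$ and $|b|$ large:

```latex
|c_1(b)| \simeq \frac{|\Gamma((\a+1)/2+ib)|}{|\Gamma(-ib)|}
         \simeq \frac{|b|^{\a/2}\, e^{-\pi|b|/2}}{|b|^{-1/2}\, e^{-\pi|b|/2}}
         = |b|^{(\a+1)/2},
\qquad
|c_2(b)| \simeq \frac{|\Gamma(1/2+ib)|}{|\Gamma(-ib)|} \simeq |b|^{1/2},
\qquad
|c_3(b)| = \frac{1}{|\Gamma(-ib)|} \simeq |b|^{1/2}\, e^{\pi|b|/2}.
```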
The same trick does not seem to work in other function spaces (such as $H^1(B)$, $L^p(X)$ and $L^{p,\infty}(X)$ with $p>1$), thus the proof of Theorem \ref{lower2} is different and uses the integral representation of the Bessel function $I_\tau$ instead of Lemma \ref{kernel}. As a corollary of Theorems \ref{lower1} and \ref{lower2} we obtain that Theorem \ref{main} is sharp (at least for $N=1$) in the sense that $d/2$ cannot be replaced by a smaller number. The argument is standard, but we shall present it now for the convenience of the reader. One can check that for $m_b(\la) = \la^{ib}$ we have \eqx{ M_b :=\sup_{t>0} \norm{\eta(\cdot) m_b(t\cdot)}_{W^{2,\beta}(\mathbb{R})} \leq C|b|^{\beta}, \qquad |b|\geq 1. } Also, Theorem \ref{main} actually gives that $\norm{m_b(B)f}_{L^{1,\infty}(X)} \leq C M_b \norm{f}_{L^1(X)}, $ where $C$ does not depend on $b$. Combining these estimates with Theorem \ref{lower1}, for $|b|$ large enough we have \eqx{ {|b|^{d/2}} \leq C \norm{m_b(B)}_{L^1(X) \to L^{1,\infty}(X)} \leq C {|b|^{\beta}}. } Therefore $\beta \geq d/2$. Actually, one expects that $\beta = d/2$ is not sufficient, but this question is beyond the scope of this paper. Similarly, the constant $d/2$ cannot be improved for the Hardy spaces. If $\a<0$ then $d/2=1/2$ and \eqref{Sob} with $\beta<1/2$ would not even guarantee that $m$ is bounded. On the other hand, for $\a>0$, if we could prove a multiplier theorem on $H^1(B)$ with an exponent lower than $d/2$, then by interpolation we would have better upper bounds for $m_b(B)$ on $L^p(X)$ for $1<p<2$, which contradicts Theorem \ref{lower2} by an argument similar to the one above.
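The bound on $M_b$ stated above can be obtained as follows (a sketch; $C_\beta$ depends only on $\eta$ and $\beta$):

```latex
% Since m_b(t\la) = t^{ib}\la^{ib} and |t^{ib}| = 1, the supremum over t>0 is trivial:
\sup_{t>0} \norm{\eta(\cdot)\, m_b(t\,\cdot)}_{W^{2,\beta}(\mathbb{R})}
    = \norm{\eta(\cdot)\,\la^{ib}}_{W^{2,\beta}(\mathbb{R})}.
% On \mathrm{supp}\,\eta \subseteq (2^{-1},2) each derivative produces a factor O(|b|):
\frac{d^k}{d\la^k}\,\la^{ib} = ib(ib-1)\cdots(ib-k+1)\,\la^{ib-k}
    = O\big((1+|b|)^k\big),
% so, by the Leibniz rule, \norm{\eta(\cdot)\la^{ib}}_{W^{2,k}(\mathbb{R})} \le C_k (1+|b|)^k
% for every integer k \ge 0, and interpolation between integer orders gives
\norm{\eta(\cdot)\,\la^{ib}}_{W^{2,\beta}(\mathbb{R})} \leq C_\beta\, (1+|b|)^{\beta}.
```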
\subsection{Organization of the paper and notation.} In Section \ref{sec2} we state and prove a ``sharp'' multiplier theorem on Hardy spaces for self-adjoint operators on spaces of homogeneous type satisfying certain assumptions (Theorem \ref{multi2}). This is a slight generalization of Theorem \ref{main}~{\bf2.} in the spirit of \cite[Th. 3.1]{Sikora_JFA2}. In Section \ref{sec2} we shall use different notation, so that it can be read independently of the rest of the paper. In Section \ref{sec_Bess} we prove the results stated above. More precisely, first we check that $B$ satisfies assumption $(P_2)$ (see Section \ref{sec2} below) in the full generality $N\geq 1$, $\a_j >-1$ for $j=1,...,N$. Thus Theorem \ref{multi2} can be applied to $B$. Then we prove Lemma \ref{kernel} and Theorems \ref{lower1} and \ref{lower2}. We shall use standard notation, i.e. $C$ and $c$ denote positive constants that may change from line to line. \section{Sharp multiplier theorem on Hardy spaces}\label{sec2} \subsection{Background and general assumptions}\label{ss11} In this section we consider a space $Y$ with a~metric $\rho$ and a nonnegative measure $\mu$. We shall assume that the triple $(Y,\rho, \mu)$ is a space of homogeneous type, i.e. there exists $C>0$ such that $ \mu(B(x,2r)) \leq C \mu(B(x,r)), $ for all $x\in Y$ and $r>0$, where $B(x,r) = \set{y\in Y \ : \ \rho(x,y)<r}$, cf. \cite{CoifmanWeiss_BullAMS}. It is well-known that this implies the existence of $d,C_d>0$ such that \eq{\tag{D} \label{doubling} \mu(B(x,\gamma r)) \leq C_d (1+\gamma)^d \mu(B(x,r)), \qquad x\in Y, \, \, r, \gamma > 0. } As usual, we choose $d$ as small as possible, even at the cost of enlarging $C_d$. Let $A$ denote a self-adjoint positive operator and let $E_A$ be its spectral measure, i.e. $A = \int_0^\infty \la \, dE_A(\la)$. Denote by $\mathbf{P}_t= \exp(-tA)$ the semigroup generated by $A$.
Assume that there exists an integral kernel $P_t(x,y)$ such that $\mathbf{P}_t f(x) = \int_Y P_t(x,y) f(y) \, d\mu(y)$ and that $P_t(x,y)$ satisfies the upper gaussian bounds, i.e. there exist $c_2, C_2 > 0$ such that \eq{\tag{UG}\label{gauss_up} P_t(x,y) \leq C_2 \mu(B(x,\sqrt{t}))^{-1} \exp\left( - \frac{\rho(x,y)^2}{c_2 t}\right), \quad t>0, \, x,y\in Y. } \subsection{Multiplier theorems} By the spectral theorem, for a Borel function $m$ on $(0,\infty)$, we have the operator \eqx{ m(A) = \int_0^\infty m(\la) \, dE_A(\la). } In the classical case $A=-\Delta$, $Y=\mathbb{R}^D$, the H\"ormander multiplier theorem states that if $m$ satisfies \eqref{Sob} with $\beta>D/2$, then $m(-\Delta)$ is of weak type $(1,1)$ and bounded on $L^p(\mathbb{R}^D)$ for $1<p<\infty$. It is well-known that the constant $D/2$ is sharp in the sense that it cannot be replaced by a smaller constant, see e.g. \cite{Sikora_Wright}. At this point let us recall one of many multiplier theorems on spaces of homogeneous type. Suppose $Y$ and $A$ are as in Subsection \ref{ss11}. Following \cite{Sikora_JFA2} we introduce an additional assumption. Suppose that there exist $C>0$ and $q\in [2,\infty]$ such that for every $R>0$ and every Borel function $m$ on $\mathbb{R}$ satisfying $\mathrm{supp}\, m\subseteq [R/2, 2R]$ we have \eq{\tag{$P_q$}\label{plan} \int_Y \abs{K_{m(A)}(x,y)}^2 d\mu(x) \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1} \norm{m(R \cdot)}_{L^q(\mathbb{R})}^2. } \thm{multi1}{\cite[Thm. 3.1]{Sikora_JFA2} Assume that on a space of homogeneous type $(Y,\rho, \mu)$ there is a self-adjoint positive operator $A$ that satisfies \eqref{gauss_up}.
Moreover, assume that \eqref{plan} holds with some $q\in[2,\infty]$ and $m$ satisfies \eq{\tag{$S_q$} \label{Sq} \sup_{t>0} \norm{\eta(\cdot)m(t \cdot )}_{W^{q,\beta}(\mathbb{R})} < \infty } with some $\beta>d/2$. Then $m(A)$ is of weak type $(1,1)$ and bounded on $L^p(Y)$ for $p\in (1,\infty)$. } At this point let us make a few comments. \en{ \item Assuming \eqref{gauss_up}, the operators $m(A)$ appearing in \eqref{plan} always have integral kernels $K_{m(A)}(x,y)$, cf. \cite[Lem. 2.2]{Sikora_JFA2}. \item For the Bessel operator we are interested in \eqref{Sq} and \eqref{plan} for $q=2$ only, $(S) = (S_2)$. However, in Section $2$ the results are stated and proved with an arbitrary $q\in[2,\infty]$. \item The assumption \eqref{plan} in some sense plays the role of the Plancherel theorem in the proof of Theorem \ref{multi1}. It is the key to obtaining the sharp range $\beta >d/2$. For example, if we required $m$ to satisfy \eqref{Sq} with $\beta >d/2+1/2$, then \eqref{plan} would be superfluous. \item The assumption \eqref{plan} is written in \cite{Sikora_JFA2} for $m$ having support in $[0,R]$, not in $[R/2,2R]$. However, a simple inspection of the proof shows that \eqref{plan} is needed only for $m$ with $\mathrm{supp} \, m \subseteq [R/2,2R]$. This makes no difference for many operators. However, it matters e.g. when considering the Bessel operator with negative parameters $\a_j$. \label{page2} \item Assumption \eqref{plan} in \cite{Sikora_JFA2} is written for $m(\sqrt{A})$, but we use an equivalent version with $m(A)$ (therefore we replace $B(y,R^{-1})$ by $B(y,R^{-1/2})$). } One of the main goals of this paper is to establish a multiplier theorem on Hardy spaces.
We shall use the definition of the Hardy space $H^1(A)$ associated with $A$ by means of the maximal operator of the semigroup $\mathbf{P}_t$, namely \eqx{ H^1(A) = \set{f\in L^1(Y) \ : \ \norm{f}_{H^1(A)}:= \norm{\sup_{t>0} \abs{\mathbf{P}_t f}}_{L^1(Y)} < \infty}. } To state our result we shall additionally assume that $P_t(x,y)$ also satisfies the lower Gaussian bounds, namely there exist $c_1, C_1 >0$ such that \eq{\tag{LG}\label{gauss_down} P_t(x,y) \geq C_1 \mu(B(x,\sqrt{t}))^{-1} \exp\left( - \frac{\rho(x,y)^2}{c_1 t}\right), \quad t>0, \, x,y\in Y, } and that the space $(Y,\rho,\mu)$ satisfies the following assumption: \eq{\tag{Y}\label{X} \text{ for all } x\in Y \text{ the function } r \mapsto \mu(B(x,r)) \text{ is a bijection on } (0,\infty). } Notice that \eqref{X} implies that $\mu(Y)=\infty$ and that $\mu$ is non-atomic. Now we are ready to state the theorem. \sthm{multi2}{ Assume that $(Y,\rho,\mu)$ is a space of homogeneous type, $d$ is as in \eqref{doubling}, and \eqref{X} is satisfied. Suppose that there is a self-adjoint positive operator $A$ such that \eqref{gauss_up}, \eqref{gauss_down}, and \eqref{plan} hold with some $q\in[2,\infty]$. If $m$ satisfies \eqref{Sq} with some $\beta>d/2$, then $m(A)$ is bounded from $H^1(A)$ to $H^1(A)$, i.e. there exists $C>0$ such that \eqx{ \norm{m(A) f}_{H^1(A)} \leq C \norm{f}_{H^1(A)}. } } The history of multiplier theorems on spaces of homogeneous type is long and wide. The interested reader is referred to \cite{Sikora_JFA2,Carbonaro_Drag,Stein,Alexopoulos,Hormander,DPW_JFAA,Garrigos_Seeger,Sikora_et_al_JAnalMath,Christ_Trans,Meda_general,Muller_Stein,Hebisch_multipliers,Chen,DP_Argentina} and the references therein. {Let us concentrate for a moment on the range of the parameter $\beta$ in Theorem \ref{multi2}. Obviously, in general, the range $\beta>d/2$ is optimal.
However, it may happen that for some particular operators one can obtain multiplier results assuming only that $\beta >\wt{d}/2$ with some $\wt{d}<d$, see e.g. \cite{Martini_Muller,Martini_Sikora,Muller_Stein}. On the other hand, there are known families of operators for which the constant $d/2$ cannot be lowered. One of the methods of proving this is to derive lower estimates for $A^{ib}$ in terms of $b\in \mathbb{R}$, see \cite{Chen_Sikora,Martini_Sikora,Sikora_Wright,Sikora_JFA2}. Lastly, let us mention that some multiplier results hold also in the non-doubling case, see e.g. \cite{Cowling_AnnM83}. Boundedness of operators on the Hardy space $H^1$ is a natural counterpart of the weak type $(1,1)$ bound. For example, it is a good endpoint for interpolation, see e.g. \cite{Bernicot}. However, Hardy spaces are strictly related to certain cancellation conditions, and it is usually more involved to study properties of operators on the Hardy space than on $L^p$ or $L^{p,\infty}$ spaces. Let us also mention that boundedness from $H^1$ to $H^1$ obviously implies boundedness from $H^1$ to $L^1$, which is usually much easier to prove. \subsection{Hardy spaces} \label{sec-hardy-spaces} Hardy spaces on spaces of homogeneous type have been studied extensively since the 1960s, see e.g. \cite{CoifmanWeiss_BullAMS}. In particular, by now many atomic decompositions are known for $H^p$ on various spaces and for operators acting on these spaces. We refer the reader to e.g. \cite{Hofmann_Memoirs,BDT_d'Analyse,Dziubanski2017,Auscher_McIntosh_Russ_JGA08} and the references therein. In this subsection we recall some results on Hardy spaces related to $A$, assuming that \eqref{doubling}, \eqref{gauss_up}, \eqref{gauss_down}, and \eqref{X} are satisfied. For the proofs and more details we refer the reader to \cite{Dziubanski2017}.
Firstly, there exists a unique (up to a multiplicative constant) $A$-harmonic function $\omega : Y \to \mathbb{R}$ such that \eqx{ C^{-1} \leq \omega(x) \leq C, \qquad x\in Y. } The function $\omega$ plays a special role in the analysis of $A$ and $\mathbf{P}_t$. In particular, we have the following H\"older-type estimate. \thm{conlip}{ Suppose that the semigroup $\mathbf{P}_t$ satisfies \eqref{gauss_up} and \eqref{gauss_down}. Then there exist positive constants $\gamma, c, C$ such that if $\rho(y,z)\leq \sqrt{t}$, then \eqx{ \abs{\frac{P_t(x,y)}{\omega(y)} - \frac{P_t(x,z)}{\omega(z)}} \leq C \mu(B(x,\sqrt{t}))^{-1}\left( \frac{\rho(y,z)}{\sqrt{t}}\right)^\gamma \exp\left( - \frac{\rho(x,y)^2}{ct} \right).}} \noindent Theorem \ref{conlip} is quite well known and follows from a general theory. For a short and independent proof see \cite[Sec. 4]{Dziubanski2017}. \cor{lippt}{ There exist $\gamma, C >0$ such that if $\rho(y,z)\leq \sqrt{t}$, then \eqx{ \int_Y \abs{\frac{P_t(x,y)}{\omega(y)} - \frac{P_t(x,z)}{\omega(z)}} \, d\mu(x) \leq C \left( \frac{\rho(y,z)}{\sqrt{t}}\right)^\gamma. } } Using Theorem \ref{conlip}, the authors of \cite{Dziubanski2017} obtained the following atomic decomposition for the elements of $H^1(A)$.
Let us call a function $a: Y \to \mathbb{C}$ a $(\mu,\omega)$-atom if there exists a ball $B$ in~$Y$ such that: \eqx{ \mathrm{supp} \,a \subseteq B, \qquad \norm{a}_\infty \leq \mu(B)^{-1}, \qquad \int_B a(x) \omega(x) \, d\mu(x)=0. } \thm{hardy_atomic}{\cite[Thm. 1]{Dziubanski2017} There exists a constant $C>0$ such that for each $f\in H^1(A)$ there exist $\la_k\in \mathbb{C}$ and $(\mu,\omega)$-atoms $a_k$ ($k\in \mathbb{N}$) such that \eqx{ f(x) = \sum_{k\in \mathbb{N}} \la_k a_k(x), \quad \text{and} \quad C^{-1} \norm{f}_{H^1(A)} \leq \sum_{k\in \mathbb{N}} |\la_k| \leq C \norm{f}_{H^1(A)}. } } Let us start by recalling a few consequences of \eqref{doubling} and \eqref{gauss_up}. \lem{s21}{\cite[Lem. 2.1]{Sikora_JFA2} Suppose that \eqref{doubling} and \eqref{gauss_up} hold. Then \eqx{ \int_{ B(y,r)^c} |P_t(x,y)|^2 \, d\mu(x) \leq C \mu(B(y,\sqrt{t}))^{-1} \exp\left( - \frac{r^2}{c_2t}\right). } In particular, \eqx{ \label{norm2pt} \norm{P_t(x,\cdot)}^2_{L^2(Y)} \leq C \mu(B(x,\sqrt{t}))^{-1}. }} \lem{s41}{ \cite[Lem. 4.1]{Sikora_JFA2} For $\kappa\geq 0$ there exists a constant $C=C(\kappa)>0$ such that \eqx{ \int_Y |P_{(1+i\tau)R^{-1}}(x,y)|^2 (1+ R^{1/2}\rho(x,y))^\kappa \, d\mu(x) \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1} (1+|\tau|)^\kappa. } } \lem{s44}{ \cite[Lem. 4.4]{Sikora_JFA2} Suppose that \eqref{doubling} holds and $\delta > 0$. Then \eqx{ \int_{B(y,r)^c} (1+R^{1/2}\rho(x,y))^{-d-2\delta} \, d\mu(x) \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)(1+rR^{1/2})^{-2\delta}.
} } \newcommand{\ol}[1]{\wt{m}} \subsection{Key kernel estimates} This subsection is devoted to obtaining the key estimates needed for the proof of Theorem \ref{multi2}. We shall assume (temporarily) that $m$ satisfies $\mathrm{supp}\, m \subseteq [R/2, 2R]$ with some $R>0$. Later we shall use a partition of unity to treat general $m$. Denote $m_R(\la) = m(R\la)$, so that $\mathrm{supp}\, m_R \subseteq [2^{-1},2]$. Let us emphasize that below the letter $q\in[2,\infty]$ always denotes the exponent related to \eqref{plan} and \eqref{Sq}. Moreover, all the spectral operators below admit related integral kernels, which can be seen by using an argument identical to the one in \cite[Lem. 2.2]{Sikora_JFA2}. Let us denote $\ol{m}_t(\la) = \exp(-t\la) m(\la)$ and let $M_t(x,y)$ be the kernel associated with $\ol{m}_t(A) = \mathbf{P}_t m(A)$. \prop{prop:main}{ Assume that $\mathrm{supp} \, m \subseteq [R/2, 2R]$ and $m_R \in W^{q,\beta}(\mathbb{R})$ with $\beta > d/2$. Then there exist $\delta, \gamma, C>0$ such that for $y,z\in Y$ and $r>0$ we have \eq{ \label{main1} \int_{B(y,r)^c} \sup_{t>0} \, \abs{M_t(x,y)} \, d\mu(x) \leq C \left(1 + rR^{1/2}\right)^{-\delta} \norm{m_R}_{W^{q,\beta}(\mathbb{R})},} and, for $\rho(y,z) < R^{-1/2}$, \eq{\label{main2} \int_{B(y,r)^c} \sup_{t>0}\abs{ \frac{M_t(x,y)}{\omega(y)} - \frac{M_t(x,z)}{\omega(z)}} \, d\mu(x) \leq C \left( R^{1/2}\rho(y,z)\right)^{\gamma} \norm{m_R}_{W^{q,\beta}(\mathbb{R})}. } } Let us start by showing the following lemma.
\lem{s43}{ For $\e >0$ and $\kappa \geq 0$ there exists a constant $C=C(\kappa,\e)$ such that \eqx{ \int_Y \sup_{t>0} |M_{t}(x,y)|^2 \left(1+R^{1/2}\rho(x,y)\right)^{\kappa} \, d\mu(x) \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1} \norm{m_R}^2_{W^{q,\kappa/2+\e}(\mathbb{R})}. }} \pr{ Fix a cut-off function $\psi\in C_c^\infty(4^{-1},4)$ such that $\psi \equiv 1$ on $[2^{-1}, 2]$. Set \spx{ n_{t,R}(\la) = m_R(\la) \underbrace{e^{-tR\la} e^\la \psi(\la)}_{\la_{t,R}(\la)}. } By the Fourier inversion formula, \eqx{ \ol{m}_{t}(A) = n_{t,R}(AR^{-1})e^{-AR^{-1}} = \frac{1}{2\pi} \int_{\mathbb{R}} \widehat{n}_{t,R}(\tau) \exp \left( (i\tau-1)AR^{-1} \right) \, d\tau } and \eq{\label{eq_Mt} M_t(x,y) = \frac{1}{2\pi} \int_{\mathbb{R}} \widehat{n}_{t,R}(\tau) P_{(1-i\tau)R^{-1}}(x,y) \, d\tau. } Notice that $\mathrm{supp} \, \la_{t,R}^{(N)} \subseteq (4^{-1}, 4)$ for arbitrary $N\in\mathbb{N}$. By simple calculus we can find a~constant $C_N$ such that \spx{ \sup_{R>0,t>0} \abs{\widehat{\la}_{t,R}(\tau)} \leq C_N(1+|\tau|)^{-N}.
} Since $\widehat{n}_{t,R} = \widehat{m}_R \ast \widehat{\la}_{t,R}$ and $(1+|\tau|) \leq (1+|\theta|)(1+|\tau-\theta|)$, for $\kappa\geq0$ and $\e>0$ we use the Cauchy-Schwarz inequality, getting \spx{ \int_{\mathbb{R}} \sup_{t>0} |\widehat{n}_{t,R}(\tau)| (1+|\tau|)^{\kappa/2} \, d\tau & \leq \int_{\mathbb{R}} \int_{\mathbb{R}} \sup_{t>0} |\widehat{m}_R(\theta)||\widehat{\la}_{t,R}(\tau-\theta)| (1+|\tau|)^{\kappa/2} \, d\theta \,d\tau \\ & \leq \int_{\mathbb{R}} \int_{\mathbb{R}} \sup_{t>0} |\widehat{m}_R(\theta)||\widehat{\la}_{t,R}(\tau-\theta)| (1+|\theta|)^{\kappa/2} \, (1+|\tau-\theta|)^{\kappa/2} \,d\tau \, d\theta \\ & \leq C \int_{\mathbb{R}} |\widehat{m}_R(\theta)| (1+|\theta|)^{(\kappa+1)/2+\e} (1+|\theta|)^{-1/2-\e} \, d\theta \\ & \leq C \norm{m_R}_{W^{2,(1+\kappa)/2+\e}(\mathbb{R})} \left( \int_{-\infty}^\infty (1+|\theta|)^{-1-2\e} \, d\theta \right)^{1/2} \\ & \leq C \norm{m_R}_{W^{2,(1+\kappa)/2+\e}(\mathbb{R})}.
} Hence, by \eqref{eq_Mt}, the Minkowski inequality, and Lemma \ref{s41}, we obtain \sp{ \label{W2} & \left( \int_Y \sup_{t>0} |M_t(x,y)|^2 (1+R^{1/2}\rho(x,y))^\kappa \ d\mu(x) \right)^{1/2} \\ & \leq \int_{\mathbb{R}} \sup_{t>0} |\widehat{n}_{t,R}(\tau)| \left( \int_Y |P_{(1-i\tau)R^{-1}}(x,y)|^2 (1+R^{1/2}\rho(x,y))^{\kappa} \, d\mu(x) \right)^{1/2} \, d\tau\\ & \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1/2} \int_{\mathbb{R}} \sup_{t>0} |\widehat{n}_{t,R}(\tau)| (1+|\tau|)^{\kappa/2} \, d\tau \\ & \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1/2} \norm{m_R}_{W^{2,(1+\kappa)/2+\e}(\mathbb{R})}\\ & \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1/2} \norm{m_R}_{W^{q,(1+\kappa)/2+\e}(\mathbb{R})}. } In the last inequality we have used that $\mathrm{supp}\, m_R\subseteq [2^{-1},2]$ and $q\geq 2$. Observe that \eqref{W2} is exactly the estimate we are looking for, except that the Sobolev exponent is higher by $1/2$ than we want. To sharpen this estimate we make use of a known interpolation method. Notice that $M_t(x,y) = \mathbf{P}_t(K_{m(A)}(\cdot,y))(x)$. It is well known that \eqref{gauss_up} implies the boundedness on $L^2(Y)$ of the maximal operator $\mm f =\sup_{t>0} |\mathbf{P}_t f|$. The second estimate needed for the interpolation is \sp{\label{L2} \eee{\int_Y \sup_{t>0} |M_{t}(x,y)|^2 \, d\mu(x)}^{1/2} & = \norm{ \mm K_{m(A)}(\cdot,y)}_{L^2(Y)} \\ & \leq C \norm{ K_{m(A)}(\cdot,y) }_{L^2(Y)}\\ & \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1/2}\norm{m_R}_{L^q(\mathbb{R})}. } In the last inequality we have used \eqref{plan}. Now Lemma \ref{s43} follows by interpolating \eqref{W2} and \eqref{L2}, see e.g. the proofs of \cite[Lem.
4.3(a)]{Sikora_JFA2} and \cite[Lem. 2.2]{DPW_JFAA} for details. } \pr{[Proof of \eqref{main1}] By the Cauchy-Schwarz inequality and Lemmas \ref{s44} and \ref{s43}, \spx{ \label{first} & \int_{B(y,r)^c} \sup_{t>0} |M_{t}(x,y)| \, d\mu(x) \\ & \leq \left( \int_Y \sup_{t>0} |M_{t}(x,y)|^2(1+R^{1/2}\rho(x,y))^{d+2\de} \, d\mu(x) \right)^{1/2} \left( \int_{B(y,r)^c} (1+R^{1/2}\rho(x,y))^{-d-2\de} \, d\mu(x) \right)^{1/2}\\ & \leq C \mu\left(B\left(y,R^{-1/2}\right)\right)^{-1/2} \norm{m_R}_{W^{q,d/2+\de+\e}(\mathbb{R})}\mu\left(B\left(y,R^{-1/2}\right)\right)^{1/2}(1+rR^{1/2})^{-\de} \\ & \leq C (1+rR^{1/2})^{-\de} \norm{m_R}_{W^{q,\beta}(\mathbb{R})}, } where $\de, \e >0$ are chosen so that $d/2+\de+\e \leq \beta$. } \renewcommand{\wt}{\widetilde} Consider for a moment the operator $\mathbf{P}_t m(A) \exp(AR^{-1})$ and let $\wt{M}_{t,R}(x,y)$ be its kernel. By arguments almost identical to those in the proofs of Lemma \ref{s43} and \eqref{main1}, we can show that for $\beta>d/2$ we also have \eq{\label{est_Mtilde} \int_{B(y,r)^c} \sup_{t>0} |\wt{M}_{t,R}(x,y)| \, d\mu(x) \leq C \norm{m_R}_{W^{q,\beta}(\mathbb{R})}. } \pr{[Proof of \eqref{main2}] Notice that $M_{t}(x,y) = \int_Y \wt{M}_{t,R}(x,u)P_{R^{-1}}(u,y) \, d\mu(u)$.
For $\rho(y,z)<R^{-1/2}$, by Corollary \ref{lippt} and \eqref{est_Mtilde}, \spx{ & \int_{B(y,r)^c} \sup_{t>0} \abs{\frac{M_t(x,y)}{\omega(y)} -\frac{M_t(x,z)}{\omega(z)}} \, d\mu(x) \\ & = \int_{B(y,r)^c} \sup_{t>0} \abs{\int_Y \wt{M}_{t,R}(x,u)\left( \frac{P_{R^{-1}}(u,y)}{\omega(y)} - \frac{P_{R^{-1}}(u,z)}{\omega(z)} \right) \, d\mu(u)} \, d\mu(x) \\ & \leq \int_Y \abs{ \frac{P_{R^{-1}}(u,y)}{\omega(y)} - \frac{P_{R^{-1}}(u,z)}{\omega(z)}} \int_{B(y,r)^c} \sup_{t>0} | \wt{M}_{t,R}(x,u)| \, d\mu(x) \, d\mu(u) \\ & \leq C \left( R^{1/2}\rho(y,z)\right)^\gamma \norm{m_R}_{W^{q,\beta}(\mathbb{R})}. } } \subsection{Proof of Theorem \ref{multi2}} {Theorem \ref{multi2} follows from Proposition \ref{prop:main} by a quite standard argument. We present the details for completeness and for the reader's convenience. As usual, by a continuity argument, in order to prove the boundedness of the operator $m(A)$ on $H^1(A)$ it is enough to show that there exists $C>0$ such that \eqx{ \norm{m(A)a}_{H^1(A)} = \norm{\mm m(A)a}_{L^1(Y)} \leq C} holds for every $(\mu,\omega)$-atom $a$, see Theorem \ref{hardy_atomic}. Assume then that $\mathrm{supp} \, a \subseteq B(y_0,r) = : B$, $\norm{a}_\infty \leq \mu(B)^{-1}$, and $\int a \, \omega \, d\mu = 0$. As always, the analysis on $2B = B(y_0,2r)$ follows from the Cauchy-Schwarz inequality and the boundedness of $\mm$ and $m(A)$ on $L^2(Y)$.
More precisely, \spx{ \norm{\mm m(A) a }_{L^1(2B)} & \leq \mu(2B)^{1/2} \norm{\mm m(A) a }_{L^2(Y)} \\ & \leq C \mu(B)^{1/2} \norm{a}_{L^2(Y)} \leq C. } Therefore, it is enough to prove that \eq{\label{eq:main} \norm{\mm m(A) a }_{L^1((2B)^c)} \leq C. } Let $\eta\in C_c^\infty(2^{-1},2)$ be a fixed function such that $ \sum_{j\in\mathbb{Z}}\eta(2^{-j}\la) = 1 $ for all $\la \in(0,\infty)$. Using this partition of unity we decompose $m$ as \eqx{ m(\la) = \sum_{j\in \mathbb{Z}} \eta(2^{-j}\la) m(\la) = \sum_{j\in\mathbb{Z}} m_j(\la). } Fix $N\in \mathbb{Z}$ such that $ 2^{-N} \leq r^2 < 2^{-N+1}. $ Then \spx{ \norm{\mm m(A)a}_{L^1((2B)^c)} & \leq \sum_{j\in\mathbb{Z}} \norm{\mm m_j(A)a}_{L^1((2B)^c)}= \sum_{j\geq N}... + \sum_{j<N}... = S_1 + S_2. } Denote $m_{j,t}(\la) = \exp(-t\la) m_j(\la)$ and let $M_{j,t}(x,y)$ be the kernel of $m_{j,t}(A) = \mathbf{P}_t m_j(A)$. Obviously, $\mathrm{supp} \, m_{j,t} \subseteq [2^{j-1}, 2^{j+1}]$ and applying \eqref{main1} we obtain \spx{ \label{s1} S_1 & \leq \sum_{j\geq N} \int_{(2B)^c} \int_{B} \sup_{t>0} |M_{j,t}(x,y)||a(y)| \, d\mu(y) \, d\mu(x) \\ & \leq \sum_{j\geq N} \int_{B} |a(y)| \int_{B^c} \sup_{t>0} |M_{j,t}(x,y)| \, d\mu(x) \, d\mu(y)\\ & \leq C \norm{a}_{L^1(Y)} \sum_{j\geq N} (1+2^{j/2}r)^{-\delta} \norm{\eta(\cdot) m(2^j \cdot) }_{W^{q,\beta}(\mathbb{R})} \\ & \leq C \sup_{t>0} \norm{\eta(\cdot) m(t \cdot) }_{W^{q,\beta}(\mathbb{R})} \leq C.} If $y\in B$ and $j < N$, then $\rho(y,y_0)< r < 2^{-j/2}$ and we can apply \eqref{main2} to the kernel $M_{j,t}$ with $R=2^j$.
Using the cancellation condition of $a$, \spx{ \label{s2} S_2 & \leq \sum_{j < N} \int_{(2B)^c} \sup_{t>0} \abs{\int_{B} M_{j,t}(x,y) a(y) \, d\mu(y)} \, d\mu(x) \\ & = \sum_{j < N} \int_{(2B)^c} \sup_{t>0} \abs{\int_{B} \left( \frac{M_{j,t}(x,y)}{\omega(y)}- \frac{M_{j,t}(x,y_0)}{\omega(y_0)}\right) a(y) \omega(y) \, d\mu(y)} \, d\mu(x)\\ & \leq \sum_{j < N} \int_{B} |a(y)| \int_{B(y,r)^c} \sup_{t>0} \abs{ \frac{M_{j,t}(x,y)}{\omega(y)}- \frac{M_{j,t}(x,y_0)}{\omega(y_0)}} \, d\mu(x) \, \omega(y) \, d\mu(y)\\ & \leq C \sum_{j< N} 2^{\frac{j\gamma}{2}} \int_{B} |a(y)| \rho(y,y_0)^{\gamma} \, d\mu(y) \, \norm{\eta(\cdot) m(2^j \cdot) }_{W^{q,\beta}(\mathbb{R})} \\ & \leq C \sup_{t>0} \norm{\eta(\cdot) m(t \cdot) }_{W^{q,\beta}(\mathbb{R})} \, r^\gamma \sum_{j< N} 2^{\frac{j\gamma}{2}} \leq C. } This finishes the proof of \eqref{eq:main} and of Theorem \ref{multi2}. \section{The multidimensional Bessel operator}\label{sec_Bess} In this section we return to the analysis related to $B$ and prove the results stated in Section \ref{intro}. \subsection{The Hankel transform} Recall that $N\in \mathbb{N}$ and $\a_j >-1$ for $j=1,...,N$. For $x,\xi\in X = (0,\infty)^N$ denote $\varphi_{\a}(x\xi) = \varphi_1(x_1\xi_1)\cdot ... \cdot \varphi_N(x_N\xi_N)$, where $$\varphi_j(z) = 2^{(\a_j-1)/2}\Gamma\left((\a_j+1)/2 \right)z^{-(\a_j-1)/2}J_{(\a_j-1)/2}(z), \quad z>0.$$ Here $J_\tau$ denotes the Bessel function of the first kind.
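As a side remark, the normalization of $\varphi_j$ is easy to check: since $J_\tau(z)\sim (z/2)^\tau/\Gamma(\tau+1)$ as $z\to 0^+$ and $\Gamma\left((\a_j-1)/2+1\right)=\Gamma\left((\a_j+1)/2\right)$, one has $\varphi_j(0^+)=1$. The following illustrative snippet (not part of the argument; it assumes Python with numpy and scipy) verifies this normalization, together with the polynomial decay of $\varphi_j$ stated below, for a sample negative parameter $\a_j$; the constant $5$ in the last assertion is a generous hypothetical bound, not a sharp one.

```python
import numpy as np
from scipy.special import gamma, jv

def phi(alpha, z):
    # phi_j built from the Bessel function J_{(alpha-1)/2} of the first kind
    nu = (alpha - 1) / 2
    return 2 ** nu * gamma((alpha + 1) / 2) * z ** (-nu) * jv(nu, z)

alpha = 0.5                         # a sample parameter with -1 < alpha < 0 allowed too
# normalization: phi(0+) = 1, by the small-argument asymptotics of J_nu
assert abs(phi(alpha, 1e-8) - 1) < 1e-6
# decay: |phi(z)| is bounded by a constant times (1+z)^{-alpha/2}
z = np.linspace(1e-6, 100.0, 100000)
assert np.max(np.abs(phi(alpha, z)) * (1 + z) ** (alpha / 2)) < 5.0
```

For $\a_j<0$ the factor $(1+z)^{-\a_j/2}$ grows, which reflects the unboundedness of $\varphi_j$ in that range.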
By the asymptotics of $J_\tau$ one has \eq{\label{phi} \abs{\varphi_j(z)} \leq C (1+z)^{-\a_j/2}, \quad z>0. } The Hankel transform is defined by \eq{\label{hankel} H_\a f(\xi) = \int_X f(x)\varphi_{\a}(x\xi) \, d\nu(x), \quad \xi\in X. } As we have already mentioned, $\varphi_j \in L^\infty$ if and only if $\a_j \geq 0$. Nevertheless, it is known that $H_\a$ always extends uniquely to an isometric isomorphism on $L^2(X)$, see \cite{BCC} and \cite[Lem. 2.7]{Betancor_Stempak}. The multipliers $m(B)$ and the transform $H_\a$ are related in the same way as $m(-\Delta)$ and the Fourier transform on $\mathbb{R}^D$. In particular, if \eqx{ n(\la) = m(|\la|^2), \qquad \la \in X, } then $n$ is radial and \eq{\label{m-han} m(B) f = H_\a (n \cdot H_\a f). } \subsection{$(P_2)$ for the multidimensional Bessel operator} Let us first recall that $\mathbf{T}_t$ satisfies \eqref{gauss_bess} and, obviously, $X$ satisfies \eqref{X}. Therefore, Theorem \ref{main} follows from Theorems \ref{multi1} and \ref{multi2} provided that \eqref{plan} holds with $q=2$, which we now prove. The case $N=1$ follows by a similar and simpler argument, thus we shall concentrate on $N\geq 2$. Let $k\in\set{1,...,N-1}$ and $c_j<2^{-N}$ for $j=1,...,k$. Define the sets $$S_{c_1,...,c_k} = \set{x\in X \ : \ 1/2<|x|^2<2 \text{ and } x_{j}<c_j \text{ for } j=1,...,k}.$$ \lem{x_k}{ Suppose that $\mathrm{supp} \,m\subset [R/2,2R]$, $N\geq 2$, $k\leq N-1$, and $c_j <2^{-N}$ for $j=1,2,...,k$. Then there exists $C>0$ such that \eqx{ \int_{S_{c_1,...,c_k}} \abs{m\left(R|x|^2\right)}^2 x_{1}^{\a_{1}} ... x_k^{\a_k} \, dx_1...dx_N \leq C c_1^{\a_{1}+1} ... c_k^{\a_{k}+1} \norm{m(R\cdot)}_{L^2(\mathbb{R})}^2. }} \pr{ Introduce the spherical coordinates $(r,\theta_1, ...
, \theta_{N-1})$ on $\mathbb{R}^N$, namely \eqx{ \label{spher-var-d} \begin{cases} x_1 = r\sin(\theta_1), & \\ x_i = r \sin(\theta_i) \prod_{j=1}^{i-1}\cos(\theta_j), & \text{ for } i=2,3,...,N-1, \\ x_N = r \prod_{j=1}^{N-1}\cos(\theta_j), & \end{cases} } so that \eqx{ dx_1\,...\,dx_N = r^{N-1} \prod_{j=1}^{N-2} \cos^{N-1-j}(\theta_j) \, dr\, d\theta_1 ... \,d\theta_{N-1}. } Since $x\in X$, we have $\theta_j \in (0,\pi/2)$ for $j=1,...,N-1$. We claim that if $x\in S_{c_1,...,c_k}$, then \eq{\label{male_katy} \sin(\theta_j) < 2^{-N/2} \leq 2^{-1/2} } for $j=1,...,k$. Obviously, if $\sin(\beta) < 2^{-1/2}$ for $\beta \in (0,\pi/2)$, then $(\cos \beta)^{-1} < 2^{1/2}$. Observe also that $2^{-1/2}<r<2^{1/2}$ on $S_{c_1,...,c_k}$, since $1/2<|x|^2<2$ there. Therefore, \eqref{male_katy} follows easily by induction, i.e. for $i=1,...,k$, \eqx{ \sin \theta_i = x_i r^{-1} (\cos \theta_1)^{-1} ... (\cos \theta_{i-1})^{-1}\leq 2^{-N} 2^{i/2} \leq 2^{-N/2}. } Denote $S=S_{c_1,...,c_k}$. As a consequence of \eqref{male_katy} we have $\sin \theta_i \simeq \theta_i$ and $\cos \theta_i \simeq 1$ for $i=1,...,k$. Using this, $x_i^{\a_i} \simeq r^{\a_i} \theta_i^{\a_i}$ for $i=1,...,k$, and \spx{ & \int_{S} \abs{m(R|x|^2)}^2 x_{1}^{\a_{1}} ... x_k^{\a_k} \, dx_1...dx_N \\ & \leq C \int_S \abs{m(Rr^2)}^2 r^{N-1+\a_1+...+\a_k} \theta_1^{\a_1}...\theta_k^{\a_k} \, dr \, d\theta_{1} ... d\theta_{N-1} \\ & \leq C\int_0^{c c_1} \theta_1^{\a_1} \,d\theta_1 \cdot ... \cdot \int_0^{c c_k} \theta_k^{\a_k} \,d\theta_k \cdot \int_{1/2<r^2<2} \abs{m\left(Rr^2\right)}^2 dr \\ & \leq C c_1^{\a_1+1} \cdot ... \cdot c_k^{\a_k+1} \cdot \norm{m(R\cdot)}_{L^2(\mathbb{R})}^2. }} \prop{prop_P}{ Assume that $N\in \mathbb{N}$ and $\a_j>-1$ for $j=1,..,N$. Then \eqref{plan} holds for $B$ with $q=2$.
} \pr{\label{plan-bessel} In the proof we consider only the case $N\geq 2$. Let $q=2$ and suppose that $m$ is supported in $[R/2,2R]$ for some $R>0$. Notice that by \eqref{m-han} and \eqref{hankel} we have \spx{ m(B)f(x) & = \int_{X} f(y) \int_{X} n(\xi) \varphi_{\a}(y\xi) \varphi_{\a}(x\xi) \, d\nu(\xi) \, d\nu(y)\\ & = \int_{X} f(y) H_\a\left( n(\cdot) \varphi_{\a}(y\cdot)\right)(x) \, d\nu(y), } so the kernel associated with $m(B)$ has the form \eqx{K_{m(B)}(x,y) = H_\a\left( n(\cdot) \varphi_{\a}(y\cdot)\right)(x).} Therefore, by the Plancherel identity for $H_\a$, $(P_2)$ is equivalent to \eq{\label{upr} \int_{R/2<|x|^2<2R}\abs{m(|x|^2)}^2\abs{ \varphi_{\a}(xy)}^2 \, d\nu(x) \leq C \nu\left(B\left(y,R^{-1/2}\right)\right)^{-1} \norm{m(R\cdot)}^2_{L^2(\mathbb{R})}. } For each $i=1,...,N$ we consider four cases: \begin{enumerate}[{\bf C1.}] \item $x_i < 2^{-N}$,\quad $\sqrt{R}y_i > 2^N $, \quad $\sqrt{R}y_ix_i < 1$, \item $x_i < 2^{-N}$,\quad $\sqrt{R}y_i \leq 2^N $, \item $x_i < 2^{-N}$,\quad $\sqrt{R}y_{i} > 2^N$, \quad $\sqrt{R}y_ix_i \geq 1$, \item $ 2^{-N} \leq x_i \leq \sqrt{2} $. \end{enumerate} Divide the set $\set{x\in X \ : \ 1/2<|x|^2<2}$ into several disjoint regions according to the cases above. Without loss of generality we may consider the set $S$ of points $x\in X$ such that: \ite{ \item $x_i$ satisfies {\bf C1.} for $i=1,..., k_1$, \item $x_i$ satisfies {\bf C2.} for $i=k_1+1,..., k_2$, \item $x_i$ satisfies {\bf C3.} for $i=k_2+1,..., k_3$, \item $x_i$ satisfies {\bf C4.} for $i=k_3+1,..., N$, } where $0\leq k_1\leq k_2 \leq k_3 < N$. The fact that $k_3< N$ is implied by $|x|^2>1/2$. Notice that it may happen that $S$ is empty. Recall that $\nu(B(y,r)) \simeq \prod_{j=1}^N \nu_j(B(y_j,r))$, where $d\nu_j(x_j) = x_j^{\a_j} \, dx_j$ is the one-dimensional measure, and \eqx{ \nu_j\left(B\left(y_j, R^{-1/2}\right)\right)^{-1} \simeq R^{(\a_j+1)/2} \eee{1+\sqrt{R}y_j}^{-\a_j}.
} Denote $d_{gl} = N+\a_1 +...+\a_N$. Using \eqref{phi} and Lemma \ref{x_k} with $k=k_2$, we have \spx{ &\int_{R/2<|x|^2<2R}\abs{m\left(|x|^2\right)}^2\abs{ \varphi_{\a}(xy)}^2 d\nu(x) \leq C \sum_S R^{\frac{d_{gl}}{2}} \int_S \abs{m\left(R|x|^2\right)}^2 \prod_{j=1}^N\eee{x_j^{-1}+\sqrt{R}y_j}^{-\a_j} dx\\ &\leq C \sum_S R^{\frac{d_{gl}}{2}} \int_S \abs{m\left(R|x|^2\right)}^2 x_1^{\a_1}...x_{k_2}^{\a_{k_2}} \prod_{j=k_2+1}^N \left(1+\sqrt{R}y_{j}\right)^{-\a_{j}} dx_1...dx_N\\ &\leq C \sum_S R^{\frac{d_{gl}}{2}} \prod_{i=1}^{k_1}\left(\sqrt{R}y_i\right)^{-\a_i-1} \prod_{j=k_2+1}^N \left(1+\sqrt{R}y_{j}\right)^{-\a_{j}} \norm{m(R\cdot)}_{L^2(\mathbb{R})}^2\\ &\leq C \prod_{i=1}^{k_1}R^{(\a_i+1)/2}\left(\sqrt{R}y_i\right)^{-\a_i} \prod_{k=k_1+1}^{k_2} R^{(\a_k+1)/2} \prod_{j=k_2+1}^N R^{(\a_j+1)/2} \left(1+\sqrt{R}y_{j}\right)^{-\a_{j}} \norm{m(R\cdot)}_{L^2(\mathbb{R})}^2\\ &\leq C \nu\left(B\left(y,R^{-1/2}\right)\right)^{-1} \norm{m(R\cdot)}_{L^2(\mathbb{R})}^2. } } \subsection{Imaginary powers of $B$}\label{sec3} In this subsection we prove Lemma \ref{kernel} and Theorems \ref{lower1} and~\ref{lower2}. From now on we consider the one-dimensional Bessel operator, i.e. $N=1$, $X=(0,\infty)$, $\a>-1$, and $d\nu(x) = x^\a \, dx$. Let us start by recalling the well-known asymptotics of the modified Bessel function $I_\tau$ \cite{Watson,Lebedev}: \al{\label{Bessel_loc} &&&&I_{\tau}(x) &= \Gamma\eee{\frac{\tau+1}{2}}^{-1} \eee{\frac{x}{2}}^\tau + O\eee{x^{\tau+1}}, && x \sim 0,&&&&\\ \label{Bessel_glob} &&&&I_{\tau} (x) &= (2\pi x)^{-1/2} e^{x}\eee{ 1 + O(x^{-1})}, && x\sim \infty.&&&& } Now we provide a short argument for \eqref{def-im-ker}. In \cite[Sec.
4.3]{BCN} it is proved that $B^{ib}$ is associated with the kernel $$-\Gamma(-ib+1)^{-1} \int_0^\infty t^{-ib} \partial_t T_t(x,y) \, dt$$ in the sense of \eqref{def-im-ker} (let us note that in \cite{BCN} only positive values of the $\a_j$'s are considered, but the proof works for $\a_j>-1$ as well). Integrating by parts, \spx{ - \Gamma(-ib+1)^{-1} \int_0^\infty t^{-ib} \partial_t T_t(x,y)\,dt =& - \Gamma(-ib+1)^{-1} \lim_{\e\to 0} \left( \e^{ib} T_{\e^{-1}}(x,y) - \e^{-ib} T_\e(x,y) \right)\\ & + \Gamma(-ib)^{-1} \int_0^{\infty} t^{-ib} T_t(x,y)\,\frac{dt}{t} = K_b(x,y). } \pr{[Proof of Lemma \ref{kernel}] Let us first notice that for $\kappa\in \mathbb{R}$ and $c,M>0$ there exists $C=C(\kappa,c,M)$ such that \sp{\label{int-conv} \int_{c z}^{\infty} t^\kappa \exp\eee{-\frac{t}{4}} \frac{dt}{t} \leq Cz ^{-M}, \qquad z\geq 1. } Using \eqref{Kb} and \eqref{Kt} one obtains \spx{ 2 \Gamma(-ib) K_b(x,y) & = \int_0^\infty t^{-ib-1} (xy)^{-(\a-1)/2} I_{(\a-1)/2}\left(\frac{xy}{2t}\right) \exp\left( -\frac{x^2+y^2}{4t}\right) \frac{dt}{t} \\ &= \int_0^{xy} ... + \int_{xy}^\infty ...= A_1 + A_2. } Denote $\chi_{loc}(x,y)=\chi_{\set{y/2<x<2y}}(x,y)$ and $\chi_{glob}(x,y) = 1-\chi_{loc}(x,y)$ for $x,y\in X$. In the proof below all expressions denoted by $R_k$ shall be parts of the kernel $R_b(x,y)$. Using \eqref{Bessel_glob}, we write $A_1 = A_{1,1}+R_1$, where \spx{ A_{1,1} = &\pi^{-1/2} \int_0^{xy} t^{-ib-1/2} (xy)^{-\a/2} \exp\left( -\frac{|x-y|^2}{4t}\right) \frac{dt}{t} } and \alx{ |R_1| &= \abs{ \int_0^{xy} t^{-ib-1} (xy)^{-(\a-1)/2} \exp\left( -\frac{x^2+y^2}{4t}\right) \left(I_{(\a-1)/2}\left(\frac{xy}{2t}\right) - \left(\frac{\pi xy}{t}\right)^{-1/2}\exp\left(\frac{xy}{2t}\right)\right) \frac{dt}{t}}\\ &\leq C \int_0^{xy} t^{1/2} (xy)^{-\a/2-1} \exp\eee{-\frac{|x-y|^2}{4t}} \frac{dt}{t} \\ &= C |x-y| (xy)^{-\a/2-1} \int_{\frac{|x-y|^2}{xy}}^\infty t^{-1/2} e^{-t/4} \frac{dt}{t}\\ &\leq C xy (x+y)^{-\a-3}.
} In the last inequality we have used \eqref{int-conv}. Denoting \eqx{ R_2 = \chi_{glob}(x,y) A_{1,1} \quad \text{and}\quad R_3 = \pi^{-1/2} \chi_{loc}(x,y) \int_{xy}^\infty t^{-ib-1/2} (xy)^{-\a/2} \exp\left( -\frac{|x-y|^2}{4t}\right) \frac{dt}{t}, } we have \spx{ A_{1,1} - R_2 + R_3 =&\pi^{-1/2}\chi_{loc}(x,y) \eee{\int_0^\infty t^{-ib-1/2} (xy)^{-\a/2} \exp\left( -\frac{|x-y|^2}{4t}\right) \frac{dt}{t}} \\ =& \pi^{-1/2} 2^{2ib+1} \Gamma\eee{ib+1/2} \chi_{loc}(x,y) (xy)^{-\a/2}|x-y|^{-2ib-1}. } Notice that the last expression is one of the terms from \eqref{kernel_eq}. Next, by \eqref{int-conv}, \alx{ |R_2| &\leq C \chi_{glob}(x,y) \, |x-y|^{-1} (xy)^{-\a/2} \int_{\frac{|x-y|^2}{xy}}^\infty t^{1/2} e^{-t/4}\frac{dt}{t} \\ &\leq Cxy(x+y)^{-\a-3},\\ |R_3| &\leq C \chi_{loc}(x,y) x^{-\a} |x-y|^{-1} \int_0^{\frac{|x-y|^2}{xy}} t^{1/2} \frac{dt}{t} \\ & \simeq C \chi_{loc}(x,y) x^{-\a-1} . } Now let us turn to $A_2$. Denote $c_\a = 4^{-(\a-1)/2} \Gamma((\a+1)/4)^{-1}$. Then, by \eqref{Bessel_loc}, \spx{ A_2 = c_\a \int_{xy}^\infty t^{-ib-(\a+1)/2} \exp\left( -\frac{x^2+y^2}{4t}\right)\frac{dt}{t} +R_4 = A_{2,1}+R_4, } where, by \eqref{int-conv}, \alx{|R_4| &= \abs{ \int_{xy}^\infty t^{-ib-1} (xy)^{-(\a-1)/2} \exp\left( -\frac{x^2+y^2}{4t}\right) \left(I_{(\a-1)/2}\left(\frac{xy}{2t}\right) - \Gamma\eee{\frac{\a+1}{4}}^{-1} \left(\frac{xy}{4t}\right)^{(\a-1)/2} \right)\frac{dt}{t}}\\ &\leq C xy \int_{xy}^\infty t^{-(\a+3)/2} \exp\left( -\frac{x^2+y^2}{4t}\right) \frac{dt}{t} \\ &\leq C xy (x^2+y^2)^{-(\a+3)/2} \simeq C xy (x+y)^{-\a-3}.
} Moreover, \spx{ A_{2,1} +R_5 &= c_\a \int_0^\infty t^{-ib-(\a+1)/2} \exp\eee{-\frac{x^2+y^2}{4t}} \frac{dt}{t} \\ &=c_\a 4^{ib+(\a+1)/2} \Gamma\eee{ib+(\a+1)/2} \eee{x^2+y^2}^{-ib-(\a+1)/2} , } where \alx{ \abs{R_5} &= \abs{c_\a \int_0^{xy}t^{-ib-(\a+1)/2} \exp\left( -\frac{x^2+y^2}{4t}\right)\frac{dt}{t}}\\ &\leq C (x^2+y^2)^{-(\a+1)/2} \int_{\frac{x^2+y^2}{xy}}^\infty t^{(\a+1)/2} e^{-t/4} \frac{dt}{t}\\ &\leq C xy(x+y)^{-\a-3}. } } \pr{[Proof of Theorem \ref{lower1} for $\a<0$] Let {$|b|>1$} and $\e\in(0,10^{-1})$ (to be fixed later on). Denote $I=[1, 1+\e]$ and $S=[1+3\e,2]$. Put $f_\e(x) = \e^{-1} \chi_I(x) x^{-\a}$, so that $\norm{f_\e}_{L^1(X)}=1$. If $x\in S$, by Lemma \ref{kernel} and the triangle inequality, \sp{ \label{neg_triangle} \abs{B^{ib} f_\e(x)} & \leq \abs{ c_2(b) } x^{-\a/2} |x-1|^{-1} \\ & + \abs{c_2(b)}\abs{\int_I \left( (xy)^{-\a/2}|x-y|^{-2ib-1} - x^{-\a/2}|x-1|^{-2ib-1} \right) f_\e(y) \, d\nu(y)} \\ & + \abs{ c_1(b)}\abs{\int_I \left( x^2 + y^2 \right)^{-ib-(\a+1)/2} f_\e(y) \, d\nu(y)} \\ & + \abs{c_3(b) }\abs{\int_I R_b(x,y) f_\e(y) \, d\nu(y)}\\ & = \abs{ c_2(b) } x^{-\a/2} |x-1|^{-1} + \Lambda_1 + \Lambda_2 + \Lambda_3. } Observe that for $x\in S$ and $y\in I$ we have $|x-y|\simeq|x-1|$ and $x\simeq y\simeq 1$. By using the Mean Value Theorem for the function $y \mapsto y^{-\a/2} |x-y|^{-2ib-1}$, \sp{ \label{neg_est} \Lambda_1 & \leq C \abs{c_2(b)} \e^{-1} \int_{1}^{1+\e} |b||y-1| |x-1|^{-2} \, dy \leq C \e \abs{bc_2(b)} |x-1|^{-2},\\ \Lambda_2 & \leq C \abs{c_1(b)} \e^{-1} \int_{1}^{1+\e} (x^2+y^2)^{-(1+\a)/2} \, dy \simeq C \abs{c_1(b)}, \\ \Lambda_3 & \leq C \abs{c_3(b)} \e^{-1} \int_{1}^{1+\e} xy(x+y)^{-\a-3} \, dy \simeq C \abs{c_3(b)}. } Fix $|b|\geq 1$ and $\la$ such that $\la > \max(\Lambda_2,\Lambda_3, |b c_2(b)| )$. 
Recall that $x^{-\a/2} \geq 1$ for $x\in S$, so that for $\e$ small enough \sp{\label{la2} \nu\set{x\in S: \abs{c_2(b)}x^{-\a/2}|x-1|^{-1} > 4\la} & \geq \nu\set{x\in S: \abs{c_2(b)}|x-1|^{-1} > 4\la} \\ & = \int_{1+3\e}^{1+\abs{c_2(b)}/(4\la)} x^{\a} \, dx\\ & \geq C {\abs{c_2(b)}/(4\la)}} and \sp{\label{la1} \nu \set{ x\in S: \abs{\Lambda_1} > \la } & \leq \nu \set{ x\in S: C \e \abs{bc_2(b)} |x-1|^{-2} > \la }\\ & \leq \int_{1+3\e}^{1 + C(\e \la^{-1} \abs{bc_2(b)})^{1/2}} x^\a \, dx \\ & \leq C \left( \e \la^{-1} \abs{bc_2(b)}\right)^{1/2} \\ & \leq C \e^{1/2}. } Hence, using \eqref{neg_triangle}--\eqref{la1} and \eqref{gamma_constants} we get \spx{\label{neg_end} \norm{B^{ib}f_\e}_{L^{1,\infty}(X)} \geq & \la \nu \set{ x\in S: \abs{B^{ib}f_\e(x)} > \la } \geq \la \nu\set{x\in S: \abs{c_2(b)}x^{-\a/2}|x-1|^{-1} > 4\la }\\ &- \la \nu \set{x\in S: \abs{\Lambda_1} > \la } - \underbrace{\la \nu \set{x\in S: \abs{\Lambda_2} > \la } }_{=0}- \underbrace{\la \nu \set{x\in S: \abs{\Lambda_3} > \la }}_{=0} \\ & {\geq C |c_2(b)| - C\la \e^{1/2} \geq C |c_2(b)|} \simeq |b|^{1/2} = |b|^{d/2}. } } } {{Turning} to the case $\a>0$ we could also use Lemma \ref{kernel}. In this case, {the summand with $c_1(b)$ would play the leading role. An alternative proof that we shall present here uses an integral representation of the modified Bessel function. The same will be used in the proof of } Theorem \ref{lower2}. It is known that for} $\a>0$ \eq{ \label{bint} I_{(\a-1)/2}(z) = \eee{\Gamma(\a/2)\sqrt{\pi}}^{-1} \left( \frac{z}{2}\right)^{(\a-1)/2} \int_{-1}^1 e^{-zs}(1-s^2)^{\a/2-1} ds, \quad z>0, }} see \cite[Ch. 6]{Watson}. 
Therefore, for $\a>0$, using \eqref{Kb}, \eqref{Kt}, and \eqref{bint} we obtain \sp{\label{kernel2} K_b(&x,y) = { (2\Gamma(-ib))^{-1}\int_0^\infty t^{-ib-1} (xy)^{-(\a-1)/2} I_{(\a-1)/2}\eee{\frac{xy}{2t}} \exp\left(-\frac{x^2+y^2}{4t} \right) \frac{dt}{t}} \\ & {= \eee{2^\a \Gamma(-ib) \Gamma(\a/2)\sqrt{\pi}}^{-1} \int_{-1}^1 \int_0^\infty t^{-ib-(\a+1)/2} \exp\left(-\frac{x^2+y^2+2xys}{4t} \right) \frac{dt}{t} \, (1-s^2)^{\frac{\a}{2}-1} ds }\\ & {= \frac{2^{2ib+1} \Gamma\left( ib+ (\a+1)/2\right)}{\Gamma(-ib) \Gamma(\a/2)\sqrt{\pi}} \int_{-1}^1 \left( x^2+y^2+2xys \right)^{-ib-(\a+1)/2} \eee{1-s^2}^{\a/2-1} \, ds} \\ & = {C_\a} c_1(b) \int_{-1}^{1} \left( x^2+y^2+2xys \right)^{-ib-(\a+1)/2} \eee{1-s^2}^{\a/2-1} ds, } {where $C_\a = \pi^{-1/2} \Gamma(\a/2)^{-1} \Gamma((\a+1)/4)$. } \pr{[Proof of Theorem \ref{lower1} for $\a> 0$] Let {$|b|>1$}, $\e\in(0,10^{-1})$, and $f_\e(x) = x^{-\a} \e^{-1} \chi_{[\e,2\e]}(x)$. Similarly as in \eqref{neg_triangle}, using \eqref{kernel2} and the Mean Value Theorem, for $x>3\e$ we have \sp{ \label{pos_triangle} \abs{B^{-ib}f_\e(x)} \leq & \abs{ \int_\e^{2\e} K_b(x,0) f_\e(y) \, d\nu(y) } + \abs{ \int_\e^{2\e} \left( K_b(x,0) - K_b(x,y) \right) f_\e(y) \, d\nu(y) }\\ \leq & C|c_1(b)| \e^{-1} \left\{ \abs{\int_\e^{2\e}\int_{-1}^{1} \eee{1-s^2}^{\a/2-1} x^{-2ib-(\a+1)}\, ds \, dy }\right. \\ & + \left.\int_\e^{2\e} \int_{-1}^{1} \eee{1-s^2}^{\a/2-1} \abs{ \left( x^2+y^2+2sxy \right)^{-ib-\frac{\a+1}{2}} - x^{-2ib-(\a+1)} } \, ds \, dy \right\}\\ \leq & C|c_1(b)| \eee{ x^{-\a-1} + \e^{-1} \int_{-1}^{1} \eee{1-s^2}^{\a/2-1} \int_\e^{2\e} |b|\abs{y^2+2sxy} x^{-\a-3} \, dy \, ds } \\ \leq & C|c_1(b)| \eee{ x^{-\a-1} + \e \abs{b} x^{-\a-2}}. } Let us fix $|b|>1$ and $\la>|b c_1(b)|$. 
For all $\e$ small enough, we get \spx{ \nu\set{x>3\e: C \abs{c_1(b)} x^{-\a-1} > 2\la } & = \int_{3\e}^{ C (\abs{c_1(b)}/\la)^{1/(1+\a)}} x^{\a} \, dx \geq C \abs{c_1(b)}/\la, } and \spx{ \nu\set{x>3\e: C \abs{bc_1(b)} \e x^{-\a-2} > \la} \leq & \int_0^{ C \left(\abs{bc_1(b)} \e / \la\right)^{1/(\a+2)}} x^{\a} \, dx \\ = &C \left(\abs{bc_1(b)} \e/ \la\right)^{(\a+1)/(\a+2)} \\ \leq & C \e^{(1+\a)/(\a+2)}. } Therefore, by choosing a proper $\e$, we obtain \spx{ \label{pos_end} \norm{B^{ib}f_\e}_{L^{1,\infty}(X)} \geq & \la \nu\set{ x\in X: \abs{B^{ib}f_\e(x)} > \la } \\ \geq & \la \nu \set{ x > 3\e : C|c_1(b)| x^{-\a-1} > 2\la } - \la \nu \set{ x > 3\e : C \abs{bc_1(b)} \e x^{-\a-2} > \la } \\ \geq & C |c_1(b)| - C \la \e^{(\a+1)/(\a+2)} \geq C |c_1(b)| \simeq |b|^{(\a+1)/2} = |b|^{d/2} . } } \pr{[Proof of Theorem \ref{lower2}] Let $\a>0$, $p\in(1,2)$, $|b|>1$, $\e\in(0,10^{-1})$, and $\de >1$, and let ${f}\in{L^p(X)}$ be such that $\mathrm{supp}\, f \subseteq (0,\e)$ and $f\geq 0$. Similarly as in \eqref{pos_triangle}, using \eqref{kernel2} and Corollary \ref{gamma2}, \spx{ \norm{\Bb^{ib}f}&_{L^p(X)}^p \geq \int_\de ^\infty \abs{ \int_X \eee{K_b(x,0)- (K_b(x,0) - K_b(x,y))} f(y)\, d\nu(y)}^p\, d\nu(x) \\ \geq & C \norm{f}_{L^1(X)}^p \int_\de^\infty \abs{K_b(x,0)}^p \, d\nu(x) - C \int_\de^\infty \abs{\int_X (K_b(x,y)-K_b(x,0)) f(y)\, d\nu(y)}^p \, d\nu(x) \\ \geq & C \norm{f}_{L^1(X)}^p|b|^{p(\a+1)/2} \eee{ \int_\delta^{\infty} x^{-p(\a+1)+\a} \, dx - \int_\delta^{\infty} \e^p |b|^p x^{-p(\a+2)+\a} \, dx} \\ \geq & C \norm{f}_{L^1(X)}^p |b|^{p(\a+1)/2} \delta^{(\a+1)(1-p)} \eee{ 1 - \e^p |b|^{p} \delta^{-p}}. } Now we take $\delta=|b|$ and fix $\e$ small enough, independent of $b$, getting \spx{\label{lpnorm} \norm{\Bb^{ib}f}_{L^p(X)} \geq C_{p} {|b|^{\frac{(\a+1)(2-p)}{2p}}} \norm{f}_{L^1(X)} \geq C_{p,\e} {|b|^{{\frac{d}{2}}\frac{2-p}{p}}} \norm{f}_{L^p(X)}. 
} } \section{Appendix - Gamma function estimate}\label{app} \lem{gamma}{ Let $a+bi\in \mathbb{C}$. {For $a\geq 0$ fixed and all $|b|\geq 1$ we have} \eqx{ |\Gamma(a+bi)| \simeq |b|^{a-1/2} \exp\eee{-\frac{\pi |b|}{2}}. } } The result above is known. It is a consequence of Stirling's formula, see \cite[Ch. 6]{Abramowitz_Stegun}. For the convenience of the reader we present a short proof. \pr{ Using the reflection formula \eq{ \label{reflection} \Gamma(1-z)\Gamma(z) = \pi/\sin(\pi z),} and the recursion identity \eq{ \label{recursion} z\Gamma(z)= \Gamma(z+1),} we have that $|b| \abs{\Gamma(ib)}^2 = \abs{\Gamma(ib)\Gamma(1-ib)} = \abs{{\pi}/{\sin(\pi ib)}} \simeq \exp\eee{-\pi |b|}$ for $|b|\geq 1$. Thus, \eq{\label{gamma0} \abs{\Gamma(ib)} \simeq |b|^{-1/2}\exp\eee{-\frac{\pi |b|}{2}}, \qquad |b|\geq 1 .} Denote {$S = \set{z\in \mathbb{C} \ : \ 1\leq \rm{Re}(z) \leq 2, {\abs{\rm{Im}(z)}} \geq 1}$} and define a holomorphic function \eqx{ F(z) = \Gamma(z)z^{-z+1/2}, \quad z\in S.} Now, we claim that $|F(z)|\leq C$ if $z\in \partial S$. This is clear for $z=a\pm i$, $a\in[1,2]$. For $z=1+ib$, $|b|\geq 1$, we use \eqref{recursion} and \eqref{gamma0} getting \spx{ \abs{F(1+ib)} & = \abs{\Gamma\left(1+ib\right)} \abs{(1+ib)^{-1/2-ib}} = \abs{b}\abs{\Gamma\left(ib\right)} (1+b^2)^{-1/4} e^{b\, \mathrm{arctg} b} \\ & \leq C |b|^{1/2} e^{-\pi|b|/2} |b|^{-1/2} e^{b \, \rm{arctg}(b)} \leq C. } Similarly we show boundedness of $F$ for $z=2+bi$, $|b|\geq 1$. Observe that $\abs{F(z)} \leq |\Gamma(z)| |z|^{|-z+1/2|} \leq C e^{c|z|^2} $ for $z\in S$. Hence, applying the Phragm\'{e}n-Lindel\"{o}f principle, we obtain that $|F(z)| \leq C$ for $z \in S$. 
Therefore, for a fixed $a\in[1,2]$ and $|b|\geq 1$ we have \sp{\label{up_up} \abs{\Gamma(a+bi)} \leq C \abs{(a+ bi)^{a-1/2+bi}} = C\eee{a^2+b^2}^{(2a-1)/4} \cdot e^{-b \, \mathrm{arctg}(b/a)} \simeq C|b|^{a-1/2} e^{-\pi |b|/2}. } This is the desired upper estimate for $a\in[1,2]$. We extend it to all $a\in[0,\infty)$ by using \eqref{recursion}. Then, by \eqref{reflection}, we get the lower estimate for $a\in[0,1]$, and extend it to $a\in [0,\infty)$ using \eqref{recursion} once more. } \cor{gamma2}{ For fixed $a_1,a_2 \geq 0$ and $|b|\geq 1$ we have \eqx{ \abs{\frac{\Gamma(a_1+bi)}{\Gamma(a_2+bi)}}\simeq |b|^{a_1-a_2}. } } This follows immediately from Lemma \ref{gamma}, since the factors $\exp(-\pi|b|/2)$ cancel in the quotient. {\bf Acknowledgments:} The authors would like to thank Jacek Dziuba\'nski, Alessio Martini, Adam Nowak, B\l a\.zej Wr\'obel, and the referees for their helpful comments and suggestions. \bibliographystyle{amsplain} \def$'${$'$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} Over the past few years, generating text from images and videos has gained a lot of attention in the Computer Vision and Natural Language Processing communities and several related tasks have been proposed, such as image labeling, image and video description and visual question answering. In particular, prominent results have been achieved in image description with various deep neural network architectures, e.g. \cite{lin}, \cite{xu}, \cite{karpathy}, \cite{vinyals}. However, the need of generating more narrative texts from images which may reflect experiences, rather than just listing objects and their attributes, has given rise to tasks such as visual storytelling \cite{huang2016}. This task is about generating a story from a sequence of images. Figure~\ref{fig:sisvsdii} shows the difference between descriptions of images in isolation and stories for images in sequence. In this paper, we describe the deep neural network architecture we used for the Visual Storytelling Challenge 2018. The problem to solve in this challenge can be stated as follows: \textit{Given a sequence of 5 images, the system should output a story related to the content and events in the images.} Our architecture is an extension of the image description architecture presented by~\cite{vinyals}. We submitted the generated stories to the internal track of the Visual Storytelling (VIST) Challenge, which were evaluated using the METEOR metric~\cite{banerjee} as well as human ratings. \begin{figure}[tb] \centering \includegraphics[trim={0 .2cm 0 0},clip,width=.5\textwidth]{sisvsdii} \caption{Examples of stories for images in sequence (above) and image descriptions in isolation (below) from the VIST dataset~\cite{huang2016}.} \label{fig:sisvsdii} \end{figure} \section{Previous work} The work by~\cite{parkkim} presented probably the first system for generating stories from an album of images. 
This early approach involved the use of the NYC and Disney datasets mined from blog posts by the authors. The visual storytelling task and dataset were introduced by~\cite{huang2016}. This was the first dataset specifically created for visual storytelling. They proposed a baseline approach which consists of a sequence-to-sequence model, where the encoder takes the sequence of images as input and the decoder takes the last state of the encoder as its first state to generate the story. Since this model produces stories with generic phrases, they used decode-time heuristics to improve the generated stories. \cite{licheng} presented a multi-task model that performs album summarization and story generation. Even though the model achieved state-of-the-art scores on the VIST dataset with different metrics, some of the sample stories presented in the paper are incoherent. \section{Model} Our model extends the image description model by~\cite{vinyals}, which consists of an encoder-decoder architecture. The encoder is a Convolutional Neural Network (CNN) and the decoder is a Long Short-Term Memory (LSTM) network, as presented in Figure \ref{fig:im2txt}. The image is passed through the encoder, which produces an image representation; the decoder then uses this representation to generate the description word by word. In the following, we describe how we extended this model for the visual storytelling task. \begin{figure}[tb] \centering \includegraphics[width=.5\textwidth]{im2txt} \caption{Show and Tell architecture. Image reproduced from~\cite{vinyals}.} \label{fig:im2txt} \end{figure} \subsection{Encoder} The model's first component is a Recurrent Neural Network (RNN), more precisely an LSTM, that summarizes the sequence of images. At every timestep $t$ the network takes as input an image $I_i$, $i\in\{1,2,3,4,5\}$, from the sequence. 
At time $t=5$, the LSTM has encoded the 5 images and provides the sequence's context through its last hidden state, denoted by $h_e^{(t)}$. The representation of the images was obtained through Inception v3. \subsection{Decoder} The decoder is a second LSTM network that uses the information obtained from the encoder to generate the sequence's story. The first input $x_0$ to the decoder is the image for which the text is being generated. The last hidden state from the encoder $h_e^{(t)}$ is used to initialize the first hidden state of the decoder $h_d^{(0)}$. With this strategy, we provide the decoder with the context of the whole sequence and the content of the current image (i.e. global and local information) to generate the corresponding text that will contribute to the overall story. Our model contains five independent decoders, one for each image in the sequence. All five decoders use the last hidden state of the encoder (i.e. the context) as their first hidden state and take the corresponding image embedding as their first input. In this way, the first decoder generates the sequence of words for the first image in the sequence, the second decoder for the second image in the sequence, and so on. This allows each decoder to learn a specific language model for each position of the sequence. For instance, the first decoder will learn the opening sentences of the story while the last decoder learns the closing sentences. The word embeddings were computed using word2vec~\cite{mikolov}. \begin{figure*}[h!] \centering \includegraphics[trim={0 2cm 0 0},clip, width=\textwidth]{ourmodel} \caption{Proposed sequence-to-sequence architecture.} \label{fig:model} \end{figure*} Our proposed architecture is presented in Figure \ref{fig:model}. For each image in the sequence, we obtain its representation $\{e(I_1),...,e(I_5)\}$ using Inception v3. The encoder takes the images in order, one at every timestep $t$. 
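For intuition, the data flow described in this section (shared encoder, context vector, and five position-specific decoders whose outputs are concatenated) can be sketched in plain Python. This is a toy structural illustration only: a simplified recurrent update stands in for the LSTM cells, placeholder tokens stand in for a real vocabulary, and the dimensions are arbitrary; it is not the trained model.

```python
import math
import random

random.seed(0)
DIM = 8  # toy feature size; the paper uses Inception v3 image features and 400-d word2vec embeddings

def rnn_step(h, x):
    """Simplified recurrent update standing in for an LSTM cell."""
    return [math.tanh(0.5 * hi + 0.5 * xi) for hi, xi in zip(h, x)]

def encode(image_embs):
    """Run the encoder over the image embeddings; the last hidden state is the context Z."""
    h = [0.0] * DIM
    for e in image_embs:  # one image per timestep t = 1..5
        h = rnn_step(h, e)
    return h

def decode(context, image_emb, position, n_words=3):
    """Decoder for one position: initialized with Z, fed its own image embedding first."""
    h = rnn_step(context, image_emb)
    words = []
    for step in range(n_words):
        words.append(f"w{position}_{step}")  # placeholder for argmax over a vocabulary
        h = rnn_step(h, h)
    return words

images = [[random.random() for _ in range(DIM)] for _ in range(5)]
Z = encode(images)
# five independent decoders; the final story is the concatenation of their outputs
story = " ".join(" ".join(decode(Z, e, i)) for i, e in enumerate(images))
```

Here the five decoders share nothing but the context vector $Z$, mirroring the design choice that lets each decoder specialize in one position of the story.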
At time $t=5$, we obtain the context vector through $h_e^{(t)}$ (represented by $\mathbf{Z}$). This vector is used to initialize each decoder's hidden state while the first input to each decoder is its corresponding image embedding $e(I_i)$. Each decoder generates a sequence of words $\{p_1,...,p_{n}\}$ for each image in the sequence. The final story is the concatenation of the output of the 5 decoders. \section{Evaluation} \subsection{Methodology} The generated stories were evaluated using both automatic metrics and human ratings. The automatic evaluation was performed by computing the METEOR metric~\cite{banerjee} on a public test set and a hidden test set. The former is a set of $1,938$ image sequences and stories taken from the test set of the VIST dataset~\cite{huang2016}. The latter consists of new stories generated by humans from a subset of image sequences of the public test set. Human ratings of the stories were collected from crowd workers in Amazon Mechanical Turk. Only 200 stories were selected from the hidden test set for this evaluation. The crowd workers evaluated each story on a Likert scale with respect to 6 aspects: \textbf{a)} the story is focused, \textbf{b)} the story has good structure and coherence, \textbf{c)} would you share this story, \textbf{d)} do you think this story was written by a human, \textbf{e)} the story is visually grounded and \textbf{f)} the story is detailed. The crowd workers were also asked to evaluate stories generated by humans for comparison purposes. \subsection{Results} Table \ref{table:meteor} shows the METEOR scores by our model in the public and hidden test set of the Visual Storytelling Challenge 2018. Table \ref{table:human} presents results of the human evaluation. Our model achieved competitive METEOR scores in both test sets and performed well in the human evaluation. \begin{table}[h!] 
\begin{center} \begin{tabular}{ | c | c | } \hline \bf Public Test Set & \bf Hidden Test Set \\ \hline .3088 & .3100 \\ \hline \end{tabular} \caption{Automatic evaluation of stories generated by our visual storyteller using the METEOR metric.} \label{table:meteor} \end{center} \end{table} \begin{table*}[t] \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c |} \hline & \bf a) & \bf b) & \bf c) & \bf d) & \bf e) & \bf f) & \bf Total score \\ \hline \bf Ours & 3.347 & 3.278 & 2.871 & 3.222 & 2.886 & 2.893 & 18.498 \\ \hline \bf Human & 4.025 & 3.975 & 3.772 & 4.003 & 3.965 & 3.857 & 23.596 \\ \hline \end{tabular} \caption{Human evaluation of stories generated by our visual storyteller, compared to stories generated by humans.} \label{table:human} \end{center} \end{table*} \begin{table*}[t] \begin{center} \begin{tabular}{| c | c | c | c | c | c | c | c |} \hline & \bf BLEU-1 & \bf BLEU-2 & \bf BLEU-3 & \bf BLEU-4 & \bf METEOR & \bf ROUGE & \bf CIDEr \\ \hline \bf Huang et al. & - & - & - & - & 31.4 & - & -\\ \hline \bf Yu et al. & - & - & 21.0 & - & 34.1 & \bf 29.5 & \bf 7.5\\ \hline \bf Ours & 60.1 & 36.5 & \bf 21.1 & 12.7 & \bf 34.4 & 29.2 & 5.1\\ \hline \end{tabular} \caption{Automatic evaluation on the VIST dataset. A comparison between the baseline~\cite{huang2016}, \cite{licheng} and ours.} \label{table:test} \end{center} \end{table*} \begin{figure*}[hb!] \centering \includegraphics[trim={0 4.5cm 0 0},clip,width=\textwidth]{example2} \includegraphics[trim={0 4cm 0 0},clip, width=\textwidth]{example4} \includegraphics[trim={0 7cm 0 0},clip, width=\textwidth]{example3} \caption{Sample stories generated by our visual storyteller, compared to stories generated by humans.} \label{fig:mesh2} \end{figure*} An evaluation over the complete VIST test set was also performed and the results are shown in Table \ref{table:test}\footnote{\footnotesize We used the code available at \href{https://github.com/lichengunc/vist_eval}{github.com/lichengunc}}. 
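As a point of reference for the BLEU-1 column, the core of that metric is a clipped unigram precision. A minimal single-sentence, single-reference version (without the brevity penalty and corpus-level aggregation of the official evaluation scripts; the example sentences are made up) can be sketched as:

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Clipped unigram precision, the core of BLEU-1 (sentence-level,
    single reference, no brevity penalty)."""
    cand = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand) if cand else 0.0

cand = "the family went to the park"
ref = "the family spent the day at the park"
score = unigram_precision(cand, ref)
```

For the toy pair above, 4 of the 6 candidate tokens are matched in the reference, giving a precision of about 0.67.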
\newpage Our model obtained the highest scores with the METEOR and BLEU-3 metrics but lagged behind the model by~\cite{licheng} with the ROUGE and CIDEr metrics. Figure \ref{fig:mesh2} shows some sample stories generated by our model from the public test set of the Visual Storytelling Challenge 2018. Although some of the generated stories are grammatically correct and coherent, they tend to contain repetitive phrases or ideas. We can also observe that some stories are not closely related to the actual content of the images or include generic phrases like \textit{This is a picture of a store}. These limitations of our model are reflected in the ratings of the \textbf{visually grounded} and \textbf{detailed} aspects of the human evaluation. \section{Conclusions and Future Work} Our visual storyteller incorporates a context encoder and multiple independent decoders into the image description architecture by \cite{vinyals} to generate stories from image sequences. Having an independent decoder for each position of the image sequence allowed our visual storyteller to build more specific language models, using the context vector as its first state and the image embedding as its first input. In the internal track of the Visual Storytelling Challenge 2018, we obtained competitive METEOR scores in both the public and hidden test sets and performed well in the human evaluation. In the future, we plan to explore the use of an attention mechanism or a bidirectional LSTM to cope with repetitive phrases within the same story. \section{Acknowledgments} This research is supported by the PAPIIT-UNAM research grant IA104016. Diana Gonz\'alez Rico is supported by CONACYT.
\section{Introduction} Massive amounts of medical knowledge are encapsulated in the form of noisy, unstructured text in sources such as social media and clinical notes. As the volume of electronically available unstructured text continues to surge, the knowledge contained within these resources is becoming increasingly useful for solving many research problems. Over the recent years, research on utilizing data garnered from noisy text sources such as social media has seen steady growth in the broader health domain. Research in the sub-domain of public health, for example, has focused on devising methods that can make population-level estimates from social media data \cite{brownstein09, paul11,sinnenberg17} for tasks such as virus/flu outbreak surveillance \cite{broniatowski13,kagashe17}, pharmacovigilance \cite{sarker15,cocos17} and drug use/abuse \cite{sarker16tox,kazemi17}, to name a few. Similarly, unstructured data (\emph{i.e.}, free text) from clinical notes has been used for notable tasks such as predicting suicide \cite{poulin14}, identifying risk factors for targeted populations \cite{khalifa15} and identifying relations between known clinical entities \cite{luo17}. The adoption of noisy text sources in health research has been driven largely by the advances in technology and data science methods. However, the use of such data is not without its challenges from the perspective of natural language processing (NLP). One key challenge is the inevitable presence of misspellings in human authored texts \cite{keselman12}. The problem is exacerbated for medical text as domain-specific terms are typically difficult to spell \cite{zhou15}. In the recent past, studies have attempted to apply preprocessing techniques to attempt to correct misspellings prior to the application of downstream NLP and machine learning methods such as information extraction and classification \cite{lai15}. 
Such preprocessing methods for spelling correction are, however, only useful once the required data have already been collected, and they do not aid in the initial step of data collection or concept detection during information retrieval/searching. In this paper, we describe a data-centric system that addresses this issue by generating potential common misspellings given the original terms. We describe the development and evaluation of the system on text from social media, specifically Twitter, which is one of the noisiest sources of free text. The first step in incorporating social media data for operational and research tasks involves designing data extraction/collection strategies, which are typically reliant on keyword-based searches (\emph{e.g.}, \cite{alvaro15}). Twitter is a commonly used resource for social media based NLP research within the health domain, and it provides a public Application Programming Interface (API) that can be queried using predefined keywords. However, as mentioned, complex medical terms, such as medication and disease names, are often misspelled by users on Twitter \cite{karimi15}. For example, when preparing data for this study, we found 46 common misspellings for the medication name \emph{clonazepam} including \emph{klonazapam}, \emph{colnazepam} and \emph{clonazpam}. The presence of such misspellings is problematic from the perspective of data collection and concept detection, as all the possible misspellings are not known \emph{a priori}. Using only the correctly spelled keywords leads to loss of potentially important data, particularly for terms or concepts with difficult spellings and morphology. Some misspellings also occur more frequently than others, meaning that they are more important for data collection and concept detection. Despite the importance of the task of automatic misspelling generation, it has received little research attention from the NLP community. This task can be viewed as the opposite of spelling correction. 
In spelling correction, the goal is to detect out-of-vocabulary terms and map them to a finite set of in-vocabulary terms \cite{baldwin15}. In contrast to misspelling generation, automatic spelling correction methods have been studied for decades, with early approaches employing the noisy channel framework \cite{church91,brill00}, which require gold-standard lexicons that can be used to learn transformations (generally character edits) required for correcting misspellings. Two categories of approaches have been employed in the recent past for performing spelling correction---lexicon-based and language model-based \cite{han13}. The former category of approaches are rule-based and require the building of extensive lookup tables which map out-of-vocabulary terms to in-vocabulary terms. Such approaches are useful if the target vocabularies are small and the misspellings are unambiguous. Building extensive lookup tables, however, is infeasible for most tasks as the total number of possible misspellings can be very large and the distributions of the misspellings are not known beforehand. For the same reason, misspelling generation from manually built lexicons is not feasible in most cases. Unlike lexicon-based approaches, language model-based approaches are capable of disambiguating potential in-vocabulary candidates by deriving likelihood measures based on the context of the out-of-vocabulary terms in question. Unfortunately, language model-based spelling correction approaches, on their own, have not been very successful \cite{sarker17lexnorm}, and recent, high-performance approaches have been hybrid in nature---combining rules with language models learned from noisy data \cite{berend15}. 
While it is not possible to reverse and apply the successful spelling correction methods for the task of misspelling generation, our approach is inspired by the same principles---we combine rules with a distributed language model learned from a large, unlabeled, domain-specific dataset to accomplish the task. Some studies have attempted to resolve the misspelling generation problem by manually curating spelling variants or synthetically generating them \cite{sloane15}. A na\"ive approach to addressing the problem is to generate all variants that are lexically similar to the original term. Lexical similarity/dissimilarity between two terms can be measured in terms of edit distance---the minimum number of operations required to transform one term to the other. Thus, the na\"ive approach would generate all character combinations that are within a given edit distance threshold (\emph{e.g.}, 2). This results in the generation of too many character combinations, even at very low thresholds, particularly if the original keyword is long. Incorporating variants that do not represent misspellings for data collection may result in the retrieval of excess noise or face other operational obstacles such as API limits. Twitter, for example, at the time of publication, has a limit of 400 keywords per API key. Due to these reasons, it is important to constrain the number of variants generated to highly precise ones. The current state-of-the-art system by Pimpalkhute \emph{et al.} \cite{pimpalkhute14} attempts to address these problems by incorporating the notion of phonetic similarity. In their approach, the authors limit the number of variants to those that are phonetically close to the original keyword. The authors also use the Google Search API to identify the most frequent misspellings occurring in web search queries. While the approach does significantly limit the number of variants, it faces several drawbacks. 
First, the variants are limited to only edit distance 1, resulting in the loss of many genuine misspellings that are farther from the original keyword in terms of edit distance. Second, there may be some terms that fit the edit and phonetic distance constraints, but are semantically very dissimilar. For example, for the medication name \emph{activase}, one potential misspelling that fits all the abovementioned criteria is \emph{activate}. Incorporating the term \emph{activate} in data collection or searching results in very noisy data, as this variant occurs at a much higher frequency in free text compared to the medication name and its genuine misspellings. Third, the proposed approach is semi-automatic, and several steps, including manual ones, are required to generate the spelling variants for a single keyword. This makes the process of variant generation very cumbersome when hundreds of keywords need to be processed. Fourth, the approach is dependent on external sources and APIs, such as the Google Search API, which have their own limits. On top of these drawbacks, the number of irrelevant misspellings generated per keyword by this approach is still quite high. We address the abovementioned drawbacks of current approaches by proposing a simple recursive method for generating small sets of highly precise misspellings for restricted-domain keywords. The proposed method relies on a distributed word vector model \cite{mikolov13} learned from large domain-specific text to first identify a set of semantically similar candidate misspellings for a given keyword. Semantic similarity is computed using the \emph{cosine similarity} measure over the vector representations of the terms. Next, a lexical similarity filter using \emph{Levenshtein ratio} is applied to exclude lexically dissimilar terms beyond a given threshold. These two steps are applied recursively to each detected misspelling for a keyword until no further misspellings are found. 
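The two-step filter and the recursion described above can be sketched as follows. The dense-vector model is mocked here with a small hand-written neighbor table (in the real system, neighbors are retrieved by cosine similarity over the word2vec model), and the 0.8 threshold and all variant strings are illustrative:

```python
def levenshtein(s1, s2):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (c1 != c2)))
        prev = cur
    return prev[-1]

def lev_ratio(s1, s2):
    """Levenshtein ratio: (|s1| + |s2| - d) / (|s1| + |s2|)."""
    total = len(s1) + len(s2)
    return (total - levenshtein(s1, s2)) / total

# Mock of the semantic-similarity step: word -> nearest terms in vector space.
NEIGHBORS = {
    "klonopin": ["klonipin", "ativan", "clonazepam"],
    "klonipin": ["klonopin", "klonpin"],
    "klonpin": ["klonipin"],
}

def generate_variants(keyword, threshold=0.8, seen=None):
    """Recursively collect semantic neighbors that pass the lexical filter."""
    seen = {keyword} if seen is None else seen
    for cand in NEIGHBORS.get(keyword, []):
        if cand not in seen and lev_ratio(keyword, cand) >= threshold:
            seen.add(cand)
            generate_variants(cand, threshold, seen)  # recurse on each new variant
    return seen - {keyword}

variants = generate_variants("klonopin")
```

Note how the lexical filter admits \emph{klonipin} and, recursively, \emph{klonpin}, while rejecting the semantically similar but lexically distant \emph{ativan} and \emph{clonazepam}.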
While the default system is unsupervised, we show that supervision can be incorporated into the system through a weight optimization method for intra-word character sequence similarities, allowing for problem-specific tuning of the system. We evaluated our method intrinsically on the problem of medication name misspelling generation and compared the performances of two settings of the system to the phonetic filter based state-of-the-art. On a manually prepared evaluation set, our approach obtained best $F_1$-$score$ of $0.69$ and $F_{\frac{1}{4}}$-$score$ of $0.78$, significantly outperforming the competing system. We performed extrinsic evaluation by comparing the data collection rates with and without spelling variants for a set of cancer terms, and observed an increase of over 67\% when the misspellings are included. We have made the system and the source code publicly available for the research community.\footnote{The source code is available at \url{https://bitbucket.org/asarker/qmisspell}.} \section{Materials and methods} Our automatic misspelling generation approach can be viewed as a two-step process. In the first step, word or phrase level dense vectors (embeddings) are generated from a large unlabeled dataset. These embeddings capture semantic relatedness between terms occurring in the unlabeled text. In the second step, the embedding model is used as input in a recursive algorithm that selectively filters out unlikely misspelling candidates. We now discuss these two steps in detail. \subsection{Dense vector language model} Dense word and phrase vector models have become very popular for NLP research in recent years due to their ability to very accurately encode semantic information. To learn such language models, large unlabeled data sets are required, of which there is an abundance in social media and over the Internet in general. The word2vec algorithm \cite{mikolov13} is among the most common approaches in generating such dense vector models. 
This algorithm learns vector representations for terms that occur more frequently than a given threshold (\textit{e.g.}, 10) by training a shallow neural network. Terms that occur in similar contexts result in similar vectors, and, thus, are close to each other in semantic space. Intuitively, since common misspellings of terms and their original forms appear in similar contexts, their vectors are likely to be similar as well. Our misspelling generation process requires a word vector model generated from the relevant domain-specific text, such that the model contains sufficient numbers of misspellings and their in-vocabulary forms. We used our own publicly released model from past work \cite{sarker2017dib}, which was generated from medication-related chatter from social media.\footnote{The model is available at: \url{https://data.mendeley.com/datasets/dwr4xn8kcv/3}. We used the 2.2 GB n-gram model for the experiments reported in this paper.} Each word or phrase in the model is represented via a 400 dimensional vector. The model was learned using a context window size of 5 and only n-grams occurring more than 40 times were included. Figure \ref{densevectorexample} shows the 20 most similar terms to \emph{klonopin} in our vector model. The cosine similarity measure is used to find the closest vectors. The figure also shows that three misspellings for the medication name are among the 20 most similar terms. Interestingly, other medication names appearing closest to \textit{klonopin} (\textit{e.g.}, \textit{ativan} and \textit{xanax}) are typically used for similar purposes (\textit{i.e.}, they are all sedatives). The generic name of the medication (\textit{clonazepam}) also appears close in vector space. The figure verifies that the vector model captures semantic similarities, including misspellings of terms. 
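A minimal sketch of this nearest-neighbour lookup, using plain cosine similarity over a toy vector model, is given below; the terms and the three-dimensional vectors are made up for illustration and stand in for the released 400-dimensional model.

```python
import numpy as np

def most_similar(model, term, topn):
    """Return the topn terms closest to `term` by cosine similarity.
    `model` maps terms to dense vectors (a stand-in for the word2vec
    n-gram model used in the paper)."""
    v = model[term]
    v = v / np.linalg.norm(v)
    scores = [(other, float(np.dot(v, u / np.linalg.norm(u))))
              for other, u in model.items() if other != term]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:topn]

# Toy model: a misspelling gets a vector close to the original term.
toy = {
    "klonopin": np.array([0.90, 0.10, 0.00]),
    "klonipin": np.array([0.88, 0.12, 0.01]),  # misspelling
    "ativan":   np.array([0.60, 0.50, 0.20]),  # related medication
    "weather":  np.array([0.00, 0.10, 0.95]),  # unrelated term
}
print(most_similar(toy, "klonopin", 2))
```

On real data the same lookup is what produces neighbour lists like the one in Figure \ref{densevectorexample}, in which genuine misspellings and same-class medications dominate.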
\begin{figure}[htbp] \centering \scalebox{0.35} {\includegraphics{figure1new}} \caption{The 20 most similar terms for \textit{klonopin} in our dense vector model and their cosine similarities. Misspellings of the original term are underlined.} \label{densevectorexample} \end{figure} \subsection{Gold standard preparation} We prepared a gold standard consisting of 20 medication names, including trade and generic names. We chose over-the-counter and prescription medications that have high sales volumes (\emph{e.g.}, \emph{ibuprofen} and \emph{adderall}) so that their correct spellings and misspellings are likely to occur with high frequency in social media data. We also handpicked medication names that appeared to have interesting properties, which could represent difficult cases for our variant generator. These included medications that have semantic and lexical similarities with other medication names (\emph{e.g.}, \emph{paroxetine} and \emph{fluoxetine}). Preparing a gold standard dataset for this task is not trivial. Although, as mentioned, this task has similarities to spelling correction, generating gold standard data is much more complicated than for the latter. For spelling correction, the correct spelling for a misspelled term is already known, and preparing a dataset simply requires inspecting a set of noisy text and manually mapping the misspellings to their in-vocabulary counterparts \cite{baldwin15}. For the task of variant generation, we have to start with the correct spellings and identify all the possible misspellings from unlabeled data. It is impractical to manually search through millions of social media posts to identify the relevant misspellings and compute their occurrence frequencies.
An additional constraint is that the gold standard should only include terms that occur relatively frequently, as including infrequently occurring terms will result in too many variants, most of which do not add value to the downstream tasks of data collection and extraction from unlabeled text. We first performed \textit{fuzzy} searching on an unlabeled dataset consisting of approximately 10 million tweets to identify candidate misspellings for the 20 chosen terms. We included all words within an edit distance of $min(6,len(keyword)-2)$ from a given medication name. Given two terms \textit{a} and \textit{b}, the edit distance is the minimum number of edit operations that transforms \textit{a} into \textit{b}; these operations are insertion, substitution and deletion of characters. The $min$ function was used because an edit distance of 6 retrieved too many candidates for short keywords such as \emph{xanax}. We assumed that frequent misspellings would rarely, if ever, be more than 6 edits away. To ensure fair comparison, we searched the vocabulary of the previously mentioned language model to generate the candidate misspellings \cite{sarker2017dib} and excluded terms that are absent from the vocabulary, based on the assumption that they do not occur frequently enough to be useful. Although it was an extremely time-consuming process, we manually labeled each term in the resulting set to indicate whether or not it was a misspelling. This process resulted in 574 true misspellings for the 20 medication names (28.7 per term on average). Figure \ref{distribs} shows the distribution of misspellings per medication. \begin{figure}[htbp] \centering \scalebox{0.70} {\includegraphics{keyword_distrib}} \caption{Distribution of misspellings for the 20 medication keywords.} \label{distribs} \end{figure} We randomly chose 10 keywords from the set for analysis and algorithm development. We left the remaining 10 for evaluation.
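The fuzzy candidate search just described can be sketched as follows; `edit_distance` is the standard unit-cost dynamic-programming Levenshtein distance, and the cutoff mirrors the $min(6, len(keyword)-2)$ rule (the toy vocabulary is illustrative).

```python
def edit_distance(a, b):
    """Unit-cost Levenshtein distance: insertions, deletions and
    substitutions each count as one edit."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_candidates(keyword, vocabulary):
    """Keep vocabulary terms within min(6, len(keyword) - 2) edits."""
    cutoff = min(6, len(keyword) - 2)
    return [w for w in vocabulary if edit_distance(keyword, w) <= cutoff]

vocab = ["xanax", "zanax", "xanaxs", "paroxetine", "weather"]
print(fuzzy_candidates("xanax", vocab))   # → ['xanax', 'zanax', 'xanaxs']
```

For a 5-letter keyword such as \emph{xanax} the cutoff becomes 3, which is how the rule keeps the candidate pool manageable for short names.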
We first analyzed the distribution of misspelling frequencies against edit distance. Figure \ref{fg3anwe} presents the distribution for the 10 training set medications. The figure illustrates that misspelling frequency decays approximately exponentially with Levenshtein distance, with no misspellings beyond distance 5, validating our assumption that misspellings beyond an edit distance of 6 are infrequent. \begin{figure}[htbp] \centering \scalebox{0.30} {\includegraphics{newfigure3a}} \caption{Distribution of misspelling frequencies against Levenshtein distance, illustrating that the number of misspellings decreases gradually, with no misspelling occurring beyond a Levenshtein distance of 5 (in our dataset).} \label{fg3anwe} \end{figure} We further inspected the 10 training keywords and their misspellings to analyze how the edit distances were distributed within predefined character windows between the keyword-misspelling pairs. Given a keyword (\emph{e.g.}, \emph{paroxetine}), we wanted to analyze how the lexical similarities were distributed for character windows of length $n$ between the true misspellings of the keyword and non-misspellings that are also within a given edit distance range. To perform this investigation, we passed $n$-character ($2<n\leqslant\frac{len(keyword)}{2}$) windows through each keyword and a corresponding misspelling and computed the edit distance for each window position. We performed the same for non-misspellings. Our intuition was that Levenshtein distances may be distributed differently across character sequences between the true misspellings and the false positives that passed the initial lexical similarity filter. We observed that, compared to the true misspellings, the non-misspellings tend to have low average Levenshtein distances at higher relative character positions.
This is because, for this specific task, medications belonging to the same class are often spelled identically near the end, reflecting the class of the medication (\emph{e.g.}, \emph{diazepam} and \emph{clonazepam}, \emph{amoxicillin} and \emph{penicillin}). Thus, the analysis suggested that for detecting true misspellings (and hence increasing precision), rewarding lexical similarities among sequences at lower relative positions might be beneficial. Figure \ref{fg4new} shows the distribution of average Levenshtein distances at different relative positions for the true misspellings and false positives, along with their normalized ratios. We employed a method customized for this property, as discussed later in the paper. \begin{figure}[htbp] \centering \scalebox{0.25} {\includegraphics{figure3bnew}} \caption{Distributions of average Levenshtein distances between the true misspellings and the false positives. The distribution of the ratios of the two sets is also shown.} \label{fg4new} \end{figure} \subsection{Generating spelling variants} Algorithm 1 outlines the default system implementation, using a stack as the data structure. Given a keyword, the algorithm identifies the $ssl$ semantically closest terms by computing cosine similarities over the vectorized vocabulary. These terms are then passed to a function that computes the Levenshtein ratio between the original keyword and each semantically similar term. The Levenshtein ratio is computed as $1 - \frac{ldist}{max(len(a),len(b))}$, where $ldist$ is the Levenshtein distance\footnote{The Levenshtein distance computation in this case considers substitution to be two edits, while insertion and deletion are considered to be one.} and $max(len(a),len(b))$ is the length of the longer of the keyword and the candidate misspelling.
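The ratio just defined, together with the stack-based recursive search it drives (Algorithm 1), can be sketched in Python. The semantic-neighbour function is passed in as a callable, and the small neighbourhood graph standing in for the word2vec model is illustrative only.

```python
def lev_dist_sub2(a, b):
    """Levenshtein distance where insertion/deletion cost 1 edit and a
    substitution costs 2, per the footnote in the text."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                            # deletion
                           cur[j - 1] + 1,                         # insertion
                           prev[j - 1] + (0 if ca == cb else 2)))  # substitution
        prev = cur
    return prev[-1]

def lev_ratio(a, b):
    """1 - ldist / max(len(a), len(b)), as defined above."""
    return 1.0 - lev_dist_sub2(a, b) / max(len(a), len(b))

def gen_vars(s, mostsimilar, ratio, ssl, lt):
    """Stack-based recursive search in the spirit of Algorithm 1:
    candidates are always scored against the ORIGINAL seed s, which is
    what guarantees convergence."""
    variants = {}      # discovered variant -> similarity score
    tte = [s]          # stack of terms to explore
    aet = set()        # already-explored terms
    while tte:
        t = tte.pop()
        aet.add(t)
        for sl in mostsimilar(t, ssl):
            lss = ratio(sl, s)
            if lss >= lt:
                variants[sl] = lss
                if sl not in aet and sl not in tte:
                    tte.append(sl)   # explore the new variant too
    return variants

# Toy semantic neighbourhood standing in for the word2vec model.
neighbours = {"klonopin": ["klonipin", "ativan"],
              "klonipin": ["klonopins", "klonopin"],
              "klonopins": ["klonipin"]}
toy_mostsimilar = lambda t, ssl: neighbours.get(t, [])[:ssl]

found = gen_vars("klonopin", toy_mostsimilar, lev_ratio, 10, 0.75)
print(sorted(found))   # → ['klonipin', 'klonopin', 'klonopins']
```

Note that one substituted character in the 8-letter \emph{klonipin} gives a ratio of exactly $1 - 2/8 = 0.75$, right at the threshold used later in the paper, while \emph{ativan} is rejected despite being a semantic neighbour.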
Each candidate is added to the list of possible misspellings if the Levenshtein ratio is above a given threshold ($lt$), and, for each identified misspelling, the same procedure is executed recursively. The Levenshtein ratio for a potential misspelling is always computed against the original keyword, ensuring that the algorithm converges once all the terms meeting the threshold have been discovered. \begin{algorithm}[htbp] \DontPrintSemicolon \SetAlgoLined \SetKwProg{Fn}{Function}{}{} \SetKwInOut{Input}{Inputs}\SetKwInOut{Output}{Output} \Input{\texttt{s} :- the seed keyword/term\\ \texttt{w} :- the word vector model \\ \texttt{ssl} :- integer specifying the limit of semantically close terms to search; typically between 100 and 10000 \\ \texttt{lt} :- the Levenshtein ratio threshold; range $[0,1]$ } \Output{A set of spelling variants for \texttt{s}} \BlankLine \Fn{gen\_vars (\texttt{s},\texttt{w},\texttt{ssl},\texttt{lt})}{ $\texttt{vars}\leftarrow \{\}$ \tcp*{empty dictionary} $\texttt{tte}\leftarrow [s]$ \tcp*{stack holding s} $\texttt{aet}\leftarrow [ ]$ \tcp*{empty array} \BlankLine \While{\texttt{tte} is not \texttt{empty()}}{ $\texttt{t}\leftarrow \texttt{tte.pop()}$\; $\texttt{aet.push(t)}$\; $\texttt{sls}\leftarrow \texttt{mostsimilar(w,t,ssl)}$\; \ForEach{\texttt{sl} in \texttt{sls}}{ $\texttt{lss}\leftarrow \texttt{lev\_ratio(sl,s)}$\; \If{$\texttt{lss}\eqslantgtr \texttt{lt}$}{ $\texttt{vars[s]} \stackrel{+}{=} \texttt{sl}$\; \If{$! \texttt{sl}$ in $\texttt{aet}$ and $! \texttt{sl}$ in $\texttt{tte}$ }{ \texttt{tte.push(sl)} } } } } \Return \texttt{vars} } \label{alg11} \caption{Spelling variant generation} \end{algorithm} \subsubsection{Weighted Levenshtein ratio} As discussed, for a given term and a potential misspelling, character sequence similarities in the early parts of the two terms are more likely to be high for true misspellings.
We modified our Levenshtein ratio computation by incorporating this information, particularly with the goal of providing a high-precision version of the system. In this system variant, a sliding window of $n$ characters is run through a keyword and a potential misspelling, the Levenshtein ratio is computed for the character sequences within the window, and the value is weighted so that the final similarity score is the weighted sum of the individual sequence ratios. Weights are assigned for each bucket of relative positions using the ratio $weight = \frac{fpldist[i]}{tpldist[i]}$, where $i$ is the relative position, and $fpldist$ and $tpldist$ are the distributions of the average Levenshtein distances at different relative positions for the false positives (non-misspellings) and the true misspellings, respectively. We used bucket sizes of 0.2, resulting in five weights covering the range $[0,1]$. The weights are normalized by dividing by the median and then scaled, allowing for a reward or penalty of up to $k\%$. This method enables the customization of the variant generation tool to the domain-specific regularities of distinct texts, particularly allowing for a mechanism to reduce the number of false positives. Figure \ref{fg4new} depicts the distributions of average Levenshtein distances between the true misspellings and the false positives, and the distribution of the ratios of the two sets. In Algorithm 1, the $lev\_ratio()$ function is replaced by its weighted counterpart for this setting. All statistics and parameters are computed and tuned using the development set. \section{Evaluation and results} \subsection{Intrinsic evaluation} We performed intrinsic evaluation of our system using the 10 held-out medication names. We compared the default and weighted approaches of our system to the phonetic variant generation approach proposed by Pimpalkhute et al. \cite{pimpalkhute14}.
Based on optimizations over the training set, we used the following parameter settings: $n=5\%$, $lt=0.75$ and $ssl=4000$. The $F_\beta$-score is used for evaluation and is computed from the precision and recall of the system as follows: \begin{equation*} precision = \frac{tp}{tp + fp}; \quad recall = \frac{tp}{tp + fn}; \quad F_\beta = (1+\beta^2) \times \frac{precision \times recall}{(\beta^2 \times precision) + recall} \end{equation*} where $tp$ is the number of true positives, $fp$ is the number of false positives and $fn$ is the number of false negatives for the system run. The value $\beta = 1$ gives the $F_1$-score, which weights recall and precision equally, while $\beta = \frac{1}{4}$ gives the $F_{\frac{1}{4}}$-score, which weights precision more heavily. We included the latter metric because for certain health-related data retrieval or concept detection tasks, a more precise version of the system may be preferred over the setting that provides the best $F_1$-score. Table \ref{tab1} presents the best results obtained by the systems in terms of F$_\beta$-score on the test set. The default version of our system ($QMisSpell$) obtains the top $F_1$-score while the weighted version ($QMisSpell_w$) obtains the highest $F_{\frac{1}{4}}$-score. As predicted, the phonetic variant generator achieves very low precision, as it generates many spelling variants that are not relevant. The weighted version of the system achieves the highest precision, marginally outperforming the default system, albeit at some expense to recall. This suggests that the weighted version may be suitable for generating smaller numbers of highly precise misspellings. Figure \ref{fig3} illustrates how recall, precision and F-scores for the two versions of our system vary at lexical similarity thresholds ($lt$) between 0.55 and 0.95.
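The scoring formulas above translate directly into code; the quick check below reproduces the $F_1$ value for the default system from its reported precision and recall (small discrepancies for other table entries can arise because the reported precision and recall are themselves rounded).

```python
def f_beta(precision, recall, beta):
    """F_beta as defined in the text; beta < 1 weights precision more."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# QMisSpell row of Table 1: P = 0.79, R = 0.61.
print(round(f_beta(0.79, 0.61, 1.0), 2))   # → 0.69
```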
It can be observed that customization via weighting leads to increases in precision at different thresholds, although recall decreases. The $F_{\frac{1}{4}}$-score, which puts more weight on precision, is also higher for the customized system. The standard approach consistently outperforms the weighted one in terms of overall $F_1$-score, although the differences are small. Table \ref{tab2} presents all the medication keywords and the spelling variants generated by the weighted configuration of our system. False positives generated by the system are underlined. \begin{table}[htbp] \centering \begin{tabular}{l c c c c} \toprule \textbf{System}&\textbf{Recall}&\textbf{Precision}&\textbf{$F_1-Score$}&\textbf{$F_\frac{1}{4}-Score$}\\ \toprule Phonetic&0.49&0.45&0.47&0.45\\ QMisSpell&\textbf{0.61}&0.79&\textbf{0.69}&0.77\\ QMisSpell$_{w}$&0.47&\textbf{0.84}&0.60&\textbf{0.78}\\ \bottomrule \end{tabular} \caption{Performances for the two variants of our system and the benchmark system. The best score in each column is shown in bold face.} \label{tab1} \end{table} \begin{figure}[htbp] \centering \scalebox{0.50} {\includegraphics{fig3n}} \caption{Recall, precision, F$_1$-score and F$_{\frac{1}{4}}$-score for the two system settings.
(D) represents the default version and (W) represents the weighted version.} \label{fig3} \end{figure} \begin{longtable}{ll} \toprule \textbf{Keyword}&\textbf{Generated misspellings}\\ \toprule omeprazole&\shortstack[l]{omaprazole omeprezole omeprizole omeperzole omperazole omeprozole\\ omeprasole omiprazole omeprazol}\\\midrule paroxetine&\shortstack[l]{paroxotine paroextine paroxitine paroxatine}\\\midrule klonopin&\shortstack[l]{klonopim klonipin klonepin kolnopin clonopin klomopin klonapins klonoin \\klopopin klonodine klonopan klonopon klonapin klonopn klanopin klenopin \\clonopine klonopins klonopen klonipine klolopin klonopine klonipins klononpin}\\\midrule diazepam&\shortstack[l]{diazepan diazepams \underline{oxazepam} diazipam diazapam}\\\midrule fluoxetine&\shortstack[l]{fluoextine fluoxentine fluoxitine fluxoetine flouxetine fluoxotine fluoxatine duloxetine}\\\midrule clonazepam&\shortstack[l]{klonazepam clonazpam clonazapam clonazapan clonazipam clonazepan\\ klonazapam clonezepam clonanzepam clonozepam clonazepham clonazepem\\ clonazopam clorazepam clanazepam clonazipan}\\\midrule zyprexa&\shortstack[l]{zyprea xyprexa xeprexa zyprexia zyprex zypreza zyprexea zyorexa zeprexa zyprxa}\\\midrule amoxicillin&\shortstack[l]{amoxicillan amoxicilin amoxicillian amoxocillin amoxcillin amoxacillin amoxicllin \\amoxicillion amoxycillin amoxcillian}\\\midrule tramadol&\shortstack[l]{tramadal tramado tramedol tramadol tramidol tramadoll tramodol tremadol tramamdol}\\\midrule xanax&\shortstack[l]{xananx zanax xanas xanaax xanaxs xanac xanaz}\\\midrule oxycontin&\shortstack[l]{oxicontin oxycotins oxycontins oxycottin oxycontin oxycotine oxycintin\\ oxycotin oxycontine}\\\midrule quetiapine&\shortstack[l]{quatiapine quietapine quetiapin quitiapine}\\\midrule seroquel&\shortstack[l]{seruquel seraquel seroqule seriquel seroquill seroqel seroquil seroquell\\seroqeul seroqul seroque seroqual seroguel seroquels seroquelxr serequel}\\\midrule oxycodone&\shortstack[l]{oxycodine
oxicodone oxycoton ocycodone oxycodones oxycodin oxycodons \\oxycodne oxycondone oxycoden oxycodon oxyxodone oxycodene}\\\midrule ibuprofen&\shortstack[l]{ibufrofen ibuprofine ibeprofin ibuporfen iubprofin ibeprofen ibrupofen ibuprofun\\ iboprofen ibuprofrin ibuprofin ibuprofren ibuprofens ibuprofins ibuprohen ibupropen}\\\midrule venlafaxine &\shortstack[l]{venlafaxin venlaflaxine venlafexine venaflaxine}\\\midrule adderall &\shortstack[l]{\underline{inderall} addarall adderalls adderrall adderral adderalll adderoll adderal adderallxr}\\\midrule metformin &\shortstack[l]{metforman metfromin metform metformen metforim metforin metfornin medformin \\metfomin metformine metforming metfirmin metofrmin }\\\midrule penicillin &\shortstack[l]{penicillium penicillian penecillin penicillan penicilin penicillen \underline{ampicillin} \\penicillins penacillin}\\\midrule trazodone &\shortstack[l]{trazidone trozodone trazdone traxodone trazadone trazadones trazodon trazedone \\trazondone trazadon}\\ \bottomrule \caption{Medication keywords and the misspellings generated by the weighted version of our system. False positives generated by the system are underlined.} \label{tab2} \end{longtable} \subsection{Extrinsic evaluation} We performed extrinsic evaluation of the system by quantifying the change in retrieval rate (\textit{i.e.}, \% difference in the number of posts retrieved) when variants of a set of original health-related keywords generated by our system were included. To ascertain the generalizability of the system, we used health-related keywords that were not medication names. Specifically, we used five cancer related terms: \textit{carcinoma}, \textit{malignant}, \textit{leukemia}, \textit{metastasis} and \textit{chemotherapy}. We deliberately chose non-medication keywords for this evaluation since these keywords exhibit properties that medication names do not, such as the existence of multiple morphological variations (and their possible misspellings). 
For example, the term \textit{metastasis} has morphological variants such as \textit{metastasize} and \textit{metastatic}, and their common misspellings, which would be typically included for data collection. Our system generated a total of thirty-three variants, including the original keywords and frequent morphological variants, for the five terms (6.6 variants per keyword on average). We queried our in-house Twitter adverse drug reaction database first using the original keywords only and then including the variants.\footnote{Note that this dataset was collected using medication names as keywords, so we only expected a small number of the posts to mention cancer-related terms.} The queried dataset consisted of 7.98 million tweets in total with initially unknown numbers of occurrences of each of these terms. Querying using the original keywords retrieved 5579 tweets. Querying with misspellings along with automatically generated morphological variants retrieved a total of 9348 tweets, an increase of over 67\% in the number of retrieved posts. When morphological variants were excluded and only spelling variants were used, 7677 tweets were retrieved, which represents an increase of 37\% in retrieval rate. \section{Discussion} \subsection{Performance} We have developed a purely data-centric system for generating frequently occurring misspellings and variants for terms that are particularly prone to being incorrectly spelled. The developed system has several advantages over the past phonetic spelling variant generator. Intrinsic evaluations on the 20 medication names showed that the data-centric system significantly outperforms the phonetics-based one \cite{pimpalkhute14}. The proposed system is more precise, as it constrains the variants generated to only those that are semantically close to the original keyword. This ensures that noisy and unrelated variants are not generated by the system. 
From a practical standpoint, the generation of a small number of common variants, rather than a large number of them, may be crucial since data collection APIs often have restrictions on the total number of keywords that may be used. At the same time, while the phonetic variant generator is restricted to only generating keywords within a Levenshtein distance of 1, the proposed system is capable of generating misspellings that are more lexically distant. The brief extrinsic evaluation we performed verifies the usefulness of the system for data retrieval from social media. \subsection{Error analysis} We were particularly interested in identifying reasons behind the relatively low recall of our system. Error analysis revealed that approximately 15-20\% of the terms in the gold standard are never retrieved by the $mostsimilar()$ function shown in the algorithm. This suggests that these misspellings are not close in the vector space of the model we used, although they were present in the vector model. The likely reason for this phenomenon is that these keywords did not occur frequently enough for their contexts to be sufficiently captured during the embedding generation process. Incorporating a word embedding model built from a larger data set is likely to improve the performance of the system. However, evaluating the effects of different vector models was outside the scope of this study, and we leave that as future work. The weighted version of our system is capable of removing some lexically similar false positives, compared to the default system, leading to the observed increase in precision. For example, terms such as \emph{duloxetine} and \emph{paroxetine} are lexically very similar while also having high semantic similarity (as they belong to the same medication class and are often prescribed for the same conditions, such as depression).
The weighted approach, by putting a higher weight at lower relative positions, is capable of removing false positives in many such cases. However, it still generates false positives when two medication names are particularly close both semantically and lexically, as is the case for \textit{oxazepam} and \textit{diazepam},\footnote{Both of these medications belong to the benzodiazepine family.} which are at a levenshtein distance of 2 edits. \subsection{Usability and customizability} An important goal for us was to develop a system that is easy to use and customize. Unlike past approaches, our spelling variant generation system does not require any manual steps or external resources (\textit{e.g.}, the Google search API). The algorithm only requires a word embedding model, preferably one that is generated from data that is likely to contain many misspellings for the seed keywords. From an operational perspective, for most practical tasks, the standard setting of the system should suffice and the system can be used in a plug-and-play manner. Additionally, the algorithm does not require any supervised machine learning methods, and, therefore, may be used by medical domain researchers without any prior training in NLP or machine learning. It is, however, possible to incorporate supervision for task-specific customization. We have tested our system, via intrinsic and extrinsic evaluations, on social media data, but we believe that the system will also be useful for data collection and text mining from other noisy data sources such as clinical notes and electronic health records. The system typically generates a small number of false positives, if any. For practical use, the false positives can be manually removed. The weighted version of the system can be used if greater precision is needed. The weighted version may also be more effective for spelling variant generation problems associated with other topics or when a larger training set is prepared. 
The weights may also be modified without requiring in-depth programming expertise in Python, which is the programming language used to implement the system. While we used a data-centric approach to customize the weights, they may also be customized via trial and error if needed. \section{Conclusion} The first step in conducting health-related NLP research on noisy text sources such as social media and electronic health records typically involves data collection using keyword-based searches. However, noisy text sources invariably contain misspellings, and health domain specific keywords are generally harder to spell. This results in loss of relevant data unless misspellings are generated prior to data collection. While some domain-specific NLP studies have proposed automatic misspelling generation methods, they suffer from various weaknesses. In this paper, we proposed a method for the data-centric generation of misspellings. Our proposed method generates semantically similar frequent misspellings for keywords, outperforming the current benchmark spelling variant generation system. The method is simple and fast, and may be used for other restricted-domain data collection tasks from social media and electronic health records. The system may also be customized to reward precision/recall, based on the problem-specific needs. Due to the growing usage of noisy text-based data for research in complex domains and the scarcity of misspelling generation systems, we expect our method to be valuable to the research community. The source code for the system has been made publicly available for the benefit of the health-related text mining and NLP community. \section*{Acknowledgments} This work was partially supported by National Institutes of Health (NIH) National Library of Medicine (NLM) grant number NIH NLM 5R01LM011176. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NLM or NIH. \bibliographystyle{fullname}
{ "timestamp": "2018-06-05T02:13:27", "yymm": "1806", "arxiv_id": "1806.00910", "language": "en", "url": "https://arxiv.org/abs/1806.00910" }
\section*{Acknowledgements} We thank Chen Zhang and Zhi-Yuan Xie for helpful discussions, and Hubert Scherrer-Paulus for technical support with Google TensorFlow and GPUs. F. Pollmann acknowledges support from DFG through Research Unit FOR 1807 with grant no.\ PO 1370/2-1, from the Nanosystems Initiative Munich (NIM) by the German Excellence Initiative, and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 771537). X.-F. Zhang acknowledges funding from Project No. 2018CDQYWL0047 supported by the Fundamental Research Funds for the Central Universities, Grant No. cstc2018jcyjAX0399 from the Chongqing Natural Science Foundation, and from the National Science Foundation of China under Grants No. 11804034 and No. 11874094. \begin{appendix} \section{Prediction of the intermediate supersolid phase} \begin{figure}[h] \includegraphics[width=0.45\textwidth]{FigS1.pdf} \caption{(a) The probabilities of the solid (blue solid line), superfluid (red dotted line), and supersolid (green dashed line) phases on the trajectory along the black dashed line in Fig.~3(a) of the main text, with $N=64$. (b) The phase diagram with colors denoting the probability of the supersolid phase $P_{\mathrm{ss}}$.} \label{fig:S1} \end{figure} In this supplementary material, we explain the details of the calculations that give the probability of the supersolid phase of the extended hard-core Bose-Hubbard model on the triangular lattice. From the results of the main text, we obtain a signature of the intermediate supersolid phase by inspecting how the predictions depend on the compression ratio of the input data. The structure of the neural network is fixed when we train it, so if the training set contains only two classes of data from two phases, the two output neurons can only give the probabilities of these two phases.
Therefore, it is almost impossible to predict the existence of a third class which has no samples in the training set. For example, a neural network could not recognize a picture of a monkey if it is trained only on pictures of cats and dogs. Thanks to the tuning parameter $N$ in our proposal, the existence of the third phase can be observed. However, the probability of the third phase cannot be obtained directly from such a fixed structure. To establish the existence of the intermediate supersolid phase, we can collect the data for this phase in the region inspected before (the data in the region $t/V\in[0.080,0.098]$ and $\mu/V\in[3.00,3.20]$ are used) and add them to the training data as well. A third label is assigned to them to distinguish them from the other two phases. The number of output neurons is then three, and the probability of the supersolid phase $P_{\mathrm{ss}}$ can be obtained directly. In Fig.~\ref{fig:S1}(a), the predictions for these three phases on the trajectory along the black dashed line in Fig.~3(a) of the main text are given. They show clearly that there are three phases separated by two phase transition points. The region of the supersolid phase is shown in Fig.~\ref{fig:S1}(b), with the color denoting the probability of the supersolid phase. \end{appendix}
{ "timestamp": "2019-03-19T01:23:04", "yymm": "1806", "arxiv_id": "1806.00829", "language": "en", "url": "https://arxiv.org/abs/1806.00829" }
\section{Introduction} Entanglement is a unique feature of quantum systems, which distinguishes them profoundly from classical ones. In a quantum system a convenient measure of entanglement is the entanglement entropy. In quantum theory one defines the entanglement entropy between two sets of degrees of freedom, say $A$ and $B$, which span the Hilbert space of the theory $\mathcal{H}$ in the sense that $\mathcal H = \mathcal H_A \otimes \mathcal H_B$, where $A$ and $B$ span $\mathcal H_A$ and $\mathcal H_B$ respectively. The entanglement entropy measures to what extent the two sets of degrees of freedom are entangled in a given state, and thus to what degree the state of $A$ becomes mixed upon ``integrating out'' the degrees of freedom $B$. In the last two decades this topic has attracted a lot of interest from various physics perspectives~\cite{Calabrese:2004eu,Calabrese:2009qy}. Recently some ideas have been put forward about the possible importance of entanglement also in the context of high energy scattering processes. In particular it has been suggested that the observed apparent thermal nature of the produced soft particle spectra at colliders might be caused by entanglement of the different degrees of freedom in the hadronic wave function. Since the bulk of hadronic degrees of freedom are entangled with those that are probed in the scattering process, they may effectively act as a thermal bath even though no actual thermalization occurs since there is no interaction in the final state~\cite{Baker:2017wtt,Kharzeev:2017qzs}. Related developments have been reported in~\cite{Berges:2017zws,Berges:2017hne}, where a one-dimensional toy model has been studied from a similar point of view. A commonality of these approaches is that they are concerned with the entanglement entropy between spatial regions, as is also customary in almost all the rest of the field theoretical literature.
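For concreteness, the standard textbook definition implicit in this discussion can be written out: for a pure state $|\psi\rangle$ and the factorization $\mathcal H = \mathcal H_A \otimes \mathcal H_B$ above, the reduced density matrix and the entanglement entropy are
\begin{equation*}
\rho_A = \mathrm{Tr}_B\, |\psi\rangle\langle\psi|\,, \qquad S_A = -\,\mathrm{Tr}\,\rho_A \ln \rho_A\,,
\end{equation*}
so that $S_A$ quantifies precisely how mixed the state of $A$ becomes once $B$ is traced out.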
On the other hand, the recent work of~\cite{Hagiwara:2017uaz} has investigated the entropy of parton distribution functions, particularly their low-$x$ behaviour, a topic already addressed in the early attempt ~\cite{Kutak:2011rb} to compute the entropy of a gluon system produced in a high energy proton-proton collision. Similar earlier ideas along these lines were discussed in~\cite{Elze:1994hj,Elze:1994qa,Peschanski:2012cw}, whereas~\cite{Liu:2018gae} investigates entanglement in high energy parton-parton scattering using the $AdS/CFT$ correspondence. Entanglement entropy in momentum space in relativistic QFT has also been studied, although less extensively. Reference ~\cite{Balasubramanian:2011wt} investigated entanglement between modes inhabiting distinct shells of momentum space. In the context of high energy scattering entanglement in momentum space is a very natural and interesting property. In particular in the framework of the Color Glass Condensate (CGC) approach to high energy evolution, the low longitudinal momentum modes are by construction fundamentally entangled with the valence, or high energy modes. The crucial feature of the CGC wave function is that the soft gluon part of the wave function is nontrivial due to the presence of the valence color charges. The dependence of this wave function on energy is described by the JIMWLK evolution equation~\cite{JalilianMarian:1996xn,JalilianMarian:1997jx,JalilianMarian:1997gr,JalilianMarian:1997dw,JalilianMarian:1998cb,Kovner:1999bj,Kovner:2000pt,Weigert:2000gi, Iancu:2000hn,Iancu:2001ad,Ferreiro:2001qy}. This entanglement entropy between the soft and valence degrees of freedom was investigated in~\cite{Kovner:2015hga}. It was also shown there that this entanglement entropy, albeit not equal, is directly related to the entropy of the particles produced in an energetic hadronic collision. In the present paper, we continue the line of investigation initiated in ~\cite{Kovner:2015hga}. 
The main question we are asking is whether it is possible to sensibly define a production entropy within the CGC framework on an event-by-event basis. The calculation performed in~\cite{Kovner:2015hga} involves averaging over the valence color charge density in the projectile wave function. Since the valence charges are slow degrees of freedom (in the sense of slow variation in light cone time), a given event corresponds to a given configuration of the color charges. Thus averaging over the slow degrees of freedom in the projectile wave function corresponds to averaging over the event ensemble. In this sense the entropy calculated in~\cite{Kovner:2015hga} should be understood as the entropy corresponding to the whole ensemble of events. On the other hand it is intuitively clear that one should be able to ascribe an entropy even to a single event. Even though in a strict sense the state of soft gluons emerging from a collision in a given event is a pure state, it contains a superposition of eigenstates of the theory with very different energies. The phases of these eigenstates are not infinitely resolvable by a real experimental apparatus, because of its finite resolution power. As a result the information about the relative phases of the different eigenstates is scrambled by the measurement. This implies that the pure state produced by a scattering event will always be known, at best, in terms of a density matrix. Thus, besides the entanglement of the soft modes with the valence modes investigated in~\cite{Kovner:2015hga}, another unavoidable source of incomplete information exists. The question is how to describe such a density matrix and the associated entropy with the tools of quantum mechanics. Indeed, time is not a quantum mechanical degree of freedom \emph{per se}, which makes it impossible to directly investigate decoherence in time in terms of a partial trace over a Hilbert subspace, the standard tool in defining a reduced density matrix.
The idea investigated in the present paper relies on the energy-time uncertainty relation. We study the decoherence in time of the soft gluon state emerging from the scattering event by interpreting it as a consequence of an entanglement of the final state particles with an imaginary experimental apparatus. The apparatus has a finite time resolution, which determines the entropy of the given event. Put differently, we extend the Hilbert space of our system to accommodate one more auxiliary degree of freedom, which couples directly to energy. For simplicity we take the wave function of this ``calorimeter'' degree of freedom to be a Gaussian, and will therefore refer to it as a \emph{white noise}. In this setup we can trace the density operator of the soft gluon system over this auxiliary Hilbert subspace. This allows us to define an event-by-event entropy of the produced gluons which is induced by their time evolution. In the sense just specified, this entropy reflects the incomplete information ascribable to the finite time resolution $T$ of the experiment. We analytically compute the entropy of the soft gluons defined in this way in the limit of weak projectile fields. In the second part of the paper we generalize our discussion to include both the event-by-event entropy and the entropy of the ensemble of events discussed in~\cite{Kovner:2015hga}. The plan of the paper is the following. In section \ref{setstage} we review the basic CGC setup with the McLerran-Venugopalan (MV) model as the model for the distribution of valence charges in the projectile wave function. In section \ref{white} we introduce the main idea of this work, and couple the wave function of the soft gluons emerging after scattering on the target to the ``calorimeter'' or \emph{white noise} degree of freedom.
Sections \ref{enenfixedrho} and \ref{enenavgrho} contain the central results of this paper: we compute the entanglement entropy of the system of produced soft gluons respectively for a fixed configuration of the color charge density of the projectile wave function (single event) and for the ensemble of events defined by the MV model~\cite{McLerran:1993ni,McLerran:1993ka} and averaged over the target fields. Both computations are performed by resorting to the so-called \emph{replica trick}. We conclude in section \ref{conclusions} with a short summary of our results. \section{The Setup}\label{setstage} \subsection{The CGC state} In this paper we work within the framework of the high-energy limit of dilute-dense QCD~\cite{Gelis:2010nm}. The fast hadron, referred to as the projectile, is highly boosted in the $+$ direction and is properly described by a wave function $\psi$ which can be written in a quasi factorised form \begin{equation}\label{wf} \vert \psi \rangle = \vert s \rangle \otimes \vert v \rangle \, . \end{equation} Here $\vert v \rangle$ is the wave function that depends only on the valence (energetic) degrees of freedom, while $ \vert s \rangle$ is the wave function of the soft gluons. The relevant valence degrees of freedom are described by the set of classical color charge densities $\rho^a({ \bm x})$ which only depend on the transverse coordinates ${ \bm x}$. The wave function in eq.(\ref{wf}) is not a product wave function, since the soft part of it depends on the valence charges. This entanglement between the energetic and soft modes is precisely the source of the entropy studied in~\cite{Kovner:2015hga}.
As long as the color charge density is not too large, a good approximation for the soft part of the wave function at LO in the strong coupling constant is given by a simple coherent state~\cite{Kovner:2005jc} \begin{equation}\label{soft} \Omega \vert 0\rangle = \exp\left\{i\int _{q^+<\Lambda} \tilde{b}^i_a(q)\, \left[a^a_i(\qp,{ \bm q}) + a^{\dagger a}_i(\qp,-{ \bm q}) \right] \right\}\vert 0\rangle \, , \end{equation} As opposed to ~\cite{Kovner:2015hga} we will have to follow the longitudinal momentum dependence of various modes. We will therefore explicitly keep track of the longitudinal momentum dependence of the classical field and, thus, we have introduced the longitudinal momentum dependent classical field \begin{equation} \tilde{b}^a_i(q) \equiv \sqrt{\frac{2}{\qp}}\, b^a_i(-{ \bm q})\, , \end{equation} The standard Weizs\"acker-Williams (WW) field generated by the transverse valence color charges, respectively in coordinate and momentum space, is \begin{eqnarray} b^a_i({ \bm x}) &=& \frac{g}{2\pi}\, \int d^2{ \bm y} \frac{({ \bm x}-{ \bm y})_i}{({ \bm x}-{ \bm y})^2}\, \rho^a({ \bm y}) \, , \nonumber \\ b^a_i({ \bm q}) &=& -\frac{i\,g\,\rho^a({ \bm q})\, { \bm q}_i}{{ \bm q}^2}\, , \nonumber \\ {b^a_i}^\dagger({ \bm q}) &=& b^a_i(-{ \bm q})\, , \label{WWtrans} \end{eqnarray} In the following sometimes we will use the mixed representation of $\tilde{b}^a_i$ in the $(q^+, { \bm x})$ space. The scale $\Lambda$ in eq.(\ref{soft}) is the longitudinal momentum scale that provides the separation between the valence and soft modes. The classical color charge density of the valence modes is as usual \begin{eqnarray} \rho^a({ \bm q}) &=& -i f^{abc}\, \int_{{ k^+}>\Lambda} \frac{d{ k^+}}{2\p} \int \frac{d^2\kt}{(2\pi)^2}\, a^{\dagger\, b}_j({ k^+},\kt)\, a^c_j({ k^+},\kt+{ \bm q}) \, , \nonumber \\ {\rho^{a\,\dagger}({ \bm q}) } &=& \rho^a(-{ \bm q})\, . \end{eqnarray} This notation is the same as in~\cite{Kovner:2015hga}. 
Whenever integrating over the $3$-momenta of the soft modes we use the shorthand notation \begin{equation} \int_k \equiv \int_{{ k^+}<\Lambda} \frac{dk^+}{2\pi} \int \frac{d^2\kt}{(2\pi)^2} \, , \end{equation} The creation and annihilation operators satisfy the fundamental commutation relation \begin{equation} [ a^a_i(\qp,{ \bm q}) ,a^{\dagger b}_j({p^+},{ \bm p}) ] = (2\pi)^3 \,\delta^{ab} \, \delta_{ij}\, \delta(\qp-{p^+}) \, \delta^{(2)}({ \bm q}-{ \bm p})\, . \label{basicomm} \end{equation} We also introduce the following usual combinations of creation and annihilation operators, \begin{eqnarray} \phi^a_i(\qp,{ \bm q}) &\equiv& a^a_i(\qp,{ \bm q}) + a^{\dagger a}_i(\qp,-{ \bm q}) \, , \nonumber \\ \pi^a_i(\qp,{ \bm q}) &\equiv& -i \left( a^a_i(\qp,{ \bm q})\right) - a^{\dagger a}_i(\qp,-{ \bm q}) \, , \nonumber \\ \left[ \phi^a_i(\qp,{ \bm q}), \pi^b_j({p^+},{ \bm p}) \right] &=& 2\,i\, (2\pi)^3 \delta^{ab}\, \delta_{ij} \, \delta(\qp-{p^+})\, \delta^{(2)}({ \bm q}+{ \bm p}) \, , \label{phidef} \end{eqnarray} Since the operators $\phi^a_i(\qp,{ \bm q}) $ form a mutually commuting set we will choose throughout this paper the basis of their eigenstates as the basis in which to write the density matrix. The free vacuum state $\vert 0\rangle$ entering eq.(\ref{soft}) in this basis has the form \begin{equation} \langle \phi |0 \rangle \equiv \psi(\phi) = \mathcal{N}\, \exp\left\{- \frac{1}{4}\int_{\qp{ \bm q}} \phi^a_i(\qp,{ \bm q}) \phi^a_i(\qp,-{ \bm q})\right\} \, . \label{vwfmom} \end{equation} Note that the coherent operator $\Omega$ appearing in eq.(\ref{soft}) diagonalizes the light cone Hamiltonian of QCD in the first order in perturbation theory. It is also equal to the (appropriately regularized) Heisenberg picture time evolution operator of QCD. These points are reviewed in Appendix A. So far we have discussed the structure of the soft gluon state. To calculate physical observables clearly one also needs information about the state $\vert v\rangle$. 
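As an aside, the single-mode content of the commutator in eq.(\ref{phidef}) is easy to verify numerically. The sketch below is purely illustrative: it uses a truncated Fock space for one discrete mode, for which the continuum factor $2\,i\,(2\pi)^3\,\delta^{ab}\,\delta_{ij}\,\delta(\qp-{p^+})\,\delta^{(2)}({ \bm q}+{ \bm p})$ collapses to $2i$:

```python
import numpy as np

# Truncated single-mode Fock space: a|n> = sqrt(n)|n-1>.
# With phi = a + a^dag and pi = -i(a - a^dag), as in eq. (phidef),
# the commutator [phi, pi] equals 2i away from the truncation boundary.
N = 30
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
phi = a + a.T                               # a is real, so a^dag = a.T
pi = -1j * (a - a.T)

comm = phi @ pi - pi @ phi
# Only the last diagonal entry is distorted by the cutoff.
print(np.allclose(comm[:N - 1, :N - 1], 2j * np.eye(N - 1)))  # True
```

In the same representation the ground-state condition $a\,\psi=0$ reproduces the single-mode Gaussian $\psi(\phi)\propto e^{-\phi^2/4}$ of eq.(\ref{vwfmom}).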
This information about $\vert v\rangle$ is non perturbative and therefore has to be supplied independently of the perturbation theory used to calculate the soft wave function eq.(\ref{soft}). In this paper, just like in~\cite{Kovner:2015hga}, we will use the MV model. This model does not specify the valence wave function completely, but it is sufficient for our purposes since it does specify the diagonal matrix elements of the valence space density matrix. In the MV model one treats the color charge density as classical, and the value of the color charge density field specifies the basis of states in the valence part of the Hilbert space. The MV model then postulates \begin{equation} \langle\rho^a({ \bm x})\vert v\rangle\langle v\vert\rho^a({ \bm x})\rangle=N\exp\left\{-\frac{1}{2}\int d^2{ \bm x}\, d^2{ \bm y}\, \rho^a({ \bm x})\mu^{-2}({ \bm x}-{ \bm y})\rho^a({ \bm y})\right\}\, , \end{equation} with the normalization constant $N$ such that $\langle v\vert v\rangle=1$. \subsection{The eikonal scattering} When scattering on a dense target, the partons of the projectile undergo eikonal scattering \begin{equation} a^a({ \bm x},p^+)\rightarrow S^{ab}({ \bm x})a^b({ \bm x},p^+);\ \ \ \ a^{\dagger a}({ \bm x},p^+)\rightarrow S^{ab}({ \bm x})a^{\dagger b}({ \bm x},p^+); \ \ \ \rho^a({ \bm x})\rightarrow S^{ab}({ \bm x})\rho^b({ \bm x}) \, .
\end{equation} The eikonally scattered CGC soft ground state therefore is \begin{equation}\label{wf1} \hat S\, \Omega\, \vert 0\rangle\otimes\vert v\rangle=\exp\left\{i\int _{q^+<\Lambda}\int_{\bf x}\tilde{b}^{'a}_i(q^+,{\bf x})\, S^{ab}(x)\phi^b(q^+,{\bf x})\right\}\vert 0\rangle \otimes \hat S\vert v\rangle\, , \end{equation} where the Weizs\"acker-Williams field of the eikonally rotated charge is \begin{equation} b^{' a}_i({ \bm x}) = \frac{g}{2\pi}\, \int d^2 { \bm y} \frac{({ \bm x}-{ \bm y})_i}{({ \bm x}-{ \bm y})^2}\, \bar\rho^a({ \bm y}) \, , \label{scatterWW} \end{equation} where \begin{equation} \bar \rho^a({ \bm y})= S^{ab}({ \bm y})\, \rho^b({ \bm y}) \, . \end{equation} Note that this is the wave function which emerges immediately after the scattering with the target. Between the scattering time and the observation time it evolves with the ``free'' QCD Hamiltonian, so that at any time $t$ we have \begin{equation}\label{psi} \Psi_{out}=U(0,t)\, \hat S\, \Omega\, \vert 0\rangle\otimes \vert v\rangle; \ \ \ \ U(0,t)=\exp\{-iHt\} \, , \end{equation} where $H$ is the QCD light cone Hamiltonian, defined in Appendix \ref{coherent} (here and throughout this paper we denote the light cone time variable by $t$). The relation of $\Omega$ to the time evolution operator $U(t,0)$ is also clarified in Appendix \ref{coherent}. \subsection{The soft gluon density matrix} In this paper we are interested in the density matrix of soft gluons. Given the wave function eq.(\ref{psi}), the density matrix describing the ``remnants'' of the projectile following the high energy collision can be written as \begin{equation}\label{rho} \hat{\rho}_P = U(0,t)\, \hat S\, \Omega \vert 0\rangle \otimes\vert v\rangle\langle v\vert\otimes\langle 0 \vert \Omega^{\dagger}\, \hat S^\dagger\, U^{\dagger}(0,t)\, . \end{equation} This however is not quite the density matrix that is of interest to us.
We are interested in the density matrix that describes the distribution of the soft gluons produced in the scattering. In particular in the second part of this paper we will be interested in the reduced density matrix obtained after integration over the valence degrees of freedom. Naively, one would think that all one has to do is to trace $\hat\rho_P$ over the color charge density. However this is not quite right. Recall that the valence color charge density is always accompanied by its ``native'' WW soft field. This WW field does not describe soft gluons produced in the collision, but is a part of the wave function of receding energetic fragments of the projectile. We are not interested in this part of the wave function. In other words what we need to trace over is not just the color charge density, but the color charge density dressed by its WW field. As discussed in~\cite{Kovner:2015hga}, the change of basis from the color charge density to the color charge density accompanied by its WW field is effected by the unitary operator $\Omega$. Therefore our object of interest is not the density matrix of eq.(\ref{rho}), but rather a unitarily transformed matrix \begin{equation} \hat{\rho}_{P}'= \Omega^\dagger \, \hat\rho_P \, \Omega = \Omega^{\dagger}\, U(0,t)\, \hat S\, \Omega \vert 0\rangle \otimes\vert v\rangle\langle v\vert\otimes \langle 0 \vert \Omega^{\dagger}\, \hat S^\dagger\, U^{\dagger}(0,t)\, \Omega\, . \label{density1} \end{equation} To reiterate, this expression has a simple physical interpretation~\cite{Kovner:2006wr}: the coherent state $\Omega \vert 0 \rangle$ is scattered and then evolved up to time $t$. The inverse of the coherent operator, $\Omega^{\dagger}$, associates to the scattered valence modes their ``native'' gluon cloud, thus subtracting those bound state gluons from the final state.
The resulting soft wave function contains only the gluons which are produced by the scattering process and are not a part of the receding energetic remnants of the projectile. We can write this expression in a more explicit form. First, recall that the operator $\Omega$ perturbatively diagonalizes the light cone Hamiltonian (see Appendix A): \begin{equation} \Omega^\dagger U(0,t) \Omega = \exp\left\{-i\, [H_0 + F(\rho)] t \right\}\, , \end{equation} where $H_0$ is the free noninteracting Hamiltonian of the soft modes, and the vacuum energy $F(\rho)$ (defined in eq.(\ref{OmegaH})) does not depend on the soft degrees of freedom. The vacuum energy only adds a pure phase to the soft wave function, which we will disregard in the following. On the other hand, using eq.(\ref{wf1}) and the explicit form of the operator $\Omega$, we can write \begin{equation} \Omega^\dagger\, \hat S\, \Omega\, \vert 0\rangle \otimes\vert v\rangle = \exp\bigg\{ i\,\frac{g}{2\pi}\, \int d^2{ \bm x}\,\int d^2{ \bm z}\, \frac{({ \bm x}-{ \bm z})_i}{({ \bm x}-{ \bm z})^2} \, \bigg( S^{ba}({ \bm x}) -S^{ba}({ \bm z}) \bigg)\, \bar\rho^b({ \bm z})\, \int_{q^+}\phi^a_i(q^+,{ \bm x}) \bigg\}\, \vert \, 0 \rangle\otimes \hat S\vert v\rangle \, . \label{deltabintro} \end{equation} Note that \begin{equation} \langle\rho^a({ \bm x})\vert v\rangle=\langle \bar\rho^a({ \bm x})\vert\hat S\vert v\rangle\, . \end{equation} In what comes below, we will change the basis for the valence degrees of freedom: $\hat S \vert v \rangle \rightarrow \vert v \rangle$ and simultaneously rename $\bar\rho \rightarrow \rho$. Note that in section \ref{enenavgrho} we will integrate over the distribution of $\rho$. The MV weight used for this integration is invariant under this change of variables.
Now, introducing \begin{eqnarray} \Delta b^a_i({ \bm x}) &\equiv& \frac{g}{2\pi}\, \int d^2{ \bm z}\, \frac{({ \bm x}-{ \bm z})_i}{({ \bm x}-{ \bm z})^2} \, \bigg( S^{ab}({ \bm x}) -S^{ab}({ \bm z}) \bigg)\,\rho^b({ \bm z}) \, , \nonumber \\ \Delta b^a_i({ \bm q}) &=& i\,g\,\int \frac{d^2{\bm l}}{(2\pi)^2}\, \bigg[\frac{{ \bm q}_i}{{ \bm q}^2}-\frac{{\bm l}_i}{{\bm l}^2}\bigg]\, S^{ab}({ \bm q}-{\bm l})\, \rho^b({\bm l})\, , \label{deltabform} \end{eqnarray} we obtain \begin{equation} \Omega^\dagger\, \hat S\, \Omega\, \vert 0\rangle \otimes \vert v\rangle= \exp\bigg( i\, \int_{q^+}\int d^2{ \bm x}\,\Delta \tilde b^a_i(q^+,{ \bm x})\, \phi^a_i(q^+,{ \bm x}) \bigg) \, \vert \, 0 \rangle \otimes\vert v\rangle\, , \end{equation} where, as before, $\Delta \tilde{b}^a_i(q) = \sqrt{2/\qp}\, \Delta b^a_i(-{ \bm q})$. Thus, when all is said and done, the density matrix that we will be analysing has the form \begin{equation} \hat\rho_{P}'=e^{-iH_0t} e^{ i\, \int_{q^+}\int d^2{ \bm x}\,\Delta \tilde b^a_i(q^+,{ \bm x})\, \phi^a_i(q^+,{ \bm x}) } \, \vert \, 0 \rangle \otimes\vert v\rangle\, \langle v\vert\otimes\langle 0\vert e^{- i\, \int_{q^+}\int d^2{ \bm x}\,\Delta \tilde b^a_i(q^+,{ \bm x})\, \phi^a_i(q^+,{ \bm x}) }e^{iH_0t}\, . \label{notnewrho} \end{equation} At $t=0$, this is the density matrix of~\cite{Altinoluk:2015uaa} used to address two-particle correlations and also the starting point of~\cite{Kovner:2015hga}. The focus of~\cite{Kovner:2015hga} was to calculate the entanglement entropy between the valence and soft modes by reducing the density matrix over the valence Hilbert space. In this paper our aim is instead to define the entropy of soft gluons produced in a single event, as discussed in the introduction. What we mean by a single event is a fixed configuration of the color charge density $\rho$ and the configuration of the target $S$.
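A quick consistency check on eq.(\ref{deltabform}): for a trivial eikonal factor the produced field $\Delta b$ vanishes identically, i.e.\ no scattering means no produced gluons. The following sketch makes this explicit in a drastically simplified setting (one transverse dimension, a single ``color'' so that $S$ is a number, and a stand-in kernel in place of the WW kernel; all of these are illustrative assumptions, not the actual setup of the text):

```python
import numpy as np

# Discrete toy version of Delta b(x) = sum_z K(x - z) (S(x) - S(z)) rho(z).
rng = np.random.default_rng(1)
L = 16
xs = np.arange(L)
rho = rng.normal(size=L)                   # toy charge density

def delta_b(S):
    out = np.zeros(L)
    for x in xs:
        for z in xs:
            if x != z:
                K = np.sign(x - z) / abs(x - z)  # stand-in for the WW kernel
                out[x] += K * (S[x] - S[z]) * rho[z]
    return out

S_trivial = np.ones(L)                     # no scattering: S(x) = S(z)
S_target = np.cos(2 * np.pi * xs / L)      # toy position-dependent eikonal factor

print(np.max(np.abs(delta_b(S_trivial))))  # exactly 0.0
print(np.max(np.abs(delta_b(S_target))) > 0.0)  # True
```

The vanishing for constant $S$ holds configuration by configuration, independently of $\rho$, since the factor $S({ \bm x})-S({ \bm z})$ is identically zero.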
\section{Density matrix for a single event: the energy resolution and the calorimeter ``white noise''}\label{white} Formally speaking eq.(\ref{notnewrho}) at fixed values of $\rho$ and $S$ describes a pure state and therefore has vanishing von Neumann entropy. However this pure state is a superposition of many states with very different energies. When written in the energy basis, some of its off diagonal matrix elements are therefore strongly oscillating functions of time. Quite generally, suppose our system is in a pure state described by a wave function \begin{equation} \vert\psi(t)\rangle = \sum_{n} e^{-iE_n t } c_n \vert \psi_n \rangle \, , \end{equation} where $\left\{E_n\right\}$ are energy eigenvalues which, for simplicity, we assume to be non degenerate. The density matrix of this state in the energy eigenbasis has the form \begin{equation} \hat\rho(t) = \vert\psi(t)\rangle\langle\psi(t)\vert = \left( \begin{array}{ccc} \vert c_1 \vert^2 & c_1 c_2^* \, e^{i( E_1-E_2) t} & \dots \\ & & \\ c_2 c_1^* \, e^{i(E_2-E_1)t} & \vert c_2 \vert^2 & \dots \\ \dots & \dots& \dots \\ \end{array} \right) \, . \label{rhonoreduc} \end{equation} If one performs a measurement on this system which takes time $T$, with $T\gg \vert E_1-E_2 \vert^{-1}$, such a measurement is effectively sensitive only to the time average of the density matrix over $T$.
Thus all the off diagonal matrix elements between states with large enough energy differences effectively vanish for the purpose of such a measurement, and our density matrix is equivalent to \begin{equation} \hat\rho \sim \left( \begin{array}{ccc} \vert c_1 \vert^2 & 0 & \dots \\ & & \\ 0 & \vert c_2 \vert^2 & \dots \\ \dots & \dots& \dots \\ \end{array} \right) \, , \label{rhoreduc} \end{equation} Therefore for the purpose of many measurements a pure state of this type is practically equivalent to a mixed density matrix with vanishing off diagonal matrix elements between eigenstates with vastly different energies, where the relevant energy scale is introduced by the time resolution of the measuring apparatus. Such a mixed density matrix has, for example, a non vanishing von Neumann entropy. In the context of the present paper the wave function $\psi(t)$ is the wave function of a stream of particles emerging from a scattering event which took place at $t=0$. Having a finite time resolution is not only inevitable practically, but is required by the Heisenberg uncertainty relation for an experiment that measures particle energies. Ideally, in the case of an experiment able to resolve energies with great precision, the energy-time uncertainty relationship dictates that the time resolution $T$ of the detector should be very large. This implies that the off diagonal terms in this ``measurement-averaged'' density matrix will become practically negligible. The technical question one should ask is how to sensibly define the ``time averaged'' density matrix for a single event. Directly averaging the operator $\hat \rho$ over $T$ does not seem to be the right thing to do, since in general such an averaging procedure does not preserve all the physical properties of the density matrix, such as the positivity of its eigenvalues. The preceding qualitative discussion suggests, however, a physical way of doing this.
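The transition from eq.(\ref{rhonoreduc}) to eq.(\ref{rhoreduc}) is simple to illustrate numerically. In the sketch below a two-level pure state (with arbitrarily chosen toy energies and amplitudes) is time-averaged over a window $T$; the off diagonal elements survive for $T\ll \vert E_1-E_2\vert^{-1}$ and wash out for $T\gg \vert E_1-E_2\vert^{-1}$:

```python
import numpy as np

# |psi(t)> = c1 e^{-i E1 t}|1> + c2 e^{-i E2 t}|2>, averaged over t in [0, T].
E = np.array([1.0, 25.0])   # toy energy eigenvalues
c = np.array([0.8, 0.6])    # toy amplitudes, |c1|^2 + |c2|^2 = 1

def rho_avg(T, nsamp=20000):
    ts = np.linspace(0.0, T, nsamp)
    rho = np.zeros((2, 2), dtype=complex)
    for t in ts:
        psi = c * np.exp(-1j * E * t)
        rho += np.outer(psi, psi.conj())
    return rho / nsamp

rho_short = rho_avg(T=0.01)  # T << 1/|E1 - E2|: essentially still pure
rho_long = rho_avg(T=10.0)   # T >> 1/|E1 - E2|: off-diagonals dephased

print(abs(rho_short[0, 1]))  # close to |c1 c2| = 0.48
print(abs(rho_long[0, 1]))   # close to 0
```

The diagonal entries $\vert c_1\vert^2$ and $\vert c_2\vert^2$ are untouched by the averaging, exactly as in eq.(\ref{rhoreduc}).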
Since the coherence of the outgoing quantum state is broken due to the interaction with the measuring apparatus, one should introduce such an apparatus as a separate degree of freedom coupled to the soft gluons. In particular, if the apparatus is a ``calorimeter'', i.e. measures particle energies, then the additional degree of freedom should directly couple to the energy of the state. The state of the ``calorimeter degree of freedom'' itself should contain information about the time resolution via some typical time scale. We can then define a proper time averaged density matrix of soft gluons by reducing it over the calorimeter degree of freedom. This procedure produces a density matrix that satisfies the correct probability properties and which also incorporates the physics of time averaging of the original $\hat \rho$. The procedure we have described is not unique, as in principle it depends on the exact state of the calorimeter degree of freedom. However we expect that for all practical purposes it should not matter much how exactly we specify this state as long as it incorporates the appropriate time resolution scale. In the following we implement these ideas by choosing the simplest possible option - endowing the calorimeter degree of freedom with a simple Gaussian wave function. This corresponds to coupling of the soft gluons to white noise. 
The extended density matrix (\ref{notnewrho}), now defined on a larger Hilbert space which contains an additional degree of freedom $\xi$, is \begin{equation}\label{rhoxi} \hat{\rho}_{P,\xi} = e^{-i H\xi}U(0,t)\, \hat S\, \Omega \vert G\rangle\otimes\vert 0\rangle \otimes\vert v\rangle\langle v\vert\otimes\langle 0 \vert \otimes\langle G\vert \Omega^{\dagger}\, \hat S^\dagger\, U^{\dagger}(0,t) e^{i H\xi}\, , \end{equation} with \begin{equation} \langle \xi\vert G\rangle=e^{- \frac{\xi^2 }{2T^2}}\, . \end{equation} As before, dressing this density matrix with the operator $\Omega^\dagger$, and concentrating on the matrix element in the $\xi$-space, we are led to consider \begin{eqnarray} \langle \xi_1\vert\hat \rho'_{P,\xi}\vert \xi_2\rangle &=& \frac{1}{\sqrt \pi\, T}\, e^{- \frac{\xi_1^2 + \xi_2^2}{2T^2}} e^{-i H_0 \xi_1}\, e^{i \int_q \Delta \tilde{b}^a_i(q)\, \phi^a_i(\qp,{ \bm q})} \vert 0\rangle \langle 0 \vert e^{-i \int_{ \bm p} \Delta \tilde{b}^b_j(p)\, \phi^b_j({p^+},{ \bm p})} \, e^{i H_0 \xi_2} \, , \label{newrho} \end{eqnarray} where the normalization is dictated by the requirement $\Trace \hat\rho = 1$. The reduced density matrix of the soft gluons is then calculated as \begin{eqnarray} \hat{\rho}'_{P,\xi} &=& \int_\xi \frac{1}{\sqrt \pi\, T}\, e^{- \frac{\xi^2 }{T^2}} e^{-i H_0 \xi}\, e^{i \int_q \Delta \tilde{b}^a_i(q)\, \phi^a_i(\qp,{ \bm q})} \vert 0\rangle \langle 0 \vert e^{-i \int_{ \bm p} \Delta \tilde{b}^b_j(p)\, \phi^b_j({p^+},{ \bm p})} \, e^{i H_0 \xi} \, . \label{newrho1} \end{eqnarray} We stress again that the formal extension of the Hilbert space allows us to interpret the incomplete experimental knowledge of the wave function in terms of entanglement between the system and the finite-resolution experimental apparatus. The effect of this apparatus is to decohere energy eigenstates which have large enough energy differences.
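The effect of the Gaussian trace can be isolated in a one-line integral: averaging a phase $e^{i\omega\xi}$ against the weight $e^{-\xi^2/T^2}/(\sqrt{\pi}\,T)$ gives the suppression factor $e^{-\omega^2T^2/4}$, so energy differences $\omega\gg 1/T$ decohere. A short numerical check (the quadrature parameters are arbitrary choices):

```python
import numpy as np

# int dxi e^{-xi^2/T^2} e^{i w xi} / (sqrt(pi) T) = e^{-w^2 T^2 / 4}
def gaussian_average(w, T, n=400001, cutoff=8.0):
    xi, dxi = np.linspace(-cutoff * T, cutoff * T, n, retstep=True)
    weight = np.exp(-xi**2 / T**2) / (np.sqrt(np.pi) * T)
    return np.sum(weight * np.exp(1j * w * xi)) * dxi

for w, T in [(1.0, 1.0), (2.0, 3.0), (5.0, 0.5)]:
    exact = np.exp(-w**2 * T**2 / 4.0)
    print(abs(gaussian_average(w, T) - exact))  # tiny quadrature error
```

Products of two such independent averages are what generate the $e^{-E_q^2T^2/2}$ suppression factors encountered in the entropy calculation of the next section.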
The decoherence occurs over the time $T$, and thus the $T$-dependence of the density matrix eq.(\ref{newrho1}) can be interpreted in terms of time evolution of the density matrix associated with the final state wave function of the hadronic system in a single event. This density matrix has an associated von Neumann entanglement entropy. This entropy describes the increasing loss of information which is ascribable to the decoherence of different energy modes, in the sense discussed above. \section{Event by event entropy production in the weak field limit}\label{enenfixedrho} Our goal now is to calculate the von Neumann entropy associated with the density matrix eq.(\ref{newrho1}). The canonical expression of the von Neumann entropy for a system described by a quantum density matrix $\hat \rho$ is \begin{equation} \sigma^E = - \Trace [\hat\rho \log\hat\rho]\, . \end{equation} This is frequently calculated using the so-called replica trick, by representing it as \begin{equation} - \hat \rho \log \hat \rho = - \lim_{\eps \rightarrow 0} \frac{\hat \rho^{1+\eps} - \hat \rho}{\eps} \, , \label{replica} \end{equation} and therefore \begin{equation}\sigma^E=-\lim_{\eps \rightarrow 0}\frac{\Trace[\hat \rho^{1+\eps}]-1}{\eps} \, . \label{repent} \end{equation} The $\xi$ integral in eq.(\ref{newrho1}) is Gaussian, and therefore can in principle be performed exactly. This however leads to a complicated integration over the fields $\phi$, and we were not able to calculate the entropy for arbitrary $\rho$. However it turns out to be possible to perform the calculation for a small number of produced particles, i.e.\ at small $\Delta\tilde{b}^2$. This calculation is the subject of the present section. Even this calculation turns out to be not completely trivial, as a naive expansion in the small parameter $\Delta\tilde{b}^2$ is divergent.
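Before proceeding, the replica representation eq.(\ref{repent}) itself is easy to test numerically: for any positive unit-trace matrix the finite-$\eps$ expression converges to $-\Trace[\hat\rho\log\hat\rho]$. A minimal sketch (a random $4\times4$ toy density matrix, unrelated to the $\hat\rho$ of the text):

```python
import numpy as np

# Random positive unit-trace density matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

lam = np.linalg.eigvalsh(rho)                # real, positive eigenvalues
S_vN = -np.sum(lam * np.log(lam))            # -Tr[rho log rho]

eps = 1e-6                                   # finite-eps version of eq. (repent)
S_replica = -(np.sum(lam**(1.0 + eps)) - 1.0) / eps

print(S_vN, S_replica)  # the two agree up to O(eps) corrections
```

Since $\Trace[\hat\rho^{1+\eps}]=\sum_i\lambda_i e^{\eps\log\lambda_i}=1+\eps\sum_i\lambda_i\log\lambda_i+O(\eps^2)$, the finite-$\eps$ error is of order $\eps$.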
\subsection{Naive replica trick does not work} At small values of the field $b$, it would naively seem that one can expand the integrand in eq.(\ref{newrho1}) in powers of $\Delta\tilde{b}^2$. A little thought however shows that this is not possible. The issue is that for vanishing $\Delta \tilde{b}^2$, $\hat\rho$ is the density matrix of a pure state. It therefore has one eigenvalue equal to $1$, while all the other eigenvalues vanish. At small but finite $\Delta \tilde{b}^2$ one expects all eigenvalues to change by an amount proportional to $\Delta \tilde{b}^2$. Thus we expect the density matrix eq.(\ref{newrho1}) to have one eigenvalue close to unity, $\Delta_1\equiv 1-\delta_1$, and all other eigenvalues $\delta_i$ small, so that all $\delta_i\sim \Delta \tilde{b}^2$. If this is the case, the main contribution to the entropy arises from the small eigenvalues and it should behave as \begin{equation} \sigma^E\sim -\Delta \tilde{b}^2\ln[\Delta \tilde{b}^2]\, . \end{equation} Such an expression clearly cannot be expanded in powers of $\Delta \tilde{b}^2$, and thus we expect to hit a divergence if we try to do so. This is indeed what happens. We have performed this calculation explicitly and have verified that in the limit $\epsilon\rightarrow 0$ the entropy defined in eq.(\ref{repent}), with $\hat\rho$ expanded in $\Delta\tilde{b}^2$, diverges. We will therefore employ a different strategy than just expanding the entropy in powers of the small parameter. Our logic is the following. Since corrections to all eigenvalues of $\hat\rho$ are governed by the same small parameter, we expect \begin{equation} \delta_{i\geq 2} = \beta_i \, \delta_1 \, , \quad \, \beta_i\geq 0\, ,\quad\, \sum_{i\geq 2} \beta_i = 1\, . \end{equation} The second equality follows from the fact that \begin{equation} \Trace \hat \rho=1-\delta_1+\sum_{i\geq 2}\delta_i=1 \, .
\end{equation} In terms of these eigenvalues the entropy is expressed as \begin{equation} \sigma^E=-\left[(1-\delta_1)\log(1-\delta_1)+\sum_{i\geq 2}\delta_i\log\delta_i\right]= -\delta_1 \log \delta_1 - \delta_1 \sum_{i\geq 2} \beta_i \ln \beta_i -(1-\delta_1)\log(1-\delta_1)\approx -\delta_1 \log \delta_1\, . \end{equation} The last equality is valid to leading logarithmic order in the small parameter at hand, $\Delta\tilde{b}^2$. Our goal therefore should be to find $\delta_1$. We will use the replica trick to do it, but we will employ the replicas in a somewhat different way than the one outlined at the beginning of this section. We note that, since $\Delta_1$ is the largest eigenvalue of the density matrix and the only one that does not vanish at small $\Delta\tilde{b}^2$, it must be true that \begin{equation} \Trace[\hat\rho^N]=\Delta^N_1+O\left((\Delta\tilde{b}^2)^N\right)\ \stackrel{N\rightarrow\infty}{\longrightarrow}\ \Delta^N_1=1-N\delta_1+\frac{N(N-1)}{2}\delta_1^2+\dots \end{equation} Thus calculating $\Trace[\hat\rho^N]$ at large $N$, and subsequently expanding in $\Delta\tilde{b}^2$, we directly extract $\delta_1$ and therefore the entropy to leading logarithmic accuracy. This will be our strategy. \subsection{Calculating the entropy} The first thing to do is to reduce the integrand of this expression to a function of $\phi$, i.e. to get rid of $H_0$ in the exponential. For this we use \begin{equation}\label{phit} e^{-i H_0 \xi} \phi^a_i(\qp,{ \bm q})e^{i H_0 \xi}= \phi^a_i(\qp,{ \bm q})\cos(E_q\xi)- \pi^a_i(\qp,{ \bm q})\sin(E_q\xi) \, , \end{equation} with $E_q = { \bm q}^2/2\qp$.
Calculating the matrix element of the density matrix (\ref{newrho1}) we find \begin{eqnarray} &&\langle\phi^1\vert\hat\rho'_{P,\xi}\vert\phi^2\rangle= \frac{1}{\sqrt \pi\, T} \int d\xi\, e^{-\int_{ \bm q} \frac{\rho^a({ \bm q})\rho^a(-{ \bm q})}{2\mu^2({ \bm q})} }\, e^{- \frac{\xi^2 }{T^2}} e^{i \int_q \Delta \tilde{b}^a_i(q)\, \left[\phi^{1a}_i(\qp,{ \bm q})-\phi^{2a}_i(\qp,{ \bm q})\right]\cos(E_q\xi)} \\ &\times&e^{-\frac{1}{4}\int_q\left\{\left[\phi^{1a}_i(q^+,{ \bm q})-2\Delta\tilde{b}^a_i(q)\sin(E_q\xi)\right]\left[\phi^{1a}_i(q^+,-{ \bm q})-2\Delta\tilde{b}^a_i(-q)\sin(E_q\xi)\right]+\left[\phi^{2a}_i(q^+,{ \bm q})-2\Delta\tilde{b}^a_i(q)\sin(E_q\xi)\right]\left[\phi^{2a}_i(q^+,-{ \bm q})-2\Delta\tilde{b}^a_i(-q)\sin(E_q\xi)\right]\right\}}\nonumber\\ &=& \frac{1}{\sqrt \pi\, T} \int d\xi\, e^{- \frac{\xi^2 }{T^2}} e^{-\int_{ \bm q} \left\{\frac{\rho^a({ \bm q})\rho^a(-{ \bm q})}{2\mu^2({ \bm q})} +i\Delta\tilde{b}^a_i(q)\left[\exp(iE_q\xi)\phi^{2a}_i(q)-\exp(-iE_q\xi)\phi^{1a}_i(q)\right]+2\Delta\tilde{b}^a_i(q)\Delta\tilde{b}^a_i(-q)\sin^2(E_q\xi)\right\}}\nonumber\\&\times& e^{-\frac{1}{4}\int_q\left[\phi^{1a}_i(q^+,{ \bm q})\phi^{1a}_i(q^+,-{ \bm q})+\phi^{2a}_i(q^+,{ \bm q})\phi^{2a}_i(q^+,-{ \bm q})\right] } \, . \nonumber \end{eqnarray} Using this expression we have \begin{eqnarray} \Trace[\left(\hat\rho'_{P,\xi}\right)^N]&=&\left[ \frac{1}{\sqrt \pi\, T} \right]^N \int\prod_{\alpha=1}^N \left[\mathcal D \phi_i^{a\alpha} d\xi_\alpha\right]\, e^{- \frac{\xi_\alpha^2 }{T^2}} e^{-\int_{ \bm q} \left\{2\Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^a_{i}(-q)\sin^2(E_q\xi_\alpha)\right\}}\nonumber\\ &\times&e^{-\int_q\left\{\frac{1}{2}\phi^{\alpha a}_i(q^+,{ \bm q})\phi^{\alpha a}_i(q^+,-{ \bm q})+i\left[\Delta\tilde{b}^a_{i}(q)\exp(iE_q\xi_{\alpha-1})-\Delta\tilde{b}^a_{i}(q)\exp(-iE_q\xi_{\alpha})\right]\phi^{\alpha a}_i(q)\right\}} \, . 
\end{eqnarray} With the understanding $\xi_0=\xi_N$ and $\phi_0=\phi_N$, the integral over $\phi^\alpha$ is easily performed with the result \begin{eqnarray} \Trace[\left(\hat\rho'_{P,\xi}\right)^N] &=&\left[ \frac{1}{\sqrt \pi\, T} \right]^N \int\prod_{\alpha=1}^N \left[ d\xi_\alpha\right]\, e^{- \frac{\xi_\alpha^2 }{T^2} }\nonumber\\ &\times&e^{-\int_q \left[\Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^a_{i}(-q)-\Delta\tilde{b}^a_{i}(q) \Delta\tilde{b}^a_{i}(q)\exp\left(iE_q(\xi_{\alpha-1}-\xi_\alpha)\right)\right]} \, . \label{rhon1} \end{eqnarray} It is now straightforward to expand the integrand in $\Delta\tilde{b}^2$ and perform the $\xi$ integrals at any finite order. The eigenvalue $\delta_1$ can then be read off from the term proportional to $N$. Keeping the first two terms we get \begin{equation} \delta_1=\delta_1(1)+\delta_1(2) \, , \end{equation} with \begin{eqnarray}\label{delta12} \delta_1(1)&=& \int_q \Delta \tilde{b}^2(q)\, \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} \bigg)\\ \delta_1(2)&=&\int_{q,p} \Delta \tilde{b}^2(q) \Delta \tilde{b}^2(p) \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} -e^{-\frac{E_p^2T^2}{2}}+e^{-\frac{(E_q+E_p)^2T^2}{2}}\bigg) \, . \nonumber \end{eqnarray} Note that within our approach it is perfectly legal to keep terms in $\delta_1$ beyond the leading order in $\Delta\tilde{b}^2$. The first correction to $\Trace [\hat\rho^N]$ due to $\delta_i, \ \ i\ge 2$ appears only at order $\Delta\tilde{b}^{2N}$, and thus does not interfere with the extraction of higher order terms in $\delta_1$. Thus, in the weak field limit the single event entropy, up to logarithmic accuracy, is given by \begin{equation}\label{entro} \sigma^E_{\Delta b^2 \ll 1} \simeq - \delta_1 \log \delta_1 \approx - \int_q \Delta \tilde{b}^2(q)\, \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} \bigg) \log\left[ \int_q \Delta \tilde{b}^2(q)\, \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} \bigg)\right]\, .
\end{equation} Including the first correction of eq.(\ref{delta12}) gives \begin{eqnarray}\label{entro1} \sigma^E_{\Delta b^2 \ll 1} &\approx& - \left[ \int_q \Delta \tilde{b}^2(q)\, \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} \bigg) +\int_{q,p} \Delta \tilde{b}^2(q) \Delta \tilde{b}^2(p) \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} -e^{-\frac{E_p^2T^2}{2}}+e^{-\frac{(E_q+E_p)^2T^2}{2}}\bigg)\right]\nonumber\\ &\times&\log \left[\int_q \Delta \tilde{b}^2(q)\, \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} \bigg)\right]\, . \end{eqnarray} Note that we have not added the correction to $\delta_1$ under the logarithm, since that would exceed the leading logarithmic accuracy to which our results are valid. \subsection{Interpreting the entropy} Eq.(\ref{entro}) has a very simple physical interpretation. Recall that, within the CGC formalism, the total number (multiplicity) of soft gluons produced in a given event (at fixed $\rho^a$ and fixed $S$) is given by \begin{equation} n= \int_q \Delta \tilde{b}^2(q)\, . \end{equation} The simplest naive expression for the Boltzmann entropy of a system of $n$ particles would be \begin{equation} \sigma=-n\log n \, . \end{equation} Eq.(\ref{entro}) has a very similar form. If we interpret $T$ as the time at which the entropy is calculated, and introduce the number of particles produced up to time $T$ as \begin{equation}\label{nt} n(T)= \int_q \Delta \tilde{b}^2(q)\, \bigg( 1 - e^{-\frac{E_q^2T^2}{2}} \bigg) \, , \end{equation} the entropy given by eq.(\ref{entro}) becomes simply \begin{equation} \sigma^E(T)=-n(T)\log n(T) \, . \end{equation} The interpretation of eq.(\ref{nt}) as the number of produced particles is also very natural. Essentially it says that up to time $T$ only those particles which have energies $E_q>1/T$ are actually produced. These are precisely the particles that ``decohere'' from the rest of the outgoing wave function during the time $T$, since the phases of their individual wave functions change by an amount which is at least of order one.
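Two quick numerical checks of the formulas above (a sketch; the mode energies and weights are invented for illustration). First, the factor $e^{-E_q^2T^2/2}$ in eq.(\ref{delta12}) is simply the average of the replica phase $e^{iE_q(\xi_{\alpha-1}-\xi_\alpha)}$ over two independent Gaussian weights $e^{-\xi^2/T^2}/(\sqrt{\pi}\,T)$. Second, with a discrete toy spectrum in place of the integral over $q$, $n(T)$ of eq.(\ref{nt}) grows monotonically and saturates at the total multiplicity, with $\sigma^E(T)=-n(T)\log n(T)$ growing alongside.

```python
import numpy as np

# --- Check 1: <exp(i E (xi1 - xi2))> = exp(-E^2 T^2 / 2) ---
T, E = 1.3, 0.8                        # arbitrary resolution time and energy
xi = np.linspace(-12 * T, 12 * T, 40001)
w = np.exp(-xi**2 / T**2) / (np.sqrt(np.pi) * T)     # normalized weight
char1 = np.sum(w * np.exp(1j * E * xi)) * (xi[1] - xi[0])
assert abs(char1 - np.exp(-E**2 * T**2 / 4)) < 1e-8  # single replica
char2 = char1 * np.conj(char1)                       # two independent replicas
assert abs(char2 - np.exp(-E**2 * T**2 / 2)) < 1e-8

# --- Check 2: n(T) of eq. (nt) for an invented discrete spectrum ---
E_q = np.array([0.2, 0.5, 1.0, 2.0, 5.0])            # mode energies
db2 = np.array([0.04, 0.03, 0.02, 0.01, 0.005])      # weak-field weights

def n_of_T(t):
    return np.sum(db2 * (1.0 - np.exp(-E_q**2 * t**2 / 2.0)))

Ts = [0.1, 0.5, 1.0, 5.0, 50.0]
ns = [n_of_T(t) for t in Ts]
assert all(a < b for a, b in zip(ns, ns[1:]))        # monotonic decoherence
assert abs(ns[-1] - db2.sum()) < 1e-4                # saturation at n_total
sig = [-n * np.log(n) for n in ns]                   # -n log n, here n < 1/e
assert all(a < b for a, b in zip(sig, sig[1:]))      # entropy grows with T
```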
If the time $T$ is short, only very energetic gluons are produced, while if one waits an infinite amount of time, all gluons that are present in the wave function immediately after scattering decohere from each other and are therefore produced in the final state. From the point of view of our ``calorimeter'' degree of freedom this is equivalent to saying that in order to resolve produced gluons with very small energies, the calorimetric measurement must be allowed to take a very long time $T>1/E_q$. Thus at time $T=0$ our state has vanishing entropy, since it is just the pure state which emerges from the scattering region, as no individual gluon states can be resolved, while at $T\rightarrow \infty$ it has the entropy of a completely incoherent ensemble of all gluons produced in the given event. \section{Time dependent entropy for the ensemble of events}\label{enenavgrho} In the previous section we calculated the time dependence of the entropy for a single event, which arises due to the interaction with the calorimeter (``white noise''). It would be interesting to combine this with the calculation of~\cite{Kovner:2015hga}, which gave the entropy for the ensemble of events, but without time evolution, i.e. treating every single event density matrix as a pure state. In this section we will do just that. \subsection{Calculation of the entropy} Our starting point is the expression for the soft gluon density matrix, which has to be reduced over the valence modes as well as the white noise degree of freedom.
We use the McLerran-Venugopalan model for the distribution of the valence degrees of freedom~\cite{McLerran:1993ni,McLerran:1993ka} \begin{eqnarray} \hat{\rho}_{P,\xi,\rho}[S] &=&\mathcal{N} \frac{1}{\sqrt \pi\, T} \int \left[\mathcal D \rho \right]d\xi\, e^{-\int_{ \bm q} \frac{\rho^a({ \bm q})\rho^a(-{ \bm q})}{2\mu^2({ \bm q})} }\, e^{- \frac{\xi^2 }{T^2}} e^{-i H_0 \xi}\, e^{i \int_q \Delta \tilde{b}^a_i(q)\, \phi^a_i(\qp,{ \bm q})} \vert 0\rangle \langle 0 \vert e^{-i \int_{ \bm p} \Delta \tilde{b}^b_j(p)\, \phi^b_j({p^+},{ \bm p})} \, e^{i H_0 \xi} \, . \label{redrho1} \end{eqnarray} At this point our density matrix is defined for a fixed target field which enters via the dependence of $\Delta\tilde{b}$ on the Wilson line $S$. Since we would like to calculate the entropy for the complete ensemble of events we should also average over all possible configurations of the target. The consistent way of doing this is to treat the target fields as what they are, i.e. additional quantum degrees of freedom. Eq.(\ref{redrho1}) then defines the diagonal element of the density matrix in the target field basis. We should however reduce the density matrix over all degrees of freedom except the soft gluon fields. To achieve this we have to trace over the target fields. Thus we consider \begin{eqnarray} \hat{\rho}_{P,\xi,\rho} &=&\mathcal{N} \frac{1}{\sqrt \pi\, T} \int \left[\mathcal D \rho \right] \left[\mathcal D S \right]d\xi\, W[S]e^{-\int_{ \bm q} \frac{\rho^a({ \bm q})\rho^a(-{ \bm q})}{2\mu^2({ \bm q})} }\, e^{- \frac{\xi^2 }{T^2}} e^{-i H_0 \xi}\, e^{i \int_q \Delta \tilde{b}^a_i(q)\, \phi^a_i(\qp,{ \bm q})} \vert 0\rangle \langle 0 \vert e^{-i \int_{ \bm p} \Delta \tilde{b}^b_j(p)\, \phi^b_j({p^+},{ \bm p})} \, e^{i H_0 \xi} \, . \nonumber \\ \label{redrho} \end{eqnarray} Here $W[S]$ is the normalized weight functional arising from the target wave function. We leave it unspecified for now. 
As in the previous section we now get rid of $H_0$ in the exponential using eq.(\ref{phit}). Calculating the matrix element of the density matrix we find \begin{eqnarray} \label{redmat} &&\langle\phi^1\vert\hat\rho_{P,\xi,\rho}\vert\phi^2\rangle= \frac{1}{\sqrt \pi\, T} \int \left[\mathcal D \rho \right]\left[\mathcal D S \right]d\xi\, W[S]\, e^{-\int_{ \bm q} \frac{\rho^a({ \bm q})\rho^a(-{ \bm q})}{2\mu^2({ \bm q})} }\, e^{- \frac{\xi^2 }{T^2}} e^{i \int_q \Delta \tilde{b}^a_i(q)\, \left[\phi^{1a}_i(\qp,{ \bm q})-\phi^{2a}_i(\qp,{ \bm q})\right]\cos(E_q\xi)} \\ &\times&e^{-\frac{1}{4}\int_q\left\{\left[\phi^{1a}_i(q^+,{ \bm q})-2\Delta\tilde{b}^a_i(q)\sin(E_q\xi)\right]\left[\phi^{1a}_i(q^+,-{ \bm q}) -2\Delta\tilde{b}^a_i(-q)\sin(E_q\xi)\right]+\left[\phi^{2a}_i(q^+,{ \bm q})-2\Delta\tilde{b}^a_i(q)\sin(E_q\xi)\right]\left[\phi^{2a}_i(q^+,-{ \bm q})-2\Delta\tilde{b}^a_i(-q)\sin(E_q\xi)\right]\right\}}\nonumber\\ &=&\mathcal{N} \frac{1}{\sqrt \pi\, T} \int \left[\mathcal D \rho \right]\left[\mathcal D S \right]d\xi\, W[S]\, e^{- \frac{\xi^2 }{T^2}} e^{-\int_{ \bm q} \left\{\frac{\rho^a({ \bm q})\rho^a(-{ \bm q})}{2\mu^2({ \bm q})} +i\Delta\tilde{b}^a_i(q)\left[\exp(iE_q\xi)\phi^{2a}_i(q)-\exp(-iE_q\xi)\phi^{1a}_i(q)\right]+2\Delta\tilde{b}^a_i(q)\Delta\tilde{b}^a_i(-q)\sin^2(E_q\xi)\right\}} \nonumber\\ &\times& e^{-\frac{1}{4}\int_q\left[\phi^{1a}_i(q^+,{ \bm q})\phi^{1a}_i(q^+,-{ \bm q})+\phi^{2a}_i(q^+,{ \bm q})\phi^{2a}_i(q^+,-{ \bm q})\right] }\, .\nonumber \end{eqnarray} We will now employ the same logic as in the previous section to calculate the entropy in the weak field limit. We note that for weak fields, $\Delta\tilde{b}=0$, our density matrix describes a pure state. This means that for small $\Delta\tilde{b}$ it has one eigenvalue $\Delta=1-\delta_1$ which is close to unity, and others that are small. As we explained in the previous section, to calculate the entropy of such density matrix we only need to find $\delta_1$. 
We will do it, as before by calculating $\Trace [\hat\rho^N]$ for large $N$. We will set up this calculation a little differently than in the previous section. We keep the reduced density matrix in the form of eq.(\ref{redmat}) while calculating the trace. Obviously for each factor of $\hat\rho$ in the product of density matrices we have to introduce its own $\rho$ and $\xi$, and so these fields also acquire a replica index. We can then write \begin{eqnarray} \Trace[\hat\rho^N_{P,\xi,\rho}] &=& \left[ \mathcal{N} \frac{1}{\sqrt \pi\, T} \right]^N \int\prod_{\alpha=1}^N \left[\mathcal D \rho_\alpha \mathcal D S_\alpha d\xi_\alpha\right]\, W[S_\alpha]e^{- \frac{\xi_\alpha^2 }{T^2}} e^{-\int_{ \bm q} \left\{\frac{\rho_\alpha^a({ \bm q})\rho_\alpha^a(-{ \bm q})}{2\mu^2({ \bm q})}+2\Delta\tilde{b}^a_{i\alpha}(q)\Delta\tilde{b}^a_{i\alpha}(-q)\sin^2(E_q\xi_\alpha)\right\}} \nonumber\\ &\times&e^{-\int_q\left\{\frac{1}{2}\phi^{\alpha a}_i(q^+,{ \bm q})\phi^{\alpha a}_i(q^+,-{ \bm q})+ i \left[\Delta\tilde{b}^a_{i(\alpha-1)}(q)\exp(iE_q\xi_{\alpha-1})-\Delta\tilde{b}^a_{i\alpha}(q)\exp(-iE_q\xi_{\alpha})\right]\phi^{\alpha a}_i(q)\right\}} \, . 
\end{eqnarray} The integral over $\phi^\alpha$ is easily performed with the result \begin{eqnarray} \Trace[\hat\rho^N_{P,\xi,\rho}]&=&\left[ \mathcal{N} \frac{1}{\sqrt \pi\, T} \right]^N \int\prod_{\alpha=1}^N \left[\mathcal D \rho_\alpha \mathcal D S_\alpha d\xi_\alpha\right]\, W[S_\alpha] e^{- \left\{\int_{ \bm q}\frac{\rho_\alpha^a({ \bm q})\rho_\alpha^a(-{ \bm q})}{2\mu^2({ \bm q})} +\frac{\xi_\alpha^2 }{T^2}\right\}}e^{-\int_q 2\Delta\tilde{b}^a_{i\alpha}(q)\Delta\tilde{b}^a_{i\alpha}(-q)\sin^2(E_q\xi_\alpha) }\nonumber\\ &\times&e^{-\frac{1}{2}\int_q \left[\Delta\tilde{b}^a_{i(\alpha-1)}(q)\exp(iE_q\xi_{\alpha-1})-\Delta\tilde{b}^a_{i\alpha}(q)\exp(-iE_q\xi_{\alpha})\right] \left[\Delta\tilde{b}^a_{i(\alpha-1)}(-q)\exp(iE_q\xi_{\alpha-1})-\Delta\tilde{b}^a_{i\alpha}(-q)\exp(-iE_q\xi_{\alpha})\right] }\nonumber\\ &=&\left[ \mathcal{N} \frac{1}{\sqrt \pi\, T} \right]^N \int\prod_{\alpha=1}^N \left[\mathcal D \rho_\alpha \mathcal D S_\alpha d\xi_\alpha\right]\, W[S_\alpha]e^{- \left\{\int_{ \bm q}\frac{\rho_\alpha^a({ \bm q})\rho_\alpha^a(-{ \bm q})}{2\mu^2({ \bm q})} +\frac{\xi_\alpha^2 }{T^2}\right\}}\nonumber\\ &\times&e^{-\int_q \left[\Delta\tilde{b}^a_{i\alpha}(q)\Delta\tilde{b}^a_{i\alpha}(-q)-\Delta\tilde{b}^a_{i(\alpha-1)}(q) \Delta\tilde{b}^a_{i\alpha}(q)\exp\left(iE_q(\xi_{\alpha-1}-\xi_\alpha)\right)\right]} \, , \label{rhon} \end{eqnarray} with the understanding $\xi_0=\xi_N$, $S_0=S_N$ and $\rho_0=\rho_N$. This expression allows us to calculate $\delta_1$ in expansion in powers of $\mu^2$, i.e. in the weak field limit. We just expand the exponent in eq.(\ref{rhon}) to leading order in $\Delta\tilde{b}^2$. In this calculation the cross term between different replica fields vanishes due to the symmetry of the replica $\rho$ integral. 
Expanding and collecting terms we find \begin{equation} \Trace[\hat\rho^N_{P,\xi,\rho}]=1-N\langle \int_q \Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^a_{i}(-q)\rangle_{(\rho,S)} \end{equation} where we have introduced the notation $\langle \dots \rangle_{(\rho,S)}$ to denote the averages w.r.t. the projectile and the target density matrix; and thus \begin{equation}\label{d11} \delta_1=\langle \int_q \Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^a_{i}(-q)\rangle_{(\rho, S)} \end{equation} This is an exceedingly simple and somewhat unexpected result. We find that the eigenvalue and, therefore, the entropy of the density matrix averaged over the event ensemble does not depend on time in the weak field limit. Note on the other hand that, if we do not integrate over $\rho$ (and $S$), but simply perform this calculation at fixed value of $\rho$, which we did in the previous section, we have an additional time dependent contribution, as discussed above. It is also straightforward to calculate the next correction in powers of $\mu^2$. Only the terms local in replica space contribute to the result for the eigenvalue: \begin{equation}\label{d1} \delta_1=\langle \int_q \Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^a_{i}(-q)\rangle_{(\rho,S)}-\frac{1}{2}\left[\langle\left[\int_q \Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^a_{i}(-q)\right]^2\rangle_{(\rho,S)}+\int_{q,p}\langle \Delta\tilde{b}^a_{i}(q)\Delta\tilde{b}^b_{j}(p)\rangle_\rho\langle\Delta\tilde{b}^a_{i}(-q) \Delta\tilde{b}^b_{j}(-p)\rangle_{(\rho, S)} e^{-\frac{(E_q+E_p)^2T^2}{2}}\right] \end{equation} The entropy of the whole ensemble of events is therefore \begin{equation} \sigma^E=-\delta_1\log \delta_1 \end{equation} with $\delta_1$ given in eq.(\ref{d1}). We note that the time dependence does survive in eq.(\ref{d1}). Moreover its qualitative behaviour is reasonable, i.e. the absolute value of $\delta_1$ is an increasing function of time $T$, and therefore the entropy grows with time. 
One does indeed expect such a trend, since the entropy produced in an individual event grows with time as soft gluons decohere from each other. \subsection{Interpretation of the time dependence} As noted above, the time dependence of the entropy for the ensemble of events is much weaker than for a single event. This calls for some explanation. We believe that the reason for this lies in the so-called monogamy of entanglement~\cite{Koashi:2004men}. It is known that if a system $A$ is maximally entangled with a system $B$, then neither of these systems can be entangled with a third system $C$. A plausible interpretation of the time independence of eq.(\ref{d11}) is that for very weak fields the soft gluon degrees of freedom are maximally entangled with the valence charges. By the monogamy of entanglement, coupling to another degree of freedom (in our case $\xi$) then does not change the entropy of $\phi$. For stronger fields the entanglement is not maximal and some time dependence survives, but it is significantly weaker than for a single event. \section{Conclusions}\label{conclusions} In this paper we have extended the approach of~\cite{Kovner:2015hga} by considering the entropy of the system of soft gluons produced in a single event in high-energy p-A collisions. Strictly speaking, the system of soft gluons produced in a single event is in a pure state and has zero entropy. However, in the energy eigenstate basis the off-diagonal matrix elements of the density matrix are strongly oscillating functions of time. When averaged over time they vanish, and such a density matrix corresponds to a mixed state with finite von Neumann entropy. Physically, averaging over time is always performed, since any experimental apparatus has finite time and energy resolution.
We therefore define the time dependent entropy as the entanglement entropy of the system of soft gluons and the ``calorimeter'', an auxiliary degree of freedom representing an apparatus that measures the energy of the outgoing gluons. We calculate the time dependent single event entropy in the limit of weak fields and observe that it has a simple interpretation as the Boltzmann entropy of the system of gluons produced up to the time at which the entropy is calculated. We also calculate the time dependent entropy for the ensemble of events, and find that it has a much weaker time dependence than the analogous quantity for an individual event. On a qualitative level we attribute this feature to the monogamy of entanglement. Our investigation in this paper is of an academic nature, given that entropy is not a directly measurable quantity. We hope, however, that this will be an important step towards understanding the quantum dynamics of colliding systems. In particular, the time evolution of entanglement observed in atomic systems has been related to possible eigenstate thermalization~\cite{Srednicki:1994eth}, of which some tentative evidence has recently been reported in condensed matter experiments~\cite{Kaufman:2016}. An analogous phenomenon could arise in high energy scattering, potentially explaining important features of the observed spectra of produced particles~\cite{Baker:2017wtt,Kharzeev:2017qzs}. The momentum space entanglement considered in this paper is {\it a priori} as good a source of entropy and disorder, possibly leading to thermalization, as its coordinate space counterpart. Since the CGC wave function is an explicit and controlled model of the hadronic wave function, it provides an interesting testing ground for this type of ideas. We hope to investigate this aspect of entanglement in future work.
\section{Introduction} Spallation reactions induced by high-energy protons or neutrons are of vital importance for both fundamental research and technical applications. The most important areas of application of these reactions are the spallation neutron sources \cite{Clausen-2003,ESS}, the energy production techniques based on accelerator driven systems (ADS) \cite{Daniel-1996,Nifenecker-1999}, the transmutation of radioactive waste \cite{Bowman-1998,Rubbia-1995,Nifenecker-2001,Wang-2016} and the radiation shield design for accelerators and space devices \cite{Koning-1998}. Other areas of application include the development of radioactive ion-beams in ISOL-type facilities \cite{Lewitowicz-2011,RIB-2013} and the production of medical isotopes \cite{Uyttenhove-2011,Abbas-2009}. The aforementioned applications require that spallation observables be known with high accuracy in a broad range of projectile energies. Many efforts have been devoted to obtaining experimental data on proton and neutron spallation reactions in the energy range (100--1000 MeV) with target materials used in the numerous applications. Because of the variety of the target nuclei and the wide range of energy, empirical systematics approaches (see, e.g. \cite{SPACS,Prokofiev-2001,Ma-2017} and references therein) and theoretical modelling \cite{spallation_review,Xu-2016,Sharma-2017} with satisfactory predictive power are indispensable. Recently, systematic high-quality experimental data on spallation reactions were obtained by the CHARMS collaboration at GSI, Darmstadt \cite{GSI}. In particular, the production of individual nuclides from charged-particle induced spallation reactions was measured with high resolution using the inverse kinematics technique with the magnetic spectrometer FRS. In parallel to the various experimental efforts, theoretical developments have led to advanced codes for the spallation process \cite{spallation_review}.
However, there are still uncertainties concerning the description of measured cross sections and other observables. Traditionally, the spallation reaction is described as a two-stage process. The first (fast) stage involves a series of quasifree nucleon-nucleon collisions, the intranuclear cascade, initiated by the incoming nucleon. Several codes have been developed to describe this stage, such as the traditional INC code ISABEL \cite{ISABEL}, the Li\`{e}ge Intranuclear Cascade Model INCL4.6 or INCL++ \cite{Boudard-2013,Mancusi-2014,Mancusi-2015} and the CRISP code \cite{Deppman-2013}. The second stage is described by a statistical deexcitation model, such as the binary decay code GEMINI \cite{Mancusi-2010}, the evaporation-fission code ABRA07 \cite{Kelic-2008} and the generalized evaporation model (GEM) \cite{Furihata-2000}. In the same vein, the Statistical Multifragmentation Model (SMM) \cite{Bondorf-1995,Botvina-2001,Botvina-2005} describes the compound nucleus processes (evaporation-fission) at low energies and the evolution toward multifragmentation at high energies. The primary goal of the present work is the description of the complete dynamics of the spallation process using the microscopic Constrained Molecular Dynamics (CoMD) model \cite{Papa-2001,Papa-2005}. We compared the CoMD results with recent experimental data from the literature and with two-stage calculations employing the INC code ISABEL followed by the deexcitation code SMM. In this work, we obtained mass yield curves, fission and residue cross sections and neutron multiplicities for the proton spallation of targets $^{181}$Ta, $^{208}$Pb and $^{238}$U at 200, 500, and 1000 MeV. We chose these targets because of the availability of relevant experimental data and the importance of these materials in applications, especially for accelerator-driven systems (ADS) and/or spallation neutron sources.
The paper is structured as follows: After a brief description of the theoretical models employed in this work, we present comparisons with experimental data from the literature. A discussion and conclusions follow. Finally, in the Appendix, details of the CoMD procedure and the treatment of the surface term are described. \section{Theoretical Models} \subsection{Microscopic Model: CoMD} The primary theoretical model employed in this work enabling us to describe the complete process of spallation is the microscopic Constrained Molecular Dynamics (CoMD) model originally designed for reactions near and below the Fermi energy. The present CoMD procedure is along the lines of our recent work on proton-induced fission \cite{Vonta-2015}. We note that the CoMD code is based on the general approach of molecular dynamics as applied to nuclear systems \cite{Aichelin-1991,Bonasera-1994}. In the CoMD code \cite{Papa-2001,Papa-2005,Giuliani-2014}, nucleons are described as localized Gaussian wave packets in coordinate and momentum space. The wave function of the nuclear system is assumed to be the product of these single-particle wave functions. With this Gaussian description, the N-body time-dependent Schr\"{o}dinger equation leads to (classical) Hamilton's equations of motion for the centroids of the nucleon wavepackets. The potential part of the Hamiltonian consists of a simplified Skyrme-like effective interaction and a surface term. Proper choice of the surface term was necessary to describe the fission/residue competition \cite{Vonta-2015} as detailed in the Appendix. The isoscalar part of the effective interaction corresponds to a symmetric nuclear matter compressibility of K=200 (soft equation of state). For the isovector part, several forms of the density dependence of the nucleon-nucleon symmetry potential are implemented in the code. Two of them were used in the present work, which we called the standard potential and the soft potential.
These forms correspond to a symmetry potential proportional to the density and its square root, respectively (see \cite{Papa-2013} and references therein). We note that in the CoMD model, while not explicitly implementing antisymmetrization of the N-body wavefunction, a constraint in the phase space occupation for each nucleon is imposed, restoring the Pauli principle at each time step of the evolution. This constraint restores the fermionic nature of the nucleon motion in the nuclear system. The short range (repulsive) nucleon-nucleon interactions are described as individual nucleon-nucleon collisions governed by the nucleon-nucleon scattering cross section, the available phase space and the Pauli principle, as usually implemented in transport codes. We point out that the present CoMD version fully preserves the total angular momentum (along with linear momentum and energy) \cite{Papa-2005}, features which are critical for the accurate description of the dynamics of fission and/or particle emission. In the present work, the CoMD code with its standard parameters was used. The calculations were performed with both the standard and the soft symmetry potentials. The ground state configurations of the target nuclei were obtained with a simulated annealing approach, and were tested for stability for relatively long times (1500--2000 fm/c). These configurations were used in the CoMD code for the subsequent simulations of the spallation reactions. For a given reaction, a total of approximately 10000 events were collected. For each event, the impact parameter of the collision was chosen in the range b = 0--b$_{max}$ following a triangular distribution (to ensure, as usual, a uniform distribution on the surface of a circle with radius b$_{max}$). Guided by the experimental fission cross section data and/or systematics, we chose the maximum impact parameter b$_{max}$ to be 7.0, 7.5 and 8.0 fm for the three systems p+Ta, p+Pb and p+U, respectively. 
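The triangular impact-parameter distribution used above (weight proportional to $b$, equivalent to a uniform distribution over the disk of radius b$_{max}$) can be generated by inverse-transform sampling; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_impact_parameter(b_max, n_events):
    """p(b) db ~ b db on [0, b_max]  <=>  b = b_max * sqrt(u), u uniform."""
    return b_max * np.sqrt(rng.random(n_events))

b = sample_impact_parameter(7.5, 200_000)   # b_max quoted for p + Pb (fm)

# Geometric check: the fraction of events with b < c should be (c/b_max)^2
frac = np.mean(b < 3.0)
assert abs(frac - (3.0 / 7.5)**2) < 0.01
```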
Each event was followed up to 15000 fm/c (5$\times$10$^{-20}$ s) and the phase space coordinates were registered every 100 fm/c. At each time step, fragments were recognized with the minimum spanning tree method \cite{Papa-2001,Papa-2005} and their properties were reported. Thus, information on the evolution of the system and the properties of the resulting residues or fission fragments were obtained. In this way, for fissioning events, the moment of scission of the deformed heavy nucleus was determined. For these events, we allowed an additional time of t$_{decay}$=5000 fm/c after scission for the nascent fission fragments to deexcite and we analyzed their properties. \subsection{Phenomenological two-stage description: INC/SMM} Apart from the microscopic CoMD calculations, that constitute the primary goal of this work, we also performed calculations based on the customary two-stage scenario of the intranuclear cascade followed by statistical deexcitation \cite{spallation_review}. For the description of the intranuclear cascade stage, we used the code ISABEL \cite{ISABEL} that we will simply refer to as INC in most of the instances in the following. This code is a well tested Monte-Carlo code with a long history of improvements. The target nucleus is simulated by a continuous medium bounded by a diffuse surface. Collisions between the incident nucleon and the nucleons of the target occur with a criterion based on the mean free path. Linear trajectories are assumed between successive collisions. Free nucleon-nucleon cross sections are used. The code allows for elastic and inelastic nucleon--nucleon collisions. Furthermore, it takes full account of Pauli blocking i.e. interactions resulting in nucleons falling below the Fermi sea are forbidden. From a given INC event we obtain the mass number, the atomic number, the velocity, the excitation energy and the angular momentum of the primary residue, as well as the kinematical parameters of the accompanying nucleons. 
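The minimum spanning tree fragment recognition used in the CoMD analysis above amounts to linking nucleons whose centroids lie within a coordinate-space cutoff, directly or through a chain of neighbors. A minimal union-find sketch (the cutoff value here is illustrative, not the one used in the CoMD code):

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def mst_fragments(positions, r_cut=3.0):
    """Cluster nucleons: two belong to the same fragment if their
    distance is below r_cut (fm), directly or through a chain."""
    n = len(positions)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < r_cut:
                parent[find(parent, i)] = find(parent, j)
    frags = {}
    for i in range(n):
        frags.setdefault(find(parent, i), []).append(i)
    return list(frags.values())

# Two well-separated clumps of nucleons -> two fragments
pos = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],   # clump A
                [20, 0, 0], [21, 0, 0]])           # clump B
frags = mst_fragments(pos)
assert sorted(len(f) for f in frags) == [2, 3]
```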
The choice of the impact parameter follows the procedure previously described for CoMD. For this work, 20000 INC events were generated for each reaction. The deexcitation of the hot heavy fragments from the INC stage is performed with the Statistical Multifragmentation Model (SMM) \cite{Bondorf-1995,Botvina-2001,Botvina-2005,Souliotis-2007,Botvina-2013}. The SMM code combines a description of sequential compound nucleus decay with a multifragmentation model in which the effect of angular momentum is carefully considered. More specifically, in SMM, all possible decay processes occurring in the wide excitation energy range realized in a spallation reaction are taken into account. We note that for the deexcitation of low energy ($\epsilon^*$ $<$ 1.0 MeV/nucleon) non-fissionable nuclei \cite{Veselsky-2011,Fountas-2014}), the SMM code has been shown to adequately describe the particle deexcitation process as a cascade of emissions of neutrons and light charged particles using the Weisskopf-Ewing model of statistical evaporation. In regards to fission of heavier excited nuclei \cite{Botvina-2013,Vonta-2016}, the following approach is followed. A "multifragmentation" threshold value of $\epsilon^*_{mult}$ = 2.0 MeV/nucleon is defined, above which the SMM statistical multipartition is applied. This threshold value is, of course, lower than the true nuclear multifragmentation threshold of $\sim$3 MeV/nucleon, but it is employed as a parameter to define when the SMM multipartition scheme will be applied to the decay of the excited nuclear system. Thus, for intermediate and high excitation energies ( $\epsilon^*$ $>$ $\epsilon^*_{mult}$ ), fission is simply described as a special case of multifragmentation, i.e., the binary partition of the excited heavy nucleus. However, at low excitation energy ( $\epsilon^*$ $<$ $\epsilon^*_{mult}$ ), the fission channel is described in the spirit of the liquid-drop model with deformation-dependent shell effects. 
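The deexcitation branching just described can be summarized schematically (the dispatcher below is a hypothetical illustration of the logic, not actual SMM code; only the threshold $\epsilon^*_{mult}$ = 2.0 MeV/nucleon is taken from the text):

```python
E_MULT_THRESHOLD = 2.0  # MeV/nucleon, SMM multipartition threshold

def choose_deexcitation_channel(eps_star):
    """Schematic channel selection for a residue with excitation energy
    eps_star (MeV/nucleon). Hypothetical dispatcher, not SMM source code."""
    if eps_star < E_MULT_THRESHOLD:
        # Sequential decay: Weisskopf-Ewing evaporation, with the fission
        # channel treated in the liquid-drop spirit (Bohr-Wheeler width)
        return "evaporation/fission"
    # Above threshold, fission is treated as a special (binary) case of
    # the statistical multipartition of the excited nucleus
    return "multifragmentation"

assert choose_deexcitation_channel(0.8) == "evaporation/fission"
assert choose_deexcitation_channel(2.5) == "multifragmentation"
```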
The Bohr-Wheeler approach is used for the calculation of the partial fission width. The method for obtaining fission mass distributions is described in detail in \cite{Botvina-2013}. We briefly mention that along with a symmetric fission mode, two asymmetric fission modes are included with contributions dependent on the fissioning nucleus and the excitation energy. These contributions are described by an empirical parameterization based on analysis of a large body of experimental data. In the present work, apart from the value of $\epsilon^*_{mult}$ = 2.0 MeV/nucleon that we used in the calculations presented in the following, we also tested the values 1.5 and 3.0 MeV/nucleon, and we found that the value of 2.0 MeV/nucleon provides a satisfactory description of the experimental data on heavy residues, fission fragments and intermediate-mass fragments. \section{Results and Comparisons} The main objective of the present work is the description of the complete dynamics of the spallation process with the microscopic Constrained Molecular Dynamics (CoMD) model, which we recently used to describe fission at low and intermediate energy \cite{Vonta-2015}. We performed simulations of proton-induced spallation on targets of $^{181}$Ta, $^{208}$Pb and $^{238}$U at energies of 200, 500 and 1000 MeV. We compared the results of CoMD calculations with available experimental data and calculations that we performed with the traditional two-stage phenomenology using the INC/SMM model framework. \begin{figure}[h] \begin{center} \includegraphics[width=0.35\textwidth,keepaspectratio=true]{figure01.eps} \end{center} \caption{ (Color online) Excitation energy distributions of primary residues from proton-induced spallation of $^{208}$Pb at E$_{p}$=200, 500 and 1000 MeV calculated with the INC \cite{ISABEL} model (solid red lines) and the CoMD model (dotted blue lines).
In panel (c), the dashed green line is a CoMD calculation with nucleon-nucleon scattering cross sections increased by a factor of 2 (see text). } \label{figure01} \end{figure} In Fig. 1, we first present the excitation energy distribution of primary residues after the intranuclear cascade for the proton-induced spallation of the $^{208}$Pb target at E$_{p}$ = 200, 500 and 1000 MeV calculated with the INC model (full red lines in Figs. 1a, 1b, 1c, respectively). From these distributions we obtain the mean excitation energies of the primary residues, which are 70, 110 and 170 MeV at the above proton energies, respectively. These values are in overall agreement with the ones obtained by Cugnon et al. \cite{Cugnon-1997} with the INCL model (Fig. 3 of \cite{Cugnon-1997}). We recall that the INC code ISABEL has been extensively tested and shown to provide an overall reasonable description of the properties of the primary residues. For representative comparisons, we first mention the recent work of Paradella et al. \cite{Paradella-2017} on $^{136}$Xe (200 MeV/nucleon) + p. Moreover, Filges et al. \cite{Filges-2001} performed a comparison of primary residue excitation energy distributions calculated with ISABEL with experimental data from \cite{NESSY-expt} for p(1.2 GeV) + $^{197}$Au and found good agreement (Fig. 22 of \cite{Filges-2001}). As the main focus of this work is on the microscopic description of the spallation dynamics, we employed only the ISABEL code (referred to as simply INC in the following) as a representative code expected to provide an overall satisfactory description of the intranuclear cascade stage of the reaction. Moreover, we obtained excitation energies of primary residues with the CoMD code by simulating the intranuclear cascade stage with the time evolution of the system up to t=200 fm/c. (We also tested the times t=100 and 300 fm/c with comparable results.)
At this time, the binding energy of the excited residue was determined and compared with that of the corresponding ground-state nucleus obtained from standard mass tables to get the excitation energy. The excitation energy distributions obtained from CoMD with the above approach are shown in Fig. 1 by the dotted (blue) lines. The CoMD distributions are in good agreement with the INC distributions for the lower two energies, i.e., 200 and 500 MeV. However, at 1000 MeV, CoMD substantially underestimates the residue excitation energy at the high end of the distribution. We attribute this behavior to the fact that in CoMD no inelastic nucleon-nucleon collisions are considered. In order to mimic the effect of inelastic collisions, we performed a CoMD calculation in which the nucleon-nucleon (elastic) scattering cross sections were increased by a factor of 2. The resulting distribution (shown in Fig. 1c by the dashed green line) moved closer to the INC distribution. We note that this deficiency of CoMD affects a rather small fraction of the events at the higher end of the excitation energy spectrum. In the present work, we proceeded with the standard values of the scattering cross sections as in the original code \cite{Papa-2001,Papa-2005}. Finally, in the calculation of the excitation energy distributions with CoMD, we observe a concentration of events at low excitation energy. We have seen an analogous behavior of the CoMD approach when applied to heavy-ion collisions \cite{Fountas-2014}, leading to rather low (or even negative) excitation energies for some of the primary products. In the future, we plan to improve the excitation energy determination by employing self-consistently calculated CoMD values of the ground-state binding energies of the primary residues, so that we avoid the use of binding energies from mass tables.
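The excitation-energy determination described above amounts to a binding-energy difference; the sketch below illustrates the bookkeeping with purely invented numbers (the function name and the values are ours for illustration, not CoMD output):

```python
def excitation_energy(be_ground_state, be_excited_residue):
    """E* (MeV) as the difference between the ground-state binding energy
    (taken from a mass table) and the binding energy of the excited
    residue (here, CoMD at t = 200 fm/c); binding energies are positive."""
    return be_ground_state - be_excited_residue

# Illustrative values for an A ~ 200 residue (hypothetical, for scale only):
e_star = excitation_energy(be_ground_state=1560.0, be_excited_residue=1450.0)
print(e_star)        # 110.0 MeV, comparable to the mean value at Ep = 500 MeV
print(e_star / 200)  # 0.55 MeV/nucleon, well below the 2-3 MeV/nucleon
                     # multifragmentation threshold
```

A negative result of this difference corresponds to the spurious low-excitation-energy events mentioned above.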
As a general observation on the excitation energy distributions, the mean values are below the nuclear multifragmentation threshold of 2--3 MeV/nucleon. Only at the higher energy of E$_{p}$=1000 MeV does the tail of the distribution go well above 2.0 MeV/nucleon (400 MeV for a primary residue of typical mass A=200), thus corresponding to the onset of multifragment emission \cite{Mancusi-2011}. Similar observations pertain to the excitation energy distributions for the spallation of the lighter Ta and the heavier U targets studied in this work. In closing, we mention that in both the INC and the CoMD calculations of the cascade stage, we have observed a close relation of the mean residue mass to the excitation energy which, due to the rather low excitation energy, does not depart substantially from the target mass (it is lower by, at most, 5\% for the highest excitation energy events at E$_{p}$=1000 MeV). However, in spallation reactions with heavier projectiles and higher energies, the mean residue mass gets progressively lower with increasing excitation energy, as indicated in \cite{Botvina-1995} (Figs. 4 and 5) and the recent work \cite{Botvina-2017} (Fig. 11). \subsection{Mass yield distributions} \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth,keepaspectratio=true]{figure02.eps} \end{center} \caption{ (Color online) Mass yield distributions of fragments from $^{208}$Pb spallation with protons at 500 MeV (a), and 1000 MeV (b). Solid (red) circles: CoMD calculations. Open (green) circles: INC/SMM calculations. Experimental data from the literature are as follows: in panel (a), the solid (black) triangles are data on fission fragments from \cite{Rodriguez-2015} and the solid (black) circles data on heavy residues from \cite{Audouin-2006}; in panel (b), the solid (black) triangles are data from \cite{Enqvist-2001}. } \label{figure02} \end{figure} In Fig.
2, we show the mass distributions of fragments from the proton-induced spallation of $^{208}$Pb at 500 and 1000 MeV. We compare our theoretical results with experimental data from the literature as follows. In panel (a), we display the data of Rodriguez et al. \cite{Rodriguez-2015} (solid triangles) for fission fragments, and the data of Audouin et al. \cite{Audouin-2006} (solid circles) for heavy residues. In panel (b), we show the data of Enqvist et al. \cite{Enqvist-2001} (black triangles) for both fission fragments and heavy residues. Our calculations with CoMD are presented by the solid (red) circles, and the INC/SMM calculations by the open (green) circles. We note that the error bars on the calculation points in Fig. 2 (and all subsequent figures) are due to statistics. As expected, in the mass distributions we distinguish two regions of fragments: first, the heavy residue region, starting from masses close to the target mass and extending to lower masses, and, second, the fission fragment region, with a distribution of masses centered slightly lower than half the mass of the target nucleus. As we expect, the shape of the fission fragment yield curve is symmetric, suggesting that no significant contribution of low-energy fission processes is present in the reaction mechanism (which would lead to an asymmetric fission yield curve due to shell effects \cite{Chaudhuri-2015}). We observe that the CoMD calculated fission fragment yield curve is also symmetric and in good agreement with the experimental data at both energies. We stress that the agreement is satisfactory not only in the shape, but also in the absolute values of the fission cross sections. The INC/SMM calculated fission yield curve shows a two-humped structure which is more prominent in the lower energy reaction, Fig. 2a. The location and height of these distributions are in agreement with the data and the CoMD calculations, but the width is smaller.
A possible explanation for the double-humped structure of the fission yield curve from the INC/SMM calculation may be related in part to the parametrization of the low-energy fission yield shapes adopted in the SMM code \cite{Botvina-2013}. In the heavy residue region, the situation is more complicated. First, we see that the INC/SMM is able to reproduce the residue cross sections from the target mass down to mass $\sim$180. For lower masses, the INC/SMM calculation begins to largely underestimate the data at both energies. This may be attributed to the contribution of very asymmetric binary decays or multifragment decays that start to contribute appreciably as the excitation energy of the heavy remnant increases. Furthermore, the CoMD calculation cannot reproduce the shape of the residue distribution. Our efforts so far have led us to attribute this result mainly to the inability of CoMD to follow correctly the neutron-proton evaporation of a residue event when we set the surface parameter to the value appropriate to describe fission (see Appendix). Moreover, we expect that in a residue event, the heavy fragment is still excited and would require a substantially longer time to evolve (and, thus, give off the remainder of its excitation energy) than the overall evolution time of 15000 fm/c in the present CoMD calculations. Systematic efforts are being devoted at present to understand the above CoMD behavior and possibly achieve a simultaneous description of the fission fragments and residues. \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth,keepaspectratio=true]{figure03.eps} \end{center} \caption{(Color online) Mass yield distribution of fragments from $^{238}$U spallation with protons at 1000 MeV. Solid (red) circles: CoMD calculations. Open (green) circles: INC/SMM calculations.
Experimental data from the literature are as follows: solid (black) triangles: heavy residues from \cite{Taieb-2003}, open (black) triangles: fission fragments from \cite{Bernas-2003}, and open (pink) squares: heavy IMFs from \cite{Ricciardi-2006}. } \label{figure03} \end{figure} In Fig. 3, we present the mass distribution of proton-induced spallation of $^{238}$U at a proton energy of 1000 MeV. Our theoretical results with CoMD [solid (red) circles] and INC/SMM [open (green) circles] are compared with the following experimental data sets: heavy residues from Taieb et al. \cite{Taieb-2003} [solid (black) triangles], fission fragments from Bernas et al. \cite{Bernas-2003} [open (black) triangles] and intermediate mass fragments (IMF) from Ricciardi et al. \cite{Ricciardi-2006} [open (pink) squares]. Observations similar to those in Fig. 2 pertain here. As expected though, since the $^{238}$U nucleus is more fissile, the fission yield curve is higher than in the $^{208}$Pb case (see also Fig. 5 and the relevant discussion). In the fission fragment region, interestingly, the CoMD calculations are in very good agreement with the experimental data. The INC/SMM fission calculation gives a rather symmetric mass yield curve in fair agreement with the CoMD calculation and the data, having, however, a smaller width. In the heavy residue region, the INC/SMM appears to reproduce only the residue cross sections close to the target mass. For lower masses, the INC/SMM appears to mostly underestimate the data. The CoMD calculation, as in the case of $^{208}$Pb (Fig. 2), cannot reproduce the shape of the residue distribution. Finally, apart from the heavy residue and fission fragment mass regions, we focus our attention on the region of heavy IMFs (A$<$50), for which the INC/SMM calculations predict yields in reasonable agreement with (slightly higher than) the data. Similar IMF products are obtained by the INC/SMM calculations for the p+Pb spallation at 1000 MeV, Fig.
2b, for which no experimental data are available. For the lower energy (500 MeV) p+Pb spallation (Fig. 2a), the INC/SMM calculations also predict IMF products with lower yields (by about a factor of 10). We mention that we found the production of such fragments in CoMD in a small number of events that involve ternary (or possibly higher) partition of the heavy primary fragment. At 1000 MeV, the CoMD calculated cross sections are lower (by a factor of $\sim$5--10) than those obtained from the INC/SMM calculations. We understand that the observed IMFs are products of either very asymmetric fission or multifragmentation, and they may require further special efforts to be adequately described (see also \cite{Colonna-2015,Pysz-2015} and references therein). This will be among the subjects of our future work. In Figs. 2 and 3, we presented mass yield distributions for the reactions with Pb and U at energies where experimental data exist. For the p+Ta reaction, no mass yield distribution data are available at present (only fission cross section data exist and will be discussed below). Moreover, we mention that our CoMD calculated yield distributions for p+Ta are similar in shape to those of the p+Pb reaction at the corresponding energies. From the study of the yield curves presented in Figs. 2 and 3, we see that the CoMD model appears to describe correctly the fission fragment yield curves obtained in the proton-induced spallation of the $^{208}$Pb and $^{238}$U nuclei at E$_p$ of 500 and 1000 MeV. However, in its present form, it appears that it cannot describe the residue mass distribution. For the INC/SMM calculation, we can say that some adjustment of the relevant parameters is necessary to improve the agreement with the data in the fission fragment region, as well as in the residue mass region. Moreover, as demonstrated in Ref.
\cite{Botvina-1990}, the excitation energies and nucleon composition of the excited residues produced after the INC stage may require corrections for preequilibrium emission. As discussed above, systematic efforts are being devoted at present to understand the CoMD behavior for fission and residue events and possibly obtain a good description of both the fission fragment and the heavy residue distributions. Concerning the IMF products, appropriate adjustment of the elastic nucleon-nucleon scattering cross sections or, preferably, inclusion of inelastic channels is necessary for their description. \subsection{Fission and residue cross sections} In Fig. 4, we present the variation of the fission cross section with proton energy for the proton-induced spallation of $^{181}$Ta, $^{208}$Pb and $^{238}$U nuclei. Our calculations were performed at energies of 200, 500 and 1000 MeV. The CoMD calculations with the standard symmetry potential are depicted by the full (red) circles connected with full lines, whereas those with the soft symmetry potential are shown by the full (blue) circles connected with dotted lines. The INC/SMM calculations are shown by the solid (green) diamonds. In this figure, experimental data from the literature are presented as follows: In panel (a), we show the data of Ayyad et al. \cite{Ayyad-2014} (solid triangles). In panel (b), we show the data of Rodriguez et al. \cite{Rodriguez-2014} (solid triangles), Enqvist et al. \cite{Enqvist-2001} (open triangle), Schmidt et al. \cite{Schmidt-2013} (open square), Fernandez et al. \cite{Fernandez-2005} (open circle) and Flerov et al. \cite{Flerov-1972} (star). In panel (c), we present the data of Kotov et al. \cite{Kotov-2006} (solid triangles), Bernas et al. \cite{Bernas-2003} (open square) and Schmidt et al. \cite{Schmidt-2013} (open diamonds). Finally, in all panels, the solid (black) line is according to the systematics of Prokofiev \cite{Prokofiev-2001}.
\begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth,keepaspectratio=true]{figure04.eps} \end{center} \caption{ (Color online) Fission cross section as a function of proton energy for the proton-induced spallation of $^{181}$Ta, $^{208}$Pb and $^{238}$U. Full (red) circles connected with full lines: CoMD calculations with the standard symmetry potential. Full (blue) circles connected with dotted lines: CoMD calculations with the soft symmetry potential. Full (green) diamonds: INC/SMM calculations. Experimental data from the literature are as follows. In panel (a), solid triangles from \cite{Ayyad-2014}. In panel (b), solid triangles from \cite{Rodriguez-2014}, open square from \cite{Schmidt-2013}, open circle from \cite{Fernandez-2005}, open triangle from \cite{Leray-2002} and star from \cite{Flerov-1972}. In panel (c), solid triangles from \cite{Kotov-2006}, open square from \cite{Bernas-2003} and open diamonds from \cite{Schmidt-2013}. In all panels, the thick solid line is according to the systematics of Prokofiev \cite{Prokofiev-2001}. } \label{figure04} \end{figure} The following observations pertain to this figure. The experimental data and the empirical systematics show an increasing trend with energy for p+Ta and p+Pb, which is more pronounced for the less fissile p+Ta system (Fig. 4a). This trend is roughly reproduced by the INC/SMM calculations, although with a steeper slope. For p+U, the data of Kotov et al. \cite{Kotov-2006} (solid triangles) show an increase from 200 to 500 MeV and then remain constant at the higher energy. The flat behavior at the higher energy is also seen in the data of Schmidt et al. \cite{Schmidt-2013} (open diamonds). Prokofiev's systematics, however, shows a decreasing trend. The INC/SMM calculations for this system show a rather flat behavior and are higher than the data. Furthermore, we observe that the CoMD calculations do not show an appreciable dependence on energy.
(A rather slight decrease is discernible in Fig. 4b and Fig. 4c.) The CoMD calculations with the soft symmetry potential are systematically higher than those with the standard potential, implying that the soft symmetry potential leads to a more repulsive dynamics in the neutron-rich neck region, as noted in our previous work \cite{Vonta-2015}. For the p+Ta system (Fig. 4a), the CoMD calculations with the standard symmetry potential are in agreement with the data at the lower energies, and those with the soft potential are in agreement with the data at the highest energy. Focusing our attention on the p+Pb spallation (Fig. 4b), we observe that at the highest energy, both the CoMD and INC/SMM calculations are in overall agreement with the data of Enqvist et al. \cite{Enqvist-2001} and the systematics of Prokofiev \cite{Prokofiev-2001}. At 500 MeV, the CoMD calculations are on either side of the data of Rodriguez et al. \cite{Rodriguez-2014} (solid triangle) and Schmidt et al. \cite{Schmidt-2013} (open square), while the INC/SMM calculations are lower than the data and the CoMD calculations. We note that the experimental point of Fernandez et al. \cite{Fernandez-2005} (open circle) is higher than the data of Rodriguez et al. \cite{Rodriguez-2014} (solid triangle) and Schmidt et al. \cite{Schmidt-2013} (open square), and is close to the CoMD calculation with the soft symmetry potential. Furthermore, at 200 MeV, the CoMD calculations are higher than the INC calculations, whereas the experimental point of Flerov et al. \cite{Flerov-1972} (star) and the systematics of Prokofiev \cite{Prokofiev-2001} (thick line) lie between these calculations. Finally, for the p+U spallation (Fig. 4c), we observe that at 1000 MeV, both CoMD calculations are in good agreement with the data of Bernas et al. \cite{Bernas-2003} (open square), Kotov et al. \cite{Kotov-2006} (solid triangle) and Schmidt et al. \cite{Schmidt-2013} (open diamond).
At 500 MeV, the CoMD calculations with the standard symmetry potential are in agreement with the data \cite{Kotov-2006,Schmidt-2013}, whereas at 200 MeV both CoMD calculations are higher than the data but in agreement with the Prokofiev systematics (which starts to decrease with energy, as noted previously). \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth,keepaspectratio=true]{figure05.eps} \end{center} \caption{ (Color online) Ratio of the fission cross section to the residue cross section with respect to proton energy for the proton-induced spallation of $^{181}$Ta, $^{208}$Pb, and $^{238}$U nuclei. The CoMD and INC/SMM calculations are shown with squares, circles and triangles for the above targets, respectively. Full (red) points connected with full lines: CoMD (with standard symmetry potential). Full (blue) points connected with dotted lines: CoMD (with soft symmetry potential). Open (green) points connected with dashed lines: INC/SMM calculations. The three experimental data points are: closed square \cite{Bernas-2003}, closed triangle \cite{Fernandez-2005}, and closed diamond \cite{Enqvist-2001}. The experimental points have been displaced by 50 MeV to the right for viewing purposes. } \label{figure05} \end{figure} In Fig. 5, we present the ratio of the fission cross section to the residue cross section as a function of proton energy for the proton-induced spallation of $^{181}$Ta, $^{208}$Pb, and $^{238}$U nuclei. The full (red) points connected with full lines are the CoMD calculations with the standard symmetry potential. The full (blue) points connected with dotted lines are the CoMD calculations with the soft symmetry potential. The open (green) points connected with dashed lines are the INC/SMM calculations. In this figure, experimental data from the literature will be presented in the discussion below. (The experimental points have been displaced to the right by 50 MeV for viewing purposes.)
At first, we observe that the CoMD calculation of the ratio for $^{238}$U is about 4, indicating, as expected, a high fissility. We notice also that the CoMD calculation at 1000 MeV is in good agreement with the data of Bernas et al. \cite{Bernas-2003} (closed square). The ratio of the fission cross section to the residue cross section for $^{208}$Pb calculated with the CoMD is about 10\%, indicating the modest fissility of this target. It appears that our calculations are in good agreement with the data of Fernandez et al. \cite{Fernandez-2005} at 500 MeV. We note, however, that this experimental point involves a fission cross section value that is 50\% higher than the trend of the other experimental points and systematics, as discussed previously in regard to Fig. 4b. At 1000 MeV, the CoMD calculations are in good agreement with the data of Enqvist et al. \cite{Enqvist-2001}. Finally, for $^{181}$Ta, the ratio is only about 1\%, as calculated from CoMD. This value suggests that $^{181}$Ta has a very low fissility and, thus, a tendency to undergo mostly evaporation. There are no experimental data for the residue cross sections from the spallation of this target. Only fission cross section data exist, which we presented in Fig. 4a. Thus, no experimental fission to residue cross section ratios are presented in Fig. 5. The INC/SMM calculations [open (green) points] for p+U are higher than the CoMD calculations and the experimental point. For p+Pb, they are lower than the CoMD calculations for the lower two energies and in agreement with the calculations and the data at the highest energy. A similar trend appears for the p+Ta system. In general, in Fig. 5 we observe that the CoMD calculations with the soft potential are higher than those with the standard symmetry potential. We also observed this trend in Fig. 4, in regard to the fission cross section.
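The fission-to-residue ratios quoted above can be recast as fission probabilities via $P_f = R/(1+R)$, under the simplifying assumption that fission and heavy-residue survival are the only competing de-excitation outcomes; a minimal illustrative sketch (not part of the CoMD or INC/SMM codes):

```python
def fission_probability(ratio):
    """Convert a fission-to-residue cross-section ratio R into a fission
    probability P_f = R / (1 + R), assuming fission and heavy-residue
    survival exhaust the possible outcomes (IMF channels neglected)."""
    return ratio / (1.0 + ratio)

# Approximate ratios quoted in the text for the three targets (CoMD):
for target, r in [("U-238", 4.0), ("Pb-208", 0.10), ("Ta-181", 0.01)]:
    print(target, round(fission_probability(r), 3))
# U-238  -> 0.8    (highly fissile)
# Pb-208 -> 0.091  (modest fissility)
# Ta-181 -> 0.01   (mostly evaporation)
```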
Similar behavior was observed in our recent fission studies with CoMD \cite{Vonta-2015} and, as we mentioned before, was attributed to the more repulsive dynamics implied in the neutron-rich neck region of the fissioning nucleus. \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth,keepaspectratio=true]{figure06.eps} \end{center} \caption{ (Color online) Total neutron multiplicity as a function of proton energy for the proton-induced spallation of $^{181}$Ta, $^{208}$Pb and $^{238}$U. Full (red) circles connected with full lines: CoMD calculations with the standard symmetry potential. Full (blue) circles connected with dotted lines: CoMD calculations with the soft symmetry potential. Full (green) diamonds: INC/SMM calculations. Experimental data from the literature are as follows: In panels (a) and (b) from \cite{Leray-2002}, and in panel (c) from \cite{Bernas-2003}. } \label{figure06} \end{figure} \subsection{Neutron multiplicities} In Fig. 6, we present the total multiplicity of neutrons emitted in the proton-induced spallation of $^{181}$Ta, $^{208}$Pb and $^{238}$U at 200, 500 and 1000 MeV. We note that the total neutron multiplicity refers to the number of neutrons emitted during the intranuclear cascade plus the number of neutrons evaporated during the deexcitation course of the hot primary residue. Specifically, if the deexcitation involves fission, the neutron multiplicity includes the prescission and postscission neutrons. Furthermore, we note that the total neutron multiplicity in a fission event is nearly the same as that in a residue event. Our CoMD calculations of the neutron multiplicity refer only to the fission events. We recall that, at present, the CoMD is not able to describe correctly the residue evolution and, thus, the number of neutrons (and protons) given off by an evolving residue is incorrect.
More specifically, the ratio of neutrons to protons given off by a residue in CoMD (about 1--1.5) is much lower than the experimental value of approximately 4. In the figure, the CoMD calculations with the standard symmetry potential are shown by the full (red) points connected with full lines, and those with the soft symmetry potential are shown by the full (blue) points connected with dotted lines. The two CoMD calculations are close to each other. In the same figure, we present the results of the INC/SMM calculations [solid (green) diamonds]. In order to obtain the total multiplicity in these calculations, we added the neutrons emitted in the INC stage to the neutrons given off in the SMM deexcitation stage. The INC/SMM results indicate an increasing trend with proton energy. They are lower than the CoMD calculations for p+Ta, moving closer for the p+Pb and p+U systems. In Fig. 6a, the CoMD and INC/SMM neutron multiplicities from the p+Ta spallation are in reasonable agreement with the experimental point of Leray et al. \cite{Leray-2002} (open circle) for the p+$^{184}$W spallation at 1000 MeV. Similar agreement is seen in Fig. 6b between our calculations and the measurement of Leray et al. \cite{Leray-2002} (open triangle) for the p+Pb spallation. In this figure, however, the experimental point at 800 MeV is lower than the trend of our calculations. Finally, in Fig. 6c, the CoMD calculation is in agreement with the value reported by Bernas et al. \cite{Bernas-2003} at 1000 MeV, while the INC/SMM calculation is somewhat higher. As an overall observation, we notice that our calculations predict an increase of the spallation neutron multiplicity from 200 MeV to 500 MeV of proton energy. Above that energy, the multiplicity appears to increase only slightly. Thus, our CoMD calculations verify that the proton energy region 500--1000 MeV is a good choice for the effective generation of spallation neutrons.
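The multiplicity bookkeeping described above (cascade-stage neutrons plus de-excitation neutrons, with the latter split into prescission and postscission contributions for fission events) can be sketched as follows; the per-event values below are purely illustrative, not calculated results:

```python
def total_neutron_multiplicity(n_cascade, n_prescission, n_postscission=0):
    """Total neutrons per event: intranuclear-cascade neutrons plus the
    neutrons evaporated during de-excitation (for fission events, the
    evaporated part splits into prescission and postscission neutrons)."""
    return n_cascade + n_prescission + n_postscission

# Purely illustrative fission event (hypothetical numbers):
print(total_neutron_multiplicity(n_cascade=5, n_prescission=4, n_postscission=6))  # 15
```

For a residue event, the same sum applies with all de-excitation neutrons counted in a single evaporation term.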
In practice, as is well known, spallation neutrons are produced in thick targets. Consequently, the whole energy region from the beam energy down to low energies contributes to the spallation neutron spectrum. We thus appreciate the importance of understanding the evolution of the reaction mechanism from low energy (50--100 MeV) up to the proton (or neutron) beam energy. For spallation neutron yields from thick targets, we point to the relevant references, e.g., \cite{Letourneau-2000}. Our successful calculation of the spallation neutron multiplicity using the microscopic CoMD code is very encouraging for our current efforts on the spallation reactions. Given the fully dynamical nature of the code, neutron energy spectra and angular distributions can be obtained and compared with relevant experimental data (e.g., \cite{Leray-2002}). Similar comparisons can be performed for the multiplicities and the spectra of emitted protons and other light charged particles in conjunction with recent experimental data (e.g., \cite{Rodriguez-2016a,Rodriguez-2016b}). This is the subject of ongoing systematic efforts in our group. Furthermore, we note that, along with the study of proton-induced spallation reactions, the study of neutron-induced reactions is of special importance. However, systematic data on these reactions are rather scarce (see, e.g., \cite{Meo-2015} and references therein). For such reactions, the predictive power of CoMD can be exploited, after appropriate benchmarking on the existing experimental data. In closing, we wish to comment on the computational demand of our CoMD calculations. For a typical reaction, e.g., p (500 MeV) + $^{208}$Pb, with the current implementation of CoMD, we can generate approximately 30 events per day on a single core of a mid-range modern PC. For a given reaction, we ran on 10 independent processors for approximately one month to generate the 10000 events that we subsequently analyzed. The corresponding INC/SMM calculation is fast.
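The cost figures of this comparison (30 CoMD events per core-day, and an assumed total of roughly 25 minutes of INC/SMM time for the same $10^4$ events) can be turned into an order-of-magnitude ratio; a back-of-the-envelope sketch with these assumed inputs:

```python
# Back-of-the-envelope cost comparison (order of magnitude only).
comd_events_per_core_day = 30.0
n_events = 10_000
inc_smm_minutes = 25.0  # a few minutes of INC plus ~20 min of SMM (assumed total)

# Core-minutes needed by CoMD for the same number of events:
comd_core_minutes = n_events / comd_events_per_core_day * 24 * 60
ratio = comd_core_minutes / inc_smm_minutes
print(f"CoMD/INC-SMM cost ratio ~ {ratio:.0e}")  # ~2e+04, i.e. well above 10^3
```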
For 10000 events, the INC takes only a few minutes, whereas the SMM deexcitation takes about 20 min. Thus, the CoMD calculation is at least $\sim$10$^3$ times more computer-intensive than the typical fast INC/SMM calculation. We may conclude that, while the INC/SMM calculation can be used as an efficient event generator in transport codes for practical applications, the microscopic CoMD approach can shed light on the dynamics of the spallation process and provide guidance for the tuning of the parameters of the phenomenological approaches employed for applications. \section{Discussion and Conclusions} In the present work, we studied the proton-induced spallation of $^{181}$Ta, $^{208}$Pb, and $^{238}$U targets in the energy range 200--1000 MeV. We chose these nuclei because of the availability of recent literature data and their significance in current applications of spallation. We performed calculations of the full dynamics of the spallation process with the microscopic CoMD model. Moreover, we employed the traditional two-stage phenomenological approach based on the intranuclear cascade model (INC) followed by deexcitation with the SMM model. Our CoMD calculations describe rather well the symmetric fission yield distribution in the spallation of the $^{208}$Pb and $^{238}$U targets, as concluded from the comparison with the available experimental data. We encountered problems in the description of the residue yield distributions that we are currently investigating. The INC/SMM calculations give a rather good description of the residues near the target, but appear to give a narrower fission yield curve (whose shape is double-humped in the case of the $^{208}$Pb spallation). We attribute this behavior mainly to the empirical description of low-energy fission in the SMM model and we plan to investigate possible improvements.
As suggested in \cite{Mancusi-2010,Botvina-1990}, among possible additions, the inclusion of a preequilibrium stage between the INC stage and the deexcitation may improve the agreement of these calculations with the experimental data. Our calculations reproduced rather well the total fission cross sections and the ratio of fission cross sections over residue cross sections, especially at the higher proton energy of 1000 MeV. It is notable that the CoMD calculations do not reproduce the increasing trend of the fission cross sections with energy observed in the data. On the other hand, the INC/SMM calculations appear to give a steeper increase as compared to the data for the less fissile Ta and Pb targets and a nearly flat dependence for the U target. Finally, our CoMD calculations appear to describe reasonably well the spallation neutron multiplicity of spallation/fission events. The INC/SMM calculations describe well the total neutron multiplicities for spallation events leading to fission or residue production at high energies. In regard to the CoMD calculations, because of the fully dynamical CoMD approach, energy spectra and angular distributions of neutrons and light particles (e.g., protons, alphas) can be obtained and compared with experimental data. It is desirable to extend our calculations with detailed comparisons using experimental data on spallation reactions with the inverse kinematics technique. Studies of residue and fission product isotopic and velocity distributions of low-fissility targets, such as $^{181}$Ta on hydrogen, could enhance the understanding of the spallation mechanism of non-fissile systems. In this context, the role of facilities such as RISP \cite{RISP,RISP-2013} (currently in the design stages) will be vital. Summarizing, we have shown that the microscopic CoMD code describes reasonably well the complicated many-body dynamics of the spallation/fission process.
We wish to point out that the CoMD code provides results that are not dependent on the specific dynamics being explored and, as such, it may offer valuable predictive power for the spallation observables. We plan to perform further systematic calculations of the observables of spallation reactions and compare them with available experimental data. These observables include, apart from the ones presented in this study, the energy distribution of the fission fragments, the isotopic mass distributions of residues and fission fragments, as well as those of the heavy intermediate mass fragments (IMFs). Finally, as we already mentioned, we plan to study the energy and angular distributions of neutrons and light charged particles. Along the lines of the present study, we expect that these efforts will shed light on the mechanism of the spallation process and contribute to a quantitative physics-based description of spallation properties of importance to applications. \section{Acknowledgements} We are thankful to H. Zheng and G. Giuliani for discussions on recent CoMD implementations. We are also thankful to W. Loveland for enlightening comments and suggestions on this work. Finally, we wish to acknowledge motivating discussions with S.C. Jeong, Y.K. Kwon, K. Tshoo and other members of the RISP facility. Financial support for this work was provided, in part, by ELKE Research Account No 70/4/11395 of the National and Kapodistrian University of Athens. A. Botvina acknowledges the support of HIC for FAIR (Germany). M.V. was supported by the Slovak Scientific Grant Agency under contracts 2/0105/11 and 2/0121/14 and by the Slovak Research and Development Agency under contract APVV-15-0225.
\section{\label{sec:level1}Introduction} Recently, antihydrogen ($\bar{\mathrm{H}}$) atoms have been produced \cite{ASACUSA} in a unique cusp trap \cite{MY,YY,DC} developed for the in-flight hyperfine spectroscopy of ground-state $\bar{\mathrm{H}}$ atoms \cite{SMI,HFS1,HFS2}. The most recent progress is reported in \cite{NK,MT,chloe}. In 2012, the ASACUSA Cusp collaboration developed a $\bar{\mathrm{H}}$ detector consisting of a BGO ($\mathrm{Bi_4Ge_{3}O_{12}}$) scintillator disk in combination with a single-anode photomultiplier (PMT) and 5 plastic scintillator plates. The detector was able to reject cosmic backgrounds with high efficiency \cite{YN1}. In order to further improve the background rejection efficiency, we have developed a new $\bar{\mathrm{H}}$ detector. The single-anode PMT has been replaced by 4 multi-anode PMTs (MAPMTs) for 2D photon readout of the BGO. The five plastic scintillators were replaced by a two-layer hodoscope with 32 plastic scintillator bars per layer to determine charged particle tracks with higher resolution~\cite{CS2}.\par \begin{figure}[htb] \includegraphics[width=1.\linewidth]{Fig1.pdf} \caption{\label{fig:detectorsetup} Cross section of the $\bar{\mathrm{H}}$ detector along the beam axis (a) and perpendicular to the $\bar{\mathrm{H}}$ beam axis (b). } \end{figure} \begin{figure}[htb] \includegraphics[width=0.8\linewidth]{Fig2.pdf} \caption{\label{fig:BGOPMT1} Three quarter section view of the 2D sensitive BGO detector. } \end{figure} \section{\label{sec:level2} $\bar{\mathrm{H}}$ detector} Figure~{\ref{fig:detectorsetup}} shows a schematic diagram of the structure of the new $\bar{\mathrm{H}}$ detector consisting of the thin BGO disk and the hodoscope. The BGO disk has a diameter of 90~mm and a thickness of 5~mm and is housed on the vacuum side ($10^{-7}$~Pa) of a UHV viewport. 
The front surface of the BGO disk was coated with a carbon layer of thickness 0.7~$\mu$m to reduce multireflections of the scintillation light on the surface. It was found that the carbon coating improved the position resolution by a factor of $\sim$~2 in our previous device \cite{YN2}. To achieve a position-sensitive readout, 4 MAPMTs (Hamamatsu H8500C), each having 8~$\times$~8 anodes with an effective area of 49~mm~$\times$~49~mm, were placed directly on the viewport glass as shown in Fig.~{\ref{fig:BGOPMT1}}. The outputs of the $8\times 8$ anodes were amplified, digitized and stored by an amplifier unit (Clear Pulse 80190), a dedicated model for the H8500C that includes $8\times 8$ charge amplifiers and analogue-to-digital converters with 12~bit resolution.\par The hodoscope consists of two layers of 32 plastic scintillator bars arranged in an octagonal configuration \cite{CS2}. The scintillator bars are 300~mm~$\times$~20~mm~$\times$~5~mm for the inner layer and 450~mm~$\times$~35~mm~$\times$~5~mm for the outer, with face-to-face distances of 200~mm and 300~mm for the inner and outer layers, respectively (see Fig.~{\ref{fig:detectorsetup}}(a)). The solid angle covered by the scintillator bars in units of $4\pi$, seen from the center of the BGO, is $\omega \sim 80$~\%. Silicon photomultipliers (SiPM, KETEK PM3350TS) were connected to both ends of each bar. The output pulses from the SiPMs were amplified by dedicated front-end modules described in detail elsewhere (see Ref.~\cite{CS1}) and recorded by 128-channel waveform digitizers (CAEN V1742). \section{\label{sec:level3}2D distribution of cosmic rays} \begin{figure}[hb] \includegraphics[width=1.\linewidth]{Fig3.pdf} \caption{\label{fig:compareHamamatsu} (a) Example of a 2D map of averaged output charges of one of the 4 MAPMTs investigated by LED light. (b) Distribution of the gain ratio between the measured values and the data sheet values for each channel. 
} \end{figure} To obtain the relative sensitivity of each channel, the 4 MAPMTs were assembled and irradiated by pulsed (200~ns width) light from an LED with a peak wavelength of 470~nm (see also Ref.~\cite{YN2}); the peak emission wavelength of the BGO scintillation light is 480~nm. The voltages applied to the MAPMTs were adjusted such that the total output charges of the MAPMTs were equal. Figure~{\ref{fig:compareHamamatsu}}~(a) shows an example of a 2D map of output charges from one of the 4 MAPMTs averaged over $10^4$ pulses of LED light. The channel with the maximum output charge is arbitrarily set to 100 and all other channels are scaled accordingly. The relative gains of the channels of each MAPMT are evaluated using this mapping. This result was compared with the data sheet from the manufacturer, where a tungsten filament lamp was used for calibration. By taking the ratio between the measured values and those from the data sheet for each anode, the distribution of the gain ratio shown in Fig.~{\ref{fig:compareHamamatsu}}~(b) was obtained. The standard deviation of this distribution is around 5\%. Figures~{\ref{fig:cosmicexample}}~(a) and (b) show the 2D charge distributions for two example cosmic ray events after the offline gain matching of the individual MAPMT channels described above. In these examples, the BGO surface has been penetrated nearly perpendicularly (a) or nearly parallel to the surface (b). These figures demonstrate that position-sensitive readout has been successfully implemented using the MAPMTs. In order to reconstruct particle tracks (to be discussed in detail later), we define the position of a hit on the BGO as the center of the anode giving the highest output charge. When the hit positions observed in the 2D distribution are outside of the BGO, such events are removed in the analysis because they are probably due to \v{C}erenkov light generated in the glasses of the MAPMTs or the viewport when high-energy charged particles pass through them. 
The total charge $Q$ is obtained by summing the charges from all channels (see Figs.~{\ref{fig:cosmicexample}}~(a) and (b)). \begin{figure}[hb] \includegraphics[width=1.\linewidth]{Fig4.pdf} \caption{\label{fig:cosmicexample} Examples of 2D charge distributions of cosmic ray events where the BGO surface has been penetrated nearly perpendicularly (a) or parallel to the surface (b). } \end{figure} \section{\label{sec:level4}Energy calibration of the BGO detector} \begin{figure}[htb] \includegraphics[width=0.95\linewidth]{Fig5.pdf} \caption{\label{fig:chi2Fit} The blue solid circles show the total measured output charge of cosmic ray events. The red line shows the simulated energy deposition distribution. The integral of events with $E\geq50$~MeV was 3~\% of that of events with $E\geq4.5$~MeV. } \end{figure} For the energy calibration, cosmic rays were measured and the charge distribution $f(Q)$ was compared with the energy deposition distribution $g(E)$ calculated by a Monte-Carlo simulation using the GEANT4 toolkit\footnote{Geant4.9.6 Patch-02 was used.} \cite{Geant}. Blue solid circles in Fig.~{\ref{fig:chi2Fit}} show $f(Q)$ for cosmic ray events in which more than 2 inner hodoscope bars are hit in any coincident combination. A bar is considered to be `hit' when there is a coincidence between the signals of the upstream and downstream SiPMs connected to the bar. The simulation includes the BGO detector together with the viewport and the vacuum duct. Cosmic rays are generated by the CRY package \cite{Hagmann}; however, \v{C}erenkov light is not included. $g(E)$ is convoluted with the energy resolution, which is assumed to be proportional to $\sqrt{E}$ \cite{leo}, i.e., $$ G(E) = \int g(E') \frac{1}{\sqrt{2\pi (\alpha \sqrt{E'})^2}} e^{-(E-E')^2/2 (\alpha \sqrt{E'})^2} dE'\textrm{,} $$ where $\alpha$ is a fitting parameter. To compare $f(Q)$ and $G(E)$, the relation $E=\eta Q+E_c$ is assumed, where $\eta$ and $E_c$ are fitting parameters. 
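The smearing integral above can be sketched numerically. The following is our own illustration, not the analysis code: the grid, the toy single-line $g(E)$ (all weight at 7.2~MeV, i.e., a MIP-like deposition of 9.0~MeV/cm over the 8~mm average path length quoted later) and the value $\alpha = 0.52~\sqrt{\mathrm{MeV}}$ quoted below are assumptions of the example.

```python
import numpy as np

# Toy numerical sketch of G(E): smear g(E') with a Gaussian of width
# sigma(E') = alpha * sqrt(E').  Grid, alpha and the single-line g(E)
# are assumptions taken from values quoted in the text.
alpha = 0.52                               # sqrt(MeV), best-fit value in the text
E = np.arange(0.0, 30.0, 0.01)             # MeV grid
g = np.zeros_like(E)
g[np.argmin(np.abs(E - 7.2))] = 1.0        # all weight at 9.0 MeV/cm * 0.8 cm = 7.2 MeV

def smear(E, g, alpha):
    """G(E): sum over source bins of g(E') * Normal(E; E', alpha*sqrt(E'))."""
    G = np.zeros_like(E)
    for Ep, w in zip(E, g):
        if w == 0.0 or Ep <= 0.0:
            continue
        s = alpha * np.sqrt(Ep)
        G += w * np.exp(-(E - Ep) ** 2 / (2 * s ** 2)) / np.sqrt(2 * np.pi * s * s)
    return G

G = smear(E, g, alpha)
print(round(float(E[np.argmax(G)]), 2))    # → 7.2: the peak position is preserved
```

The smearing broadens the line to $\alpha\sqrt{7.2} \approx 1.4$~MeV without shifting its peak, which is why the MIP structure remains identifiable in the fitted distribution.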
$E_c$ corresponds to the contribution of \v{C}erenkov light generated in the BGO and the glass. $G(E)$ is fitted to $f(Q)$ using a least-squares method in the energy range from 4.5 to 50 MeV with the 3 free parameters $\alpha$, $\eta$ and $E_c$. The result is shown by the red line in Fig.~{\ref{fig:chi2Fit}}. The goodness of fit is $\mathrm{\chi^2/ndf}=1.65$ for $\alpha$=0.52~${\sqrt{{\mathrm{MeV}}}}$, $\eta$=2.5 (in units proportional to energy over charge) and $E_c$=0.50~MeV. The cosmic ray flux distribution is known to follow $\cos^2 \theta_z$, where $\theta_z$ is the zenith angle. In this case, the average path length of cosmic rays in the BGO is evaluated to be approximately 8~mm. Considering that the energy deposited by a minimum ionizing particle (MIP) in BGO is 9.0~MeV/cm \cite{PDG}, the peak at around 7~MeV in Fig.~{\ref{fig:chi2Fit}} is attributed to MIPs. As explained below (see subsection 5.3), only events with $E>E_{th}=15$~MeV are considered in the following sections. \section{\label{sec:level5}Identification of $\bar{\mathrm{H}}$ atoms and cosmic ray suppression} Figure~{\ref{fig:tracktheta}} shows a simplified back view of the $\bar{\mathrm{H}}$ detector on which the signals from the 4 MAPMTs (shown in the square in the center) and the hodoscope bars (the outermost octagonal arrangement) for a typical cosmic ray event in the experiment are overlaid. In this example, a spot-like pattern is seen on the BGO. Green-colored hodoscope bars are simultaneously hit by the cosmic ray. The red shaded area in the upper left side shows the possible spatial range of the trajectory of the charged particle. As a best guess, the bisector of the shaded area is taken to define a track, which is shown by the thick black line. In the lower right side, 2 neighboring hodoscope bars are hit; in this case, the red shaded area is defined by the edges of the neighboring hodoscope bars and the BGO hit position. The particle track is again defined by the bisector of the shaded area. 
In this example, the number of tracks is $k=2$. Solid and dashed lines in Fig.~{\ref{fig:Count15MeV45MeV}} show the experimental and simulated results for the cosmic ray count rate $n^c_k$ as a function of $k$. The highest $n^c_k$ is observed for $k=2$, and the rate decreases by more than one order of magnitude for each additional unit of $k$. The track number analysis for simulations of antiprotons ($\bar{p}$) irradiating the BGO disk uniformly at $I_{\bar{p}}=0.1$~Hz, the typical count rate of $\bar{\mathrm{H}}$ atoms in the experiment, results in the chain curve ($n^{\bar{p}}_k$). In this simulation, the CHIPS model was used in Geant4 for $\bar{p}$ annihilation. This model was previously tested with respect to the energy deposition analysis for $\bar{\mathrm{H}}$ annihilation in the BGO \cite{ASACUSA,YN1}. The multiplicity of annihilation products from $\bar{p}$ annihilation was studied using an emulsion detector and agreed with CHIPS results except for annihilation with heavy atoms \footnote{ It is noted that there are no systematic studies of both the multiplicity of annihilation products and their energy deposition for $\bar{p}$ annihilation at rest. To investigate this, fragmentation studies of antiproton-nucleus annihilation are being performed using a Timepix3 detector within the ASACUSA collaboration. } \cite{TA}. The chain curve also exhibits a maximum, but it decreases only weakly as a function of $k$. When a $\bar{p}$ annihilates with a nucleus, approximately 3 charged and 2 neutral pions are produced on average \cite{Hori}. Taking into account the charged pions and the solid angle $\omega$ covered by the hodoscope, $n^{\bar{p}}_2$ is estimated to be $3\omega^2(1-\omega)I_{\bar{p}} \sim 0.04$~Hz, which can be compared to $n^{\bar{p}}_2 \sim 0.03$~Hz in Fig.~{\ref{fig:Count15MeV45MeV}}. 
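The estimate above is just the binomial probability that exactly 2 of the $\sim$3 charged pions are detected. A quick arithmetic check (our own sketch, using the values $\omega = 0.8$ and $I_{\bar{p}} = 0.1$~Hz quoted in the text):

```python
from math import comb

# Back-of-envelope check of the quoted 2-track annihilation rate: with on
# average 3 charged pions per annihilation, hodoscope coverage omega = 0.8
# and antiproton rate I = 0.1 Hz, the rate of events with exactly k
# detected pions is binomial: C(n, k) * w^k * (1-w)^(n-k) * I.
omega, I_pbar, n_pions = 0.8, 0.1, 3

def rate_exactly_k(k, n=n_pions, w=omega, rate=I_pbar):
    """Rate of annihilations in which exactly k of n charged pions are detected."""
    return comb(n, k) * w**k * (1 - w)**(n - k) * rate

print(round(rate_exactly_k(2), 4))   # → 0.0384, i.e. the ~0.04 Hz quoted
```

The slightly lower value seen in the figure ($\sim 0.03$~Hz) is consistent with this counting estimate, which ignores the $E_{th}$ cut and track reconstruction inefficiencies.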
As will be discussed in subsection 5.4, $n^c_k$ can be decreased considerably below the dashed line, to around $5 \times 10^{-3}$~Hz, as shown by the open circles in Fig.~{\ref{fig:Count15MeV45MeV}} for $k=1$, 2 and 3. In contrast, $n^{\bar{p}}_k$ does not decrease very much, as seen from the open triangles. In these cases, $n^c_k$ is well below $n^{\bar{p}}_k$. Furthermore, $n^c_4$ is more than one order of magnitude lower than the open circles and is negligibly small. Therefore we can assume that events with $k \geq 4$ can reasonably be attributed to $\bar{p}$ annihilation. In the following subsections, the events for $k=2$, $k=3$ and then $k=1$ are considered. \begin{figure}[t] \includegraphics[width=.6\linewidth]{Fig6.pdf} \caption{\label{fig:tracktheta} Example of a cosmic ray event with $k=2$ tracks. The circle and the surrounding square in the center show the BGO disk and the total area covered by the 4 MAPMTs, respectively. The outer octagonal arrangements show the inner and outer hodoscope layers. } \end{figure} \begin{figure}[t] \includegraphics[width=1.\linewidth]{Fig7.pdf} \caption{\label{fig:Count15MeV45MeV} Solid and dashed lines show the experimental and simulated results for the cosmic ray count rate $n^c_k$ as a function of $k$, respectively. The chain curve shows the simulated annihilation count rate $n^{\bar{p}}_k$ when $\bar{p}$s irradiate the BGO disk uniformly at a rate of $I_{\bar{p}}=0.1$~Hz. Only events depositing more than 15~MeV in the BGO were considered, for both cosmic rays and $\bar{p}$s, in the simulated and experimental data. The open circles and triangles show the simulated $n^{c}_k$ and $n^{\bar{p}}_k$ for $k=1$--3 obtained from the analysis described in subsection~{\ref{sec:level54}}, respectively. } \end{figure} \begin{figure}[t] \includegraphics[width=.5\linewidth]{Fig8.pdf} \caption{\label{fig:tracktheta2} Definitions of $\theta_1$, $\theta_2$ and $\theta_{12}$. 
} \end{figure} \subsection{\label{sec:level51}Events for $k=2$} \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig9.pdf} \caption{\label{fig:AngleDependenceCosmicNt2} (a) Experimental result of the 2D distribution of $n^c_2$ as a function of $\theta_1$ and $\theta_{12}$. (b) Simulation result of $n^c_2$ as a function of $\theta_1$ and $\theta_{12}$. In both figures, $E_{th}=15$~MeV. The bin widths are 6 degrees on both axes. The ellipse is defined by a semi-minor axis $\Delta \theta_2$ in $\theta_{12}$ and a semi-major axis of 90 degrees in $\theta_1$, and is used for background suppression. } \end{figure} \begin{figure}[h] \includegraphics[width=.9\linewidth]{Fig10.pdf} \caption{\label{fig:AngleDependencePbarNt2} (a) Simulation result of the 2D distribution of $n^{\bar{p}}_2$ as a function of $\theta_1$ and $\theta_{12}$ with $E_{th}=15$~MeV. The bin widths are 6 degrees on both axes. (b) Projection of (a) onto the $\theta_{12}$ axis. The ellipse is defined as in Fig.~{\ref{fig:AngleDependenceCosmicNt2}}. } \end{figure} To analyze 2-track events, the track direction is defined by the angle measured anticlockwise from a horizontal line in the $x-y$ plane, as shown in Fig.~{\ref{fig:tracktheta2}}. The tracks are numbered in ascending order of angle. The corresponding angles of the $1^{\mathrm{st}}$ and the $2^{\mathrm{nd}}$ tracks are named $\theta_1$ and $\theta_2$ ($\theta_1 < \theta_2$), respectively. Further, we define $\theta_{12} = \theta_2 - \theta_1$. Figures~{\ref{fig:AngleDependenceCosmicNt2}}~(a) and (b) compare the 2D distributions of cosmic ray events as a function of $\theta_1$ and $\theta_{12}$ as obtained from experiment and simulation, respectively. It can be seen that the simulation reproduces the experimental result well. A strong ridge is observed at $\theta_{12} \sim 180$~degrees, which corresponds to cosmic rays passing straight through the detector. 
On the other hand, in the $\theta_1$ direction, the distribution spreads widely, centered at 90~degrees, which reflects the cosmic ray flux following a $\cos^2 \theta_z$ distribution. Figure~{\ref{fig:AngleDependencePbarNt2}}~(a) shows the simulated 2D distribution of $n^{\bar{p}}_2$ as a function of $\theta_1$ and $\theta_{12}$. The distribution is very broad, as is expected for $\bar{p}$ annihilations at low energy. The $n^c_2$ background can be decreased by removing events inside an ellipse defined by a semi-minor axis $\Delta \theta_2$ in $\theta_{12}$ and a semi-major axis of 90 degrees in $\theta_1$ (see Fig.~{\ref{fig:AngleDependenceCosmicNt2}}). For example, the $n^c_2$ background is reduced by one order of magnitude for $\Delta \theta_2 = 10$~degrees. With the same cut, only about 10~\% of the $\bar{p}$s are removed, owing to the different distributions in Fig.~{\ref{fig:AngleDependenceCosmicNt2}} and Fig.~{\ref{fig:AngleDependencePbarNt2}}. It is noted that in both Figs.~{\ref{fig:AngleDependenceCosmicNt2}}~(a) and (b), we observe an additional small peak at $\theta_1 \sim 270$~degrees and $\theta_{12} \sim 40$~degrees. Investigating the corresponding events in the simulation, it was found that energetic $\gamma$ rays produce electron-positron pairs in the BGO, which form this peak. The fraction of these events is 1\% of the total events in Figs.~{\ref{fig:AngleDependenceCosmicNt2}}~(a) and (b). In comparison, although the distribution of $n^{\bar{p}}_2$ in Fig.~{\ref{fig:AngleDependencePbarNt2}}~(a) is very broad, it has a peak at $\theta_{12} \sim 180$~degrees, as shown in Fig.~{\ref{fig:AngleDependencePbarNt2}}~(b), which is the projection of (a) onto the $\theta_{12}$ axis. The preference for 180 degrees can be explained by considering a specific type of event, one with three charged pions. 
When two charged pions hit the hodoscope, with the third pion escaping in a direction close to the beam axis, momentum conservation favours $\theta_{12}$ around 180~degrees. \subsection{\label{sec:level52}Events for $k=3$} The track directions for $k=3$ are defined in the same manner as in the case of $k=2$. Because 3 tracks are involved, there are 3 ways to choose track pairs, each of which can be described in a manner equivalent to a 2-track event, as shown in Figs.~{\ref{fig:track31}}~(a)-(c). Figures~{\ref{fig:AngleDependenceCosmicNt3}}~(a) and (b) show the experimental and simulated results for the 2D distributions of $n^c_3$, obtained by summing the 3 distributions of $\theta_{12}$\,vs.\,$\theta_{1}$, $\theta_{13}$\,vs.\,$\theta_{1}$ and $\theta_{23}$\,vs.\,$\theta_{2}$, i.e., every event is represented by 3 points, corresponding to Figs.~{\ref{fig:track31}}~(a)-(c), on the plot. The simulation reproduces the experiment very well. A peak at $\theta_{ij} \sim 180$~degrees is observed, which is broader than the peak in Figs.~{\ref{fig:AngleDependenceCosmicNt2}}~(a) and (b). By investigating the corresponding events in the simulation, it was found that a cosmic ray from above generates recoil electrons emitted downward in the BGO, which form the broad peak (see Figs.~{\ref{fig:track31}}~(a) and (b)). Another peak is seen at $\theta_i \sim 270$~degrees, which is formed by the recoil electron together with the incident cosmic ray (see Fig.~{\ref{fig:track31}}~(c)). Figure~{\ref{fig:AngleDependencePbarNt3}} shows the simulation result for $n^{\bar{p}}_3$, obtained as per Figs.~{\ref{fig:AngleDependenceCosmicNt3}}~(a) and (b) for $n^c_3$. The event distribution is much broader than for $k=2$. 
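The pairing of tracks into $(\theta_i, \theta_{ij})$ points and the elliptical cosmic ray cut can be sketched as follows. This is our own reconstruction, not the analysis code; in particular, the ellipse center at $(\theta_i, \theta_{ij}) = (90^\circ, 180^\circ)$ is our reading of the figures and is not stated explicitly in the text.

```python
from itertools import combinations

def track_pairs(angles):
    """For k tracks (directions in degrees), return one (theta_i, theta_ij)
    point per unordered pair i < j, with theta_ij = theta_j - theta_i."""
    a = sorted(angles)
    return [(ti, tj - ti) for ti, tj in combinations(a, 2)]

def inside_cut_ellipse(theta_i, theta_ij, d_theta=10.0):
    """True if the pair falls inside the rejection ellipse: semi-major axis
    90 deg along theta_i, semi-minor axis d_theta along theta_ij; the
    centre (90 deg, 180 deg) is an assumption read off the figures."""
    return ((theta_i - 90.0) / 90.0) ** 2 + ((theta_ij - 180.0) / d_theta) ** 2 < 1.0

# A straight-through cosmic ray gives back-to-back tracks and is rejected:
print(track_pairs([85.0, 267.0]))        # [(85.0, 182.0)]
print(inside_cut_ellipse(85.0, 182.0))   # True
# A wide-angle 3-track annihilation-like event survives the cut:
print([inside_cut_ellipse(t, dt) for t, dt in track_pairs([30.0, 150.0, 260.0])])
# [False, False, False]
```

Each $k=3$ event thus contributes three points to the summed 2D distribution, exactly as described for Figs.~11 and 12.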
As in the case of $k=2$, the cosmic ray events are expected to be reduced by removing events inside the ellipse (see Fig.~{\ref{fig:AngleDependenceCosmicNt3}}) defined by a semi-minor axis $\Delta \theta_3$ in $\theta_{ij}$ and a semi-major axis of 90~degrees in $\theta_i$, whilst simultaneously not decreasing $n^{\bar{p}}_3$ substantially. It is noted that the peak at $\theta_i \sim 270$~degrees in Figs.~{\ref{fig:AngleDependenceCosmicNt3}}~(a) and (b) is no longer present when the events inside the ellipse are removed. \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig11.pdf} \caption{\label{fig:track31} Combinations of 2 out of 3 tracks. The angles $\theta_1$, $\theta_2$, $\theta_{12}$, $\theta_{13}$ and $\theta_{23}$ are defined. } \end{figure} \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig12.pdf} \caption{\label{fig:AngleDependenceCosmicNt3} (a) Experimental result of $n^c_3$ obtained by summing the 3 distributions of $\theta_{12}$\,vs.\,$\theta_{1}$, $\theta_{13}$\,vs.\,$\theta_{1}$ and $\theta_{23}$\,vs.\,$\theta_{2}$. (b) Simulation result of the same distributions as in (a). In both figures, $E_{th}=15$~MeV. The bin widths are 6 degrees on both axes. The ellipse is defined by a semi-minor axis $\Delta \theta_3$ in $\theta_{ij}$ and a semi-major axis of 90 degrees in $\theta_i$, and is used for background suppression. } \end{figure} \begin{figure}[h] \includegraphics[width=.5\linewidth]{Fig13.pdf} \caption{\label{fig:AngleDependencePbarNt3} Simulation result of $n^{\bar{p}}_3$ obtained by summing the 3 distributions of $\theta_{12}$\,vs.\,$\theta_{1}$, $\theta_{13}$\,vs.\,$\theta_{1}$ and $\theta_{23}$\,vs.\,$\theta_{2}$ with $E_{th}=15$~MeV. The bin widths are 6 degrees on both axes. The ellipse is defined as in Fig.~{\ref{fig:AngleDependenceCosmicNt3}}. 
} \end{figure} \subsection{\label{sec:level53}Signal-to-noise ratio (SNR) for $k \geq 2$} \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig14.pdf} \caption{\label{fig:significance} (a) Experimental result of the 2D distribution of $N^c_2$ as a function of $\Delta \theta_2$ and $\Delta \theta_3$. (b) Corresponding simulation result for $N^c_2$. The bin widths in Figs.~(a) and (b) are 1 degree on both axes. (c) and (d) Experimental and simulated results of $N^c_2$ as a function of $\Delta \theta_2$ for $\Delta \theta_3=0$, 30, 60 and 90 degrees. In these figures, $E_{th}=15$~MeV. } \end{figure} \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig15.pdf} \caption{\label{fig:significance2} (a) Simulation result of the 2D distribution of $\epsilon_2$ as a function of $\Delta \theta_2$ and $\Delta \theta_3$. (b) Simulation result of $x_2$ as a function of $\Delta \theta_2$ and $\Delta \theta_3$. In these figures, $E_{th}=15$~MeV. The bin widths are 1 degree on both axes. } \end{figure} \begin{figure}[h] \includegraphics[width=.6\linewidth]{Fig16.pdf} \caption{\label{fig:snsig} $x_2$ as a function of $E_{th}$, where $x_2$ is optimized for each $E_{th}$ using the procedure described in Section~5.3. } \end{figure} The total rate of cosmic ray events after the data cut is obtained by taking the sum of $n^c_k$ over $k$ and is defined by $N^c_i = \Sigma_{k\geq i} n^c_k$. Figures~{\ref{fig:significance}}~(a) and (b) show the experimental and simulated results for the 2D distributions of $N^c_2$, respectively, as a function of $\Delta \theta_2$ and $\Delta \theta_3$. The simulation reproduces the experimental data well. To evaluate the difference between the data and the simulation quantitatively, Figs.~{\ref{fig:significance}}~(c) and (d) show the experimental and simulated results for $N^c_2$ as a function of $\Delta \theta_2$ for $\Delta \theta_3=0$, $30$, $60$ and $90$~degrees. The difference between the experimental and simulated results is less than 10\%. 
In the following discussion, we analyze the simulated data. Figure~{\ref{fig:significance2}}~(a) shows the simulation result for the detection efficiency of $\bar{p}$s, $\epsilon_2$, defined by $\epsilon_i = N^{\bar{p}}_i / I_{\bar{p}}$, as a function of $\Delta \theta_2$ and $\Delta \theta_3$, where $N^{\bar{p}}_i=\Sigma_{k\geq i}n^{\bar{p}}_k$. It is seen that $\epsilon_2$ decreases as $\Delta \theta_2$ and $\Delta \theta_3$ increase. We define the signal-to-noise ratio (SNR) by $x_i = \frac{N^{\bar{p}}_i}{\sqrt{N^{\bar{p}}_i + N^c_i }}$. Figure~{\ref{fig:significance2}}~(b) shows the 2D distribution of $x_2$ as a function of $\Delta \theta_2$ and $\Delta \theta_3$. The maximum $x_2$ is 0.24~${\mathrm{s^{-1/2}}}$ at $\Delta \theta_2 = 16$~degrees and $\Delta \theta_3 = 0$~degrees, with $N^c_2 = 6.9$~mHz and $\epsilon_2=65$~\%. As is seen in Fig.~{\ref{fig:significance2}}~(b), to maximize $x_2$, $\Delta \theta_3$ should be 0 for all values of $\Delta \theta_2$. This suggests that all events of $n^{\bar{p}}_3$ can be identified as $\bar{p}$s. It is noted that this fact and the optimization of the SNR depend on $I_{\bar{p}}$. Figure~{\ref{fig:snsig}} shows $x_2$ as a function of $E_{th}$; $x_2$ has a maximum at $E_{th}=15$~MeV, but varies only within 1~\% in the range 5~MeV~$< E_{th}<20$~MeV. Therefore $E_{th}$ is not critical to the optimization of $x_2$ in this range. \subsection{\label{sec:level54}Events for $k=1$ and SNR} Figure~{\ref{fig:onetrack}}~(a) shows the 2D distribution of $n^c_1$ as a function of the energy $E$ deposited in the BGO and $\theta_1$. We observe a ridge at $E \sim 7$~MeV, which corresponds to the MIP peak. In the $\theta_1$ direction, the distribution spreads widely with centers at 90 and 270~degrees, and its shape is attributed to the $\cos^2 \theta_z$ cosmic ray flux distribution. The shape of the ridge appears elliptical; however, the tail of the ridge is more triangular. 
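The quoted SNR optima (here and for the $k=1$ analysis below) can be cross-checked with elementary arithmetic from the definition $x_i = N^{\bar{p}}_i/\sqrt{N^{\bar{p}}_i + N^c_i}$; the following is our own check using the rounded values quoted in the text:

```python
from math import sqrt

# Cross-check of the quoted SNR figures: x_i = N_pbar / sqrt(N_pbar + N_c),
# with N_pbar = epsilon_i * I_pbar and I_pbar = 0.1 Hz.  Inputs are the
# rounded efficiencies and cosmic rates quoted in the text.
def snr(eps, n_cosmic, i_pbar=0.1):
    n_pbar = eps * i_pbar
    return n_pbar / sqrt(n_pbar + n_cosmic)

print(snr(0.65, 6.9e-3))   # ≈ 0.242, matching the quoted 0.24 s^{-1/2} (k >= 2)
print(snr(0.81, 12e-3))    # ≈ 0.266, consistent with the quoted 0.26 s^{-1/2} (k >= 1)
```

The small residual differences are within the rounding of the quoted inputs.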
Figure~{\ref{fig:onetrack}}~(b) shows the 2D distribution of $n^{\bar{p}}_1$ as a function of $E$ and $\theta_1$. The distribution is very broad. $n^c_1$ is expected to be reduced by removing events inside the triangle defined by $\Delta E$ and a base of 180~degrees, as seen in Fig.~{\ref{fig:onetrack}}. Figures~{\ref{fig:AngleQ}}~(a) and (b) show the 2D distributions of $N^c_1$ and $\epsilon_1$ ($k \geq 1$), respectively, as a function of $\Delta \theta_2$ and $\Delta E$. It is seen that $N^c_1$ and $\epsilon_1$ decrease gradually as $\Delta \theta_2$ and $\Delta E$ increase. Figure~{\ref{fig:AngleQ}}~(c) shows $x_1$ as a function of $\Delta \theta_2$ and $\Delta E$. The maximum of $x_1$ reaches 0.26~${\mathrm{s^{-1/2}}}$ at $\Delta \theta_2 = 14$~degrees and $\Delta E = 93$~MeV, with $N^c_1=12$~mHz and $\epsilon_1=81$~\%. This is larger than the maximum value of $x_2$; therefore, including the events for $k=1$ in the analysis improves the SNR. Compared with the $\bar{\mathrm{H}}$ detector developed in 2012, which had $x=0.22$~${\mathrm{s^{-1/2}}}$ with a cosmic ray count rate of $4$~mHz and a detection efficiency of $50$~\% for $I_{\bar{p}}=0.1$~Hz, the detector described in this work improves both the SNR and the detection efficiency. \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig17.pdf} \caption{\label{fig:onetrack} (a) Simulation result of the 2D distribution of $n^c_1$ as a function of $E$ and $\theta_1$. (b) Simulation result of the 2D distribution of $n^{\bar{p}}_1$ as a function of $E$ and $\theta_1$. The bin widths in the horizontal and vertical axes are 1~MeV and 6~degrees, respectively. } \end{figure} \begin{figure}[h] \includegraphics[width=1.\linewidth]{Fig18.pdf} \caption{\label{fig:AngleQ} (a) Simulation result of the 2D distribution of $N_1^c$ as a function of $\Delta \theta_2$ and $\Delta E$. (b) $\epsilon_1$ as a function of $\Delta \theta_2$ and $\Delta E$. (c) $x_1$ as a function of $\Delta \theta_2$ and $\Delta E$. 
In these figures, $E_{th}$ for $k \geq 2$ is $15$~MeV. The bin widths are 1 degree on both axes. } \end{figure} \section{\label{sec:level6}Conclusion} We have developed a $\bar{\mathrm{H}}$ detector consisting of a thin BGO disk and a hodoscope. We measured hit positions of cosmic rays in the BGO disk and confirmed that the thin disk with a 2D readout by MAPMTs provides position sensitivity. The energy deposition in the BGO was calibrated by comparing cosmic ray data with Geant4 simulations. Charged particle tracks were determined by connecting the hit position on the BGO with hits on the hodoscope bars. By removing the cosmic rays passing through the detector using the cuts on $\Delta E$, $\Delta \theta_2$ and $E_{th}$, the background was efficiently reduced to $N^c_1=12$~mHz with a detection efficiency of $\epsilon_1=81$~\%. The SNR was improved to $x_1=0.26$~${\mathrm{s^{-1/2}}}$, compared to 0.22~${\mathrm{s^{-1/2}}}$ for the detector used in 2012. \section*{Acknowledgements} We would like to thank Tomohiro Kobayashi for the carbon coating on the BGO disk. This work was supported by the Grant-in-Aid for Specially Promoted Research 24000008 of the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Special Research Projects for Basic Science of RIKEN, the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement (291242) and the Austrian Ministry of Science and Research, Austrian Science Fund (FWF): W1252-N27.
\section{Introduction} \label{sec:introduction} \makeatletter{}System security continues to be an arms race between intruders and defenders. In this arms race, attackers adapt in response to defense mechanisms and \emph{always} win. Defeating attackers requires rethinking traditional defeat- and exploit-based mitigation techniques, which lack complete security coverage~\cite{pincus2004mitigations}. We propose taking a holistic, attack-vector-agnostic view of system execution. \emph{We claim that provenance is the ideal data to use for such a task and that provenance graph-based analysis is the ultimate means towards achieving complete security coverage.} Provenance refers to metadata describing how digital objects came to be in their current state. It provides a complete, structured view of what happened on the system~\cite{bates2015trustworthy} by presenting complex dependencies and causality relationships between digital objects as a directed acyclic graph (DAG). As such, it is well suited for intrusion detection. An intrusion manifests as anomalous interdependencies among data objects that deviate from those found in non-malicious execution. In fact, in attack causality analysis~\cite{king2005enriching}, provenance has long been used to explain intrusions (\autoref{sec:oc}). Provenance graph analysis strengthens adversarial robustness, because the graphs exhibit long-range correlations and dependencies that allow for causal reasoning about intrusions~\cite{akoglu2015graph} (\autoref{sec:applicability}). Such causal reasoning enables detection of sophisticated attacks, such as network attacks, that remain undetected for long periods of time. In prior work~\cite{han2017frappuccino}, we reduced the host-based intrusion detection problem to a graph-based anomaly detection problem, in which graph analysis identified structured execution traces that represented an intrusion. 
However, intrusion detection on provenance graphs requires analyzing dynamic, attributed, streaming graphs, which are rarely studied in the literature. Given the development of fine-grained, whole-system provenance capture systems~\cite{pasquier2017practical}, this task becomes even more challenging, as the graphs rapidly become extraordinarily large~\cite{bates2015trustworthy}. However, we can use domain-specific knowledge of provenance graphs to simplify the challenge of identifying anomalies, making it an easier problem than general-purpose graph analysis would suggest. For example, since execution history is immutable, we can assume that provenance graphs only increase in size (i.e., there are never deletions). This property allows us to incrementally and progressively reason about causality without needing to look backwards. \section{Applicability} \label{sec:applicability} \makeatletter{}\newmdtheoremenv[ hidealllines=true, leftline=true, innertopmargin=0pt, innerbottommargin=0pt, linewidth=4pt, linecolor=gray!40, innerrightmargin=0pt, ]{definitiona}{Definition} Data provenance has seen use in areas such as databases and the computational sciences. While it now also appears as part of real-time security analysis~\cite{bates2015trustworthy}, most approaches are variations of dynamic taint analysis of provenance data. While simple and effective on their own merits, they are limited to constraining information flows within a system (\eg data loss prevention, access control, and regulatory compliance); little work has been done to detect intrusions originating from outside the system~\cite{han2017frappuccino, pasquier2017practical}. Host-based anomaly detection systems define a baseline of normal behavior and then classify as abnormal any behavior that significantly deviates from this baseline. The approach is predicated on the assumption that intrusions are highly correlated with abnormal behavior. 
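The append-only property noted in the introduction is what makes incremental causal reasoning tractable; a minimal sketch (entirely our own illustration, with invented node names, not part of any capture system) of maintaining ancestor sets over a stream of insertions:

```python
# Because provenance history is immutable, a new node's ancestor set can be
# fixed once at insertion time as the union of its parents' (already final)
# sets; no backward re-scan of the graph is ever needed.
ancestors = {}

def add_node(node, parents):
    """Stream insertion: compute the node's transitive history on arrival."""
    s = set()
    for p in parents:
        s.add(p)
        s |= ancestors[p]          # parents arrived earlier; their sets are final
    ancestors[node] = s

add_node("passwd_file", [])
add_node("vim_proc", ["passwd_file"])
add_node("tmp_copy", ["vim_proc"])
print("passwd_file" in ancestors["tmp_copy"])  # True, answered without looking back
```

This is only viable because deletions never occur; on a general dynamic graph, edge removals would invalidate the cached sets.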
Many existing systems use unstructured collections of multidimensional data (\eg audit logs) to detect outlying points in a high-dimensional feature space, formulating intrusion detection as point-based outlier detection to leverage various learning and data mining techniques. Provenance, however, is structured graph data that represents relationships between a digital item (\ie data entity), a transformation on that item (\ie activity), and agents (\ie persons and organizations) associated with the item and the transformation. Hence, unlike the prior work, we formulate the host-based intrusion detection problem as a graph-based anomaly detection problem defined as follows~\cite{akoglu2015graph}: \begin{definitiona} \noindent{The graph-based intrusion detection problem is to identify components of the graph that are significantly different from those in a learned model of the graph.} \end{definitiona} A provenance graph-based approach to intrusion detection is attractive for several reasons: \noindent{{\tiny\ding{108}}}~\noindemph{Provenance captures complete access to security-sensitive kernel objects:} State-of-the-art whole-system provenance capture systems leverage the Linux Security Module (LSM) interface to record provenance for every security-related interaction, rather than intercepting system calls. They can be extended to verifiably monitor all information flows in a system~\cite{georget2017verifying}. \noindent{{\tiny\ding{108}}}~\noindemph{Provenance makes explicit the relationships among objects:} One powerful feature of provenance is its native graphical representation of system execution as interactions between data objects. However, such interdependencies are innate to every execution trace, even in seemingly unstructured audit data from logging systems such as \texttt{auditd}. In fact, there exist frameworks that reconstruct graph-based provenance from flat audit data to allow for reasoning about system execution~\cite{gehani2012spade}.
However, this post hoc approach comes with a caveat: it is harder to ensure completeness or correctness of the graph built from flat audit data~\cite{pohly2012hi}. \noindent{{\tiny\ding{108}}}~\noindemph{Intrusions result from unexpected interactions:} The entry point to a victim system may be a single, isolated event, but its effects must propagate for an intrusion to be fruitful to an attacker. For example, consider an insider attacker who wishes to steal sensitive information from a data server under his control. He first installs a malicious BASH script that discovers and collects all documents (\ie a single entry point to the server). However, to successfully steal the information, he needs to either transfer it to a foreign machine or write it to an external storage device. The key to detecting the data leak is to connect the collection of the data to the transmission of the data, which in a provenance graph is clearly represented as a chain of dependencies between processes, files, and sockets. \noindent{{\tiny\ding{108}}}~\noindemph{Graph representation improves robustness:} Graphs are generally more adversarially robust, \ie it is harder for an attacker to camouflage her behavior to fit into the reference graph structures~\cite{akoglu2015graph}. In fact, we claim that \emph{the provenance graph of an intrusion must differ from that of a valid execution when we use an LSM-based whole-system provenance capture system.} As LSM places hooks on any execution path that generates an information flow~\cite{akoglu2015graph}, if the capture system records provenance on every such path, violations of security policies will be evident from the provenance graph. Moreover, the attacker must also have knowledge of the substructures that are referenced by the IDS, which alone requires significant effort. For example, the insider attacker above may evade detection if each step is allowed when performed in isolation.
An advanced attacker can even fake the IP address of the foreign machine. However, when considering the chain of actions as a whole (\ie an abnormal graph substructure), we can identify the intrusion. \section{Opportunities and Challenges} \label{sec:oc} Analyzing dynamic, attributed graphs is difficult. Graph anomaly detection in this setting requires detecting changes over time, which in turn requires a formal notion of similarity defined specifically for the target domain~\cite{akoglu2015graph}. With attributed vertices and edges, changes can occur both structurally and in labels. Provenance graphs further complicate the matter as each vertex and edge usually has a set of attributes (instead of a single \texttt{type} attribute), and the number of attributes varies depending on the type of the vertex/edge. To enable online intrusion detection, one must also consume the provenance graph in a streaming fashion and perform the analysis in real time. However, provenance graphs are acyclic, thus having a topological ordering that simplifies computation. Events can therefore be partially ordered as they are streamed for analysis~\cite{pasquier2017practical}. We can then efficiently reason over the vast amount of information contained in vertex and edge labels, which, combined with structural information, reflects various aspects of system execution. In the following sections, we discuss the main opportunities and challenges associated with provenance-based intrusion detection. \subsection{Opportunities} \label{sec:opportunities} \noindgras{Opportunity 1: Provenance graph structures and labels encode the complete, historical context of system execution.} A useful intrusion detection system learns detailed normal behavior from the past. Given flat audit data with no completeness guarantee, an IDS is limited by the data recorded in the audit logs. It is also difficult to obtain higher-order dependencies~\cite{ye2000markov}.
In some cases, the type of information it learns from is determined empirically by the attack vectors it is designed to detect. Such an ad hoc approach ultimately leads to the arms race described in~\autoref{sec:introduction}. In contrast, whole-system provenance provides a complete view of information flow that natively reflects higher-order correlations and long-range dependencies. Its graph structure also allows for graph-based analysis. We illustrate its benefits by describing the following principles that provenance analysis embodies. \noindent{{\tiny\ding{108}}}~\noindemph{Principle 1: Identify semantically meaningful substructures.} Provenance graphs can become large, obfuscating important events that require special attention. Complex system interactions within a task and between tasks further cloud understanding. Therefore, it is important to identify substructures/subgraphs that are semantically coherent (\eg describing a single task within a program). Macko \etal~\cite{macko2013local} developed two centrality metrics to perform local clustering on provenance graphs for task separation. Generic metrics used to discover communities are also applicable, albeit expensive in certain cases. Significant changes in those structures usually imply intrusions. For example, most control-data attacks alter the control flow of a program to execute injected malicious code. They typically start a new shell with the privilege of the victim process~\cite{chen2005non}, which inevitably introduces unexpected vertices and edges in the provenance graph. Akoglu \etal~\cite{akoglu2015graph} summarized various distance measures to detect structural anomalies in dynamic graphs. \noindent{{\tiny\ding{108}}}~\noindemph{Principle 2: Incorporate time.} The rate of provenance event creation is proportional to the kernel object access rate.
As each access to a security-sensitive kernel object results in (at least) one edge in the graph, provenance graphs reflect this rate through the number of vertices and/or edges per unit of time. Although some benign workloads exhibit a high rate of provenance generation (\eg building a kernel~\cite{pasquier2017practical}), bursts of intense provenance generation frequently indicate an attack. For example, attackers exploit race conditions to deploy Time-of-Check-to-Time-of-Use (TOCTOU) attacks. The fairly recent Dirty COW attack (CVE-2016-5195), in which the Linux kernel's memory subsystem incorrectly handled copy-on-write (COW), granting write access to private read-only memory mappings, used two threads simultaneously bombarding the system with \texttt{madvise} and \texttt{write} system calls. These calls produce elements of the provenance graph at a rate rarely observed during normal behavior. \noindent{{\tiny\ding{108}}}~\noindemph{Principle 3: Keep history in mind.} Advanced persistent threat (APT) attacks are usually a set of continuous, long-running processes that permeate the victim system. Noticing such attacks requires a holistic understanding of system execution starting from its initialization. In fact, any intrusion that requires retrospective analysis on previously processed portions of the graph can be discovered only if the detection system ``remembers'' history. However, the sheer volume of provenance data renders any attempt at a complete review impractical. One way to mitigate this needle-in-a-haystack problem is to incrementally build a concise yet comprehensive model that memorizes the historical context of the graph. For example, Lemay \etal~\cite{lemay2017automated} designed regular grammars for provenance DAGs to succinctly summarize the graph structure.
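As a deliberately simple illustration of Principle 2, the sketch below flags bursts with a sliding-window count over edge-creation timestamps; the window and threshold are hypothetical parameters, not values derived from any measured workload:

```python
from collections import deque

def burst_detector(event_times, window=1.0, threshold=100):
    """Flag timestamps at which more than `threshold` provenance events
    (new edges) fall within the trailing `window` seconds. Assumes
    `event_times` is sorted, as a provenance stream is."""
    recent = deque()
    alerts = []
    for t in event_times:
        recent.append(t)
        # Drop events that fell out of the trailing window.
        while recent and recent[0] <= t - window:
            recent.popleft()
        if len(recent) > threshold:
            alerts.append(t)
    return alerts
```

A deployment would learn thresholds from benign workloads (a kernel build legitimately sustains high rates) and track rates per subject, \eg per process, rather than globally.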
\noindgras{Opportunity 2: Provenance graphs are topologically and partially ordered.} This property follows naturally from the fact that provenance graphs are DAGs and that they truthfully reflect the causal relationships of events that occurred on the system. We took advantage of this property and designed a real-time provenance analysis framework to enable semantically rich security services~\cite{han2017frappuccino}. In particular, a vertex-centric graph framework facilitates provenance graph analysis with its correctness guaranteed by the two partial ordering properties: 1) once an outgoing edge from a vertex arrives, we know that we have observed all incoming edges to that vertex; 2) we receive all edges and vertices along a path in order. \noindgras{Opportunity 3: Provenance graphs enrich attack attribution and sense-making.} Attribution is an important feature that allows system administrators to quickly understand the source of an intrusion so that they can remedy the issue in a timely fashion and effectively control the damage. Many intrusion detection systems suffer from a high false positive rate. Attribution helps administrators quickly reject false positive alarms, effectively making the IDS more usable. Provenance graphs are causality graphs that naturally allow for sense-making, providing a causal chain of events for reasoning. For example, King \etal~\cite{king2003backtracking} designed a system that structures OS-level audit logs to automatically identify sequences of steps that occurred in an intrusion, starting from a single detection point. \subsection{Challenges} \label{sec:challenges} \noindgras{Challenge 1: It is difficult to obtain a good graph summary.} From Opportunity 1 (\autoref{sec:opportunities}), we see that a good graph summary should at least adhere to all the principles discussed.
For principles that do not consider graph structures, we can learn trends via machine learning or empirically, \eg by finding and setting a threshold. However, the streaming nature of provenance data for online intrusion detection makes graph analysis challenging. One approach is to segment the graph using a time window, though one needs to determine an appropriate window size. \noindgras{Challenge 2: Online intrusion detection requires efficient computation.} Even with the framework described in \autoref{sec:opportunities}, the computation itself (\eg to generate a good graph summary) must be efficient enough to detect an intrusion before it wreaks havoc on the system. Many intrusion detection systems require training on known datasets, which is often performed offline~\cite{axelsson2000intrusion}. Efficiency is therefore usually a primary concern during deployment. Complicated graph algorithms, such as subgraph isomorphism, are often NP-complete and are suitable only for small graphs. Machine learning and data mining approaches on graphs, \eg graph kernels, offer alternatives with polynomial or even linear time complexity. \noindgras{Challenge 3: The complexity of the system makes provenance graphs difficult to understand.} There exists a trade-off between the completeness of provenance and the succinctness of the resulting graph. With whole-system provenance capture, this trade-off becomes even clearer as a large number of underlying system dependencies are captured. For example, Liu \etal~\cite{liutowards} showed that a simple \texttt{sshd} command can trigger a massive number of Linux commands that are used to update Linux environment variables, which results in a large provenance subgraph describing these activities. However, they also proposed an algorithm that takes into account factors, such as rareness and dataflow termination, to determine the priority of events during backward and forward tracking of a provenance graph.
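At its core, the backward tracking performed by such systems is reverse reachability over the dependency graph; the sketch below strips away the prioritization heuristics discussed above and uses hypothetical vertex names:

```python
from collections import deque

def backtrack(depends_on, detection_point):
    """Return every object/event in the causal history of
    `detection_point`, following dependency edges backwards.
    `depends_on` maps each vertex to the vertices it was derived from."""
    seen = {detection_point}
    frontier = deque([detection_point])
    while frontier:
        v = frontier.popleft()
        for parent in depends_on.get(v, ()):
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return seen - {detection_point}
```

Factors such as rareness and dataflow termination would, in a full implementation, turn this breadth-first walk into a priority-driven one so that analysts see the most suspicious ancestors first.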
\section{Experience} \label{sec:experience} In prior work~\cite{han2017frappuccino}, we presented a provenance-based intrusion detection system. As we refined our system~\cite{pasquier2017practical}, we identified idiosyncrasies that differentiate intrusion detection via provenance from intrusion detection via audit logs. In addition to the properties already discussed, provenance captures interactions across applications that are invaluable in intrusion detection. Based on our prior experience, we identify the following keys to provenance-based intrusion detection: \noindent{{\tiny\ding{108}}}~\noindemph{Understand the provenance capture mechanism and the graph it produces:} It is important to understand what information is captured, how it is captured, and at what level of granularity. These all affect graph interpretation. For example, we have worked with capture systems that record thread-level details~\cite{pasquier2017practical} as well as systems that record only process-level details~\cite{gehani2012spade}. They have fundamentally different underlying capture mechanisms, and therefore, we need to make different assumptions about the provenance graphs they generate, even when they are capturing provenance of the same system execution. More importantly, we need to make correct assumptions, which is fundamental to the correctness of any provenance graph analysis. Consequently, it is essential to formalize the graphs produced by each capture mechanism individually, rather than to generalize across mechanisms. Sometimes, existing provenance capture systems may not fulfill the needs of an IDS; jointly developing a provenance capture system and a provenance-based IDS is most likely to improve the performance of both systems. \noindent{{\tiny\ding{108}}}~\noindemph{Build datasets to benchmark IDSes:} The lack of labeled datasets is a serious obstacle to work in this area. As provenance capture mechanisms evolve, a plug-and-play system that can automatically rerun experiments is valuable.
We use Vagrant to generate experimental data in a virtual environment~\cite{provdata}. However, labeling datasets is tricky~\cite{maggi2010detecting}. One cannot simply label an entire provenance graph as an ``intrusion'', since an IDS could mistakenly interpret a benign subgraph as an intrusion entry point. On the other hand, a provenance graph of seemingly normal system execution might contain unexpected execution errors, which, though not part of an intrusion, still deviate from specified normal behavior. This difficulty leads to misleading comparison metrics, such as precision, recall, and F-measure. Benchmarking IDSes remains an important open problem. \section{Conclusion} \label{sec:conclusion} We propose to realize robust, attack-vector-agnostic intrusion detection through analysis on provenance graphs and identify opportunities and challenges specific to whole-system, provenance-based intrusion detection. While the concept of OS-level provenance is almost a decade old, formalization and theoretical studies of its graphs have not yet materialized. Applying whole-system provenance to intrusion detection~\cite{han2017frappuccino} requires a formal understanding of provenance. We invite fellow researchers in both theory and provenance communities to continue this exploration with us. \bibliographystyle{ACM-Reference-Format}
\section*{Proof of Theorem~\ref{theom:markov}} The next two lemmas are useful towards proving Theorem~\ref{theom:markov}. Lemma~\ref{lemma:meanvar} shows how the variance of a Monte Carlo estimator decreases as a function of the number of samples. \begin{lemma} \label{lemma:meanvar} For $Y$ a random variable with mean $\mu_Y$ and variance $\sigma^2_Y$, define $Y_n$ as follows: \begin{equation*} Y_n = \frac{1}{n}\sum_{i=1}^{n} Y_i, \end{equation*} with each $Y_i$ an i.i.d.\ copy of $Y$ and $n\in \mathbb{N}$. Then, we have that $\sigma_{Y_n}^2 = \frac{1}{n} \sigma^2_Y$. \end{lemma} \begin{proof} The variance of $Y_n$ is: \begin{align*} \sigma^2_{Y_n} &= E[(Y_n - \mu_{Y_n})^2]\\ &= E[Y_n^2] - \mu_{Y_n}^2. \end{align*} Note that $\mu_{Y_n} = \mu_Y$, and using linearity of the expectation operator write: \begin{align*} \sigma^2_{Y_n} &= \frac{1}{n^2}\sum_{i,j}^{n} E\big[Y_i Y_j\big]- \mu_{Y}^2\\ &= \frac{1}{n^2}\bigg(\sum_{i}^{n} E\big[Y_i^2\big] + \sum_{i,j:i\neq j}^{n} E\big[Y_i Y_j\big]\bigg) - \mu_{Y}^2. \end{align*} Now, recall that each $Y_i$ is distributed i.i.d.\ as $Y$ and that $\sigma^2_Y=E[Y^2]-\mu_Y^2$. Then, \begin{align*} \sigma^2_{Y_n} &= \frac{1}{n^2}\bigg(n\cdot E[Y^2] + n(n-1)\cdot\mu_Y^2\bigg) - \mu_{Y}^2\\ &=\frac{1}{n} \bigg( E[Y^2] + (n-1)\cdot\mu_{Y}^2 - n\cdot \mu_{Y}^2 \bigg)\\ &=\frac{1}{n} \big(E[Y^2] - \mu_{Y}^2\big)\\ &= \frac{1}{n} \sigma^2_Y. \end{align*} \end{proof} Lemma~\ref{lemma:boost_confidence} links the number of repetitions of an experiment to the probability that the majority of repetitions succeed. We will use this argument to construct a median-based estimate. \begin{lemma} \label{lemma:boost_confidence} Let $X$ be a Bernoulli random variable with success probability $s\in[0,1]$. Define the random variable: \begin{equation*} X_r = \sum_{i=1}^{r} X_i, \end{equation*} with each $X_i$ an i.i.d.\ copy of $X$ and $r\in \mathbb{N}$.
Then, the probability of at most $\lfloor r/2 \rfloor$ successes is: \begin{equation*} \Pr(X_r \leq r/2) = \sum_{i=0}^{\lfloor r/2 \rfloor} \binom{r}{i} (s)^{i} (1-s)^{r-i}. \end{equation*} \end{lemma} \begin{proof} The proof is straightforward once one realizes that $X_r$ is a Binomial random variable with parameters $s$ and $r$. The desired probability is its cumulative distribution function evaluated at $r/2$. \end{proof} Next, we are ready to prove Theorem~\ref{theom:markov}. \begin{theorem} \label{theom:markov} For a random variable $Y$ with mean $\mu_Y$ and variance $\sigma^2_{Y}$, and user-specified parameters $\epsilon,\delta\in(0,1)$, it suffices to draw $O\big((\sigma^2_{Y}/\mu_Y^2)\,\epsilon^{-2}\log(1/\delta)\big)$ i.i.d.\ samples to compute an estimate $\overline{\mu}_Y$ such that: \begin{equation*} \Pr\bigg(\frac{|\overline{\mu}_Y-\mu_Y|}{\mu_Y}\geq \epsilon\bigg) \leq \delta. \end{equation*} \end{theorem} \begin{proof} From the well-known Chebyshev inequality (itself a consequence of Markov's inequality), we can write: \begin{equation*} \Pr\big(|Y-\mu_Y|\geq k\big) \leq \frac{\sigma^2_{Y}}{k^2}. \end{equation*} For our purposes, we let $k=\epsilon\mu_Y$ with positive $\mu_Y$. Then, we write: \begin{equation*} \Pr\bigg(\frac{|Y-\mu_Y|}{\mu_Y}\geq\epsilon\bigg) \leq \frac{\sigma^2_{Y}}{\epsilon^2\mu_Y^2}. \end{equation*} If we substitute $Y$ with $Y_n$ such that $n=\frac{\sigma^2_Y}{(1-s)\epsilon^{2}\mu_Y^{2}}$ (Lemma~\ref{lemma:meanvar}), then: \begin{equation*} \Pr\bigg(\frac{|Y_n-\mu_Y|}{\mu_Y}\geq\epsilon\bigg) \leq \frac{\sigma^2_{Y}/n}{\epsilon^2\mu_Y^2} = 1 - s. \end{equation*} Since a single experiment's success probability is at least $s$, we boost it up to $1-\delta$ via Lemma~\ref{lemma:boost_confidence}. First, let $\overline{\mu}_Y$ be the median of $r$ samples of $Y_n$. Then, note that the estimate $\overline{\mu}_Y$ ``fails''---lies outside the interval $\mu_Y(1\pm\epsilon)$---if and only if $r/2$ or more samples lie outside $\mu_Y(1\pm\epsilon)$.
Thus, choosing $s \in (0.5,1)$, the probability that $\overline{\mu}_Y$ fails is at most: \begin{align*} \sum_{i=0}^{\lfloor r/2 \rfloor} \binom{r}{i} (s)^{i} (1-s)^{r-i} &\leq\sum_{i=0}^{\lfloor r/2 \rfloor} \binom{r}{i} (s)^{r/2} (1-s)^{r/2}\\ &= (s-s^2)^{r/2}\sum_{i=0}^{\lfloor r/2 \rfloor} \binom{r}{i}\\ &\leq (s-s^2)^{r/2}\cdot 2^{r}\\ &= (4s-4s^2)^{r/2}. \end{align*} We use the previous bound to choose $r$ such that $(4s-4s^2)^{r/2}\leq \delta$. In particular, for $s=3/4$, we find: \begin{equation*} r = \frac{2}{\log(4/3)}\log(1/\delta). \end{equation*} To recap: construct a single experiment $Y_n$ using $n=O\big((\sigma^2_Y/\mu^2_Y)\,\epsilon^{-2}\big)$ samples, repeat the experiment $r=O(\log(1/\delta))$ times, and return the median $\overline{\mu}_Y$. Using $O\big((\sigma^2_Y/\mu^2_Y)\,\epsilon^{-2}\log(1/\delta)\big)$ samples in total, we showed this procedure returns $\overline{\mu}_Y$ in the range $(1\pm\epsilon)\cdot\mu_Y$ with probability at least $1-\delta$. \end{proof} \section*{Acknowledgments} The proof of Theorem~\ref{theom:markov} is adapted from Prof. Sinclair's online lecture notes~\cite{Sinclair2018}. \section{Counting-Based Network Reliability Evaluation} \noindent We begin this section with relevant mathematical background and notation; we then introduce the new method, termed {\relnet}, through a fully worked-out example of counting-based reliability estimation. \subsection{Principled network reliability approximation} \noindent Given an instance {\instance} of the {\kterminal} problem, we represent a realization of the stochastic graph {\real} as an $m$-bit vector $X=(x_e)_{e\in E}$, with $m = |E|$, such that $x_{e}=0$ if edge $e\in E$ is failed, and $x_e=1$ otherwise. Note that $\Pr(x_{e}=0)=p_{e}$, and that the set of possible realizations is $\Omega=\{0,1\}^m$. Furthermore, let $\Phi:\Omega\mapsto\{0,1\}$ be a function such that $\Phi(X)=0$ if some subset of $\mathcal{K}$ becomes disconnected, i.e. $X$ is \textit{unsafe}, and $\Phi(X)=1$ otherwise.
Also, we define the \textit{failure} and \textit{safe} domains as $\Omega_f=\{X\in\Omega: \Phi(X)=0 \}$ and $\Omega_s=\{X\in\Omega: \Phi(X)=1 \}$, respectively. In practice, we can evaluate $\Phi$ efficiently using breadth-first search. Network reliability, denoted as {\rel}, can be computed as follows: \begin{equation} \label{eq:brutef} \relf = 1 - \unrelf =\sum_{X\in \Omega} \Phi(X)\cdot\Pr(X), \end{equation} \begin{equation}\label{eq:probx} \Pr(X) = \prod_{e_i\in E}^{}p_{e_i}^{(1-x_{i})}\cdot(1-p_{e_i})^{x_{i}}, \end{equation} where Eq.~(\ref{eq:probx}) assumes independent edge failures. Clearly, the number of terms $|\Omega|=2^m$ of Eq.~(\ref{eq:brutef}) grows exponentially, rendering the brute-force approach useless in practice and motivating the development of network reliability evaluation methods, which can be grouped into three classes: exact methods and bounds, guarantee-less simulation, and probably approximately correct (PAC) methods. When exact methods fail to scale in reliability calculations, simulation is the preferred alternative. However, mainstream applications of simulation lack performance guarantees on error and computational cost. Typically, users embark on a trial-and-error process for choosing the sample size, trying to meet, if at all possible, a target empirical measure of variance such as the coefficient of variation. However, such approaches have been shown to be unreliable~\cite{Bayer2014}, jeopardizing reliability applications at a time when uncertainty quantification is key, as systems are increasingly complex~\cite{Ellingwood2016}. \input{pbp_pac} Recently, the authors introduced RelNet~\cite{Duenas-Osorio2017}, a counting-based framework for approximating the {\twoterminal} problem that issues $(\epsilon,\delta)$ guarantees. In this paper, we introduce {\relnet}, an extension that, to the best of our knowledge, is the first \textit{efficient} PAC method for the general {\kterminal} problem.
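To make the contrast with guarantee-less simulation concrete, the median-of-means procedure from the proof of Theorem~\ref{theom:markov} can wrap any crude Monte Carlo estimator; the sketch below fixes $s=3/4$, so a single experiment uses $n=\lceil 4(\sigma^2_Y/\mu_Y^2)\epsilon^{-2}\rceil$ samples:

```python
import math
import random

def pac_estimate(sample, var_over_mean_sq, eps, delta):
    """Median-of-means estimator following the proof of Theorem 1:
    returns a value within (1 +/- eps) of E[Y] with probability
    >= 1 - delta. `sample` draws one i.i.d. copy of Y, and
    `var_over_mean_sq` upper-bounds sigma_Y^2 / mu_Y^2."""
    # One experiment: n = sigma^2 / ((1 - s) eps^2 mu^2) with s = 3/4.
    n = math.ceil(4.0 * var_over_mean_sq / (eps * eps))
    # Repetitions: r = 2 log(1/delta) / log(4/3).
    r = math.ceil(2.0 * math.log(1.0 / delta) / math.log(4.0 / 3.0))
    means = sorted(sum(sample() for _ in range(n)) / n for _ in range(r))
    return means[r // 2]
```

For unreliability estimation, \texttt{sample} would draw a realization $X$ and return $1-\Phi(X)$, giving $\sigma^2_Y/\mu_Y^2=(1-u)/u$; this ratio blows up as the unreliability $u$ shrinks, which is exactly the regime where guarantee-less simulation is least trustworthy and where counting-based methods pay off.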
Next, we survey important background on Boolean logic before introducing {\relnet}. \subsection{Boolean logic} \noindent A Boolean formula $\psi:X \in\{0,1\}^n \to \{0,1\}$ is in conjunctive normal form (CNF) when written as $\psi(X)=C_1\wedge \cdots \wedge C_m$, with each clause $C_i$ a disjunction of literals, e.g., $C_1=x_1 \vee \neg x_2 \vee x_3$. We are interested in solving the $\#$SAT (``Sharp SAT'') problem, which counts the number of variable assignments satisfying a CNF formula. Formally, $\#\psi = \big|\big\{X\in \{0,1\}^n | \psi(X)=1\big\}\big|$. For example, consider the expression $x_1 \neq x_2$. Its CNF representation is $\psi(X) = (x_1 \vee x_2) \wedge (\neg x_1 \vee \neg x_2)$, and the number of satisfying assignments of $\psi$ is $\#\psi=2$. Furthermore, for Boolean vectors of variables $X=(x_1,\dots,x_n)$ and $S=(s_1,\dots,s_p)$, define a $\Sigma_1^1$ formula as one that is expressed in the form $F(X,S)=\exists S[ \psi(X,S)]$, with $\psi$ a CNF formula over variables $X$ and $S$. Similarly, we are interested in its associated counting problem, called projected counting or ``$\#\exists$SAT.'' Formally, $\#F = \big|\big\{X\in \{0,1\}^n |\exists S\text{ such that }\psi(X,S)=1\big\}\big|$. We use $\Sigma_1^1$ formulas because they let us introduce needed auxiliary variables ($S$) for global-level Boolean constraints, such as reliability, but count strictly over the problem variables ($X$). As an example, consider the expression $[(x_1 \text{ OR } s_1) \neq x_2]$. Its CNF representation is $\psi(X, S) = (x_1 \vee s_1 \vee x_2) \wedge (\neg x_1 \vee \neg s_1 \vee \neg x_2)$, and note the difference between the associated counts $\#\psi=6$ and $\#F=4$. The latter is smaller because the quantifier $\exists$ over variables $S$ ``projects'' the count over variables $X$.
To better grasp this projection, observe that $F(X,S)=\exists S [\psi(X,S)]$ is \textit{equivalent} to $\bigvee_{S\in\{0,1\}^p}[\psi(X, S)]$, which in our example simplifies to $(\neg x_1 \vee \neg x_2) \vee (x_1 \vee x_2)=1$, i.e., for every assignment of variables $X\in\{0,1\}^2$, there is $S\in\{0,1\}$ such that $F(X,S)=1$, and thus $\#F=4$. The equivalent form is shown only for illustration purposes, as it is intractable to work with due to its length growing exponentially in the number of variables in $S$. Instead, we feed $F(X,S)=\exists S [\psi(X,S)]$ to a state-of-the-art approximate model counter~\cite{SM19}. Next, we introduce $F_{\mathcal{K}}$, a $\Sigma_1^1$ formula encoding the unsafe property of a graph $G$, and show that $\#F_{\mathcal{K}}=|\Omega_f|$. Recall that $\Omega_f$ is the network failure domain $\Omega_f=\{X\in\Omega: \Phi(X)=0 \}$. Moreover, using a polynomial-time reduction to address arbitrary edge failure probabilities, we solve the {\kterminal} problem by computing $\#F_{\mathcal{K}}$. The problem of counting the number of satisfying assignments of a Boolean formula is hard in general, but it can be approximated efficiently via state-of-the-art PAC counters with access to an NP-oracle. In practice, an NP-oracle is a SAT solver capable of handling formulas with up to a million variables, which is orders of magnitude larger than typical network reliability instances.
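Both counts from the running example can be verified by exhaustive enumeration, which is viable only at toy scale (precisely why approximate counters backed by an NP-oracle are needed); clauses are encoded as lists of signed, 1-based literals:

```python
from itertools import product

def _satisfies(cnf, assign):
    # assign maps a 1-based variable index to True/False.
    return all(any((lit > 0) == assign[abs(lit)] for lit in clause)
               for clause in cnf)

def count_sat(cnf, variables):
    """#SAT: number of total assignments to `variables` satisfying cnf."""
    return sum(1 for vals in product([False, True], repeat=len(variables))
               if _satisfies(cnf, dict(zip(variables, vals))))

def count_projected(cnf, proj_vars, aux_vars):
    """#ExistsSAT: number of assignments to proj_vars for which some
    assignment of aux_vars satisfies cnf (the projected count #F)."""
    count = 0
    for xs in product([False, True], repeat=len(proj_vars)):
        fixed = dict(zip(proj_vars, xs))
        if any(_satisfies(cnf, {**fixed, **dict(zip(aux_vars, ss))})
               for ss in product([False, True], repeat=len(aux_vars))):
            count += 1
    return count

# psi(X, S) = (x1 v s1 v x2) ^ (~x1 v ~s1 v ~x2), with x1 = 1, x2 = 2, s1 = 3.
PSI = [[1, 3, 2], [-1, -3, -2]]
```

Counting \texttt{PSI} over all three variables gives $\#\psi=6$, while projecting onto $x_1,x_2$ gives $\#F=4$, matching the text.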
\begin{figure} \centering \subfloat[Weighted instance.\label{subf:weight}]{ \begin{tikzpicture}[darkstyle/.style={circle,draw,fill=gray!65,minimum size=0.4cm}] \node [darkstyle] (1) at (0,0) {\scriptsize a}; \node [circle, draw] (2) at (2,2) {\scriptsize b}; \node [circle, draw] (3) at (2,-2) {\scriptsize c}; \node [darkstyle] (4) at (4,0) {\scriptsize d}; \draw (1)--(2) node[midway, above] {$p_{e_1}=1/2$}; \draw (1)--(3) node[midway, above] {$p_{e_2}=3/8$}; \draw (2)--(4) node[midway, above] {$p_{e_3}=1/2$}; \draw (3)--(4) node[midway, above] {$p_{e_4}=1/2$}; \node [] (aux) at (2,0) {$(G,P)$}; \end{tikzpicture} } \subfloat[Unweighted instance.\label{subf:unweight}]{ \begin{tikzpicture}[darkstyle/.style={circle,draw,fill=gray!65,minimum size=0.4cm}] \pgfmathsetmacro{\dx}{0} \node [darkstyle] (1) at (0+\dx,0) {\scriptsize a}; \node [circle, draw] (2) at (2+\dx,2) {\scriptsize b}; \node [circle, draw] (13) at (1+\dx-.5,-1-.5) {\scriptsize $v_1$}; \node [circle, draw] (3) at (2+\dx,-2) {\scriptsize c}; \node [darkstyle] (4) at (4+\dx,0) {\scriptsize d}; \draw (1)--(2) node[midway, above] {$e_1$}; \draw (1)--(3) node[midway, above] {$e_2$}; \draw (2)--(4) node[midway, above] {$e_3$}; \draw (3)--(4) node[midway, above] {$e_4$}; \draw (1)--(13) node[midway, left] {$e_5$}; \draw (13)--(3) node[midway, below] {$e_6$}; \node [] (aux) at (2+\dx,0) {$(G',P_{1/2})$}; \end{tikzpicture}}\\ \subfloat[Terms of $\Sigma_1^1$ formula $F_{\mathcal{K}}$ and exact counting calculations.\label{subf:fk}]{ \begin{tikzpicture}[darkstyle/.style={circle,draw,fill=gray!65,minimum size=0.4cm}] \pgfmathsetmacro{\dx}{0} \node[] (tk) at (5,-4) {\small \begin{tabular}{l} \small $C_{e_1}=(s_a \wedge x_{e_1} \rightarrow s_b) \wedge (s_b \wedge x_{e_1} \rightarrow s_a)$, \small $C_{e_2}=(s_a \wedge x_{e_2} \rightarrow s_c) \wedge (s_c \wedge x_{e_2} \rightarrow s_a)$.\\ \small $C_{e_3}=(s_b \wedge x_{e_3} \rightarrow s_d) \wedge (s_d \wedge x_{e_3} \rightarrow s_b)$, \small $C_{e_4}=(s_c \wedge 
x_{e_4} \rightarrow s_d) \wedge (s_d \wedge x_{e_4} \rightarrow s_c)$.\\ \small $C_{e_5}=(s_a \wedge x_{e_5} \rightarrow s_{v_1}) \wedge (s_{v_1} \wedge x_{e_5} \rightarrow s_a)$, \small $C_{e_6}=(s_{v_1} \wedge x_{e_6} \rightarrow s_c) \wedge (s_c \wedge x_{e_6} \rightarrow s_{v_1})$.\\ \multicolumn{1}{l}{\small $S=\{s_a, s_b, s_c, s_d, s_{v_1}\}$, $F_{\mathcal{K}}=\exists S \big((s_a\vee s_d) \wedge (\neg s_a\vee \neg s_d) \wedge \bigwedge_{i=1}^{6} C_{e_i} \big)$.}\\ \multicolumn{1}{l}{ \small $\#F_{\mathcal{K}}=33$, $\unrelf=\#F_{\mathcal{K}}/2^{|E'|}=33/64$.} \end{tabular} }; \end{tikzpicture} } \caption{{\relnet} example with $\mathcal{K}=\{\text{a},\text{d}\}$. (a) Original instance, (b) its reduction to $p_e=1/2$, $\forall e \in E'$, and (c) exact counting of $\#F_{\mathcal{K}}$.} \label{fig:relnet} \end{figure} \subsection{Reducing network reliability to counting} \noindent Next, we introduce the {\relnet} formulation. Given propositional vertex variables $S=(s_u)_{u\in V}$ and propositional edge variables $X=(x_{e})_{e\in E}$, define: \begin{equation}\label{eq-tran} C_{e} = \big[(s_u \land x_{e}) \rightarrow s_v\big] \wedge \big[(s_v \land x_{e}) \rightarrow s_u\big],\quad \forall e\in E, \end{equation} \begin{equation}\label{eq-relnet} F_\mathcal{K} =\exists S[\psi(X,S)] =\exists S\bigg[\bigg(\bigvee_{j\in \mathcal{K}} s_j\bigg) \land \bigg(\bigvee_{k\in \mathcal{K}} \neg s_k\bigg) \land \bigwedge_{e\in E} C_{e}\bigg], \end{equation} where in Eq.~(\ref{eq-tran}), each edge $e\in E$ has end vertices $u,v\in V$. The propositional edge variable $x_{e}$ encodes the state of edge $e\in E$, such that $x_{e}$ is true iff $e$ is not failed, which is consistent with the representation of a realization of the stochastic graph $G(P)$ introduced earlier. An example of $F_{\mathcal{K}}$ is given in Figures~\ref{subf:unweight}--\ref{subf:fk}.
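The count reported in Figure~\ref{subf:fk} can be sanity-checked by enumerating all $2^{|E'|}=64$ realizations of the unweighted instance in Figure~\ref{subf:unweight} and counting those that disconnect the terminals a and d (a brute-force sketch with the example graph hard-coded):

```python
from itertools import product

# Unweighted instance G': edges e1..e6, terminals {a, d}.
EDGES = [("a", "b"), ("a", "c"), ("b", "d"),
         ("c", "d"), ("a", "v1"), ("v1", "c")]
TERMINALS = {"a", "d"}

def k_connected(up):
    """Phi(X): True iff all terminals lie in one connected component
    of the subgraph induced by the non-failed edges."""
    adj = {}
    for (u, v), alive in zip(EDGES, up):
        if alive:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
    start = next(iter(TERMINALS))
    seen, stack = {start}, [start]
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return TERMINALS <= seen

# |Omega_f|: realizations leaving the terminals disconnected.
failures = sum(1 for up in product([False, True], repeat=6)
               if not k_connected(up))
```

The enumeration returns $33$ unsafe realizations, matching $\#F_{\mathcal{K}}=33$ and $\unrelf=33/64$ in Figure~\ref{subf:fk}.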
Note that $F_{\mathcal{K}}$ is a $\Sigma^1_1$ formula,\footnote{Use the identity $(a \wedge b) \rightarrow c \equiv \neg a \vee \neg b \vee c$ for the constraints $C_e$ in Eq.~(\ref{eq-tran}).} and we define its associated set of satisfying assignments as $R_{F_\mathcal{K}}= \big\{X\in \Omega \mid (\exists S)\,\psi(X,S)=1\big\}$, such that $\#F_{\mathcal{K}}=|R_{F_\mathcal{K}}|$. Also, recall that the notation for the complement of a set $\Theta$ is $\overline{\Theta}$. The next lemma proves the core result of our reduction. \begin{lemma}\label{lm:pathcount} For a graph $G$ $=$ $(V,E,\mathcal{K})$, edge failure probabilities $P=(p_{e})_{e\in E}$, and $F_{\mathcal{K}}$ and $\Omega_f$ as defined above, we have $\#F_{\mathcal{K}}=|\Omega_f|$. Moreover, for $P_{1/2} = (1/2)_{e \in E}$, we have \begin{equation*} u_G(P_{1/2})=\frac{\#F_{\mathcal{K}}}{2^{|E|}}. \end{equation*} \end{lemma} \begin{proof} We use ideas from our previous work~\cite{Duenas-Osorio2017}, which deals with the special case $|\mathcal{K}|=2$. First, note that for finite sets $A$ and $B$ such that $|A| + |\overline{A}| = |B| + |\overline{B}|$, we have $|A|=|B|$ iff there is a bijective mapping from $\overline{A}$ to $\overline{B}$. Moreover, the number of unquantified variables in Eq.~(\ref{eq-relnet}) is $|E|$, so we can establish the following equivalence between the number of distinct edge variable assignments and system states: $|R_{F_{\mathcal{K}}}|+|\overline{R_{F_{\mathcal{K}}}}|=|\Omega_f|+|\overline{\Omega_f}|=2^{|E|}$. Next, we prove $X\in \overline{R_{F_{\mathcal{K}}}} \iff X \in \overline{\Omega_f}, \forall X\in \{0,1\}^{|E|}$, via a bijective mapping. 1) Case $\overline{\Omega_f} \to \overline{R_{F_{\mathcal{K}}}}$: assume $X\in\overline{\Omega_f}$, i.e. $\Phi(X)=1$ or, equivalently, $G$ is $\mathcal{K}$-connected. Next, we show that $X\in \overline{R_{F_{\mathcal{K}}}}$, i.e., $F_{\mathcal{K}}(X,S)$ evaluates to false for all possible assignments of the variables $S$, due to Eqs.~(\ref{eq-tran})--(\ref{eq-relnet}).
We show this by way of contradiction. Assume there is an assignment $S\in\{0,1\}^{|V|}$ such that $F_{\mathcal{K}}(X,S)$ is true. We deduce this happens iff (i) $\exists j,k\in\mathcal{K}$ such that $s_j\neq s_k$, from Eq.~(\ref{eq-relnet}), and (ii) for every edge $e\in E$ with end-vertices $u,v\in V$ we have $s_v=1$ (resp. $s_u=1$) whenever $x_e$ and $s_u$ (resp. $s_v$) are equal to $1$, due to clause $C_{e}$ in Eq.~(\ref{eq-tran}). Without loss of generality, we satisfy condition (i) by setting $s_j=1$ and $s_k=0$, with $j,k\in\mathcal{K}$. Recall $X\in\overline{\Omega_f}$, i.e. $\phi(X)=1$, so there is a path $P=\{j,\dots,k\}\subseteq V$ connecting vertices $j,k\in\mathcal{K}$ and traversing edges $T\subseteq E$ such that $x_e =1,\forall e\in T$. By iterating over constraints $C_{e}$, $\forall e\in T$, and since $s_j=1$, we are forced to assign $s_i=1,\forall i\in P$, to satisfy condition (ii). This assignment results in $s_k=1$, which contradicts condition (i), since we set $s_k=0$ at the outset. Thus, an $S\in\{0,1\}^{|V|}$ such that $F_{\mathcal{K}}(X,S)$ is true does not exist, and $X \in \overline{R_{F_{\mathcal{K}}}}$. 2) Case $\overline{R_{F_{\mathcal{K}}}} \to \overline{\Omega_f}$: assume $X\in\overline{R_{F_{\mathcal{K}}}}$, i.e. $F_{\mathcal{K}}(X,S)$ is false for every $S$, to show that $X\in\overline{\Omega_f}$. Again, by way of contradiction, we assume $X\in\Omega_f$; since no assignment $S$ can separate the terminals while satisfying every clause $C_e$, the arguments from above imply that the set of edges $T=\{e\in E| x_e=1\}$ connects every pair of vertices $i,j\in \mathcal{K}$, i.e. $\phi(X)=1$ by definition of $\phi$. This contradicts the definition $\Omega_f=\{X\in \{0,1\}^{|E|}| \phi(X)=0\}$. Thus, we conclude $X\in \overline{\Omega_f}$. Since we established a bijective mapping between $\overline{R_{F_{\mathcal{K}}}}$ and $\overline{\Omega_f}$, we conclude $\#F_{\mathcal{K}} = |\Omega_f|$. 
The last part of the lemma follows by noting that $\Pr(X)=1/2^{|E|}$ when $P=P_{1/2}$, so that $\unrelf=\sum_{X\in\Omega} (1-\phi(X))\cdot\Pr(X)= |\Omega_f|\cdot 1/2^{|E|}=\#F_{\mathcal{K}}/2^{|E|}$. \end{proof} Now we generalize $u_G(P_{1/2}) =\#F_{\mathcal{K}}/2^{|E|}$ to arbitrary edge failure probabilities. To this end, we use a weighted-to-unweighted transformation~\cite{Duenas-Osorio2017}. \subsection{Addressing arbitrary edge failure probabilities} \noindent The next definitions will be useful for stating our weighted-to-unweighted transformation. Let $0.b_1\cdots b_m$ be the binary representation of probability $q\in(0,1)$, i.e. $q=\sum_{k=1}^{m}b_k/2^k$. Define $z_k$ ($\bar{z}_k$) as the number of zeros (ones) among the first $k$ bits of the binary representation. Formally, $z_k=k-\sum_{i=1}^{k}b_i$ and $\bar{z}_k=k-z_k$, $\forall k\in L$, with $L=\{1,\dots,m\}$. Moreover, for $V=\{v_0,\dots,v_{z_m+1}\}$, define a function $\eta:L\to V\times V$ such that $\eta(k)=(v_{z_{k-1}},v_{z_k})$ if $b_k=0$, and $\eta(k)=(v_{z_{k-1}},v_{z_m+1})$ otherwise. We will show that, for $E=\bigcup_{k\in L}\eta(k)$ and $\mathcal{K}=\{v_0,v_{z_m+1}\}$, $G(V,E,\mathcal{K})$ is a series-parallel graph such that $r_G(P_{1/2})=q$. Thus, our weighted-to-unweighted transformation entails replacing every edge $e\in E$ whose failure probability differs from 1/2 with a reliability-preserving series-parallel graph $G^e$. For example, from Figure~\ref{subf:weight}, the binary representation of $1-p_e=5/8$ is 0.101, so we have $m=3$, $z_m=1$, and $\bar{z}_m=2$. Also, we replace edge $e_2$ with a series-parallel graph $G^{e_2}$ using the construction from above, which yields $V^{e_2}=\{v_0, v_1, v_2\}$, $E^{e_2}=\{(v_0, v_{2}),(v_0, v_1),(v_1, v_2)\}$, and terminal set $\mathcal{K}^{e_2}=\{v_0, v_2\}$. 
Since $u_{G^{e_2}}(P_{1/2})=3/8$, we replace $e_2$ by $G^{e_2}$ as shown in Figure~\ref{subf:unweight}, where $v_0=a$ and $v_2=c$, for consistency with the global labeling of the figure. The next lemma proves the correctness of this transformation. \begin{lemma} \label{lm:chaingraph} Given probability $q=0.b_1\cdots b_m$ in binary form, graph $G=(V,E,\mathcal{K})$ such that $V=\{v_0,\dots,v_{z_m+1}\}$, $E=\{\eta(1),\dots,\eta(m)\}$ and $\mathcal{K}=\{v_0, v_{z_m+1}\}$, and $P_{1/2}=(1/2)_{e\in E}$, we have $r_{G}(P_{1/2})=q$ and $|V|+|E|=z_{m}+2+m$. \end{lemma} \begin{proof} Define $G_k=(V_k,E_k), \forall k\in L$, with $E_k=\{\eta(1),\dots,\eta(k)\}$ and $V_k=\cup_{i=1}^{k}\{v_j:v_j\in\eta(i)\}$. Clearly, $V=V_m$ and $E=E_m$. The key observation is that $G$ is a series-parallel graph and that we can enumerate all paths from $v_0$ to $v_{z_m+1}$ in $G$. Let $k_1=\min\{k\in L:b_{k}=1\}$. Then, the edge set $E_{T_1}=E_{k_1}$ forms a path from $v_0$ to $v_{z_m+1}$, denoted $T_1$, with vertex sequence $(v_0,\dots,v_{z_{k_1}},v_{z_m+1})$, size $|E_{T_{1}}|=z_{k_1}+1$, and $\Pr(T_1)=1/2^{z_{k_1}+1}$. Next, for $k_2$ the second smallest element of $L$ such that $b_{k_2}=1$, $G_{k_2}$ contains a total of two paths, $T_1$ and $T_2$, with $T_1$ as before and $E_{T_2}=E_{k_2} \setminus \{(v_{z_{k_1}}, v_{z_m+1})\}$ of size $z_{k_2}+1$. Also, $E_{k_2}=E_{T_1}\cup E_{T_2}$ and $E_{T_1}\cap E_{T_2}=E_{T_1}\setminus \{(v_{z_{k_1}}, v_{z_m+1})\}$. Thus, the event $\overline{T}_1T_2$ happens iff edge $(v_{z_{k_1}}, v_{z_m+1})$ fails and edges in $E_{T_2}$ do not fail, letting us write $\Pr(\overline{T}_1T_2)=1/2\cdot 1/2^{z_{k_2}+1}$. For $k_j$ the $j$-th smallest element of $L$ such that $b_{k_j}=1$, $G_{k_j}$ has a total of $j=\bar{z}_{k_j}$ paths, with $E_{T_j}=E_{k_j} \setminus\cup_{i=1}^{j-1} \{(v_{z_{k_i}}, v_{z_m+1})\}$, $|E_{T_j}|=z_{k_j}+1$, and $E_{k_j}=\cup_{i=1}^{j}E_{T_i}$. 
Furthermore, event $\overline{T}_1\cdots \overline{T}_{j-1}T_{j}$ happens iff edges in $\cup_{i=1}^{j-1} \{(v_{z_{k_{i}}}, v_{z_m+1})\}$ fail and edges in $E_{T_j}$ do not fail. Thus, $\Pr(\overline{T}_1\cdots \overline{T}_{j-1}T_{j})=1/2^{\bar{z}_{k_j}-1}\cdot 1/2^{z_{k_j}+1}=1/2^{k_j}$. This leads to $\relf=\Pr(T_1)+\Pr(\overline{T}_1T_2)+\dots+\Pr(\overline{T}_1\cdots \overline{T}_{\bar{z}_m-1}T_{\bar{z}_m})=\sum_{i=1}^{\bar{z}_m}1/2^{k_i}$. Rewriting the summation over all $k\in L$ yields $\relf=\sum_{k=1}^{m}b_k/2^{k}$, which is precisely the value of $q=0.b_1\cdots b_m$. Furthermore, $|V|=z_m+2$ and $|E|=m$ from their definitions. \end{proof} Now we leverage Lemma~\ref{lm:chaingraph} to introduce our general counting-based algorithm for the {\kterminal} problem. \subsection{The new algorithm: {\relnet}} {\relnet} is presented in Algorithm~\ref{alg:relnet}. Theorem~\ref{th:3} proves its correctness. Figure~\ref{fig:relnet} illustrates the exact version, beginning with the reduction to failure probabilities of 1/2, and concluding with the construction of $F_\mathcal{K}$ and exact counting of its satisfying assignments. In Algorithm~\ref{alg:relnet}, however, we use an approximate counter giving $(\epsilon,\delta)$ guarantees~\cite[][Chapter 4]{Meel2017}. \begin{theorem}\label{th:3} Given an instance {\instance} of the {\kterminal} problem and $M$ defined as in Algorithm~\ref{alg:relnet}: \begin{equation*} \unrelf=\#F_{\mathcal{K}}/2^M. \end{equation*} \end{theorem} \begin{proof} The proof follows directly from Lemmas~\ref{lm:pathcount} and~\ref{lm:chaingraph}. First, note that the transformation in step 1 of {\relnet} outputs an instance ($G',P_{1/2}$) such that $\unrelf=u_{G'}(P_{1/2})$, where $P_{1/2}$ denotes that every edge in $E'$ fails with probability 1/2~(Lemma~\ref{lm:chaingraph}). Then, step 2 takes $G'$ and outputs $F_{\mathcal{K}}$ such that $u_{G'}(P_{1/2}) =|R_{F_{\mathcal{K}}}|/2^M$ (Lemma~\ref{lm:pathcount}). Finally, $\unrelf =|R_{F_{\mathcal{K}}}|/2^M$. 
\end{proof} \begin{algorithm} \caption{{\relnet}} \label{alg:relnet} \begin{algorithmic}[1] \Statex \textbf{Input:} Instance {\instance} and $(\epsilon, \delta)$-parameters. \Statex \textbf{Output:} PAC estimate {\unrele}. \State Construct $G'$$=$$(V',E',\mathcal{K})$ by replacing every edge $e\in E$ with $G^e$ such that $1-p_e=0.b_1\cdots b_{m_e}$ and $u_{G^e}(P_{1/2})=p_e$~(Lemma~\ref{lm:chaingraph}). \State Let $M = \sum_{e \in E} m_e = |E'|$, and construct $F_{\mathcal{K}}$ using $G'$ from Eq.~\ref{eq-relnet}. \State Invoke ApproxMC2, a hashing-based counting technique~\cite[][Chapter 4]{Meel2017}, to compute $\overline{\#F_{\mathcal{K}}}$, an approximation of $\#F_{\mathcal{K}}$ with $(\epsilon,\delta)$ guarantees. \Statex ${\unrelef} \gets \overline{\#F_{\mathcal{K}}}/2^{M}$ \end{algorithmic} \end{algorithm} Steps 1-2 run in polynomial time in the size of $(G,P)$. Step 3 invokes ApproxMC2~\cite{Meel2017} to approximate $\#F_{\mathcal{K}}$. In turn, ApproxMC2 has access to a SAT-oracle and runs in time polynomial in $\log 1/\delta$, $1/\epsilon$, and $|F_{\mathcal{K}}|$. Thus, relative to a SAT-oracle, {\relnet} approximates {\unrel} with $(\epsilon,\delta)$ guarantees in the FPRAS theoretical sense. Also, we note that ApproxMC2's $(\epsilon, \delta)$ guarantees are for the multiplicative error $\Pr(1/(1+\epsilon)\unrelf\leq\unrelef\leq(1+\epsilon)\unrelf )\geq 1-\delta$~\cite{Meel2017}. This is a tighter error constraint than the relative error of Eq.~(\ref{eq:pac}), as one can show that $1-\epsilon \leq 1/(1+\epsilon)$ for $\epsilon\in(0,1)$. Thus, if an approximation method satisfies the multiplicative error guarantees, then it also satisfies the relative error guarantees. The converse is not true, and herein we will omit this advantage of {\relnet} over other methods for ease of comparison. Moreover, a SAT-oracle is a SAT-solver able to answer satisfiability queries with up to a million variables in practice. 
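To make step 1 concrete, the Lemma~\ref{lm:chaingraph} gadget can be generated and checked by exhaustive enumeration. The sketch below is ours (hypothetical helper names), not the paper's implementation:

```python
from fractions import Fraction
from itertools import product

def gadget_edges(bits):
    """Series-parallel gadget of Lemma (chaingraph): given q = 0.b1...bm in
    binary, return edges over vertices v_0..v_{z_m+1} so that two-terminal
    reliability between v_0 and v_{z_m+1} under p_e = 1/2 equals q."""
    z = [0]
    for b in bits:
        z.append(z[-1] + (1 - b))          # z_k = zeros among b_1..b_k
    t = z[len(bits)] + 1                   # terminal vertex v_{z_m + 1}
    edges = [(z[k - 1], z[k]) if b == 0 else (z[k - 1], t)
             for k, b in enumerate(bits, start=1)]
    return edges, t

def reliability_half(edges, s, t):
    """Brute-force r_G(P_{1/2}) for terminals {s, t} over all edge states."""
    n = 0
    for state in product([0, 1], repeat=len(edges)):
        live = [e for e, up in zip(edges, state) if up]
        seen, stack = {s}, [s]             # BFS over surviving edges
        while stack:
            u = stack.pop()
            for a, b in live:
                w = b if a == u else a if b == u else None
                if w is not None and w not in seen:
                    seen.add(w)
                    stack.append(w)
        n += t in seen
    return Fraction(n, 2 ** len(edges))

edges, t = gadget_edges([1, 0, 1])         # q = 0.101 in binary = 5/8
print(edges)                               # [(0, 2), (0, 1), (1, 2)]
print(reliability_half(edges, 0, t))       # 5/8
```

For $q=5/8$ this reproduces the gadget $E^{e_2}$ discussed above; bit strings with repeated ones yield parallel edges, which the enumeration handles since edges are kept in a list.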
$F_{\mathcal{K}}$ has $|V'|+|E'|$ variables. {\relnet}'s theoretical guarantees now demand context relative to existing methods, before we perform computational experiments verifying its performance in practice. \section{Context Relative to Competitive Methods}\label{sec:related_work} This section briefly contextualizes our work relative to competitive techniques for network reliability evaluation, so as to facilitate the comparative analyses in Section~4. We arrange methods into three groups: exact or bounds, guarantee-less simulation, and probably approximately correct (PAC). \subsection{Group (i): Exact or Bounds} \noindent Network reliability is \#P-complete, a complexity class largely believed to be intractable. This means that the task of computing {\unrel} efficiently is seemingly hopeless. While of limited application, the most popular techniques in this group employ approaches such as state enumeration~\cite{Ball1995}, direct decomposition~\cite{Dotson1979}, factoring~\cite{Satyanarayana1983}, or compact data structures like binary decision diagrams (BDDs)~\cite{Hardy2007}. We refer the reader to the cited literature for a survey of exact methods~\cite{Ball1995,Le2014}. The intractability of reliability problems motivates exploiting properties from graph theory. For example, in the case of bounded treewidth and degree, there are efficient algorithms available~\cite{Hardy2007, Canale2016}. Another promising family of methods issues fast-converging bounds~\cite{Dotson1979,Le2013}, an approach that demonstrates practical performance even in earthquake engineering applications~\cite{Lim2012}, and that is applicable beyond connectivity-based reliability as part of the more general state-space-partition principle~\cite{Paredes2018,Alexopoulos1995}. \subsection{Group (ii): Guarantee-less simulation} \noindent When exact methods fail, guarantee-less simulations have found wide applicability. 
In the context of unbiased estimators,\footnote{That a guarantee-less method is unbiased is key, as boosting confidence by repeating experiments and leveraging the central limit theorem would otherwise lack justification.} a key property is the relative variance $\sigma^2_Y/\mu_Y^2$, with $Y$ a randomized Monte Carlo procedure such that $E[Y]={\unrelf}$. From Theorem~\ref{theom:markov} (Appendix), we know that if a method verifies the bounded relative variance (BRV) property, i.e., $\sigma^2_Y/\mu_Y^2\leq C$ for $C$ some constant, then an efficient $(\epsilon,\delta)$ approximation is guaranteed with a sample size of $N=O(\epsilon^{-2}\log1/\delta)$. While certain methods verify the BRV property, the value of $C$ is typically unknown for general instances of the {\kterminal} problem, and thus the central limit theorem is often invoked for drawing confidence intervals despite known caveats~\cite{Bayer2014}. Some techniques verifying the BRV property include the permutation Monte Carlo-based Lomonosov's Turnip (LT)~\cite{Gertsbakh2016} and its sequential splitting extension, the Split-Turnip (ST)~\cite{Vaisman2016}, and the importance sampling variants of the recursive variance reduction (RVR) algorithm~\cite{Cancela2014}. They significantly outperform the crude Monte Carlo (CMC) method in the rare-event setting, with RVR even displaying the VRV property in select instances, as evidenced in empirical evaluations. As we noted, the number of samples in the crude Monte Carlo approach scales like $1/{\unrelf}$, which can be problematic in highly reliable systems. A more promising approach leverages the Markov chain Monte Carlo method and the product estimator~\cite{Jerrum1988,Fishman1994}, where the estimation of a small {\unrel} is bypassed by estimating a product of larger quantities. Significantly, the sample size roughly scales like $\log1/{\unrelf}$~\cite{Dyer1991}. 
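As a baseline for the estimators above, crude Monte Carlo can be sketched on a toy instance (a hypothetical triangle graph with all names ours); its per-sample relative variance is $(1-\unrelf)/\unrelf$, which is where the $1/\unrelf$ sample-size scaling comes from:

```python
import random

# Sketch of crude Monte Carlo (CMC) for two-terminal unreliability on a
# hypothetical toy instance: triangle graph, terminals {a, b}, uniform edge
# failure probability p. The exact answer for p = 1/2 is 3/8.
E = [("a", "b"), ("b", "c"), ("a", "c")]

def connected(live, s, t):
    """BFS from s over the surviving edges; True iff t is reached."""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in live:
            w = b if a == u else a if b == u else None
            if w is not None and w not in seen:
                seen.add(w)
                stack.append(w)
    return t in seen

def cmc(p, n, rng):
    """Fraction of n sampled states in which the terminals are disconnected."""
    fails = 0
    for _ in range(n):
        live = [e for e in E if rng.random() > p]  # edge survives w.p. 1 - p
        fails += not connected(live, "a", "b")
    return fails / n

print(cmc(0.5, 100_000, random.Random(0)))  # close to the exact 3/8 = 0.375
```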
The product estimator is popularly referred to as multilevel splitting, as it has independently appeared in other disciplines~\cite{Glasserman1999,kahn1951estimation,rosenbluth1955monte}, and even more recently in the civil and mechanical engineering fields under the name of subset simulation~\cite{Au2001}. In the case of network reliability, the latent variable formulation by Botev et al.~\cite{Botev2012}, termed generalized splitting (GS), delivers unbiased estimates of {\unrel}. The similar approach by Zuev et al.~\cite{Zuev2015} is not readily applicable to the {\kterminal} problem and delivers biased samples, which can be an issue when rigorously assessing confidence. \subsection{Group (iii): PAC methods} \noindent In a breakthrough paper, Karger gave the first efficient approximation for the all-terminal network unreliability problem~\cite{Karger2001}. However, Karger's algorithm is not always practical despite recent improvements~\cite{Karger2016}. Also, unlike {\relnet}, Karger's algorithm is not readily applicable to the more general $\mathcal{K}$-terminal network reliability problem. Besides our network reliability PAC approximation technique, {\relnet}, which is specialized to the {\kterminal} problem, there are general Monte Carlo sampling schemes that deliver $(\epsilon,\delta)$ guarantees. The remainder of this subsection highlights relevant methods that are readily implementable in Monte Carlo-based network reliability calculations. Denoting by $Y$ the random samples produced by unbiased sampling-based estimators, traditional simulation approaches take the average of i.i.d. samples of $Y$. Such estimators can be integrated into optimal Monte Carlo simulation (OMCS) algorithms~\cite{Dagum2000}. 
An algorithm $A$ is said to be optimal (up to a constant factor) when its sample size $N_A$ is not proportionally larger in expectation than the sample size $N_B$ of any other algorithm $B$ that is also an $(\epsilon,\delta)$ randomized approximation of $\mu_Y$, and that has access to the same information as $A$, i.e., $E[N_A] \leq c \cdot E[N_B]$ with $c$ a universal constant. \begin{algorithm} \caption{ Stopping Rule Algorithm ({\sra})~\cite{Dagum2000}.} \small \label{alg:sra} \begin{algorithmic}[1] \Statex \textbf{Input:} $\epsilon, \delta\in(0,1)$ and random variable $Y$. \Statex \textbf{Output:} Estimate $\unrelef$ with PAC guarantees. \Statex Let $\{Y_i\}$ be a set of i.i.d. samples of $Y$. \Statex Compute constants $\Upsilon = 4(e-2)\log(2/\delta)1/\epsilon^2$, $\Upsilon_1 = 1+(1+\epsilon)\cdot\Upsilon$. \Statex Initialize $S\gets 0$, $N\gets 0$. \Statex \textbf{while} $(S<\Upsilon_1)$ \textbf{do:} $N\gets N + 1$, $S\gets S+Y_N$. \Statex $\unrelef\gets \Upsilon_1/N$ \end{algorithmic} \end{algorithm} A simple and general-purpose black-box algorithm to approximate {\unrel} with PAC guarantees is the \textit{Stopping Rule Algorithm} ({\sra}) introduced by Dagum et al.~\cite{Dagum2000}. The convergence properties of {\sra} were shown through the theory of martingales, and its implementation is straightforward (Algorithm~\ref{alg:sra}). Even though {\sra} is optimal up to a constant factor for RVs with support $\{0,1\}$, a different algorithm and analysis leads to the \textit{Gamma Bernoulli Approximation Scheme} ({\gbas})~\cite{Huber2017}, which improves the expected sample size by a constant factor over {\sra} and demonstrates superior performance in practice due to improved lower-order terms in its guarantees. {\gbas} has the additional advantage with respect to {\sra} of being unbiased, and it is relatively simple to implement. The core idea of {\gbas} is to construct an RV whose relative error probability distribution is known. 
The procedure is shown in Algorithm~\ref{alg:gbas}, where $I$ is the indicator function, $\text{Unif}(0,1)$ is a random draw from the uniform distribution on $[0,1]$, and $\text{Exp}(1)$ is a random draw from an exponential distribution with parameter $\lambda=1$. Also, Algorithm~\ref{alg:gbas} requires parameter $k$, which is set as the smallest value that guarantees $\delta \geq \Pr(\mu_Y/\hat\mu_Y>(1+\epsilon)^2 \text{ or } \mu_Y/\hat\mu_Y<(1-\epsilon)^2 )$ with $\mu_Y/\hat\mu_Y\sim \text{Gamma}(k, k-1)$~\cite{Huber2017}. In practice, values of $k$ for relevant $(\epsilon,\delta)$ pairs can be tabulated. Alternatively, if one can evaluate the cumulative distribution function (cdf) of a Gamma distribution, galloping search can be used to find the optimal value of $k$ with logarithmic overhead (in the number of cdf evaluations). \begin{algorithm} \caption{ Gamma Bernoulli Approximation Scheme ({\gbas})~\cite{Huber2017}.} \small \label{alg:gbas} \begin{algorithmic} \Statex \textbf{Input:} $k$ parameter. \Statex \textbf{Output:} Estimate $\unrelef$ with PAC guarantees. \Statex Let $\{Y_i\}$ be a set of independent samples. \Statex Initialize $S \gets 0$, $R\gets 0$, $N\gets 0$. \While{$(S \neq k)$} \State $N\gets N + 1$, $B \gets I(\text{Unif}(0,1) \leq Y_N )$ \State $S\gets S+ B$, $R \gets R + \text{Exp}(1)$ \EndWhile \Statex $\unrelef\gets (k-1)/R$ \end{algorithmic} \end{algorithm} Note that {\sra} and {\gbas} give PAC estimates with an optimal expected number of samples for RVs with support $\{0,1\}$, yet they disregard the variance reduction properties of more advanced techniques. Thus, one may wonder: is there a way to exploit a randomized procedure $Y$ such that $\sigma_{Y} \ll \sigma_{Y^{CMC}}$ in the context of OMCS? The \textit{Approximation Algorithm} (\aaa), introduced by Dagum et al.~\cite{Dagum2000}, and based on sequential analysis~\cite{wald1973sequential}, gives a partially favorable answer. 
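Before turning to {\aaa}, the {\gbas} loop can be sketched compactly; this is an illustrative sketch assuming a black-box Bernoulli sampler (for $Y\in[0,1]$, one first applies the indicator trick $B=I(\text{Unif}(0,1)\leq Y)$), and the output $(k-1)/R$ is Huber's unbiased estimate of the mean:

```python
import random

def gbas(draw, k, rng):
    """Sketch of GBAS: sample until k Bernoulli successes, pairing every
    draw with an Exp(1) increment; return the unbiased estimate (k-1)/R."""
    S, R = 0, 0.0
    while S < k:
        S += draw()                # Bernoulli(mu) sample from the black box
        R += rng.expovariate(1.0)  # Exp(1) increment for every sample
    return (k - 1) / R

# Usage with a synthetic Bernoulli(0.3) sampler; larger k tightens the
# (epsilon, delta) guarantee (relative error roughly 1/sqrt(k)).
sampler_rng = random.Random(0)
est = gbas(lambda: sampler_rng.random() < 0.3, k=1000, rng=random.Random(1))
print(est)  # close to the true mean 0.3
```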
In particular, steps 1 and 2 (Algorithm~\ref{alg:aa}) are trial experiments that give rough estimates of $\mu_Y$ and $\sigma_Y^2/\mu_Y^2$, respectively. Then, step 3 is the actual experiment that outputs {\unrele} with PAC guarantees. {\aaa} assumes $Y\in[0,1]$, and it was shown to be optimal up to a constant factor. The downside of {\aaa}, or of any OMCS algorithm since {\aaa} is optimal, is that it requires in expectation $N_{\aaaf}=O(\max\{\sigma^2_{Y}/\mu_Y^2,\epsilon/\mu_Y\}\cdot\epsilon^{-2}\ln1/\delta)$ samples. Thus, despite considering the relative variance $\sigma^2_{Y}/\mu_Y^2$, OMCS algorithms become impractical in the rare-event regime. For example, consider the case in which edge failure probabilities tend to zero and $1/\mu_Y$ goes to infinity. If a technique delivers $Y$ that meets the BRV property, i.e., $\sigma^2_{Y}/\mu_Y^2\leq C$ for $C$ some constant, then, from Theorem~\ref{theom:markov} (Appendix), we know a sample of size $N=O(\epsilon^{-2}\ln1/\delta)$ suffices, meanwhile $N_{\aaaf}\rightarrow\infty$. \begin{algorithm} \caption{The Approximation Algorithm ({\aaa})~\cite{Dagum2000}.} \label{alg:aa} \begin{algorithmic}[1] \Statex \textbf{Input:} $(\epsilon, \delta)$-parameters. \Statex \textbf{Output:} Estimate $\unrelef$ with PAC guarantees. \Statex Let $\{Y_i\}$ and $\{Y'_i\}$ be two sets of independent samples of $Y$. 
\State $\epsilon'\gets \min\{1/2,\sqrt{\epsilon}\}$, $\delta'\gets\delta/3$ \Statex $\hat{\mu}_Y\gets \textit{SRA}(\epsilon',\delta')$ \State $\Upsilon\gets4(e-2)\log(2/\delta)1/\epsilon^2$ \Statex $\Upsilon_2\gets 2(1+\sqrt{\epsilon})(1+2\sqrt{\epsilon})(1+\ln(3/2)/\ln(2/\delta))\Upsilon$ \Statex $N\gets\Upsilon_2\cdot\epsilon/\hat\mu_Y$, $S\gets0$ \Statex \textbf{for} ($i=1,\dots,N$) \textbf{do}: $S\gets S+(Y'_{2i-1}-Y'_{2i})^2/2$ \Statex $\hat{r}^*_Y \gets \max\{S/N,\epsilon\hat{\mu}_Y\}/\hat\mu_Y^2$ \State $N_{\aaaf}\gets \Upsilon_2 \cdot\hat{r}^*_Y$, $S\gets0$ \Statex \textbf{for} ($i=1,\dots,N_{\aaaf}$) \textbf{do}: $S\gets S+Y_i$ \Statex $\unrelef\gets S/N_{\aaaf}$ \end{algorithmic} \end{algorithm} We will use {\gbas} for CMC with $(\epsilon,\delta)$ guarantees, and use {\aaa}, given its generality, to turn various existing techniques into PAC methods themselves. For {\aaa}, note that the rough estimate $\hat{\mu}_Y$ in step 1 is computed using $Y^{CMC}$, as it is the cheapest; from step 2 onward, the estimator under test is used, and we report the runtime of step 3 so as to measure variance reduction and runtime without trial experiments. \section{Conclusions and Future Work} \noindent We introduced a new logic-based method for the {\kterminal} problem, {\relnet}, which offers rigorous guarantees on the quality of its approximations. We examined this method relative to several other competitive approaches. For non-exact methods we emphasized desired relative variance properties: bounded by a polynomial in the size of the input instance (FPRAS), bounded by a constant (BRV), or tending to zero (VRV). We turned popular estimators in the literature into probably approximately correct (PAC) ones by embedding them into an optimal Monte Carlo algorithm, and showed their practical performance using a set of benchmarks. 
Our tool, {\relnet}, is the first approximation algorithm for the {\kterminal} problem giving strong performance guarantees in the FPRAS sense (relative to a SAT-oracle). Also, {\relnet} gives rigorous multiplicative error guarantees, which are stricter than relative error guarantees. However, its performance in practice remains constrained to not-too-small edge failure probabilities ($\approx0.1$), which is still practical when conditioned on catastrophic hazard events. Thus, our future work will pursue more efficient encodings and solution approaches, especially for smaller edge failure probabilities. Moreover, promising advances in approximate model counting and SAT solvers will render {\relnet} more efficient over time, given its reliance on SAT oracles. Embedding estimators with desired relative variance properties into PAC methods proved to be an effective strategy in practice, but only when failures are not rare. Despite this relative success, the strategy becomes impractical when {\unrel} approaches zero. Thus, future research can address these issues on two fronts: (i) establishing parameterized upper bounds on the relative variance of new and previous estimators when they exist, and (ii) developing new PAC methods with faster convergence guarantees than those of the canonical Monte Carlo approach. PAC estimation is a promising yet developing approach to system reliability estimation. Beyond the {\kterminal} problem, its application can be challenging in the rare-event regime, but in all other cases it can serve much more broadly as an alternative to the less rigorous---albeit pervasive---empirical study of the variance through replications and asymptotic assumptions appealing to the central limit theorem. In fact, methods such as {\gbas} deliver exact confidence intervals using all samples at the user's disposal. 
In future work, the authors will explore general-purpose PAC methods that can be employed in the rare-event regime, developing a unified framework to conduct reliability assessments with improved knowledge of uncertainties, thereby promoting engineering resilience and aligning it with the measurement sciences. \section*{Acknowledgments} \noindent The authors gratefully acknowledge the support by the U.S. Department of Defense (Grant W911NF-13-1-0340) and the U.S. National Science Foundation (Grants CMMI-1436845 and CMMI-1541033). \biboptions{numbers,sort&compress} \bibliographystyle{elsarticle-num} \section{Computational Experiments}\label{sec:experiments} \noindent A fair way to compare methods is to test them against challenging benchmarks and quantify empirical measures of performance relative to their theoretical promise. We take this approach to test {\relnet} alongside competitive methods. The following subsections describe our experimental setting, list the implemented methods, and present their application to various benchmarks. \subsection{Implemented estimation methods} Table~\ref{t:methods} lists the reliability evaluation methods that we consider in our numerical experiments. Exact methods run until giving an exact estimate, or the best bounds available at timeout. Each guarantee-less simulation method uses a custom number of samples $N$ that depends on the shared parameter $N_S$ (Table~\ref{t:methods}). This practice, borrowed from Vaisman et al.~\cite{Vaisman2016}, tries to account for the varying computational cost of samples among methods. PAC algorithms {\aaa} or {\gbas} are used in combination with guarantee-less sampling methods to compare runtime given a target precision. For example, {\gbas}($Y^{\text{CMC}}$) denotes Algorithm~\ref{alg:gbas} when samples are drawn from the CMC estimator. Experiments with {\aaa} use $(\epsilon,\delta)=(0.2,0.2)$. Experiments embedded in {\gbas} use two configurations: $(0.2,0.2)$ and $(0.2,0.05)$. 
{\relnet} uses $(0.8,0.2)$ to avoid timeouts. As we will verify, in practice, PAC methods issue estimates with better precision than the input theoretical $(\epsilon,\delta)$ guarantees. \begin{table}[] \centering \caption{Methods used in computational experiments, and corresponding parameters.} \label{t:methods} {\scriptsize \begin{tabular}{|c|l|l|l|c|} \hline \multicolumn{1}{|c|}{\textbf{Group}} & \multicolumn{1}{c|}{\textbf{Methods}} & \multicolumn{1}{c|}{\textbf{IDs}} & \multicolumn{1}{c|}{\textbf{Parameters}} & \multicolumn{1}{c|}{\textbf{Ref.}} \\ \hline i & BDD-based Network Reliability & HLL & n/a & \cite{Hardy2007,Herrmann2010} \\ \hline \multirow{4}{*}{ii} & Lomonosov's-Turnip & LT & $N=N_S$ & \cite{Gertsbakh2016} \\\cline{2-5} & Sequential Splitting Monte Carlo & ST & $B=100$, $N=N_S/B$ & \cite{Vaisman2016} \\\cline{2-5} & Generalized Splitting & GS & $s=2, N_0 =10^3, N=N_S$ & \cite{Botev2012} \\\cline{2-5} & Recursive Variance Reduction & RVR & $N=N_S/\binom{|\mathcal{K}|}{2}$ & \cite{Cancela2014} \\\hline \multirow{3}{*}{iii} & Karger's 2-step Algorithm & K2Simple & $\epsilon,\delta$ & \cite{Karger2016} \\ \cline{2-5} & Optimal Monte Carlo Simulation & {\gbas, \aaa} & $\epsilon,\delta$ &\cite{Dagum2000, Huber2017} \\\cline{2-5} & Counting-based Network Unreliability & {\relnet} & $\epsilon,\delta$ & This paper \\ \hline \end{tabular} } \end{table} To the best of our knowledge, the methods in Table~\ref{t:methods} are among the best in their categories, as evidenced in the literature. We implemented all methods in a Python prototype for uniform comparability and ran all experiments on the same machine---a 3.60GHz quad-core Intel i7-4790 processor with 32GB of main memory---with each experiment run on a single core. \subsection{Estimator Performance Measures} We use the following empirical measures to assess the performance of reliability estimation methods. Let $\hat{u}$ be an approximation of $u$. 
We measure the \textit{observed multiplicative error} {\epso} as $(\hat{u}-u)/u$ if $\hat{u}>u$, and $(\hat{u}-u)/\hat{u}$ otherwise. Also, for a fixed PAC method, target relative error $\epsilon$, and independent measures $\epsof^{(1)},\dots,\epsof^{(M)}$, we compute the \textit{observed confidence} parameter {\deltao} as $1/M\cdot\sum_{i=1}^{M}\mathbbm{1}(|\epsof^{(i)}|\geq\epsilon)$. Satisfaction of ($\epsilon$, $\delta$) is guaranteed, but {\epso} and {\deltao} can expose theoretical guarantees that are too conservative. Furthermore, for guarantee-less sampling methods we measure {\epso} but not {\deltao}, as these do not support confidence a priori. Thus, we use empirical measures of variance reduction to assess the desirability of sampling techniques over the canonical method (CMC). Let $\sigma^2_{Y^{CMC}}=\mu_Y\cdot(1-\mu_{Y})/N$ be the variance associated with CMC, and let $\sigma_{Y^A}^2$ be the sample variance associated with method $A$. Clearly, $\sigma^2_{Y^{CMC}}/\sigma^2_{Y^{A}}>1$ will favor $A$ over CMC. However, this is not the only important consideration in practice. For respective CPU times $\tau_{Y^\text{CMC}}$ and $\tau_{Y^A}$, a ratio $\tau_{Y^{CMC}}/\tau_{Y^{A}}<1$ would imply a higher computational cost for $A$. To account for both variance and CPU time, we use the \textit{efficiency ratio}, defined as $\erf(Y^A) = \big(\sigma^2_{Y^{CMC}}/\sigma^2_{Y^{A}}\big)\cdot \big(\tau_{Y^{CMC}}/\tau_{Y^{A}}\big)$~\cite{Fishman1986c}. In practice, when $\erf(Y^A)<1$, one prefers the more straightforward CMC. A similar measure in the literature is the \textit{work normalized relative variance}~\cite{Botev2012}, defined as $\text{wnrv}(Y)=\tau_{Y}\sigma^2_Y/\mu_Y^2$, which is related to the efficiency ratio via $\erf(Y^A)=\text{wnrv}(Y^\textit{CMC})/\text{wnrv}(Y^A)$. 
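These measures are straightforward to compute from experiment logs; a small sketch (function names ours):

```python
def eps_obs(u_hat, u):
    """Observed multiplicative error, per the definition above."""
    return (u_hat - u) / u if u_hat > u else (u_hat - u) / u_hat

def delta_obs(errors, eps):
    """Observed confidence: fraction of runs with |eps_o| >= eps."""
    return sum(abs(e) >= eps for e in errors) / len(errors)

def er(var_cmc, var_a, tau_cmc, tau_a):
    """Efficiency ratio er(Y^A); values below 1 favor plain CMC."""
    return (var_cmc / var_a) * (tau_cmc / tau_a)

print(eps_obs(0.45, 0.375))        # approx 0.2: estimate 20% above truth
print(delta_obs([0.05, 0.25, -0.1], 0.2))
print(er(1e-6, 1e-8, 1.0, 5.0))    # approx 20: 100x variance cut, 5x CPU cost
```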
We prefer $\erf(Y^A)$ over $\text{wnrv}(Y^A)$ as it is, ipso facto, a measure of the adequacy of $A$ over CMC, informing users on whether they need to implement a more sophisticated method than CMC.\footnote{The ratio $\sigma^2_{Y^{\textit{CMC}}}/\sigma^2_{Y^{A}}$ in the {$\erf$} is also the ratio of the relative variances of $Y^{\textit{CMC}}$ and $Y^{A}$, shedding light on how many times larger (or smaller) the sample size associated with CMC needs to be with respect to $A$, from Theorem~\ref{theom:markov} (Appendix).} The next subsections introduce the benchmarks we use and discuss the results. In our benchmarks we consider sparse networks, i.e. $|E|=O(|V|)$, which resemble engineered systems. \subsection{Rectangular Grid Networks} We consider $N$$\times$$N$ square grids (Figure~\ref{fg:gridgraph}) because they are irreducible (via series-parallel reductions) for $N>2$, their treewidth is exactly $N$, and they can be grown arbitrarily large until exact methods fail to return an estimate. Also, failure probabilities can be varied to challenge simulation methods. Our goal is to increase $N$ and vary failure probabilities uniformly to verify running time, scalability, and quality of approximation. We evaluate performance until methods fail to give a desirable answer. In particular, we consider values of $N$ in the range $2$ to $100$. Also, we assume all edges fail with probability $2^{-i}$, for $i\in\{1,3,\dots,15\}$. Furthermore, we consider extreme cases of $\mathcal{K}$~(Figure~\ref{fg:gridgraph}), namely, all-terminal and two-terminal reliability, and a $\mathcal{K}$-terminal case with terminal nodes distributed in a checkerboard pattern. 
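A generator for these grid benchmarks can be sketched as follows (a hypothetical helper, not our experimental code; the checkerboard rule is one way to realize the pattern of Figure~\ref{fg:gridgraph}):

```python
from itertools import product

def grid_instance(N, pattern):
    """Build the N x N grid with one of three terminal sets:
    'all' (all-terminal), 'two' (opposite corners), or 'checker'."""
    nodes = list(product(range(N), range(N)))
    edges = []
    for x, y in nodes:
        if x + 1 < N:
            edges.append(((x, y), (x + 1, y)))   # horizontal edge
        if y + 1 < N:
            edges.append(((x, y), (x, y + 1)))   # vertical edge
    if pattern == "all":
        K = set(nodes)
    elif pattern == "two":
        K = {(0, 0), (N - 1, N - 1)}             # opposite corners
    else:
        K = {(x, y) for x, y in nodes if (x + y) % 2 == 0}  # checkerboard
    return nodes, edges, K

nodes, edges, K = grid_instance(4, "two")
print(len(nodes), len(edges), len(K))  # 16 24 2: N^2 nodes, 2N(N-1) edges
```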
\begin{figure} \centering \pgfmathtruncatemacro{\n}{3} \pgfmathtruncatemacro{\ni}{\n-1} \pgfmathtruncatemacro{\nsize}{1} \pgfmathtruncatemacro{\uselabel}{0} \pgfmathsetmacro{\mult}{0.7} \ifthenelse{\isodd{\ni}}{\pgfmathtruncatemacro{\aux}{1}}{\pgfmathtruncatemacro{\aux}{0}} \footnotesize \begingroup \captionsetup[subfigure]{width=1.1in} \subfloat[All-Terminal] { \begin{tikzpicture}[darkstyle/.style={circle,draw,fill=gray!65,minimum size=\nsize},clearstyle/.style={circle,draw,fill=gray!10,minimum size=\nsize}] \foreach \x in {0,...,\n} \foreach \y in {0,...,\n} {\pgfmathtruncatemacro{\label}{\x + (\n+1) * (\y) + 1} \node [darkstyle] (\x\y) at (\mult*\x,\mult*\y) {\ifthenelse{\equal{\uselabel}{1}}{\label}{}};} \foreach \x in {0,...,\n} \foreach \y [count=\yi] in {0,...,\ni} \draw (\x\y)--(\x\yi) (\y\x)--(\yi\x); \end{tikzpicture} } \qquad\qquad \subfloat[Two-Terminal] { \begin{tikzpicture}[darkstyle/.style={circle,draw,fill=gray!65,minimum size=\nsize},clearstyle/.style={circle,draw,fill=gray!10,minimum size=\nsize}] \foreach \x in {0,...,\n} { \foreach \y in {0,...,\n} { \pgfmathtruncatemacro{\test}{\x + (\n+1) * (\y) + 1 } \pgfmathtruncatemacro{\nn}{(\n+1)^2} \pgfmathtruncatemacro{\label}{\x + (\n+1) * (\y) + 1} \ifthenelse {\equal{1}{\test} \OR \equal{\nn}{\test}} {\node [darkstyle] (\x\y) at (\mult*\x,\mult*\y) {\ifthenelse{\equal{\uselabel}{1}}{\label}{}};} {\node [clearstyle] (\x\y) at (\mult*\x,\mult*\y) {\ifthenelse{\equal{\uselabel}{1}}{\label}{}};} } } \foreach \x in {0,...,\n} \foreach \y [count=\yi] in {0,...,\ni} \draw (\x\y)--(\x\yi) (\y\x)--(\yi\x); \end{tikzpicture} } \qquad\qquad \subfloat[$\mathcal{K}$-Terminal\label{sfig:check}] { \begin{tikzpicture}[darkstyle/.style={circle,draw,fill=gray!65,minimum size=\nsize},clearstyle/.style={circle,draw,fill=gray!10,minimum size=\nsize}] \foreach \x in {0,...,\n} { \foreach \y in {0,...,\n} { \pgfmathtruncatemacro{\test}{\x + (\n+\aux) * (\y) + 1 } \pgfmathtruncatemacro{\label}{\x + (\n+1) * (\y) + 1} 
\ifthenelse {\isodd{\test}} {\node [darkstyle] (\x\y) at (\mult*\x,\mult*\y) {\ifthenelse{\equal{\uselabel}{1}}{\label}{}};} {\node [clearstyle] (\x\y) at (\mult*\x,\mult*\y) {\ifthenelse{\equal{\uselabel}{1}}{\label}{}};} } } \foreach \x in {0,...,\n} \foreach \y [count=\yi] in {0,...,\ni} \draw (\x\y)--(\x\yi) (\y\x)--(\yi\x); \end{tikzpicture} } \endgroup \pgfmathparse{int(\n+1)} \caption{Example of an $N\times N$ grid graph, with $N = 4$. Darkened nodes belong to the terminal set $\mathcal{K}$.} \label{fg:gridgraph} \end{figure} \subsubsection{Exact calculations}\label{sec:exact} \noindent For reference, we obtained exact unreliability calculations using the BDD-based method by Hardy, Lucet, and Limnios~\cite{Hardy2007}, herein termed {\hll} after its authors. We computed {\unrel} for $N=2,\dots,10$ and all values of $p_e$. Figure~\ref{fg:bdd} shows a subset of the exact estimates (a-b) and the exponential scaling of running time (c). We also tried several other exact methods referenced in Section~3, but {\hll} was the only one that managed to estimate {\unrel} exactly for all $N\leq10$. However, {\hll} became prohibitively memory-intensive for $N>10$. Thus, if memory is the only concern, the state-space partition can be used instead to get anytime bounds on {\unrel} at the expense of a larger runtime, storing at most $O(|E|)$ vectors $X\in\{0,1\}^m$ simultaneously~\cite{Paredes2018}. Next, we use these exact estimates to compute {\epso} and {\erf} for guarantee-less simulation methods, and to compute {\epso} and {\deltao} for PAC methods.
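The observed error metrics just mentioned are straightforward to compute once exact values are available. A minimal sketch in our own notation, with `estimates` standing for replicated runs of a method against a known exact value:

```python
def eps_observed(estimate, exact):
    """Observed multiplicative error |estimate/exact - 1| of a single run."""
    return abs(estimate / exact - 1.0)

def delta_observed(estimates, exact, eps):
    """Fraction of replicated runs whose relative error is at least eps,
    i.e. the empirical counterpart of the PAC confidence parameter delta."""
    return sum(eps_observed(e, exact) >= eps for e in estimates) / len(estimates)

# Example: three replications of an estimator of an exact unreliability 0.10.
runs = [0.101, 0.13, 0.07]
assert eps_observed(0.13, 0.10) >= 0.2              # a single run off by 30%
assert abs(delta_observed(runs, 0.10, 0.2) - 2 / 3) < 1e-12
```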
\begin{figure} \centering \subfloat[{\allterminal}.]{\includegraphics[width=0.33\textwidth]{figures/grids/hll_grids_all_u}}\hfill \subfloat[{\twoterminal}.]{\includegraphics[width=0.33\textwidth]{figures/grids/hll_grids_two_u}}\hfill \subfloat[{all cases}.]{\includegraphics[width=0.33\textwidth]{figures/grids/hll_time}}\hfill \caption{(a-b) Exact estimates of {\unrel} and (c) CPU time using HLL for all-, two-, and {$\mathcal{K}$}-terminal cases.} \label{fg:bdd} \end{figure} \subsubsection{Guarantee-less simulation methods} \noindent Figure~\ref{fg:grids_eps} shows values of {\epso} for the {\twoterminal} case, setting $N_S=10^4$. Most values are below the $\epsof=0.2$ threshold. For RVR we observed values of {\epso} on the order of floating-point precision for the largest values of $i$. We attribute this to the small number of cuts with maximum probability (2-4 in our case): RVR finds them all in the decomposition process, which endows it with the VRE property in this case. Conversely, other methods do not rely as heavily on this small number of cuts. \begin{figure} \centering \subfloat[LT]{\includegraphics[width=0.25\textwidth]{figures/grids/lt_grids_two_epsilon}} \subfloat[ST]{\includegraphics[width=0.25\textwidth]{figures/grids/st_grids_two_epsilon}} \subfloat[GS]{\includegraphics[width=0.25\textwidth]{figures/grids/gs_grids_two_epsilon}} \subfloat[RVR]{\includegraphics[width=0.25\textwidth]{figures/grids/rvr_grids_two_epsilon}} \caption{Multiplicative error {\epso} for guarantee-less simulation methods in the {\twoterminal} case.} \label{fg:grids_eps} \end{figure} Moreover, the CPU time varied among methods as shown in Figure~\ref{fg:grids_time}. The only method whose single-sample computation is affected by the values of $i$ is GS, consistent with the expected number of levels, which scales as $\log(1/\unrelf)$.
However, matrix exponential operations for handling more cases of $i$ added overhead in LT and ST; the sharp time increase from $N=5$ to $N=6$ is due to this operation, consistent with findings by Botev et al.~\cite{Botev2012}. In contrast, RVR does not suffer from numerical issues and appears to verify the VRE property in this grid topology. \begin{figure} \centering \subfloat[LT]{\includegraphics[width=0.25\textwidth]{figures/grids/lt_grids_two_time}} \subfloat[ST]{\includegraphics[width=0.25\textwidth]{figures/grids/st_grids_two_time}} \subfloat[GS]{\includegraphics[width=0.25\textwidth]{figures/grids/gs_grids_two_time}} \subfloat[RVR]{\includegraphics[width=0.25\textwidth]{figures/grids/rvr_grids_two_time}} \caption{CPU time for guarantee-less simulation methods in the {\twoterminal} case.} \label{fg:grids_time} \end{figure} Also, to compare all methods in a uniform fashion we used the efficiency ratio (Figure~\ref{fg:grids_er}). Values of $\sigma_{Y^{\text{CMC}}}^2$ for computing the efficiency ratio are exact from HLL, and the CPU time $\tau_{Y^{\text{CMC}}}$ is based on $10^4$ samples. Estimates below the horizontal line are less reliable than those obtained with CMC for the same amount of CPU time. In particular, we note that for less rare failure probabilities ($2^{-7}\approx0.008$) some methods fail to improve over CMC. Missing values for RVR reflect improvements above $10^7$ in the efficiency ratio which, again, can be attributed to it meeting the VRE property in these benchmarks. Furthermore, the efficiency ratio of all simulation methods trends downward as $N$ grows. Thus, we can construct an arbitrarily large square grid for some $N$ that will, ceteris paribus, yield an efficiency ratio below 1 in favor of CMC. We attribute this to the time complexity of CMC samples in sparse graphs, which can be computed in $O(|V|)$ time, whereas other techniques run in $O(|V|^2)$ time or worse.
Thus, the larger the graph, the greater the per-sample cost of the more advanced techniques relative to CMC. \begin{figure} \centering \subfloat[LT]{\includegraphics[width=0.25\textwidth]{figures/grids/lt_grids_two_efficiency_ratio}} \subfloat[ST]{\includegraphics[width=0.25\textwidth]{figures/grids/st_grids_two_efficiency_ratio}} \subfloat[GS]{\includegraphics[width=0.25\textwidth]{figures/grids/gs_grids_two_efficiency_ratio}} \subfloat[RVR]{\includegraphics[width=0.25\textwidth]{figures/grids/rvr_grids_two_efficiency_ratio}} \caption{{\erf} for guarantee-less simulation methods in the {\twoterminal} case.} \label{fg:grids_er} \end{figure} \subsubsection{Probably approximately correct (PAC) methods} \noindent Next, we embedded simulation methods in {\aaa}, except CMC, which was run using {\gbas} because the latter is optimal for Bernoulli RVs such as $Y^\textit{CMC}$. Figure~\ref{fg:grids_paced} shows the runtime for methods embedded into {\aaa}. We were able to feasibly compute PAC-estimates for edge failure probabilities of $2^{-5}\approx0.03$ or larger. The approximation guarantees turned out to be rather conservative, with far better precision obtained in practice. Variance reduction through {\aaa} can only reduce the sample size by a factor of $O(1/\epsilon)$ with respect to the Bernoulli case (i.e. $N_{ZO}$), thus PAC-estimates with advanced simulation methods using {\aaa} seem to be confined to cases where ${\unrelf\geq 0.005}$ for the square grid benchmarks. However, conditioned on disruptive events such as natural disasters, in which failure probabilities are larger, {\aaa} can deliver practical PAC-estimates.
\begin{figure} \centering \subfloat[LT]{\includegraphics[width=0.25\textwidth]{figures/grids/lt_grids_all_time_aa}} \subfloat[ST]{\includegraphics[width=0.25\textwidth]{figures/grids/st_grids_all_time_aa}} \subfloat[GS]{\includegraphics[width=0.25\textwidth]{figures/grids/gs_grids_all_time_aa}} \subfloat[RVR]{\includegraphics[width=0.25\textwidth]{figures/grids/rvr_grids_all_time_aa}} \caption{CPU time for PAC-ized sampling methods via {\aaa} setting $\epsilon=\delta=0.2$ ({\allterminal}).} \label{fg:grids_paced} \end{figure} On the other hand, ${\gbasf}(Y^{CMC})$ turned out to be practical in more cases, and the analysis by Huber~\cite{Huber2017} seems to be tight, as evidenced by our estimates of {\deltao} (Figure~\ref{fg:cmc}, a-b). Yet, as expected, the running time is heavily penalized by a factor of $1/\unrelf$ in the expected sample size, as shown in Figure~\ref{fg:cmc}~(c). \begin{figure} \centering \subfloat[{\twoterminal}]{\includegraphics[width=0.33\textwidth]{figures/grids/grids_two_zopac_error}}\hfill \subfloat[{{\kterminal}}]{\includegraphics[width=0.33\textwidth]{figures/grids/grids_k_zopac_error}}\hfill \subfloat[{{\allterminal}}]{\includegraphics[width=0.33\textwidth]{figures/grids/grids_all_zopac_t}}\hfill \caption{ (a-b) Multiplicative error for $\gbasf(Y^{CMC})$ setting $\epsilon=\delta=0.2$, and (c) respective running time for various sizes.} \label{fg:cmc} \end{figure} Furthermore, thanks to our new developments, we used {\relnet} to approximate {\unrel} in all cases of $\mathcal{K}$. Figure~\ref{fg:grids_pac}(b) shows runtimes as well as $(\deltaof,\epsof)$ values for edge failure probabilities of $2^{-1},2^{-3},2^{-5}$. The weighted-to-unweighted transformation appears to be the current bottleneck, as it considerably increases the number of extra variables in $F_{\mathcal{K}}$. However, note that, unlike K2Simple, which is specialized for the all-terminal case, {\relnet} is readily applicable to any {\kterminal} problem instance.
Also, {\relnet} is the only method that, due to its dependence on an external Oracle, can exploit ongoing third-party developments, as constrained SAT and weighted model counting are very active areas of research.\footnote{See past and ongoing competitions: \href{https://www.satcompetition.org/}{https://www.satcompetition.org/}} Also, SAT-based methods are uniquely positioned to exploit breakthroughs in quantum hardware and support a possible quantum version of {\relnet}~\cite{Duenas-Osorio2017b}. \begin{figure} \centering \subfloat[K2Simple ($\epsilon=0.2$ and $\delta=0.05$)]{ \begin{minipage}[t][][t]{.50\textwidth} \centering \includegraphics[width=0.49\textwidth]{figures/grids/pac_zo_karger_grids_k_time} \includegraphics[width=0.49\textwidth]{figures/grids/pac_zo_karger_grids_k_epsilono} \end{minipage}% } \subfloat[{\relnet} ($\epsilon=0.8$ and $\delta=0.2$)]{ \begin{minipage}[t][][t]{.50\textwidth} \centering \includegraphics[width=0.49\textwidth]{figures/grids/pac_relnet_grids_k_time} \includegraphics[width=0.49\textwidth]{figures/grids/pac_relnet_grids_k_epsilono} \end{minipage} } \caption{CPU time and {\epso} values of K2Simple and {\relnet} for the {\kterminal} case.} \label{fg:grids_pac} \end{figure} Furthermore, our experimental results suggest that the analysis of both K2Simple and {\relnet} is not tight. This is observed in the values of $(\epsof,\deltaof)$, which are far better than the theoretical input guarantees, and calls for further refinement of their theoretical analysis. Conversely, GBAS delivers practical guarantees much closer to the theoretical ones, as demonstrated in Figures~\ref{fg:cmc} and \ref{fg:grids_gbas}, where the target error is occasionally exceeded while still satisfying the target confidence overall.
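As a concrete illustration of why {\gbas} is attractive for Bernoulli outputs, the scheme (as we read Huber~\cite{Huber2017}) can be sketched as follows. The choice of $k$ from $(\epsilon,\delta)$ follows the reference and is omitted here, so the values of $k$ below are arbitrary; the sampler passed in is a stand-in for one CMC trial.

```python
import random

def gbas(sample_bernoulli, k):
    """Gamma Bernoulli Approximation Scheme (after Huber, 2017): draw
    Bernoulli samples until k successes are seen, accumulating one Exp(1)
    variate per draw; (k - 1) / R is then an unbiased estimate of the
    success probability whose relative-error distribution is independent
    of the unknown probability p."""
    successes, R = 0, 0.0
    while successes < k:
        successes += sample_bernoulli()
        R += random.expovariate(1.0)
    return (k - 1) / R

random.seed(0)
p = 2.0 ** -3                          # e.g. edge failure probability 1/8
est = gbas(lambda: random.random() < p, k=2000)
assert abs(est / p - 1.0) < 0.15       # relative sd here is about 1/sqrt(k)
```

Note the expected number of Bernoulli draws is $k/p$, which is the $1/\unrelf$ penalty discussed above.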
\begin{figure} \centering \subfloat[All-terminal]{\includegraphics[width=0.25\textwidth]{figures/grids/pac_gbas_cmc_grids_all_epsilono}} \subfloat[Two-terminal]{\includegraphics[width=0.25\textwidth]{figures/grids/pac_gbas_cmc_grids_two_epsilono}} \subfloat[$\mathcal{K}$-terminal]{\includegraphics[width=0.25\textwidth]{figures/grids/pac_gbas_cmc_grids_k_epsilono}} \subfloat[Two-terminal]{\includegraphics[width=0.25\textwidth]{figures/grids/pac_gbas_cmc_grids_two_time}} \caption{{\epso} values and CPU time for ${\gbasf(Y^{CMC})}$ setting $(\epsilon,\delta)=(0.2,0.05)$.} \label{fg:grids_gbas} \end{figure} The square grids gave us insight into the relative performance of reliability estimation methods. Next, we use a dataset of power transmission networks to test the methods on instances with engineered system topologies. \subsection{U.S. Power Transmission Networks} We consider a dataset with 58 power transmission networks in cities across the U.S. A summary discussion of their structural graph properties can be found elsewhere~\cite{Li201684}. Here, we considered the two-terminal reliability problem. To test the robustness of methods, for each instance {\instance} we treated every possible $s,t\in V$ pair as a different experiment, totaling $\binom{n}{2}$ experiments per network instance, where $n=|V|$. We used a single edge failure probability of $p_e=2^{-3}=0.125$ across experiments to keep the overall computation time practical. Using {\hll} and preprocessing of the networks, we were able to get exact estimates for some of the experiments. We used these to measure the observed multiplicative error {\epso} when possible. Computational times are reported for all experiments, even if the multiplicative error is unknown.
\begin{figure} \centering \subfloat[Multiplicative error {\epso}]{\includegraphics[width=0.5\textwidth]{figures/pt/pt_cmc_eps}}\hfill \subfloat[Running time (seconds)]{\includegraphics[width=0.5\textwidth]{figures/pt/pt_cmc_time}}\hfill \caption{Two-terminal reliability approximations using {\gbas} setting $\epsilon=\delta=0.2$.} \label{fg:pt_cmc} \end{figure} Figure~\ref{fg:pt_cmc} shows PAC-estimates using {\gbas}. As expected, the variation in CPU time was proportional to $1/{\unrelf}$. Furthermore, we used {\relnet} to obtain PAC-estimates and observed consistent values of the multiplicative error (Figure~\ref{fg:pt_relnet}). In some instances, however, {\relnet} failed to return an estimate before the timeout. We also tested simulation methods setting $N_S=10^3$. Despite the lack of guarantees, they performed well in terms of {\epso} and CPU time~(Figures~\ref{fg:pt_sim_epsilon}-\ref{fg:pt_sim_time}, first 5 benchmarks for brevity). However, the efficiency ratio decreases as instance size grows.
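The work-normalized comparison underlying the efficiency ratio can be sketched as below, assuming the usual definition $\erf=(\sigma^2_{\textit{CMC}}\,\tau_{\textit{CMC}})/(\sigma^2_{A}\,\tau_{A})$; the numbers for the hypothetical method $A$ are illustrative only.

```python
def efficiency_ratio(var_cmc, time_cmc, var_a, time_a):
    """Work-normalized efficiency of method A versus crude Monte Carlo:
    values above 1 mean A beats CMC for an equal CPU budget."""
    return (var_cmc * time_cmc) / (var_a * time_a)

# A Bernoulli CMC sample on an instance with unreliability u has variance u*(1-u).
u = 2.0 ** -7
var_cmc = u * (1.0 - u)
# Hypothetical method A: 100x smaller variance, 20x more CPU per sample.
assert abs(efficiency_ratio(var_cmc, 1.0, var_cmc / 100.0, 20.0) - 5.0) < 1e-9
# If A's per-sample cost grows with instance size while its variance gain
# stays fixed, the ratio eventually drops below 1, favoring plain CMC.
assert efficiency_ratio(var_cmc, 1.0, var_cmc / 100.0, 200.0) < 1.0
```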
\begin{figure} \centering \subfloat[Multiplicative error {\epso}]{\includegraphics[width=0.5\textwidth]{figures/pt/pt_relnet_eps}}\hfill \subfloat[Running time (seconds)]{\includegraphics[width=0.5\textwidth]{figures/pt/pt_relnet_time}}\hfill \caption{Two-terminal reliability approximations using {\relnet} with $(\epsilon,\delta)=(0.8, 0.2)$.} \label{fg:pt_relnet} \end{figure} \begin{figure} \centering \subfloat[LT]{\includegraphics[width=0.25\textwidth]{figures/pt_st/epsilon_lt_stable}}\hfill \subfloat[ST]{\includegraphics[width=0.25\textwidth]{figures/pt_st/epsilon_st_stable}}\hfill \subfloat[GS]{\includegraphics[width=0.25\textwidth]{figures/pt_st/epsilon_gs}}\hfill \subfloat[RVR]{\includegraphics[width=0.25\textwidth]{figures/pt_st/epsilon_rvr}}\hfill \caption{Multiplicative error for simulation methods setting $N_S=10^3$.} \label{fg:pt_sim_epsilon} \end{figure} \begin{figure} \centering \subfloat[LT]{\includegraphics[width=0.25\textwidth]{figures/pt_st/time_lt_stable}}\hfill \subfloat[ST]{\includegraphics[width=0.25\textwidth]{figures/pt_st/time_st_stable}}\hfill \subfloat[GS]{\includegraphics[width=0.25\textwidth]{figures/pt_st/time_gs}}\hfill \subfloat[RVR]{\includegraphics[width=0.25\textwidth]{figures/pt_st/time_rvr}}\hfill \caption{Running time for simulation methods setting $N_S=10^3$.} \label{fg:pt_sim_time} \end{figure} \subsection{Analysis of results and outlook} Exact methods are advantageous when a topological property is known to be bounded. {\hll} proved useful not only for medium-sized grids (up to $10\times 10$), but was also instrumental in computing exact estimates for many of the preprocessed power transmission networks. Our research shows that methods exploiting bounded properties, together with practical upper bounds, deliver competitive exact calculations for many engineered systems. In power transmission networks, {\hll} was able to exploit their relatively small treewidth. Among guarantee-less sampling methods, there are multiple paths for improvement.
In the case of the LT and ST methods, even though the matrix exponential offers a reliable approach to compute the convolution of exponential random variables, its numerically stable computation represents the main bottleneck of the algorithms and, in many cases, it is not needed. Thus, future research could devise ways to diagnose these issues and fall back on the matrix exponential only when needed, use approximate integration (as in Gertsbakh et al.~\cite{Gertsbakh2015}), or use a more arithmetically robust algorithm (e.g. round-off-resistant algorithms for the sample variance~\cite{Chan1983}). Moreover, GS was competitive, but its requirement to run a preliminary experiment with an arbitrary number of trajectories $N_0$ to define intermediate levels, without formal guidance on its value, can represent a practical barrier when the order of magnitude of {\unrel} is unknown. Future research could devise splitting mechanisms that use all samples towards the final experiments while retaining unbiasedness. Finally, RVR was very competitive; however, we noted that (i) the number of terminals adds considerable overhead in the number of calls to the minimum cut algorithm, and (ii) its performance is tied to the number of maximum-probability cuts, because larger cuts do not contribute meaningfully towards computing {\unrel}. Future work could use Karger's minimum cut approximation~\cite{Karger1996} and an adaptive truncation of the recursion in the RVR estimator to address (i) and (ii), respectively. We are currently investigating this issue and recognize the RVR estimator as a special, yet randomized, case of state-space-partition algorithms~\cite{Paredes2018}. Among PAC methods, we found {\gbas} to be tight in its theoretical analysis and competitive in practice.
Outside the extremely rare-event regime, we contend that the usage of PAC algorithms such as GBAS would benefit the reliability and system safety community, as they give exact confidence intervals without the need for asymptotic assumptions or arbitrary choices of the number of samples and replications. Karger's newly suggested algorithms demonstrated practical performance even in the rare-event regime, yet it appears that their theoretical guarantees are still too conservative. Equipping K2Simple with {\gbas} at the first recursion level would instantly yield a faster algorithm for non-small failure probabilities. However, the challenge of proving tighter bounds on the relative variance for the case of small failure probabilities remains. The same argument about overly conservative theoretical guarantees extends to RelNet, which cannot be set too tight in practice. But we expect RelNet to gain additional competitiveness as orthogonal advances in approximate weighted model counting continue to accrue. RelNet remains competitive in the non-rare-event regime, delivering rigorous PAC-guarantees for the {\kterminal} problem. Also, its SAT-based formulation makes it uniquely suitable for quantum algorithmic developments, at a time when major technology companies, such as IBM, Google, and Intel, are increasing their investment in quantum hardware~\cite{preskill2018quantum}. \section{Introduction} \noindent Modern societies rely on physical and technological networks such as transportation, power, water, and telecommunication systems. Quantifying their reliability is imperative in design, operation, and resilience enhancement. Typically, networks are modeled using a graph where vertices and edges represent unreliable components. Network reliability problems ask: what is the probability that a complex system with unreliable components will work as intended under prescribed functionality conditions? In this paper, we focus on the {\kterminal} problem~\cite{Ball1986}.
In particular, we consider an undirected graph $G=(V,E,\mathcal{K})$, where $V$ is the set of vertices, $E\subseteq V\times V$ is the set of edges, and $\mathcal{K}\subseteq V$ is the set of terminals. We let $G(P)$ be a stochastic graph, where every edge $e\in E$ vanishes from $G$ with respective probabilities $P=(p_e)_{e\in E}$. We assume a binary system and say {\real} is \textit{unsafe} if a subset of vertices in $\mathcal{K}$ becomes disconnected, and \textit{safe} otherwise. Thus, given instance {\instance} of the {\kterminal} problem, we are interested in computing the unreliability of $G(P)$, denoted {\unrel} and defined as the probability that $G(P)$ is unsafe. If $|\Theta|$ is the cardinality of set $\Theta$, then $n=|V|$ and $m=|E|$ are the number of vertices and edges, respectively. Also, when $|\mathcal{K}|=n$ and $|\mathcal{K}|=2$, the {\kterminal} problem reduces to the all-terminal and two-terminal reliability problems, respectively. These are well-known problems, proven to be \#P-complete~\cite{Ball1986,Valiant1979}. The more general {\kterminal} problem is \#P-hard, so ongoing efforts to compute {\unrel} focus on practical bounds and approximations. Exact and bounding methods are limited to networks of small size, or with bounded graph properties such as treewidth and diameter~\cite{Hardy2007, Canale2016}. Thus, for large $G$ of general structure, researchers and practitioners lean on simulation-based estimates with acceptable Monte Carlo error~\cite{Fishman1986}. However, in the absence of an error prescription, simulation applications can use unjustified sample sizes and lack a priori rigor on the quality of the estimates, thus becoming \textit{guarantee-less} methods. A formal approach to guarantee quality in Monte Carlo applications relies on the so-called $(\epsilon,\delta)$ \textit{approximations}, where $\epsilon$ and $\delta$ are user-specified parameters regarding the relative error and confidence, respectively.
As an illustration, let $Y$ be a random variable (RV) whose expected value $E[Y]=\mu_Y$ we wish to compute. Then, after \textit{we specify} parameters $\epsilon,\delta\in(0,1)$, an $(\epsilon,\delta)$ approximation returns an estimate $\overline{\mu}_Y$ such that $\Pr(|\overline{\mu}_Y/\mu_Y-1|\geq \epsilon)\leq \delta$. In other words, an $(\epsilon,\delta)$ \textit{approximation} returns an estimate with relative error below $\epsilon$ with confidence at least $1-\delta$. We term \textit{Probably Approximately Correct} (PAC) the family of methods whose algorithmic procedures deliver estimates with $(\epsilon,\delta)$ guarantees.\footnote{We borrow the PAC terminology from the field of artificial intelligence~\cite{Valiant1984}.} \input{pbp_sample_size} \input{pbp_fpras} To tackle computational and precision issues, this paper develops {\relnet}, a counting-based PAC method for network unreliability that inherits properties of state-of-the-art approximate model counters in the field of computational logic~\cite{Meel2017}. Our approach delivers rigorous $(\epsilon,\delta)$ guarantees and is \textit{efficient} when given access to an NP-oracle: a black box that solves nondeterministic polynomial-time decision problems. The use of NP-oracles for randomized approximations, first proposed by Stockmeyer~\cite{Stockmeyer1983}, is increasingly within reach, as in practice we can leverage efficient solvers for Boolean satisfiability (SAT) that are under active development. Given the variety of methods to compute {\unrel}, we showcase our developments against alternative approaches. In the process, we highlight methodological connections missed in the engineering reliability literature and key theoretical properties of our method, and unveil practical performance through fair computational experiments using existing and our own benchmarks. The rest of the manuscript is structured as follows.
Section~2 gives background on network reliability evaluation and its $(\epsilon,\delta)$ approximation, as well as the necessary background on Boolean logic, before introducing our new counting-based approach: {\relnet}, an efficient PAC method for the {\kterminal} problem. Section~3 contextualizes our contribution relative to other techniques for network reliability evaluation. We highlight key properties for users and draw important connections in the literature. Section~4 presents the main results of our computational evaluation. Section~5 rounds out this study with conclusions and promising research directions.
\section{Primal-dual Distributed Incremental Aggregated Gradient Method}\vspace{-.1cm}\label{sec:iag} We are ready to introduce our algorithm for solving the optimization problem in \eqref{eq:opt_pd}. Since $\prm$ is shared by all $N$ agents, the agents need to exchange information so as to reach a consensual solution. Let us first specify the communication model. We assume that the $N$ agents communicate over a network specified by a connected and undirected graph $G = (V,E)$, with $V = [N] = \{1,...,N\}$ and $E \subseteq V \times V$ being its vertex set and edge set, respectively. Over $G$, it is possible to define a doubly stochastic matrix ${\bm W}$ such that $W_{ij} = 0$ if $(i,j) \notin E$ and ${\bm W} {\bf 1} = {\bm W}^\top {\bf 1} = {\bf 1}$; note that $\lambda \eqdef \lambda_{\sf max} ( {\bm W} - N^{-1} {\bf 1} {\bf 1} ^\top ) < 1$ since $G$ is connected. Notice that the edges in $G$ may be formed independently of the coupling between agents in the MDP induced by the stochastic policy ${\bm \pi}$. We handle problem \eqref{eq:opt_pd} by judiciously combining the techniques of \emph{dynamic consensus} \citep{qu2017harnessing,zhu2010discrete} and \emph{stochastic (or incremental) average gradient} (SAG) \citep{gurbuzbalaban2017convergence,schmidt2017minimizing}, which were developed independently in the control and machine learning communities, respectively. From a high-level viewpoint, our method utilizes a gradient estimator that tracks the gradient over \emph{space} (across $N$ agents) and \emph{time} (across $M$ samples). To proceed with our development while explaining the intuitions, we first investigate a centralized and batch algorithm for solving \eqref{eq:opt_pd}. \textbf{Centralized Primal-dual Optimization}~~ Consider the primal-dual gradient updates.
For any $t\geq 1$, at the $t$-th iteration, we update the primal and dual variables by \beq \label{eq:fgd} \prm^{t+1} = \prm^t - \gamma_1 \grd_{\prm} J ( \prm^t, \{ {\bm w}_i^t \}_{i=1}^N ), \qquad {\bm w}_i^{t+1} = {\bm w}_i^t + \gamma_2 \grd_{{\bm w}_i} J( \prm^t, \{ {\bm w}_i^t \}_{i=1}^N ),~i \in [N] \eqs, \eeq where $\gamma_1, \gamma_2 > 0$ are step sizes; this is a simple application of a gradient descent/ascent update to the primal/dual variables. As shown by \citet{du2017stochastic}, when $\hat {\bm A}$ is full rank and $\hat {\bm C}$ is invertible, the Jacobian matrix of the primal-dual optimality condition is full rank. Thus, within a certain range of step sizes $(\gamma_1, \gamma_2)$, recursion \eqref{eq:fgd} converges linearly to the optimal solution of \eqref{eq:opt_pd}. \textbf{Proposed Method}~~ The primal-dual gradient method in \eqref{eq:fgd} serves as a reasonable template for developing an efficient decentralized algorithm for \eqref{eq:opt_pd}. Let us focus on the update of the primal variable $\prm$ in \eqref{eq:fgd}, which is the more challenging part since $\prm$ is shared by all $N$ agents. To evaluate the gradient \wrt $\prm$, we observe that (a) agent $i$ does not have access to the functions, $ \{ J_{j,p} (\cdot), j \neq i \} $, of the other agents; and (b) computing the gradient requires summing up the contributions from $M$ samples. Since $M \gg 1$, doing so is undesirable, as the computational complexity would be ${\cal O}(Md)$. We circumvent the above issues by utilizing a \emph{double gradient tracking} scheme for the primal $\prm$-update and an incremental update scheme for the local dual ${\bm w}_i$-update in the following primal-dual distributed incremental aggregated gradient ({\sf PD-DistIAG}) method. Here each agent $i \in [N]$ maintains a local copy of the primal parameter $\{ \prm _i^t \} _{t\geq 1}$.
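To make the centralized recursion \eqref{eq:fgd} concrete, here is a scalar sketch on the toy saddle-point objective $J(\theta,w)=w(a\theta-b)-\tfrac{c}{2}w^2$ (our own stand-in, not the paper's $J$): $\theta$ descends, $w$ ascends, and for these step sizes the iterates converge linearly to the saddle point $(b/a,\,0)$.

```python
# Gradient descent/ascent on J(theta, w) = w*(a*theta - b) - c*w**2/2.
a, b, c = 1.0, 1.0, 1.0
g1, g2 = 0.5, 0.5                      # primal and dual step sizes
theta, w = 0.0, 0.0
for _ in range(200):
    grad_theta = a * w                 # dJ/dtheta
    grad_w = a * theta - b - c * w     # dJ/dw
    # simultaneous update, both gradients evaluated at the t-th iterates
    theta, w = theta - g1 * grad_theta, w + g2 * grad_w
# The error dynamics here have spectral radius sqrt(3)/2 < 1, so the
# iterates contract linearly to the saddle point (b/a, 0).
assert abs(theta - b / a) < 1e-8 and abs(w) < 1e-8
```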
We construct sequences $\{ {\bm s}_i^t \}_{t\geq 1} $ and $\{ {\bm d}_i^t \}_{t\geq 1}$ to track the gradients with respect to $\prm$ and ${\bm w}_i$, respectively. Similar to \eqref{eq:fgd}, in the $t$-th iteration, we update the dual variable via gradient update using ${\bm d}_i^t$. As for the primal variable, to achieve consensus, each $\prm_i^{t+1} $ is obtained by first combining $\{ \prm_i^t \} _{i \in [N]}$ using the weight matrix ${\bm W}$, and then update in the direction of ${\bm s}_i^t$. The details of our method are presented in Algorithm \ref{algo:main}. \begin{algorithm}[t] \caption{\textbf{{\sf PD-DistIAG} Method} for Multi-agent, Primal-dual, Finite-sum Optimization} \label{algo:main} \begin{algorithmic} \STATE{{ \bf Input}: Initial estimators $\{ \prm_i^1, {\bm w}_i^1 \}_{i \in [N]}$, initial gradient estimators ${\bm s}_i^0 = {\bm d}_i^0 = {\bf 0}$, $\forall~i \in [N]$, initial counter $\tau_p^0 = 0$, $\forall~p \in [M]$, and stepsizes $\gamma_1, \gamma _2 > 0$. } \FOR {$t\geq 1$} \STATE{The agents pick a common sample indexed by $p_t \in \{1,...,M\}$.} \STATE{Update the counter variable as:\vspace{-.3cm} \beq \label{eq:tau_upd} \tau_{p_t}^t = t,~~\tau_p^t = \tau_p^{t-1},~\forall~p \neq p_t \eqs.\vspace{-.4cm} \eeq} \FOR{each agent $i \in \{ 1, \ldots, N\} $} \STATE{Update the gradient surrogates by\vspace{-.2cm} \begin{align} {\bm s}_i^t & \textstyle = \sum_{j=1}^N W_{ij} {\bm s}_j^{t-1} + \frac{1}{M} \Big[ \grd_{\prm} J_{i,p_t} ( \prm_i^t, {\bm w}_i^t ) - \grd_{\prm} J_{i,p_t} ( \prm_i^{\tau_{p_t}^{t-1}}, {\bm w}_i^{\tau_{p_t}^{t-1}} ) \Big] \eqs, \label{eq:s_upd}\\ {\bm d}_i^t & \textstyle = {\bm d}_i^{t-1} + \frac{1}{M} \Big[ \grd_{\bm w_i} J_{i,p_t} ( \prm_i^t, {\bm w}_i^t ) - \grd_{\bm w_i} J_{i,p_t} ( \prm_i^{\tau_{p_t}^{t-1}}, {\bm w}_i^{\tau_{p_t}^{t-1}} ) \Big] \eqs, \label{eq:d_upd} \end{align} where $\grd_{\prm} J_{i,p} ( \prm_i^{0}, {\bm w}_i^{0} ) = {\bm 0} $ and $\grd_{{\bm w}_i} J_{i,p} ( \prm_i^{0}, {\bm w}_i^{0} ) = {\bm 0} $ for 
all $p \in [M]$ for initialization.}\vspace{.1cm} \STATE{Perform primal-dual updates using ${\bm s}_i^t, {\bm d}_i^t$ as surrogates for the gradients \wrt $\prm$ and ${\bm w}_i$: \beq\textstyle \label{eq:pd_alg} \prm_i^{t+1} = \sum_{j=1}^N W_{ij} \prm_j^t - \gamma_1 {\bm s}_i^t,~~ {\bm w}_i^{t+1} = {\bm w}_i^t + \gamma_2 {\bm d}_i^t \eqs.\vspace{-.1cm} \eeq } \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Let us explain the intuition behind the {\sf PD-DistIAG} method by studying the update \eqref{eq:s_upd}. Recall that the global gradient desired at iteration $t$ is given by $\grd_{\prm} J ( \prm^t, \{ {\bm w}_i^t \}_{i=1}^N )$, which represents a double average -- one over space (across agents) and one over time (across samples). Now in the case of \eqref{eq:s_upd}, the first summand on the right hand side computes a local average among the neighbors of agent $i$, thereby tracking the global gradient over \emph{space}. This is in fact akin to the \emph{gradient tracking} technique in the context of distributed optimization \citep{qu2017harnessing}. The remaining terms on the right hand side of \eqref{eq:s_upd} utilize an incremental update rule akin to the SAG method \citep{schmidt2017minimizing}, involving a swap-in swap-out operation for the gradients. This achieves tracking of the global gradient over \emph{time}. To gain insight into why the scheme works, we note that ${\bm s}_{i}^{t}$ and ${\bm d}^t_{i}$ serve as surrogates for the primal and dual gradients. Moreover, for the counter variable, using \eqref{eq:tau_upd} we can alternatively represent it as $\tau_p^t = \max \{ \ell \geq 0 \!~:\!~ \ell \leq t,~ p_\ell = p \}$. In other words, $\tau_p^t$ is the iteration index at which the $p$-th sample was last visited by the agents at or before iteration $t$, and if the $p$-th sample has never been visited, we have $\tau_p^t = 0$. For any $t\geq 1$, define ${\bm g}_{\prm}(t) \eqdef (1/N)\sum_{i=1}^N {\bm s}_i^t$.
The following lemma shows that ${\bm g}_{\prm}(t)$ is a double average of the primal gradient -- it averages over the local gradients across the agents, and, for each local gradient, it also averages over the past gradients of all the samples evaluated up to iteration $t$. This shows that the network-wide average of $\{ {\bm s}_i^t \}_{i=1}^N$ always tracks the double average of the local and past gradients, \ie the gradient estimate ${\bm g}_{\prm}(t)$ is `unbiased' with respect to the network-wide average. \begin{Lemma} \label{lem:avg} For all $t \geq 1$ in Algorithm~\ref{algo:main}, it holds that\vspace{-.1cm} \beq \label{eq:track} \textstyle {\bm g}_{\prm}(t) = \frac{1}{NM} \sum_{i=1}^N \sum_{p=1}^M \grd_{\prm} J_{i,p} ( \prm_i^{\tau_p^t}, {\bm w}_i^{\tau_p^t} ) \eqs.\vspace{-.2cm} \eeq \end{Lemma} \textbf{Proof.} We shall prove the statement using induction. For the base case with $t = 1$, using \eqref{eq:s_upd} and the update rule specified in the algorithm, we have\vspace{-.2cm} \beq \label{eq:base_case} {\bm g}_{\prm}(1) = \frac{1}{N} \sum_{i=1}^N \frac{1}{M} \grd_{\prm} J_{i,p_1} ( \prm_i^1, {\bm w}_i^1 ) = \frac{1}{NM} \sum_{i=1}^N \sum_{p=1}^M \grd_{\prm} J_{i,p} ( \prm_i^{\tau_p^1}, {\bm w}_i^{\tau_p^1} ) \eqs,\vspace{-.1cm} \eeq where we use the fact that $\grd_{\prm} J_{i,p} ( \prm_i^{\tau_p^1}, {\bm w}_i^{\tau_p^1} ) = \grd_{\prm} J_{i,p} ( \prm_i^{0}, {\bm w}_i^{0} ) = {\bm 0} $ for all $p \neq p_1$ in the above. For the induction step, suppose \eqref{eq:track} holds up to iteration $t$.
Since ${\bm W}$ is doubly stochastic, \eqref{eq:s_upd} implies \beq \label{eq:induction_step} \begin{split} {\bm g}_{\prm}(t+1) & = \frac{1}{N} \sum_{i=1}^N \bigg \{ \sum_{j=1}^N W_{ij} {\bm s}_j^t + \frac{1}{M} \Big[ \grd_{\prm} J_{i,p_{t+1}} ( \prm_i^{t+1}, {\bm w}_i^{t+1} ) - \grd_{\prm} J_{i,p_{t+1}} ( \prm_i^{\tau_{p_{t+1}}^{t}}, {\bm w}_i^{\tau_{p_{t+1}}^{t}} ) \Big] \bigg\} \\ & = {\bm g}_{\prm}(t) + \frac{1}{NM} \sum_{i=1}^N \Big[ \grd_{\prm} J_{i,p_{t+1}} ( \prm_i^{t+1}, {\bm w}_i^{t+1} ) - \grd_{\prm} J_{i,p_{t+1}} ( \prm_i^{\tau_{p_{t+1}}^{t}}, {\bm w}_i^{\tau_{p_{t+1}}^{t}} ) \Big]\eqs.\\[-.4cm] \end{split}\vspace{-.4cm} \eeq Notice that we have $\tau_{p_{t+1}}^{t+1} = t+1$ and $\tau_{p}^{t+1} = \tau_{p}^{t}$ for all $p \neq p_{t+1}$. The induction assumption in \eqref{eq:track} can be written as \begin{align} \label{eq:induction_step2} {\bm g}_{\prm}(t) = \frac{1}{NM} \sum_{i=1}^N \biggl [ \sum_{p\neq p_{t+1} } \grd_{\prm} J_{i,p} ( \prm_i^{\tau_p^{t+1}}, {\bm w}_i^{\tau_p^{t+1}} ) \biggr ] + \frac{1}{NM}\sum_{i=1}^N \grd_{\prm} J_{i,p_{t+1} } ( \prm_i^{\tau_{p_{t+1}} ^{t}}, {\bm w}_i^{\tau_{p_{t+1}} ^{t } } ) \eqs. \end{align} Finally, combining \eqref{eq:induction_step} and \eqref{eq:induction_step2}, we obtain the desired result that \eqref{eq:track} holds for the $(t+1)$-th iteration. This, together with \eqref{eq:base_case}, establishes Lemma \ref{lem:avg}. \hfill \textbf{Q.E.D.}\vspace{.1cm} As for the dual update \eqref{eq:d_upd}, we observe that the variable ${\bm w}_i$ is local to agent $i$. Therefore its gradient surrogate, ${\bm d}_i^t$, involves only the tracking step over \emph{time} [cf.~\eqref{eq:d_upd}], \ie it only averages the gradient over samples. Combining with Lemma~\ref{lem:avg} shows that the {\sf PD-DistIAG} method uses gradient surrogates that are averages over samples despite the disparities across agents.
Since the average over samples is computed in a similar spirit to the {\sf SAG} method, the proposed method is expected to converge linearly. \textbf{Storage and Computation Complexities}~~Let us comment on the computational and storage complexity of the {\sf PD-DistIAG} method. First of all, since the method requires accessing the previously evaluated gradients, each agent has to store $2M$ such vectors in memory to avoid re-evaluating them, \ie a total of $2Md$ real numbers per agent. On the other hand, the per-iteration computation complexity for each agent is only ${\cal O}(d)$, as each iteration requires evaluating the gradient of just one sample, as delineated in \eqref{eq:d_upd}--\eqref{eq:pd_alg}. \textbf{Communication Overhead}~~The {\sf PD-DistIAG} method described in Algorithm \ref{algo:main} requires an information exchange round [of ${\bm s}_i^t$ and $\prm_i^t$] among the agents at every iteration. From an implementation standpoint, this may incur significant communication overhead when $d \gg 1$, and it is especially inefficient when the progress made in successive updates of the algorithm is small. A natural remedy is to perform multiple \emph{local} updates at each agent using different samples \emph{without} exchanging information with the neighbors, thereby reducing the communication overhead. This modification to the {\sf PD-DistIAG} method can be described using a time-varying weight matrix ${\bm W}(t)$ such that ${\bm W}(t) = {\bm I}$ for most iterations.
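A minimal sketch of such a schedule is given below; the exchange period $k$ and the 3-agent topology are hypothetical placeholders, not parameters prescribed by the analysis.

```python
import numpy as np

# Hypothetical communication-saving schedule: agents exchange information
# only every k-th iteration; otherwise W(t) = I and updates are local.
def mixing_matrix(t, W, k=5):
    return W if t % k == 0 else np.eye(W.shape[0])

# Placeholder doubly stochastic weights for 3 fully connected agents.
W = np.full((3, 3), 0.25) + 0.25 * np.eye(3)
assert np.allclose(mixing_matrix(10, W), W)          # exchange round
assert np.allclose(mixing_matrix(11, W), np.eye(3))  # purely local update
```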
The convergence of the {\sf PD-DistIAG} method in this scenario is left for future work.\vspace{-.1cm} \subsection{Convergence Analysis} The {\sf PD-DistIAG} method is built using the techniques of (a) primal-dual batch gradient descent, (b) gradient tracking for distributed optimization, and (c) stochastic average gradient, each of which has been independently shown to attain linear convergence under certain conditions; see \citep{qu2017harnessing, schmidt2017minimizing,gurbuzbalaban2017convergence,du2017stochastic}. Naturally, the {\sf PD-DistIAG} method is also anticipated to converge at a linear rate. To see this, let us consider the following condition on the sample selection rule of {\sf PD-DistIAG}:\vspace{-.1cm} \begin{assumption} \label{ass:bd} A sample is selected at least once every $M$ iterations, \ie $| t - \tau_p^t | \leq M$ for all $p \in [M]$, $t \geq 1$.\vspace{-.1cm} \end{assumption} The assumption requires that every sample is visited infinitely often. For example, this can be enforced by a cyclic selection rule, \ie $p_t = (t~{\rm mod}~M) + 1$, or by a random sampling scheme \emph{without replacement} (\ie random shuffling) from the pool of $M$ samples. It is possible to relax the assumption so that a sample is selected at least once every $K$ iterations, with $K \geq M$; the present assumption is made solely for ease of presentation. Moreover, to ensure that the solution to \eqref{eq:opt_pd} is unique, we consider:\vspace{-.1cm} \begin{assumption} \label{ass:fr} The sampled correlation matrix $\hat{\bm A}$ is full rank, and the sampled covariance $\hat{\bm C}$ is non-singular.\vspace{-.1cm} \end{assumption} The following theorem confirms the linear convergence of {\sf PD-DistIAG}: \begin{Theorem} \label{thm:main} Under A\ref{ass:bd} and A\ref{ass:fr}, we denote by $( \prm^\star, \{ {\bm w}_i^\star \}_{i=1}^N )$ the primal-dual optimal solution to the optimization problem in \eqref{eq:opt_pd}.
Set the step sizes as $\gamma_2 = \beta \gamma_1 $ with $\beta \eqdef 8 ( \rho + \lambda_{\sf max} ( \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A}) ) / \lambda_{\sf min}( \hat{\bm C})$. Define $\overline{\prm}(t) \eqdef \frac{1}{N} \sum_{i=1}^N \prm_i^t$ as the average of the parameters. If the primal step size $\gamma_1$ is sufficiently small, then there exists a constant $0 < \sigma < 1$ such that \beq \label{eq:converge} \notag \textstyle \big\| \overline{\prm}(t) - \prm^\star \big\|^2 + (1/\beta N) \sum_{i=1}^N \big\| {\bm w}_i^t - {\bm w}_i^\star \big\|^2 = {\cal O}( \sigma^t ),~~ (1/N) \sum_{i=1}^N \big\| \prm_i^t - \overline{\prm}(t) \big\| = {\cal O}( \sigma^t ) \eqs. \eeq If $N,M \gg 1$ and the graph is geometric, a sufficient condition for convergence is to set $\gamma_1 = {\cal O}(1/ \max\{N^{2},M^2\})$ and the resultant rate is $\sigma = 1 - {\cal O}( 1/ \max\{M N^{2}, M^3\})$. \end{Theorem} The result above shows the desirable convergence properties of the {\sf PD-DistIAG} method -- the primal-dual solution $( \overline{\prm}(t), \{ {\bm w}_i^t \}_{i=1}^N )$ converges to $( \prm^\star, \{ {\bm w}_i^\star \}_{i=1}^N )$ at a linear rate; moreover, the consensus error of the local parameters $\prm_i^t$ converges to zero linearly. A distinguishing feature of our analysis is that it handles the \emph{worst case} convergence of the proposed method, rather than the \emph{expected} convergence rate popular for stochastic / incremental gradient methods. \textbf{Proof Sketch}~~Our proof is divided into three steps. The first step studies the progress made by the algorithm in one iteration, taking into account the non-idealities due to imperfect tracking of the gradient over space and time. This leads to the characterization of a \emph{Lyapunov vector}. The second step analyzes the \emph{coupled} system of inequalities that describes the one-iteration progress of the Lyapunov vector.
An interesting feature is that it consists of a series of independently \emph{delayed} terms in the Lyapunov vector; these delays result from the incremental update scheme employed in the method. Here, we study a sufficient condition for the coupled and delayed system to converge linearly. The last step is to derive a condition on the step size $\gamma_1$ under which the sufficient convergence condition is satisfied. Specifically, we study the progress of the Lyapunov functions: \beq \notag \begin{array}{c} \| \widehat{\underline{\bm v}}(t) \|^2 \eqdef \Theta \Big( \big\| \overline{\prm}(t) - \prm^\star \big\|^2 + (1/\beta N) \sum_{i=1}^N \big\| {\bm w}_i^t - {\bm w}_i^\star \big\|^2 \Big),~~ {\cal E}_c(t) \eqdef \frac{1}{N} \sqrt{\sum_{i=1}^N \| \prm_i^t - \overline{\prm}(t) \|^2}, \vspace{.1cm} \\ \textstyle {\cal E}_g(t) \eqdef \frac{1}{N} \sqrt{ \sum_{i=1}^N \big\| {\bm s}_i^t - \frac{1}{NM} \sum_{j=1}^N \sum_{p=1}^M \grd_{\prm} J_{j,p} ( \prm_j^{\tau_p^t}, {\bm w}_j^{\tau_p^t} ) \big\|^2} \eqs. \end{array} \eeq That is, $\widehat{\underline{\bm v}}(t)$ is a vector whose squared norm is equivalent to a weighted distance to the optimal primal-dual solution, while ${\cal E}_c(t)$ and ${\cal E}_g(t)$ are respectively the consensus errors of the primal parameter and of the primal \emph{aggregated} gradient.
These functions form a non-negative vector which evolves as: \beq \label{eq:fin_sys_paper} \left( \begin{array}{c} \| \widehat{\underline{\bm v}}(t+1) \| \\[.1cm] {\cal E}_c(t+1) \\[.1cm] {\cal E}_g(t+1) \end{array} \right) \leq {\bm Q} (\gamma_1) \left( \begin{array}{c} \max_{ (t-2M)_+ \leq q \leq t } \| \widehat{\underline{\bm v}}(q) \| \\[.1cm] \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_c(q) \\[.1cm] \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_g(q) \end{array} \right) \eqs, \eeq where the matrix ${\bm Q} ( \gamma_1 ) \in \RR^{3 \times 3}$ is defined by (exact form given in the supplementary material) \beq \label{eq:Q_gamma_mat} {\bm Q} ( \gamma_1 ) = \left( \begin{array}{ccc} 1 - \gamma_1 a_0 + \gamma_1^2 a_1 & \gamma_1 a_2 & 0 \\ 0 & \lambda & \gamma_1 \\ \gamma_1 a_3 & a_4 + \gamma_1 a_5 & \lambda + \gamma_1 a_6 \end{array} \right) \eqs. \eeq In the above, $\lambda \eqdef \lambda_{\sf max} ( {\bm W} - (1/N) {\bf 1}{\bf 1}^\top ) < 1$, and $a_0, ..., a_6$ are some non-negative constants that depend on the problem parameters $N$, $M$, the spectral properties of ${\bm A}$, ${\bm C}$, etc., with $a_0$ being positive. If we focus only on the first row of the inequality system, we obtain \begin{align*} \| \widehat{\underline{\bm v}}(t + 1) \| \leq \big( 1 - \gamma_1 a_0 + \gamma_1^2 a_1 \big) \max_{ (t-2M)_+ \leq q \leq t } \| \widehat{\underline{\bm v}}(q) \| + \gamma_1 a_2 \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_c(q) \eqs. \end{align*} In fact, when the contribution from ${\cal E}_c(q)$ can be ignored, applying \citep[Lemma 3]{feyzmahdavian2014delayed} shows that $\| \widehat{\underline{\bm v}}(t + 1) \|$ converges linearly if $- \gamma_1 a_0 + \gamma_1^2 a_1 < 0$, which is possible as $a_0 > 0$. Therefore, if ${\cal E}_c(t)$ also converges linearly, then it is anticipated that ${\cal E}_g(t)$ would do so as well.
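To see how small $\gamma_1$ has to be, one can numerically scan for step sizes at which ${\bm Q}(\gamma_1)$ becomes a contraction; in the sketch below, $\lambda$ and $a_0, \dots, a_6$ are hypothetical placeholder values rather than the constants derived in the analysis.

```python
import numpy as np

# Scan step sizes and check when the spectral radius of Q(gamma_1)
# drops below one; lambda and a0..a6 are hypothetical placeholders.
lam, a0, a1, a2, a3, a4, a5, a6 = 0.9, 1.0, 5.0, 2.0, 1.0, 0.05, 1.0, 1.0

def Q(g):
    return np.array([[1 - g * a0 + g**2 * a1, g * a2, 0.0],
                     [0.0, lam, g],
                     [g * a3, a4 + g * a5, lam + g * a6]])

def rho(mat):
    return max(abs(np.linalg.eigvals(mat)))

# With a zero step size the first Lyapunov term makes no progress ...
assert abs(rho(Q(0.0)) - 1.0) < 1e-8
# ... while sufficiently small positive step sizes yield a contraction.
feasible = [g for g in np.linspace(1e-3, 0.2, 200) if rho(Q(g)) < 1]
assert feasible
```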
In other words, the linear convergence of $\| \widehat{\underline{\bm v}}(t) \|$, ${\cal E}_c(t)$ and ${\cal E}_g(t)$ is coupled through the inequality system \eqref{eq:fin_sys_paper}. Formalizing the above observations, Lemma~1 in the supplementary material shows a sufficient condition on $\gamma_1$ for linear convergence. Specifically, if there exists $\gamma_1>0$ such that the spectral radius of ${\bm Q}( \gamma_1 )$ in \eqref{eq:Q_gamma_mat} is strictly less than one, then each of the Lyapunov functions, $\| \widehat{\underline{\bm v}}(t) \|$, ${\cal E}_c(t)$, ${\cal E}_g(t)$, enjoys linear convergence. Furthermore, Lemma~2 in the supplementary material proves that such a $\gamma_1$ exists. This concludes the proof. \textbf{Remark}~~While delayed inequality systems have been studied in \citep{feyzmahdavian2014delayed,gurbuzbalaban2017convergence} for optimization algorithms, the coupled system in \eqref{eq:fin_sys_paper} is a non-trivial generalization thereof. Importantly, the challenge here is due to the asymmetry of the system matrix ${\bm Q}$ and to the fact that the maxima over the past sequences on the right hand side are taken \emph{independently}. To the best of our knowledge, our result is the first to characterize the (linear) convergence of such a coupled and delayed system of inequalities. \textbf{Extension}~~Our analysis and algorithm may in fact be applied to solve general problems that involve multi-agent and finite-sum optimization, e.g., \beq \label{eq:s_opt} \textstyle \min_{ \prm \in \RR^d } ~J(\prm) \eqdef \frac{1}{ NM}\sum_{i=1}^N \sum_{p=1}^M J_{i,p} ( \prm ) \eqs. \eeq For instance, such problems arise in multi-agent empirical risk minimization, where data samples are kept independently by the agents. Our analysis, especially the convergence theory for inequality systems of the form \eqref{eq:fin_sys_paper}, can be applied to study a similar double averaging algorithm with just the primal variable.
In particular, we only require the sum function $J(\prm)$ to be strongly convex, and the objective functions $J_{i,p}(\cdot)$ to be smooth in order to achieve linear convergence. We believe that such an extension is of independent interest to the community. At the time of submission, a recent work \citep{pu2018distributed} applied a related double averaging distributed algorithm to a \emph{stochastic version} of \eqref{eq:s_opt}. However, their convergence rate is sub-linear as they considered a stochastic optimization setting. \section{Proof of Theorem~1} We repeat the statement of the theorem as follows: \begin{Theorem*} \label{thm:main_app} Under A\ref{ass:bd} and A\ref{ass:fr}, we denote by $( \prm^\star, \{ {\bm w}_i^\star \}_{i=1}^N )$ the primal-dual optimal solution to the optimization problem in \eqref{eq:opt_pd}. Set the step sizes as $\gamma_2 = \beta \gamma_1 $ with $\beta \eqdef 8 ( \rho + \lambda_{\sf max} ( \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A}) ) / \lambda_{\sf min}( \hat{\bm C})$. Define $\overline{\prm}(t) \eqdef \frac{1}{N} \sum_{i=1}^N \prm_i^t$ as the average of the parameters. If the primal step size $\gamma_1$ is sufficiently small, then there exists a constant $0 < \sigma < 1$ such that \beq \notag \textstyle \big\| \overline{\prm}(t) - \prm^\star \big\|^2 + (1/\beta N) \sum_{i=1}^N \big\| {\bm w}_i^t - {\bm w}_i^\star \big\|^2 = {\cal O}( \sigma^t ),~~ (1/N) \sum_{i=1}^N \big\| \prm_i^t - \overline{\prm}(t) \big\| = {\cal O}( \sigma^t ) \eqs. \eeq If $N,M \gg 1$ and the graph is geometric with $\lambda = 1 - c/N$ for some $c>0$, a sufficient condition for convergence is to set $\gamma_1 = {\cal O}(1/ \max\{N^2,M^2\})$ and the resultant rate is $\sigma = 1 - {\cal O}( 1/ \max\{M N^2, M^3\})$. \end{Theorem*} \paragraph{Notation} We first define a set of notations pertaining to the proof.
For any $\beta > 0$, observe that the primal-dual optimal solution, $( \prm^\star, \{ {\bm w}_i ^{\star}\}_{i=1}^N )$, to the optimization problem \eqref{eq:opt_pd} can be written as \beq \label{eq:opt_cond} \underbrace{\left( \begin{array}{cccc} \rho {\bm I} & \sqrt{\frac{\beta}{N}} \hat{\bm A}^\top & \cdots & \sqrt{\frac{\beta}{N}} \hat{\bm A}^\top \\ -\sqrt{\frac{\beta}{N}} \hat{\bm A} & \beta \hat{\bm C} & \cdots & \cdots \\ \vdots & {\bm 0} & \ddots & {\bm 0} \\ -\sqrt{\frac{\beta}{N}} \hat{\bm A} & \cdots & \cdots & \beta \hat{\bm C} \end{array} \right)}_{\eqdef {\bm G}} \left( \begin{array}{c} \prm^\star \\ \frac{1}{\sqrt{\beta N}}{\bm w}_1^\star \\ \vdots \\ \frac{1}{\sqrt{\beta N}}{\bm w}_N^\star \end{array} \right) = \left( \begin{array}{c} {\bm 0} \\ -\sqrt{\frac{\beta}{N}}{\bm b}_1 \\ \vdots \\ -\sqrt{\frac{\beta}{N}}{\bm b}_N \end{array} \right) \eqs, \eeq where we denote the matrix on the left hand side as ${\bm G}$. This equation can be obtained by checking the first-order optimality condition. In addition, for any $p \in \{ 1, \ldots, M \}$, we define ${\bm G}_p$ as \beq\label{eq:matrix_Gp} {\bm G}_p \eqdef \left( \begin{array}{cccc} \rho {\bm I} & \sqrt{\frac{\beta}{N}} {\bm A}_p^\top & \cdots & \sqrt{\frac{\beta}{N}} {\bm A}_p^\top \\ -\sqrt{\frac{\beta}{N}}{\bm A}_p & \beta {\bm C}_p & \cdots & \cdots \\ \vdots & {\bm 0} & \ddots & {\bm 0} \\ -\sqrt{\frac{\beta}{N}}{\bm A}_p & \cdots & \cdots & \beta {\bm C}_p \end{array} \right) \eqs. \eeq By definition, ${\bm G}$ is the sample average of $ \{ {\bm G}_p \}_{p=1}^M$. Define $\bar{\prm}(t) \eqdef (1/N) \sum_{i=1}^N \prm_i^t$ as the average of the local parameters at iteration $t$.
Furthermore, we define \begin{align} {\bm h}_{\prm}(t) \eqdef \rho \bar{\prm}(t) + \frac{1}{N} \sum_{i=1}^N \hat{\bm A}^\top {\bm w}_i^t, & \qquad {\bm g}_{\prm}(t) \eqdef \frac{1}{NM} \sum_{i=1}^N \sum_{p=1}^M \big( \rho \prm_i^{\tau_p^t} + {\bm A}_p^\top {\bm w}_i^{\tau_p^t} \big) \eqs, \label{eq:4terms_1} \\ {\bm h}_{ {\bm w}_i } (t) \eqdef \hat{\bm A} \bar{\prm}(t) - \hat{\bm C} {\bm w}_i^t - \hat{\bm b}_i, & \qquad {\bm g}_{ {\bm w}_i } (t) \eqdef \frac{1}{M} \sum_{p=1}^M \big( {\bm A}_p \prm_i^{\tau_p^t} - {\bm C}_p {\bm w}_i^{\tau_p^t} - {\bm b}_{p,i} \big) \eqs, \label{eq:4terms_2} \end{align} where ${\bm h}_{\prm}(t)$ and $ {\bm h}_{\bm w}(t) : = [{\bm h}_{ {\bm w}_1} (t), \cdots, {\bm h}_{ {\bm w}_N } (t)]$ represent the gradients evaluated by a \emph{centralized} and \emph{batch} algorithm. Note that ${\bm g}_{\prm}(t)$ defined in \eqref{eq:4terms_1} coincides with that in \eqref{eq:track}. Using Lemma~\ref{lem:avg}, it can be checked that $\bar{\prm}(t+1) = \bar{\prm}(t) - \gamma_1 {\bm g}_{\prm}(t)$ and ${\bm w}_i^{t+1} = {\bm w}_i^t + \gamma_2 {\bm g}_{ {\bm w}_i } (t)$ for all $t \geq 1$. That is, $\bar{\prm}(t+1)$ and ${\bm w}_i^{t+1} $ can be viewed as primal-dual updates using ${\bm g}_{\prm}(t)$ and ${\bm g}_{ {\bm w}_i } (t)$, which are decentralized counterparts of the gradients ${\bm h}_{\prm}(t)$ and ${\bm h}_{ {\bm w}_i} (t)$ defined in \eqref{eq:4terms_1} and \eqref{eq:4terms_2}.
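For instance, the first of these identities follows directly from the primal step in \eqref{eq:pd_alg}, the double stochasticity of ${\bm W}$, and the definition of ${\bm g}_{\prm}(t)$:
\beq \notag
\bar{\prm}(t+1) = \frac{1}{N} \sum_{i=1}^N \Big( \sum_{j=1}^N W_{ij} \prm_j^t - \gamma_1 {\bm s}_i^t \Big) = \frac{1}{N} \sum_{j=1}^N \Big( \sum_{i=1}^N W_{ij} \Big) \prm_j^t - \gamma_1 {\bm g}_{\prm}(t) = \bar{\prm}(t) - \gamma_1 {\bm g}_{\prm}(t) \eqs.
\eeq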
To simplify the notation, hereafter, we define vectors $\underline{\bm h}(t)$, $\underline{\bm g}(t)$, and $\underline{\bm v}(t) $ by \beq \label{eq:define_3vectors} \underline{\bm h}(t) \eqdef \left( \begin{array}{c} {\bm h}_{\prm}(t) \\ -\sqrt{\frac{\beta}{N}} {\bm h}_{ {\bm w}_1 } (t) \\ \vdots \\ -\sqrt{\frac{\beta}{N}} {\bm h}_{ {\bm w}_N } (t) \end{array} \right),~ \underline{\bm g}(t) \eqdef \left( \begin{array}{c} {\bm g}_{\prm}(t) \\ -\sqrt{\frac{\beta}{N}} {\bm g}_{ {\bm w}_1 } (t) \\ \vdots \\ -\sqrt{\frac{\beta}{N}} {\bm g}_{ {\bm w}_N } (t) \end{array} \right),~ \underline{\bm v}(t) \eqdef \left( \begin{array}{c} \bar{\prm}(t) - \prm^\star \\ \frac{1}{\sqrt{\beta N}} \big( {\bm w}_1^t - {\bm w}_1^\star \big) \\ \vdots \\ \frac{1}{\sqrt{\beta N}} \big( {\bm w}_N^t - {\bm w}_N^\star \big) \end{array} \right) \eqs. \eeq Using \eqref{eq:opt_cond}, it can be verified that (see the detailed derivation in Section~\ref{sec:detailed}) \beq \label{eq:new_equality} \underline{\bm h}(t) = {\bm G} \underline{\bm v}(t) \eqs. \eeq By adopting the analysis in \citep{du2017stochastic} and under Assumption \ref{ass:fr}, it can be shown that with \beq\notag \beta \eqdef \frac{ 8 \bigl [ \rho + \lambda_{\sf max} ( \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A} ) \bigr ]}{ \lambda_{\sf min}( \hat{\bm C}) } \eqs, \eeq the matrix ${\bm G}$ is full rank with its eigenvalues satisfying \beq\label{eq:eigen_G} \lambda_{\sf max} ( {\bm G} ) \leq \left| \frac{\lambda_{\sf max} ( \hat{\bm C} ) }{ \lambda_{\sf min} (\hat{\bm C} )} \right| \lambda_{\sf max} ( \rho {\bm I} + \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A} ),\qquad \lambda_{\sf min} ( {\bm G} ) \geq \frac{8}{9} \lambda_{\sf min} ( \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A} ) > 0 \eqs. \eeq Moreover, let $ {\bm G} \eqdef {\bm U} \bm{\Lambda} {\bm U}^{-1}
$ be the eigen-decomposition of ${\bm G}$, where ${\bm \Lambda}$ is a diagonal matrix consisting of the eigenvalues of ${\bm G}$, and the columns of ${\bm U}$ are the eigenvectors. Then, ${\bm U}$ is full rank with \beq\label{eq:eigen_U} \| {\bm U} \| \leq 8 \bigl [ \rho + \lambda_{\sf max} ( \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A} ) \bigr ]\left| \frac{\lambda_{\sf max} ( \hat{\bm C} ) }{ \lambda_{\sf min} (\hat{\bm C} )} \right| ,\qquad \| {\bm U}^{-1} \| \leq \frac{1}{ \rho + \lambda_{\sf max} ( \hat{\bm A}^\top \hat{\bm C}^{-1} \hat{\bm A} )} \eqs. \eeq Furthermore, we define the following upper bounds on the spectral norms \beq\label{eq:norm_bounds} G \eqdef \| {\bm G} \|, \quad \overline{G} \eqdef \max_{p=1,...,M} \| {\bm G}_p \|,\quad \overline{A} \eqdef \max_{p=1,...,M} \| {\bm A}_p \|, \quad \overline{C} \eqdef \max_{p=1,...,M} \| {\bm C}_p \| \eqs. \eeq Lastly, we define the following two Lyapunov functions \beq \label{eq:lya_funcs} {\cal E}_c(t) \eqdef \frac{1}{N} \biggl [ \sum_{i=1}^N \| \prm_i^t - \overline{\prm}(t) \|^2 \biggr ]^{1/2 } ,\qquad {\cal E}_g(t) \eqdef \frac{1}{N} \biggl [ \sum_{i=1}^N \| {\bm s}_i^t - {\bm g}_{\prm}(t) \|^2 \biggr ] ^{1/2} \eqs. \eeq Note that we have the following inequalities: \beq \label{eq:equiv_norm} {\cal E}_c(t) \leq \frac{1}{N} \sum_{i=1}^N \| \prm_i^t - \overline{\prm}(t) \|,~~\frac{1}{N}\sum_{i=1}^N \| \prm_i^t - \overline{\prm}(t) \| \leq \sqrt{N} {\cal E}_c(t) \eqs, \eeq which follow from the norm equivalence $\| {\bm x} \|_2 \leq \| {\bm x} \|_1 \leq \sqrt{N} \| {\bm x} \|_2$ for any ${\bm x} \in \RR^N$. \paragraph{Convergence Analysis} We denote $\gamma_1 = \gamma$ and $\gamma_2 = \beta \gamma$. To study the linear convergence of the {\sf PD-DistIAG} method, our first step is to establish a bound on the difference from the primal-dual optimal solution, $\underline{\bm v}(t)$.
Observe that, with our choice of the step size ratio, \beq \label{eq:taking_norm} \underline{\bm v}(t+1) = ( {\bm I} - \gamma {\bm G} ) \underline{\bm v}(t) + \gamma \big[ \underline{\bm h}(t) - \underline{\bm g}(t) \big] \eqs. \eeq Consider the difference vector $\underline{\bm h}(t) - \underline{\bm g}(t)$. Its first block can be evaluated as \beq \label{eq:last1} \begin{split} & \big[ \underline{\bm h}(t) - \underline{\bm g}(t) \big]_1 = \frac{1}{NM} \sum_{i=1}^N \sum_{p=1}^M \Big[ \rho \big( \bar{\prm}(t) - \prm_i^{\tau_p^t} \big) + {\bm A}_p^\top \big( {\bm w}_i^t - {\bm w}_i^{\tau_p^t} \big) \Big] \\ & = \frac{1}{NM} \sum_{i=1}^N \sum_{p=1}^M \Big[ \rho \big( \bar{\prm}(t) - \bar{\prm} (\tau_p^t) \big) + {\bm A}_p^\top \big( {\bm w}_i^t - {\bm w}_i^{\tau_p^t} \big) \Big] + \frac{\rho}{NM} \sum_{i=1}^N \sum_{p=1}^M \big( \bar{\prm}( \tau_p^t) - \prm_i^{\tau_p^t} \big) \eqs. \end{split} \eeq Meanwhile, for any $i \in \{ 1, \ldots, N\}$, the $(i+1)$-th block is \begin{align} \label{eq:last2} & \big[ \underline{\bm h}(t) - \underline{\bm g}(t) \big]_{i+1} = \sqrt{\frac{\beta}{N}} \frac{1}{M} \sum_{p=1}^M \Big[ {\bm A}_p \big( \prm_i^{\tau_p^t} - \bar{\prm}(t) \big) + {\bm C}_p \big( {\bm w}_i^t - {\bm w}_i^{\tau_p^t} \big) \Big] \\ & = \sqrt{\frac{\beta}{N}} \frac{1}{M} \sum_{p=1}^M \Big[ {\bm A}_p \big( \bar{\prm} (\tau_p^t) - \bar{\prm}(t) \big) + {\bm C}_p \big( {\bm w}_i^t - {\bm w}_i^{\tau_p^t} \big) \Big] + \sqrt{\frac{\beta}{N}} \frac{1}{M} \sum_{p=1}^M {\bm A}_p \big( \prm_i^{\tau_p^t} - \bar{\prm}( \tau_p^t ) \big) \eqs. \notag \end{align} For ease of presentation, we stack the residual terms (related to the consensus error) in \eqref{eq:last1} and \eqref{eq:last2} into the vector $\underline{\bm{\mathcal{E}}}_c(t)$.
That is, the first block of $\underline{\bm{\mathcal{E}}}_c(t)$ is $\rho / ( NM) \cdot \sum_{i=1}^N \sum_{p=1}^M \big( \bar{\prm}( \tau_p^t) - \prm_i^{\tau_p^t} \big)$, and the remaining blocks are given by $ \sqrt{ {\beta} / {N}} \cdot 1 / {M} \cdot \sum_{p=1}^M {\bm A}_p \big( \prm_i^{\tau_p^t} - \bar{\prm}( \tau_p^t ) \big) $, $\forall i \in \{ 1, \ldots, N\}$. Then by the definition of ${\bm G}_p$ in \eqref{eq:matrix_Gp}, we obtain the following simplification: \beq\label{eq:grad_diff_telescope} \underline{\bm h}(t) - \underline{\bm g}(t) - \underline{\bm{\mathcal{E}}}_c(t) = \frac{1}{M} \sum_{p=1}^M {\bm G}_p \Big( {\textstyle \sum_{j=\tau_p^t}^{t-1} \underline{\Delta {\bm v}} (j)} \Big), \eeq where we have defined \beq\label{eq:defineDeltav} \underline{\Delta {\bm v}} (j) \eqdef \left( \begin{array}{c} \bar{\prm}(j+1) - \bar{\prm}(j) \\ \frac{1}{\sqrt{\beta N}} \big( {\bm w}_1^{j+1} - {\bm w}_1^j \big) \\ \vdots \\ \frac{1}{\sqrt{\beta N}} \big( {\bm w}_N^{j+1} - {\bm w}_N^j \big) \end{array} \right) \eqs. \eeq Clearly, we can express $\Delta \underline{\bm v}(j)$ as $ \Delta \underline{\bm v}(j) = \underline{\bm v}(j+1) - \underline{\bm v}(j) $ with $\underline {\bm v}(t)$ defined in \eqref{eq:define_3vectors}. Combining \eqref{eq:new_equality} and \eqref{eq:taking_norm}, we can also write $\Delta \underline{\bm v}(j) $ in \eqref{eq:defineDeltav} as \beq\label{eq:some_equation1} \Delta \underline{\bm v}(j) = \gamma \big[ \underline{\bm h}(j) - \underline{\bm g}(j) \big] - \gamma \underline{\bm h}(j) \eqs. \eeq Denoting $\widehat{\underline{\bm v}}(t) \eqdef {\bm U}^{-1} \underline{\bm v}(t)$, multiplying ${\bm U}^{-1}$ on both sides of \eqref{eq:taking_norm} yields \beq\label{eq:some_equation2} \widehat{\underline{\bm v}}(t+1) = ( {\bm I} - \gamma \bm{\Lambda} ) \widehat{\underline{\bm v}}(t) + \gamma~ {\bm U}^{-1} \big( \underline{\bm h}(t) - \underline{\bm g}(t) \big) \eqs. 
\eeq Combining \eqref{eq:grad_diff_telescope}, \eqref{eq:some_equation1}, and \eqref{eq:some_equation2}, by triangle inequality, we have \begin{align} \label{eq:some_equation3} & \| \widehat{\underline{\bm v}}(t+1) \| \leq \\ & \qquad \big\| {\bm I} - \gamma \bm{\Lambda} \big\| \| \widehat{\underline{\bm v}}(t) \| + \gamma \| {\bm U}^{-1} \| \biggl \{ \| \underline{\bm{\mathcal{E}}}_c(t) \| + \frac{\gamma \overline{G}}{M} \sum_{p=1}^M \sum_{j=\tau_p^t}^{t-1} \big [ \| \underline{\bm h}(j) \| + \| \underline{\bm h}(j) - \underline{\bm g}(j) \| \big ] \bigg \} \eqs, \notag \end{align} where $\overline {G} $ appears in \eqref{eq:norm_bounds} and $\underline{\bm{\mathcal{E}}}_c(t)$ is the residue term of the consensus. Furthermore, simplifying the right-hand side of \eqref{eq:some_equation3} yields \begin{align} \label{eq:bound_v_norm_upper} & \| \widehat{\underline{\bm v}}(t+1) \| \leq \big\| {\bm I} - \gamma \bm{\Lambda} \big\| \| \widehat{\underline{\bm v}}(t) \| + \gamma \|{\bm U}^{-1}\| \biggl \{ \| \underline{\bm{\mathcal{E}}}_c(t) \| + \gamma \overline{G} \sum_{j=(t-M)_+}^{t-1} \big[ \| \underline{\bm h}(j) \| + \| \underline{\bm h}(j) - \underline{\bm g}(j) \| \big ] \biggr \} \notag \\ & \qquad \leq \big\| {\bm I} - \gamma \bm{\Lambda} \big\| \| \widehat{\underline{\bm v}}(t) \| + \gamma \| {\bm U}^{-1} \| \Bigg( \| \underline{\bm{\mathcal{E}}}_c(t) \| + \gamma \overline{G} \cdot \\ & \qquad \qquad \sum_{j=(t-M)_+}^{t-1} \bigg \{ \| \underline{\bm{\mathcal{E}}}_c(j) \| + G \| {\bm U} \| \| \widehat{\underline{\bm v}}(j) \| + \overline{G} \| {\bm U} \| \sum_{\ell=(j-M)_+}^{j-1} \bigl [ \| \widehat{\underline{\bm v}}(\ell+1) \| + \| \widehat{\underline{\bm v}}(\ell) \| \bigr ] \bigg\} \Bigg) \eqs. 
\notag \end{align} Moreover, using the definition of $\underline{\bm{\mathcal{E}}}_c(t)$ and \eqref{eq:equiv_norm}, we can upper bound $\| \underline{\bm{\mathcal{E}}}_c(t) \|$ by \beq\label{eq:Ec_bound} \| \underline{\bm{\mathcal{E}}}_c(t) \| \leq \frac{ 1 }{ M } \sum_{p=1}^M \bigg[ \big( \rho + \overline{A} \sqrt{\beta N} \big) \frac{1}{N} \sum_{i=1}^N \| \prm_i^{\tau_p^t} - \bar{\prm} ( \tau_p^t ) \| \bigg] \leq \sqrt{N} \big( \rho + \overline{A} \sqrt{\beta N} \big) \max_{ (t-M)_+ \leq q \leq t } {\cal E}_c(q) \eqs . \eeq Thus, combining \eqref{eq:bound_v_norm_upper} and \eqref{eq:Ec_bound}, we bound $\| \widehat{\underline{\bm v}}(t+1) \| $ by \beq \label{eq:v_1} \| \widehat{\underline{\bm v}}(t+1) \| \leq \big\| {\bm I} - \gamma \bm{\Lambda} \big\| \| \widehat{\underline{\bm v}}(t) \| + C_1( \gamma ) \max_{ (t-2M)_+ \leq q \leq t-1 } \| \widehat{\underline{\bm v}}(q) \| + C_2( \gamma ) \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_c(q) \eqs , \eeq where the constants $C_1(\gamma)$ and $C_2(\gamma)$ are given by \beq\notag C_1( \gamma ) \eqdef \gamma^2~ \| {\bm U} \| \| {\bm U}^{-1} \| \overline{G} M \big( G + 2 \overline{G} M \big),~ C_2( \gamma ) \eqdef \gamma \| {\bm U}^{-1} \| \big( 1 + \gamma\overline{G} M \big) \sqrt{N} \big( \rho + \overline{A} \sqrt{\beta N} \big). \eeq Notice that since ${\bm U}^{-1}$ is full rank, the squared norm $\| \widehat{\underline{\bm v}}(t) \|^2$ is proportional to $\| \overline{\prm}(t) - \prm^\star \|^2 + (1/\beta N) \sum_{i=1}^N \| {\bm w}_i^\star - {\bm w}_i^t \|^2$, \ie the optimality gap at the $t$-th iteration. We next upper bound ${\cal E}_c(t+1)$ as defined in \eqref{eq:lya_funcs}. Notice that $N {\cal E}_c(t+1)$ can be written as the Frobenius norm of the matrix $\bm{\Theta}^{t+1} - {\bf 1} \overline{\prm}(t+1)^\top$, where $\bm{\Theta}^{t+1} = ( (\prm_1^{t+1})^\top ; \cdots ; (\prm_N^{t+1})^\top )$. Also, we denote ${\bm S}^t = ( ({\bm s}_1^t)^\top; \cdots ; ({\bm s}_N^t)^\top)$.
By the update in \eqref{eq:pd_alg} and using the triangle inequality, we have \beq \label{eq:fin_c02} \begin{split} {\cal E}_c(t+1) & = \frac{1}{N} \| \bm{\Theta}^{t+1} - {\bf 1} \overline{\prm}(t+1)^\top \|_F = \frac{1}{N} \| {\bm W} ( \bm{\Theta}^{t} - {\bf 1} \overline{\prm}(t)^\top ) - \gamma ( {\bm S}^t - {\bf 1} {\bm g}_{\prm}(t)^\top ) \|_F \\ & \leq \frac{1}{N} \big( \| {\bm W} ( \bm{\Theta}^{t} - {\bf 1} \overline{\prm}(t)^\top ) \|_F + \gamma \!~\| {\bm S}^t - {\bf 1} {\bm g}_{\prm}(t)^\top \|_F \big) \eqs. \end{split} \eeq Notice that we have $\lambda \eqdef \lambda_{\sf max}({\bm W} - (1/N) {\bf 1}{\bf 1}^\top ) < 1$ as the graph is connected. Using the fact that $N {\cal E}_g(t) = \| {\bm S}^t - {\bf 1} {\bm g}_{\prm}(t)^\top \|_F$, the right-hand side of \eqref{eq:fin_c02} can be bounded by \begin{align}\label{eq:fin_c2} {\cal E}_c(t+1) & \leq \lambda ~ {\cal E}_c(t) + \gamma ~ {\cal E}_g(t) \eqs, \end{align} where the Lyapunov function ${\cal E}_g(t) $ is defined in \eqref{eq:lya_funcs}. To conclude the proof, we need to further upper bound ${\cal E}_g(t+1)$. To simplify the notation, let us define ${\bm G}_{p}^t = ( \grd_{\prm} J_{1,p} (\prm_1^t;{\bm w}_1^t)^\top; \cdots ; \grd_{\prm} J_{N,p} (\prm_N^t;{\bm w}_N^t)^\top )$ and observe that \beq {\cal E}_g(t+1) = \frac{1}{N} \Big\| {\bm S}^{t+1} - {\bf 1}{\bm g}_{\prm}(t+1)^\top \Big\|_F = \frac{1}{N} \Big\| {\bm W} {\bm S}^{t} + {\textstyle \frac{1}{M}} \big( {\bm G}_{p_{t+1}}^{t+1} - {\bm G}_{p_{t+1}}^{\tau_{p_{t+1}}^t} \big) - {\bf 1}{\bm g}_{\prm}(t+1)^\top \Big\|_F \eeq where we have used \eqref{eq:s_upd}.
Furthermore, we observe that \begin{align} \label{eq:bound_Eg_first} {\cal E}_g(t+1) & = \frac{1}{N} \Big\| {\bm W} ( {\bm S}^{t} - {\bf 1} {\bm g}_{\prm}(t)^\top )+ {\textstyle \frac{1}{M}} \big( {\bm G}_{p_{t+1}}^{t+1} - {\bm G}_{p_{t+1}}^{\tau_{p_{t+1}}^t} \big) - {\bf 1}( {\bm g}_{\prm}(t+1) - {\bm g}_{\prm}(t) )^\top \Big\|_F \notag \\ & \leq \frac{1}{N} \Big( \| {\bm W} ( {\bm S}^t - {\bf 1} {\bm g}_{\prm}(t)^\top ) \|_F + \| {\textstyle \frac{1}{M}} \big( {\bm G}_{p_{t+1}}^{t+1} - {\bm G}_{p_{t+1}}^{\tau_{p_{t+1}}^t} \big) - {\bf 1}({\bm g}_{\prm}(t+1) - {\bm g}_{\prm}(t))^\top \|_F \Big) \notag\\ & \leq \lambda \!~ {\cal E}_g(t) + \frac{1}{N} \Bigl \| {\textstyle \frac{1}{M}} \big( {\bm G}_{p_{t+1}}^{t+1} - {\bm G}_{p_{t+1}}^{\tau_{p_{t+1}}^t} \big) - {\bf 1}({\bm g}_{\prm}(t+1) - {\bm g}_{\prm}(t))^\top \Bigr \|_F \eqs . \end{align} We observe that ${\bm G}_{p}^t = ( ({\bm w}_1^t)^\top {\bm A}_p; \cdots ; ({\bm w}_N^t)^\top {\bm A}_p ) + \rho \!~ \bm{\Theta}^t$ and ${\bm g}_{\prm}(t+1) - {\bm g}_{\prm} (t) = M^{-1} \big( \rho \!~ \bar{\prm}(t+1) - \rho \!~ \bar{\prm}( \tau_{p_{t+1}}^t ) + N^{-1} {\bm A}_{p_{t+1}}^\top \sum_{i=1}^N \big( {\bm w}_i^{t+1} - {\bm w}_i^{\tau_{p_{t+1}}^t} \big) \big)$. Adopting the notations $\bm{\Omega}^t = ( ({\bm w}_1^t)^\top ; \cdots; ({\bm w}_N^t )^\top )$ and $\overline{\bm w}^t = N^{-1} \sum_{i=1}^N {\bm w}_i^t$, we observe that \beq \begin{split} & M^{-1} ( {\bm G}_{p_{t+1}}^{t+1} - {\bm G}_{p_{t+1}}^{\tau_{p_{t+1}}^t} ) - {\bf 1} ( {\bm g}_{\prm}(t+1) - {\bm g}_{\prm}(t) )^\top \\ & \qquad = \frac{\rho}{M} \big( \bm{\Theta}^{t+1} - {\bf 1} \overline{\prm}(t+1)^\top - \bm{\Theta}^{\tau_{p_{t+1}}^t} + {\bf 1} \overline{\prm}(\tau_{p_{t+1}}^t)^\top \big) \\ & \quad \qquad+ \frac{1}{M} \big( \bm{\Omega}^{t+1} - {\bf 1} (\overline{\bm w}^{t+1})^\top - \bm{\Omega}^{ \tau_{p_{t+1}}^t } + {\bf 1} (\overline{\bm w}^{\tau_{p_{t+1}}^t})^\top \big) {\bm A}_{p_{t+1}} \eqs .
\end{split} \eeq Using the triangle inequality, the norm of the above can be bounded as \beq \begin{split} & \frac{\rho}{M} \Big( \| \bm{\Theta}^{t+1} - {\bf 1} \overline{\prm}(t+1)^\top \|_F + \| \bm{\Theta}^{\tau_{p_{t+1}}^t} - {\bf 1} \overline{\prm}(\tau_{p_{t+1}}^t)^\top \|_F \Big) \\ & \quad + \frac{ \| {\bm A}_{p_{t+1}} \| }{ M } \Big( \| \bm{\Omega}^{t+1} - {\bf 1} (\overline{\bm w}^{t+1})^\top - \bm{\Omega}^{ \tau_{p_{t+1}}^t } + {\bf 1} (\overline{\bm w}^{\tau_{p_{t+1}}^t})^\top \|_F \Big) \eqs. \end{split} \eeq From the norm equivalence $\| {\bm x} \|_2 \leq \| {\bm x} \|_1$, we recognize that $\| \bm{\Omega}^{t+1} - {\bf 1} (\overline{\bm w}^{t+1})^\top - \bm{\Omega}^{ \tau_{p_{t+1}}^t } + {\bf 1} (\overline{\bm w}^{\tau_{p_{t+1}}^t})^\top \|_F \leq \sum_{i=1}^N \| {\bm w}_i^{t+1} - \overline{\bm w}^{t+1} - {\bm w}_i^{\tau_{p_{t+1}}^t } + \overline{\bm w}^{\tau_{p_{t+1}}^t } \|$. It holds for all $t' \leq t$ that \beq \notag {\bm w}_i^{t+1} - {\bm w}_i^{t'} = - \frac{\gamma}{ \beta M} \sum_{\ell=t'}^t \sum_{p=1}^M \Big[ {\bm A}_p ( \prm_i^{\tau_p^\ell} - \prm^\star ) - {\bm C}_p ( {\bm w}_i^{\tau_p^\ell} - {\bm w}_i^\star ) \Big] \eqs .
\eeq We thus obtain \begin{align}\label{eq:bound_sum_delta_tt} & \frac{ \| {\bm A}_{p_{t+1}} \| }{ M } \Big( \| \bm{\Omega}^{t+1} - {\bf 1} (\overline{\bm w}^{t+1})^\top - \bm{\Omega}^{ \tau_{p_{t+1}}^t } + {\bf 1} (\overline{\bm w}^{\tau_{p_{t+1}}^t})^\top \|_F \Big) \notag \\ & \qquad \leq \frac{ \| {\bm A}_{p_{t+1}} \| }{ M } \sum_{i=1}^N \bigg\| {\bm w}_i^{t+1} - \frac{1}{N} \sum_{j=1}^N {\bm w}_j^{t+1} - {\bm w}_i^{\tau_{p_{t+1}}^t } + \frac{1}{N} \sum_{j=1}^N {\bm w}_j^{\tau_{p_{t+1}}^t } \bigg\| \notag \\ &\qquad \leq \frac{ 2 \gamma \overline{A} }{ \beta M^2} \sum_{i=1}^N \sum_{\ell = (t-M)_+}^t \sum_{p=1}^M \Big( \big \| {\bm A}_p ( \prm_i^{\tau_p^\ell} - \prm^\star ) - {\bm C}_p ( {\bm w}_i^{\tau_p^\ell} - {\bm w}_i^\star ) \big\| \Big) \notag \\ & \qquad \leq \frac{ 2\gamma \overline{A} }{ \beta M} \sum_{i=1}^N \sum_{\ell = (t-M)_+}^t \biggl [ \max_{ (\ell-M)_+ \leq q \leq \ell } \Big( \overline{A} \| \prm_i^q - \prm^\star \| + \overline{C} \| {\bm w}_i^q - {\bm w}_i^\star \| \Big) \biggr ] \eqs. 
\end{align} Thus, combining \eqref{eq:fin_c2}, \eqref{eq:bound_sum_delta_tt}, and the definition of ${\cal E}_c$ in \eqref{eq:lya_funcs}, we have \begin{align} \label{eq:bound_delta_avg} & \textstyle \frac{1}{N} \| {\textstyle \frac{1}{M}} \big( {\bm G}_{p_{t+1}}^{t+1} - {\bm G}_{p_{t+1}}^{\tau_{p_{t+1}}^t} \big) - {\bf 1}({\bm g}_{\prm}(t+1) - {\bm g}_{\prm}(t))^\top \|_F \\ & \notag \leq \frac{\rho}{M} \bigl [ {\cal E}_c ( \tau_{p_{t+1}}^t ) + {\cal E}_c(t+1) \bigr ] \\ & \qquad \qquad + \frac{ 2 \gamma \overline{A} (M+1) }{ \beta NM} \sum_{i=1}^N \max_{ (t-2M)_+ \leq q \leq t } \Big( \overline{A} \| \prm_i^q - \prm^\star \| + \overline{C} \| {\bm w}_i^q - {\bm w}_i^\star \| \Big) \notag \\ & \leq \frac{ \rho }{M} \bigl [ {\cal E}_c ( \tau_{p_{t+1}}^t ) + \lambda \!~ {\cal E}_c ( t ) + \gamma~ {\cal E}_g ( t ) \bigr ] \notag \\ & \notag \qquad + \frac{ 2 \gamma \overline{A} (M+1) }{ \beta M} \max_{ (t-2M)_+ \leq q \leq t } \Big( \overline{A}~\sqrt{N} {\cal E}_c(q) + \overline{A}~\| \bar{\prm}(q) - \prm^\star\| + \frac{ \overline{C} }{N} \sum_{i=1}^N \| {\bm w}_i^q - {\bm w}_i^\star \| \Big) \eqs. \end{align} Combining \eqref{eq:bound_Eg_first} and \eqref{eq:bound_delta_avg}, we obtain that \beq \begin{split}\label{eq:final_bound_Eg2} & {\cal E}_g(t+1) \leq \Big( \lambda + \frac{ \gamma \rho }{M} \Big) ~{\cal E}_g( t) + \bigg[ \frac{ 2 \gamma \overline{A}^2 (M+1) \sqrt{N} }{\beta M} + \frac{2 (1 + \lambda)}{M} \bigg] \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_c(q) \\ & \hspace{2.25cm} + \frac{ 2 \gamma \overline{A} (M+1) }{\beta M} \max_{ (t-2M)_+ \leq q \leq t } \bigg( \overline{A}~\| \bar{\prm}(q) - \prm^\star\| + \frac{ \overline{C} }{N} \sum_{i=1}^N \| {\bm w}_i^q - {\bm w}_i^\star \| \bigg) \eqs. 
\end{split} \eeq To bound the last term on the right-hand side of \eqref{eq:final_bound_Eg2}, we observe for all $q$ that \beq \notag \begin{split} & \bigg( \overline{A}~\| \bar{\prm}(q) - \prm^\star\| + \frac{ \overline{C} }{N} \sum_{i=1}^N \| {\bm w}_i^q - {\bm w}_i^\star \| \bigg)^2 \\ & \qquad \leq (N+1) (\overline{A})^2 \bigg[ \| \bar{\prm}(q) - \prm^\star\|^2 + \beta \Big( \frac{\overline{C}}{\overline{A}} \Big)^2 \frac{1}{ \beta N} \sum_{i=1}^N \| {\bm w}_i^q - {\bm w}_i^\star \|^2 \bigg]\\ & \qquad \leq (N+1) ~\| {\bm U} \| \max \Big\{ (\overline{A})^2, \beta (\overline{C})^2 \Big\}~ \| \underline{\bm v}(q) \|^2 \eqs, \end{split} \eeq which further implies that \beq\label{eq:eg_fin} \begin{split} & {\cal E}_g(t+1) \leq \Big( \lambda + \frac{ \gamma \rho }{M} \Big) ~{\cal E}_g( t) + \Big( \frac{ 2 \gamma \overline{A}^2 (M+1) \sqrt{N} }{\beta M} + \frac{2 (1 + \lambda)}{M} \Big) \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_c(q) \\ & \hspace{3.15cm} + \frac{ 2 \gamma \overline{A} \sqrt{N+1} (M+1) }{\beta M} ~\| {\bm U} \| \max\{ \overline{A}, \sqrt{\beta} \overline{C} \} \max_{ (t-2M)_+ \leq q \leq t } \| \widehat{\underline{\bm v}}(q) \| \eqs. 
\end{split} \eeq Finally, combining \eqref{eq:v_1}, \eqref{eq:fin_c2}, \eqref{eq:eg_fin} shows: \beq \label{eq:fin_sys} \left( \begin{array}{c} \| \widehat{\underline{\bm v}}(t+1) \| \\[.1cm] {\cal E}_c(t+1) \\[.1cm] {\cal E}_g(t+1) \end{array} \right) \leq {\bm Q} \left( \begin{array}{c} \max_{ (t-2M)_+ \leq q \leq t } \| \widehat{\underline{\bm v}}(q) \| \\[.1cm] \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_c(q) \\[.1cm] \max_{ (t-2M)_+ \leq q \leq t } {\cal E}_g(q) \end{array} \right) \eqs, \eeq where the inequality is applied element-wise, and ${\bm Q}$ is a non-negative $3 \times 3$ matrix, defined by: \beq \label{eq:defq} \left( \begin{array}{ccc} \theta (\gamma) + \gamma^2 \| {\bm U} \| \| {\bm U}^{-1} \| \overline{G} M ( G + 2 \overline{G} M ) & \gamma \sqrt{N} \| {\bm U} \| ( 1 + \gamma \overline{G} M ) ( \rho + \overline{A} \sqrt{\beta N} ) & 0 \\[.1cm] 0 & \lambda & \gamma \\[.1cm] \frac{ 2 \gamma \overline{A} \sqrt{N+1} (M+1) }{\beta M} ~\| {\bm U} \| \max\{ \overline{A}, \sqrt{\beta}~ \overline{C} \} & \sqrt{N} \frac{ 2 \gamma \overline{A}^2 (M+1) }{\beta M} + \frac{2 (1 + \lambda)}{M} & \lambda + \frac{\gamma \rho}{M} \end{array} \right)\eqs, \eeq where $ \theta( \gamma ) \eqdef \| {\bm I} - \gamma {\bm \Lambda } \| = \| {\bm I} - \gamma {\bm G} \| $. Note that the upper bounds for $ \| {\bm U} \|$ and $ \| {\bm U}^{-1} \|$ are provided in \eqref{eq:eigen_U}, and that the eigenvalues of $\bm G$ are bounded in \eqref{eq:eigen_G}. We can set the stepsize $\gamma$ sufficiently small such that $\theta( \gamma ) = \| {\bm I} - \gamma {\bm G} \| < 1$. Finally, we apply Lemmas~\ref{lem:ineq} and \ref{lem:exist} presented in Section~\ref{sec:useful} to the recursive inequality in \eqref{eq:fin_sys}, which shows that each of $\| \underline{\bm v}(t) \|$, ${\cal E}_c(t)$, ${\cal E}_g(t)$ converges linearly in $t$. Therefore, we conclude the proof of Theorem~\ref{thm:main}. 
\subsection{Two Useful Lemmas} \label{sec:useful} In this section, we present two auxiliary lemmas that are used in the proof of Theorem~\ref{thm:main}. The first lemma establishes the linear convergence of vectors satisfying recursive relations similar to \eqref{eq:fin_sys}, provided that the spectral radius of ${\bm Q}$ is less than one. The second lemma verifies this condition for ${\bm Q}$ defined in \eqref{eq:defq}. \begin{Lemma} \label{lem:ineq} Consider a sequence of non-negative vectors $\{ {\bm e}(t) \}_{t \geq 1} \subseteq \RR^n $ whose evolution is characterized by ${\bm e}(t+1) \leq {\bm Q}\!~ {\bm e}( [ (t-M+1)_+ , t ] )$ for all $t \geq 1$ and some fixed integer $M>0$, where ${\bm Q} \in \RR^{n \times n}$ is a matrix whose entries are nonnegative, and we define \beq \notag {\bm e}( {\cal S} ) \eqdef \left( \begin{array}{c} \max_{ q \in {\cal S} } ~e_1 (q) \\ \vdots \\ \max_{ q \in {\cal S} } ~e_n(q) \end{array} \right) \in \RR^n \qquad \text{for any subset}~{\cal S} \subseteq \NN\eqs . \eeq Moreover, if ${\bm Q}$ is irreducible in the sense that there exists an integer $m$ such that the entries of ${\bm Q}^m$ are all positive, and the spectral radius of ${\bm Q}$, denoted by $\rho( {\bm Q} )$, is strictly less than one, then for any $t\geq 1$, we have \beq \label{eq:q_result} {\bm e}(t) \leq \rho({\bm Q})^{\lceil \frac{t-1}{ M} \rceil} C_1 {\bm u}_1 \eqs, \eeq where ${\bm u}_1 \in \RR_{++}^n$ is the top right eigenvector of ${\bm Q}$ and $C_1$ is a constant that depends on the initialization. \end{Lemma} {\bf Proof}. We shall prove the lemma using induction. By the Perron-Frobenius theorem, the eigenvector ${\bm u}_1$ associated with $\rho({\bm Q})$ is unique and is an all-positive vector. Therefore, there exists $C_1$ such that \beq\label{eq:case00} {\bm e}(1) \leq C_1\!~ {\bm u}_1 \eqs. \eeq Let us first consider the base case with $t=2,...,M+1$, i.e., $\lceil (t-1) / { M} \rceil = 1$. 
When $t=2$, by \eqref{eq:case00} we have \beq\label{eq:case01} {\bm e}(2) \leq {\bm Q} {\bm e}(1) \leq C_1\!~ {\bm Q}\!~ {\bm u}_1 = \rho( {\bm Q})\!~ C_1\!~ {\bm u}_1 \eqs, \eeq which is valid as ${\bm Q}$, ${\bm e}(1)$, ${\bm u}_1$ are all non-negative. Furthermore, we observe that ${\bm e}(2) \leq C_1 {\bm u}_1$. Next, when $t = 3$, we have \beq \notag {\bm e}(3) \leq {\bm Q} {\bm e}([1,2]) \overset{(a)}{\leq} C_1\!~ {\bm Q}\!~ {\bm u}_1 = \rho( {\bm Q})\!~ C_1\!~ {\bm u}_1 \eqs, \eeq where (a) is due to the non-negativity of the vectors/matrix and the fact ${\bm e}(1), {\bm e}(2) \leq C_1 {\bm u}_1$ shown in \eqref{eq:case00} and \eqref{eq:case01}. Telescoping using similar steps, one can show ${\bm e}(t) \leq \rho( {\bm Q} )\!~ C_1\!~ {\bm u}_1$ for any $t=2,...,M+1$. For the induction step, let us assume that \eqref{eq:q_result} holds true for any $t$ up to $t= pM + 1$. That is, we assume that the result holds for all $t$ such that $\lceil (t-1) / { M} \rceil \leq p$. We shall show that it also holds for any $t = pM+2, ..., (p+1)M + 1$, i.e., $\lceil (t-1) / { M} \rceil = p+ 1$. Observe that \beq \label{eq:induction_case00} {\bm e}(pM+2) \leq {\bm Q} \!~ {\bm e}( [ (p-1)M + 2, pM+1 ] ) \leq C_1 \!~ \rho( {\bm Q})^p {\bm Q} {\bm u}_1 = \rho( {\bm Q})^{p+1}\!~ C_1 \!~ {\bm u}_1 \eqs, \eeq where we have used the induction hypothesis. It is clear that \eqref{eq:induction_case00} is equivalent to \eqref{eq:q_result} with $t = pM+2$. A similar upper bound can be obtained for ${\bm e}(pM+3)$ as well. Repeating the same steps, we show that \eqref{eq:q_result} is true for any $t = pM+2, ..., (p+1)M + 1$. Therefore, we conclude the proof of this lemma. \hfill \textbf{Q.E.D.}\vspace{.2cm} The following lemma shows that ${\bm Q}$ defined in \eqref{eq:defq} satisfies the conditions required in the previous lemma. Combining these two lemmas yields the final step of the proof of Theorem \ref{thm:main}. 
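The geometric envelope of Lemma~\ref{lem:ineq} can be checked numerically on a toy instance. The following sketch runs the delayed recursion with equality (the extremal case of the lemma's inequality) for an illustrative irreducible nonnegative matrix ${\bm Q}$ with $\rho({\bm Q}) < 1$; the matrix, window size $M$, and initialization are arbitrary choices, not quantities from the paper.

```python
import numpy as np

# Toy instance of the delayed recursion e(t+1) = Q * max-window(e),
# run with equality (the extremal case of the lemma's inequality).
# Q, M, and e(1) are illustrative, not values from the paper.
Q = np.array([[0.3, 0.2],
              [0.1, 0.4]])
M = 3

# Perron-Frobenius data: spectral radius and positive right eigenvector.
eigvals, eigvecs = np.linalg.eig(Q)
k = int(np.argmax(np.abs(eigvals)))
rho = float(np.abs(eigvals[k]))
u1 = np.abs(np.real(eigvecs[:, k]))   # all-positive for irreducible Q

e = {1: np.array([1.0, 1.0])}
C1 = float(np.max(e[1] / u1))         # smallest C1 with e(1) <= C1 * u1

for t in range(1, 40):
    window = [e[q] for q in range(max(t - M + 1, 1), t + 1)]
    e[t + 1] = Q @ np.max(window, axis=0)

# Check the claimed envelope e(t) <= rho^ceil((t-1)/M) * C1 * u1.
for t, et in e.items():
    bound = rho ** int(np.ceil((t - 1) / M)) * C1 * u1
    assert np.all(et <= bound + 1e-12), (t, et, bound)
print("rho(Q) =", rho, "- geometric envelope holds for all t")
```

The delay only slows the contraction by a factor of $M$ in the exponent, exactly as the $\lceil (t-1)/M \rceil$ power in \eqref{eq:q_result} predicts.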
\begin{Lemma} \label{lem:exist} Consider the matrix ${\bm Q}$ defined in \eqref{eq:defq}. It can be shown that (a) ${\bm Q}$ is an irreducible matrix in $\RR^{3 \times 3} $; (b) there exists a sufficiently small $\gamma$ such that $\rho( {\bm Q}) < 1$; and (c) as $N,M \gg 1$ and the graph is geometric, we can set $\gamma = {\cal O}( 1 / \max\{N^2,M^2\})$ and $\rho( {\bm Q}) \leq 1 - {\cal O}( 1 / \max\{N^2,M^2\})$. \end{Lemma} {\bf Proof.} Our proof is divided into three parts. The first part shows the straightforward irreducibility of ${\bm Q}$; the second part gives an upper bound on the spectral radius of ${\bm Q}$; and the last part derives an asymptotic bound on $\rho ( {\bm Q})$ when $N,M \gg 1$. \paragraph{Irreducibility of ${\bm Q}$} To see that ${\bm Q}$ is irreducible, notice that ${\bm Q}^2 $ is a positive matrix, which can be verified by direct computation. \paragraph{Spectral Radius of ${\bm Q}$} In the sequel, we compute an upper bound on the spectral radius of ${\bm Q}$, and show that if $\gamma$ is sufficiently small, then the spectral radius is strictly less than one. First we note that $\theta(\gamma) = 1 - \gamma \alpha$ for some $\alpha > 0$ and the network connectivity satisfies $\lambda<1$. Also note that $\rho>0$. For notational simplicity, let us define the following: \begin{align*} & a_1 = \| {\bm U} \| \| {\bm U}^{-1} \| \overline{G} M ( G + 2 \overline{G} M ), ~ a_2 = \|{\bm U}\| \sqrt{N} ( \rho + \overline{A} \sqrt{\beta N} ), ~ a_3 = \overline{G} M \|{\bm U}\| \sqrt{N} ( \rho + \overline{A} \sqrt{\beta N} ) \nonumber\\ & a_4 = \frac{ 2 \overline{A} \sqrt{N+1} (M+1) }{\beta M}~\| {\bm U} \| \max\{ \overline{A}, \sqrt{\beta} \overline{C} \} , \quad a_5 = \frac{ 2 \overline{A}^2 (M+1) \sqrt{N} }{\beta M}, \quad a_6 = \frac{2(1+ \lambda)}{M}. 
\end{align*} With the above shorthand definitions, the characteristic polynomial of ${\bm Q}$, denoted by $g\colon \RR\rightarrow \RR$, is given by \begin{align*} g(\sigma)& = \mbox{det}\left( \begin{array}{ccc} \sigma - (1- \gamma \alpha + \gamma^2 a_1) & -\gamma a_2 - \gamma^2 a_3 & 0 \\[.1cm] 0 & \sigma-\lambda & -\gamma \\[.1cm] - \gamma a_4 & -\gamma a_5 - a_6 & \sigma - \Big( \lambda + \frac{\gamma \rho}{M} \Big) \end{array} \right). \end{align*} By direct computation, we have \beq \begin{split} g(\sigma) & = (\sigma - (1-\gamma \alpha + \gamma^2 a_1 ) ) \!~ g_0(\sigma) - \gamma^3 ( a_2 + \gamma a_3 ) a_4 \end{split} \eeq where \beq g_0(\sigma) \eqdef (\sigma - \lambda)^2 - \frac{\gamma \rho}{M} (\sigma-\lambda) - \gamma( \gamma a_5 + a_6) \eqs. \eeq Notice that the two roots of $g_0$ can be upper bounded by \beq \lambda + \frac{\gamma \rho}{2M} \pm \biggl[ \bigg( \frac{\gamma \rho}{2M} \bigg)^2 + \gamma ( \gamma a_5 + a_6 ) \biggr ] ^{1/2} \leq \overline{\sigma} \eqdef \lambda + \frac{\gamma \rho}{M} + \sqrt{ \gamma( \gamma a_5 + a_6 ) } \eqs. \eeq In particular, for all $\sigma \geq \overline{\sigma}$, we have \beq g_0( \sigma) \geq ( \sigma - \overline{\sigma} )^2 \eqs. \eeq Now, let us define \beq \label{eq:sig_upperbd} \sigma^\star \eqdef \max \left\{ \frac{\gamma \alpha}{4} + 1 - \gamma \alpha + \gamma^2 a_1, \overline{\sigma} + \gamma \sqrt{\frac{4 (a_2 + \gamma a_3) a_4}{\alpha}} \right\} \eqs. \eeq Observe that for all $\sigma \geq \sigma^\star$, it holds that \beq \begin{split} g(\sigma) & \geq (\sigma - (1- \gamma \alpha + \gamma^2 a_1) ) ( \sigma - \overline{\sigma} )^2 - \gamma^3 ( a_2 + \gamma a_3 ) a_4 \\ & \geq \frac{\gamma \alpha}{4} \!~ \gamma^2 \frac{4 ( a_2 + \gamma a_3) a_4}{\alpha} - \gamma^3 ( a_2 + \gamma a_3 ) a_4 = 0\eqs. \end{split} \eeq Lastly, observe that $g(\sigma)$ is strictly increasing for all $\sigma \geq \sigma^\star$. Combining with the Perron-Frobenius theorem shows that $\rho( {\bm Q} ) \leq \sigma^\star$. 
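The bound $\rho({\bm Q}) \leq \sigma^\star$ can be sanity-checked numerically. The sketch below builds ${\bm Q}$ from the shorthand $a_1, \ldots, a_6$ and compares its spectral radius against $\sigma^\star$ from \eqref{eq:sig_upperbd}; all constants ($\lambda$, $\alpha$, $\rho$, $M$, $\gamma$, $a_1, \ldots, a_6$) are illustrative values, not derived from actual problem data.

```python
import numpy as np

# Illustrative constants (not derived from actual problem data).
lam, alpha, rho_reg, M, gamma = 0.5, 1.0, 0.1, 5, 0.01
a1, a2, a3, a4, a5, a6 = 2.0, 3.0, 4.0, 5.0, 6.0, 0.5

# The matrix Q in terms of the shorthand a_1,...,a_6, with
# theta(gamma) = 1 - gamma * alpha.
Q = np.array([
    [1 - gamma * alpha + gamma**2 * a1, gamma * a2 + gamma**2 * a3, 0.0],
    [0.0, lam, gamma],
    [gamma * a4, gamma * a5 + a6, lam + gamma * rho_reg / M],
])
spec_rad = float(max(abs(np.linalg.eigvals(Q))))

# Upper bound sigma_star built from sigma_bar and the two operands of the max.
sigma_bar = lam + gamma * rho_reg / M + np.sqrt(gamma * (gamma * a5 + a6))
sigma_star = max(gamma * alpha / 4 + 1 - gamma * alpha + gamma**2 * a1,
                 sigma_bar + gamma * np.sqrt(4 * (a2 + gamma * a3) * a4 / alpha))

assert spec_rad <= sigma_star < 1   # rho(Q) <= sigma* and convergence
print(f"rho(Q) = {spec_rad:.6f} <= sigma* = {sigma_star:.6f}")
```

For this choice of $\gamma$, the first operand of the max dominates, reflecting the $1 - \gamma\alpha/4$ contraction of the primal-dual error.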
Moreover, as $\lambda < 1$ and $\alpha > 0$, there exists a sufficiently small $\gamma$ such that $\sigma^\star < 1$. We conclude that $\rho ( {\bm Q} ) < 1$ for such $\gamma$. \paragraph{Asymptotic Rate when $M , N \gg 1$} We now evaluate a sufficient condition on $\gamma$ for the proposed algorithm to converge, \ie for $\sigma^\star < 1$. Let us consider \eqref{eq:sig_upperbd} and the first operand in the $\max\{ \cdot\}$. The first operand is guaranteed to be less than one if: \beq \label{eq:sig_1} \gamma \leq \frac{\alpha}{2 a_1} \Longrightarrow \frac{\gamma \alpha}{4} + 1 - \gamma \alpha + \gamma^2 a_1 \leq 1 - \frac{\gamma \alpha}{4} \eqs. \eeq Moreover, from the definition of $a_1$, we note that this requires $\gamma = {\cal O}(1/M^2)$ if $M \gg 1$. Next, we notice that for geometric graphs, we have $\lambda = 1 - c/N$ for some positive $c$. Substituting this into the second operand in \eqref{eq:sig_upperbd} gives \beq \label{eq:sig_2} 1 - \frac{c}{N} + \frac{\gamma \rho}{M} + \sqrt{ \gamma( \gamma a_5 + a_6 ) } + \gamma \sqrt{\frac{4 (a_2 + \gamma a_3) a_4}{\alpha}} < 1 \eqs. \eeq Therefore, \eqref{eq:sig_1} and \eqref{eq:sig_2} together give a sufficient condition for $\sigma^\star < 1$. To obtain an asymptotic rate when $M , N \gg 1$, observe that $a_2 = \Theta( {N})$, $a_3 = \Theta( {N} M)$, $a_4 = \Theta(\sqrt{N})$, $a_5 = \Theta(\sqrt{N})$, $a_6 = \Theta(1/M)$. Moreover, the condition \eqref{eq:sig_1} gives $\gamma = {\cal O}(1/M^2)$; therefore the left-hand side of \eqref{eq:sig_2} can be approximated by \beq 1 - \frac{c}{N} + \gamma \!~ \Theta \left( {N}^{\frac{3}{4}} \right) + \sqrt{\gamma} \!~ \Theta ( 1 / \sqrt{M} ) \eqs . \eeq Setting the above to $1 - c/(2N)$ requires one to have $\gamma = {\cal O}(1/N^{2})$. Finally, the above discussions show that setting $\gamma = {\cal O}( 1 / \max\{N^{2},M^2\})$ guarantees that $\sigma^\star < 1$. 
In particular, we have $\sigma^\star \leq \max\{ 1 - \gamma \frac{\alpha}{4}, 1 - c/(2N) \} = 1 - {\cal O}( 1 / \max\{ N^{2}, M^2 \})$. \hfill \textbf{Q.E.D.} \subsection{Derivation of Equation (\ref{eq:new_equality})} \label{sec:detailed} We now establish \eqref{eq:new_equality} in detail. Recall that $\underline{\bm h}(t) $ and $\underline{\bm v}(t) $ are defined in \eqref{eq:define_3vectors}. We verify this equation for each block of $\underline{\bm h}(t) $. To begin with, for ${\bm h}_{\prm} (t)$ defined in \eqref{eq:4terms_1}, we have \beq \notag {\bm h}_{\prm} (t) = \rho \bar{\prm}(t) + \frac{1}{N} \sum_{i=1}^N \hat{\bm A}^\top {\bm w}_i^t = \rho \big( \bar{\prm}(t) - \prm^\star + \prm^\star \big) + \frac{1}{N} \sum_{i=1}^N \hat{\bm A}^\top {\bm w}_i^t \eqs. \eeq Recall from \eqref{eq:opt_cond} that $\rho \prm^\star = - \frac{1}{N} \sum_{i=1}^N \hat{\bm A}^\top {\bm w}_i^\star$, which implies that \beq \label{eq:fin_eq_1} {\bm h}_{\prm} (t) = \rho \big( \bar{\prm}(t) - \prm^\star \big) + \sum_{i=1}^N \sqrt{\frac{\beta}{N}} \hat{\bm A}^\top \!~ \frac{1}{\sqrt{\beta N}} \big( {\bm w}_i^t - {\bm w}_i^\star \big) = [ {\bm G} \underline{\bm v}(t) ] _{1} \eqs, \eeq where $ [ {\bm G} \underline{\bm v}(t) ] _{1} $ denotes the first block of $ {\bm G} \underline{\bm v}(t) $. It remains to establish the equation for the remaining blocks. For any $i \in \{ 1, \ldots, N\}$, let us focus on the $(i+1)$-th block. By the definition of ${\bm h}_{\bm w_i}(t)$ in \eqref{eq:4terms_2}, we have \beq\notag - \sqrt{\frac{\beta}{N}} {\bm h}_{\bm w_i}(t) = - \sqrt{\frac{\beta}{N}} \big( \hat{\bm A} \bar{\prm}(t) - \hat{\bm C} {\bm w}_i^t - \hat{\bm b}_i \big) = - \sqrt{\frac{\beta}{N}} \big( \hat{\bm A} ( \bar{\prm}(t) - \prm^\star ) + \hat{\bm A} \prm^\star - \hat{\bm C} {\bm w}_i^t - \hat{\bm b}_i \big) \eqs. \eeq Again from \eqref{eq:opt_cond}, it holds that $\hat{\bm A} \prm^\star = \hat{\bm b}_i + \hat{\bm C} {\bm w}_i^\star$. 
Therefore, \beq \label{eq:fin_eq_i} - \sqrt{\frac{\beta}{N}} \big( \hat{\bm A} \bar{\prm}(t) - \hat{\bm C} {\bm w}_i^t - \hat{\bm b}_i \big) = - \sqrt{\frac{\beta}{N}} \hat{\bm A} ( \bar{\prm}(t) - \prm^\star ) + \beta \hat{\bm C} \!~ \frac{{\bm w}_i^t - {\bm w}_i^\star}{\sqrt{\beta N}} =[ {\bm G} \underline{\bm v}(t) ]_{i+1}\eqs, \eeq where $[ {\bm G} \underline{\bm v}(t) ]_{i+1}$ denotes the $(i+1)$-th block of $ {\bm G} \underline{\bm v}(t)$. Combining \eqref{eq:fin_eq_1} and \eqref{eq:fin_eq_i} gives the desired equality. \section{Additional Experiments} An interesting observation from Theorem~\ref{thm:main} is that the convergence rate of {\sf PD-DistIAG} depends on $M$ and the topology of the graph. The following experiments demonstrate the effects of these factors on the algorithm, together with that of the regularization parameter $\rho$. \begin{figure}[H] \centering \includegraphics[width=.425\linewidth]{./Fig/pcrit.eps}~ \includegraphics[width=.425\linewidth]{./Fig/ring.eps} \caption{Illustrating the graph topologies in the additional experiments. (Left) ER graph with connection probability of $1.01\log N / N$. (Right) Ring graph.} \label{fig:graph2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=.425\linewidth]{./Fig/Revise_N500d300M500_rho1e-2.eps}~ \includegraphics[width=.425\linewidth]{./Fig/Revise_N500d300M500_rho1e-1.eps} \caption{Experiment with \texttt{mountaincar} dataset. For this problem, we only have $d=300$ and $M=500$ samples, yet there are $N=500$ agents. (Left) We set $\rho = 0.01$. (Right) We set $\rho = 0.1$.} \label{fig:two} \end{figure} To demonstrate the dependence of {\sf PD-DistIAG} on the graph topology, we fix the number of agents at $N=500$ and compare the performance on the ring graph and on the ER graph with connection probability $p= 1.01 \log N / N$, as illustrated in Fig.~\ref{fig:graph2}. 
Notice that the ring graph is not a geometric graph, and its connectivity parameter, defined as $\lambda \eqdef \lambda_{\sf max} ( {\bm W} - (1/N) {\bf 1}{\bf 1}^\top )$ in the previous section, can be much closer to $1$ than that of the ER graph. Therefore, we expect the {\sf PD-DistIAG} algorithm to converge more slowly on the ring graph. This is corroborated by Fig.~\ref{fig:two}. Furthermore, from the figure, we observe that with a larger regularization parameter $\rho$, the disadvantage of using the ring graph is exacerbated. We suspect that this is due to the fact that the convergence speed is limited by the graph connectivity, as seen in \eqref{eq:defq}; in the case of the ER graph, the algorithm is able to exploit the improved condition number of the problem. Next, we consider the same set of experiments but increase the number of samples to $M=5000$. \begin{figure}[htpb] \centering \includegraphics[width=.425\linewidth]{./Fig/Revise_N500d300M5000_rho1e-2.eps}~ \includegraphics[width=.425\linewidth]{./Fig/Revise_N500d300M5000_rho1e-1.eps} \caption{Experiment with \texttt{mountaincar} dataset. For this problem, we have $d=300$ and $M=5000$ samples, yet there are $N=500$ agents. (Left) We set $\rho = 0.01$. (Right) We set $\rho = 0.1$.} \label{fig:three} \end{figure} Interestingly, in this example with a large sample size $M$, the performance under the ring graph and the ER graph is almost identical. This is consistent with Theorem~\ref{thm:main}, which shows that the algorithm converges at a rate of ${\cal O}(\sigma^t)$ with $\sigma = 1 - {\cal O}( 1 / \max\{ MN^2, M^3 \})$. Since $M \gg N$, the impact of the sample size $M$ is dominant, and the rate is thus insensitive to the graph's connectivity. \section{Introduction} Reinforcement learning combined with deep neural networks has recently achieved superhuman performance on various challenging tasks such as video games and board games \citep{mnih2015human, silver2017mastering}. 
In these tasks, an agent uses deep neural networks to learn from the environment and adaptively makes optimal decisions. Despite the success of single-agent reinforcement learning, multi-agent reinforcement learning (MARL) remains challenging, since each agent interacts with not only the environment but also other agents. In this paper, we study collaborative MARL with local rewards. In this setting, all the agents share a joint state whose transition dynamics is determined together by the local actions of individual agents. However, each agent only observes its own reward, which may differ from that of other agents. The agents aim to collectively maximize the global sum of local rewards. To collaboratively make globally optimal decisions, the agents need to exchange local information. Such a setting of MARL is ubiquitous in large-scale applications such as sensor networks \citep{rabbat2004distributed, cortes2004coverage}, swarm robotics \citep{kober2012reinforcement, corke2005networked}, and power grids \citep{callaway2011achieving, dall2013distributed}. \\ A straightforward idea is to set up a central node that collects and broadcasts the reward information, and assigns the action of each agent. This reduces the multi-agent problem into a single-agent one. However, the central node is often unscalable and may even be infeasible in large-scale applications. Moreover, such a central node is a single point of failure, which is susceptible to malicious attacks. In addition, the agents are likely to be reluctant to reveal their local reward information due to privacy concerns \citep{chaudhuri2011differentially,lin14privacy}, which makes the central node undesirable.\\ To make MARL more scalable and robust, we propose a decentralized scheme for exchanging local information, where each agent only communicates with its neighbors over a network. 
In particular, we study the policy evaluation problem, which aims to learn a global value function of a given policy. We focus on minimizing a Fenchel duality-based reformulation of the mean squared Bellman error in the model-free setting with infinite horizon, batch trajectory, and linear function approximation.\\ At the core of the proposed algorithm is a ``double averaging" update scheme, in which the algorithm performs one average over space (across agents to ensure consensus) and one over time (across observations along the trajectory). In detail, each agent locally tracks an estimate of the full gradient and incrementally updates it using two sources of information: (i) the stochastic gradient evaluated on a new pair of joint state and action along the trajectory and the corresponding local reward, and (ii) the local estimates of the full gradient tracked by its neighbors. Based on the updated estimate of the full gradient, each agent then updates its local copy of the primal parameter. By iteratively propagating the local information through the network, the agents reach global consensus and collectively attain the desired primal parameter, which gives an optimal approximation of the global value function. \textbf{Related Work}~~The study of MARL in the context of Markov game dates back to \cite{littman1994markov}. See also \cite{littman2001value, lauer2000algorithm, hu2003nash} and recent works on collaborative MARL \cite{wang2003reinforcement,arslan2017decentralized}. However, most of these works consider the tabular setting, which suffers from the curse of dimensionality. To address this issue, under the collaborative MARL framework, \cite{zhang2018fully} and \cite{lee2018primal} study actor-critic algorithms and policy evaluation with linear function approximation, respectively. 
However, their analysis is asymptotic in nature and largely relies on two-time-scale stochastic approximation using ordinary differential equations \citep{borkar2008stochastic}, which is tailored towards the continuous-time setting. Meanwhile, most works on collaborative MARL impose the simplifying assumption that the local rewards are identical across agents, making it unnecessary to exchange the local information. More recently, \cite{foerster2016learning,foerster2017stabilising, gupta2017cooperative,lowe2017multi,omidshafiei2017deep} study deep MARL that uses deep neural networks as function approximators. However, most of these works focus on empirical performance and lack theoretical guarantees. Also, they do not emphasize the efficient exchange of information across agents. In addition to MARL, another line of related work studies multi-task reinforcement learning (MTRL), in which an agent aims to solve multiple reinforcement learning problems with shared structures \citep{wilson2007multi,parisotto2015actor, macua2015distributed, macua2017diff, teh2017distral}.\\ The primal-dual formulation of reinforcement learning is studied in \cite{liu2015finite, macua2015distributed, macua2017diff, lian2016finite, dai2016learning,chen2016stochastic, wang2017primal, dai2017smoothed, dai2017boosting, du2017stochastic} among others. Except for \cite{macua2015distributed, macua2017diff} discussed above, most of these works study the single-agent setting. Among them, \cite{lian2016finite, du2017stochastic} are most related to our work. Specifically, they develop variance reduction-based algorithms \citep{johnson2013accelerating, defazio2014saga, schmidt2017minimizing} to achieve the geometric rate of convergence in the setting with batch trajectory. In comparison, our algorithm is based on the aforementioned double averaging update scheme, which updates the local estimates of the full gradient using both the estimates of neighbors and new states, actions, and rewards. 
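To make the double averaging idea concrete, the following is a minimal, primal-only sketch on a toy decentralized least-squares problem (not the saddle-point objective of this paper): each agent mixes its neighbors' gradient surrogates (average over space) and swaps one sample's stale gradient for a fresh one (average over time). The network, data, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 4                         # agents and samples per agent (toy sizes)
a = rng.uniform(0.5, 2.0, (N, M))   # sample curvatures
b = rng.uniform(-1.0, 1.0, (N, M))  # sample targets
# Global minimizer of (1/(NM)) sum_{i,p} 0.5 * a[i,p] * (theta - b[i,p])^2
theta_star = np.sum(a * b) / np.sum(a)

W = np.full((N, N), 0.3) + 0.1 * np.eye(N)  # doubly stochastic mixing matrix
gamma = 0.05                                # step size (illustrative)

theta = np.zeros(N)                  # local primal copies
g_tab = a * (theta[:, None] - b)     # last gradient seen per (agent, sample)
s = g_tab.mean(axis=1)               # local surrogates of the full gradient

for t in range(2000):
    p = t % M                        # cyclic sample selection
    g_new = a[:, p] * (theta - b[:, p])
    # Double averaging: mix over neighbors (space), then replace the
    # stale gradient of sample p with the fresh one (time).
    s = W @ s + (g_new - g_tab[:, p]) / M
    g_tab[:, p] = g_new
    theta = W @ theta - gamma * s    # consensus step + surrogate gradient step

assert np.all(np.abs(theta - theta_star) < 1e-6)
print("consensus value:", theta, "optimum:", theta_star)
```

The surrogate average $\frac{1}{N}\sum_i s_i$ always equals the average of the stored sample gradients, so at consensus with zero average gradient the iteration is at the global optimum; the paper's algorithm applies the same two averages to the primal-dual gradients of the saddle-point objective.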
In the single-agent setting, our algorithm is closely related to stochastic average gradient (SAG) \citep{schmidt2017minimizing} and stochastic incremental gradient (SAGA) \citep{defazio2014saga}, with the difference that our objective function is a finite-sum convex-concave saddle-point problem.\\ Our work is also related to prior work in the broader contexts of primal-dual and multi-agent optimization. For example, \cite{palaniappan2016stochastic} apply variance reduction techniques to convex-concave saddle-point problems to achieve the geometric rate of convergence. However, their algorithm is centralized and it is unclear whether their approach is readily applicable to the multi-agent setting. Another line of related work studies multi-agent optimization, for example, \cite{tsitsiklis1986distributed,nedic2009distributed,chen2012diffusion,shi2015extra,qu2017harnessing}. However, these works mainly focus on the general setting where the objective function is a sum of convex local cost functions. To the best of our knowledge, our work is the first to address decentralized convex-concave saddle-point problems with sampled observations that arise from MARL. \textbf{Contribution}~~Our contribution is threefold: (i) We reformulate the multi-agent policy evaluation problem using Fenchel duality and propose a decentralized primal-dual optimization algorithm with a double averaging update scheme. (ii) We establish the global geometric rate of convergence for the proposed algorithm, making it the first algorithm to achieve fast linear convergence for MARL. (iii) Our proposed algorithm and analysis are of independent interest for solving a broader class of decentralized convex-concave saddle-point problems with sampled observations. \textbf{Organization}~~In \S\ref{sec:pf} we introduce the problem formulation of MARL. In \S\ref{sec:iag} we present the proposed algorithm and lay out the convergence analysis. 
In \S\ref{sec:ne} we illustrate the empirical performance of the proposed algorithm. We defer the detailed proofs to the supplementary material. \textbf{Notation}~~Unless otherwise specified, for a vector ${\bm x}$, $\| {\bm x} \|$ denotes its Euclidean norm; for a matrix ${\bm X}$, $\| {\bm X} \|$ denotes its spectral norm, \ie the largest singular value.\vspace{-.4cm} \section{Problem Formulation}\label{sec:pf}\vspace{-.2cm} In this section, we introduce the background of MARL, which is modeled as a multi-agent Markov decision process (MDP). Under this model, we formulate the policy evaluation problem as a primal-dual convex-concave optimization problem. \textbf{Multi-agent MDP}~~Consider a group of $N$ agents. We are interested in the multi-agent MDP: \begin{align*} \big( {\cal S} , \{ {\cal A}_i \}_{i = 1}^N, {\cal P}^{\bm a}, \{ {\cal R}_i \}_{i=1}^N, \gamma \big) \eqs, \end{align*} where ${\cal S} $ is the state space and ${\cal A}_i$ is the action space for agent $i$. We write ${\bm s} \in {\cal S}$ and ${\bm a} \eqdef (a_1,...,a_N) \in {\cal A}_1 \times \cdots \times {\cal A}_N$ as the joint state and action, respectively. The function ${\cal R}_i ({\bm s}, {\bm a})$ is the local reward received by agent $i$ after taking joint action ${\bm a}$ at state ${\bm s} $, and $\gamma \in (0,1)$ is the discount factor. Both ${\bm s}$ and ${\bm a}$ are available to all agents, whereas the reward ${\cal R}_i$ is {\it private} for agent $i$. In contrast to a single-agent MDP, the agents are coupled together by the state transition matrix ${\cal P}^{\bm a} \in \RR^{ |{\cal S}| \times |{\cal S}| }$, whose $({\bm s}, {\bm s}')$-th element is the probability of transitioning from ${\bm s}$ to ${\bm s}'$ after taking a joint action ${\bm a}$. 
This scenario arises from large-scale applications such as sensor networks \citep{rabbat2004distributed, cortes2004coverage}, swarm robotics \citep{kober2012reinforcement, corke2005networked}, and power grids \citep{callaway2011achieving, dall2013distributed}, which strongly motivates the development of a multi-agent RL strategy. Moreover, under the collaborative setting, the goal is to maximize the collective return of all agents. If there exists a central controller that collects the rewards of, and assigns the actions to, each individual agent, then the problem reduces to the classical MDP with action space ${\cal A}$ and global reward function $R_c ({\bm s}, {\bm a} ) = N^{-1} \sum_{i=1}^N {\cal R}_i ({\bm s}, {\bm a})$. Without such a central controller, it is essential for the agents to collaborate with each other so as to solve the multi-agent problem based solely on local information. Furthermore, a joint policy, denoted by $\bm{\pi}$, specifies the rule of making sequential decisions for the agents. Specifically, $\bm{\pi} ( {\bm a} | {\bm s} )$ is the conditional probability of taking joint action ${\bm a} $ given the current state ${\bm s} $. We define the reward function of joint policy $\bm{\pi} $ as an average of the local rewards: \beq \label{eq:reward} \textstyle R_c^{\bm{\pi}} ( {\bm s} ) \eqdef \frac{1}{N} \sum_{i=1}^N R_i^{\bm \pi} ({\bm s} ), \qquad \text{where}~~ R_i^{{\bm \pi }} ({\bm s} ) \eqdef \EE_{ {\bm a} \sim {\bm \pi} ( \cdot | {\bm s} ) } \big[ {\cal R}_i ( {\bm s}, {\bm a} ) \big] \eqs. \eeq That is, $R_c^{\bm {\pi}}({\bm s} ) $ is the expected value of the average of the rewards when the agents follow policy ${\bm \pi}$ at state ${\bm s}$. Moreover, any fixed policy ${\bm \pi}$ induces a Markov chain over $\cal S$, whose transition matrix is denoted by ${\bm P} ^{\bm \pi}$. 
The $({\bm s}, {\bm s}')$-th element of ${\bm P} ^{\bm \pi}$ is given by \begin{align*} \textstyle [ {\bm P} ^{\bm \pi}]_{{\bm s}, {\bm s}' } = \sum_{{\bm a} \in {\cal A} } {\bm \pi }({\bm a} | {\bm s} ) \cdot [ {\cal P}^{\bm a} ]_{{\bm s}, {\bm s}' } . \end{align*} When this Markov chain is aperiodic and irreducible, it induces a stationary distribution $\mu^{\bm \pi}$ over $\cal S$. \textbf{Policy Evaluation}~~ A central problem in reinforcement learning is \emph{policy evaluation}, which refers to learning the \emph{value function} of a given policy. This problem appears as a key component in both value-based methods such as policy iteration, and policy-based methods such as actor-critic algorithms \citep{sutton1998reinforcement}. Thus, efficient estimation of the value functions in multi-agent MDPs enables us to extend the successful approaches in single-agent RL to the setting of MARL. Specifically, for any given joint policy $\bm{\pi}$, the value function of ${\bm \pi} $, denoted by $V^{\bm{\pi}} \colon \cal S\rightarrow \RR$, is defined as the expected value of the discounted cumulative reward when the multi-agent MDP is initialized with a given state and the agents follow policy ${\bm \pi}$ afterwards. For any state ${\bm s} \in \cal S$, we define \beq \label{eq:loc_reward} \textstyle V^{\bm{\pi}} ( {\bm s} ) \eqdef \EE\Big[ \sum_{p=1}^\infty \gamma^p R_c^{\bm{\pi}} ( {\bm s}_p ) \!~ | \!~ {\bm s}_1 = {\bm s} , \bm{\pi} \Big] \eqs. \eeq To simplify the notation, we define the vector ${\bm V}^{\bm{\pi}} \in \RR^{|\cal S| } $ by stacking up $V^{\bm{\pi}} ( {\bm s} )$ in \eqref{eq:loc_reward} for all ${\bm s}$.
By definition, $V^{\bm {\pi}}$ satisfies the Bellman equation \beq\label{eq:bellman} {\bm V}^{\bm{\pi}} = {\bm R}_c^{\bm{\pi}} + \gamma {\bm P}^{\bm{\pi}} {\bm V}^{\bm{\pi}} \eqs, \eeq where ${\bm R}_c^{\bm{\pi}}$ is obtained by stacking up \eqref{eq:reward} and $[ {\bm P}^{\bm{\pi}} ]_{ {\bm s}, {\bm s}' } \eqdef \EE_{ \bm{\pi} } [ {\cal P}_{{\bm s}, {\bm s}'}^{ {\bm a} } ]$ is the expected transition matrix. Moreover, it can be shown that ${\bm V}^{\bm {\pi}}$ is the unique solution of \eqref{eq:bellman}. When the number of states is large, it is impossible to store ${\bm V}^{\bm {\pi}}$. Instead, our goal is to learn an approximate version of the value function via function approximation. Specifically, we approximate ${V}^{\bm{\pi}}({\bm s})$ using the family of linear functions \begin{align*} \bigl\{ V_{\prm} ( {\bm s} ) \eqdef \bm{\phi}^\top ( {\bm s} ) \prm \colon \prm \in \RR^d \bigr\} , \end{align*} where $\prm \in \RR^d$ is the parameter and $\bm{\phi} ( {\bm s} ) \colon {\cal S} \rightarrow \RR^d$ is a known dictionary consisting of $d$ features, e.g., a feature mapping induced by a neural network. To simplify the notation, we define $\bm{\Phi} \eqdef ( ... ; \bm{\phi}^\top ( {\bm s} ); ... ) \in \RR^{| {\cal S} | \times d} $ and let ${\bm V} _{\prm} \in \RR^{|\cal S|} $ be the vector constructed by stacking up $\{ V_{\prm}({\bm s}) \}_{{\bm s} \in {\cal S} } $. With function approximation, our problem is reduced to finding a $\prm \in \RR^d$ such that ${\bm V}_{\prm} \approx {\bm V}^{\bm \pi}$.
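Since ${\bm V}^{\bm\pi}$ is the unique solution of the Bellman equation, for a small state space it can be computed exactly by solving the linear system $({\bm I}-\gamma{\bm P}^{\bm\pi}){\bm V}^{\bm\pi}={\bm R}_c^{\bm\pi}$. A minimal numerical sketch (the two-state chain and rewards below are made-up illustrations, not data from the paper):

```python
import numpy as np

# Made-up two-state Markov chain induced by some fixed joint policy:
# P[s, s'] is the transition probability and R[s] the average reward.
gamma = 0.9
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
R = np.array([1.0, 0.0])

# The Bellman equation V = R + gamma * P @ V has the unique solution
# V = (I - gamma * P)^{-1} R, since gamma < 1 makes I - gamma * P invertible.
V = np.linalg.solve(np.eye(2) - gamma * P, R)

# Sanity check: V is indeed a fixed point of the Bellman operator.
assert np.allclose(V, R + gamma * P @ V)
```

For realistic state spaces this direct solve is infeasible, which is exactly what motivates the linear function approximation introduced above.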
Specifically, we seek $\prm$ such that the mean squared projected Bellman error (MSPBE) \begin{align} \label{eq:mspbe} {\sf MSPBE}^\star ( \prm ) & \eqdef \frac{1}{2} \Big\| {\bm \Pi}_{\bm{\Phi}} \Big( {\bm V}_{\prm} - \gamma {\bm P}^{\bm{\pi}} {\bm V}_{\prm} - {\bm R}_c^{\bm{\pi}} \Big) \Big\|_{ {\bm D} }^2 +\rho \| \prm \|^2 \end{align} is minimized, where ${\bm D} = \text{diag} [ \{ \mu^{\bm \pi}({\bm s}) \}_{{\bm s} \in {\cal S} } ] \in \RR^{ |{ \cal S} | \times | {\cal S} | } $ is a diagonal matrix constructed from the stationary distribution of ${\bm \pi}$, $ {\bm \Pi}_{\bm{\Phi}} \colon \RR^{| {\cal S} | } \rightarrow \RR^{| {\cal S} | } $ is the projection onto the subspace $\{ \bm{\Phi} \prm \colon \prm \in \RR^d \}$, defined as ${\bm \Pi}_{\bm{\Phi}} = \bm{\Phi} ( \bm{\Phi}^\top {\bm D} \bm{\Phi} )^{-1} \bm{\Phi}^\top {\bm D}$, and $\rho \geq 0$ is a free parameter controlling the regularization on $\prm$. For any positive semidefinite matrix ${\bm A} $ and any vector ${\bm v}$, we define $\| {\bm v} \|_{\bm A} = \sqrt{ {\bm v} ^\top {\bm A} {\bm v} } $. By direct computation, when $\bm{\Phi}^\top {\bm D} \bm{\Phi}$ is invertible, the MSPBE defined in \eqref{eq:mspbe} can be written as \beq\label{eq:population_mspbe} {\sf MSPBE}^\star ( \prm ) = \frac{1}{2} \Big\| \bm{\Phi}^\top {\bm D} \Big( {\bm V}_{\prm} - \gamma {\bm P}^{\bm{\pi}} {\bm V}_{\prm} - {\bm R}_c^{\bm{\pi}} \Big) \Big\|_{ (\bm{\Phi}^\top {\bm D} \bm{\Phi})^{-1} }^2 + \rho \| \prm \|^2 = \frac{1}{2} \Big\| {\bm A} \prm - {\bm b} \Big\|_{ {\bm C}^{-1} }^2 + \rho \| \prm \| ^2 \eqs, \eeq where we define ${\bm A} \eqdef \EE \big[ \bm{\phi} ( {\bm s}_p ) \big( \bm{\phi} ( {\bm s}_p ) - \gamma \bm{\phi} ( {\bm s}_{p+1} ) \big)^\top \big],$ ${\bm C} \eqdef \EE \big[ \bm{\phi} ( {\bm s}_p ) \bm{\phi}^\top ( {\bm s}_p ) \big],$ and $ {\bm b} \eqdef \EE \big[ R_c^{\bm{\pi}} ( {\bm s}_p ) \bm{\phi} ( {\bm s}_p ) \big]$.
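Setting the gradient of \eqref{eq:population_mspbe} to zero shows that the minimizer solves the linear system $({\bm A}^\top{\bm C}^{-1}{\bm A}+2\rho{\bm I})\prm = {\bm A}^\top{\bm C}^{-1}{\bm b}$. A small numerical sketch with randomly generated surrogates for ${\bm A}$, ${\bm b}$, ${\bm C}$ (made-up data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Random surrogates for A, b, C; C is made symmetric positive definite.
A = rng.normal(size=(d, d))
b = rng.normal(size=d)
F = rng.normal(size=(d, d))
C = F @ F.T + np.eye(d)
rho = 0.1

def mspbe(theta):
    # 0.5 * ||A theta - b||^2_{C^{-1}} + rho * ||theta||^2
    r = A @ theta - b
    return 0.5 * r @ np.linalg.solve(C, r) + rho * theta @ theta

# Stationarity: A^T C^{-1} (A theta - b) + 2 rho theta = 0.
G = A.T @ np.linalg.solve(C, A) + 2 * rho * np.eye(d)
theta_star = np.linalg.solve(G, A.T @ np.linalg.solve(C, b))

# The closed-form solution beats random perturbations.
for _ in range(100):
    assert mspbe(theta_star) <= mspbe(theta_star + 0.1 * rng.normal(size=d))
```

With $\rho>0$ the objective is strongly convex, so the stationary point is the unique minimizer.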
Here the expectations in ${\bm A},$ ${\bm b}$, and ${\bm C}$ are all taken with respect to (\wrt) the stationary distribution $\mu^{\bm \pi}$. Furthermore, when ${\bm A}$ is full rank and ${\bm C}$ is positive definite, it can be shown that the MSPBE in \eqref{eq:population_mspbe} has a unique minimizer. To obtain a practical optimization problem, we replace the expectations above by their sample averages over $M$ samples. Specifically, for a given policy $\bm{\pi}$, a finite state-action sequence $\{ {\bm s}_p, {\bm a}_p \}_{p=1}^{M}$ is simulated from the multi-agent MDP using joint policy ${\bm \pi}$. We also observe ${\bm s}_{M+1}$, the next state of ${\bm s}_{M}$. Then we construct the sampled versions of ${\bm A}$, ${\bm C}$, ${\bm b}$, denoted respectively by $\hat{\bm A}$, $\hat{\bm C}$, $\hat{\bm b}$, as \beq\label{eq:empirical_matrices} \begin{split} & \textstyle \hat{\bm A} \eqdef \frac{1}{M} \sum_{p=1}^{M} {\bm A}_p,~ \hat{\bm C} \eqdef \frac{1}{M} \sum_{p=1}^{M} {\bm C}_p,~ \hat{\bm b} \eqdef \frac{1}{M} \sum_{p=1}^{M} {\bm b}_p,~~\text{with} \\ & {\bm A}_p \eqdef \bm{\phi} ( {\bm s}_p ) \big( \bm{\phi} ( {\bm s}_p ) - \gamma \bm{\phi} ( {\bm s}_{p+1} ) \big)^\top,~ {\bm C}_p \eqdef \bm{\phi} ( {\bm s}_p ) \bm{\phi}^\top ( {\bm s}_p ),~ {\bm b}_p \eqdef {\cal R}_c ( {\bm s}_p, {\bm a}_p ) \bm{\phi} ( {\bm s}_p ) \eqs, \end{split} \eeq where ${\cal R}_c ( {\bm s}_p, {\bm a}_p ) \eqdef N^{-1} \sum_{i=1}^N {\cal R}_i( {\bm s}_p, {\bm a}_p)$ is the average of the local rewards received by the agents when taking joint action ${\bm a}_p$ at state ${\bm s}_p$. Here we assume that $M$ is sufficiently large so that $\hat {\bm C}$ is invertible and $\hat {\bm A}$ is full rank.
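The sample averages in \eqref{eq:empirical_matrices} are straightforward to form from a feature trajectory; a minimal sketch with made-up features and rewards (random numpy arrays stand in for an actual simulator):

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 3, 500
gamma = 0.9

# Made-up feature trajectory phi(s_1), ..., phi(s_{M+1}) and average rewards.
Phi = rng.normal(size=(M + 1, d))
Rc = rng.normal(size=M)

# Sampled versions of A, C, b as in the displayed equation.
A_hat = np.mean([np.outer(Phi[p], Phi[p] - gamma * Phi[p + 1]) for p in range(M)], axis=0)
C_hat = np.mean([np.outer(Phi[p], Phi[p]) for p in range(M)], axis=0)
b_hat = np.mean([Rc[p] * Phi[p] for p in range(M)], axis=0)

# C_hat is symmetric positive semidefinite by construction, and with
# M >> d generic features it is invertible, matching the assumption above.
assert np.allclose(C_hat, C_hat.T)
assert np.all(np.linalg.eigvalsh(C_hat) > 0)
```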
Using the terms defined in \eqref{eq:empirical_matrices}, we obtain the empirical MSPBE \beq\label{eqMSPBE2} {\sf MSPBE} ( \prm ) \eqdef \frac{1}{2} \Big\| \hat{\bm A} \prm - \hat{\bm b} \Big\|_{ \hat{\bm C}^{-1} }^2 + \rho \| \prm \| ^2 \eqs, \eeq which converges to $ {\sf MSPBE}^\star ( \prm )$ as $M \rightarrow \infty$. Let $\hat \prm$ be a minimizer of the empirical MSPBE; then our estimate of ${\bm V}^{\bm \pi}$ is given by $ {\bm \Phi} \hat \prm$. Since the rewards $ \{ {\cal R}_i ( {\bm s}_p, {\bm a}_p) \}_{i=1}^N $ are private to each agent, it is impossible for any single agent to compute $ {\cal R}_c ( {\bm s}_p, {\bm a}_p ) $ and hence to minimize the empirical MSPBE \eqref{eqMSPBE2} independently. \textbf{Multi-agent, Primal-dual, Finite-sum Optimization}~~ Recall that under the multi-agent MDP, the agents are able to observe the states and the joint actions, but can only observe their own local rewards. Thus, each agent is able to compute $\hat {\bm A}$ and $\hat {\bm C}$ defined in \eqref{eq:empirical_matrices}, but is unable to obtain $\hat {\bm b}$. To resolve this issue, for any $i\in \{ 1, \ldots, N\}$ and any $p \in \{ 1, \ldots, M \}$, we define ${\bm b}_{p,i} \eqdef {\cal R}_i ( {\bm s}_p, {\bm a}_p ) \bm{\phi} ( {\bm s}_p )$ and $\hat{\bm b}_i \eqdef M^{-1} \sum_{p=1}^{M} {\bm b}_{p,i}$, which are known to agent $i$ only. By direct computation, it is easy to verify that minimizing ${\sf MSPBE}(\prm)$ in \eqref{eqMSPBE2} is equivalent to solving \beq \label{eq:prob} \min_{\prm \in \RR^d} ~\frac{1}{N} \sum_{i=1}^N {\sf MSPBE}_i( \prm ) ~~ \text{where}~~ {\sf MSPBE}_i ( \prm ) \eqdef \frac{1}{2} \Big\| \hat{\bm A} \prm - \hat{\bm b}_i \Big\|_{ \hat{\bm C}^{-1} }^2 + \rho \| \prm \|^2 \eqs. \eeq The equivalence can be seen by comparing the optimality conditions of the two optimization problems.
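This equivalence can be checked numerically: since $\hat{\bm b}=N^{-1}\sum_{i=1}^N \hat{\bm b}_i$ and the local objectives share $\hat{\bm A}$ and $\hat{\bm C}$, the gradient of the averaged local objectives coincides with the gradient of \eqref{eqMSPBE2} at every $\prm$. A sketch with made-up random stand-ins for the empirical matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 4, 6
rho = 0.05

A = rng.normal(size=(d, d))          # stands in for \hat A
F = rng.normal(size=(d, d))
C = F @ F.T + np.eye(d)              # stands in for \hat C (positive definite)
b_loc = rng.normal(size=(N, d))      # local \hat b_i, private to agent i
b = b_loc.mean(axis=0)               # \hat b is the average of the \hat b_i

theta = rng.normal(size=d)
Cinv = np.linalg.inv(C)

# Gradient of the global empirical MSPBE at theta.
g_global = A.T @ Cinv @ (A @ theta - b) + 2 * rho * theta

# Average of the gradients of the local objectives MSPBE_i at theta.
g_avg = np.mean(
    [A.T @ Cinv @ (A @ theta - bi) + 2 * rho * theta for bi in b_loc], axis=0)

# The two gradients coincide, so the two problems share their minimizers.
assert np.allclose(g_global, g_avg)
```

The two objectives differ only by a constant independent of $\prm$, which is why their optimality conditions match.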
Importantly, \eqref{eq:prob} is a \emph{multi-agent optimization problem} \citep{nedic2009distributed} whose objective is to minimize a sum of $N$ local functions coupled together by the common parameter $\prm$. Here ${\sf MSPBE}_i ( \prm ) $ is private to agent $i$ and the parameter $\prm$ is shared by all agents. Inspired by \citep{nedic2003least, liu2015finite, du2017stochastic}, we use Fenchel duality to obtain the conjugate form of ${\sf MSPBE}_i(\prm)$, \ie \beq\label{eq:fenchel} \frac{1}{2} \Big\| \hat{\bm A} \prm - \hat{\bm b}_i \Big\|_{ \hat{\bm C}^{-1} }^2 + \rho \| \prm \|^2 = \max_{ {\bm w}_i \in \RR^d } \Big( {\bm w}_i^\top \big( \hat{\bm A} \prm - \hat{\bm b}_i \big) - \frac{1}{2} {\bm w}_i^\top \hat{\bm C} {\bm w}_i \Big) + \rho \| \prm \|^2 \eqs. \eeq Observe that each of $\hat{\bm A}, \hat{\bm C}, \hat{\bm b}_i$ can be expressed as a finite sum of matrices/vectors. By \eqref{eq:fenchel}, problem \eqref{eq:prob} is equivalent to a \emph{multi-agent}, \emph{primal-dual} and \emph{finite-sum} optimization problem: \beq \label{eq:opt_pd} \min_{\prm \in \RR^d} \max_{ {\bm w}_i \in \RR^d ,i = 1,...,N } \frac{1}{NM} \sum_{i=1}^N \sum_{p=1}^M \underbrace{\big( {\bm w}_i^\top {\bm A}_p \prm - {\bm b}_{p,i}^\top {\bm w}_i - \frac{1}{2} {\bm w}_i^\top {\bm C}_p {\bm w}_i + \frac{\rho}{2} \| \prm \|^2 \big)}_{\eqdef J_{i,p} ( \prm, {\bm w}_i )} \eqs. \eeq Hereafter, the global objective function is denoted by $J(\prm, \{ {\bm w}_i \}_{i=1}^N ) \eqdef (1/NM) \sum_{i=1}^N \sum_{p=1}^M J_{i,p} (\prm, {\bm w}_i )$, which is convex \wrt the primal variable $\prm$ and concave \wrt the dual variables $\{{\bm w}_i\}_{i=1}^N$. It is worth noting that the challenges in solving \eqref{eq:opt_pd} are three-fold.
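The conjugate form \eqref{eq:fenchel} (with the common $\rho\|\prm\|^2$ term dropped from both sides) can be verified numerically: the inner maximization is a concave quadratic in ${\bm w}_i$ whose maximizer is $\hat{\bm C}^{-1}(\hat{\bm A}\prm-\hat{\bm b}_i)$, and plugging it back in recovers $\frac{1}{2}\|\hat{\bm A}\prm-\hat{\bm b}_i\|^2_{\hat{\bm C}^{-1}}$. A sketch with made-up matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5

A = rng.normal(size=(d, d))
b = rng.normal(size=d)
F = rng.normal(size=(d, d))
C = F @ F.T + np.eye(d)      # symmetric positive definite
theta = rng.normal(size=d)

# Left-hand side: (1/2) * ||A theta - b||^2_{C^{-1}}.
r = A @ theta - b
lhs = 0.5 * r @ np.linalg.solve(C, r)

# Maximizer of the concave quadratic w -> w^T r - (1/2) w^T C w.
w_star = np.linalg.solve(C, r)
rhs = w_star @ r - 0.5 * w_star @ C @ w_star

assert np.isclose(lhs, rhs)

# No other w achieves a larger value of the inner problem.
for _ in range(100):
    w = rng.normal(size=d)
    assert w @ r - 0.5 * w @ C @ w <= lhs + 1e-9
```

This is what turns the quadratic-in-$\hat{\bm C}^{-1}$ objective into a saddle-point problem that is linear in the per-sample terms ${\bm A}_p$, ${\bm b}_{p,i}$, ${\bm C}_p$.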
First, to obtain a saddle-point solution $( \{{\bm w}_i\}_{i=1}^N, \prm )$, any algorithm for \eqref{eq:opt_pd} needs to update the primal and dual variables simultaneously, which can be difficult as the objective function need not be strongly convex with respect to $\prm$. In this case, it is nontrivial to compute a solution efficiently. Second, the objective function of \eqref{eq:opt_pd} consists of a sum of $M$ functions, potentially with $M \gg 1$, so that conventional primal-dual methods \citep{chambolle2016ergodic} can no longer be applied due to the increased complexity. Lastly, since $\prm$ is shared by all the agents, when solving \eqref{eq:opt_pd}, the $N$ agents need to reach a consensus on $\prm$ without sharing the local functions; e.g., $J_{i,p}(\cdot)$ has to remain unknown to all agents except for agent $i$ due to privacy concerns. Although finite-sum convex optimization problems with shared variables are well-studied, new algorithms and theory are needed for convex-concave saddle-point problems. Next, we propose a novel decentralized first-order algorithm that tackles these difficulties and converges to a saddle-point solution of \eqref{eq:opt_pd} at a linear rate. \vspace{-.1cm} \section{Numerical Experiments}\label{sec:ne} To verify the performance of our proposed method, we conduct an experiment on the \texttt{mountaincar} dataset \citep{sutton1998reinforcement} under a setting similar to \citep{du2017stochastic} -- to collect the dataset, we ran Sarsa with $d=300$ features to obtain the policy, and then generated trajectories of states and actions of length $M$ according to this policy. For each sample $p$, we generated the local rewards ${\cal R}_i ( {\bm s}_p, {\bm a}_p )$ by assigning a random portion of the reward to each agent such that the average of the local rewards equals ${\cal R}_c ( {\bm s}_p, {\bm a}_p )$.
We compare our method to several centralized methods: {\sf PDBG}, the primal-dual gradient descent method in \eqref{eq:fgd}; {\sf GTD2} \citep{sutton2009convergent}; and {\sf SAGA} \citep{du2017stochastic}. Notably, {\sf SAGA} has linear convergence while only requiring an incremental update step of low complexity. For {\sf PD-DistIAG}, we simulate a communication network with $N=10$ agents, connected on an Erd\H{o}s--R\'enyi graph generated with connectivity $0.2$; for the step sizes, we set $\gamma_1 = 0.005 / \lambda_{\sf max}( \hat{\bm A} )$, $\gamma_2 = 5 \times 10^{-3}$. \begin{figure}[h] \centering \includegraphics[width=.32\linewidth]{./Fig/graph_small.eps}~ \includegraphics[width=.32\linewidth]{./Fig/rho1e-2_n5000d300.eps}~ \includegraphics[width=.32\linewidth]{./Fig/rho0_n5000d300.eps} \caption{Experiment with \texttt{mountaincar} dataset. For this problem, we have $d=300$, $M=5000$ samples, and there are $N=10$ agents. (Left) Graph Topology. (Middle) $\rho = 0.01$. (Right) $\rho = 0$.} \label{fig:one} \end{figure} Figure~\ref{fig:one} compares the optimality gap in terms of the MSPBE of the different algorithms against the epoch number, defined as $(t / M)$. For {\sf PD-DistIAG}, the optimality gap is measured via the average objective, \ie $(1/N) \sum_{i=1}^N {\sf MSPBE} ( \prm_i^t ) - {\sf MSPBE}( \prm^\star )$. As seen in the middle panel, when the regularization factor is high ($\rho > 0$), the convergence speed of {\sf PD-DistIAG} is comparable to that of {\sf SAGA}; meanwhile with $\rho = 0$ (right panel), {\sf PD-DistIAG} converges at a slower speed than {\sf SAGA}. Nevertheless, in both cases, the {\sf PD-DistIAG} method converges faster than all the other methods except {\sf SAGA}. Additional experiments comparing the performance under different topologies and regularization parameters are presented in the supplementary material.
\textbf{Conclusion}~~ In this paper, we have studied the policy evaluation problem in \emph{multi-agent} reinforcement learning. Utilizing Fenchel duality, we proposed a double averaging scheme to tackle the primal-dual, multi-agent, finite-sum optimization problem that arises. The proposed {\sf PD-DistIAG} method demonstrates linear convergence under reasonable assumptions.
\section{Preliminaries} We use the following notation: $\partial_j=\partial/\partial x_j$, $\theta_j = x_j\,\partial_j$. For a multiindex $\alpha\in\N_0^d$ we set $\partial^\alpha = \partial_1^{\alpha_1}..\partial_d^{\alpha_d}$, and likewise for $\theta^\alpha$. For a polynomial $P(z)=\sum_\alpha c_\alpha z^\alpha$ we consider the \it Euler operator \rm $P(\theta)=\sum_\alpha c_\alpha \theta^\alpha$ and also the operator $P(\partial)$, defined likewise. $P(\theta)$ and $P(\partial)$ are connected in the following way. For $x\in\Rn$ we set $\Exp(x)=(\exp(x_1),..,\exp(x_d))$. $\Exp$ is a diffeomorphism from $\Rn$ onto $Q:=]0,+\infty[^d$. Its inverse is $\Log(x)=(\log(x_1),..,\log(x_d))$. The map $C_\Exp:f\To f\circ\Exp$ is a linear topological isomorphism from $C^\infty(Q)$ onto $C^\infty(\Rn)$. For $f\in C^\infty(Q)$ we have $P(\partial)(f\circ\Exp)=(P(\theta)f)\circ\Exp$, that is, $P(\partial)\circ C_\Exp =C_\Exp\circ P(\theta)$. In this way solvability properties of $P(\theta)$ on $C^\infty(Q)$ can be reduced to solvability properties of $P(\partial)$ on $C^\infty(\Rn)$. This has been done in \cite{V1} and was essentially used in \cite{Vtemp}. There it was shown that every non-trivial Euler operator is surjective on $\cS'(\Rn)$, the space of temperate distributions. This result will be used in Section \ref{s2}. We will also make use of the fact that for elliptic $P(\partial)$, distributional zero solutions of $P(\theta)$ on some open $\Om\subset Q$ are real analytic on $\Om$. Throughout the paper we use standard notation of functional analysis, in particular of distribution theory and of the theory of partial differential operators. For unexplained notation we refer to \cite{DK}, \cite{H1}, \cite{MV}. \section{Examples for non-solvability}\label{s1} We assume $d\ge 2$ and on $\R^{d+1}$ we use the variables $(x,y)$, $x\in\Rn$, $y\in\R$.
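Before constructing the counterexamples, we note that the intertwining relation $P(\partial)\circ C_\Exp=C_\Exp\circ P(\theta)$ from the preliminaries can be checked directly in one variable; the following short derivation (a sketch, in the notation above) extends coordinatewise to general $d$.

```latex
% For d = 1 we have \theta = x\,d/dx, and the chain rule gives, for
% f \in C^\infty(]0,+\infty[),
\frac{d}{dx}\bigl(f(e^{x})\bigr) \;=\; f'(e^{x})\,e^{x} \;=\; (\theta f)(e^{x}),
\qquad\text{i.e.}\qquad \partial\,(f\circ\exp) \;=\; (\theta f)\circ\exp .
% Applying this identity repeatedly, with \theta^{k-1} f in place of f,
\partial^{k}(f\circ\exp) \;=\; (\theta^{k} f)\circ\exp ,
% and taking linear combinations over the monomials of P yields
P(\partial)(f\circ\Exp) \;=\; (P(\theta)f)\circ\Exp .
```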
We consider Euler differential operators on $\cD'(\R^{d+1})$ and study the solvability of equations $P(\theta)S=T$ where $$P(\theta)= Q(\theta_1,\dots,\theta_d) + \theta_y^p$$ and $$T=\sum_{n=0}^\infty f_n(x)\otimes \delta^{(n)}(y)$$ with $f_n\in \Dip$ and $\supp f_n\subset \{x\,:\,|x|\ge n\}$. We assume that $Q$ and the $f_n$ are chosen in such a way that every solution $F_n$ of $(Q(\theta)+(-n-1)^p) F_n=f_n$ does not vanish on any open subset of the `quadrant' $Q$. We might choose $Q(\theta)=\sum_{j=1}^d \theta_j^2$ and $f_n= \delta_\bn$ where $\bn=(n,\dots,n)$. If we have such $F_n$, a natural candidate would be $S=\sum_n F_n(x)\otimes \delta^{(n)}(y)$ because \begin{equation}\label{eq3} P(\theta)(F_n(x)\otimes \delta^{(n)}(y))=((Q(\theta) + (-n-1)^p)F_n(x))\otimes \delta^{(n)}(y). \end{equation} However, the series is not locally of finite order, hence does not define a distribution. Nevertheless, this exhibits the heart of the problem. We study the problem on a strip around $\Rn$. We fix a function $\chi\in\cD([-2,+2])$ with $\chi(y)=1$ for $y\in[-1,+1]$ and consider functions $\vp(x,y)\in\cD(\R^{d+1})$ of the form \begin{equation}\label{eq7}\vp(x,y)=\sum_{n=0}^m \vp_n(x)\chi(y)y^n/n!=\psi(x,y)\chi(y)\end{equation} with \begin{equation}\label{eq8}\psi(x,y)=\sum_{n=0}^m \vp_n(x)y^n/n!.\end{equation} We obtain $$P(\theta^*)\vp(x,y)=\sum_{n=0}^m (Q(\theta^*)+(-n-1)^p)\vp_n(x)\chi(y)y^n/n!+ (L\psi)(x,y)$$ where $L$ has the form $$(L\psi)(x,y)= \sum_{j=1}^p(L_j\psi)(x,y)\chi^{(j)}(y).$$ In particular $\supp (L\psi)(x,y)\subset \{(x,y)\,:\,1\le |y| \le 2\}$. Let now $S$ be a solution of $P(\theta)S=T$. We define $S_n\in\Dip$ by $$S_n(\gamma) = S(\gamma(x)\chi(y)y^n/n!)$$ for $\gamma\in \Di$. We obtain \begin{eqnarray*} (P(\theta)S)\vp &=& \sum_{n=0}^m S_n((Q(\theta^*)+(-n-1)^p)\vp_n)+S(L\psi) \\ &=& \sum_{n=0}^m ((Q(\theta)+(-n-1)^p) S_n)\vp_n+S(L\psi)\\ &=& \sum_{n=0}^m (-1)^n f_n(\vp_n).
\end{eqnarray*} We define $R_n\in\Dip$ by $$R_n = (-1)^n f_n -(Q(\theta)+(-n-1)^p) S_n$$ and obtain for $\psi$ as in (\ref{eq8}) \begin{equation}\label{eq5} \big(\sum_{n=0}^\infty (-1)^n R_n\otimes \delta^{(n)}\big)\psi = R(\psi) \end{equation} where $$R(\psi)=S(L\psi)= \sum_{j=1}^p S\big((L_j\psi)(x,y)\chi^{(j)}(y)\big).$$ Both sides of (\ref{eq5}) define distributions in $(\Di\otimes \cE([-2,+2]))'$ which coincide on the dense subspace of functions $\psi$ as in (\ref{eq8}), hence they coincide. The left hand side has support in $\Rn\times\{0\}$, while the right hand side has support in $\{(x,y)\,:\,1\le |y|\le 2\}$. So both sides are zero and we have \begin{equation}\label{eq6}(Q(\theta)+(-n-1)^p)S_n=(-1)^n f_n \end{equation} for all $n$. Returning to $\vp$ of the form (\ref{eq7}) we obtain \begin{eqnarray*} S\vp &=& \sum_{n=0}^m S_n(\vp_n)= \sum_{n=0}^m S_n(\vp^{(0,n)}(x,0)) = \sum_{n=0}^m (-1)^n (S_n\otimes \delta^{(n)})\vp. \end{eqnarray*} So on the dense linear subspace of $\chi(y)\cdot\cD(\R^{d+1})$ consisting of functions $\vp(x,y)$ of the form (\ref{eq7}) we have $$S = \sum_{n=0}^\infty (-1)^n S_n\otimes \delta^{(n)},$$ which implies that the sum must be locally finite, contradicting our assumptions on the solutions of equation (\ref{eq6}). \qed We have shown: \begin{proposition}\label{t1} $P(\theta)$ as above is not surjective on $\cD\,'(\R^{d+1})$, $d\ge 2$. \end{proposition} This leads to the first main result of this paper. \begin{theorem}\label{t2} For $d\ge 3$ there are Euler operators which are not surjective on $\Dip$. \end{theorem} We have shown non-surjectivity in the following cases. \begin{example} { \rm For even $p$ and $d\ge 3$ the operator $\sum_{j=1}^d \theta_j^p$ is not surjective in $\Dip$. This holds, in particular, for the ``Laplace--Euler'' operator $\sum_{j=1}^d \theta_j^2$.
The same holds for $\sum_{j=1}^{d-1}\theta_j^2+i\theta_d$ and $\sum_{j=1}^{d-1}\theta_j^2+\theta_d$, the Euler operators corresponding to the Schr\"odinger and to the heat equation, and also for $\sum_{j=1}^{d-1}\theta_j^2-\theta_d^2$, the analogue of the wave equation. So for $d\ge 3$ we have counterexamples for the classical elliptic, parabolic and hyperbolic polynomials. } \end{example} For $d=1$ the situation is different (see the remarks at the end of \cite{Vtemp}). \begin{theorem}\label{t3} Every non-trivial Euler operator is surjective in $\dip$. \end{theorem} \Proof By the fundamental theorem of algebra it is enough to show this for $\theta-a$, $a\in \C$. For $T\in\dip$ we want to solve $(\theta-a)S=T$. We set $\cD_0(\R)=\{\vp\in\di\,:\,\vp \text{ flat at } 0\}$. We find $S_0\in\cD_0(\R)'$ such that $x |x|^a S_0'=T_0 := T|_{\cD_0(\R)}$ and put $U_0=|x|^a S_0$. Then $\theta U_0= a |x|^a S_0 + x |x|^a S_0' = a U_0+T_0$. Therefore $(\theta -a)U_0=T_0$. We extend $U_0$ by the Hahn-Banach theorem to $U\in\Dip$. Then $\supp ((\theta-a)U-T)\subset \{0\}$. By \cite{Vtemp} we find $R\in\cS'(\R)$ such that $(\theta - a)R=(\theta-a)U-T$. Then $S=U-R$ solves the problem. \qed \sc Remark: \rm Let us remark that our counterexample does not work for $d=1$, that is, in $\cD\,'(\R^2)$, because in this case the $F_n(x)$ (see the notation at the beginning of this section) can be chosen with support in half-lines beginning at $n$; hence the sum $S=\sum_n F_n(x)\otimes \delta^{(n)}(y)$ is locally finite and defines a distribution. \section{Solvability in distributions of finite order}\label{s2} In our examples of differential equations $P(\theta) S= T$ the distribution $T$ was always of infinite order and, of course, locally of finite order. We showed that there cannot exist a solution $S$ locally of finite order.
In the proof in \cite{Vtemp} that for temperate $T$ there is always a temperate solution $S$, an essential feature was that temperate distributions are always of finite order. We use the result of \cite{Vtemp} to show that for $T$ of finite order there is always a solution $S$ of finite order. Moreover, we develop a theory for arbitrary open subsets of $\Rn$. We should remark that the essential difficulty in \cite{Vtemp} was to handle the behaviour at the singular locus of the differential operator, that is, at the union of the coordinate hyperplanes. This was overcome in \cite{Vtemp} and we can use it here to provide local solvability. From now on $P(\theta)$ is an arbitrary non-trivial Euler operator. $\Om$ and $\om$ denote open subsets of $\Rn$, and $\cD_k'(\Om)$ denotes the distributions of Sobolev order $k$ on $\Om$; see below. We follow the notation in \cite[\S 14]{MV}. We denote by $H^k$ the Sobolev space of order $k\in\N$. For open $\Om\subset\Rn$ the space $H_0^k(\Om)$ is the closure of $\Do$ in $H^k$. We set $\Hb^{-k}(\Om)=H^k_0(\Om)'$. Every $T\in \Hb^{-k}(\Om)$ can be extended to $\widetilde{T}\in (H^k)'\subset\cS'(\Rn)$. Due to \cite[Theorem 3.5]{Vtemp} and the open mapping theorem, applied to the surjective endomorphism $P(\theta)$ of $\cS'(\Rn)$, we obtain: \begin{lemma}\label{lem1} For every $k\in\N$ there is $m\in\N$ such that for every $\Om$ the following holds: for every $T\in\Hb^{-k}(\Om)$ there is $S\in\Hb^{-m}(\Rn)$ such that $P(\theta)S=T$ on $\Om$. \end{lemma} For every $k$ and $m=m_k$ chosen according to Lemma \ref{lem1} we set $$E(\Om)=\{T\in \Hb^{-m}(\Om)\,:\,P(\theta)T\in \Hb^{-k}(\Om)\}.$$ We obtain an exact sequence $$0\To N^m(\Om)\stackrel{j}{\To} E(\Om)\stackrel{P(\theta)}{\To} \Hb^{-k}(\Om)\To 0$$ where $N^m(\Om)=\{T\in\Hb^{-m}(\Om)\,:\,P(\theta)T=0\}$ and $j$ is the imbedding. This yields, due to reflexivity: \begin{equation}\label{eq4} N^m(\Om)'= E(\Om)'/P(\theta^*)H_0^k(\Om). \end{equation} We need some notation.
\begin{definition}\label{d1} For $\Om\subset\Rn$ open, $\cD_F'(\Om)$ is the space of distributions of finite order. We set $$\cD_k'(\Om)=\{T\in\Dop\,:\, T|_\om \in \Hb^{-k}(\om)\text{ for all open }\om\ssubset\Om\}.$$ \end{definition} Then we have $\cD_F'(\Om)=\bigcup_{k\in\N} \cD_k'(\Om)$. \begin{definition}\label{d2}An open set $\Om\subset\Rn$ is called $P(\theta)$-convex if for every $k\in\N$ the following holds: for every $\om_1\ssubset\Om$ there is $\om_2\ssubset\Om$ such that for every $\om\ssubset\Om$ and $\vp\in H_0^k(\om)$ with $P(\theta^*)\vp\in H_0(\om_1)$ we have $\vp\in H_0(\om_2)$. \end{definition} Let now a $P(\theta)$-convex open set $\Om\ssubset\Rn$ be given. Then we can find an exhaustion $\om_1\ssubset\om_2\ssubset\dots$ of $\Om$ such that for every $n\in\N$ the sets $\om_n\ssubset\om_{n+1}\ssubset\om_{n+2}$ are in the relation described in Definition \ref{d2}. We obtain a projective spectrum of exact sequences $$0\To N^m(\om_n)\stackrel{j}{\To} E(\om_n)\stackrel{P(\theta)}{\To} \Hb^{-k}(\om_n)\To 0.$$ From equation (\ref{eq4}) it follows that every $\mu\in N^m(\om_n)'$ which vanishes on $N^m(\om_{n+2})$ also vanishes on $N^m(\om_{n+1})$, and therefore $N^m(\om_{n+2})|_{\om_n}$ is dense in $N^m(\om_{n+1})|_{\om_n}$ in the topology of $N^m(\om_n)$. Then $P(\theta):\mathcal{E}(\Om)\to \cD_k'(\Om)$ is surjective, where $\mathcal{E}(\Om)=\{T\in\cD_m'(\Om)\,:\, P(\theta)T\in\cD'_k(\Om)\}=\proj_n E(\om_n)$ and, of course, $\cD_k'(\Om)=\proj_n \Hb^{-k}(\om_n)$. The argument is standard; for the convenience of the reader we give the proof: for every $n$ we find $S_n\in E(\om_n)$ such that $P(\theta)S_n= T|_{\om_n}$. We set $R_1=R_2=0$ and determine inductively $R_n\in N^m(\om_n)$. Let $R_n$ be determined. Then $U_n=S_{n+1}-S_n+R_n \in N^m(\om_n)$ and for $n\ge 2$ we find $R_{n+1}\in N^m(\om_{n+1})$ such that $\|U_n-R_{n+1}\|_{\om_{n-1}}=\|(S_{n+1}-R_{n+1}) -(S_n-R_n)\|_{\om_{n-1}}\le 2^{-n}$.
Clearly $S=\lim_n (S_n-R_n)$ exists everywhere on $\Om$ and $P(\theta)S=T$. We have shown the second main result of this paper: \begin{theorem}\label{t4} If $\Om\subset\Rn$ is open and $P(\theta)$-convex, then $P(\theta)$ is surjective in $\cD'_F(\Om)$. \end{theorem} To get examples of $P(\theta)$-convex sets we need some preparation. We set $Q=]0,+\infty[^d$. For $e\in\{-1,+1\}^d$ we set $Q_e=eQ$. We remark that for any $e$ we have $M_e\circ P(\theta)=P(\theta)\circ M_e$ where $M_e \vp(x)=\vp(ex)$. So the behaviour of $P(\theta)$ on $Q$ determines the behaviour on all `quadrants'. A set $M\subset Q$ is called {\em m-convex} (cf. \cite{V1}) if $x^t y^{1-t}\in M$ for all $x,y\in M$ and $0<t<1$. $M\subset Q$ is m-convex if and only if $\Log \, M$ is convex. We call it strictly m-convex if $\Log \, M$ is strictly convex. A set $M\subset \NZ$ is called {\em strictly m-convex} if $e (M\cap Q_e)\subset Q$ is strictly m-convex for all $e\in\{-1,+1\}^d$. We obtain: \begin{proposition}\label{p1} If $\Om$ has an exhaustion $\om_1\ssubset\om_2\ssubset\dots$ of open sets such that $\om_n\cap \NZ$ is strictly m-convex with $C^2$-boundary for all $n$, then $\Om$ is $P(\theta)$-convex for all $P(\theta)$. \end{proposition} \Proof On test functions $\vp$ we have $P(\theta^*) \vp=P(-1-\theta) \vp=:P^*(\theta)\vp$. Assume that $\vp\in H_0^k(\om_N)$ and $\supp P(\theta^*)\vp\subset \om_n$. Then $\tilde{\vp}:=\vp\circ \Exp$ is a function on $\Log \, (Q\cap \om_N)$ and $\supp P^*(\partial)\tilde{\vp} \subset \Log (Q\cap \om_n)$. By assumption $\Log \, (Q\cap \om_n)$ is strictly convex with $C^2$-boundary. Therefore it is, for any non-trivial $P$, an intersection of non-characteristic half-spaces. By Holmgren's theorem (see \cite[Theorem 8.6.8]{H1}), $\supp \tilde{\vp} \subset \Log \,(Q\cap \om_n)$, hence $Q\cap \supp \vp \subset \om_n$. Applying this to $M_e \vp$ for all $e$ we obtain $\supp \vp\cap\NZ\subset \om_n$, hence $\supp \vp\subset \overline{\om}_n\subset \om_{n+1}$.
\qed \sc Remark: \rm The property of the $\om_n$ that we really used was that $\Log (e\,\om_n\cap Q)$ is an intersection of non-characteristic half spaces for all $n$. Due to the concavity of $\log$ we get: \begin{lemma}\label{lem2} If $M\subset Q$ is strictly convex and contains with any $y\in M$ every $x\le y$ (that is, $x_j\le y_j$ for all $j$), then $M$ is strictly m-convex. \end{lemma} From Lemma \ref{lem2}, Proposition \ref{p1}, Theorem \ref{t4} and (for $p=\infty$) the Remark above we get the following examples. \begin{example} For every non-trivial $P$ the Euler differential operator $P(\theta)$ is surjective on $\cD'_F(\Rn)$ and on $\cD'_F(\Om)$ where $\Om$ is the open unit ball of $\ell_p^d$ ($1\le p \le +\infty$), in particular, the open Euclidean unit ball in $\Rn$. \end{example}
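The argument behind Lemma \ref{lem2} comes down to the weighted AM--GM inequality; a short sketch in the notation above:

```latex
% For x, y \in M and 0 < t < 1, the concavity of \log gives, in each coordinate,
t\log x_j + (1-t)\log y_j \;\le\; \log\bigl(t x_j + (1-t) y_j\bigr),
\qquad\text{hence}\qquad x_j^{t}\,y_j^{1-t} \;\le\; t x_j + (1-t) y_j .
% Convexity of M gives t x + (1-t) y \in M, and since M contains with every
% point all componentwise smaller points of Q, it follows that
% x^{t} y^{1-t} \in M, i.e. M is m-convex; the strict convexity assumption
% on M then yields that \Log M is strictly convex, i.e. M is strictly m-convex.
```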
\section{Introduction} \label{sec:intro} The virtual element method (VEM) was proposed in~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} as a variational reformulation of the \emph{nodal} mimetic finite difference (MFD) method~\cite{% BeiraodaVeiga-Manzini-Putti:2015,% BeiraodaVeiga-Lipnikov-Manzini:2011,% Brezzi-Buffa-Lipnikov:2009,% Manzini-Lipnikov-Moulton-Shashkov:2017% } for solving diffusion problems on unstructured polygonal meshes. A survey of the MFD method can be found in the review paper~\cite{Lipnikov-Manzini-Shashkov:2014} and the research book~\cite{BeiraodaVeiga-Lipnikov-Manzini:2014}. The VEM inherits the great flexibility of the MFD method with respect to the admissible meshes, and, although its introduction dates back only a few years, a huge amount of development has already taken place; see, for example,~\cite{% Antonietti-BeiraodaVeiga-Scacchi-Verani:2016,% Cangiani-Georgoulis-Pryer-Sutton:2016,% BeiraodaVeiga-Chernov-Mascotto-Russo:2016,% BeiraodaVeiga-Brezzi-Marini-Russo:2016a,% BeiraodaVeiga-Brezzi-Marini-Russo:2016b,% BeiraodaVeiga-Brezzi-Marini-Russo:2016c,% BeiraodaVeiga-Brezzi-Marini-Russo:2016d,% Benedetto-Berrone-Borio-Pieraccini-Scialo:2016b,% Berrone-Borio-Scialo:2016,% Benedetto-Berrone-Scialo:2016,% Berrone-Pieraccini-Scialo:2016,% Wriggers-Rust-Reddy:2016,% BeiraodaVeiga-Manzini:2015,% Natarajan-Bordas-Ooi:2015,% Berrone-Pieraccini-Scialo-Vicini:2015,% Vacca-BeiraodaVeiga:2015,% BeiraodaVeiga-Brezzi-Marini-Russo:2014,% BeiraodaVeiga-Brezzi-Marini-Russo:2014b,% BeiraodaVeiga-Manzini:2014,% Benedetto-Berrone-Pieraccini-Scialo:2014,% Brezzi-Marini:2013,% Berrone-Benedetto-Borio:2016chapter,% Berrone-Borio:2017,% Benedetto-Berrone-Borio-Pieraccini-Scialo:2016eccomas% }. We emphasize that the VEM is not the only existing way to treat partial differential equations numerically on unstructured meshes.
Other methods or families of methods that are available from the literature include the polygonal/polyhedral finite element method (PFEM)~\cite{Wachspress:1975,Sukumar-Tabarraei:2004}, the BEM-based FEM~\cite{Hofreither-Langer-Weisser:2016,Weisser:2011}, the finite volume methods~\cite{Droniou:2014,Droniou-Eymard-Gallouet-Herbin:2010}, the hybrid high-order (HHO) method~\cite{DiPietro-Ern:2014}, the discontinuous Galerkin (DG) method~\cite{DiPietro-Ern:2011,Cangiani-Georgoulis-Houston:2014}, and the hybridized discontinuous Galerkin (HDG) method~\cite{Cockburn-Gopalakrishnan-Lazarov:2009}. Many of these methods are also part of the Gradient Scheme framework recently proposed in~\cite{Droniou-Eymard-Herbin:2016,Droniou-Eymard-Herbin:2013}. Moreover, the connection between the VEM and finite elements on polygonal/polyhedral meshes is thoroughly investigated in~\cite{Manzini-Russo-Sukumar:2014,Cangiani-Manzini-Russo-Sukumar:2015,DiPietro-Droniou-Manzini:2018}, and between the VEM and the BEM-based FEM in~\cite{Cangiani-Gyrya-Manzini-Sutton:2017:GBC:chbook}. The virtual element method is a finite element method, but is dubbed \emph{virtual} because its formulation does not require the explicit knowledge of a set of shape functions and gradients of shape functions to compute the bilinear forms, e.g., mass and stiffness matrices. The global approximation space is defined over the whole domain by gluing together local elemental spaces under some regularity constraint. Each elemental space is formed by the solutions of a local Poisson problem with a polynomial right-hand side and nonhomogeneous polynomial Dirichlet (or Neumann) boundary conditions. Clearly, a subspace of polynomials up to a given degree always belongs by construction to each elemental space. The remarkable fact is that we can compute \emph{exactly} the projections of the virtual functions and their first derivatives onto such polynomials by using only the degrees of freedom.
Therefore, a straightforward strategy to approximate the bilinear forms is to substitute the shape functions and their derivatives in their arguments with their polynomial projections.
This approach yields the so-called \emph{consistency term}, to which we add a \emph{stability term} that ensures the nonsingularity of the resulting discretization.
The stability term is designed to be easily computable from the degrees of freedom.
The VEM was originally formulated in~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013} as a conforming FEM for the Poisson problem.
It was later extended to convection-reaction-diffusion problems with variable coefficients in~\cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013,BeiraodaVeiga-Brezzi-Marini-Russo:2016b}.
Meanwhile, the nonconforming formulation for diffusion problems was proposed in~\cite{AyusodeDios-Lipnikov-Manzini:2016} as the finite element reformulation of~\cite{Lipnikov-Manzini:2014} and later extended to general elliptic problems~\cite{Cangiani-Manzini-Sutton:2016}, the Stokes problem~\cite{Cangiani-Gyrya-Manzini:2016}, and the biharmonic equation~\cite{Antonietti-Manzini-Verani:2018,Zhao-Chen-Zhang:2016}.
The two major differences between the conforming and nonconforming formulations are:
\begin{description}
\item $(i)$ at the elemental level the virtual space is formed by the solutions of a Poisson problem with Neumann boundary conditions;
\item $(ii)$ at the global level we relax the interelement conformity requirement, and the definition of the global discrete space just relies on a weaker form of regularity in the spirit of~\cite{Crouzeix-Raviart:1973}.
\end{description}
Nonconforming finite element spaces were historically proposed to approximate the velocity field of the Stokes equations on triangular meshes~\cite{Crouzeix-Raviart:1973}.
The functions in these finite element spaces are piecewise polynomials of degree $k=1$~\cite{Crouzeix-Raviart:1973}, $k=2$~\cite{Fortin-Soulie:1983}, $k=3$~\cite{Crouzeix-Falk:1989}, and $k>3$~\cite{Stoyan-Baran:2006,Baran-Stoyan:2007,Ainsworth:2005}.
In such formulations, continuity is required only at a discrete set of special points located at cell interfaces, which are the roots of the one-dimensional $k^{th}$-order Legendre polynomials defined over each edge, i.e., the nodes of the Gauss-Legendre quadrature rule of order $k$.
This minimal continuity requirement ensures the optimal convergence rate; see, for instance,~\cite{Crouzeix-Raviart:1973}.
Attempts to extend nonconforming finite elements to quadrilaterals, tetrahedra and hexahedra are found in~\cite{Rannacher-Turek:1992,Matthies-Tobiska:2005,Matthies:2007}.
A major issue of the nonconforming formulations is that they may strongly depend on the parity of the underlying polynomial space, the geometric shape of the element, and its spatial dimensionality (2-D or 3-D).
For example, on triangles the nonconforming finite element space for even $k\geq 2$ in~\cite{Fortin-Soulie:1983,Stoyan-Baran:2006,Baran-Stoyan:2007} must be enriched by a one-dimensional subspace generated by a bubble function.
Also, the definition of nonconforming spaces differs substantially from 2D to 3D and requires a simple geometric shape for the element (e.g., a simplex, a quadrilateral, or a hexahedral cell).
Instead, the nonconforming virtual element space proposed in~\cite{AyusodeDios-Lipnikov-Manzini:2016} has the same construction for every $k$ regardless of the parity, the space dimension, and the elemental geometric shape.
In the convection-dominated regime, a stabilization must be included in the variational formulation to deal with high P\'{e}clet number situations.
In finite element approximations, different strategies have been designed for this purpose, such as local projections~\cite{Ganesan-Tobiska:2010}, bubble functions~\cite{Brezzi-Franca-Russo:1998,Franca-Tobiska:2002}, and the SUPG method~\cite{%
Brooks-Hughes:1982,%
Franca-Frey-Hughes:1992,%
Gelhard-Lube-Olshanskii-Starcke:2005,%
Roos-Stynes-Tobiska:2008,%
Burman-Smith:2011,%
Burman:2010%
}.
The SUPG stabilization in the conforming virtual element formulation was previously considered in~\cite{Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a}.
The main goal of this work is the development of the nonconforming formulation with SUPG stabilization suitable for solving convection-dominated transport problems with a moderate reaction term.
In such a situation the SUPG stabilization parameter becomes dependent on a local P\'{e}clet number and a local Karlovitz-like number.
In the case of a vanishing reaction, the SUPG stabilization parameter converges to its expected classical definition.
We prove the robustness of the method with respect to high P\'eclet numbers when the problem coefficients are constant and a conforming formulation is considered, whereas a weak dependence on the P\'eclet number is observed, due to the non-consistency of the VEM bilinear form~\cite{BeiraodaVeiga-Brezzi-Cangiani-Manzini-Marini-Russo:2013}, when the coefficients are variable or a nonconforming formulation is considered.
The analysis presented here is also valid for the SUPG-stabilized conforming virtual elements of~\cite{Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a}.
\medskip

The outline of the paper is as follows.
In Section~\ref{sec:problem} we introduce the mathematical model of the convection-reaction-diffusion problem.
In Section~\ref{sec:vem} we present the nonconforming VEM with the SUPG stabilization for the convection-dominated regime.
In Section~\ref{sec:estimate} we carry out the convergence analysis and derive optimal a priori error estimates.
In Section~\ref{sec:numerics} we show the performance of the method on a set of representative problems.
In Section~\ref{sec:conclusions} we offer our final remarks and conclusions.

\subsection{Notation}
The notation throughout the paper is as follows: $\scal{\cdot}{\cdot}$ and $\norm{\,\cdot\,}$ denote the $\lebl{\Omega}$ scalar product and norm, and $\scal[\omega]{\cdot}{\cdot}$ and $\norm[\omega]{\,\cdot\,}$ denote the $\lebl{\omega}$ scalar product and norm defined on the subdomain $\omega\subseteq\Omega$; $\norm[\alpha]{\,\cdot\,}$ and $\seminorm[\alpha]{\,\cdot\,}$ denote the $\sobh{\alpha}{\Omega}$ norm and semi-norm; $\norm[\alpha,\omega]{\,\cdot\,}$ and $\seminorm[\alpha,\omega]{\,\cdot\,}$ denote the $\sobh{\alpha}{\omega}$ norm and semi-norm; $\norm[\sob{q}{p}{\omega}]{\,\cdot\,}$ and $\seminorm[\sob{q}{p}{\omega}]{\,\cdot\,}$ denote the $\sob{q}{p}{\omega}$ norm and semi-norm, where $p\geq 1$ is the Lebesgue regularity index and $q$ is the order of the Sobolev space.
Moreover, $\Poly{k}{\omega}$ denotes the space of polynomial functions of degree up to the integer number $k\geq 0$ that are defined on the $d$-dimensional subset $\omega\subseteq\Omega$ with $d=1,2,3$.
If $\Th$ is a partitioning of $\Omega$ into a set of non-overlapping polytopal elements $E$, i.e., the \emph{mesh} (for the formal definition see Section~\ref{sec:mesh}), by $\Poly{k}{\Th}$ we denote the space of discontinuous functions defined on $\Omega$ whose restriction to any element $E$ is a polynomial of degree less than or equal to $k$; hence, $p\in\Poly{k}{\Th}$ iff $p_{|E}\in\Poly{k}{E}$.
Finally, $\sobh{t}{\Th}$ for any $t\geq 1$ is the \emph{broken Sobolev space} of globally $\lebl{}$-integrable functions on $\Omega$ whose restriction to any mesh element $E$ of the mesh $\Th$ belongs to $\sobh{t}{E}$; formally, we can write that
\begin{align}
  \sobh{t}{\Th} := \Big\{ v\in\lebl{\Omega}\,:\, v_{|E}\in\sobh{t}{E},\,\forall E\in\Th \Big\}.
\end{align}
To ease the notation, since these spaces contain discontinuous functions, in the following all norms and seminorms are intended to be ``broken'' over the mesh. For example:
\begin{equation*}
  \norm{\nabla v} = \left( \sum_{E\in\Th}\norm[E]{\nabla v}^2 \right)^{\frac12}.
\end{equation*}
Furthermore, we will use the symbol $\mathcal{C}$ to denote a generic constant independent of the mesh size and of the problem data $\K$, $\beta$ and $\gamma$.
In the estimates this constant may take a different value at each occurrence.

\section{The variational formulation}
\label{sec:problem}
Let $\Omega\subset\mathbb{R}^d$, $d=2,3$, be a polytopal domain with boundary $\partial\Omega$ and consider the convection-diffusion-reaction problem:
\begin{equation}
  \label{eq:problem}
  \begin{array}{rl}
    -\div\left(\K\nabla u\right) +\beta\cdot\nabla u + \gamma u= f & \mbox{\text{in $\Omega$}}, \\
    u=0 & \mbox{\text{on $\partial\Omega$}}.
  \end{array}
\end{equation}
We assume that $\K\in\left[\lebl[\infty]{\Omega}\right]^{d\times d}$ is a strongly elliptic and symmetric tensor almost everywhere (a.e.) on $\Omega$.
Hence, there exist two positive constants $\kappa_*$ and $\kappa^*$ such that $\kappa_*{\bm\xi}\cdot{\bm\xi}\leq{\bm\xi}\cdot\K(x){\bm\xi}\leq\kappa^*{\bm\xi}\cdot{\bm\xi}$ for every ${\bm\xi}\in\mathbb{R}^d$ and almost every $x\in\Omega$.
We denote $\mathcal{C}_{\kappa}=\kappa^*\slash{\kappa_*}$.
Moreover, we assume that $\beta\in\left[\lebl[\infty]{\Omega}\right]^d$ with $\div\beta=0$ and $\gamma\in\lebl[\infty]{\Omega}$ such that $\inf_{x\in\Omega}\gamma(x)=\gamma_0\geq 0$.
To ease the exposition, we present the virtual element formulation and the convergence analysis assuming homogeneous Dirichlet boundary conditions.
However, all the results presented in this paper can readily be extended to more general situations.
Consider the bilinear form $\B{}{}\colon \sobho{\Omega}{}\times\sobho{\Omega}{}\to \mathbb{R}$ defined by
\begin{equation}
  \label{eq:defB}
  \B{w}{v} := \scal{\K\nabla w}{\nabla v} + \scal{\beta\cdot\nabla w}{v} + \scal{\gamma w}{v}
  \quad \forall w,v \in \sobho{\Omega}{},
\end{equation}
and the linear functional $\F{}\colon \sobho{\Omega}{}\to \mathbb{R}$ defined by
\begin{equation*}
  \label{eq:defF}
  \F{v} := \scal{f}{v} \quad \forall v \in \sobho{\Omega}{}.
\end{equation*}
The variational formulation of \eqref{eq:problem} reads as:
\emph{Find $u\in\sobho{\Omega}{}$ such that}
\begin{equation}
  \label{eq:exvarform}
  \B{u}{v} = \F{v}\quad \forall v\in\sobho{\Omega}{}.
\end{equation}
Since $\div\beta=0$, an integration by parts shows that $\scal{\beta\cdot\nabla v}{v}=0$ for every $v\in\sobho{\Omega}{}$, so that $\B{v}{v}\geq\kappa_*\seminorm[1]{v}^2$.
Hence, the bilinear form $\B{}{}$ is coercive and bounded, and the variational problem~\eqref{eq:exvarform} has a unique solution in view of the Lax-Milgram lemma.

\section{The virtual element formulation}
\label{sec:vem}
Hereafter, we consider only the case $d=2$.
However, the nonconforming virtual element formulation is almost the same for $d=2$ and $d=3$, the main difference being in the mesh assumptions, which for $d=3$ must also include a star-shapedness condition on the faces.
Therefore, most of the results presented in the next sections can easily be generalized to the three-dimensional case with minor or no changes at all.

\subsection{General assumptions}
\label{sec:mesh}
Let $\big\{\Th\big\}_{h}$ be a sequence of meshes of $\Omega$, i.e., a sequence of non-overlapping polygonal partitions of the domain $\Omega$.
Each $\Th$ is labeled by the subscript $h$, which denotes the maximum diameter of its polygonal elements $E$.
The polygonal elements can have a different number of edges, and hanging-node-like configurations are possible, with nodes placed on an edge and forming a flat angle.
We denote the set of all the mesh edges $e$ of the polygonal cells in $\Th$ by $\mathcal{E}_h$.
We also distinguish between the subset of internal edges $\mathcal{E}_h^{int}$ and the subset of boundary edges $\mathcal{E}_h^{bnd}$; clearly, $\mathcal{E}_h=\mathcal{E}_h^{int}\cup\mathcal{E}_h^{bnd}$.

\medskip
We assume that the members of the sequence $\big\{\Th\big\}_{h}$ satisfy the following regularity assumptions:
\emph{There exists a global constant $\rho>0$ such that for each mesh $\Th$:}
\begin{enumerate}
\item[$(i)$] every polygon $E\in\Th$ is star-shaped with respect to a ball whose radius is greater than or equal to $\rho h_E$, where $h_E=\max_{\mathbf{x},\mathbf{y}\in E}\norm{\mathbf{x}-\mathbf{y}}$ is the element diameter;
\smallskip
\item[$(ii)$] $\forall E\in\Th$, each side $e$ of $E$ is such that $h_e\geq \rho h_E$, where $h_e$ is the length of $e$.
\smallskip
\end{enumerate}
\begin{remark}
Assumption $(i)$ implies that each element is simply connected.
Assumption $(ii)$ implies that the number of sides of each polygon of the mesh is uniformly bounded over the mesh sequence.
\end{remark}
The restriction of $\K$ to any element $E\in\Th$ is still a strongly elliptic tensor, and its spectrum can be locally bounded by using two constants $\Kmin[E]$ and $\K[E]$, so that for any vector-valued field ${\bm\xi}(x)$ defined on $E$ it holds that
\begin{align}
  \Kmin[E]{\bm\xi}(x)\cdot{\bm\xi}(x)
  \leq{\bm\xi}(x)\cdot\K(x){\bm\xi}(x)
  \leq\K[E]{\bm\xi}(x)\cdot{\bm\xi}(x)
  \quad\,\forall x\in E.
  \label{eq:K:local:ellipticity}
\end{align}
For the next theoretical developments, we will find it convenient to assume that the inequalities $0<\kappa_* \leq \Kmin[E] \leq \K[E] \leq \kappa^*$ hold true for every mesh element $E$.
Since $\K$ is represented by a symmetric and positive definite matrix, we consider the decomposition $\K={\left(\sqrt{\K}\right)}^\intercal \sqrt{\K}$ and write $\scal[E]{\K\nabla v}{\nabla v}=\scal[E]{\sqrt{\K}\nabla v}{\sqrt{\K}\nabla v}=\norm[E]{\sqrt{\K}\nabla v}^2$ for any sufficiently regular function $v$.
Therefore, setting ${\bm\xi}=\nabla v$ in~\eqref{eq:K:local:ellipticity} yields
\begin{align*}
  \Kmin[E]\norm[E]{\nabla v}^2\leq\norm[E]{\sqrt{\K}\nabla v}^2
  \leq\K[E]\norm[E]{\nabla v}^2.
\end{align*}
We will use this relation extensively in the analysis of the next sections.
For each element $E\in\Th$ we also set
\begin{align*}
  \beta_E &:= \sup_{x\in E}\norm[\mathbb{R}^2]{\beta(x)}\,, &
  \gamma_E &:= \norm[\infty,E]{\gamma}.
\end{align*}
Let $k\geq 0$ be an integer and $\boldsymbol{\alpha}=(\alpha_1,\alpha_2)$ a two-dimensional multi-index of order $\abs{\boldsymbol{\alpha}}=\alpha_1+\alpha_2\leq k$.
The polynomial space $\Poly{k}{E}$ is spanned by the monomials $m_{\boldsymbol{\alpha}}\in\monom{k}{E}$ defined as
\begin{equation*}
  \label{eq:defmonomials}
  m_{\boldsymbol{\alpha}}(\mathbf{x}) :=
  \frac{(\mathbf{x}-\mathbf{x}_E)^{\boldsymbol{\alpha}}}
       {h_E^{\abs{\boldsymbol{\alpha}}}}
  \quad \forall\mathbf{x}\in E,
\end{equation*}
where $\mathbf{x}_E$ is the center of the ball with respect to which $E$ is star-shaped.
Similarly, $\Poly{k}{e}$, the space of polynomials of degree up to $k$ defined on the edge $e$, is spanned by the monomials $m_{\alpha}(\xi):=(\xi-\xi_{e})^{\alpha}\slash{h_e^\alpha}\in\monom{k}{e}$ for $0\leq\alpha\leq k$, where $\xi$ is a local coordinate defined on $e$, $\xi_{e}$ is the midpoint of $e$, and $h_e$ is the length of $e$.
In the formulation of the method we will make use of the \emph{elliptic projection operator} $\proj[\nabla]{k}{}\colon\sobh{1}{\Th}\rightarrow\Poly{k}{\Th}$, whose restriction to each element $E$ is the solution of the local problem:
\begin{equation}
  \label{eq:defPinabla}
  \begin{cases}
    \scal[E]{\nabla\proj[\nabla]{k}{}v}{\nabla p} = \scal[E]{\nabla v}{\nabla p} & \quad\forall p\in\Poly{k}{E} \\[0.5em]
    \scal[\partial E]{\proj[\nabla]{k}{}v}{1} = \scal[\partial E]{v}{1} & \quad\text{if $k=1$}, \\[0.5em]
    \scal[E]{\proj[\nabla]{k}{}v}{1} = \scal[E]{v}{1} & \quad\text{if $k>1$}.
\end{cases} \end{equation} We will also consider the $\lebl{}$-\emph{projection operator} $\proj[0]{l}{}:\sobh{1}{\Th}\rightarrow\Poly{l}{\Th}$ whose restriction to each element $E$ is the $\lebl{}$-projection onto $\Poly{l}{E}$. A crucial property of these projection operators, which will be discussed in the next section (see Remark~\ref{remark:projections}), is that they are computable on the functions of the virtual element space using only their degrees of freedom. \subsection{The local nonconforming virtual element space} \label{subsec:local:VEM:space} The local nonconforming virtual element space of order $k\geq 1$ is defined as follows: \begin{align*} V_h^E := \bigg\{ v_h\in \sobh{1}{E}\colon & \Delta v_h\in\Poly{k}{E},\,\pder{v_h}{\n{e}} \in\Poly{k-1}{e} \,\forall e\subset\partial E, \\ & \scal[E]{v_h}{p}=\scal[E]{\proj[\nabla]{k}{}v_h}{p}\,\forall p\in\Poly{k}{E}/ \Poly{k-2}{E} \bigg\}, \end{align*} where $\Poly{k}{E}/ \Poly{k-2}{E}$ is the subspace of $\Poly{k}{E}$ of the polynomials that are $\lebl{}$-orthogonal to $\Poly{k-2}{E}$ (or, alternatively, the polynomials whose degree is exactly $k-1$ and $k$), and for $k=1$ we conventionally take $\Poly{-1}{E}=\{0\}$. The definition of $V_h^E$ is based on the \emph{enhancement strategy} that was introduced in \cite{Ahmad-Alsaedi-Brezzi-Marini-Russo:2013} for the conforming case and extended to the nonconforming case in \cite{Cangiani-Manzini-Sutton:2016}. From the definition above it follows immediately that $\Poly{k}{E}$ is a linear subspace of $V_h^E$. 
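To fix ideas, counting $k$ edge moments on each of the $n_E$ edges of $E$ and $\dim\Poly{k-2}{E}=k(k-1)\slash 2$ interior moments (these degrees of freedom are listed below) gives the dimension of the local space; the hexagonal element of Figure~\ref{fig:dofs} serves as an illustration:
\begin{equation*}
  \dim V_h^E = n_E\, k + \frac{k(k-1)}{2},
  \qquad\text{e.g.,}\quad
  \dim V_h^E = 6\cdot 2 + 1 = 13
  \quad\text{for a hexagon ($n_E=6$) with $k=2$.}
\end{equation*}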
\begin{figure}[!t]
  \centering
  \begin{tabular}{cccc}
    \includegraphics[width=0.2\textwidth]{dofs_hexa_0.pdf} &\quad
    \includegraphics[width=0.2\textwidth]{dofs_hexa_1.pdf} &\quad
    \includegraphics[width=0.2\textwidth]{dofs_hexa_2.pdf} &\quad
    \includegraphics[width=0.2\textwidth]{dofs_hexa_3.pdf} \\
    $\mathbf{k=1}$ & $\mathbf{k=2}$ & $\mathbf{k=3}$ & $\mathbf{k=4}$
  \end{tabular}
  \caption{Degrees of freedom of a hexagonal cell for $k=1,2,3,4$; edge moments are marked by a circle; cell moments are marked by a square.}
  \label{fig:dofs}
\end{figure}

\medskip
A function $v_h\in V_h^E$ is uniquely identified by the following set of degrees of freedom:
\begin{itemize}
\item for $k\geq 1$, the moments of $v_h$ of order up to $k-1$ on each edge $e$ of $E$:
  \begin{equation}
    \frac{1}{\abs{e}}\int_{e}v_h m_{\alpha}d\xi
    \qquad\forall m_{\alpha}\in\mathcal{M}_{k-1}(e)\,;
  \end{equation}
\item for $k>1$, the moments of $v_h$ of order up to $k-2$ inside the element $E$:
  \begin{equation}
    \frac{1}{|E|}\int_{E}v_h m_{\alpha}d\mathbf{x}
    \qquad\forall m_{\alpha}\in\mathcal{M}_{k-2}(E).
  \end{equation}
\end{itemize}
The unisolvency of these degrees of freedom is proved in~\cite{AyusodeDios-Lipnikov-Manzini:2016}.
A counting argument shows that the cardinality of this set of degrees of freedom, which is also the dimension of $V_h^E$, is equal to $n_E\,k + k(k-1)\slash 2$, where $n_E$ is the number of edges of $E$.
The degrees of freedom for a hexagonal cell are shown in Figure~\ref{fig:dofs}.
\begin{remark}\label{remark:projections}
The elliptic projection $\proj[\nabla]{k}{}v_h$ is computable from the degrees of freedom of $v_h$.
In fact, an integration by parts of the right-hand side of~\eqref{eq:defPinabla} yields:
\begin{align*}
  \scal[E]{\nabla v_h}{\nabla p} =
  - \scal[E]{v_h}{\Delta p}
  + \sum_{e\in\partial E} \scal[e]{v_h}{\mathbf{n}_e\cdot\nabla p}.
\end{align*}
The terms on the right can be expressed by using the $(k-2)$-order moments of $v_h$ inside $E$ and the $(k-1)$-order moments of $v_h$ on each edge $e\in\partial E$, and are thus computable.
A similar argument shows that $\proj[0]{k-1}{}v_h$ and $\proj[0]{k-1}{}\nabla v_h$ are also computable from the degrees of freedom of $v_h$.
\end{remark}

\subsection{Global nonconforming virtual element spaces}
\label{subsec:global:VEM:space}
For the construction of the global virtual element spaces we introduce the nonconforming functional space
\begin{align*}
  \sobnc{k}{\Th}:=\bigg\{
  v\in\sobh{1}{\Th}\,:\,
  \int_{e}\jump{v}\,q\,d\xi = 0\quad\forall q\in\Poly{k-1}{e}\;
  \forall e\in\mathcal{E}_h
  \bigg\},
\end{align*}
where $\jump{\,\cdot\,}$ denotes the \emph{jump operator} across a mesh interface, which is defined as follows.
If $e$ is an internal edge, we fix a unique unit normal vector $\n{e}$ and we set $\jump{v}:=v^{+} - v^{-}$, where $v^{\pm}$ are the traces of $v$ on $e$ from within the two elements $E^{\pm}$ sharing the edge, $E^{+}$ being the element for which $\n{e}$ points outward.
If $e$ is a boundary edge, $\n{e}$ is orthogonal to $e$ and points out of the computational domain $\Omega$, and $\jump{v}:=v^{+}$.

\medskip
Finally, the \emph{global nonconforming virtual element space of order $k$} is defined by
\begin{align}
  V_h := \Big\{ v_h\in\sobnc{k}{\Th}\,:\,{v_h}_{|E}\in V_h^E\quad\forall E\in\Th \Big\}.
\end{align}
Each function $v_h$ of $V_h$ is uniquely characterized by:
\begin{itemize}
\item for $k\geq 1$, the moments of order up to $k-1$ on each internal mesh edge $e\in\mathcal{E}_h^{int}$:
  \begin{equation}
    \frac{1}{|e|}\int_{e}v_h m_{\alpha}d\xi
    \qquad\forall m_{\alpha}\in\mathcal{M}_{k-1}(e);
  \end{equation}
\item for $k>1$, the moments of order up to $k-2$ inside each element $E\in\Th$:
  \begin{equation}
    \frac{1}{|E|}\int_{E}v_h m_{\alpha}d\mathbf{x}
    \qquad\forall m_{\alpha}\in\mathcal{M}_{k-2}(E).
  \end{equation}
\end{itemize}
The unisolvency of these degrees of freedom in $V_h$ is a direct consequence of the unisolvency of the local degrees of freedom introduced in Section~\ref{subsec:local:VEM:space} and of the definition of the nonconforming space $\sobnc{k}{\Th}$, cf.~\cite{AyusodeDios-Lipnikov-Manzini:2016}.

\subsection{SUPG-VEM formulation}
\label{sec:supg}
The discretization of the variational formulation \eqref{eq:exvarform} may lead to instabilities when the convective term $\scal{\beta\cdot\nabla w}{v}$ is dominant with respect to the diffusive term $\scal{\K\nabla w}{\nabla v}$.
Here we also consider a moderate reaction term, which we assume is not a source of instabilities.
In this section we recast the classical \emph{Streamline Upwind Petrov-Galerkin} (SUPG) approach \cite{Franca-Frey-Hughes:1992} in the framework of the nonconforming VEM, showing that the optimal order of convergence can be preserved.
To this end, we assume that $\K\in\left[\sob{1}{\infty}{\Omega}\right]^{d\times d}$.
Then, we introduce the functional space
\begin{equation}
  \label{eq:defV}
  V :=\left\{v\in\sobho{\Omega}{}\colon \Delta v\in\lebl{E}\quad\forall E\in\Th\right\},
\end{equation}
the bilinear form $\Bsupg{}{}\colon V\times \sobh[0]{1}{\Omega}\rightarrow \mathbb{R}$ given by
\begin{equation}
  \label{eq:defBsupg}
  \Bsupg{w}{v} := \a{w}{v} + \b{w}{v} + \c{w}{v} + \d{w}{v},
\end{equation}
where
\begin{align}
  \label{eq:defa}
  \a{w}{v} & :=\sum_{E\in\Th}\scal[E]{\K\nabla w}{\nabla v}+\tau_E\scal[E]{\beta\cdot\nabla w}{\beta\cdot\nabla v}, \\
  \label{eq:defb}
  \b{w}{v} & := \frac12\sum_{E\in\Th} \Big[ \scal[E]{\beta\cdot\nabla w}{v}-\scal[E]{w}{\beta\cdot \nabla v} \Big], \\[0.75em]
  \label{eq:defc}
  \c{w}{v} & := \sum_{E\in\Th}\scal[E]{\gamma w}{v + \tau_E \beta \cdot \nabla v}, \\
  \label{eq:defd}
  \d{w}{v} & := -\sum_{E\in\Th}\tau_E \scal[E]{\div(\K\nabla w)}{\beta\cdot\nabla v}.
\end{align}
Furthermore, let $\Fsupg[]{}\colon\sobh[0]{1}{\Omega}\rightarrow\mathbb{R}$ be the linear functional given by
\begin{equation}
  \label{eq:defFsupg}
  \Fsupg[]{v} = \scal{f}{v} + \sum_{E\in\Th} \tau_E\scal[E]{f}{\beta\cdot\nabla v}.
\end{equation}
The real positive factor $\tau_E$ is the \emph{local SUPG parameter} and is discussed in Section~\ref{subsec:tau_E}.
The SUPG variational formulation of problem~\eqref{eq:problem} reads as:
\emph{Find $u\in V$ such that}
\begin{equation}
  \label{eq:exvarform:supg}
  \Bsupg{u}{v} = \Fsupg[]{v}\quad \forall v\in \sobh[0]{1}{\Omega}.
\end{equation}
\begin{remark}
Under the assumptions of Section \ref{sec:problem}, the convective term $\scal{\beta\cdot\nabla w}{v}$ in~\eqref{eq:defB} is equal to the skew-symmetric term $\b{w}{v}$ in~\eqref{eq:defb}.
\end{remark}
\begin{remark}
\label{remark:defa:alt}
By introducing the matrix $\K[\beta,E]=\K+\tau_E\beta\beta^\intercal$, the bilinear form $\a{}{}$ in~\eqref{eq:defa} can be reformulated as:
\begin{equation}
  \label{eq:defa:alt}
  \a{w}{v} := \sum_{E\in\Th} \aE{w}{v} = \sum_{E\in\Th} \scal[E]{\K[\beta,E]\nabla w}{\nabla v}.
\end{equation}
Since the matrix $\K[\beta,E]$ is positive definite, we can use the decomposition $\K[\beta,E]=\sqrt{\K[\beta,E]}\sqrt{\K[\beta,E]}$ and prove that the bilinear form is continuous, i.e.,
\begin{equation}
  \aE{w}{v} \leq
  \norm[E]{\sqrt{\K[\beta,E]}\nabla w}\,
  \norm[E]{\sqrt{\K[\beta,E]}\nabla v},
  \label{eq:a:continuity}
\end{equation}
which holds for every pair of nonconforming functions $v$, $w$.
\end{remark}

\medskip
The SUPG-stabilized virtual element approximation of \eqref{eq:exvarform} reads as:
\emph{Find $u_h\in V_h$ such that}
\begin{equation}
  \label{eq:vemvarform_supg}
  \Bsupg[h]{u_h}{v_h} = \Fsupg[h]{v_h} \quad \forall v_h\in V_h,
\end{equation}
where the bilinear form $\Bsupg[h]{}{}\colon V_h\times V_h\rightarrow\mathbb{R}$ and the right-hand side $\Fsupg[h]{} \colon V_h \rightarrow \mathbb{R}$ are the virtual element approximations of $\Bsupg[]{}{}$ and $\Fsupg[]{}$, respectively.
The bilinear form $\Bsupg[h]{}{}$ is given by
\begin{equation}
  \label{eq:defBhsupg}
  \Bhsupg{w_h}{v_h} := \a[h]{w_h}{v_h} + \b[h]{w_h}{v_h} + \c[h]{w_h}{v_h} + \d[h]{w_h}{v_h}
\end{equation}
for any $w_h,v_h\in V_h$, where
\begin{align}
  \a[h]{w_h}{v_h} & := \sum_{E\in\Th}\Big(\scal[E]{\K\Pi_{k-1}^0\nabla w_h}{\Pi_{k-1}^0\nabla v_h} + \tau_E \scal[E]{\beta\cdot\Pi_{k-1}^0\nabla w_h}{\beta\cdot\Pi_{k-1}^0\nabla v_h} \nonumber\\[-0.5em]
  &\phantom{\quad:=\sum_{E\in\Th}\Big(} + \shE{\left(I-\proj{k-1}{}\right)w_h}{\left(I-\proj{k-1}{}\right)v_h}\Big), \label{eq:defah} \\[0.25em]
  \b[h]{w_h}{v_h} & := \sum_{E\in\Th} \frac12\Big( \scal[E]{\beta\cdot\proj{k-1}{}\nabla w_h} {\proj{k-1}{}v_h}-\scal[E]{\proj{k-1}{}w_h} {\beta\cdot\proj{k-1}{}\nabla v_h} \Big), \label{eq:defbh} \\[0.5em]
  \c[h]{w_h}{v_h} & := \sum_{E\in\Th} \scal[E]{\gamma\Pi^0_{k-1}w_h}{\Pi^0_{k-1}v_h + \tau_E\beta\cdot\Pi^0_{k-1}\nabla v_h}, \label{eq:defch} \\[0.5em]
  \d[h]{w_h}{v_h} & := -\sum_{E\in\Th}\tau_E \scal[E]{\div\big(\K\Pi_{k-1}^0\nabla w_h\big)}{\beta\cdot\Pi_{k-1}^0\nabla v_h}, \label{eq:defdh}
\end{align}
and the local VEM stabilization term in $\a[h]{w_h}{v_h}$ is given by
\begin{equation}
  \shE{\left(I-\proj{k-1}{}\right)w_h} {\left(I-\proj{k-1}{}\right)v_h} := (\K[E]+\tau_E\beta_E^2) \vemstabAST[E]{\left(I-\proj{k-1}{}\right)w_h} {\left(I-\proj{k-1}{}\right)v_h},
  \label{eq:local:SUPG:stabterm}
\end{equation}
where $\vemstabAST[E]{\left(I-\proj{k-1}{}\right)v_h}{\left(I-\proj{k-1}{}\right)v_h}$ is such that there exist two constants $\sigma^\ast,\,\sigma_\ast>0$, independent of $h$ and of the problem parameters, satisfying, for all $v_h\in V_h$,
\begin{equation}
  \label{eq:SASTequivalence}
  \begin{split}
    \sigma_\ast \norm[E]{\nabla \left(I-\proj{k-1}{}\right) v_h}^2
    \leq \vemstabAST[E]{\left(I-\proj{k-1}{}\right)v_h} {\left(I-\proj{k-1}{}\right)v_h}
    \leq \sigma^\ast \norm[E]{\nabla \left(I-\proj{k-1}{}\right) v_h}^2 \,.
\end{split}
\end{equation}
Moreover, the linear functional $\Fsupg[h]{v_h}$ is given by
\begin{equation}
  \label{eq:defFsupgh}
  \Fsupg[h]{v_h} = \scal{f}{\Pi_{k-1}^0 v_h} + \sum_{E\in\Th} \tau_E\scal[E]{f}{\beta\cdot\Pi^0_{k-1}\nabla v_h},
\end{equation}
for any $v_h\in V_h$.
In view of Remark~\ref{remark:defa:alt}, we can define the local bilinear form
\begin{align*}
  \ahE{w_h}{v_h} = \scal[E]{\K[\beta,E]\Pi_{k-1}^0\nabla w_h}{\Pi_{k-1}^0\nabla v_h}
\end{align*}
such that
\begin{align*}
  \a[h]{w_h}{v_h} = \sum_{E\in\Th}\left(\ahE{w_h}{v_h} +\shE{\left(I-\proj{k-1}{} \right) w_h} {\left(I-\proj{k-1}{}\right) v_h}\right) \,.
\end{align*}
Notice that, by \eqref{eq:a:continuity}, \eqref{eq:local:SUPG:stabterm} and \eqref{eq:SASTequivalence}, the stabilization term $\vemstab[E]{}{}\colon V_h\times V_h\rightarrow\mathbb{R}$ satisfies
\begin{equation}
  \label{eq:vemstab_coercivity}
  \begin{split}
    \vemstab[E]{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h}
    &\geq \sigma_\ast \left( \K[E] + \tau_E\beta_E^2 \right) \norm[E]{\nabla v_h - \nabla \proj{k-1}{} v_h}^2 \\
    &\geq \sigma_\ast \left( \K[E] + \tau_E\beta_E^2 \right) \norm[E]{\nabla v_h - \proj{k-1}{} \nabla v_h}^2 \\
    &\geq \sigma_\ast \norm[E]{\sqrt{\K[\beta,E]}\left(\nabla v_h - \proj{k-1}{} \nabla v_h\right)}^2 \,,
  \end{split}
\end{equation}
since $\norm[E]{\nabla v_h - \proj{k-1}{}\nabla v_h} \leq \norm[E]{\nabla v_h - \nabla \proj{k-1}{} v_h}$.
According to~\cite{Cangiani-Manzini-Sutton:2016}, a possible choice for $\vemstabAST[E]{}{}$ is given by
\begin{equation}
  \label{eq:defSE}
  \vemstabAST[E]{\left(I-\proj{k-1}{}\right)w_h} {\left(I-\proj{k-1}{}\right)v_h} = \sum_{i=1}^{N_E} \chi_i\left(\left(I-\proj{k-1}{}\right)w_h\right) \chi_i \left(\left(I-\proj{k-1}{}\right)v_h\right),
\end{equation}
where $N_E$ is the number of degrees of freedom of the element $E$ and $\chi_i$ is the operator that selects the $i$-th degree of freedom.
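As a concrete instance, in the lowest-order case $k=1$ the operators $\chi_i$ select the mean values over the $n_E$ edges $e_1,\dots,e_{n_E}$ of $E$, so that the choice \eqref{eq:defSE} reduces to
\begin{equation*}
  \vemstabAST[E]{\left(I-\proj{0}{}\right)w_h}{\left(I-\proj{0}{}\right)v_h}
  = \sum_{i=1}^{n_E}
  \left(\frac{1}{\abs{e_i}}\int_{e_i}\big(w_h-\proj{0}{}w_h\big)\,d\xi\right)
  \left(\frac{1}{\abs{e_i}}\int_{e_i}\big(v_h-\proj{0}{}v_h\big)\,d\xi\right),
\end{equation*}
i.e., the Euclidean scalar product of the vectors collecting the degrees of freedom of the two arguments.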
The effect of the SUPG stabilization in the VEM stabilization is reflected by the term $\tau_E\beta_E^2$ that appears in the local coefficient multiplying $\vemstab[E]{}{}$ in definition~\eqref{eq:local:SUPG:stabterm}.

\subsection{The SUPG parameter $\tau_E$}
\label{subsec:tau_E}
According to~\cite{Franca-Frey-Hughes:1992, Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a}, when there is no reaction term the stability parameter $\tau_E$ is defined by
\begin{equation}
  \tau_E =\frac{h_E}{2\beta_E}\min\left\{\Pe[E],1\right\},
  \qquad{\rm where}\qquad
  \label{eq:defPe}
  \Pe[E] := m_k^E\frac{\beta_E h_E}{2 \K[E] }
\end{equation}
is the \emph{mesh P\'{e}clet number} of $E$, with
\begin{gather}
  \notag
  \label{eq:defmk}
  m_k^E :=
  \begin{cases}
    \frac13 & \text{if $\div\left(\K\nabla v_h\right) = 0$ $\forall v_h\in V_h^E$,} \\
    2\tilde{C}_k^E & \text{otherwise,}
  \end{cases}
\end{gather}
where $\tilde{C}^E_k$ is the largest constant satisfying the following inverse inequality:
\begin{equation}
  \label{eq:estimnormlapl}
  \tilde{C}^E_k h_E^2\norm[E]{\div(\K\nabla v_h)}^2 \le \norm[E]{\K\nabla v_h}^2
  \quad \forall v_h\in V_h^E.
\end{equation}
A proof of such a local inverse inequality for the virtual element space $V_h^E$ is provided in \cite[Lemma~10]{Cangiani-Georgoulis-Pryer-Sutton:2016} for a constant $\K$ and any function with polynomial Laplacian under the current mesh regularity assumptions.
For a nonconstant $\K$, standard manipulations yield \eqref{eq:estimnormlapl} with a constant $\tilde{C}_k^E$ that may depend on the variations of $\K$ on the element.
In~\cite{Franca-Frey-Hughes:1992, Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a} no reaction term was considered.
Since such a term is present here, we need to modify the definition of $\tau_E$ by adding a constraint that guarantees the coercivity of $\Bsupg{}{}$ and $\Bsupg[h]{}{}$.
Since in the proof of Lemma \ref{lem:coercivity_Bh} we need that $\frac{1}{2}-\frac{\gamma_E\tau_E}{2}\geq C>0$, we may assume that there exists a constant $C_\tau \in (0,1)$ such that
\begin{equation}
  \label{eq:deftau}
  \tau_E := \min\left\{ \frac{\tilde{C}^E_k h^2_E}{\K[E]}, \frac{h_E}{2\beta_E}, \frac{C_\tau}{\gamma_E}\right\}.
\end{equation}
Finally, we introduce the local \emph{Karlovitz number}, i.e., the dimensionless parameter associated with each mesh element $E$,
\begin{equation*}
  \mathrm{Ka}_E := \frac{2\beta_E C_\tau}{h_E\gamma_E},
\end{equation*}
and we redefine $\tau_E$ as
\begin{equation*}
  \tau_E = \frac{h_E}{2\beta_E}\min\left\{\Pe[E],1,\mathrm{Ka}_E\right\}.
\end{equation*}
The comparison between $\Pe[E]$ and $\mathrm{Ka}_E$ determines whether the value of $\tau_E$ is dominated by the convective term, the diffusive term, or the reactive term.
The two curves in Figure \ref{fig:tauregimes} show the behaviour of $\tau_E$ for two possible choices of the problem coefficients.
The curves are parametrized by the diameter $h_E$, with values decreasing from left to right along each curve.
We see that for small values of $h_E$ (right-most part of each curve), $\tau_E$ falls in the diffusive regime, possibly passing through the convective regime, as expected.
\begin{figure}
  \centering
  \includegraphics[width=.5\textwidth]{tau_regimes.pdf}
  \caption{Different regimes of $\tau_E$ for different values of $\Pe[E]$ and $\mathrm{Ka}_E$. }
  \label{fig:tauregimes}
\end{figure}

\section{Error Analysis}
\label{sec:estimate}
In the following we assume that the problem is written in non-dimensional form, so that $\K[E]\leq 1$, $h_E\leq 1$, $\beta_E=O(1)$, and $\gamma_E \leq O(1)$.
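Before proceeding with the analysis, it is convenient to spell out explicitly the three regimes of the parameter $\tau_E$; the following reformulation simply expands the definition given in Section~\ref{subsec:tau_E}, and the numerical values in the example below are purely illustrative:
\begin{equation*}
  \tau_E = \frac{h_E}{2\beta_E}\min\left\{\Pe[E],1,\mathrm{Ka}_E\right\} =
  \begin{cases}
    \dfrac{m_k^E h_E^2}{4\K[E]} & \text{if $\Pe[E]\leq\min\{1,\mathrm{Ka}_E\}$ (diffusive regime),} \\[1em]
    \dfrac{h_E}{2\beta_E} & \text{if $1\leq\min\{\Pe[E],\mathrm{Ka}_E\}$ (convective regime),} \\[1em]
    \dfrac{C_\tau}{\gamma_E} & \text{if $\mathrm{Ka}_E\leq\min\{\Pe[E],1\}$ (reactive regime).}
  \end{cases}
\end{equation*}
For instance, taking $m_k^E=\frac13$, $h_E=10^{-1}$, $\beta_E=1$, $\K[E]=10^{-3}$, $\gamma_E=1$, and $C_\tau=\frac12$ gives $\Pe[E]\approx 16.7$ and $\mathrm{Ka}_E=10$, so that the convective regime is active and $\tau_E=h_E\slash(2\beta_E)=1\slash 20$.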
Let $h:= \max_{E\in\Th}h_E$ and define the following norms: \begin{align} \label{eq:defennormKbeta} \ennorm[\K\beta]{v} &:= \left( \a{v}{v} + \a[h]{v}{v} \right)^{\frac12} \,, \\ \label{eq:defennormKbetagamma} \ennorm[\K\beta\gamma]{v} &:= \left( \ennorm[\K\beta]{v}^2 + \norm{\sqrt{\gamma} \proj{k-1}{} v}^2 \right)^{\frac12} \,, \end{align} on the nonconforming space $\sobnc{k}{\Th}$. In \eqref{eq:defennormKbeta}, for the evaluation of $\vemstab[E]{v}{v}$, we use the VEM interpolant of the function $v$, see \cite[Theorem 11]{Cangiani-Georgoulis-Pryer-Sutton:2016}. Clearly, $\ennorm[\K\beta]{v}\leq\ennorm[\K\beta\gamma]{v}$. \subsection{Discretization errors} \label{sec:discretization_errors} The following Lemmas~\ref{lem:estim_a-ah}, \ref{lem:estim_b-bh}, \ref{lem:estim_c-ch}, and~\ref{lem:estim_d-dh} provide a continuity bound for the discrete bilinear forms~\eqref{eq:defah}-\eqref{eq:defdh} and an estimate of the approximation error with respect to the corresponding continuous forms~\eqref{eq:defa}-\eqref{eq:defd}. Throughout the section, we use the approximation results for the local polynomial projections of a function $v\in\sobh{s+1}{E}$, cf. \cite[Lemma 5.1]{BeiraodaVeiga-Brezzi-Marini-Russo:2015}, given by: \begin{align} \label{eq:estimP0k-1} \norm[E]{v-\proj{k-1}{}v} + h_E\seminorm[1,E]{v-\proj{k-1}{}v} & \leq\mathcal{C} h_E^{s+1}\seminorm[s+1,E]{v}\quad 1\leq s+1\leq k, \\[0.25em] \label{eq:estimPnablak} \norm[E]{v-\proj[\nabla]{k}{}v} + h_E\seminorm[1,E]{v-\proj[\nabla]{k}{}v} & \leq \mathcal{C}h_E^{s+1}\seminorm[s+1,E]{v} \quad 1\leq s+1\leq k+1, \end{align} which hold for every mesh element $E$ and polynomial degree $k\geq 1$ under the mesh assumptions of Section~\ref{sec:mesh}.
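A minimal one-dimensional analogue of the norms \eqref{eq:defennormKbeta}--\eqref{eq:defennormKbetagamma} may help fix ideas. The sketch below (our illustration, not the virtual element implementation) uses piecewise-linear functions, for which the elementwise gradient coincides with its projection and the stabilization term vanishes, so the two gradient contributions simply double; it also checks the elementary inequality $\ennorm[\K\beta]{v}\leq\ennorm[\K\beta\gamma]{v}$.

```python
import numpy as np

def energy_norms(nodes, vals, K, beta, gamma, tau):
    """1D piecewise-linear analogue of the two norms: the elementwise
    gradient of a P1 function equals its projection, the stabilization
    term vanishes, and both gradient terms contribute equally.
    Coefficients K, beta, gamma, tau are arrays, one value per element."""
    h = np.diff(nodes)
    dv = np.diff(vals) / h                  # constant gradient per element
    vbar = 0.5 * (vals[:-1] + vals[1:])     # elementwise mean of v (P0 v)
    grad2 = 2.0 * np.sum((K + tau * beta**2) * dv**2 * h)
    norm_Kb = np.sqrt(grad2)
    norm_Kbg = np.sqrt(grad2 + np.sum(gamma * vbar**2 * h))
    return norm_Kb, norm_Kbg

nodes = np.linspace(0.0, 1.0, 11)
K = np.full(10, 0.01)       # illustrative coefficient choices
beta = np.ones(10)
gamma = np.ones(10)
tau = np.full(10, 0.05)
nb, nbg = energy_norms(nodes, nodes, K, beta, gamma, tau)   # v(x) = x
print(nb, nbg)
```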
For every internal edge $e=\partial E^+\cap\partial E^-$ and functions $v\in\sobh{s+1}{\omega_e}$ with $\omega_e=E^+\cup E^-$ we will also consider the trace inequality \begin{align} \label{eq:estim:trace:Pnablak} \norm[e]{v-\proj[0,e]{k-1}v}+h_{e}\seminorm[1,e]{v-\proj[0,e]{k-1}v} & \leq\mathcal{C}h_{e}^{s+\frac{1}{2}}\seminorm[s+1,\omega_e]{v}\quad 1\leq s+1\leq k \,. \end{align} We use the error estimate for the virtual element interpolant of order $k$ of a function $\varphi\in\sobh{s+1}{E}$, $1 \leq s+1\leq k+1$ \cite{AyusodeDios-Lipnikov-Manzini:2016}: \begin{equation} \label{eq:estim_veminterp} \norm[E]{\varphi-\varphi_I} + h_E\seminorm[1,E]{\varphi-\varphi_I} \leq\mathcal{C} h_E^{s+1} \seminorm[s+1,E]{\varphi}. \end{equation} Furthermore, \eqref{eq:estim_veminterp} implies that \begin{equation} \label{eq:H1:apriori:proof:00} \begin{split} \ennorm[\K\beta\gamma]{\varphi-\varphi_I}^2 & = \sum_{E\in\Th} \left( \norm[E]{\sqrt{\K}\nabla(\varphi-\varphi_I)}^2 + \tau_E\norm[E]{\beta\cdot\nabla(\varphi-\varphi_I)}^2 \right. \\ & \quad + \norm[E]{\sqrt{\K} \proj{k-1}{ \nabla(\varphi-\varphi_I)}}^2 + \tau_E\norm[E]{\beta\cdot \proj{k-1}{\nabla(\varphi-\varphi_I)}}^2 \\ & \left. \quad + \shE{\left(I-\proj{k-1}{}\right) \left( \varphi-\varphi_I \right)}{\left(I-\proj{k-1}{}\right) \left( \varphi-\varphi_I \right)} \right. \\ &\left. \quad+ \norm[E]{\sqrt{\gamma} \proj{k-1}{\varphi-\varphi_I}}^2\right) \\ &\leq \mathcal{C} \sum_{E\in\Th}\max \left\{\K[E] \,, \tau_E\beta_E^2 \,, h_E^2\gamma_E \right\} h^{2s}_E \seminorm[s+1,E]{\varphi}^2, \end{split} \end{equation} for some positive constant $\mathcal{C}$ independent of $h$ and the local problem coefficients $\K$, $\beta$, $\gamma$.
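The rates in \eqref{eq:estimP0k-1} and \eqref{eq:estim_veminterp} can also be observed numerically. The following sketch (a 1D illustration with our own parameter choices, not part of the proofs) measures the elementwise $L^2$ error of the degree-$(k-1)$ polynomial projection; note that for a fixed smooth $v$ the seminorm $\seminorm[s+1,E]{v}$ over a shrinking element itself scales like $h_E^{1/2}$, so the observed decay of the local error is $h_E^{s+3/2}$ rather than $h_E^{s+1}$.

```python
import numpy as np

def l2_projection_error(f, a, h, degree, n_quad=4000):
    """L2(a, a+h) distance from f to polynomials of the given degree,
    computed by least squares on a fine midpoint grid (numerically the
    L2-orthogonal projection)."""
    x = a + h * (np.arange(n_quad) + 0.5) / n_quad
    V = np.vander(x, degree + 1)                  # monomial basis columns
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    r = f(x) - V @ coef
    return float(np.sqrt(np.sum(r**2) * h / n_quad))

# Decay of ||v - P_{k-1} v||_{0,E} for k = 2 (projection onto P1):
# halving h_E should reduce the error by about 2^(s + 3/2) with s = 1.
e1 = l2_projection_error(np.sin, 1.37, 0.2, 1)
e2 = l2_projection_error(np.sin, 1.37, 0.1, 1)
rate = float(np.log2(e1 / e2))
print(rate)  # close to 2.5
```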
\begin{assumption} We assume that the solution $u$ to \eqref{eq:exvarform:supg} belongs to $\sobh{s+1}{\Th}\cap V$, with $1 < s+1 \leq k+1$ and that $\K\in\big[\sob{s}{\infty}{\Omega}\big]^{d\times d}$, $\beta\in\left[\sob{s+1}{\infty}{\Omega}\right]^{d}$, $\gamma\in\left[\sob{s+1}{\infty}{\Omega}\right]$. \end{assumption} The following technical lemma is needed in the upcoming proofs. \begin{lemma} \label{lem:estim_ab-Pab} Let $E\in\Th$ and let $a,b \in \sob{s}{\infty}{E}$ be given. Then, \begin{equation} \label{eq:estim_ab-Pab} \norm[\sob{s}{\infty}{E}]{ab - \proj{0}{ab}} \leq \frac32 \left( \norm[\sob{s}{\infty}{E}]{a} \norm[\sob{s}{\infty}{E}]{b - \proj{0}{}b} + \norm[\sob{s}{\infty}{E}]{b} \norm[\sob{s}{\infty}{E}]{a - \proj{0}{}a} \right) \,. \end{equation} \end{lemma} \begin{proof} We consider the following decomposition, exploiting the fact that $\proj{0}{}a \proj{0}{}b = \proj{0}{a\proj{0}{}b} = \proj{0}{b\proj{0}{}a}$: \begin{equation*} \begin{split} ab - \proj{0}{ab} &= \frac12 \left( ab + ab - \proj{0}{ab} - \proj{0}{ab} \right) \\ &= \frac12 \left[ ab - \proj{0}{a}b + \proj{0}{a}b - \proj{0}{a}\proj{0}{b} + \proj{0}{b\proj{0}{a}} - \proj{0}{ab} \right. \\ & \left. \quad + ab - a \proj{0}{b} + a \proj{0}{b} - \proj{0}{a} \proj{0}{b} + \proj{0}{a\proj{0}{b}} - \proj{0}{ab} \right] \\ &= \frac12 \left[\left(a-\proj{0}{}a\right) b + \left(b-\proj{0}{}b\right) \proj{0}{}a + a \left(b-\proj{0}{}b\right) + \left(a-\proj{0}{}a\right) \proj{0}{}b \right. \\ & \left. \quad + \proj{0}{\left(\proj{0}{}a - a\right) b} + \proj{0}{\left(\proj{0}{}b - b\right)a} \right]\,. \end{split} \end{equation*} The proof is concluded by the triangle inequality, the Cauchy-Schwarz inequality and by exploiting the fact that $\norm[\sob{s}{\infty}{E}]{\proj{0}{}a} = \abs{\eval{ \left( \proj{0}{}a \right)}{E}} \leq \norm[\infty,E]{a} \leq \norm[\sob{s}{\infty}{E}]{a}$.
\end{proof} We now estimate the terms inside $\Bsupg[h]{}{}$ to analyse their continuity and their consistency with respect to polynomials of order $k$. \begin{lemma} \label{lem:estim_a-ah} For every function $w\in\sobnc{k}{\Th}$ and $v_h\in V_h\subset\sobnc{k}{\Th}$, \begin{equation} \label{eq:ah_continuity} \a[h]{w}{v_h} \leq \ennorm[\K\beta\gamma]{w}\ennorm[\K\beta\gamma]{v_h} \,. \end{equation} Moreover, if $w\in\sobnc{k}{\Th}\cap\sobh{s+1}{\Th}$, then \begin{equation} \label{eq:estim_a-ah} \abs{\a{\proj{k-1}{}w}{v_h} - \a[h]{\proj{k-1}{}w}{v_h}} \leq \mathcal{C} \max_{E\in\Th}\left\{\mathcal{C}^{nc}_{a,E} \right\} h^s \norm[s+1]{w} \ennorm[\K\beta]{v_h} \,, \end{equation} where \begin{equation} \label{eq:defCNCa} \mathcal{C}^{nc}_{a,E} = \frac{\norm[\sob{s}{\infty}{E}]{\K[\beta,E] - \proj{0}{\K[\beta,E]}}}{\sqrt{\Kmin[E]}} \,. \end{equation} \end{lemma} \begin{proof} To prove \eqref{eq:ah_continuity}, we use the Cauchy-Schwarz inequality and the definition of the norm \eqref{eq:defennormKbeta}: \begin{equation*} \a[h]{w}{v_h} \leq \left(\a[h]{w}{w}\a[h]{v_h}{v_h}\right)^{\frac12} \leq \ennorm[\K\beta]{w} \ennorm[\K\beta]{v_h}\,, \end{equation*} and the claim follows since $\ennorm[\K\beta]{\cdot}\leq\ennorm[\K\beta\gamma]{\cdot}$. To prove~\eqref{eq:estim_a-ah}, we first notice that $\vemstab[E]{\left(I-\proj{k-1}{}\right)\proj{k-1}{}w}{v_h}=0$ and $\proj{k-1}{}\nabla \proj{k-1}{}w = \nabla \proj{k-1}{}w$: \begin{align*} &\abs{\a{\proj{k-1}{} w}{v_h} - \a[h]{\proj{k-1}{} w}{v_h}} \leq \sum_{E\in\Th} \abs{ \scal[E]{\K[\beta,E]\nabla \proj{k-1}{} w}{\nabla v_h} \right. \\ & \left. \quad -\scal[E]{\K[\beta,E]\proj{k-1}{}\nabla\proj{k-1}{} w} {\proj{k-1}{}\nabla v_h}} =\sum_{E\in\Th} \abs{ \scal[E]{\K[\beta,E]\nabla \proj{k-1}{} w}{\nabla v_h-\proj{k-1}{}\nabla v_h} }\,.
\end{align*} The local terms are bounded using the $k$-consistency $\ahE{\cdot}{\cdot} = \aE{\cdot}{\cdot}$ when the coefficients are constants and one of the arguments is a polynomial: \begin{equation*} \begin{split} &\scal[E]{\K[\beta,E]\nabla \proj{k-1}{} w} {\nabla v_h-\proj{k-1}{}\nabla v_h} = \scal[E]{\left(\K[\beta,E]-\proj{0}{\K[\beta,E]}\right) \nabla \proj{k-1}{} w}{\nabla v_h-\proj{k-1}{}\nabla v_h} \\ &= \scal[E]{\left(\K[\beta,E]-\proj{0}{\K[\beta,E]}\right) \nabla \proj{k-1}{} w - \proj{k-1}{\left(\K[\beta,E]-\proj{0}{\K[\beta,E]}\right) \nabla \proj{k-1}{} w}}{\nabla v_h-\proj{k-1}{}\nabla v_h} \\ &\leq \norm[E]{\left(\K[\beta,E]-\proj{0}{\K[\beta,E]}\right) \nabla \proj{k-1}{} w - \proj{k-1}{\left(\K[\beta,E]-\proj{0}{\K[\beta,E]}\right) \nabla \proj{k-1}{} w}} \norm[E]{\nabla v_h-\proj{k-1}{}\nabla v_h} \\ & \leq \mathcal{C}h^s_E \seminorm[s,E]{\left(\K[\beta,E]-\proj{0}{\K[\beta,E]}\right) \nabla \proj{k-1}{} w} \left(\norm[E]{\nabla v_h}+\norm[E]{\proj{k-1}{}\nabla v_h}\right) \\ & \leq \mathcal{C} h^{s}_E \frac{\norm[\sob{s}{\infty}{E}]{\K[\beta,E]-\proj{0}{\K[\beta,E]}}} {\sqrt{\Kmin[E]}} \norm[s+1,E]{w} \ennorm[\K\beta,E]{v_h} \,. \end{split} \end{equation*} \end{proof} \begin{remark} We can bound $\mathcal{C}^{nc}_{a,E}$ as follows, using \eqref{eq:estim_ab-Pab} and \eqref{eq:deftau}: \begin{equation*} \begin{split} \mathcal{C}^{nc}_{a,E} &= \frac{\norm[\sob{s}{\infty}{E}]{\K[\beta,E]-\proj{0}{\K[\beta,E]}}} {\sqrt{\Kmin[E]}} \\ & \leq \frac{1}{\sqrt{\Kmin[E]}} \left(\norm[\sob{s}{\infty}{E}]{\K - \proj{0}{}\K} + \tau_E \norm[\sob{s}{\infty}{E}]{ \beta\beta^\intercal - \proj{0}{\beta\beta^\intercal}} \right) \\ & \leq \frac{\mathcal{C}}{\sqrt{\Kmin[E]}} \left(\norm[\sob{s}{\infty}{E}]{\K - \proj{0}{}\K} + \frac{h_E}{\beta_E} \norm[\sob{s}{\infty}{E}]{\beta} \norm[\sob{s}{\infty}{E}]{ \beta - \proj{0}{\beta} } \right) \,. 
\end{split} \end{equation*} \end{remark} \begin{lemma} \label{lem:estim_b-bh} For every function $w\in\sobnc{k}{\Th}$ and $v_h\in V_h\subset\sobnc{k}{\Th}$ it holds that \begin{equation} \label{eq:bh_continuity} \begin{split} \abs{\b[h]{w}{v_h}} &\leq \mathcal{C} \left[ \max_{E\in\Th} \left( \tau_E^{-\frac12} , h^{-1}_E \mathcal{K}^{nc}_{b,E}\right) \norm{w} + \max_{E\in\Th} \left( \mathcal{C}^{nc,1}_{b,E} , \mathcal{K}^{nc}_{b,E} \right) \norm{\nabla w} \right] \ennorm[\K\beta]{v_h}\,, \end{split} \end{equation} where \begin{align} \label{eq:defCNCb} \mathcal{C}^{nc,r}_{b,E} &= \frac{h_E \norm[\sob{r}{\infty}{E}]{\beta-\proj{0}{}\beta}} {\sqrt{\Kmin[E]}} \,,\ r\geq 1 \,, \\ \label{eq:defKNCb} \mathcal{K}^{nc}_{b,E} &= \frac{h_E \norm[\sob{1}{\infty}{\partial E}]{\beta\cdot\n{}}} {\sqrt{\Kmin[E]}} \,. \end{align} Moreover, for every function $w\in\sobnc{k}{\Th}\cap\sobh{s+1}{\Th}$, it holds that \begin{equation} \label{eq:estim_b-bh} \abs{\b{\proj{k-1}{}w}{v_h} - \b[h]{\proj{k-1}{}w}{v_h}} \leq\mathcal{C} \max_{E\in\Th} \mathcal{C}^{nc,s+1}_{b,E} \, h^{s}\norm[s+1]{w} \ennorm[\K\beta]{v_h} \,. \end{equation} \end{lemma} \begin{proof} To obtain \eqref{eq:bh_continuity}, we first introduce the following decomposition: \begin{equation*} \b[h]{w}{v_h} = \sum_{E\in\Th} \TERM{T}{E,1} + \TERM{T}{E,2} + \TERM{T}{E,3} + \TERM{T}{E,4}\,, \end{equation*} where, $\forall E\in\Th$, \begin{align} \label{eq:bh_continuity-TERM1} \TERM{T}{E,1} &= \scal[E]{\beta\cdot\left(\proj{k-1}{}\nabla w - \nabla w\right)}{\proj{k-1}{} v_h} \,, \\ \label{eq:bh_continuity-TERM2} \TERM{T}{E,2} &= \scal[E]{\beta\cdot\nabla w} {\proj{k-1}{} v_h - v_h} \,, \\ \label{eq:bh_continuity-TERM3} \TERM{T}{E,3} &= \scal[E]{\beta\cdot\nabla w}{v_h} \,, \\ \label{eq:bh_continuity-TERM4} \TERM{T}{E,4} &= -\scal[E]{\proj{k-1}{} w}{ \beta\cdot\proj{k-1}{}\nabla v_h} \,. 
\end{align} We estimate $\TERM{T}{E,1}$ in \eqref{eq:bh_continuity-TERM1} as follows: \begin{equation*} \begin{split} \abs{\TERM{T}{E,1}} &= \abs{\scal[E]{\proj{k-1}{}\nabla w - \nabla w}{(\beta-\proj{0}{}\beta)\proj{k-1}{}v_h}} \\ &= \abs{\scal[E]{\nabla w}{(\beta-\proj{0}{}\beta)\proj{k-1}{}v_h - \proj{k-1}{(\beta-\proj{0}{}\beta)\proj{k-1}{}v_h} } } \\ &\leq \norm[E]{\nabla w} \norm[E]{(\beta-\proj{0}{}\beta)\proj{k-1}{}v_h - \proj{k-1}{(\beta-\proj{0}{}\beta)\proj{k-1}{}v_h} } \\ & \leq \mathcal{C} h_E \norm[E]{\nabla w} \seminorm[1,E]{(\beta-\proj{0}{}\beta)\proj{k-1}{}v_h} \\ & \leq \mathcal{C} h_E \norm[E]{\nabla w} \norm[\sob{1}{\infty}{E}]{\beta-\proj{0}{}\beta} \norm[1,E]{\proj{k-1}{}v_h} \\ & \leq \mathcal{C} \frac{h_E \norm[\sob{1}{\infty}{E}]{\beta - \proj{0}{}\beta}}{\sqrt{\Kmin[E]}} \norm[E]{\nabla w} \ennorm[\K\beta]{v_h}\,. \end{split} \end{equation*} $\TERM{T}{E,2}$ in \eqref{eq:bh_continuity-TERM2} is estimated similarly, using also \eqref{eq:SASTequivalence}: \begin{equation*} \begin{split} \abs{\TERM{T}{E,2}} &= \abs{\scal[E]{\beta\cdot\nabla w} {\proj{k-1}{} v_h - v_h}} \\ &\leq \abs{\scal[E]{w}{\beta\cdot\nabla\left(\proj{k-1}{} v_h - v_h\right)}} + \abs{\int_{\partial E} w \left(\beta\cdot\n{}\right) \left(\proj{k-1}{}v_h - v_h \right)} \\ &\leq \tau^{-\frac12}_E \norm[E]{w} \tau^{\frac12}_E \norm[E]{\beta\cdot\nabla\left(\proj{k-1}{} v_h - v_h\right)} \\ &\quad + \mathcal{C} \norm[\partial E]{w} \norm[\infty,\partial E]{\beta\cdot\n{}} \norm[\partial E]{v_h-\proj{k-1}{} v_h} \\ &\leq \tau^{-\frac12}_E \norm[E]{w} \cdot \sqrt{\frac{1}{\sigma_\ast} \tau_E \beta_E^2 \vemstabAST[E]{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h}} \\ &\quad + \mathcal{C} h^{-\frac12}_E\norm[E]{w} \norm[\infty,\partial E]{\beta\cdot\n{}} \cdot h^{\frac12}_E \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} \left(\tau_E^{-\frac12} + \frac{\norm[\infty,\partial E]{\beta\cdot\n{}}}{\sqrt{\Kmin[E]}}\right) \norm[E]{w} \ennorm[\K\beta,E]{v_h} \,. 
\end{split} \end{equation*} The estimation of $\TERM{T}{E,3}$ in \eqref{eq:bh_continuity-TERM3} requires an application of Green's formula, as follows: \begin{equation*} \begin{split} \abs{\sum_{E\in\Th}\TERM{T}{E,3}} &\leq \abs{\sum_{E\in\Th}\scal[E]{w}{\beta\cdot \nabla v_h}} + \abs{\sum_{E\in\Th}\int_{\partial E}\left(\beta\cdot\n{}\right) w v_h } \\ &= \abs{\sum_{E\in\Th}\scal[E]{w}{\beta\cdot \nabla v_h}} + \abs{\frac12 \sum_{E\in\Th}\int_{\partial E}\left(\beta\cdot\n{}\right) \jmp{w v_h} } \,. \end{split} \end{equation*} The first term is estimated locally by the Cauchy-Schwarz inequality: \begin{equation*} \scal[E]{w}{\beta\cdot\nabla v_h} \leq \tau_E^{-\frac12}\norm[E]{w} \ennorm[\K\beta,E]{v_h} \,. \end{equation*} The boundary terms are estimated exploiting $\int_{\partial E}\beta\cdot\n{} = \int_E \nabla\cdot\beta = 0$, and denoting by $\proj[0,\partial E]{k-1}{}v$ the piecewise polynomial projection of $v$ on each $e\subset \partial E$: \begin{equation*} \begin{split} \int_{\partial E}\left(\beta\cdot\n{}\right) \jmp{w v_h} &= \sum_{R\in\omega_E} \int_{E\cap R}\left(\beta\cdot\n{}\right) \left( \eval{w}{R} \jmp{v_h} + \jmp{w} \eval{v_h}{E} \right) \\ &= \sum_{R\in\omega_E} \int_{E\cap R} \left( \left(\beta\cdot\n{}\right)\eval{w}{R} - \proj[0,\partial E]{k-1}{\left(\beta\cdot\n{}\right)\eval{w}{R}} \right) \jmp{v_h} \\ &\quad + \sum_{R\in\omega_E} \int_{E\cap R} \jmp{w} \left( \left(\beta\cdot\n{}\right)\eval{v_h}{E} - \proj[0,\partial E]{k-1}{\left(\beta\cdot\n{}\right)\eval{v_h}{E}} \right) \\ &= \sum_{R\in\omega_E} \int_{E\cap R} \left( \left(\beta\cdot\n{}\right)\eval{w}{R} - \proj[0,\partial E]{k-1}{\left(\beta\cdot\n{}\right)\eval{w}{R}} \right) \jmp{v_h - \proj[0,\partial E]{k-1}{v_h}} \\ &\quad + \sum_{R\in\omega_E} \int_{E\cap R} \jmp{w - \proj{k-1}{}w } \left( \left(\beta\cdot\n{}\right)\eval{v_h}{E} - \proj[0,\partial E]{k-1}{\left(\beta\cdot\n{}\right)\eval{v_h}{E}} \right) \\ &\leq \sum_{R\in\omega_E} \norm[R\cap
E]{\left(\beta\cdot\n{}\right)\eval{w}{R} - \proj[0,\partial E]{k-1}{\left(\beta\cdot\n{}\right)\eval{w}{R}}} \norm[E\cap R]{\jmp{v_h - \proj[0,\partial E]{k-1}{v_h}}} \\ &\quad + \sum_{R\in\omega_E} \norm[E\cap R]{\jmp{w - \proj{k-1}{}w }} \norm[E\cap R]{ \left(\beta\cdot\n{}\right)\eval{v_h}{E} - \proj[0,\partial E]{k-1}{\left(\beta\cdot\n{}\right)\eval{v_h}{E}}} \\ &\leq \mathcal{C}_1 \left( \sum_{R\in\omega_E} h_E \seminorm[1,R\cap E]{\left(\beta\cdot\n{}\right)\eval{w}{R}} \right) h^{\frac12}_E \norm[\omega_E]{\nabla v_h} \\ &\quad + \mathcal{C}_2 \left( \sum_{R\in\omega_E} h_E \seminorm[1,E\cap R]{ \left(\beta\cdot\n{}\right)\eval{v_h}{E}} \right) h^{\frac12}_E \norm[\omega_E]{\nabla w } \\ & \leq \mathcal{C}_1 h_E^{\frac12} \norm[\sob{1}{\infty}{\partial E}]{\beta\cdot\n{}} \norm[\omega_E]{\nabla w} \cdot h_E^{\frac12}\norm[\omega_E]{\nabla v_h} \\ &\quad + \mathcal{C}_2 h_E^{\frac12} \norm[\sob{1}{\infty}{\partial E}]{\beta\cdot\n{}} \norm[\omega_E]{\nabla v_h} \cdot h_E^{\frac12}\norm[\omega_E]{\nabla w} \\ & \leq \mathcal{C} \frac{h_E \norm[\sob{1}{\infty}{\partial E}]{\beta\cdot\n{}}} {\sqrt{\Kmin[E]}} \norm[\omega_E]{\nabla w} \ennorm[\K\beta,\omega_E]{v_h} \,. \end{split} \end{equation*} The estimate of $\TERM{T}{E,4}$ defined by \eqref{eq:bh_continuity-TERM4} is obtained by the Cauchy-Schwarz inequality, the continuity of projections and the definition of the norm \eqref{eq:defennormKbeta}: \begin{equation*} \begin{split} \abs{ \TERM{T}{E,4}} &= \abs{\scal[E]{\proj{k-1}{} w}{ \beta\cdot\proj{k-1}{}\nabla v_h}} \leq \mathcal{C} \tau_E^{-\frac12} \norm[E]{w} \ennorm[\K\beta,E]{v_h} \,. 
\end{split} \end{equation*} To derive \eqref{eq:estim_b-bh}, we set \begin{equation*} \b{\proj{k-1}{}w}{v_h} - \b[h]{\proj{k-1}{}w}{v_h} = \sum_{E\in\Th} \big(\TERM{R}{E,1} - \TERM{R}{E,2}\big)\,, \end{equation*} where, recalling that $\proj{k-1}{\nabla\proj{k-1}{}w} = \nabla\proj{k-1}{}w$, \begin{align} \label{eq:estim_b-bh-TERM1} \TERM{R}{E,1}&=\scal[E]{\beta\cdot\nabla\proj{k-1}{} w}{v_h - \proj{k-1}{}v_h} \,, \\ \label{eq:estim_b-bh-TERM2} \TERM{R}{E,2}&=\scal[E]{\proj{k-1}{}w}{\beta\cdot \left( \nabla v_h - \proj{k-1}{}\nabla v_h \right)} \,. \end{align} $\TERM{R}{E,1}$ can be estimated as follows: \begin{equation*} \begin{split} \TERM{R}{E,1} &= \scal[E]{\beta\cdot\nabla\proj{k-1}{} w}{v_h - \proj{k-1}{}v_h} = \scal[E]{\left(\beta - \proj{0}{}\beta\right) \cdot \nabla\proj{k-1}{} w}{v_h - \proj{k-1}{}v_h} \\ &= \scal[E]{\left(\beta - \proj{0}{}\beta\right) \cdot \nabla\proj{k-1}{} w - \proj{k-1}{\left(\beta - \proj{0}{}\beta\right) \cdot \nabla\proj{k-1}{} w}}{v_h - \proj{k-1}{}v_h} \\ &\leq \norm[E]{\left(\beta - \proj{0}{}\beta\right) \cdot \nabla\proj{k-1}{} w - \proj{k-1}{\left(\beta - \proj{0}{}\beta\right) \cdot \nabla\proj{k-1}{} w}} \norm[E]{v_h - \proj{k-1}{}v_h} \\ &\leq \mathcal{C} h^{s}_E \seminorm[s,E]{\left(\beta - \proj{0}{}\beta\right) \cdot \nabla\proj{k-1}{} w} \cdot h_E \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} h^{s+1}_E \norm[\sob{s}{\infty}{E}]{\beta-\proj{0}{}\beta} \norm[s+1,E]{w} \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} \frac{h_E \norm[\sob{s}{\infty}{E}]{\beta - \proj{0}{}\beta}}{\sqrt{\Kmin[E]}} h^s_E \norm[s+1,E]{w} \ennorm[\K\beta]{v_h} \,. 
\end{split} \end{equation*} The estimate of $\TERM{R}{E,2}$ in \eqref{eq:estim_b-bh-TERM2} is obtained as follows: \begin{equation*} \begin{split} \TERM{R}{E,2}&=\scal[E]{\proj{k-1}{}w}{\beta\cdot \left( \nabla v_h - \proj{k-1}{}\nabla v_h \right)} = \scal[E]{\left( \beta - \proj{0}{}\beta \right)\proj{k-1}{}w}{ \nabla v_h - \proj{k-1}{}\nabla v_h} \\ &= \scal[E]{\left( \beta - \proj{0}{}\beta \right)\proj{k-1}{}w - \proj{k-1}{\left( \beta - \proj{0}{}\beta \right)\proj{k-1}{}w}}{ \nabla v_h } \\ &\leq \norm[E]{\left( \beta - \proj{0}{}\beta \right)\proj{k-1}{}w - \proj{k-1}{\left( \beta - \proj{0}{}\beta \right)\proj{k-1}{}w}} \norm[E]{ \nabla v_h } \\ &\leq \mathcal{C}h^{s+1}_E\seminorm[s+1,E]{\left( \beta - \proj{0}{}\beta \right)\proj{k-1}{}w} \norm[E]{ \nabla v_h } \\ &\leq \mathcal{C}h^{s+1}_E\norm[\sob{s+1}{\infty}{E}]{\beta - \proj{0}{}\beta} \norm[s+1,E]{\proj{k-1}{}w} \norm[E]{ \nabla v_h } \\ &\leq \mathcal{C}\frac{h_E \norm[\sob{s+1}{\infty}{E}]{\beta - \proj{0}{}\beta}}{\sqrt{\Kmin[E]}} h^{s}_E \norm[s+1,E]{w} \ennorm[\K\beta,E]{v_h} \,. \end{split} \end{equation*} \end{proof} \begin{remark} The coefficient $\mathcal{K}^{nc}_{b,E}$ can be rewritten, considering that $\overline{\beta\cdot\n{}}^{\,\partial E}=\frac{1}{\abs{\partial E}}\int_{\partial E}\beta\cdot\n{}=0$, in the following way: \begin{equation*} \mathcal{K}^{nc}_{b,E} = \frac{h_E \norm[\sob{1}{\infty}{\partial E}]{\beta\cdot\n{}}} {\sqrt{\Kmin[E]}} = \frac{h_E \norm[\sob{1}{\infty}{\partial E}] {\beta\cdot\n{} - \overline{\beta\cdot\n{}}^{\,\partial E}}} {\sqrt{\Kmin[E]}} \,. \end{equation*} \end{remark} \begin{lemma} \label{lem:estim_c-ch} For every function $w\in\sobnc{k}{\Th}$ and $v_h\in V_h$ it holds that \begin{equation} \label{eq:ch_continuity} \abs{\c[h]{w}{v_h}} \leq (1 + \sqrt{C_\tau})\ennorm[\K\beta\gamma]{w} \ennorm[\K\beta\gamma]{v_h}\,.
\end{equation} Moreover, for every function $w\in\sobnc{k}{\Th}\cap\sobh{s+1}{\Th}$, it holds that \begin{equation} \label{eq:estim_c-ch} \begin{split} & \abs{\c{\proj{k-1}{}w}{v_h} - \c[h]{\proj{k-1}{}w}{v_h} } \leq \mathcal{C} \max_{E\in\Th} \mathcal{C}^{nc}_{c,E} \, h^s \norm[s+1]{w} \ennorm[\K\beta]{v_h} \,, \end{split} \end{equation} where \begin{equation} \label{eq:defCNCc} \mathcal{C}^{nc}_{c,E} = \max \left\{ \frac{h^2_E \norm[\sob{s+1}{\infty}{E}]{\gamma-\proj{0}{}\gamma}}{\sqrt{\Kmin[E]}} , \frac{h_E\tau_E \norm[\sob{s+1}{\infty}{E}] {\gamma\beta-\proj{0}{\gamma\beta}}}{\sqrt{\Kmin[E]}} \right\} \,. \end{equation} \end{lemma} \begin{proof} Inequality \eqref{eq:ch_continuity} follows easily from the definition of the norm \eqref{eq:defennormKbetagamma} and the definition of $\tau_E$ \eqref{eq:deftau}: \begin{equation*} \begin{split} &\scal[E]{\gamma \proj{k-1}{}w}{\proj{k-1}{}v_h} + \tau_E \scal[E]{\gamma \proj{k-1}{}w}{\beta\cdot\proj{k-1}{}\nabla v_h} \leq \norm[E]{\sqrt{\gamma} \proj{k-1}{}w} \norm[E]{\sqrt{\gamma} \proj{k-1}{} v_h} \\ &\quad + \sqrt{\tau_E\gamma_E} \norm[E]{\sqrt{\gamma} \proj{k-1}{} w}\cdot \sqrt{\tau_E} \norm[E]{\beta\cdot \proj{k-1}{}\nabla v_h} \leq \left(1+\sqrt{C_\tau}\right) \ennorm[\K\beta\gamma,E]{w} \ennorm[\K\beta\gamma,E]{v_h} \,. \end{split} \end{equation*} To prove \eqref{eq:estim_c-ch}, we start with: \begin{equation*} \c{\proj{k-1}{}w}{v_h}-\c[h]{\proj{k-1}{}w}{v_h} = \sum_{E\in\Th}\left( \TERM{R}{E,1} +\TERM{R}{E,2}\right)\,, \end{equation*} where \begin{align} \label{eq:estim_c-ch-TERM1} \TERM{R}{E,1}&=\scal[E]{\gamma \proj{k-1}{} w}{v_h - \proj{k-1}{}v_h} \,, \\ \label{eq:estim_c-ch-TERM2} \TERM{R}{E,2}&=\tau_E\scal[E]{\gamma\proj{k-1}{}w} {\beta\cdot\nabla v_h - \beta\cdot\proj{k-1}{}\nabla v_h} \,.
\end{align} The first term, given by \eqref{eq:estim_c-ch-TERM1}, can be bounded as follows: \begin{equation*} \begin{split} \TERM{R}{E,1} &= \scal[E]{\left(\gamma - \proj{0}{}\gamma \right) \proj{k-1}{} w}{v_h - \proj{k-1}{}v_h} \\ &= \scal[E]{\left(\gamma - \proj{0}{}\gamma \right) \proj{k-1}{} w - \proj{k-1}{\left(\gamma - \proj{0}{}\gamma \right) \proj{k-1}{} w}}{v_h - \proj{k-1}{}v_h} \\ &\leq \norm[E]{\left(\gamma - \proj{0}{}\gamma \right) \proj{k-1}{} w - \proj{k-1}{\left(\gamma - \proj{0}{}\gamma \right) \proj{k-1}{} w}} \norm[E]{v_h - \proj{k-1}{}v_h} \\ &\leq \mathcal{C} h^{s+1}_E \seminorm[s+1,E]{\left(\gamma - \proj{0}{}\gamma \right) \proj{k-1}{} w} \cdot h_E \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} h^{s+2}_E \norm[\sob{s+1}{\infty}{E}]{\gamma - \proj{0}{}\gamma} \norm[s+1,E]{\proj{k-1}{}w} \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} \frac{h_E^2 \norm[\sob{s+1}{\infty}{E}]{\gamma - \proj{0}{}\gamma}}{\sqrt{\Kmin[E]}} h^s_E \norm[s+1,E]{w} \ennorm[\K\beta,E]{v_h} \,. \end{split} \end{equation*} The term $\TERM{R}{E,2}$ in \eqref{eq:estim_c-ch-TERM2} can be bounded as follows: \begin{equation*} \begin{split} \TERM{R}{E,2}& = \tau_E \scal[E]{\gamma\proj{k-1}{}w} {\beta\cdot\nabla v_h - \beta\cdot\proj{k-1}{}\nabla v_h} \\ &= \tau_E \scal[E]{\left(\gamma\beta - \proj{0}{\gamma\beta}\right) \proj{k-1}{}w}{\nabla v_h - \proj{k-1}{}\nabla v_h} \\ &= \tau_E \scal[E]{\left(\gamma\beta - \proj{0}{\gamma\beta}\right) \proj{k-1}{}w - \proj{k-1}{\left(\gamma\beta - \proj{0}{\gamma\beta}\right) \proj{k-1}{}w}}{\nabla v_h - \proj{k-1}{}\nabla v_h} \\ &\leq \tau_E \norm[E]{\left(\gamma\beta - \proj{0}{\gamma\beta}\right) \proj{k-1}{}w - \proj{k-1}{\left(\gamma\beta - \proj{0}{\gamma\beta}\right) \proj{k-1}{}w}} \norm[E]{\nabla v_h - \proj{k-1}{}\nabla v_h} \\ &\leq \mathcal{C} h_E^{s+1} \tau_E \seminorm[s+1,E]{\left(\gamma\beta - \proj{0}{\gamma\beta}\right) \proj{k-1}{}w} \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} h_E \tau_E
\norm[\sob{s+1}{\infty}{E}]{\gamma\beta-\proj{0}{\gamma\beta}} h_E^{s} \norm[s+1,E]{w} \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} \frac{h_E \tau_E \norm[\sob{s+1}{\infty}{E}]{\gamma\beta - \proj{0}{\gamma\beta} } }{\sqrt{\Kmin[E]}} h^s_E \norm[s+1,E]{w} \ennorm[\K\beta,E]{v_h} \,. \end{split} \end{equation*} \end{proof} \begin{remark} The second argument of the max in \eqref{eq:defCNCc} can be bounded by \eqref{eq:estim_ab-Pab} and \eqref{eq:deftau}: \begin{equation*} \begin{split} \frac{h_E\tau_E \norm[\sob{s+1}{\infty}{E}] {\gamma\beta-\proj{0}{\gamma\beta}}}{\sqrt{\Kmin[E]}} \leq \frac{\mathcal{C}}{\sqrt{\Kmin[E]}} \left( \frac{C_\tau h_E}{\gamma_E} \norm[\sob{s+1}{\infty}{E}]{\gamma} \norm[\sob{s+1}{\infty}{E}]{\beta - \proj{0}{}\beta} \right. \\ \left. + \frac{h^2_E}{\beta_E} \norm[\sob{s+1}{\infty}{E}]{ \beta} \norm[\sob{s+1}{\infty}{E}]{\gamma - \proj{0}{}\gamma} \right) \,. \end{split} \end{equation*} \end{remark} \begin{lemma} \label{lem:estim_d-dh} For any $w\in \sobnc{k}{\Th}$ and any $v_h\in V_h$, \begin{equation} \label{eq:dh_continuity} \d[h]{w}{v_h} \leq \ennorm[\K\beta]{w} \ennorm[\K\beta]{v_h} \,. \end{equation} Moreover, if $w\in V\cap\sobh{s+1}{\Th}$, then \begin{equation} \label{eq:estim_d-dh} \begin{split} \abs{\d{\proj{k-1}{}w}{v_h} - \d[h]{\proj{k-1}{}w}{v_h}} &\leq \mathcal{C} \max_{E\in\Th} \mathcal{C}^{nc}_{d,E} \, h^s \norm[s+1]{w} \ennorm[\K\beta]{v_h} \,, \end{split} \end{equation} where \begin{equation} \label{eq:defCNCd} \mathcal{C}^{nc}_{d,E} = \max \left\{ \frac{h^{-1}_E \tau_E {\displaystyle\sum_{i=1}^d} \norm[\sob{s}{\infty}{E}]{\beta_i\K - \proj{0}{\beta_i\K}}} {\sqrt{\Kmin[E]}} , \frac{\tau_E \norm[\sob{s}{\infty}{E}]{(\nabla \beta)^\intercal \K - \proj{0}{\left( \nabla \beta \right)^\intercal \K} }}{\sqrt{\Kmin[E]}} \right\} \,.
\end{equation} \end{lemma} \begin{proof} To prove \eqref{eq:dh_continuity}, we use the inverse inequality \eqref{eq:estimnormlapl} and the definition of the norm \eqref{eq:defennormKbeta}: $\forall E\in\Th$, \begin{equation*} \begin{split} \tau_E\scal[E]{\nabla\cdot\left(\K\proj{k-1}{}\nabla w\right)}{\beta\cdot\proj{k-1}{}\nabla v_h} &\leq \sqrt{\tau_E} \norm[E]{\nabla\cdot\left(\K\proj{k-1}{}\nabla w\right)} \cdot\sqrt{\tau_E}\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h} \\ &\leq \frac{1}{\sqrt{\K[E]}}\norm[E]{\K\proj{k-1}{}\nabla w}\ennorm[\K\beta]{v_h} \leq \ennorm[\K\beta]{w} \ennorm[\K\beta]{v_h} \,. \end{split} \end{equation*} Regarding \eqref{eq:estim_d-dh}, we proceed as follows: $\forall E\in\Th$, \begin{equation*} \begin{split} &\tau_E\scal[E]{\nabla\cdot\left(\K \proj{k-1}{} \nabla w\right)}{\beta\cdot\left(\nabla v_h - \proj{k-1}{}\nabla v_h\right)} = \TERM{R}{E,1} + \TERM{R}{E,2}\,, \end{split} \end{equation*} where, with the notation $\mathcal{E}_{k-1} = I-\proj{k-1}{}$, \begin{align} \label{eq:estim_d-dh-TERM1} \TERM{R}{E,1} &= \tau_E\sum_{i=1}^d \scal[E]{\nabla\cdot \left( \beta_i\K \proj{k-1}{} \nabla w \right)} {\mathcal{E}_{k-1}\left(\pder{v_h}{x_i}\right)} \,, \\ \label{eq:estim_d-dh-TERM2} \TERM{R}{E,2} &= \tau_E \scal[E]{ -\left( \nabla \beta \right)^\intercal \K \proj{k-1}{} \nabla w} {\mathcal{E}_{k-1}\left(\nabla v_h \right)} \,.
\end{align} The term $\TERM{R}{E,1}$ in \eqref{eq:estim_d-dh-TERM1} can be estimated as follows, using \eqref{eq:estim_ab-Pab}: \begin{equation*} \begin{split} \TERM{R}{E,1} &= \tau_E\sum_{i=1}^d \scal[E]{\nabla\cdot \left( \left(\beta_i\K - \proj{0}{\beta_i\K}\right) \proj{k-1}{} \nabla w \right)} {\mathcal{E}_{k-1}\left(\pder{v_h}{x_i}\right)} \\ & = \tau_E\sum_{i=1}^d \scal[E]{\nabla\cdot \left( \mathcal{E}_{k-1}\left( \left(\beta_i\K - \proj{0}{\beta_i\K}\right) \proj{k-1}{} \nabla w \right) \right)} {\mathcal{E}_{k-1}\left(\pder{v_h}{x_i}\right)} \\ & \leq \tau_E\sum_{i=1}^d \norm[E]{\nabla\cdot \left( \mathcal{E}_{k-1}\left( \left(\beta_i\K - \proj{0}{\beta_i\K}\right) \proj{k-1}{} \nabla w \right) \right)} \norm[E]{\mathcal{E}_{k-1}\left(\pder{v_h}{x_i}\right)} \\ & \leq \mathcal{C} h^{-1}_E\tau_E \sum_{i=1}^d\norm[E]{\mathcal{E}_{k-1}\left( \left(\beta_i\K - \proj{0}{\beta_i\K}\right) \proj{k-1}{} \nabla w \right)} \norm[E]{\pder{v_h}{x_i}} \\ & \leq \mathcal{C} h^{-1}_E\tau_E h_E^{s} \sum_{i=1}^d\seminorm[s,E]{\left(\beta_i\K - \proj{0}{\beta_i\K}\right) \proj{k-1}{} \nabla w} \norm[E]{\nabla v_h} \\ & \leq \mathcal{C}h^{-1}_E\tau_E h_E^{s} \left(\sum_{i=1}^d \norm[\sob{s}{\infty}{E}]{\beta_i\K - \proj{0}{\beta_i\K}} \right) \norm[s,E]{\proj{k-1}{} \nabla w} \norm[E]{\nabla v_h} \\ & \leq \mathcal{C} \frac{h^{-1}_E \tau_E \sum_{i=1}^d \norm[\sob{s}{\infty}{E}]{\beta_i\K - \proj{0}{\beta_i\K}}}{\sqrt{\Kmin[E]}} h^s_E \norm[s+1,E]{w} \ennorm[\K\beta,E]{v_h} \,.
\end{split} \end{equation*} The term $\TERM{R}{E,2}$ in \eqref{eq:estim_d-dh-TERM2} can be estimated as follows, using also \eqref{eq:estim_ab-Pab}: \begin{equation*} \begin{split} \abs{\TERM{R}{E,2}} &= \tau_E \abs{\scal[E]{ \left( \nabla \beta \right)^\intercal \K \proj{k-1}{} \nabla w} {\mathcal{E}_{k-1}\left(\nabla v_h \right)}} \\ &= \tau_E \abs{ \scal[E]{ \mathcal{E}_{k-1}\left( \left( \left( \nabla \beta \right)^\intercal \K - \proj{0}{\left( \nabla \beta \right)^\intercal \K} \right) \proj{k-1}{} \nabla w \right)}{\nabla v_h } } \\ &\leq \tau_E \norm[E]{ \mathcal{E}_{k-1}\left( \left( \left( \nabla \beta \right)^\intercal \K - \proj{0}{\left( \nabla \beta \right)^\intercal \K} \right) \proj{k-1}{} \nabla w \right)} \norm[E]{\nabla v_h } \\ &\leq \mathcal{C}\tau_E h^{s}_E \seminorm[s,E]{ \left( \left( \nabla \beta \right)^\intercal \K - \proj{0}{\left( \nabla \beta \right)^\intercal \K} \right) \proj{k-1}{} \nabla w } \norm[E]{\nabla v_h} \\ &\leq \mathcal{C} \frac{\tau_E \norm[\sob{s}{\infty}{E}]{(\nabla \beta)^\intercal \K - \proj{0}{\left( \nabla \beta \right)^\intercal \K} }}{\sqrt{\Kmin[E]}} h^s_E \norm[s+1,E]{w} \ennorm[\K\beta,E]{v_h} \,. \end{split} \end{equation*} \end{proof} \begin{remark} The first argument of the max in \eqref{eq:defCNCd} can be bounded as follows, using \eqref{eq:estim_ab-Pab} and \eqref{eq:deftau}: \begin{equation*} \begin{split} & \frac{h^{-1}_E \tau_E \sum_{i=1}^d \norm[\sob{s}{\infty}{E}]{\beta_i\K - \proj{0}{\beta_i\K}}}{\sqrt{\Kmin[E]}} \leq \frac{3h_E^{-1}\tau_E}{2\sqrt{\Kmin[E]}} \left( 2\norm[\sob{s}{\infty}{E}]{\K} \norm[\sob{s}{\infty}{E}]{\beta - \proj{0}{}\beta} \right. \\ &\qquad \left.
+ 2\norm[\sob{s}{\infty}{E}]{\beta} \norm[\sob{s}{\infty}{E}]{\K - \proj{0}{}\K} \right) \\ &\quad \leq \frac{\mathcal{C}}{\sqrt{\Kmin[E]}} \left( h_E \norm[\sob{s}{\infty}{E}]{\frac{\K}{\K[E]}} \norm[\sob{s}{\infty}{E}]{\beta - \proj{0}{}\beta} + \norm[\sob{s}{\infty}{E}]{\frac{\beta}{\beta_E}} \norm[\sob{s}{\infty}{E}]{\K - \proj{0}{}\K} \right) \,. \end{split} \end{equation*} Similarly, the second argument in \eqref{eq:defCNCd} can be bounded as follows: \begin{equation*} \begin{split} \frac{\tau_E \norm[\sob{s}{\infty}{E}]{(\nabla \beta)^\intercal \K - \proj{0}{\left( \nabla \beta \right)^\intercal \K} }}{\sqrt{\Kmin[E]}} &\leq \frac{\mathcal{C}}{\sqrt{\Kmin[E]}} \left( h^2_E \norm[\sob{s}{\infty}{E}]{\frac{\K}{\K[E]}} \norm[\sob{s}{\infty}{E}]{\nabla \beta} \right. \\ &\left. \quad + h_E \norm[\sob{s}{\infty}{E}]{\frac{\nabla\beta}{\beta_E}} \norm[\sob{s}{\infty}{E}]{\K - \proj{0}{}\K} \right) \,. \end{split} \end{equation*} \end{remark} Finally, the following lemma states the continuity of $\Bsupg{}{}$, defined by \eqref{eq:defBsupg}. \begin{lemma} \label{lem:Bsupg_continuity} Let $w\in\sobnc{k}{\Th}\cap \sobh{2}{\Th}$ and $v_h\in V_h$. Then, \begin{equation} \label{eq:Bsupg_continuity} \begin{split} \Bsupg{w}{v_h} \leq \mathcal{C} \left[ \max_{E\in\Th} \left( \tau_E^{-\frac12},h_E^{-1}\mathcal{K}^{nc}_{b,E} \right) \norm{w} + \max_{E\in\Th} \left( \frac{\sqrt{\K[E]}}{\sqrt{\Kmin[E]}} \right) \ennorm[\K\beta\gamma]{w} + \max_{E\in\Th} \mathcal{K}^{nc}_{b,E} \norm{\nabla w} \right. \\ \left. + \max_{E\in\Th} \left( \sqrt{\tau_E} \norm[\sob{1}{\infty}{E}]{\K} \right) \norm[2,E]{w-\proj{k-1}{}w} \right] \ennorm[\K\beta\gamma]{v_h} \,. \end{split} \end{equation} \end{lemma} \begin{proof} The proof of the continuity of $\a{}{}$, $\b{}{}$ and $\c{}{}$ follows from the same arguments as in Lemmas \ref{lem:estim_a-ah}, \ref{lem:estim_b-bh} and \ref{lem:estim_c-ch}.
The proof of the continuity of $\d{}{}$ is slightly different and proceeds as follows: \begin{equation*} \begin{split} & \tau_E\scal[E]{\nabla\cdot\left(\K\nabla w\right)}{\beta\cdot\nabla v_h} = \tau_E\scal[E]{\nabla\cdot\left(\K\nabla w-\K\nabla \proj{k-1}{}w\right)}{\beta\cdot\nabla v_h} \\ &\qquad + \tau_E\scal[E]{\nabla\cdot\left(\K\nabla \proj{k-1}{}w\right)}{\beta\cdot\nabla v_h} \\ &\quad \leq \tau_E\norm[E]{\nabla\cdot\left(\K\nabla\left( w-\proj{k-1}{}w\right)\right)} \norm[E]{\beta\cdot\nabla v_h} + \tau_E\norm[E]{\nabla\cdot\left(\K\nabla \proj{k-1}{}w\right)} \norm[E]{\beta\cdot\nabla v_h} \\ &\quad \leq \left(\sqrt{\tau_E} \norm[E]{ \left( \nabla\cdot\K \right) \nabla\left( w-\proj{k-1}{}w \right)} + \sqrt{\tau_E} \norm[E]{ \K \Delta \left( w-\proj{k-1}{}w \right)} \right. \\ &\qquad \left. + \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla w} \right) \sqrt{\tau_E}\norm[E]{\beta\cdot\nabla v_h} \\ &\quad \leq \mathcal{C} \left( \sqrt{\tau_E} \norm[\sob{1}{\infty}{E}]{\K} \norm[2,E]{w-\proj{k-1}{}w} + \frac{\sqrt{\K[E]}}{\sqrt{\Kmin[E]}} \norm[E]{\sqrt{\K} \nabla w}\right) \ennorm[\K\beta,E]{v_h} \,. \end{split} \end{equation*} \end{proof} The above lemmas are summarized in the following result, which estimates the error in the approximation of the exact bilinear form by the discrete one. \begin{lemma} For any given $w\in\sobnc{k}{\Th} \cap \sobh{s+1}{\Th}$ and any $v_h\in V_h$, \begin{equation} \label{eq:estim_Bsupg-Bsupgh} \begin{split} \abs{\Bsupg{w}{v_h} - \Bsupg[h]{w}{v_h}} &\leq \mathcal{C}\max_{E\in\Th} \left\{ \frac{\norm[\sob{1}{\infty}{E}]{\K}}{\sqrt{\K[E]}}, \sqrt{h_E\beta_E}, h_E\sqrt{\gamma_E}, \right. \\ &\left.
\qquad \mathcal{C}^{nc}_{a,E} , \mathcal{C}^{nc,s+1}_{b,E} , \mathcal{C}^{nc}_{c,E} , \mathcal{C}^{nc}_{d,E} , \mathcal{K}^{nc}_{b,E} \right\} h^s \seminorm[s+1]{w} \ennorm[\K\beta]{v_h}\,, \end{split} \end{equation} where $\mathcal{C}^{nc}_{a,E}$, $\mathcal{C}^{nc,s+1}_{b,E}$, $\mathcal{C}^{nc}_{c,E}$, $\mathcal{C}^{nc}_{d,E}$ and $\mathcal{K}^{nc}_{b,E}$ are defined by \eqref{eq:defCNCa}, \eqref{eq:defCNCb}, \eqref{eq:defCNCc}, \eqref{eq:defCNCd} and \eqref{eq:defKNCb}. \end{lemma} \begin{proof} Collecting the results of Lemmas \ref{lem:estim_a-ah}, \ref{lem:estim_b-bh}, \ref{lem:estim_c-ch}, \ref{lem:estim_d-dh} and \ref{lem:Bsupg_continuity} and the approximation estimates on the polynomial projections, we get \begin{equation*} \begin{split} &\abs{\Bsupg{w}{v_h} - \Bsupg[h]{w}{v_h}} \leq \abs{ \Bsupg{w-\proj{k-1}{}w}{v_h} } + \abs{ \Bsupg[h]{w-\proj{k-1}{} w}{v_h} } \\ & \qquad + \abs{ \Bsupg{\proj{k-1}{}w}{v_h} - \Bsupg[h]{\proj{k-1}{}w}{v_h} } \\ &\quad \leq \mathcal{C}\left( \max_{E\in\Th} \left( \frac{\sqrt{\K[E]}}{\sqrt{\Kmin[E]}} \right)\ennorm[\K\beta\gamma]{w-\proj{k-1}{}w} \right. \\ & \left. \qquad + \max_{E\in\Th} \left( \sqrt{\tau_E} \norm[\sob{1}{\infty}{E}]{\K} \right) \norm[2,E]{w-\proj{k-1}{}w} + \max_{E\in\Th} \tau_E^{-\frac12} \norm{w - \proj{k-1}{}w} \right. \\ &\qquad \left. + \max_{E\in\Th} \left\{ \mathcal{C}^{nc}_{a,E} , \mathcal{C}^{nc,s+1}_{b,E} , \mathcal{C}^{nc}_{c,E} , \mathcal{C}^{nc}_{d,E} , \mathcal{K}^{nc}_{b,E} \right\} h^s \seminorm[s+1]{w} \right) \ennorm[\K\beta]{v_h} \\ &\quad \leq \mathcal{C}\max_{E\in\Th} \left\{ \frac{\norm[\sob{1}{\infty}{E}]{\K}}{\sqrt{\K[E]}}, \sqrt{h_E\beta_E}, h_E\sqrt{\gamma_E}, \mathcal{C}^{nc}_{a,E} , \mathcal{C}^{nc,s+1}_{b,E} , \mathcal{C}^{nc}_{c,E} , \mathcal{C}^{nc}_{d,E} , \mathcal{K}^{nc}_{b,E} \right\} h^s \seminorm[s+1]{w} \ennorm[\K\beta]{v_h} .
\end{split} \end{equation*} \end{proof} Due to the non-conformity of our approach and since the functions in the global virtual element space $V_h$ may be discontinuous, for the exact solution $u\in\sobh{2}{\Th}\cap \sobh[0]{1}{\Omega}$ and every $v_h\in V_h$ it holds that \begin{equation*} \Bsupg{u}{v_h} = \Fsupg{v_h} + \Nh{u}{v_h}, \end{equation*} where \begin{equation} \label{eq:Nh:def} \Nh{u}{v_h} := \sum_{E\in\Th} \scal[\partial E]{(\K\nabla{u})\cdot\n{}- \frac12 \left( \beta\cdot\n{} \right)u}{v_h} \end{equation} is called the \emph{conformity error}. This term generalizes the corresponding term for the pure diffusion problem, which is introduced and estimated in \cite[Lemma 4.1]{AyusodeDios-Lipnikov-Manzini:2016}. \begin{lemma}[\textbf{Conformity error}] \label{lemma:estim_Nh} Let $u\in\sobh{s+1}{\Th} \cap \sobh[0]{1}{\Omega}$, $1\leq s\leq k$, be the solution of the variational problem~\eqref{eq:exvarform}. Let $\beta\in\sob{s+1}{\infty}{\Omega}$ and suppose $\K\nabla u\in\sobh{}{\mathrm{div},\Omega}$. Under the mesh regularity assumptions of Section \ref{sec:mesh}, for every $v_h\in V_h$ it holds that \begin{equation} \label{eq:estim_Nh} \abs{\Nh{u}{v_h}} \leq \mathcal{C} \max_{E\in\Th}\left\{\frac{\norm[\sob{s}{\infty}{E}]{\K}} {\sqrt{\Kmin[E]}} , \mathcal{K}^{nc}_{\mathcal{N},E} \right\} h^{s} \norm[s+1]{u} \ennorm[\K\beta\gamma]{v_h} \,, \end{equation} where \begin{equation} \label{eq:defCNCNh} \mathcal{K}^{nc}_{\mathcal{N},E} = \frac{h_E \norm[\sob{s+\frac12}{\infty}{\partial E}]{\beta\cdot\n{}} }{\sqrt{\Kmin[E]}} \,.
\end{equation} \end{lemma} \begin{proof} The first term in \eqref{eq:Nh:def} is bounded following \cite[Lemma 4.1]{AyusodeDios-Lipnikov-Manzini:2016}, using the fact that, by hypothesis, $\K\nabla u\cdot\n{}$ is continuous: \begin{equation*} \begin{split} \sum_{E\in\Th}\scal[\partial E]{(\K\nabla u)\cdot\n{}}{v_h} & = \sum_{e\in\mathcal{E}_h} \scal[e]{(\K\nabla u)\cdot\n{}}{\jmp{v_h}} = \sum_{e\in\mathcal{E}_h} \scal[e]{(\K\nabla u - \proj{k-1}{\K\nabla u})\cdot\n{}}{\jmp{v_h}} \\ &\leq \sum_{e\in\mathcal{E}_h} \norm[e]{(\K\nabla u - \proj{k-1}{\K\nabla u})\cdot\n{}} \norm[e]{\jmp{v_h - \proj[0,e]{k-1}{}v_h}} \\ &\leq \sum_{e\in\mathcal{E}_h} \mathcal{C}h_e^{s-\frac12} \seminorm[s,\omega_e]{\K\nabla u}\cdot h^{\frac12}_e \norm[\omega_e]{\nabla v_h} \\ &\leq \frac{\norm[\sob{s}{\infty}{E}]{\K}}{\sqrt{\Kmin[E]}} h^s_E \norm[s+1,\omega_e]{u} \ennorm[\omega_e]{v_h}\,. \end{split} \end{equation*} The second term in \eqref{eq:Nh:def} is estimated using the fact that $v_h\in\sobnc{k}{\Th}$. Denote by $\proj[0,\partial E]{k-1}{}v_h$ the piecewise polynomial projection of $v_h$ on each $e\subset \partial E$. Since $\jmp{\proj[0,e]{k-1}{}v_h}_{e} = \proj[0,e]{k-1}{\jmp{v_h}_{e}} =0$ for all $e \subset \partial E$, because $\int_{e}\jmp{v_h} q = 0$ for all $q\in\Poly{k-1}{e}$, and since $(\beta\cdot \n{})u$ is continuous across the edges, $\beta$ being a divergence-free vector and $u\in\sobh[0]{1}{\Omega}$, we get \begin{equation*} \begin{split} \sum_{E\in\Th}\scal[\partial E]{(\beta\cdot \n{})u}{v_h} &= \frac12 \sum_{E\in\Th} \scal[\partial E]{\left( \beta \cdot \n{}\right)u}{\jmp{v_h} } \\ &= \frac12 \sum_{E\in\Th} \scal[\partial E]{\left( \beta \cdot \n{}\right)u}{\jmp{v_h - \proj[0,\partial E]{k-1}{} v_h} } \\ & = \frac12 \sum_{E\in\Th} \scal[\partial E]{\left( \beta \cdot \n{}\right)u - \proj[0,\partial E]{k-1}{\left( \beta \cdot \n{}\right)u}}{\jmp{v_h - \proj[0,\partial E]{k-1}{} v_h} } \\ &
\leq \frac12 \sum_{E\in\Th} \norm[\partial E]{\left( \beta \cdot \n{}\right)u - \proj[0,\partial E]{k-1}{\left( \beta \cdot \n{}\right)u}} \norm[\partial E]{\jmp{v_h - \proj[0,\partial E]{k-1}{} v_h} } \\ & \leq \mathcal{C} \sum_{E\in\Th} h^{s+\frac12}_E \seminorm[s+\frac12,\partial E]{\left(\beta \cdot \n{}\right) u} \cdot h_E^{\frac12} \norm[\omega_E]{\nabla v_h} \\ & \leq \mathcal{C} \sum_{E\in\Th} h^{s+1}_E \frac{\norm[\sob{s+\frac12}{\infty}{\partial E}]{\beta \cdot \n{}}}{\sqrt{\Kmin[E]}} \norm[s+1,\omega_E]{u} \ennorm[\K\beta,\omega_E]{v_h} \,. \end{split} \end{equation*} \end{proof} \subsection{Well-posedness of the discrete problem} \label{sec:coercivity} The following theorem proves the well-posedness of the discrete formulation. \begin{theorem}[Coercivity of $\Bhsupg{}{}$] \label{lem:coercivity_Bh} For any $v_h\in V_h$, \begin{equation} \label{eq:coercivity_Bh} \Bsupg[h]{v_h}{v_h} \geq \min\left\{\frac14 ,\frac{\sigma_\ast}{2}\right\} \frac{1-C_\tau}{2} \ennorm[\K\beta\gamma]{v_h}^2 \,, \end{equation} where $C_\tau$ is the constant introduced in~\eqref{eq:deftau}. \end{theorem} \begin{proof} Let $v_h\in V_h$. By definition \eqref{eq:defbh}, it holds that \begin{equation*} \b{v_h}{v_h} = \frac12\sum_{E\in\Th} \left[ \scal[E]{\beta\cdot\proj{k-1}{}\nabla v_h}{\proj{k-1}{}v_h} - \scal[E]{\proj{k-1}{}v_h}{\beta\cdot\proj{k-1}{}\nabla v_h} \right]= 0 \,.
\end{equation*} Moreover, using the Cauchy-Schwarz and Young inequalities we find that \begin{equation*} \begin{split} \tau_E\abs{\scal[E]{\gamma \proj{k-1}{} v_h}{\beta\cdot\proj{k-1}{} \nabla v_h}} &\leq \sqrt{\gamma_E}\tau_E\norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h} \norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h} \\ &\leq \frac{1}{2} \norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h}^2 + \frac{\gamma_E\tau_E^2}{2} \norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2\,, \end{split} \end{equation*} which implies that \begin{equation} \tau_E\scal[E]{\gamma\proj{k-1}{} v_h}{\beta\cdot\proj{k-1}{}\nabla v_h} \geq - \frac{1}{2} \norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h}^2 - \frac{\gamma_E\tau_E^2}{2} \norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2\,. \label{eq:Bsupg:coercivity:proof:00} \end{equation} The inverse inequality \eqref{eq:estimnormlapl} implies that \begin{equation} \begin{split} \tau_E\norm[E]{\div\left(\K \proj{k-1}{}\nabla v_h\right)}^2 &\leq \frac{\tilde{C}_k^E h^2_E}{\K[E]} \norm[E]{\div\left(\K \proj{k-1}{} \nabla v_h\right)}^2 \leq \frac{1}{\K[E]}\norm[E]{\K\proj{k-1}{}\nabla v_h}^2 \\ &\leq \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2, \label{eq:Bsupg:coercivity:proof:10} \end{split} \end{equation} since $\norm[E]{\K\proj{k-1}{}\nabla v_h}\leq\sqrt{\K[E]}\norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}$. Using the definition of $\Bsupg[h]{}{}$, cf.~\eqref{eq:defBhsupg}, the Cauchy-Schwarz inequality, the inverse inequality \eqref{eq:estimnormlapl}, and inequalities~\eqref{eq:Bsupg:coercivity:proof:00}, \eqref{eq:Bsupg:coercivity:proof:10} and \eqref{eq:deftau}, we have \begin{equation*} \begin{split} &\Bsupg[h]{v_h}{v_h} = \sum_{E\in\Th}\left\{ \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \tau_E\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 \right. \\ &\qquad + \shE{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h} \\ &\qquad + \norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h}^2 + \tau_E\scal[E]{\gamma \proj{k-1}{}v_h}{\beta\cdot\proj{k-1}{}\nabla v_h} \\ &\left.
\qquad - \tau_E \scal[E]{\div\left(\K\proj{k-1}{}\nabla v_h\right)}{\beta\cdot\proj{k-1}{}\nabla v_h} \right\} \\ &\quad \geq \sum_{E\in\Th} \left\{ \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \left(1-\frac{\gamma_E\tau_E}{2}\right) \tau_E \norm[E]{\beta\cdot \proj{k-1}{} \nabla v_h}^2 \right. \\ &\qquad+ \shE{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h} + \left(1-\frac{1}{2}\right)\norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h}^2 \\ & \left. \qquad - \tau_E \norm[E]{\div\left(\K\proj{k-1}{}\nabla v_h\right)} \norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h} \right\} \\ &\quad \geq \sum_{E\in\Th} \left\{ \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \left(\frac{1}{2}-\frac{C_{\tau}}{2}\right) \tau_E \norm[E]{\beta\cdot \proj{k-1}{} \nabla v_h}^2 \right. \\ &\qquad+ \shE{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h} + \frac{1}{2} \norm[E]{\sqrt{\gamma} \proj{k-1}{} v_h}^2 \\ &\left. \qquad - \frac12\tau_E\norm[E]{\div \left(\K\proj{k-1}{} \nabla v_h \right)}^2 \right\} \\ &\quad \geq \sum_{E\in\Th}\left\{ \frac{1}{2} \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \left( \frac{1}{2}-\frac{C_{\tau}}{2} \right) \tau_E\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 \right. \\ &\left. \qquad + \shE{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h} + \frac{1}{2} \norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h}^2\right\} \\ &\quad \geq \frac{1-C_\tau}{2} \sum_{E\in\Th} \left( \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \tau_E\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 \right. \\ &\qquad \left. + \shE{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h} + \norm[E]{\sqrt{\gamma} \proj{k-1}{}v_h}^2 \right) \,.
\end{split} \end{equation*} Next, using the coercivity of the VEM stabilization in \eqref{eq:vemstab_coercivity} we get $\forall E\in\Th$, \begin{equation*} \begin{split} &\norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \tau_E\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 + \shE{\left(I-\proj{k-1}{}\right) v_h}{\left(I-\proj{k-1}{}\right) v_h} \\ &\quad \geq \frac12 \a[h]{v_h}{v_h}+ \frac12 \left(\norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \tau_E \norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 \right. \\ & \qquad \left. + \sigma_\ast \left( \K[E]+\tau_E \beta_E^2 \right) \norm[E]{\nabla v_h - \nabla \proj{k-1}{} v_h}^2 \right) \\ & \quad \geq \frac12 \a[h]{v_h}{v_h} + \min\left\{\frac12,\sigma_\ast\right\} \left(\norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \tau_E\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 \right. \\ &\qquad \left. + \left(\K[E] + \tau_E \beta_E^2\right) \norm[E]{\nabla v_h - \proj{k-1}{} \nabla v_h }^2 \right) \\ & \quad \geq \frac12 \a[h]{v_h}{v_h} + \min\left\{\frac12,\sigma_\ast\right\} \left(\norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \tau_E\norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 \right. \\ &\qquad \left. + \norm[E]{\sqrt{\K} \left( \nabla v_h - \proj{k-1}{} \nabla v_h \right)}^2 + \tau_E \norm[E]{\beta \cdot \left( \nabla v_h - \proj{k-1}{} \nabla v_h \right) }^2 \right) \\ &\quad \geq \min\left\{ \frac{1}{4},\frac{\sigma_\ast}{2} \right\} \left( \a[h]{v_h}{v_h} + \a{v_h}{v_h} \right) \,. \end{split} \end{equation*} In the last line we use the following inequalities, both consequences of the algebraic bound $\norm{p}^2+\norm{q-p}^2\geq\frac12\norm{q}^2$: \begin{align*} \norm[E]{\sqrt{\K}\proj{k-1}{}\nabla v_h}^2 + \norm[E]{\sqrt{\K}\left( \nabla v_h - \proj{k-1}{} \nabla v_h \right) }^2 &\geq \frac12\norm[E]{\sqrt{\K}\nabla v_h}^2\,, \\ \tau_E\left( \norm[E]{\beta\cdot\proj{k-1}{}\nabla v_h}^2 + \norm[E]{\beta \cdot \left( \nabla v_h - \proj{k-1}{} \nabla v_h \right) }^2 \right) &\geq \frac{\tau_E}{2}\norm[E]{\beta\cdot\nabla v_h}^2 \,.
\end{align*} \end{proof} \subsection{A priori error estimates} \label{sec:aprior_estim} Here we prove a priori error estimates showing that the stabilized formulation of the problem converges with optimal rates. Several constants in the error inequalities are labeled to track their dependence on the local problem coefficients. \begin{theorem} \label{teo:apriori} Let $u\in\sobh{s+1}{\Th}\cap\sobh[0]{1}{\Omega}$, $2\leq s+1\leq k+1$, be the solution of the variational problem~\eqref{eq:exvarform:supg} with $f\in\sobh{s-1}{\Omega}$, $\K\in\big[\sob{s+1}{\infty}{\Omega}\big]^{d\times d}$, $\beta\in\left[\sob{s+1}{\infty}{\Omega}\right]^{d}$ and $\gamma\in\sob{s+1}{\infty}{\Omega}$. Let $u_h\in V_h$ be the solution of the VEM~\eqref{eq:vemvarform_supg} under the mesh assumptions of Section~\ref{sec:mesh}. Then, for $h$ sufficiently small, it holds that \begin{equation} \label{eq:apriori_estim_ennorm} \begin{split} \ennorm[\K\beta\gamma]{u-u_h} &\leq \mathcal{C} h^s \left\{ \max_{E\in\Th} \left( \frac{\norm[\sob{s}{\infty}{E}]{\K}}{\sqrt{\Kmin[E]}} , \sqrt{h_E\beta_E} , h_E\sqrt{\gamma_E} , \mathcal{C}^{nc}_{a,E} , \mathcal{C}^{nc,s+1}_{b,E} , \mathcal{C}^{nc}_{c,E} , \mathcal{C}^{nc}_{d,E} , \mathcal{K}^{nc}_{\mathcal{N},E} ,\right. \right. \\ &\qquad \left. \left.
\vphantom{\frac{\norm[\sob{s}{\infty}{E}]{\K}}{\sqrt{\Kmin[E]}}} \mathcal{K}^{nc}_{b,E} \right) \norm[s+1]{u} + \max_{E\in\Th} \mathcal{C}^{nc}_{f,E} \right\} \,, \end{split} \end{equation} where $\mathcal{C}^{nc}_{a,E}$, $\mathcal{C}^{nc,s+1}_{b,E}$, $\mathcal{C}^{nc}_{c,E}$, $\mathcal{C}^{nc}_{d,E}$, $\mathcal{K}^{nc}_{\mathcal{N},E}$ and $\mathcal{K}^{nc}_{b,E}$ are defined by \eqref{eq:defCNCa}, \eqref{eq:defCNCb}, \eqref{eq:defCNCc}, \eqref{eq:defCNCd}, \eqref{eq:defCNCNh} and \eqref{eq:defKNCb} respectively, and \begin{equation} \label{eq:defCNCf} \mathcal{C}^{nc}_{f,E} = \max \left\{ \frac{\seminorm[s-1,E]{f - \proj{0}{}f}}{\sqrt{\Kmin[E]}} , \frac{h^{-1}_E\tau_E \seminorm[s-1,E]{f\beta - \proj{0}{f\beta}}} {\sqrt{\Kmin[E]}} \right\} \,. \end{equation} \end{theorem} \begin{proof} \noindent First, by using the triangle inequality we have \begin{equation*} \ennorm[\K\beta\gamma]{u-u_h} \leq \ennorm[\K\beta\gamma]{u-u_I} + \ennorm[\K\beta\gamma]{u_h - u_I} \,. \end{equation*} The first term is bounded using~\eqref{eq:H1:apriori:proof:00} with $\psi=u$. It remains to estimate the norm of $e_h:= u_h-u_I$. Since $e_h\in V_h$, by \eqref{eq:coercivity_Bh}, with $\alpha$ denoting the coercivity constant therein, we know that \begin{equation} \begin{split} \alpha \ennorm[\K\beta\gamma]{e_h}^2 &\leq \Bhsupg{u_h - u_I}{e_h} = \Fsupg[h]{e_h} - \Bhsupg{u_I}{e_h} \\ & = \Fsupg[h]{e_h} - \Fsupg{e_h} - \Nh{u}{e_h} + \Bsupg{u}{e_h} - \Bhsupg{u_I}{e_h} \\ & \leq \abs{\Fsupg[h]{e_h} - \Fsupg{e_h}} + \abs{\Nh{u}{e_h}} + \abs{\Bsupg[h]{u-u_I}{e_h}} \\ & \quad + \abs{ \Bsupg{u}{e_h} - \Bsupg[h]{u}{e_h} } \,.
\label{eq:apriori_firsteqn} \end{split} \end{equation} We estimate the first term as follows: \begin{equation*} \label{eq:apriori_Fh-F} \begin{split} &\sum_{E\in\Th}\abs{\scal[E]{f}{e_h - \proj{k-1}{}e_h} + \tau_E\scal[E]{f}{\beta\cdot\left(\nabla e_h - \proj{k-1}{}\nabla e_h\right)}} \\ &\quad \leq \sum_{E\in\Th}\abs{\scal[E]{f - \proj{0}{}f}{e_h - \proj{k-1}{}e_h}} + \tau_E \abs{\scal[E]{f\beta - \proj{0}{f\beta}}{\nabla e_h}} \\ &\quad = \sum_{E\in\Th}\abs{\scal[E]{f - \proj{0}{}f - \proj{k-1}{f - \proj{0}{}f}}{e_h - \proj{k-1}{}e_h}} \\ &\qquad + \tau_E \abs{\scal[E]{f\beta - \proj{0}{f\beta} - \proj{k-1}{f\beta - \proj{0}{f\beta}}}{\nabla e_h}} \\ &\quad \leq \sum_{E\in\Th}\norm[E]{f - \proj{0}{}f - \proj{k-1}{f - \proj{0}{}f}} \norm[E]{e_h - \proj{k-1}{}e_h} \\ &\qquad + \tau_E \norm[E]{f\beta - \proj{0}{f\beta} - \proj{k-1}{f\beta - \proj{0}{f\beta}}} \norm[E]{\nabla e_h} \\ &\quad \leq \mathcal{C} h^s \sum_{E\in\Th} \left(\seminorm[s-1,E]{f - \proj{0}{}f } + h^{-1}_E\tau_E \seminorm[s-1,E]{f\beta - \proj{0}{f\beta}} \right) \norm[E]{\nabla e_h} \\ &\quad \leq \mathcal{C} h^s \sum_{E\in\Th} \frac{ \seminorm[s-1,E]{f - \proj{0}{}f} + h^{-1}_E\tau_E \seminorm[s-1,E]{f\beta - \proj{0}{f\beta}} }{\sqrt{\Kmin[E]}} \ennorm[\K\beta,E]{e_h} \,. \end{split} \end{equation*} Using the continuity estimate \eqref{eq:ah_continuity} to bound $\a[h]{}{}$, \eqref{eq:bh_continuity} to bound $\b[h]{}{}$, \eqref{eq:ch_continuity} to bound $\c[h]{}{}$, \eqref{eq:dh_continuity} to bound $\d[h]{}{}$, and the estimate of the VEM interpolant~\eqref{eq:H1:apriori:proof:00}, we estimate the third term as follows: \begin{equation*} \begin{split} \abs{\Bhsupg{u-u_I}{e_h}} &\leq \mathcal{C} \left\{ \ennorm[\K\beta\gamma]{u-u_I} \ennorm[\K\beta\gamma]{e_h} + \left(\max_{E\in\Th} \tau_E^{-\frac12} \norm{u-u_I} \right. \right.
\\ &\quad \left.\left.\vphantom{\max_{E\in\Th} \tau_E^{-\frac12}} + \max_{E\in\Th} \left(\mathcal{C}^{nc,1}_{b,E} + \mathcal{K}^{nc}_{b,E} \right) \norm{\nabla (u-u_I) }\right) \ennorm[\K\beta]{e_h} \right\} \\ &\leq \mathcal{C} h^s \max_{E\in\Th} \left\{ \sqrt{\K[E]}, \sqrt{h_E\beta_E}, h_E\sqrt{\gamma_E}, \mathcal{C}^{nc,1}_{b,E} , \mathcal{K}^{nc}_{b,E} \right\} \norm[s+1]{u} \ennorm[\K\beta\gamma]{e_h} \,. \end{split} \end{equation*} The proof of \eqref{eq:apriori_estim_ennorm} is concluded by using the above estimates, the estimate \eqref{eq:estim_Nh} on the non-conformity term and \eqref{eq:estim_Bsupg-Bsupgh}. \end{proof} \begin{remark} The second argument of the max in \eqref{eq:defCNCf} can be estimated as follows, using \eqref{eq:estim_ab-Pab}: \begin{equation*} \begin{split} \frac{h^{-1}_E\tau_E \seminorm[s-1,E]{f\beta - \proj{0}{f\beta}}}{\sqrt{\Kmin[E]}} \leq \mathcal{C}\frac{ \norm[\sob{s-1}{\infty}{E}]{\beta - \proj{0}{}\beta} \norm[s-1,E]{f} + \norm[\sob{s-1}{\infty}{E}]{\beta} \norm[s-1,E]{f-\proj{0}{}f}}{\beta_E \sqrt{\Kmin[E]}} \,. \end{split} \end{equation*} \end{remark} \begin{remark} When we consider constant coefficients and a constant right-hand side, all the non-consistency terms in \eqref{eq:apriori_estim_ennorm} vanish, yielding the following estimate: \begin{equation*} \begin{split} \ennorm[\K\beta\gamma]{u-u_h} &\leq \mathcal{C} h^s \max_{E\in\Th} \left( \sqrt{\K[E]} \,, \sqrt{h_E\beta_E} \,, h_E\sqrt{\gamma_E} , \mathcal{K}^{nc}_{\mathcal{N},E}, \mathcal{K}^{nc}_{b,E} \right) \norm[s+1]{u} \,. \end{split} \end{equation*} Moreover, if we consider a conforming discretization, Theorem \ref{teo:apriori} proves a robust estimate with respect to the P\'eclet number: \begin{equation*} \ennorm[\K\beta\gamma]{u-u_h} \leq \mathcal{C} h^s \max_{E\in\Th} \left( \sqrt{\K[E]}\,, \sqrt{h_E\beta_E} \,, h_E\sqrt{\gamma_E} \right) \norm[s+1]{u} \,, \end{equation*} as obtained for classical finite elements.
\end{remark} \section{Numerical Results} \label{sec:numerics} \newcommand{\MeshONE} {$\mathcal{M}_1$} \newcommand{\MeshTWO} {$\mathcal{M}_2$} \newcommand{\MeshTHREE}{$\mathcal{M}_3$} \newcommand{\MeshFOUR} {$\mathcal{M}_4$} \newcommand{\HAT}[1]{\widehat{#1}} \newcommand{\nR} {N_{P}} \newcommand{\nF} {N_{F}} \newcommand{\nV} {N_{V}} \newcommand{\ndof} {\#\chi} \newcommand{\hmax} {h_{\text{max}}} The numerical experiments of this section are aimed at confirming the convergence rates predicted by the \emph{a priori} analysis developed in the previous sections and comparing the performance of the nonconforming VEM with that of the conforming VEM. In a preliminary stage, the consistency of the numerical method, i.e., the exactness of these methods for polynomial solutions, has been tested numerically by solving the elliptic equation with boundary and source data determined by the monomials $u(x,y)=x^{\mu}y^{\nu}$ on different sets of polygonal meshes and for all possible combinations of nonnegative integers $\mu$ and $\nu$ such that $\mu+\nu\leq k$, with $k=1,2,3$. In all cases, the error magnitude was within machine precision, thus confirming the consistency of the VEM. To study the accuracy of the method we solve the convection-reaction-diffusion equation on the domain $\Omega=]0,1[\times]0,1[$. The variable coefficients of the equation are given by \begin{align} \label{eq:accuracy_test-K} \K(x,y) &= \alpha \left[ \begin{array}{cc} 1+x^2 & xy \\ xy & 1+y^2 \end{array} \right], \quad \alpha=10^{-7}, \\[1em] \label{eq:accuracy_test-beta} \beta(x,y) &= \big(\cos(2\pi x),\sin(2\pi y)\big)^T, \\[1.em] \gamma(x,y) &= \exp(x+y). \label{eq:accuracy:test} \end{align} Since the P\'{e}clet number here is in the range $\big[10^6,10^7\big]$, all calculations are in the convection-dominated regime.
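The following heuristic estimate indicates where this range comes from; it is a rough scaling argument (assuming the domain size $L=1$, $\abs{\beta}\lesssim\sqrt{2}$ and $\lambda_{\min}(\K)\sim\alpha$ on $\overline{\Omega}$), not a sharp bound: \begin{equation*} \mathrm{Pe} \,\sim\, \frac{\sup_{\Omega}\abs{\beta}\, L}{\inf_{\Omega}\lambda_{\min}(\K)} \,\approx\, \frac{1}{\alpha} \,=\, 10^{7}\,, \end{equation*} and the local variation of $\abs{\beta}$ and $\lambda_{\min}(\K)$ over $\Omega$ places the effective P\'eclet number between roughly $10^{6}$ and $10^{7}$.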
The forcing term and the Dirichlet boundary conditions are set such that the exact solution is \begin{align} u(x,y) = \sin(2\pi x)\sin(2\pi y)+ \,x^5+\,y^5+1. \end{align} The performance of the methods presented above is investigated by evaluating the rates of convergence on four different sequences of unstructured meshes, labeled by~\MeshONE{}, \MeshTWO{}, $\mathcal{M}_3${}, and \MeshFOUR{} respectively. The top panels of Fig.~\ref{fig:Meshes} show the first mesh of each sequence and the bottom panels show the mesh of the first refinement. \begin{figure} \centering \begin{tabular}{cccc} \includegraphics[scale=0.135]{M400-mesh2D_0.pdf} & \includegraphics[scale=0.135]{Md201-mesh2D_0.pdf}& \includegraphics[scale=0.135]{M103-mesh2D_0.pdf} & \includegraphics[scale=0.135]{M901-mesh2D_0.pdf} \\[1em] \includegraphics[scale=0.135]{M400-mesh2D_1.pdf} & \includegraphics[scale=0.135]{Md201-mesh2D_1.pdf}& \includegraphics[scale=0.135]{M103-mesh2D_1.pdf} & \includegraphics[scale=0.135]{M901-mesh2D_1.pdf} \\[1em] \text{(a)} & \text{(b)} & \text{(c)} & \text{(d)} \end{tabular} \caption{Base mesh (top row) and first refinement (bottom row) of the four mesh families: (a) regular hexagonal mesh; (b) remapped hexagonal mesh; (c) highly distorted quadrilateral mesh; (d) non-convex regular mesh. } \label{fig:Meshes} \vspace{-0.25cm} \end{figure} The meshes in \MeshONE{} are built by partitioning the domain $\Omega$ into regular hexagonal cells. At the boundaries of $\Omega$ each mesh is completed by half hexagonal cells. The meshes in \MeshTWO{} are built as follows. First, we determine a primal mesh by remapping the position $(\HAT{x},\HAT{y})$ of the nodes of a uniform square partition of $\Omega$ by the smooth coordinate transformation: \begin{align*} x &= \HAT{x} + (1\slash{10}) \sin(2\pi\HAT{x})\sin(2\pi\HAT{y}),\\ y &= \HAT{y} + (1\slash{10}) \sin(2\pi\HAT{x})\sin(2\pi\HAT{y}).
\end{align*} The corresponding mesh of \MeshTWO{} is built from the primal mesh by splitting each quadrilateral cell into two triangles and connecting the barycenters of adjacent triangular cells by a straight segment. The mesh construction is completed at the boundary by connecting the barycenters of the triangular cells close to the boundary to the midpoints of the boundary edges and the latter to the boundary vertices of the primal mesh. The meshes in $\mathcal{M}_3${} are taken from the mesh suites of the FVCA-6 Benchmark~\cite{Eymard-Henri-Herbin-Hubert-Klofkorn-Manzini:2011:FVCA6:benchmark:proc-peer}, and are formed by highly skewed quadrilateral cells. The meshes in \MeshFOUR{} are obtained by filling $\Omega$ with a suitably scaled non-convex octagonal reference cell. All the meshes are parametrised by the number of partitions in each direction. The starting mesh of every sequence is built from a $5\times 5$ regular grid, and the refined meshes are obtained by doubling this resolution. All errors, computed as in \cite{Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a,Cangiani-Manzini-Sutton:2016}, are reported in Figs.~\ref{fig:M400}, \ref{fig:Md201}, \ref{fig:M103}, and~\ref{fig:M901}. Error values are labeled by a circle for the nonconforming VEM and by a square for the conforming VEM, both stabilized by the method developed in \cite{Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a}. Each figure shows the relative errors with respect to the maximum diameter of the discretization, in the $L^2$ norms (left panel) and in the $H^1$ norms (right panel). In the same figures we report the slopes $k+1$ for the $\lebl{}$-norm and $k$ for the $\sobh{1}{}$-norm. The numerical results confirm the theoretical rate of convergence for the $\sobh{1}{}$-norm. The conforming and nonconforming VEMs provide very close results on any fixed mesh, with the conforming method slightly outperforming the nonconforming VEM in a few cases.
To test the robustness of the approach with respect to very large P\'eclet numbers, we have performed some tests with values of $\K$ and $\beta$ in the form of \eqref{eq:accuracy_test-K} and \eqref{eq:accuracy_test-beta}, with $\alpha$ spanning a wide range of orders of magnitude ($\alpha\in\left\{10^{-i}\colon i=4,\ldots,11\right\}$), with $\gamma(x,y)=0$. In Figure \ref{fig:H1error_smallK} we display the $\sobh{1}{}$ approximation error plotted with respect to the values of $\alpha$, on two of the meshes previously used. We can see that, as far as the presented tests are concerned, the error is bounded independently of the values of $\alpha$, even on non-convex polygons, thus confirming the robustness of the approach. \subsection{Approximation of internal and boundary layers} The second test is the classic problem from~\cite{Franca-Frey-Hughes:1992}. The computational domain and the boundary conditions are as shown in Figure~\ref{fig:test-2:domain}. The velocity forms an angle $\theta$ with the $x$-axis, and propagates the non-homogeneous boundary condition $u=1$ inside $\Omega$, thus generating an internal discontinuity, which is numerically approximated by an internal layer, a sharp transition between the constant solution states $u=0$ and $u=1$. The homogeneous boundary condition at the top of the computational domain produces a boundary layer. The diffusion coefficient is constant on $\Omega$ and given by $\K=10^{-6}$, while the velocity is $\beta=(\cos\,\theta,\sin\,\theta)$, and $\theta=\arctan(1)$. The P\'eclet number is about $10^6$. We solve this problem using the remapped and the regular hexagonal meshes (see plots $(a)-(b)$ of Figure~\ref{fig:Meshes}), with resolution $40\times 40$ (third refinement).
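A back-of-the-envelope computation (using the common definition of the local P\'eclet number $\mathrm{Pe}_E = \abs{\beta}\, h_E \slash (2\K)$, here taken only as a working convention, with $h_E\approx 1\slash 40$ and $\abs{\beta}=1$) quantifies how strongly convection dominates on these meshes: \begin{equation*} \mathrm{Pe}_E \,\approx\, \frac{1\cdot(1\slash 40)}{2\cdot 10^{-6}} \,=\, 1.25\times 10^{4} \,\gg\, 1\,, \end{equation*} so the discretization operates well inside the convection-dominated regime, where the SUPG term is essential.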
Figures~\ref{fig:test-case-2b} and~\ref{fig:test-case-2c} show the results obtained with the conforming VEM \cite{Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a} (left panels) and the nonconforming VEM (right panels) for the polynomial degrees $k=1$ and $k=3$. The results are quite similar to those presented in~\cite{Franca-Frey-Hughes:1992,Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a}, and are consistent with the expected behaviour of the method. Undershoots and overshoots are present near the internal layer, as is normal for this problem. However, by increasing the accuracy order of the VEM, the numerical solution becomes smoother. A thorough inspection of these plots also reveals that the nonconforming VEM tends to provide a sharper internal layer than that of the conforming VEM at the price of a relatively larger amplitude of the spurious oscillations in the transition region. \begin{figure}[t] \hfill \begin{tabular}{cc} \begin{overpic}[width=0.40\textwidth]{plot-res-01.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $L^2$ Approximation Error}\end{sideways}} \put(71,66){\textbf{2}} \put(79,44){\textbf{3}} \put(79,23){\textbf{4}} \end{overpic} &\quad \begin{overpic}[width=0.40\textwidth]{plot-res-02.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $H^1$ Approximation Error}\end{sideways}} \put(71,78.5){$\mathbf{1}$} \put(71,54 ){$\mathbf{2}$} \put(71,28 ){$\mathbf{3}$} \end{overpic} \end{tabular} \caption{Relative approximation errors obtained using the conforming VEM (dashed lines labeled with squares) and the nonconforming VEM (solid lines labeled with circles) for $k=1,2,3$ (from top to bottom). Calculations are carried out using the regular hexagonal meshes of Figure~\ref{fig:Meshes}(a).
Errors are measured in the $L^2$ norm (left panels) and $H^1$ norm (right panels), and plotted versus $h$.} \label{fig:M400} \vspace{-0.25cm} \end{figure} \begin{figure}[t] \hfill \begin{tabular}{cc} \begin{overpic}[width=0.40\textwidth]{plot-res-03.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $L^2$ Approximation Error}\end{sideways}} \put(72,69){\textbf{2}} \put(72,52){\textbf{3}} \put(72,30){\textbf{4}} \end{overpic} &\quad \begin{overpic}[width=0.40\textwidth]{plot-res-04.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $H^1$ Approximation Error}\end{sideways}} \put(72,79){$\textbf{1}$} \put(72,54){$\textbf{2}$} \put(72,28){$\textbf{3}$} \end{overpic} \end{tabular} \caption{Relative approximation errors obtained using the conforming VEM (dashed lines labeled with squares) and the nonconforming VEM (solid lines labeled with circles) for $k=1,2,3$ (from top to bottom). Calculations are carried out using the remapped hexagonal meshes of Figure~\ref{fig:Meshes}(b). Errors are measured in the $L^2$ norm (left panels) and $H^1$ norm (right panels), and plotted versus $h$.} \label{fig:Md201} \vspace{-0.25cm} \end{figure} \begin{figure}[t] \hfill \begin{tabular}{cc} \begin{overpic}[width=0.40\textwidth]{plot-res-05.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $L^2$ Approximation Error}\end{sideways}} \put(73,79){\textbf{2}} \put(73,56){\textbf{3}} \put(72,40){\textbf{4}} \end{overpic} &\quad \begin{overpic}[width=0.40\textwidth]{plot-res-06.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $H^1$ Approximation Error}\end{sideways}} \put(72,81){\textbf{1}} \put(72,62){\textbf{2}} \put(72,37){\textbf{3}} \end{overpic} \end{tabular} \caption{Relative approximation errors obtained using the conforming VEM (dashed lines labeled with squares) and the nonconforming VEM (solid lines labeled with circles) for $k=1,2,3$ (from top to bottom).
Calculations are carried out using the highly distorted quadrilateral meshes of Figure~\ref{fig:Meshes}(c). Errors are measured in the $L^2$ norm (left panels) and $H^1$ norm (right panels), and plotted versus $h$.} \label{fig:M103} \vspace{-0.25cm} \end{figure} \begin{figure}[t] \hfill \begin{tabular}{cc} \begin{overpic}[width=0.40\textwidth]{plot-res-07.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $L^2$ Approximation Error}\end{sideways}} \put(72,67){\textbf{2}} \put(72,50){\textbf{3}} \put(72,32){\textbf{4}} \end{overpic} &\quad \begin{overpic}[width=0.40\textwidth]{plot-res-08.pdf} \put(31,-5){\textbf{Mesh size $h$}} \put(-5,13){\begin{sideways}\textbf{ $H^1$ Approximation Error}\end{sideways}} \put(72,79){\textbf{1}} \put(72,55){\textbf{2}} \put(72,28){\textbf{3}} \end{overpic} \end{tabular} \caption{Relative approximation errors obtained using the conforming VEM (dashed lines labeled with squares) and the nonconforming VEM (solid lines labeled with circles) for $k=1,2,3$ (from top to bottom). Calculations are carried out using the non convex meshes of Figure~\ref{fig:Meshes}(d).
Errors are measured in the $L^2$ norm (left panels) and $H^1$ norm (right panels), and plotted versus $h$.} \label{fig:M901} \vspace{-0.25cm} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \hspace{0.2cm}\textbf{Conforming VEM} & \hspace{0.6cm}\textbf{Nonconforming VEM}\\[0.25em] \begin{overpic}[width=0.40\textwidth]{plot-res-09.pdf} \put(15,84){$\mathbf{k=1}$} \put(15,60){$\mathbf{k=2}$} \put(15,32){$\mathbf{k=3}$} \put(29,-5){\textbf{Coefficient $\alpha$}} \put(-5,17){\begin{sideways}\textbf{ $H^1$ Approximation Error}\end{sideways}} \end{overpic} &\quad \begin{overpic}[width=0.40\textwidth]{plot-res-10.pdf} \put(15,84.0){$\mathbf{k=1}$} \put(15,60.5){$\mathbf{k=2}$} \put(15,32.5){$\mathbf{k=3}$} \put(29,-5){\textbf{Coefficient $\alpha$}} \put(-5,17){\begin{sideways}\textbf{$H^1$ Approximation Error}\end{sideways}} \end{overpic} \\[2em] \multicolumn{2}{c}{\begin{large}\textbf{Mesh of regular hexagons (a)}\end{large}} \\[1.5em] \hspace{0.2cm}\textbf{Conforming VEM} & \hspace{0.6cm}\textbf{Nonconforming VEM}\\[0.25em] \begin{overpic}[width=0.40\textwidth]{plot-res-11.pdf} \put(15,83){$\mathbf{k=1}$} \put(15,60){$\mathbf{k=2}$} \put(15,30){$\mathbf{k=3}$} \put(29,-5){\textbf{Coefficient $\alpha$}} \put(-5,17){\begin{sideways}\textbf{ $H^1$ Approximation Error }\end{sideways}} \end{overpic} &\quad \begin{overpic}[width=0.40\textwidth]{plot-res-12.pdf} \put(15,83){$\mathbf{k=1}$} \put(15,60){$\mathbf{k=2}$} \put(15,30){$\mathbf{k=3}$} \put(29,-5){\textbf{Coefficient $\alpha$}} \put(-5,17){\begin{sideways}\textbf{ $H^1$ Approximation Error }\end{sideways}} \end{overpic} \\[2.em] \multicolumn{2}{c}{\begin{large}\textbf{Mesh of regular non-convex cells (d)}\end{large}} \\ \end{tabular} \caption{$H^1$ relative approximation error versus the viscous coefficient $\alpha\in[10^{-11},10^{-4}]$ using the first refined mesh of mesh families $(a)$ (top panels) and $(d)$ (bottom panels) for the test case with $\gamma(x,y)=0$.
The problem is solved by applying the conforming VEM (left panel) and nonconforming VEM (right panel) of degree $k=1$ (circles), $k=2$ (squares), $k=3$ (diamonds).} \label{fig:H1error_smallK} \end{figure} \begin{figure}[t] \centering \begin{overpic}[scale=0.27]{test2.pdf \put(95,-5){$\mathbf{x}$} \put(-5,95){$\mathbf{y}$} \put(76,-4){$\mathbf{x\!\!=\!\!1}$} \put(-14,82){$\mathbf{y\!\!=\!\!1}$} \put(7,17){$\mathbf{y\!\!=\!\!0.2}$} \put(35,-5){$\mathbf{u=1}$} \put(35,87){$\mathbf{u=0}$} \put(-20,10){$\mathbf{u=1}$} \put(-20,48){$\mathbf{u=0}$} \put(88, 42){$\mathbf{u=0}$} \put(30,54){\textbf{velocity}} \put(50,41){$\bm{\theta}$} \end{overpic} \vspace{0.25cm} \caption{Test~2: domain and boundary conditions.} \label{fig:test-2:domain} \vspace{-0.25cm} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[scale=0.14]{plot-res-13.pdf}& \includegraphics[scale=0.14]{plot-res-14.pdf}\\[0.175em] \multicolumn{2}{c}{$\mathbf{k=1}$}\\[1em] \includegraphics[scale=0.14]{plot-res-15.pdf}& \includegraphics[scale=0.14]{plot-res-16.pdf}\\[0.175em] \multicolumn{2}{c}{$\mathbf{k=3}$ \end{tabular} \caption{Test Case~2: conforming VEM (left panels) and nonconforming VEM (right panels) for $k=1$ and $k=3$ and using a regular hexagonal mesh (third refinement). } \label{fig:test-case-2b} \centering \begin{tabular}{cc} \includegraphics[scale=0.14]{plot-res-17.pdf}& \includegraphics[scale=0.14]{plot-res-18.pdf}\\[0.175em] \multicolumn{2}{c}{$\mathbf{k=1}$}\\[1em] \includegraphics[scale=0.14]{plot-res-19.pdf}& \includegraphics[scale=0.14]{plot-res-20.pdf}\\[0.175em] \multicolumn{2}{c}{$\mathbf{k=3}$ \end{tabular} \caption{Test Case~2: conforming VEM (left panels) and nonconforming VEM (right panels) for $k=1$ and $k=3$ and using a remapped hexagonal mesh (third refinement). 
}
\label{fig:test-case-2c}
\vspace{-0.25cm}
\end{figure}

\section{Conclusions}
\label{sec:conclusions}

In this paper, we proposed a nonconforming VEM for the advection-diffusion-reaction problem in the convection-dominated regime. Since the convective field is strong with respect to the diffusion term, we introduced an SUPG stabilization, extending to the nonconforming VEM the stabilization technique proposed in~\cite{Benedetto-Berrone-Borio-Pieraccini-Scialo:2016a}. The stabilization included in the virtual element formulation is a natural extension of the classical SUPG stabilization for the standard FEM. To ensure coercivity of the discrete operators, we modified the SUPG term by introducing an additional VEM stabilization of it. Optimal convergence rates are obtained from the convergence analysis under suitable assumptions on the regularity of the problem coefficients, the meshes, and the exact solution. The numerical results confirm the behaviour of the VEM expected from the theory, and the stabilizing effect of the additional SUPG term provides stable discrete solutions even for very large P\'eclet numbers, of the order of $10^6$.
\section{Introduction} Killing vector fields generate the isometries of a manifold, and under the Lie bracket of vector fields they constitute a Lie algebra called the symmetry algebra of the manifold. A manifold admitting a spin structure is called a spin manifold, and the isometries of a spin manifold are related to special types of spinors called geometric Killing spinors \cite{Lichnerowicz1, Alekseevsky Cortes1, Acik}. The squaring map of geometric Killing spinors produces Killing vector fields, and together they form a superalgebra structure on the manifold called the symmetry superalgebra \cite{Klinker}. The even part of the superalgebra is the symmetry algebra of Killing vector fields and the odd part is the space of geometric Killing spinors \cite{OFarrill}. The brackets of the superalgebra correspond to the Lie bracket of vector fields in the even-even case, the Lie derivative of spinor fields with respect to Killing vectors in the even-odd case and the squaring map of spinors in the odd-odd case. If all the brackets satisfy the Jacobi identities, then the symmetry superalgebra corresponds to a Lie superalgebra. Besides geometric Killing spinors, supergravity Killing spinors, which are the supersymmetry generators of bosonic supergravity theories, can also be used to construct symmetry superalgebras, called Killing superalgebras, on supergravity backgrounds that are solutions of the supergravity field equations in various dimensions \cite{OFarrill HackettJones Moutsopoulos Simon, OFarrill Santi, deMedeiros OFarrill Santi}. These Killing superalgebras are important tools in the classification problem of supergravity backgrounds in all dimensions \cite{OFarrill Meessen Philip, OFarrill HackettJones Moutsopoulos, OFarrill Hustler1, OFarrill Hustler2}.
They reduce to the symmetry superalgebras generated by geometric Killing spinors on constant curvature backgrounds, since in that case supergravity Killing spinors reduce to geometric Killing spinors. In a flat background, the symmetry superalgebra is called the Poincare superalgebra, and deformations of its subalgebras provide a way to find the Killing superalgebras of supergravity backgrounds \cite{OFarrill Santi1, OFarrill Santi2}. Antisymmetric generalizations of isometries to higher degree differential forms are Killing-Yano (KY) forms, which are called hidden symmetries of the manifold. They are related to the $p$-form Dirac currents of geometric Killing spinors, which generalize the squaring map of spinors to higher degree bilinears \cite{Acik Ertem1}. The Poincare superalgebra can be extended to include these hidden symmetries in a consistent superalgebra structure \cite{Alekseevsky Cortes Devchand Proeyen, Alekseevsky Cortes2}. This comes from the fact that KY forms constitute a graded Lie algebra under the Schouten-Nijenhuis (SN) bracket of differential forms on constant curvature manifolds \cite{Kastor Ray Traschen}. Moreover, on constant curvature manifolds, the symmetry superalgebras can also be extended to include odd degree KY forms \cite{Ertem1, Ertem2}. The even part of the extended superalgebras corresponds to the Lie algebra of odd degree KY forms under the SN bracket and the odd part is the space of geometric Killing spinors. The brackets of the superalgebra are the SN bracket for the even-even part, the symmetry operators of geometric Killing spinors, which generalize the Lie derivative on spinor fields on constant curvature manifolds, for the even-odd part, and the $p$-form Dirac currents of spinors for the odd-odd part. These extended superalgebras do not correspond to Lie superalgebras in general.
However, these extensions do not exhaust all the hidden symmetries constructed out of geometric Killing spinors, nor all the backgrounds on which they can be defined. In this paper, we generalize all of the above symmetry superalgebras and their extensions to include all the hidden symmetries generated by geometric Killing spinors, on all manifolds on which they can be defined, thereby exhausting all the possibilities. In this way, the algebraic structure of the hidden symmetries and geometric Killing spinors can be constructed for a given manifold admitting not only isometries but also their KY form generalizations. We show that the $p$-form Dirac currents of geometric Killing spinors correspond to special KY forms and special closed conformal KY (CCKY) forms, which satisfy certain special integrability conditions. The even or odd degree character of these forms is determined by the spinor inner product defined on the manifold and by the real or imaginary character of the geometric Killing spinor. We also prove that the CKY bracket defined in \cite{Ertem3} is the natural Lie algebra bracket for special KY and special CCKY forms. We construct the symmetry operators of geometric Killing spinors, which are relevant on all manifolds, by using special KY and special CCKY forms. We finally show that all of these constructions together form superalgebra structures which generalize the symmetry superalgebras to all manifolds admitting geometric Killing spinors, although they are not Lie superalgebras in general. The superalgebra structure is defined in all dimensions when the even part of the superalgebra consists of special odd KY forms and special even CCKY forms. However, if the even part of the superalgebra is the Lie algebra of special even KY forms and special odd CCKY forms, then the generalized symmetry superalgebra is defined only in even dimensions. The brackets of the superalgebra structures in the two cases differ slightly from each other.
We also demonstrate the structure of these generalized symmetry superalgebras on the examples of weak $G_2$ and nearly K\"{a}hler manifolds. The paper is organized as follows. In Section 2, it is proved that the $p$-form Dirac currents of geometric Killing spinors correspond to special KY and special CCKY forms. The construction of the Lie algebra structure of special KY and special CCKY forms is given in Section 3. The symmetry operators of geometric Killing spinors constructed out of special KY and special CCKY forms are the subject of Section 4. Section 5 includes the theorems proving the generalizations of the symmetry superalgebras that include all the hidden symmetries of a manifold. The examples of generalized symmetry superalgebras for weak $G_2$ and nearly K\"{a}hler manifolds are given in Section 6. Section 7 concludes the paper. \section{Bilinears of Killing Spinors} We consider an $n$-dimensional spin manifold $M$, namely a manifold on which one can define a spin structure. Besides the exterior bundle $\Lambda M$, we can also construct the Clifford bundle $Cl(M)$ and the spinor bundle $\Sigma M$ on $M$. The sections of $Cl(M)$ correspond to inhomogeneous differential forms and the sections of $\Sigma M$ are spinor fields. For $\psi, \phi \in\Sigma M$, we define the spin-invariant inner product $(\,,\,)$ with the property \begin{equation} (\psi,\phi)=\pm(\phi,\psi)^j \end{equation} where $j$ is an involution in the relevant Clifford algebra. So, $j$ can be the identity (Id), complex conjugation ($^*$), quaternionic conjugation ($\bar{\,\,}$) or quaternionic reversion ($\widehat{\,\,}$), depending on whether the relevant Clifford algebra corresponds to a matrix algebra over $\mathbb{D}=\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$ in the relevant dimension \cite{Benn Tucker}.
For $\psi,\phi\in\Sigma M$, $\alpha\in Cl(M)$ and $c\in\mathbb{D}$, the inner product has the following properties \begin{eqnarray} (\psi,\alpha.\phi)&=&(\alpha^{\cal{J}}.\psi,\phi)\\ (c\psi,\phi)&=&c^j(\psi,\phi) \end{eqnarray} where $.$ denotes the Clifford product and ${\cal{J}}$ is an involution on $Cl(M)$. The involution $\eta:Cl(M)\rightarrow Cl(M)$ gives the $\mathbb{Z}_2$ gradation $Cl(M)=Cl^0(M)\oplus Cl^1(M)$ and has the property $\eta(\alpha.\beta)=\eta\alpha.\eta\beta$ for $\alpha,\beta\in Cl(M)$. For $\omega\in Cl(M)$, if $\eta\omega=\omega$, then $\omega\in Cl^0(M)$, and if $\eta\omega=-\omega$, then $\omega\in Cl^1(M)$. If $\omega$ is a homogeneous $p$-form, then the action of $\eta$ on $\omega$ is given by $\eta\omega=(-1)^p\omega$. The anti-involution $\xi:Cl(M)\rightarrow Cl(M)$ has the property $\xi(\alpha.\beta)=\xi\beta.\xi\alpha$ for $\alpha,\beta\in Cl(M)$. For a homogeneous $p$-form $\omega$, the action of $\xi$ is given by $\xi\omega=(-1)^{\lfloor p/2 \rfloor}\omega$, where $\lfloor\,\rfloor$ denotes the floor function, which gives the integer part of its argument. So, ${\cal{J}}$ in (2) can be $\xi$, $\xi\eta$, $\xi^*$ or $\xi\eta^*$, depending on the relevant Clifford algebra, with $^*$ denoting complex conjugation. The definition of the spinor inner product gives rise to the space of dual spinor fields $\Sigma^*M$. The action of a dual spinor $\overline{\psi}\in\Sigma^*M$ on a spinor $\phi\in\Sigma M$ is defined as \cite{Charlton} \begin{equation} \overline{\psi}(\phi)=(\psi,\phi). \end{equation} If we consider the tensor product $\Sigma M\otimes\Sigma^*M$, its action on $\Sigma M$ is given by \begin{equation} (\psi\otimes\overline{\phi})\kappa=(\phi,\kappa)\psi \end{equation} for $\psi,\phi,\kappa\in\Sigma M$ and $\overline{\phi}\in\Sigma^*M$. This means that the elements of $\Sigma M\otimes\Sigma^*M$ correspond to linear transformations of $\Sigma M$.
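Since the involutions $\eta$ and $\xi$ act on a homogeneous $p$-form only through an overall sign, their action (and that of the composition $\xi\eta$) can be tabulated with a short script; this is merely an illustrative sketch of the sign rules above, not part of the formalism:

```python
def eta_sign(p):
    # grade involution on a homogeneous p-form: eta(omega) = (-1)^p omega
    return (-1) ** p

def xi_sign(p):
    # anti-involution on a homogeneous p-form: xi(omega) = (-1)^{floor(p/2)} omega
    return (-1) ** (p // 2)

def xi_eta_sign(p):
    # composition xi.eta, one of the possible choices for the involution J
    return xi_sign(p) * eta_sign(p)

# on a real 1-form e_a: xi leaves it invariant, while xi.eta flips its sign
assert xi_sign(1) == 1 and xi_eta_sign(1) == -1
# eta alternates with period 2, xi with period 4
print([xi_sign(p) for p in range(8)])   # [1, 1, -1, -1, 1, 1, -1, -1]
```

The two signs in the assertion are exactly the values $\text{sgn}(e_a^{\cal{J}})=\pm1$ that enter the case analysis of the bilinears below.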
Since $Cl(M)$ also acts on $\Sigma M$ via the Clifford product, corresponding to linear transformations on $\Sigma M$, the tensor product $\Sigma M\otimes\Sigma^*M$ is isomorphic to $Cl(M)$. Then, the elements $\psi\otimes\overline{\phi}\in\Sigma M\otimes\Sigma^*M$ can be written as a sum of different degree differential forms corresponding to the elements of $Cl(M)$. For any co-frame basis $\{e^a\}$, the Fierz identity, which is the decomposition of the tensor product of spinors and dual spinors in terms of different degree differential forms, is written as \begin{eqnarray} \psi\overline{\phi}&=&(\phi,\psi)+(\phi,e_a.\psi)e^a+(\phi,e_{ba}.\psi)e^{ab}+...+\nonumber\\ &&+(\phi,e_{a_p...a_2a_1}.\psi)e^{a_1a_2...a_p}+...+(-1)^{\lfloor n/2\rfloor}(\phi,z.\psi)z \end{eqnarray} where we denote $\psi\overline{\phi}=\psi\otimes\overline{\phi}$, $e^{a_1a_2...a_p}=e^{a_1}\wedge e^{a_2}\wedge...\wedge e^{a_p}$ and $z$ is the volume form. \begin{definition} For any $\psi\in\Sigma M$, the vector field $V_{\psi}$, which is the metric dual of the 1-form projection of $\psi\overline{\psi}$, that is \begin{equation} \widetilde{V}_{\psi}=(\psi\overline{\psi})_1=(\psi,e_a.\psi)e^a \end{equation} is called the Dirac current of $\psi$, where $\,\widetilde{\,\, }$ denotes the metric dual. \end{definition} As a generalization of this definition, we also have \begin{definition} For any $\psi\in\Sigma M$, the $p$-form projection of $\psi\overline{\psi}$, that is \begin{equation} (\psi\overline{\psi})_p=(\psi,e_{a_p...a_2a_1}.\psi)e^{a_1a_2...a_p} \end{equation} is called the $p$-form Dirac current of $\psi$. \end{definition} Two first-order differential operators can be defined on $\Sigma M$ via the Levi-Civita connection $\nabla$ induced on $\Sigma M$. The first one is the Dirac operator, which is defined as \begin{equation} \displaystyle{\not}D=e^a.\nabla_{X_a} \end{equation} for the frame basis $\{X_a\}$ and the co-frame basis $\{e^a\}$.
The second one is the twistor operator, which is written as \begin{equation} \nabla_{X_a}-\frac{1}{n}e_a.\displaystyle{\not}D \end{equation} in $n$ dimensions. The spinors in the kernel of the Dirac operator are called harmonic spinors and those in the kernel of the twistor operator are called twistor spinors \cite{Lichnerowicz2, Habermann, Baum Leitner, Baum Friedrich Grunewald Kath, Bourguignon et al}. \begin{definition} A spinor field $\psi\in\Sigma M$, which is an eigenspinor of the Dirac operator, $\displaystyle{\not}D\psi=m\psi$, and a twistor spinor at the same time, satisfies the following Killing spinor equation \begin{equation} \nabla_{X_a}\psi=\lambda e_a.\psi \end{equation} and is called a geometric Killing spinor with the Killing number $\lambda:=\frac{m}{n}$. The Killing number $\lambda$ is a real or pure imaginary number. \end{definition} We refer to geometric Killing spinors simply as Killing spinors. The existence of Killing spinors on a manifold $M$ constrains the curvature characteristics of the manifold. The integrability condition of the Killing spinor equation (11) is written as \begin{equation} R_{ab}.\psi=-4\lambda^2(e_a\wedge e_b).\psi \end{equation} where $R_{ab}$ are the curvature 2-forms. Moreover, this implies \begin{equation} {\cal{R}}=-4\lambda^2n(n-1) \end{equation} where ${\cal{R}}$ is the curvature scalar. The Dirac current $V_{\psi}$ of a Killing spinor $\psi$ corresponds to a Killing vector field; namely, its metric dual satisfies the following equality \begin{equation} \nabla_X\widetilde{V}_{\psi}=\frac{1}{2}i_Xd\widetilde{V}_{\psi} \end{equation} for any vector field $X$, where $d$ is the exterior derivative operator and $i_X$ is the interior derivative (contraction) operator with respect to $X$. Killing vector fields have antisymmetric generalizations to higher-degree differential forms.
\begin{definition} If a $p$-form $\omega$ satisfies \begin{equation} \nabla_X\omega=\frac{1}{p+1}i_Xd\omega \end{equation} for any vector field $X$, then $\omega$ is called a Killing-Yano (KY) $p$-form. \end{definition} KY forms are special cases of conformal Killing-Yano (CKY) forms. \begin{definition} If a $p$-form $\omega$ satisfies \begin{equation} \nabla_X\omega=\frac{1}{p+1}i_Xd\omega-\frac{1}{n-p+1}\widetilde{X}\wedge\delta\omega \end{equation} for any vector field $X$, its metric dual $\widetilde{X}$ and the co-derivative operator $\delta$, then $\omega$ is called a CKY $p$-form. \end{definition} As can be seen from the last definition, the co-closed CKY forms, satisfying $\delta\omega=0$, correspond to KY forms. On the other hand, another subset of CKY forms consists of the closed CKY (CCKY) forms, which satisfy $d\omega=0$ and hence the following equation \begin{equation} \nabla_X\omega=-\frac{1}{n-p+1}\widetilde{X}\wedge\delta\omega. \end{equation} The CKY equation (16) is Hodge duality invariant; namely, if $\omega$ is a CKY $p$-form then $*\omega$ is a CKY $(n-p)$-form, with $*$ denoting the Hodge map \cite{Semmelmann}. One can show that this property leads to the fact that for a KY $p$-form $\omega$, its Hodge dual $*\omega$ is a CCKY $(n-p)$-form, and conversely for a CCKY $p$-form $\omega$, its Hodge dual $*\omega$ is a KY $(n-p)$-form. So, KY $p$-forms and CCKY $(n-p)$-forms are Hodge duals of each other \cite{Ertem4}. Besides the relation between the Dirac currents of Killing spinors and Killing vectors, the $p$-form Dirac currents of Killing spinors are also related to KY and CCKY forms. For a Killing spinor $\psi$, its $p$-form Dirac current $(\psi\overline{\psi})_p$ is a KY $p$-form or a CCKY $p$-form depending on the chosen involution of the inner product, on the real or imaginary character of $\lambda$ and on the parity (evenness or oddness) of $p$ \cite{Acik Ertem1}.
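For orientation, note that the lowest-degree case of the KY equation (15) is exactly the Killing condition (14): setting $p=1$ gives

```latex
\nabla_X\omega=\frac{1}{2}\,i_X d\omega ,
```

so a KY $1$-form is precisely the metric dual of a Killing vector field, and the definitions above genuinely extend isometries and their conformal analogues to higher degree forms.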
Moreover, we have the following \begin{proposition} For two Killing spinors $\psi$ and $\phi$ satisfying (11), the symmetric combination of $p$-form bilinears $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ corresponds to a KY $p$-form or a CCKY $p$-form depending on the chosen involution of the inner product, on the real or imaginary character of $\lambda$ and on the parity (evenness or oddness) of $p$. \end{proposition} \begin{proof} From the compatibility of the Levi-Civita connection $\nabla$ with the spinor inner product and the spinor duality operation, we can write the covariant derivative of $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ with respect to any frame basis vector $X_a$ as \begin{eqnarray} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&\left((\nabla_{X_a}\psi)\overline{\phi}\right)_p+\left(\psi(\overline{\nabla_{X_a}\phi})\right)_p+\left((\nabla_{X_a}\phi)\overline{\psi}\right)_p+\left(\phi(\overline{\nabla_{X_a}\psi})\right)_p\nonumber\\ &=&\left(\lambda e_a.\psi\overline{\phi}\right)_p+\left(\psi(\overline{\lambda e_a.\phi})\right)_p+\left(\lambda e_a.\phi\overline{\psi}\right)_p+\left(\phi(\overline{\lambda e_a.\psi})\right)_p\nonumber \end{eqnarray} where we have used (11) in the second line. From the properties (5), (3) and (2), we can write the equality $\psi(\overline{\lambda e_a.\phi})=\lambda^j(\psi\overline{\phi}).e_a^{\cal{J}}$ and similarly $\phi(\overline{\lambda e_a.\psi})=\lambda^j(\phi\overline{\psi}).e_a^{\cal{J}}$. So, we have \begin{eqnarray} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=\left(\lambda e_a.\psi\overline{\phi}\right)_p+\left(\lambda^j(\psi\overline{\phi}).e_a^{\cal{J}}\right)_p+\left(\lambda e_a.\phi\overline{\psi}\right)_p+\left(\lambda^j(\phi\overline{\psi}).e_a^{\cal{J}}\right)_p.\nonumber \end{eqnarray} The Clifford product can be expressed in terms of wedge product and interior derivative. 
For a 1-form $x$ and an arbitrary form $\alpha$, we have the following identities \begin{eqnarray} x.\alpha&=&x\wedge\alpha+i_{\widetilde{x}}\alpha\\ \alpha.x&=&x\wedge\eta\alpha-i_{\widetilde{x}}\eta\alpha \end{eqnarray} where $\widetilde{x}$ denotes the vector field metric dual to $x$. By using these equalities, we obtain \begin{eqnarray} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&\lambda e_a\wedge\left(\psi\overline{\phi}\right)_{p-1}+\lambda i_{X_a}\left(\psi\overline{\phi}\right)_{p+1}+\lambda^je_a^{\cal{J}}\wedge\left(\psi\overline{\phi}\right)_{p-1}^{\eta}-\lambda^j i_{\widetilde{e_a^{\cal{J}}}}\left(\psi\overline{\phi}\right)_{p+1}^{\eta}\nonumber\\ &&+\lambda e_a\wedge\left(\phi\overline{\psi}\right)_{p-1}+\lambda i_{X_a}\left(\phi\overline{\psi}\right)_{p+1}+\lambda^je_a^{\cal{J}}\wedge\left(\phi\overline{\psi}\right)_{p-1}^{\eta}-\lambda^j i_{\widetilde{e_a^{\cal{J}}}}\left(\phi\overline{\psi}\right)_{p+1}^{\eta} \end{eqnarray} where we denote $\eta\left(\phi\overline{\psi}\right)_{q}=\left(\phi\overline{\psi}\right)_{q}^{\eta}$ for any degree $q$. For the torsion-free Levi-Civita connection $\nabla$, we have the identity $d=e^a\wedge\nabla_{X_a}$, and taking the wedge product of (20) with $e^a$ from the left gives \begin{eqnarray} d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&\lambda (p+1)\left(\psi\overline{\phi}\right)_{p+1}-\lambda^j\text{sgn}(e_a^{\cal{J}})(p+1)\left(\psi\overline{\phi}\right)_{p+1}^{\eta}\nonumber\\ &&+\lambda (p+1)\left(\phi\overline{\psi}\right)_{p+1}-\lambda^j\text{sgn}(e_a^{\cal{J}})(p+1)\left(\phi\overline{\psi}\right)_{p+1}^{\eta} \end{eqnarray} where we have used $e^a\wedge e_a=0$ and $e^a\wedge i_{X_a}\alpha=p\alpha$ for a $p$-form $\alpha$. Here $\text{sgn}(e_a^{\cal{J}})=\pm1$ depending on the involution ${\cal{J}}$.
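Identities (18) and (19) say that Clifford multiplication by a 1-form is "wedge plus contraction". As a quick sanity check of (18), the following sketch (my own illustration, in a flat Euclidean frame where the metric dual is trivial) implements left Clifford multiplication on exterior forms and verifies the defining Clifford relation $x.x.\alpha=\langle x,x\rangle\,\alpha$:

```python
# Forms are dicts {tuple_of_increasing_indices: coefficient} in flat R^n.
def wedge(a, b):
    out = {}
    for I, ca in a.items():
        for J, cb in b.items():
            if set(I) & set(J):          # repeated basis index: e^i /\ e^i = 0
                continue
            seq, sign = list(I + J), 1
            for i in range(len(seq)):    # bubble sort, tracking the permutation sign
                for j in range(len(seq) - 1):
                    if seq[j] > seq[j + 1]:
                        seq[j], seq[j + 1] = seq[j + 1], seq[j]
                        sign = -sign
            K = tuple(seq)
            out[K] = out.get(K, 0) + sign * ca * cb
    return {K: c for K, c in out.items() if c}

def interior(v, a):
    # i_v on a form, with v a vector {index: component}; Euclidean metric
    out = {}
    for I, c in a.items():
        for pos, idx in enumerate(I):
            if idx in v:
                K = I[:pos] + I[pos + 1:]
                out[K] = out.get(K, 0) + (-1) ** pos * v[idx] * c
    return {K: c for K, c in out.items() if c}

def clifford(x, a):
    # x.a = x /\ a + i_x a, eq. (18), x a 1-form identified with its dual vector
    out = dict(wedge({(i,): c for i, c in x.items()}, a))
    for K, c in interior(x, a).items():
        out[K] = out.get(K, 0) + c
    return {K: c for K, c in out.items() if c}

x = {0: 2, 1: 3}                        # x = 2 e^0 + 3 e^1, so <x,x> = 13
alpha = {(0, 2): 1, (1,): 5}            # alpha = e^0 /\ e^2 + 5 e^1
assert clifford(x, clifford(x, alpha)) == {K: 13 * c for K, c in alpha.items()}
```

The dict-based exterior algebra and the Euclidean identification of $x$ with $\widetilde{x}$ are illustrative choices; the computations in the text are frame- and metric-general.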
Similarly, we have the identity $\delta=-i_{X^a}\nabla_{X_a}$ for the Levi-Civita connection $\nabla$, and by taking the interior derivative of (20) with respect to $i_{X^a}$, we obtain \begin{eqnarray} \delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&-\lambda(n-p+1)\left(\psi\overline{\phi}\right)_{p-1}-\lambda^j\text{sgn}(e_a^{\cal{J}})(n-p+1)\left(\psi\overline{\phi}\right)_{p-1}^{\eta}\nonumber\\ &&-\lambda(n-p+1)\left(\phi\overline{\psi}\right)_{p-1}-\lambda^j\text{sgn}(e_a^{\cal{J}})(n-p+1)\left(\phi\overline{\psi}\right)_{p-1}^{\eta} \end{eqnarray} where we have used $i_{X^a}e_a=n$ and $e^a\wedge i_{X_a}\alpha=p\alpha$ for a $p$-form $\alpha$. In equalities (20), (21) and (22), we have four parameters to choose: $\lambda$ can be real (Re) or pure imaginary (Im); $j$ can be Id, $^*$, $\bar{\,\,}$ or $\widehat{\,\,}$; ${\cal{J}}$ can be $\xi$, $\xi^*$, $\xi\eta$ or $\xi\eta^*$; and $p$ can be even or odd. If $\lambda$ is real, we have $\lambda^{\text{Id}}=\lambda^*=\bar{\lambda}=\widehat{\lambda}=\lambda$, and if $\lambda$ is pure imaginary, we have $\lambda^{\text{Id}}=\widehat{\lambda}=\lambda$ and $\lambda^*=\bar{\lambda}=-\lambda$. Since $e_a$ is a real 1-form, we have $e_a^{\xi}=e_a^{\xi^*}=e_a$ with $\text{sgn}(e_a^{\cal{J}})=1$, and $e_a^{\xi\eta}=e_a^{\xi\eta^*}=-e_a$ with $\text{sgn}(e_a^{\cal{J}})=-1$. For $p$ even, we have $(\psi\overline{\phi})_{p-1}^{\eta}=-(\psi\overline{\phi})_{p-1}$, $(\phi\overline{\psi})_{p-1}^{\eta}=-(\phi\overline{\psi})_{p-1}$, $(\psi\overline{\phi})_{p+1}^{\eta}=-(\psi\overline{\phi})_{p+1}$, $(\phi\overline{\psi})_{p+1}^{\eta}=-(\phi\overline{\psi})_{p+1}$, and for $p$ odd, we have $(\psi\overline{\phi})_{p-1}^{\eta}=(\psi\overline{\phi})_{p-1}$, $(\phi\overline{\psi})_{p-1}^{\eta}=(\phi\overline{\psi})_{p-1}$, $(\psi\overline{\phi})_{p+1}^{\eta}=(\psi\overline{\phi})_{p+1}$, $(\phi\overline{\psi})_{p+1}^{\eta}=(\phi\overline{\psi})_{p+1}$.
So, by considering these possibilities, we can obtain two different sets of equations from (20), (21) and (22). If the parameters ${\cal{J}}$, $\lambda$, $j$ and $p$ are chosen as in the following table \quad\\ {\centering{ \begin{tabular}{c c c c} ${\cal{J}}\quad$ & \quad $\lambda$\quad & \quad $j$\quad & \quad $p$ \\ \hline $\xi,\xi^*\quad$ & \quad Re \quad & \quad $\text{Id}, ^*, \bar{\,\,}, \widehat{\,\,}$ \quad & \quad even \\ $\xi,\xi^*\quad$ & \quad Im \quad & \quad $\text{Id}, \widehat{\,\,}$ \quad & \quad even \\ $\xi,\xi^*\quad$ & \quad Im \quad & \quad $^*, \bar{\,\,}$ \quad & \quad odd \\ $\xi\eta,\xi\eta^*\quad$ & \quad Re \quad & \quad $\text{Id}, ^*, \bar{\,\,}, \widehat{\,\,}$ \quad & \quad odd \\ $\xi\eta,\xi\eta^*\quad$ & \quad Im \quad & \quad $\text{Id}, \widehat{\,\,}$ \quad & \quad odd \\ $\xi\eta,\xi\eta^*\quad$ & \quad Im \quad & \quad $^*, \bar{\,\,}$ \quad & \quad even \\ \end{tabular}} \quad\\ \quad\\ \quad\\} then we find the following set of equations from (20), (21) and (22) \begin{eqnarray} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&2\lambda i_{X_a}\left[(\psi\overline{\phi})_{p+1}+(\phi\overline{\psi})_{p+1}\right]\nonumber\\ d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&2\lambda(p+1)\left[(\psi\overline{\phi})_{p+1}+(\phi\overline{\psi})_{p+1}\right]\\ \delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&0.\nonumber \end{eqnarray} We call this set of equations Case 1, and by comparing them with each other, one can easily see that $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ satisfies the following equation \begin{equation} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=\frac{1}{p+1}i_{X_a}d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right] \end{equation} and hence from (15) it is a KY $p$-form.
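The comparison leading to (24) amounts to contracting the second equation of (23) and substituting the first one:

```latex
i_{X_a}d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]
=2\lambda(p+1)\,i_{X_a}\left[(\psi\overline{\phi})_{p+1}+(\phi\overline{\psi})_{p+1}\right]
=(p+1)\,\nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right],
```

where the last equality is the first equation of (23); dividing by $p+1$ gives (24).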
For the parameters ${\cal{J}}$, $\lambda$, $j$ and $p$, the remaining possibilities are given in the following table \quad\\ {\centering{ \begin{tabular}{c c c c} ${\cal{J}}\quad$ & \quad $\lambda$\quad & \quad $j$\quad & \quad $p$ \\ \hline $\xi,\xi^*\quad$ & \quad Re \quad & \quad $\text{Id}, ^*, \bar{\,\,}, \widehat{\,\,}$ \quad & \quad odd \\ $\xi,\xi^*\quad$ & \quad Im \quad & \quad $\text{Id}, \widehat{\,\,}$ \quad & \quad odd \\ $\xi,\xi^*\quad$ & \quad Im \quad & \quad $^*, \bar{\,\,}$ \quad & \quad even \\ $\xi\eta,\xi\eta^*\quad$ & \quad Re \quad & \quad $\text{Id}, ^*, \bar{\,\,}, \widehat{\,\,}$ \quad & \quad even \\ $\xi\eta,\xi\eta^*\quad$ & \quad Im \quad & \quad $\text{Id}, \widehat{\,\,}$ \quad & \quad even \\ $\xi\eta,\xi\eta^*\quad$ & \quad Im \quad & \quad $^*, \bar{\,\,}$ \quad & \quad odd \\ \end{tabular}} \quad\\ \quad\\ \quad\\} Note that the parameter sets in this case differ from the first case only in the parity of $p$. By considering these possibilities, we find the following set of equations from (20), (21) and (22) \begin{eqnarray} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&2\lambda e_a\wedge\left[(\psi\overline{\phi})_{p-1}+(\phi\overline{\psi})_{p-1}\right]\nonumber\\ d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&0\\ \delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]&=&-2\lambda(n-p+1)\left[(\psi\overline{\phi})_{p-1}+(\phi\overline{\psi})_{p-1}\right].\nonumber \end{eqnarray} We call this set of equations Case 2, and by comparing them with each other, one finds that $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ satisfies the following equation \begin{equation} \nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=-\frac{1}{n-p+1}e_a\wedge \delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right] \end{equation} and hence from (17) it is a CCKY $p$-form.
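Analogously, the comparison behind (26) follows by wedging the third equation of (25) with $e_a$ and using the first:

```latex
e_a\wedge\delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]
=-2\lambda(n-p+1)\,e_a\wedge\left[(\psi\overline{\phi})_{p-1}+(\phi\overline{\psi})_{p-1}\right]
=-(n-p+1)\,\nabla_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right],
```

so dividing by $-(n-p+1)$ gives (26).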
Thus, we have shown that the $p$-form bilinear $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ is a KY $p$-form or a CCKY $p$-form depending on the chosen set of $\cal{J}$, $\lambda$, $j$ and $p$. Note that for a given choice of parameters, if the even degree $p$-form bilinears are KY forms then the odd degree $p$-form bilinears correspond to CCKY forms, and conversely, if the odd degree $p$-form bilinears are KY forms then the even degree $p$-form bilinears correspond to CCKY forms. \end{proof} The integrability condition of the KY equation (15) can be found as follows \cite{Semmelmann, Acik Ertem Onder Vercin} \begin{equation} \nabla_{X_a}d\omega=-\frac{p+1}{p}R_{ab}\wedge i_{X^b}\omega \end{equation} and similarly for the CCKY equation (17), the integrability condition is \cite{Ertem3} \begin{equation} \nabla_{X_a}\delta\omega=-\frac{n-p+1}{n-p}i_{X^c}\left(i_{X_b}R_{ca}\wedge i_{X^b}\omega\right). \end{equation} We can define subsets of KY and CCKY forms which satisfy special types of integrability conditions. \begin{definition} A KY $p$-form $\omega$ is called a special KY $p$-form, if it satisfies the following condition \begin{equation} \nabla_X d\omega=-c(p+1)\widetilde{X}\wedge\omega \end{equation} where $c$ is a constant. Similarly, a CCKY $p$-form is called a special CCKY $p$-form if it satisfies \begin{equation} \nabla_{X}\delta\omega=c(n-p+1)i_X\omega \end{equation} where $c$ is a constant. \end{definition} Note that, in constant curvature manifolds with curvature 2-forms $R_{ab}=ce_a\wedge e_b$ for a constant $c$, the integrability conditions (27) and (28) reduce to (29) and (30), respectively (for (27), this follows immediately from $e_b\wedge i_{X^b}\omega=p\omega$). This means that in constant curvature manifolds, all KY forms are special KY forms and all CCKY forms are special CCKY forms. Obviously, this is not true in general. By using Definition 6, we can refine Proposition 1 as follows.
\begin{proposition} For two Killing spinors $\psi$ and $\phi$ satisfying (11), the symmetric combination of $p$-form bilinears $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ corresponds to a special KY $p$-form or a special CCKY $p$-form depending on the chosen involution of the inner product, on the real or imaginary character of $\lambda$ and on the parity (evenness or oddness) of $p$. \end{proposition} \begin{proof} If $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ satisfies Case 1 in (23), then it is a KY $p$-form. We can calculate the covariant derivative of $d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]$ from (23) as \begin{equation} \nabla_{X_a}d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=2\lambda(p+1)\nabla_{X_a}\left[(\psi\overline{\phi})_{p+1}+(\phi\overline{\psi})_{p+1}\right]. \end{equation} We know that if $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ is a KY $p$-form, then the one higher degree bilinear $(\psi\overline{\phi})_{p+1}+(\phi\overline{\psi})_{p+1}$ satisfies Case 2 and is a CCKY $(p+1)$-form. Thus, from (25), we obtain \begin{equation} \nabla_{X_a}d\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=4\lambda^2(p+1)e_a\wedge\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]. \end{equation} This is nothing but the condition of special KY forms in (29) for $c=-4\lambda^2$. So, $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ is a special KY $p$-form in this case. On the other hand, if $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ satisfies Case 2 in (25), then it is a CCKY $p$-form. We can calculate the covariant derivative of $\delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]$ from (25) as \begin{equation} \nabla_{X_a}\delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=-2\lambda(n-p+1)\nabla_{X_a}\left[(\psi\overline{\phi})_{p-1}+(\phi\overline{\psi})_{p-1}\right]. 
\end{equation} We know that if $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ is a CCKY $p$-form, then the one lower degree bilinear $(\psi\overline{\phi})_{p-1}+(\phi\overline{\psi})_{p-1}$ satisfies Case 1 and is a KY $(p-1)$-form. Thus, from (23), we obtain \begin{equation} \nabla_{X_a}\delta\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]=-4\lambda^2(n-p+1)i_{X_a}\left[(\psi\overline{\phi})_p+(\phi\overline{\psi})_p\right]. \end{equation} This is nothing but the condition (30) of special CCKY forms for $c=-4\lambda^2$. So, $(\psi\overline{\phi})_p+(\phi\overline{\psi})_p$ is a special CCKY $p$-form in this case. \end{proof} Since all KY forms and CCKY forms are special in constant curvature manifolds, we have the following \begin{corollary} Killing spinors generate special KY and special CCKY forms; in particular, in constant curvature manifolds they generate all KY and CCKY forms. In general, non-special KY and CCKY forms cannot be generated by Killing spinors. \end{corollary} \section{Lie Algebras of Special KY and Special CCKY forms} Killing vector fields and conformal Killing vector fields, which correspond to the metric duals of KY 1-forms and CKY 1-forms respectively, constitute Lie algebras with respect to the Lie bracket of vector fields, which is written for any two vector fields $X$ and $Y$ as \begin{equation} [X,Y]=\nabla_XY-\nabla_YX. \end{equation} This bracket can be generalized to the Schouten-Nijenhuis (SN) bracket defined for higher-degree differential forms. For a $p$-form $\alpha$ and a $q$-form $\beta$, the SN bracket is written as \begin{equation} [\alpha,\beta]_{SN}=i_{X^a}\alpha\wedge\nabla_{X_a}\beta+(-1)^{pq}i_{X^a}\beta\wedge\nabla_{X_a}\alpha \end{equation} which gives a $(p+q-1)$-form and reduces to the Lie bracket of vector fields (35) for $p=q=1$.
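The $p=q=1$ reduction can be checked concretely in flat space, where $\nabla$ is the coordinate derivative and KY 1-forms are the metric duals of Killing vectors. The following sketch (an illustration with my own component conventions, in flat $\mathbb{R}^3$ with the Euclidean metric) applies (36) to two rotational Killing 1-forms and confirms that their SN bracket is again a Killing (KY) 1-form:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

def sn_bracket_1forms(a, b):
    # eq. (36) with p = q = 1 and nabla = partial derivatives (flat space):
    # [a,b]_SN = i_{X^c}a nabla_c b - i_{X^c}b nabla_c a
    return [sp.simplify(sum(a[c]*sp.diff(b[i], coords[c])
                            - b[c]*sp.diff(a[i], coords[c]) for c in range(3)))
            for i in range(3)]

def is_killing(w):
    # a KY 1-form in flat space satisfies the Killing condition
    # d_i w_j + d_j w_i = 0, equivalent to eq. (15) for p = 1
    return all(sp.simplify(sp.diff(w[i], coords[j]) + sp.diff(w[j], coords[i])) == 0
               for i in range(3) for j in range(3))

alpha = [-y, x, 0]   # metric dual of the rotation generator about the z-axis
beta  = [0, -z, y]   # metric dual of the rotation generator about the x-axis
gamma = sn_bracket_1forms(alpha, beta)

assert is_killing(alpha) and is_killing(beta) and is_killing(gamma)
print(gamma)   # [-z, 0, x]: another rotational Killing 1-form
```

Since flat space is the $c=0$ case of constant curvature, these KY 1-forms are trivially special, and the closure of the bracket illustrates the lowest-degree case of the proposition below.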
It satisfies the following graded Lie bracket properties \begin{eqnarray} [\alpha,\beta]_{SN}&=&(-1)^{pq}[\beta,\alpha]_{SN}\\ (-1)^{p(r+1)}[\alpha,[\beta,\gamma]_{SN}]_{SN}&+&(-1)^{q(p+1)}[\beta,[\gamma,\alpha]_{SN}]_{SN}+(-1)^{r(q+1)}[\gamma,[\alpha,\beta]_{SN}]_{SN}=0 \end{eqnarray} where $\gamma$ is an $r$-form. It is known that KY forms satisfy a graded Lie algebra structure with respect to the SN bracket in constant curvature manifolds \cite{Kastor Ray Traschen, Ertem1}. Since all KY forms in constant curvature manifolds are special KY forms, the graded Lie algebra property can be generalized to the special KY forms in all manifolds. So, we have \begin{proposition} On a manifold $M$, special KY forms satisfying (15) and (29) constitute a graded Lie algebra structure with respect to the SN bracket defined in (36). \end{proposition} \begin{proof} For a KY $p$-form $\omega_1$ and a KY $q$-form $\omega_2$, the SN bracket is written as \begin{eqnarray} [\omega_1,\omega_2]_{SN}&=&i_{X^a}\omega_1\wedge\nabla_{X_a}\omega_2+(-1)^{pq}i_{X^a}\omega_2\wedge\nabla_{X_a}\omega_1\nonumber\\ &=&\frac{1}{q+1}i_{X^a}\omega_1\wedge i_{X_a}d\omega_2+\frac{(-1)^p}{p+1}i_{X^a}d\omega_1\wedge i_{X_a}\omega_2 \end{eqnarray} where we have used (15). If we apply the covariant derivative $\nabla_{X_a}$ with respect to a frame basis vector $X_a$ to (39), we find \begin{eqnarray} \nabla_{X_a}[\omega_1,\omega_2]_{SN}&=&\frac{1}{q+1}\bigg(\nabla_{X_a}i_{X^b}\omega_1\wedge i_{X_b}d\omega_2+i_{X^b}\omega_1\wedge\nabla_{X_a}i_{X_b}d\omega_2\bigg)\nonumber\\ &&+\frac{(-1)^p}{p+1}\bigg(\nabla_{X_a}i_{X^b}d\omega_1\wedge i_{X_b}\omega_2+i_{X^b}d\omega_1\wedge\nabla_{X_a}i_{X_b}\omega_2\bigg). \end{eqnarray} In general, we have the relation $[\nabla_X,i_Y]=i_{\nabla_XY}$ for any vector fields $X$ and $Y$. If we choose the normal coordinates $\{X_a\}$, then this relation transforms into $[\nabla_{X_b},i_{X_a}]=0$ since we have $\nabla_{X_b}X_a=0$ in that case.
By using the normal coordinates and considering that $\omega_1$ and $\omega_2$ satisfy the equations (15) and (29), we obtain \begin{eqnarray} \nabla_{X_a}[\omega_1,\omega_2]_{SN}&=&\frac{1}{q+1}\bigg(\frac{1}{p+1}i_{X^b}i_{X_a}d\omega_1\wedge i_{X_b}d\omega_2-c(q+1)i_{X^b}\omega_1\wedge i_{X_b}\left(e_a\wedge\omega_2\right)\bigg)\nonumber\\ &&+\frac{(-1)^p}{p+1}\bigg(-c(p+1)i_{X^b}\left(e_a\wedge\omega_1\right)\wedge i_{X_b}\omega_2+\frac{1}{q+1}i_{X^b}d\omega_1\wedge i_{X_b}i_{X_a}d\omega_2\bigg)\nonumber\\ &=&-\frac{1}{(p+1)(q+1)}i_{X_a}\left(i_{X^b}d\omega_1\wedge i_{X_b}d\omega_2\right)-ci_{X_a}\left(\omega_1\wedge\omega_2\right) \end{eqnarray} where we have used $i_{X^b}e_a=\delta^b_a$ and the antiderivation property of $i_{X^b}$ in the last line. If we take the wedge product of (41) with $e^a\wedge$ from the left and use the equality $e^a\wedge\nabla_{X_a}=d$ for the torsion-free connection $\nabla_{X_a}$, we find \begin{equation} d[\omega_1,\omega_2]_{SN}=-\frac{p+q}{(p+1)(q+1)}i_{X^b}d\omega_1\wedge i_{X_b}d\omega_2-c(p+q)\omega_1\wedge\omega_2 \end{equation} where we have used the equality $e^a\wedge i_{X_a}\alpha=p\alpha$ for any $p$-form $\alpha$. By taking the interior derivative $i_{X_a}$ of (42) with respect to $X_a$ and dividing by $(p+q)$, one finds \begin{equation} \frac{1}{p+q}i_{X_a}d[\omega_1,\omega_2]_{SN}=-\frac{1}{(p+1)(q+1)}i_{X_a}\left(i_{X^b}d\omega_1\wedge i_{X_b}d\omega_2\right)-ci_{X_a}\left(\omega_1\wedge\omega_2\right). \end{equation} If we compare the right hand sides of (41) and (43), we can easily see that \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{SN}=\frac{1}{p+q}i_{X_a}d[\omega_1,\omega_2]_{SN} \end{equation} which is the KY equation (15) for the $(p+q-1)$-form $[\omega_1,\omega_2]_{SN}$. Thus, we have shown that for a special KY $p$-form $\omega_1$ and a special KY $q$-form $\omega_2$, their SN bracket $[\omega_1,\omega_2]_{SN}$ is a KY $(p+q-1)$-form. However, we still have to show that it is a special KY $(p+q-1)$-form.
To show this, we take the covariant derivative of (42) which is equal to \begin{eqnarray} \nabla_{X_a}d[\omega_1,\omega_2]_{SN}&=&-\frac{p+q}{(p+1)(q+1)}\bigg(i_{X^b}\nabla_{X_a}d\omega_1\wedge i_{X_b}d\omega_2+i_{X^b}d\omega_1\wedge i_{X_b}\nabla_{X_a}d\omega_2\bigg)\nonumber\\ &&-c(p+q)\bigg(\nabla_{X_a}\omega_1\wedge\omega_2+\omega_1\wedge\nabla_{X_a}\omega_2\bigg) \end{eqnarray} where we have used $[i_{X^b},\nabla_{X_a}]=0$ in normal coordinates. Since $\omega_1$ and $\omega_2$ satisfy (15) and (29), we find \begin{eqnarray} \nabla_{X_a}d[\omega_1,\omega_2]_{SN}&=&-\frac{p+q}{(p+1)(q+1)}\bigg(-c(p+1)i_{X^b}(e_a\wedge\omega_1)\wedge i_{X_b}d\omega_2-c(q+1)i_{X^b}d\omega_1\wedge i_{X_b}(e_a\wedge\omega_2)\bigg)\nonumber\\ &&-c(p+q)\bigg(\frac{1}{p+1}i_{X_a}d\omega_1\wedge\omega_2+\frac{1}{q+1}\omega_1\wedge i_{X_a}d\omega_2\bigg)\nonumber\\ &=&-c(p+q)\bigg(\frac{1}{q+1}e_a\wedge i_{X^b}\omega_1\wedge i_{X_b}d\omega_2+\frac{(-1)^p}{p+1}e_a\wedge i_{X^b}d\omega_1\wedge i_{X_b}\omega_2\bigg) \end{eqnarray} where we have used $i_{X^b}e_a=\delta^b_a$ and the antiderivation property of $i_{X^b}$ in the last line. By comparing with (39), we obtain \begin{equation} \nabla_{X_a}d[\omega_1,\omega_2]_{SN}=-c(p+q)e_a\wedge [\omega_1,\omega_2]_{SN} \end{equation} which is the equation (29) for the KY $(p+q-1)$-form $[\omega_1,\omega_2]_{SN}$. This shows that $[\omega_1,\omega_2]_{SN}$ is a special KY form and hence special KY forms satisfy a graded Lie algebra structure under the SN bracket. \end{proof} For the more general case of CKY forms, we can also define a graded Lie bracket which generalizes the SN bracket.
For a CKY $p$-form $\omega_1$ and a CKY $q$-form $\omega_2$, we define the CKY bracket as follows \begin{eqnarray} [\omega_1,\omega_2]_{CKY}&=&\frac{1}{q+1}i_{X_a}\omega_1\wedge i_{X^a}d\omega_2+\frac{(-1)^p}{p+1}i_{X_a}d\omega_1\wedge i_{X^a}\omega_2\nonumber\\ &&+\frac{(-1)^p}{n-q+1}\omega_1\wedge\delta\omega_2+\frac{1}{n-p+1}\delta\omega_1\wedge\omega_2. \end{eqnarray} It is shown in \cite{Ertem3} that in constant curvature manifolds $[\omega_1,\omega_2]_{CKY}$ is a CKY $(p+q-1)$-form and the CKY bracket has the graded Lie bracket properties. So, in constant curvature manifolds, CKY forms satisfy a graded Lie algebra structure with respect to the CKY bracket and we have the following relation \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=\frac{1}{p+q}i_{X_a}d[\omega_1,\omega_2]_{CKY}-\frac{1}{n-p-q+2}e_a\wedge\delta[\omega_1,\omega_2]_{CKY}. \end{equation} This is also true in Einstein manifolds for normal CKY forms, which satisfy the following integrability conditions for a normal CKY $p$-form $\omega$ in $n$ dimensions \begin{eqnarray} \nabla_{X_a}d\omega&=&\frac{p+1}{p(n-p+1)}e_a\wedge d\delta\omega+2(p+1)K_a\wedge\omega\\ \nabla_{X_a}\delta\omega&=&-\frac{n-p+1}{(p+1)(n-p)}i_{X_a}\delta d\omega-2(n-p+1)i_{X^b}K_a\wedge i_{X_b}\omega \end{eqnarray} where the Schouten rho 1-form is defined by \begin{equation} K_a=\frac{1}{n-2}\left(\frac{\cal{R}}{2(n-1)}e_a-P_a\right) \end{equation} for Ricci 1-forms $P_a$ and the curvature scalar ${\cal{R}}$. It can easily be seen that for two KY forms $\omega_1$ and $\omega_2$ with $\delta\omega_1=0$ and $\delta\omega_2=0$, the CKY bracket reduces to the SN bracket given in (39) and for $p=q=1$ it reduces to the Lie bracket of vector fields in (35). However, for general CKY forms, it differs from the generalization of the SN bracket to CKY forms and hence is a new bracket.
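As an illustration of (52), on an Einstein manifold the Ricci 1-forms are proportional to the co-frame, $P_a=\frac{\cal{R}}{n}e_a$, so the Schouten rho 1-form reduces to
\[
K_a=\frac{1}{n-2}\left(\frac{\cal{R}}{2(n-1)}-\frac{\cal{R}}{n}\right)e_a=-\frac{\cal{R}}{2n(n-1)}e_a
\]
and the curvature terms $K_a\wedge\omega$ and $i_{X^b}K_a\wedge i_{X_b}\omega$ in (50) and (51) become proportional to $e_a\wedge\omega$ and $i_{X_a}\omega$ respectively.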
For more general manifolds, there is another graded Lie algebra structure with respect to the CKY bracket that contains special KY and special CCKY forms as elements, which are the special forms generated by Killing spinors. Note that it can be seen from Propositions 1 and 2 that if the odd degree $p$-form Dirac currents of a Killing spinor are special KY forms, then its even degree $p$-form Dirac currents correspond to special CCKY forms and conversely, if the even degree $p$-form Dirac currents of a Killing spinor are special KY forms, then its odd degree $p$-form Dirac currents correspond to special CCKY forms. \begin{theorem} On a manifold $M$, odd degree special KY forms satisfying (15) and (29) and even degree special CCKY forms satisfying (17) and (30) constitute a Lie algebra structure with respect to the CKY bracket $[\,,\,]_{CKY}$ given in (48) in all dimensions. \end{theorem} \begin{proof} We consider the cases of two special odd KY forms, two special even CCKY forms and one special odd KY form with one special even CCKY form separately. For a special odd KY $p$-form $\omega_1$ and a special odd KY $q$-form $\omega_2$, we have $\delta\omega_1=0$ and $\delta\omega_2=0$ and the CKY bracket in (48) reduces to the SN bracket written in (39) for special KY forms. From Proposition 3, $[\omega_1,\omega_2]_{SN}$ is also a special odd KY $(p+q-1)$-form since $p+q-1$ is odd when $p$ and $q$ are odd. If we take a special odd KY $p$-form $\omega_1$ and a special even CCKY $q$-form $\omega_2$, then we have $\delta\omega_1=0$ and $d\omega_2=0$. So, the CKY bracket reduces to \begin{equation} [\omega_1,\omega_2]_{CKY}=-\frac{1}{p+1}i_{X^a}d\omega_1\wedge i_{X_a}\omega_2-\frac{1}{n-q+1}\omega_1\wedge\delta\omega_2.
\end{equation} The covariant derivative of (53) gives \begin{eqnarray} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}&=&-\frac{1}{p+1}i_{X^b}\nabla_{X_a}d\omega_1\wedge i_{X_b}\omega_2-\frac{1}{p+1}i_{X^b}d\omega_1\wedge i_{X_b}\nabla_{X_a}\omega_2\nonumber\\ &&-\frac{1}{n-q+1}\nabla_{X_a}\omega_1\wedge\delta\omega_2-\frac{1}{n-q+1}\omega_1\wedge\nabla_{X_a}\delta\omega_2. \end{eqnarray} If we use (15) and (29) for $\omega_1$ and (17) and (30) for $\omega_2$, we find \begin{eqnarray} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}&=&ci_{X^b}(e_a\wedge\omega_1)\wedge i_{X^b}\omega_2+\frac{1}{(p+1)(n-q+1)}i_{X^b}d\omega_1\wedge i_{X_b}(e_a\wedge\delta\omega_2)\nonumber\\ &&-\frac{1}{(p+1)(n-q+1)}i_{X_a}d\omega_1\wedge\delta\omega_2-c\omega_1\wedge i_{X_a}\omega_2\nonumber\\ &=&-ce_a\wedge i_{X^b}\omega_1\wedge i_{X_b}\omega_2+\frac{1}{(p+1)(n-q+1)}e_a\wedge i_{X^b}d\omega_1\wedge i_{X_b}\delta\omega_2. \end{eqnarray} By taking the wedge product with $e^a\wedge$ from the left, we have \begin{equation} d[\omega_1,\omega_2]_{CKY}=e^a\wedge\nabla_{X_a}[\omega_1,\omega_2]_{CKY}=0. \end{equation} Moreover, if we calculate the interior derivative of (54) with respect to $X^a$, we find \begin{eqnarray} \delta[\omega_1,\omega_2]_{CKY}&=&-i_{X^a}\nabla_{X_a}[\omega_1,\omega_2]_{CKY}\nonumber\\ &=&c(n-(p+q-2))(i_{X^b}\omega_1\wedge i_{X_b}\omega_2)-\frac{n-(p+q-2)}{(p+1)(n-q+1)}i_{X^b}d\omega_1\wedge i_{X_b}\delta\omega_2. \end{eqnarray} Hence, by comparing (55) and (57), we obtain \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=-\frac{1}{n-(p+q-2)}e_a\wedge\delta[\omega_1,\omega_2]_{CKY}. \end{equation} This means that $[\omega_1,\omega_2]_{CKY}$ is an even CCKY $(p+q-1)$-form since for $p$ odd and $q$ even, $p+q-1$ is an even number. 
To prove the specialty, we need to apply the covariant derivative $\nabla_{X_a}$ to (57) and this gives us \begin{eqnarray} \nabla_{X_a}\delta[\omega_1,\omega_2]_{CKY}&=&c(n-(p+q-2))\left(i_{X^b}\nabla_{X_a}\omega_1\wedge i_{X_b}\omega_2+i_{X^b}\omega_1\wedge i_{X_b}\nabla_{X_a}\omega_2\right)\nonumber\\ &&-\frac{n-(p+q-2)}{(p+1)(n-q+1)}\left(i_{X^b}\nabla_{X_a}d\omega_1\wedge i_{X_b}\delta\omega_2+i_{X^b}d\omega_1\wedge i_{X_b}\nabla_{X_a}\delta\omega_2\right)\nonumber\\ &=&\frac{c(n-(p+q-2))}{p+1}i_{X^b}i_{X_a}d\omega_1\wedge i_{X_b}\omega_2-\frac{c(n-(p+q-2))}{n-q+1}i_{X_a}\omega_1\wedge\delta\omega_2\nonumber\\ &&+\frac{c(n-(p+q-2))}{n-q+1}\omega_1\wedge i_{X_a}\delta\omega_2-\frac{c(n-(p+q-2))}{p+1}i_{X^b}d\omega_1\wedge i_{X_b}i_{X_a}\omega_2 \end{eqnarray} where we have used (15) and (29) for $\omega_1$ and (17) and (30) for $\omega_2$. By comparing (59) with (53), one can easily see that \begin{equation} \nabla_{X_a}\delta[\omega_1,\omega_2]_{CKY}=c(n-(p+q-2))i_{X_a}[\omega_1,\omega_2]_{CKY}. \end{equation} This means that the CCKY $(p+q-1)$-form $[\omega_1,\omega_2]_{CKY}$ is a special even CCKY form from (30). On the other hand, if we take $\omega_1$ as a special even CCKY $p$-form and $\omega_2$ as a special even CCKY $q$-form, the CKY bracket is written as \begin{equation} [\omega_1,\omega_2]_{CKY}=\frac{1}{n-q+1}\omega_1\wedge\delta\omega_2+\frac{1}{n-p+1}\delta\omega_1\wedge\omega_2.
\end{equation} The covariant derivative of (61) gives \begin{eqnarray} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}&=&\frac{1}{n-q+1}\left(\nabla_{X_a}\omega_1\wedge\delta\omega_2+\omega_1\wedge\nabla_{X_a}\delta\omega_2\right)\nonumber\\ &&+\frac{1}{n-p+1}\left(\nabla_{X_a}\delta\omega_1\wedge\omega_2+\delta\omega_1\wedge\nabla_{X_a}\omega_2\right)\nonumber\\ &=&-\frac{1}{(n-p+1)(n-q+1)}e_a\wedge\delta\omega_1\wedge\delta\omega_2+c\omega_1\wedge i_{X_a}\omega_2\nonumber\\ &&+ci_{X_a}\omega_1\wedge\omega_2+\frac{1}{(n-p+1)(n-q+1)}e_a\wedge\delta\omega_1\wedge\delta\omega_2\nonumber\\ &=&ci_{X_a}\left(\omega_1\wedge\omega_2\right) \end{eqnarray} where we have used (17) and (30). By taking the interior derivative with respect to $X^a$, we find \begin{equation} \delta[\omega_1,\omega_2]_{CKY}=-i_{X^a}\nabla_{X_a}[\omega_1,\omega_2]_{CKY}=0 \end{equation} and the wedge product of (62) with $e^a\wedge$ from the left gives \begin{equation} d[\omega_1,\omega_2]_{CKY}=e^a\wedge\nabla_{X_a}[\omega_1,\omega_2]_{CKY}=c(p+q)\omega_1\wedge\omega_2. \end{equation} So, by comparing (62) and (64), we obtain \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=\frac{1}{p+q}i_{X_a}d[\omega_1,\omega_2]_{CKY}. \end{equation} This shows that $[\omega_1,\omega_2]_{CKY}$ is an odd KY $(p+q-1)$-form since $p+q-1$ is odd when $p$ and $q$ are even. To prove the specialty, we take the covariant derivative of (64) which gives \begin{eqnarray} \nabla_{X_a}d[\omega_1,\omega_2]_{CKY}&=&c(p+q)\left(\nabla_{X_a}\omega_1\wedge\omega_2+\omega_1\wedge\nabla_{X_a}\omega_2\right)\nonumber\\ &=&c(p+q)\left(-\frac{1}{n-p+1}e_a\wedge\delta\omega_1\wedge\omega_2-\frac{1}{n-q+1}\omega_1\wedge e_a\wedge\delta\omega_2\right) \end{eqnarray} where we have used (17). By comparing with (61), one can see that \begin{equation} \nabla_{X_a}d[\omega_1,\omega_2]_{CKY}=-c(p+q)e_a\wedge[\omega_1,\omega_2]_{CKY}. \end{equation} So, the KY $(p+q-1)$-form $[\omega_1,\omega_2]_{CKY}$ is a special odd KY form from (29).
In all cases, the CKY bracket has the property $[\omega_1,\omega_2]_{CKY}=-[\omega_2,\omega_1]_{CKY}$ and satisfies the Jacobi identity with no restriction on the dimension. Hence, we have proved that the special odd KY forms and special even CCKY forms satisfy a Lie algebra structure with respect to the CKY bracket: the bracket of two special odd KY forms is again a special odd KY form, the bracket of a special odd KY form and a special even CCKY form is a special even CCKY form, and the bracket of two special even CCKY forms is a special odd KY form, which is shown diagrammatically as \begin{eqnarray} \text{special odd KY}\times\text{special odd KY}&\longrightarrow&\text{special odd KY}\nonumber\\ \text{special odd KY}\times\text{special even CCKY}&\longrightarrow&\text{special even CCKY}\nonumber\\ \text{special even CCKY}\times\text{special even CCKY}&\longrightarrow&\text{special odd KY}\nonumber \end{eqnarray} \end{proof} On the other hand, if the Killing spinors on a manifold $M$ generate even degree special KY forms and odd degree special CCKY forms, then the Lie algebra structure depends on the dimension as stated in the following \begin{theorem} On a manifold $M$, even degree special KY forms satisfying (15) and (29) and odd degree special CCKY forms satisfying (17) and (30) constitute a Lie algebra structure with respect to the Hodge star of the CKY bracket $*[\,,\,]_{CKY}$ given in (48) in even dimensions. \end{theorem} \begin{proof} We use the property that KY forms and CCKY forms are Hodge duals of each other. We consider the cases of two special even KY forms, two special odd CCKY forms and one special even KY form with one special odd CCKY form separately in even dimensions. For a special even KY $p$-form $\omega_1$ and a special even KY $q$-form $\omega_2$, we have $\delta\omega_1=0$ and $\delta\omega_2=0$ and the CKY bracket in (48) reduces to the SN bracket for special KY forms.
$[\omega_1,\omega_2]_{SN}$ corresponds to a special odd KY $(p+q-1)$-form since $p+q-1$ is odd when $p$ and $q$ are even. However, the Hodge dual of a special odd KY form is a special odd CCKY form in even $n$ dimensions. So, $*[\omega_1,\omega_2]_{CKY}=*[\omega_1,\omega_2]_{SN}$ is a special odd CCKY $(n-(p+q-1))$-form. If we take a special even KY $p$-form $\omega_1$ and a special odd CCKY $q$-form $\omega_2$, then we have $\delta\omega_1=0$ and $d\omega_2=0$. So, the CKY bracket reduces to \begin{equation} [\omega_1,\omega_2]_{CKY}=\frac{1}{p+1}i_{X^a}d\omega_1\wedge i_{X_a}\omega_2+\frac{1}{n-q+1}\omega_1\wedge\delta\omega_2. \end{equation} From the same considerations as in (54) and (55), the covariant derivative of (68) gives \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=ce_a\wedge i_{X^b}\omega_1\wedge i_{X_b}\omega_2+\frac{1}{(p+1)(n-q+1)}e_a\wedge i_{X^b}d\omega_1\wedge i_{X_b}\delta\omega_2. \end{equation} By taking the wedge product with $e^a\wedge$ from the left, we find \begin{equation} d[\omega_1,\omega_2]_{CKY}=0 \end{equation} and the interior derivative of (69) gives \begin{equation} \delta[\omega_1,\omega_2]_{CKY}=-c(n-(p+q-2))i_{X^b}\omega_1\wedge i_{X_b}\omega_2-\frac{n-(p+q-2)}{(p+1)(n-q+1)}i_{X^b}d\omega_1\wedge i_{X_b}\delta\omega_2. \end{equation} Hence, by comparing (69) and (71), we obtain \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=-\frac{1}{n-(p+q-2)}e_a\wedge\delta[\omega_1,\omega_2]_{CKY}. \end{equation} This means that $[\omega_1,\omega_2]_{CKY}$ is an even CCKY $(p+q-1)$-form since for $p$ even and $q$ odd, $p+q-1$ is an even number. So, from the Hodge duality property between KY and CCKY forms, $*[\omega_1,\omega_2]_{CKY}$ is an even KY $(n-(p+q-1))$-form in even $n$ dimensions.
To prove the specialty, we apply the covariant derivative $\nabla_{X_a}$ to (71) and this gives \begin{eqnarray} \nabla_{X_a}\delta[\omega_1,\omega_2]_{CKY}&=&-\frac{c(n-(p+q-2))}{p+1}i_{X^b}i_{X_a}d\omega_1\wedge i_{X_b}\omega_2+\frac{c(n-(p+q-2))}{n-q+1}i_{X_a}\omega_1\wedge\delta\omega_2\nonumber\\ &&+\frac{c(n-(p+q-2))}{n-q+1}\omega_1\wedge i_{X_a}\delta\omega_2-\frac{c(n-(p+q-2))}{p+1}i_{X^b}d\omega_1\wedge i_{X_b}i_{X_a}\omega_2 \end{eqnarray} by using the same procedures as in (59). The comparison of (73) with (68) shows that we have \begin{equation} \nabla_{X_a}\delta[\omega_1,\omega_2]_{CKY}=c(n-(p+q-2))i_{X_a}[\omega_1,\omega_2]_{CKY} \end{equation} and this means that $[\omega_1,\omega_2]_{CKY}$ is a special even CCKY $(p+q-1)$-form and $*[\omega_1,\omega_2]_{CKY}$ is a special even KY $(n-(p+q-1))$-form in even dimensions. On the other hand, if we take $\omega_1$ as a special odd CCKY $p$-form and $\omega_2$ as a special odd CCKY $q$-form, the CKY bracket corresponds to \begin{equation} [\omega_1,\omega_2]_{CKY}=-\frac{1}{n-q+1}\omega_1\wedge\delta\omega_2+\frac{1}{n-p+1}\delta\omega_1\wedge\omega_2 \end{equation} and the covariant derivative of it gives \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=ci_{X_a}(\omega_1\wedge\omega_2). \end{equation} By using (76), the exterior and co-derivatives can be found as \begin{eqnarray} d[\omega_1,\omega_2]_{CKY}&=&c(p+q)\omega_1\wedge\omega_2\\ \delta[\omega_1,\omega_2]_{CKY}&=&0. \end{eqnarray} From the comparison of (76) and (77), one can see that \begin{equation} \nabla_{X_a}[\omega_1,\omega_2]_{CKY}=\frac{1}{p+q}i_{X_a}d[\omega_1,\omega_2]_{CKY} \end{equation} and hence $[\omega_1,\omega_2]_{CKY}$ is an odd KY $(p+q-1)$-form since $p$ and $q$ are odd. This means that $*[\omega_1,\omega_2]_{CKY}$ is an odd CCKY $(n-(p+q-1))$-form in even $n$ dimensions.
The covariant derivative of (77) gives \begin{equation} \nabla_{X_a}d[\omega_1,\omega_2]_{CKY}=c(p+q)e_a\wedge\left(-\frac{1}{n-p+1}\delta\omega_1\wedge\omega_2+\frac{1}{n-q+1}\omega_1\wedge\delta\omega_2\right) \end{equation} and we have \begin{equation} \nabla_{X_a}d[\omega_1,\omega_2]_{CKY}=-c(p+q)e_a\wedge[\omega_1,\omega_2]_{CKY}. \end{equation} So, $[\omega_1,\omega_2]_{CKY}$ is a special odd KY $(p+q-1)$-form and $*[\omega_1,\omega_2]_{CKY}$ is a special odd CCKY $(n-(p+q-1))$-form in even $n$ dimensions. In all cases, the Hodge star of the CKY bracket has the property $*[\omega_1,\omega_2]_{CKY}=-*[\omega_2,\omega_1]_{CKY}$ and satisfies the Jacobi identity. Hence, we have proved that the special even KY forms and special odd CCKY forms satisfy a Lie algebra structure with respect to the bracket $*[\,,\,]_{CKY}$ in even dimensions: the bracket of two special even KY forms is a special odd CCKY form, the bracket of a special even KY form and a special odd CCKY form is a special even KY form, and the bracket of two special odd CCKY forms is again a special odd CCKY form, which is shown diagrammatically as \begin{eqnarray} \text{special even KY}\times\text{special even KY}&\longrightarrow&\text{special odd CCKY}\nonumber\\ \text{special even KY}\times\text{special odd CCKY}&\longrightarrow&\text{special even KY}\nonumber\\ \text{special odd CCKY}\times\text{special odd CCKY}&\longrightarrow&\text{special odd CCKY}\nonumber \end{eqnarray} \end{proof} In odd dimensions, special even KY forms and special odd CCKY forms do not satisfy a Lie algebra with respect to the bracket $*[\,,\,]_{CKY}$. In that case, two special even KY forms give a special even CCKY form, one special even KY form and one special odd CCKY form give a special odd KY form, and two special odd CCKY forms give a special even CCKY form. So, special even KY forms and special odd CCKY forms do not close into an algebra with respect to $*[\,,\,]_{CKY}$. This is also true for the bracket $[\,,\,]_{CKY}$.
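The failure in odd dimensions can be traced to a simple parity count. The CKY bracket of a $p$-form and a $q$-form has degree $p+q-1$, while the Hodge star sends degree $k$ to $n-k$. For two special even KY forms, for example,
\[
\deg *[\omega_1,\omega_2]_{CKY}=n-(p+q-1)=\text{odd}-\text{odd}=\text{even}
\]
when $n$ is odd, so $*[\omega_1,\omega_2]_{CKY}$ is an even CCKY form, which lies outside the set of special even KY and special odd CCKY forms.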
Unlike the case of special odd KY and special even CCKY forms, the algebra structure of special even KY and special odd CCKY forms is only relevant in even dimensions. \section{Symmetry Operators} Killing spinors and KY forms are also related to each other in another way. For a Killing vector field $K$, the Lie derivative $\pounds_K$ acting on a spinor $\psi$ is defined as follows \cite{Kosmann} \begin{equation} \pounds_K\psi=\nabla_K\psi+\frac{1}{4}d\widetilde{K}.\psi \end{equation} where $\widetilde{K}$ is the 1-form which corresponds to the metric dual of $K$. It is known that $\pounds_K$ is a symmetry operator of the Killing spinor equation for all Killing vector fields $K$ \cite{Benn Tucker}. This means that if $\psi$ is a Killing spinor, then $\pounds_K\psi$ is also a Killing spinor, namely we have \begin{equation} \nabla_X\pounds_K\psi=\lambda\widetilde{X}.\pounds_K\psi \end{equation} for all vector fields $X$ and all Killing vector fields $K$. Moreover, if we generalize the operator in (82) to a KY $p$-form $\omega$ as \begin{equation} L_{\omega}=(i_{X^a}\omega).\nabla_{X_a}+\frac{p}{2(p+1)}d\omega \end{equation} and apply it to a Killing spinor $\psi$, we find \begin{equation} L_{\omega}\psi=(-1)^{p-1}\lambda p\omega.\psi+\frac{p}{2(p+1)}d\omega.\psi. \end{equation} If the degree $p$ of the KY $p$-form $\omega$ is odd, then we have \begin{equation} L_{\omega}\psi=\lambda p\omega.\psi+\frac{p}{2(p+1)}d\omega.\psi \end{equation} and it is shown in \cite{Ertem1} that this operator with respect to an odd KY $p$-form $\omega$ is also a symmetry operator for the Killing spinor equation in constant curvature manifolds. So, in that case we have \begin{equation} \nabla_XL_{\omega}\psi=\lambda\widetilde{X}.L_{\omega}\psi. 
\end{equation} The operator defined in (84) is a special case of the symmetry operator of twistor spinors in constant curvature manifolds written in terms of CKY $p$-forms $\omega$ as \begin{equation} L_{\omega}=(i_{X^a}\omega).\nabla_{X_a}+\frac{p}{2(p+1)}d\omega+\frac{p}{2(n-p+1)}\delta\omega \end{equation} which is proved in \cite{Ertem5, Ertem2}. We show that this symmetry operator construction can be generalized to all manifolds admitting special KY and special CCKY forms. \begin{theorem} For every special odd KY $p$-form $\omega$ and every special even CCKY $p$-form $\omega$ generated by a Killing spinor with Killing number $\lambda$, the following operator \begin{equation} L_{\omega}=(i_{X^a}\omega).\nabla_{X_a}+\frac{p}{2(p+1)}d\omega+\frac{p}{2(n-p+1)}\delta\omega \end{equation} is a symmetry operator for the Killing spinor equation (11). \end{theorem} \begin{proof} To prove the theorem, we have to show that when $\omega$ is a special odd KY $p$-form or a special even CCKY $p$-form and $\psi$ is a Killing spinor, the operator $L_{\omega}$ in (89) satisfies (87). If $\omega$ is a special odd KY $p$-form, then we have $\delta\omega=0$ and the action of (89) on a Killing spinor $\psi$ is written as \begin{eqnarray} L_{\omega}\psi&=&(i_{X^a}\omega).\nabla_{X_a}\psi+\frac{p}{2(p+1)}d\omega.\psi\nonumber\\ &=&\lambda(i_{X^a}\omega).e_a.\psi+\frac{p}{2(p+1)}d\omega.\psi\nonumber\\ &=&\lambda p\omega.\psi+\frac{p}{2(p+1)}d\omega.\psi \end{eqnarray} where we have used $(i_{X^a}\omega).e_a=e_a\wedge i_{X^a}\omega=p\omega$ since $p$ is odd and $i_{X_a}i_{X^a}=0$.
By taking the covariant derivative $\nabla_{X_a}$ with respect to a basis $X_a$ of vector fields, we can calculate the left hand side of (87) as \begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&\lambda p\left(\nabla_{X_a}\omega.\psi+\omega.\nabla_{X_a}\psi\right)+\frac{p}{2(p+1)}\left(\nabla_{X_a}d\omega.\psi+d\omega.\nabla_{X_a}\psi\right)\nonumber\\ &=&\lambda\frac{p}{p+1}i_{X_a}d\omega.\psi+\lambda^2p\omega.e_a.\psi-\frac{cp}{2}(e_a\wedge\omega).\psi+\lambda\frac{p}{2(p+1)}d\omega.e_a.\psi \end{eqnarray} where we have used (11), (15) and (29). Since the odd KY $p$-form $\omega$ is generated by a Killing spinor with the Killing number $\lambda$, we have $c=-4\lambda^2$ from (32). By using (19), we can write $\omega.e_a=-e_a\wedge\omega+i_{X_a}\omega$ and $d\omega.e_a=e_a\wedge d\omega-i_{X_a}d\omega$ since $\omega$ is an odd form. Then, we have \begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&\lambda\frac{p}{p+1}i_{X_a}d\omega.\psi+\lambda^2p(-e_a\wedge\omega+i_{X_a}\omega).\psi+2\lambda^2p(e_a\wedge\omega).\psi+\lambda\frac{p}{2(p+1)}(e_a\wedge d\omega-i_{X_a}d\omega).\psi\nonumber\\ &=&\lambda^2p(e_a\wedge\omega+i_{X_a}\omega).\psi+\lambda\frac{p}{2(p+1)}(e_a\wedge d\omega+i_{X_a}d\omega).\psi \end{eqnarray} From (18) and the operator $L_{\omega}$ in (90), we obtain \begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&\lambda^2pe_a.\omega.\psi+\lambda\frac{p}{2(p+1)}e_a.d\omega.\psi\nonumber\\ &=&\lambda e_a.L_{\omega}\psi \end{eqnarray} which is the equation (87) for the frame basis $X_a$ and co-frame basis $e^a$. So, $L_{\omega}$ defined in (89) with respect to a special odd KY form $\omega$ is a symmetry operator of the Killing spinor equation. 
On the other hand, if $\omega$ is a special even CCKY $p$-form, then we have $d\omega=0$ and the action of (89) on a Killing spinor $\psi$ is written as \begin{eqnarray} L_{\omega}\psi&=&(i_{X^a}\omega).\nabla_{X_a}\psi+\frac{p}{2(n-p+1)}\delta\omega.\psi\nonumber\\ &=&\lambda(i_{X^a}\omega).e_a.\psi+\frac{p}{2(n-p+1)}\delta\omega.\psi\nonumber\\ &=&-\lambda p\omega.\psi+\frac{p}{2(n-p+1)}\delta\omega.\psi \end{eqnarray} where we have used $(i_{X^a}\omega).e_a=-e_a\wedge i_{X^a}\omega=-p\omega$ since $p$ is even and $i_{X_a}i_{X^a}=0$. By taking the covariant derivative $\nabla_{X_a}$ with respect to a basis $X_a$ of vector fields, the left hand side of (87) gives \begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&-\lambda p\left(\nabla_{X_a}\omega.\psi+\omega.\nabla_{X_a}\psi\right)+\frac{p}{2(n-p+1)}\left(\nabla_{X_a}\delta\omega.\psi+\delta\omega.\nabla_{X_a}\psi\right)\nonumber\\ &=&\lambda\frac{p}{n-p+1}(e_a\wedge\delta\omega).\psi-\lambda^2p\omega.e_a.\psi+\frac{cp}{2}i_{X_a}\omega.\psi+\lambda\frac{p}{2(n-p+1)}\delta\omega.e_a.\psi \end{eqnarray} where we have used (11), (17) and (30). Since the even CCKY $p$-form $\omega$ is generated by a Killing spinor with the Killing number $\lambda$, we have $c=-4\lambda^2$ from (34). By using (19), we can write $\omega.e_a=e_a\wedge\omega-i_{X_a}\omega$ and $\delta\omega.e_a=-e_a\wedge\delta\omega+i_{X_a}\delta\omega$ since $\omega$ is an even form. 
Then, we have \begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&\lambda\frac{p}{n-p+1}(e_a\wedge\delta\omega).\psi-\lambda^2p(e_a\wedge\omega-i_{X_a}\omega).\psi-2\lambda^2pi_{X_a}\omega.\psi+\lambda\frac{p}{2(n-p+1)}(-e_a\wedge\delta\omega+i_{X_a}\delta\omega).\psi\nonumber\\ &=&-\lambda^2p(e_a\wedge\omega+i_{X_a}\omega).\psi+\lambda\frac{p}{2(n-p+1)}(e_a\wedge\delta\omega+i_{X_a}\delta\omega).\psi \end{eqnarray} From (18) and the operator $L_{\omega}$ in (94), we obtain \begin{eqnarray} \nabla_{X_a}L_{\omega}\psi&=&-\lambda^2pe_a.\omega.\psi+\lambda\frac{p}{2(n-p+1)}e_a.\delta\omega.\psi\nonumber\\ &=&\lambda e_a.L_{\omega}\psi \end{eqnarray} which is the equation (87) for the frame basis $X_a$ and co-frame basis $e^a$. So, $L_{\omega}$ defined in (89) with respect to a special even CCKY form $\omega$ is a symmetry operator of the Killing spinor equation. \end{proof} For the case of special even KY forms and special odd CCKY forms, we have \begin{theorem} For every special even KY $p$-form $\omega$ and every special odd CCKY $p$-form $\omega$ generated by a Killing spinor with Killing number $\lambda$, the following operator \begin{equation} K_{\omega}=(i_{X^a}\omega).z.\nabla_{X_a}+\frac{p}{2(p+1)}d\omega.z+\frac{p}{2(n-p+1)}\delta\omega.z \end{equation} is a symmetry operator for the Killing spinor equation (11) in even $n$ dimensions. Here $z$ is the volume form. \end{theorem} \begin{proof} To prove the theorem, we need to show that when $\omega$ is a special even KY $p$-form or a special odd CCKY $p$-form and $\psi$ is a Killing spinor, the operator $K_{\omega}$ in (98) satisfies \begin{equation} \nabla_{X_a}K_{\omega}\psi=\lambda e_a.K_{\omega}\psi.
\end{equation} If $\omega$ is a special even KY $p$-form, then we have $\delta\omega=0$ and (98) turns into \begin{eqnarray} K_{\omega}\psi&=&(i_{X^a}\omega).z.\nabla_{X_a}\psi+\frac{p}{2(p+1)}d\omega.z.\psi\nonumber\\ &=&\lambda(i_{X^a}\omega).z.e_a.\psi+\frac{p}{2(p+1)}d\omega.z.\psi \end{eqnarray} where we have used the Killing spinor equation (11). In even dimensions, the volume form $z$ anticommutes with all of the generators of the Clifford algebra and we have $z.e_a=-e_a.z$. Moreover, since $\omega$ is an even form, $i_{X^a}\omega$ is an odd form and we have $i_{X^a}\omega.e_a=-e_a\wedge i_{X^a}\omega=-p\omega$. So, we obtain \begin{eqnarray} K_{\omega}\psi&=&-\lambda(i_{X^a}\omega).e_a.z.\psi+\frac{p}{2(p+1)}d\omega.z.\psi\nonumber\\ &=&\lambda p\omega.z.\psi+\frac{p}{2(p+1)}d\omega.z.\psi. \end{eqnarray} To find the left hand side of (99), we can take the covariant derivative of (101) with respect to $X_a$ and this gives \begin{eqnarray} \nabla_{X_a}K_{\omega}\psi&=&\lambda p\left(\nabla_{X_a}\omega.z.\psi+\omega.z.\nabla_{X_a}\psi\right)+\frac{p}{2(p+1)}\left(\nabla_{X_a}d\omega.z.\psi+d\omega.z.\nabla_{X_a}\psi\right)\nonumber\\ &=&\lambda\frac{p}{p+1}i_{X_a}d\omega.z.\psi+\lambda^2p\omega.z.e_a.\psi-\frac{cp}{2}(e_a\wedge\omega).z.\psi+\lambda\frac{p}{2(p+1)}d\omega.z.e_a.\psi \end{eqnarray} where we have used the fact that $\nabla$ is metric compatible and so $\nabla_{X_a}z=0$ and the equations (11), (15) and (29). Since the even KY $p$-form $\omega$ is generated by a Killing spinor with the Killing number $\lambda$, we have $c=-4\lambda^2$ from (32). 
By using the relation $z.e_a=-e_a.z$ and the identities $\omega.e_a=e_a\wedge\omega-i_{X_a}\omega$ and $d\omega.e_a=-e_a\wedge d\omega+i_{X_a}d\omega$ resulting from (19) since $\omega$ is an even form, we obtain \begin{eqnarray} \nabla_{X_a}K_{\omega}\psi&=&\lambda\frac{p}{p+1}i_{X_a}d\omega.z.\psi-\lambda^2p\omega.e_a.z.\psi+2\lambda^2p(e_a\wedge\omega).z.\psi-\lambda\frac{p}{2(p+1)}d\omega.e_a.z.\psi\nonumber\\ &=&\lambda\frac{p}{p+1}i_{X_a}d\omega.z.\psi-\lambda^2p(e_a\wedge\omega-i_{X_a}\omega).z.\psi\nonumber\\ &&+2\lambda^2p(e_a\wedge\omega).z.\psi+\lambda\frac{p}{2(p+1)}(e_a\wedge d\omega-i_{X_a}d\omega).z.\psi\nonumber\\ &=&\lambda^2p(e_a\wedge\omega+i_{X_a}\omega).z.\psi+\lambda\frac{p}{2(p+1)}(e_a\wedge d\omega+i_{X_a}d\omega).z.\psi. \end{eqnarray} From (18) and the operator $K_{\omega}$ in (101), we have \begin{eqnarray} \nabla_{X_a}K_{\omega}\psi&=&\lambda^2pe_a.\omega.z.\psi+\lambda\frac{p}{2(p+1)}e_a.d\omega.z.\psi\nonumber\\ &=&\lambda e_a.K_{\omega}\psi \end{eqnarray} which is the equation (99). So, $K_{\omega}$ defined in (98) with respect to a special even KY form $\omega$ is a symmetry operator of the Killing spinor equation in even dimensions. On the other hand, if $\omega$ is a special odd CCKY $p$-form, then we have $d\omega=0$ and (98) turns into \begin{eqnarray} K_{\omega}\psi&=&(i_{X^a}\omega).z.\nabla_{X_a}\psi+\frac{p}{2(n-p+1)}\delta\omega.z.\psi\nonumber\\ &=&-\lambda(i_{X^a}\omega).e_a.z.\psi+\frac{p}{2(n-p+1)}\delta\omega.z.\psi \end{eqnarray} where we have used the Killing spinor equation (11) and the equality $z.e_a=-e_a.z$ in even dimensions. Moreover, since $\omega$ is an odd form, $i_{X^a}\omega$ is an even form and we have $i_{X^a}\omega.e_a=e_a\wedge i_{X^a}\omega=p\omega$. So, we obtain \begin{eqnarray} K_{\omega}\psi&=&-\lambda p\omega.z.\psi+\frac{p}{2(n-p+1)}\delta\omega.z.\psi. 
\end{eqnarray} To find the left hand side of (99), we can take the covariant derivative of (106) with respect to $X_a$ and this gives \begin{eqnarray} \nabla_{X_a}K_{\omega}\psi&=&-\lambda p\left(\nabla_{X_a}\omega.z.\psi+\omega.z.\nabla_{X_a}\psi\right)+\frac{p}{2(n-p+1)}\left(\nabla_{X_a}\delta\omega.z.\psi+\delta\omega.z.\nabla_{X_a}\psi\right)\nonumber\\ &=&\lambda\frac{p}{n-p+1}(e_a\wedge\delta\omega).z.\psi-\lambda^2p\omega.z.e_a.\psi+\frac{cp}{2}i_{X_a}\omega.z.\psi+\lambda\frac{p}{2(n-p+1)}\delta\omega.z.e_a.\psi \end{eqnarray} where we have used $\nabla_{X_a}z=0$ and the equations (11), (17) and (30). Since the odd CCKY $p$-form $\omega$ is generated by a Killing spinor with the Killing number $\lambda$, we have $c=-4\lambda^2$ from (32). By using the relation $z.e_a=-e_a.z$ and the identities $\omega.e_a=-e_a\wedge\omega+i_{X_a}\omega$ and $\delta\omega.e_a=e_a\wedge\delta\omega-i_{X_a}\delta\omega$ resulting from (19) since $\omega$ is an odd form, we obtain \begin{eqnarray} \nabla_{X_a}K_{\omega}\psi&=&\lambda\frac{p}{n-p+1}(e_a\wedge\delta\omega).z.\psi+\lambda^2p\omega.e_a.z.\psi-2\lambda^2pi_{X_a}\omega.z.\psi-\lambda\frac{p}{2(n-p+1)}\delta\omega.e_a.z.\psi\nonumber\\ &=&\lambda\frac{p}{n-p+1}(e_a\wedge\delta\omega).z.\psi-\lambda^2p(e_a\wedge\omega-i_{X_a}\omega).z.\psi\nonumber\\ &&-2\lambda^2pi_{X_a}\omega.z.\psi-\lambda\frac{p}{2(n-p+1)}(e_a\wedge\delta\omega-i_{X_a}\delta\omega).z.\psi\nonumber\\ &=&-\lambda^2p(e_a\wedge\omega+i_{X_a}\omega).z.\psi+\lambda\frac{p}{2(n-p+1)}(e_a\wedge\delta\omega+i_{X_a}\delta\omega).z.\psi. \end{eqnarray} From (18) and the operator $K_{\omega}$ in (106), we have \begin{eqnarray} \nabla_{X_a}K_{\omega}\psi&=&-\lambda^2pe_a.\omega.z.\psi+\lambda\frac{p}{2(n-p+1)}e_a.\delta\omega.z.\psi\nonumber\\ &=&\lambda e_a.K_{\omega}\psi \end{eqnarray} which is the equation (99). 
So, $K_{\omega}$ defined in (98) with respect to a special odd CCKY form $\omega$ is also a symmetry operator of the Killing spinor equation in even dimensions. \end{proof} In odd dimensions, the volume form $z$ is at the center of the Clifford algebra and hence commutes with all of the elements. So, we have $z.e_a=e_a.z$. However, in that case the operator defined in (98) with respect to special even KY and special odd CCKY forms does not satisfy the symmetry operator condition (99) for the Killing spinor equation (11). So, there is no companion of Theorem 4 in odd dimensions. This fact resembles the non-existence of the Lie algebra structure of special even KY and special odd CCKY forms in odd dimensions discussed in Section 3 and we will see that they are related to each other. \section{Generalized Symmetry Superalgebras} We will see in this section that the structures considered in the previous sections such as the bilinears of Killing spinors, the Lie algebra structure of special KY and special CCKY forms and the symmetry operators of Killing spinors constructed out of them together constitute a superalgebra structure. \begin{definition} A superalgebra $\mathfrak{g}$ is a $\mathbb{Z}_2$-graded algebra that consists of a direct sum of two components $\mathfrak{g}=\mathfrak{g}_0\oplus\mathfrak{g}_1$ on which a bilinear operation \begin{equation} [\,,\,]:\mathfrak{g}_i\times\mathfrak{g}_j\rightarrow\mathfrak{g}_{i+j} \end{equation} is defined. Here $i,j=0,1(\text{mod }2)$ and $[\,,\,]$ satisfies the condition \begin{equation} [a,b]=-(-1)^{|a||b|}[b,a] \end{equation} for $a,b\in\mathfrak{g}$ and $|\,|$ denotes the degree of the element which is 0 or 1 depending on that it is in $\mathfrak{g}_0$ or $\mathfrak{g}_1$. The first component $\mathfrak{g}_0$ is called the even part and the second component $\mathfrak{g}_1$ is called the odd part of the superalgebra. 
Moreover, if $[\,,\,]$ satisfies the following graded Jacobi identity \begin{equation} [a,[b,c]]=[[a,b],c]+(-1)^{|a||b|}[b,[a,c]] \end{equation} then $\mathfrak{g}$ is called a Lie superalgebra. \end{definition} By considering the Killing spinors and Killing vector fields corresponding to the Dirac currents of Killing spinors on a manifold $M$, a superalgebra structure called a symmetry superalgebra can be defined. \begin{definition} A superalgebra $\mathfrak{k}=\mathfrak{k}_0\oplus\mathfrak{k}_1$ on $M$ is called a symmetry superalgebra if its even part $\mathfrak{k}_0$ consists of the Lie algebra of the Killing vector fields on $M$ and the odd part $\mathfrak{k}_1$ corresponds to the space of Killing spinors on $M$. The bilinear operation is defined as follows. The even-even bracket corresponds to the Lie bracket of vector fields defined in (35) \begin{equation} [\,,\,]:\mathfrak{k}_0\times\mathfrak{k}_0\rightarrow\mathfrak{k}_0 \end{equation} The even-odd bracket is the Lie derivative on spinor fields with respect to Killing vector fields defined in (82) \begin{equation} \pounds:\mathfrak{k}_0\times\mathfrak{k}_1\rightarrow\mathfrak{k}_1 \end{equation} and the odd-odd bracket is the Dirac current of a spinor defined in (7) \begin{equation} (\,\,)_1:\mathfrak{k}_1\times\mathfrak{k}_1\rightarrow\mathfrak{k}_0 \end{equation} These brackets satisfy the property (111). \end{definition} The Jacobi identities of the symmetry superalgebra correspond to the properties of the Lie bracket of vector fields and the Lie derivative on spinors. The even-even-even Jacobi identity is automatically satisfied since it is the Jacobi identity for the Lie bracket of vector fields \begin{equation} [K_1,[K_2,K_3]]+[K_2,[K_3,K_1]]+[K_3,[K_1,K_2]]=0 \end{equation} where $K_1, K_2, K_3 \in\mathfrak{k}_0$. 
The even-even-odd and even-odd-odd Jacobi identites correspond to the following properties of the Lie derivetive on spinors and are automatically satisfied \begin{eqnarray} [\pounds_{K_1},\pounds_{K_2}]\psi&=&\pounds_{[K_1,K_2]}\psi\\ {\cal{L}}_K(\psi\overline{\phi})&=&(\pounds_K\psi)\overline{\phi}+\psi(\overline{\pounds_K\phi}) \end{eqnarray} where $K, K_1, K_2\in\mathfrak{k}_0$, $\psi, \phi\in\mathfrak{k}_1$ and ${\cal{L}}$ is the Lie derivative on Clifford forms. However, the odd-odd-odd Jacobi identity for $\psi\in\mathfrak{k}_1$ \begin{equation} \pounds_{V_{\psi}}\psi=0 \end{equation} is not automatically satisfied and in the cases that it is satisfied the symmetry superalgebra is a Lie superalgebra. The construction of the symmetry superalgebra can be extended to include the odd degree KY forms \cite{Ertem1, Ertem2}. \begin{definition} A superalgebra $\overline{\mathfrak{k}}=\overline{\mathfrak{k}}_0\oplus\overline{\mathfrak{k}}_1$ on $M$ is called an extended symmetry superalgebra if its even part $\overline{\mathfrak{k}}_0$ consists of the Lie algebra of the odd degree KY forms on $M$ and the odd part $\mathfrak{k}_1$ corresponds to the space of Killing spinors on $M$. The bilinear operation is defined as follows. The even-even bracket corresponds to the SN bracket of KY forms defined in (39) \begin{equation} [\,,\,]_{SN}:\overline{\mathfrak{k}}_0\times\overline{\mathfrak{k}}_0\rightarrow\overline{\mathfrak{k}}_0 \end{equation} The even-odd bracket is the symmetry operators of Killing spinors constructed out of odd KY forms defined in (86) \begin{equation} L:\overline{\mathfrak{k}}_0\times\overline{\mathfrak{k}}_1\rightarrow\overline{\mathfrak{k}}_1 \end{equation} and the odd-odd bracket is the $p$-form Dirac currents of Killing spinors defined in (8) \begin{equation} (\,\,)_p:\overline{\mathfrak{k}}_1\times\overline{\mathfrak{k}}_1\rightarrow\overline{\mathfrak{k}}_0 \end{equation} These brackets satisfy the property (111). 
\end{definition} Although the even-even-even Jacobi identity of the extended symmetry superalgebra is automatically satisfied since it corresponds to the Jacobi identity for the SN bracket of odd degree forms, other Jacobi identities are not automatically satisfied and in general the extended symmetry superalgebra is not a Lie superalgebra. We generalize the construction of symmetry and extended symmetry superalgebras to include all special KY and special CCKY forms generated by Killing spinors on the manifold $M$. By using Proposition 1, Theorem 1 and Theorem 3, the generalized symmetry superalgebras are constructed as in the following \begin{theorem} On an $n$-dimensional manifold $M$ admitting Killing spinors, if the bilinears of Killing spinors correspond to the special odd KY and special even CCKY forms, then the space of Killing spinors and special odd KY and special even CCKY forms generated by them constitute a superalgebra structure which we denote by $\mathfrak{K}=\mathfrak{K}_0\oplus\mathfrak{K}_1$. The even part $\mathfrak{K}_0$ of the superalgebra is the Lie algebra of the special odd KY and special even CCKY forms with respect to the CKY bracket and the odd part $\mathfrak{K}_1$ is the space of Killing spinors. This superalgebra is called the generalized symmetry superalgebra and the bilinear operation is defined as follows. 
The even-even bracket is the CKY bracket of the special odd KY and special even CCKY forms defined in (48) \begin{equation} [\,,\,]_{CKY}:\mathfrak{K}_0\otimes\mathfrak{K}_0\rightarrow\mathfrak{K}_0 \end{equation} The even-odd bracket is the symmetry operators of Killing spinors generated from the special odd KY and special even CCKY forms defined in (89) \begin{equation} L:\mathfrak{K}_0\otimes\mathfrak{K}_1\rightarrow\mathfrak{K}_1 \end{equation} The odd-odd bracket is the $p$-form bilinears of Killing spinors defined in (8) \begin{equation} (\,\,)_p:\mathfrak{K}_1\otimes\mathfrak{K}_1\rightarrow\mathfrak{K}_0 \end{equation} These brackets satisfy the condition (111) and hence constitute a superalgebra structure. \end{theorem} \begin{proof} As we proved in Theorem 1, the CKY bracket $[\,,\,]_{CKY}$ is a Lie bracket for special odd KY and special even CCKY forms and $\mathfrak{K}_0$ is a Lie algebra with respect to the bracket in (123). We have also proved in Theorem 3 that the operator (89) constructed out of special odd KY forms and special even CCKY forms are symmetry operators for Killing spinors. Hence, the bracket given in (124) is well-defined for $\mathfrak{K}_0$ and $\mathfrak{K}_1$. Moreover, we have also shown in Proposition 1 that the symmetric combination of $p$-form bilinears of two Killing spinors correspond to special odd KY and special even CCKY forms in relevant situations. So, the bracket defined in (125) is also well-defined for $\mathfrak{K}_0$ and $\mathfrak{K_1}$. As a consequence, $\mathfrak{K}=\mathfrak{K}_0\oplus\mathfrak{K}_1$ constitute a consistent superalgebra structure. \end{proof} On the other hand, we can also construct a superalgebra in even dimensions for the case of special even KY forms and special odd CCKY forms generated by the Killing spinors. 
\begin{theorem} On an even $n$-dimensional manifold $M$ admitting Killing spinors, if the bilinears of Killing spinors correspond to the special even KY and special odd CCKY forms, then the space of Killing spinors and special even KY and special odd CCKY forms generated by them constitute a superalgebra structure which we denote by $\overline{\mathfrak{K}}=\overline{\mathfrak{K}}_0\oplus\overline{\mathfrak{K}}_1$. The even part $\overline{\mathfrak{K}}_0$ of the superalgebra is the Lie algebra of the special even KY and special odd CCKY forms with respect to the CKY bracket and the odd part $\overline{\mathfrak{K}}_1$ is the space of Killing spinors. This superalgebra is called the generalized symmetry superalgebra and the bilinear operation is defined as follows. The even-even bracket is the Hodge star of the CKY bracket of the special even KY and special odd CCKY forms defined in Theorem 2 \begin{equation} *[\,,\,]_{CKY}:\overline{\mathfrak{K}}_0\otimes\overline{\mathfrak{K}}_0\rightarrow\overline{\mathfrak{K}}_0 \end{equation} The even-odd bracket is the symmetry operators of Killing spinors generated from the special even KY and special odd CCKY forms defined in (98) \begin{equation} K:\overline{\mathfrak{K}}_0\otimes\overline{\mathfrak{K}}_1\rightarrow\overline{\mathfrak{K}}_1 \end{equation} The odd-odd bracket is the $p$-form bilinears of Killing spinors defined in (8) \begin{equation} (\,\,)_p:\overline{\mathfrak{K}}_1\otimes\overline{\mathfrak{K}}_1\rightarrow\overline{\mathfrak{K}}_0 \end{equation} These brackets satisfy the condition (111) and hence constitute a superalgebra structure in even dimensions. \end{theorem} \begin{proof} As we have shown in Theorem 2, the Hodge star of the CKY bracket $*[\,,\,]_{CKY}$ is a Lie bracket for special even KY and special odd CCKY forms in even dimensions and $\overline{\mathfrak{K}}_0$ is a Lie algebra with respect to the bracket in (126). 
We have also shown in Theorem 4 that the operator (98) constructed out of special even KY forms and special odd CCKY forms are symmetry operators for Killing spinors in even dimensions. Hence, the bracket given in (127) is well-defined for $\overline{\mathfrak{K}}_0$ and $\overline{\mathfrak{K}}_1$. Moreover, we have also shown in Proposition 1 that the symmetric combination of $p$-form bilinears of two Killing spinors correspond to special even KY and special odd CCKY forms in relevant situations. So, the bracket defined in (128) is also well-defined for $\overline{\mathfrak{K}}_0$ and $\overline{\mathfrak{K}}_1$. As a consequence, $\overline{\mathfrak{K}}=\overline{\mathfrak{K}}_0\oplus\overline{\mathfrak{K}}_1$ constitute a consistent superalgebra structure in even dimensions. \end{proof} Note that there is no superalgebra structure for special even KY and special odd CCKY forms generated by Killing spinors in odd dimensions. The reason for this is that the bracket $*[\,,\,]_{CKY}$ is not a Lie bracket and the operator $K$ is not a symmetry operator in odd dimensions. The even-even-even Jacobi identities of the superalgebra structures constructed in Theorems 5 and 6 are automatically satisfied because of the properties of $[\,,\,]_{CKY}$ and $*[\,,\,]_{CKY}$ brackets. However, other Jacobi identities are not automatically satisfied and the superalgebras does not correspond to Lie superalgebras in general. \section{Examples} As examples for the generalized symmetry superalgebras constructed in Section 5, we consider special types of 6 and 7-dimensional compact Riemannian manifolds called nearly K\"{a}hler and weak $G_2$ manifolds \cite{Joyce, Besse}. \subsection{Weak $G_2$ Manifolds} \begin{definition} A 7-dimensional compact Riemannian manifold $M$ is called a weak $G_2$ manifold, if its metric cone has $Spin(7)$ holonomy. 
$M$ admits one Killing spinor $\psi$ satisfying the Killing spinor equation (11) \[ \nabla_{X}\psi=\lambda\widetilde{X}.\psi \] for any vector field $X$ and the Killing number $\lambda$. This implies that $M$ is an Einstein manifold \cite{Friedrich Kath Moroianu Semmelmann}. \end{definition} For a 7-dimensional Riemannian manifold $M$, the Clifford algebra defined on the tangent bundle $TM$ is isomorphic to $Cl_{7,0}\cong\mathbb{C}(8)$ where $\mathbb{C}(8)$ is the space of $8\times 8$ dimensional complex matrices. The even subalgebra of a real Clifford algebra can be found from the one lower dimensional Clifford algebra by the relation $Cl^0_{p+1,q}\cong Cl_{q,p}$ for any $p$ and $q$. So, the even subalgebra for the 7-dimensional Riemannian case is $Cl^0_{7,0}\cong Cl_{0,6}\cong\mathbb{R}(8)$. The space carrying the irreducible representations of the even subalgebra corresponds to the spinor space $S$ and in our case we have $S\cong\mathbb{R}^8$. Then, spinors are real and correspond to Majorana spinors which means that the Killing spinor $\psi$ is a real Killing spinor and the Killing number $\lambda$ is a real number. The spin invariant inner product defined on $S$ is a $\mathbb{R}$-symmetric inner product with $\xi\eta$ involution. So, we have ${\cal{J}}=\xi\eta$, $j=\text{Id}$ and the following properties for the inner product \begin{eqnarray} (\psi,\phi)&=&(\phi,\psi)\\ (\psi,\alpha.\phi)&=&(\alpha^{\xi\eta}.\psi,\phi) \end{eqnarray} where $\psi,\phi\in S$ and $\alpha\in Cl(M)$. 
Now, we can calculate the $p$-form Dirac currents of the Killing spinor $\psi$; \begin{eqnarray} (\psi\overline{\psi})_0&=&(\psi,\psi)\neq 0\nonumber\\ (\psi\overline{\psi})_1&=&(\psi,e_a.\psi)e^a=(e_a^{\xi\eta}.\psi,\psi)e^a=-(e_a.\psi,\psi)e^a=-(\psi,e_a.\psi)e^a=0\nonumber\\ (\psi\overline{\psi})_2&=&(\psi,e_{ba}.\psi)e^{ab}=(e_{ba}^{\xi\eta}.\psi,\psi)e^{ab}=-(e_{ba}.\psi,\psi)e^{ab}=-(\psi,e_{ba}.\psi)e^{ab}=0\nonumber\\ (\psi\overline{\psi})_3&=&(\psi,e_{cba}.\psi)e^{abc}=(e_{cba}^{\xi\eta}.\psi,\psi)e^{abc}=(e_{cba}.\psi,\psi)e^{abc}=(\psi,e_{cba}.\psi)e^{abc}\neq 0\nonumber \end{eqnarray} and by similar reasoning, we also have \begin{eqnarray} (\psi\overline{\psi})_4&\neq&0\nonumber\\ (\psi\overline{\psi})_5&=&0\nonumber\\ (\psi\overline{\psi})_6&=&0\nonumber\\ (\psi\overline{\psi})_7&\neq&0\nonumber. \end{eqnarray} From Proposition 1, one can conclude that we have two non-zero special odd KY forms $(\psi\overline{\psi})_3$ and $(\psi\overline{\psi})_7$ and two non-zero special even CCKY forms $(\psi\overline{\psi})_0$ and $(\psi\overline{\psi})_4$. So, they are Hodge duals of each other; $(\psi\overline{\psi})_3=*(\psi\overline{\psi})_4$ and $(\psi\overline{\psi})_0=*(\psi\overline{\psi})_7$. Moreover, since we have $d(\psi\overline{\psi})_7=0$ and $\delta(\psi\overline{\psi})_0=0$, the bilinears $(\psi\overline{\psi})_0$ and $(\psi\overline{\psi})_7$ correspond to parallel forms and we can choose as $(\psi\overline{\psi})_0=1$ and $(\psi\overline{\psi})_7=z$ where $z$ is the volume form. From the equations (23) and (25), $(\psi\overline{\psi})_3$ and $(\psi\overline{\psi})_4$ satisfy the following equalities \begin{eqnarray} d(\psi\overline{\psi})_3&=&8\lambda(\psi\overline{\psi})_4\\ \delta(\psi\overline{\psi})_4&=&-8\lambda(\psi\overline{\psi})_3. \end{eqnarray} We can check the superalgebra structure formed by the Killing spinor $\psi$ and the special odd KY and special even CCKY forms generated by $\psi$. 
From the definition of the CKY bracket in (48), one can check that the only non-vanishing brackets of the $p$-form Dirac currents are \begin{eqnarray} {[(\psi\overline{\psi})_0,(\psi\overline{\psi})_{4}]}_{CKY}&=&-2\lambda(\psi\overline{\psi})_3\nonumber\\ {[(\psi\overline{\psi})_4,(\psi\overline{\psi})_{4}]}_{CKY}&=&k(\psi\overline{\psi})_7 \end{eqnarray} where $k$ is a constant and all other brackets vanish. To construct the symmetry operators defined in (89), we first consider the algebraic relations between spinor bilinears. From (6), we have \[ \psi\overline{\psi}=(\psi\overline{\psi})_0+(\psi\overline{\psi})_3+(\psi\overline{\psi})_4+(\psi\overline{\psi})_7 \] and we can write \begin{eqnarray} (\psi\overline{\psi})_3.\psi+(\psi\overline{\psi})_4.\psi+(\psi\overline{\psi})_7.\psi&=&(\psi\overline{\psi})\psi-(\psi\overline{\psi})_0.\psi\nonumber\\ &=&(\psi,\psi)\psi-(\psi\overline{\psi})_0.\psi\nonumber\\ &=&(\psi\overline{\psi})_0.\psi-(\psi\overline{\psi})_0.\psi\nonumber\\ &=&0 \end{eqnarray} where we have used (5) and the definition $(\psi\overline{\psi})_0=( \psi,\psi)$. In a 7-dimensional Riemannian manifold, the volume form has the property $z^2=-1$. This implies that $z^2.\psi=-\psi$ and hence $z.\psi=\pm i\psi$. So, we have $(\psi\overline{\psi})_7.\psi=z.\psi=\pm i\psi$ and from (134), we can write \begin{equation} (\psi\overline{\psi})_3.\psi+(\psi\overline{\psi})_4.\psi=-(\psi\overline{\psi})_7.\psi=\mp i\psi. \end{equation} This means that we have the following identities \begin{eqnarray} (\psi\overline{\psi})_3.\psi&=&\frac{\mp i}{1\mp i}\psi\\ (\psi\overline{\psi})_4.\psi&=&-\frac{1}{1\mp i}\psi \end{eqnarray} where we have used that $(\psi\overline{\psi})_4.\psi=*(\psi\overline{\psi})_3.\psi=(\psi\overline{\psi})_3^{\xi}.z.\psi=-(\psi\overline{\psi})_3.z.\psi$. 
By using these identities, we can construct the symmetry operators defined in (89) as follows \begin{eqnarray} L_{(\psi\overline{\psi})_0}\psi&=&0\nonumber\\ L_{(\psi\overline{\psi})_3}\psi&=&\mp 3i\lambda\psi\nonumber\\ L_{(\psi\overline{\psi})_4}\psi&=&-4\lambda\psi\\ L_{(\psi\overline{\psi})_7}\psi&=&\pm 7i\lambda\psi\nonumber \end{eqnarray} where we have used the identities $d(\psi\overline{\psi})_3=8\lambda(\psi\overline{\psi})_4$ and $\delta(\psi\overline{\psi})_4=-8\lambda(\psi\overline{\psi})_3$ given in (23) and (25) with the closedness and co-closedness properties of CCKY and KY forms respectively. So, we have constructed all the brackets of the superalgebra defined in Theorem 5 for a 7-dimensional weak $G_2$ manifold. To see that whether it is also a Lie superalgebra, we need to check the Jacobi identities. The first Jacobi identity is the the Jacobi identity for the CKY bracket which is automatically satisfied. The second Jacobi identity is the vanishing of the commutators of symmetry operators given by \begin{equation} [L_{(\psi\overline{\psi})_i},L_{(\psi\overline{\psi})_j}]=0 \end{equation} and it can be seen from (138) that it is satisfied for all cases of $i,j=0,3,4,7$. However, the third and fourth Jacobi identities given by \begin{eqnarray} [(\psi\overline{\psi})_i,(\psi\overline{\psi})]_{CKY}&=&(L_{(\psi\overline{\psi})_i}\psi)\overline{\psi}+\psi(\overline{L_{(\psi\overline{\psi})_i}\psi})\\ L_{(\psi\overline{\psi})}\psi&=&0 \end{eqnarray} are not satisfied for all cases. For example, we have \begin{equation} L_{(\psi\overline{\psi})}\psi=-4\lambda(1\mp i)\psi \end{equation} which is not zero. So, the generalized symmetry superalgebra of a weak $G_2$ manifold is a superalgebra but not a Lie superalgebra. 
\subsection{Nearly K\"{a}hler Manifolds} \begin{definition} A Riemannian manifold $M$ with metric $g$ and almost complex structure $J$ is called a nearly K\"{a}hler manifold if $J$ satisfies the condition \begin{equation} i_X\nabla_XJ=0 \end{equation} for every vector field $X\in TM$. The metric cone of a 6-dimensional nearly K\"{a}hler manifold $M$ has $G_2$ holonomy and $M$ admits two Killing spinors with different chirality \cite{Joyce, Besse}. \end{definition} In a 6-dimensional Riemannian manifold $M$, the Clifford algebra is isomorphic to $Cl_{6,0}\cong\mathbb{H}(4)$ where $\mathbb{H}(4)$ is the space of $4\times 4$ dimensional quaternionic matrices. The even subalgebra corresponds to $Cl^0_{6,0}\cong Cl_{0,5}\cong\mathbb{C}(4)$. There are two different irreducible representations of the even subalgebra and the spinor space $S=S_+\oplus S_-$ is equivalent to $S_+\oplus S_-\cong\mathbb{C}^4\oplus\mathbb{C}^4$. Then, spinors are chiral complex and correspond to Dirac-Weyl spinors. The Killing spinors $\psi^+\in S_+$ and $\psi^-\in S_-$ on $M$ are real Killing spinors and the Killing numbers $\lambda_+$ and $\lambda_-$ are both real numbers. The spin invariant inner product defined on $S$ is a $\mathbb{C}$-skew inner product with $\xi$ involution. So, we have ${\cal{J}}=\xi$, $j=\text{Id}$ and the following properties for the inner product \begin{eqnarray} (\psi,\phi)&=&-(\phi,\psi)\\ (\psi,\alpha.\phi)&=&(\alpha^{\xi}.\psi,\phi) \end{eqnarray} where $\psi,\phi\in S$ and $\alpha\in Cl(M)$. 
We can find the $p$-form Dirac currents of the Killing spinor $\psi^+$ as follows \begin{eqnarray} (\psi^+\overline{\psi^+})_0&=&-(\psi^+,\psi^+)=0\nonumber\\ (\psi^+\overline{\psi^+})_1&=&(\psi^+,e_a.\psi^+)e^a=(e_a^{\xi}.\psi^+,\psi^+)e^a=(e_a.\psi^+,\psi^+)e^a=-(\psi^+,e_a.\psi^+)e^a=0\nonumber\\ (\psi^+\overline{\psi^+})_2&=&(\psi^+,e_{ba}.\psi^+)e^{ab}=(e_{ba}^{\xi}.\psi^+,\psi^+)e^{ab}=-(e_{ba}.\psi^+,\psi^+)e^{ab}=(\psi^+,e_{ba}.\psi^+)e^{ab}\neq 0\nonumber\\ \end{eqnarray} and by similar reasoning, we also have \begin{eqnarray} (\psi^+\overline{\psi^+})_3&\neq&0\nonumber\\ (\psi^+\overline{\psi^+})_4&=&0\nonumber\\ (\psi^+\overline{\psi^+})_5&=&0\nonumber\\ (\psi^+\overline{\psi^+})_6&\neq&0\nonumber. \end{eqnarray} The same equalities are also true for the other Killing spinor $\psi^-$. By considering the cases for ${\cal{J}}=\xi$, $\lambda$ is real and $j=\text{Id}$ in Proposition 1, we reach to the fact that the even degree bilinears $(\psi^+\overline{\psi^+})_2$, $(\psi^+\overline{\psi^+})_6$, $(\psi^-\overline{\psi^-})_2$ and $(\psi^-\overline{\psi^-})_6$ are special even KY forms and the odd degree bilinears $(\psi^+\overline{\psi^+})_3$ and $(\psi^-\overline{\psi^-})_3$ are special odd CCKY forms. Moreover, $(\psi^+\overline{\psi^+})_6$ and $(\psi^-\overline{\psi^-})_6$ correspond to the volume form $z$ and we have the identities \begin{eqnarray} \delta(\psi\overline{\psi})_2&=&0\\ d(\psi\overline{\psi})_2&=&6\lambda(\psi\overline{\psi})_3\\ \delta(\psi\overline{\psi})_3&=&-8\lambda(\psi\overline{\psi})_2\\ d(\psi\overline{\psi})_3&=&0 \end{eqnarray} for both $\psi^+$ and $\psi^-$. From Theorem 6, the bracket of the even part of the superalgebra is $*[\,,\,]_{CKY}$ and the only non-zero bracket in the even part is \begin{equation} *[(\psi^+\overline{\psi^+})_2,(\psi^-\overline{\psi^-})_2]_{CKY}=a(\psi^+\overline{\psi^+})_3+b(\psi^-\overline{\psi^-})_3 \end{equation} where $a$ and $b$ are constants. 
From the following algebraic relations \begin{eqnarray} (\psi^+\overline{\psi^+})_2.\psi^++(\psi^+\overline{\psi^+})_3.\psi^++(\psi^+\overline{\psi^+})_6.\psi^+=0\nonumber\\ (\psi^+\overline{\psi^+})_2.\psi^-+(\psi^+\overline{\psi^+})_3.\psi^-+(\psi^+\overline{\psi^+})_6.\psi^-=0\nonumber \end{eqnarray} and the same formulas for the interchange of $\psi^+$ and $\psi^-$, we can find the symmetry operators given in (98) as follows \begin{eqnarray} K_{(\psi^+\overline{\psi^+})_2}\psi^+&=&-2\lambda\psi^+\nonumber\\ K_{(\psi^+\overline{\psi^+})_2}\psi^-&=&-2\lambda\psi^-\nonumber\\ K_{(\psi^+\overline{\psi^+})_3}\psi^+&=&3\lambda\psi^+\nonumber\\ K_{(\psi^+\overline{\psi^+})_3}\psi^-&=&-2\lambda\psi^-\nonumber\\ K_{(\psi^+\overline{\psi^+})_6}\psi^+&=&-6\lambda\psi^+\nonumber\\ K_{(\psi^+\overline{\psi^+})_6}\psi^-&=&-6\lambda\psi^- \end{eqnarray} and the same equalities are true for the symmetry operators constructed from $\psi^-$. To see that whether it is also a Lie superalgebra, we need to check the Jacobi identities. The first Jacobi identity is the the Jacobi identity for the $*[\,,\,]_{CKY}$ bracket which is automatically satisfied. The second Jacobi identity is the vanishing of the commutators of symmetry operators given by \begin{equation} [K_{(\psi\overline{\psi})_i},K_{(\psi\overline{\psi})_j}]=0 \end{equation} for $\psi^+$ and $\psi^-$ and it can be seen from (152) that it is satisfied for all cases of $i,j=2,3,6$. However, the third and fourth Jacobi identities are not satisfied for all cases. So, the generalized symmetry superalgebra of a nearly K\"{a}hler manifold is a superalgebra but not a Lie superalgebra. \section{Conclusion} We generalize the symmetry superalgebras that correspond to geometric invariants of manifolds with isometries to include all the hidden symmetries of the manifold generated by geometric Killing spinors. This defines a more complete construction of the superalgebra structure of symmetries of the manifold. 
This also gives way to construct generalizations of the Lie derivative on spinor fields as symmetry operators of geometric Killing spinors and the construction of the Lie algebra structure of special KY and special CCKY forms. Besides the symmetry superalgebras of isometries, one can also construct conformal superalgebras from conformal symmetries and twistor spinors \cite{de Medeiros Hollands, Ertem5}. Twistor spinors correspond to supersymmetry generators of superconformal field theories and the construction of conformal superalgebras are related to the classification of the superconformal backgrounds in these theories. The methods described in the paper can also be used to obtain generalized conformal superalgebras and generalized gauged conformal superalgebras whose supersymmetry generators correspond to gauged twistor spinors \cite{Ertem6, Ertem7}. On the other hand, Killing superalgebras of supergravity backgrounds in non-constant curvature backgrounds can also lead to more general geometric invariants by investigating the generalizations with using the similar methods described in the paper and in \cite{Acik Ertem2}. This can give new perspectives to the classification problem of supergravity backgrounds in all dimensions and supergravity theories.
{ "timestamp": "2018-06-05T02:17:38", "yymm": "1806", "arxiv_id": "1806.01079", "language": "en", "url": "https://arxiv.org/abs/1806.01079" }
\section{Introduction} \subsection*{Background material and motivations} Let $(x,y,z)$ be a system of coordinates in $\mathbb{R}^3$. The \emph{symmetric contact structure} on $\mathbb{R}^{3}$ is the plane distribution $\xi_{sym}=ker(dz + xdy - ydx)$. A \emph{link} is a smooth embedding of a number of copies of $\mathbb{S}^1$ into $\mathbb{R}^3$, and a \emph{knot} is a link with a single component. The presence of $\xi_{sym}$ allows one to distinguish a special class of links (and knots): those which are nowhere tangent to $\xi_{sym}$ (\emph{transverse links}). Two transverse links are \emph{equivalent} (or the same \emph{transverse type}) if they are ambient isotopic through a one-parameter family of transverse links. In particular, equivalent transverse links represent the same link-type (i.e. the ambient isotopy class of a link). The study of transverse links has played (and still plays) a key role in the study of low-dimensional topology. The aim of the present paper is to present new invariants for transverse links arising from the deformations of Khovanov $\mathfrak{sl}_3$-homology. Each transverse link inherits a natural orientation from $\xi_{sym}$ (cf. \cite{Etnyre05}). All transverse invariants defined in this paper are defined with respect to this orientation. In order to define our invariants we need a particular representation of transverse links. It is a result due to D. Bennequin (\cite{Bennequin83}) that each transverse link is equivalent to a closed braid. Another result, due to S. Orevkov and V. Shevchishin (\cite{OrevkovShev03}) and independently N. Wrinkle (\cite{Wrinkle03}), provides a complete set of combinatorial moves relating all braids whose closure represent the same transverse type. We summarise these results in the following theorem, which shall be referred to as the \emph{transverse Markov theorem} in the rest of the paper. 
\begin{theorem}[Bennequin \cite{Bennequin83}, Orevkov and Shevchishin \cite{OrevkovShev03}, Wrinkle \cite{Wrinkle03}] Any transverse link is transversely isotopic to the closure of a braid (with axis the $z$-axis). Moreover, two braids represent the same transverse type if and only if they are related by a finite sequence of braid relations, conjugations, positive stabilisations, and positive destabilisations\footnote{Let $B\in B_{m-1}$, the \emph{positive} (resp. \emph{negative}) \emph{stabilisation} of $B\in B_{m-1}$ is the braid $B\sigma_{m}\in B_{m}$ (resp. $B\sigma_{m}^{-1}\in B_{m}$). The destabilisation is just the inverse process: if one considers a braid of the form $A\sigma_{m} B$ (resp. $A\sigma_{m}^{- 1} B$), where $A,\ B\in B_{m-1}$, then its positive (resp. negative) destabilisation is the braid $AB$.}. These moves are called \emph{transverse Markov moves}. \end{theorem} \begin{rem*} Braids are naturally oriented, and their orientation coincides with the orientation of the corresponding transverse link. \end{rem*} \begin{rem*} By adding the negative stabilisation and destabilisation to the set of transverse Markov moves one recovers the full set of \emph{Markov moves}. \end{rem*} Any sequence of Markov moves between braids, naturally translates into a sequence of oriented Reidemeister moves between their closures. In particular, conjugation in the braid group can be seen as a sequence of Reidemeister moves of the second type followed by a planar isotopy, while the braid relations can be seen as either second or third Reidemeister moves. We remark that not all the oriented versions of second and third Reidemeister moves arise in this way; those which can be obtained as composition of Markov moves and braid relations are called \emph{braid-like} or \emph{coherent}. Finally, a positive (resp. negative) stabilisation translates into a positive (resp. negative) first Reidemeister move, as shown in Figure \ref{fig:stabilisationasr1}. 
\begin{figure}[] \centering \begin{tikzpicture}[scale =.7] \begin{scope}[shift = {+(-8,0)}] \draw (0,2) circle (1.2); \draw (0,2) circle (1); \draw (0,2) circle (2); \draw (0,2) circle (2.1); \draw[fill, white] (-.5,-.25) rectangle (0.5,1.25); \draw (-.5,-.25) rectangle (0.5,1.25); \node (a) at (0,.5) {$B$}; \end{scope} \node at (-4,2.75) {Stabilisation}; \node at (-4,2) {\huge{$\rightleftarrows$}}; \node at (-4,1.4) {Destabilisation}; \draw (0,2) circle (1.2); \draw (0,2) circle (2); \draw (0,2) circle (2.1); \draw[fill, white] (-.5,-.25) rectangle (0.5,1.25); \draw (-.5,-.25) rectangle (0.5,1.25); \node (a) at (0,.5) {$B$}; \draw (0.5,1.2) .. controls +(.75,.5) and +(.1,-.1) .. (.5,2.5); \draw (.5,2.5) .. controls +(-.5,.5) and +(.1,.2) .. (-0.6,2.44); \pgfsetlinewidth{10*\pgflinewidth} \draw[white] (.92,2) .. controls +(0,-1) and +(-.5,-1) .. (-0.6,2.44); \pgfsetlinewidth{.1*\pgflinewidth} \draw (.92,2) .. controls +(0,-1) and +(-.5,-1) .. (-0.6,2.44); \draw (.92,2) .. controls +(0,1.25) and +(0,1.25) .. (-.92,2) .. controls +(0,-.25) and +(-.25,.25) .. (-.5,1.2); \end{tikzpicture} \caption{Stabilisation and destabilisation as Reidemeister moves between closures.}\label{fig:stabilisationasr1} \end{figure} Transverse links have two \emph{classical invariants}: the link-type and the \emph{self-linking number}\footnote{Technically speaking, for links one may record the self-linking number of each component, and this defines a slightly stronger transverse invariant, called the self-linking matrix. However, for consistency with the literature on the subject (cf. \cite{TransFromKhType17, Nglipsar13, Plamenevskaya06, Wu08}) in this paper we shall consider only the self-linking number.} $sl$. The latter is defined as follows. Let $B$ be a braid, and set \[ sl(B) = w(B) - b(B), \] where $w$ denotes the \emph{writhe}, and $b$ denotes the \emph{braid index} (i.e. the number of strands).
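For concreteness, here is a small worked example (the braids chosen below are ours, added for illustration). The positive trefoil is the closure of $\sigma_1^{3}\in B_2$, so
\[
sl(\sigma_1^{3}) = w(\sigma_1^{3}) - b(\sigma_1^{3}) = 3 - 2 = 1.
\]
Its positive stabilisation $\sigma_1^{3}\sigma_2\in B_3$ has $sl = 4 - 3 = 1$, while its negative stabilisation $\sigma_1^{3}\sigma_2^{-1}\in B_3$ has $sl = 2 - 3 = -1$; so positive stabilisations preserve the self-linking number, whereas negative ones do not.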
In the light of the transverse Markov theorem, it is immediate that the self-linking number is a transverse invariant. Moreover, it follows easily from the definition of $sl$ that a negative stabilisation (resp. destabilisation) does not preserve the equivalence class of the transverse link. Any invariant capable of distinguishing distinct transverse links with the same classical invariants is called \emph{effective}. It is a major problem in the study of transverse links to find effective invariants. Making use of the representation of transverse links as closed braids, O. Plamenevskaya introduced a transverse invariant $\psi$. This invariant is a homology class in Khovanov's categorification of the Jones polynomial (see \cite{Plamenevskaya06}). Since Plamenevskaya's groundbreaking work, other invariants for transverse links coming from quantum (and also Floer-theoretic) link homologies have been introduced. In 2008, H. Wu (see \cite{Wu08}) generalised $\psi$ to a family of invariants $\psi_{N}$ which are homology classes in M. Khovanov and L. Rozansky's categorification of the Reshetikhin-Turaev $\mathfrak{sl}_{N}$-invariants. Khovanov-Rozansky homologies (and also Khovanov homology, which is the case $N=2$) admit deformations; these ``deformed theories'' are parametrised by a monic polynomial of degree $N$ called the potential. In 2015, R. Lipshitz, L. Ng, and S. Sarkar (see \cite{Nglipsar13}) extended the definition of the Plamenevskaya invariant to two chains $\psi^\pm$ belonging to the chain complex of (a twisted version of) Lee's deformation (i.e. the theory associated to the potential $x^{2} - x$). The author in \cite{TransFromKhType17} extended the Plamenevskaya invariant to all deformations of Khovanov homology. It is still unknown, at the time of writing, whether or not these invariants are effective.
\subsection*{Outline and statement of results} The aim of this paper is to extend the definition of Wu's $\psi_3$ invariant to all deformations of Khovanov\footnote{The $\mathfrak{sl}_3$-homology was defined first by Khovanov; then Khovanov and Rozansky extended the definition to all $\mathfrak{sl}_N$, for $N\geq 2$, with a different technique.} $\mathfrak{sl}_3$-homology. As stated above, these deformations are parametrised by a monic polynomial $\omega\in R[x]$ of degree $3$ (the \emph{potential}), where $R$ is the coefficient ring of the homology theory. Furthermore, the resulting homology theory is either graded or filtered depending on $\omega$. We make use of the construction of the universal $\mathfrak{sl}_3$-homology due to M. Mackaay and P. Vaz (\cite{Mackaayvaz07}), which encodes all the deformations of Khovanov $\mathfrak{sl}_3$-homology (\cite{Mackaayvaz07a}). We have tried to keep the paper as self-contained as possible, so we shall review Mackaay and Vaz's construction in Sections 2 and 3. In Section 4 we define, for each diagram $D$, each potential $\omega$, and each root $x_1$ of $\omega$ in $R$, a chain $\beta_{\omega,\: x_1}(D)$, which turns out to be a cycle (Proposition \ref{proposition:betaiscycle}). If $D= \widehat{B}$ is the closure of a braid $B$, we shall denote $\beta_{\omega,\: x_1}(\widehat{B})$ simply by $\beta_{\omega,\: x_1}(B)$. Our main theorem is the following. \begin{theorem}\label{theorem:beta_3} Let $B$ be a braid. For each root $x_1$ of $\omega (x)$, the cycle $\beta_{\omega,\: x_1}(\overline{B})$ is a transverse invariant, where the over-line indicates the mirror braid. More precisely, if $B$ and $B^\prime$ are related by a sequence of transverse Markov moves, and $\Phi$ denotes the map associated to the mirror sequence of these moves, then \[ \Phi(\beta_{\omega,\: x_1}(\overline{B}))=\beta_{\omega,\: x_1}(\overline{B^\prime}).\] Moreover, if the homology theory is filtered (resp.
graded), then the filtered degree\footnote{That is, the index corresponding to the smallest sub-complex in the filtration containing the chain $\beta_{\omega,\: x_1}(\overline{B})$.} (resp. the degree) of $\beta_{\omega,\: x_1}(\overline{B})$ is $-2sl(B)$. \end{theorem} \begin{rem*} The filtered degree mentioned in Theorem \ref{theorem:beta_3} is the filtered degree of \emph{the chain} $\beta_{\omega,\: x_1}(\overline{B})$, and not the filtered degree of its homology class. The filtered degree of $[\beta_{\omega,\: x_1}(\overline{B})]$ is not necessarily a meaningful transverse invariant. For instance, if $\mathbb{F}$ is a field and $\omega\in \mathbb{F}[x]$ has three distinct roots, then the filtered degree of $[\beta_{\omega,\: x_1}(\overline{B})]$ is a concordance invariant (one of the $j_{i}$'s mentioned in Proposition \ref{prop:Bennequin-type}). The comparison between the two filtered degrees, under the above hypotheses, gives a weaker version of the first Bennequin-type inequality in Proposition \ref{prop:Bennequin-type}. \end{rem*} Motivated by the theorem above, the cycles $\beta_{\omega,\: x_1}(\overline{B})$ are collectively called $\beta_3$-invariants. We wish to point out that the construction of the $\beta_3$-invariants seems to be generalisable to the $\mathfrak{sl}_{N}$ case (also with colours). This will be the subject of a forthcoming paper by the author jointly with P. Wedrich. Exploring the effectiveness of these invariants is quite a difficult problem. First, distinguishing non-equivalent transverse knots with the same classical invariants is a subtle problem in itself; this is confirmed by the fact that there are only a few known families of transverse knots which are not distinguished by their classical invariants. Second, our invariants are, by their nature, chains in a diagram-dependent chain complex, considered up to the action of a certain group of chain homotopies. So we leave open the following question.
\begin{question} Are the $\beta_3$-invariants effective? \end{question} In Section 5 we make use of the homology classes of the $\beta_3$-invariants to define two auxiliary invariants which are easier to analyse: the vanishing of the homology class, and the divisibility of the homology class by a non-unit element. In the case where the base ring is a field we can relate the vanishing of the homology class of the $\beta_3$-invariants to the vanishing of the Plamenevskaya invariant and of Wu's $\psi_{3}$-invariant. Our second main result is the following. \begin{proposition}\label{prop:vanishing} Let $\omega$ be a potential over a field $\mathbb{F}$, and let $x_1$, $x_2$, $x_3\in \mathbb{F}$ be the roots of $\omega$. Denote by $H_{\omega}^\bullet$ the homology corresponding to the potential $\omega$. Then, given a braid $B$, we have the following: \begin{itemize} \item[(1)] if $x_1$ is a simple root of $\omega$, then $[\beta_{\omega, x_1}(\overline{B})]$ is non-trivial in $H_{\omega}^\bullet(\overline{B}, \mathbb{F})$; \item[(2)] if $x_1$ is a double root of $\omega$, then $[\beta_{\omega, x_1}(\overline{B})]$ vanishes if and only if the Plamenevskaya invariant $\psi(B)$ vanishes in $Kh^{\bullet}(B, \mathbb{F})$; \item[(3)] if $x_1$ is a triple root of $\omega$, then $[\beta_{\omega, x_1}(\overline{B})]$ vanishes if and only if Wu's invariant $\psi_{3}(B)$ vanishes in $H_{x^{3}}^{\bullet}(\overline{B}, \mathbb{F})$. \end{itemize} In particular, the vanishing of $[\beta_{\omega, x_1}(\overline{B})]$ does not depend on the potential $\omega$ or on the root $x_1$, but only on the multiplicity of $x_1$ as a root of $\omega$. (However, it may depend on the choice of the base field.) \end{proposition} The previous proposition is known to hold only when the base ring is a field. This naturally leads to the following question. \begin{question} Assume the base ring $R$ to be a Noetherian domain.
Is it true that the vanishing of $[\beta_{\omega, x_1}(\overline{B})]$ depends only on the multiplicity of $x_1$ and on the vanishing of the $\psi$ and $\psi_3$-invariants? \end{question} The previous proposition allows us to relate the effectiveness of the vanishing of the homology classes of the $\beta_3$-invariants with the effectiveness of the vanishing of the $\psi$ and $\psi_3$-invariants. In particular, taking into account the results in \cite{TransFromKhType17, Nglipsar13}, we obtain the non-effectiveness of the vanishing of the homology classes of the $\beta_3$-invariants corresponding to double roots in the following cases: knots with crossing number $\leq 11$, flypes, and two-bridge knots. Most of the examples of distinct transverse knots with the same classical invariants fall into these categories. However, the following question remains open. \begin{question} Is the vanishing of the homology classes of the $\beta_3$-invariants an effective invariant? \end{question} Let $R$ be an integral domain and $a\in R\setminus\{ 0\}$ a non-unit element. Given a potential $\omega\in R[x]$ and one of its roots $x_1\in R$, the number \[c_{\omega, x_1}(B;a) = \max\left\{ k\:\vert\: \exists\: [y]\in H^{0}_{\omega}(\overline{B})\:\text{such that}\: a^k[y] = [\beta_{\omega, x_1}(\overline{B})]\right\}\in \mathbb{N}\cup \{ \infty \}\] is a well-defined transverse invariant, where $c_{\omega,x_1}(B;a)=\infty$ if and only if $[\beta_{\omega, x_1}(\overline{B})]$ is trivial or $a$-torsion. We analyse these invariants in the case $R = \mathbb{F}[U]$ and $\omega = (x - Ux_1) (x - Ux_2) (x - Ux_3)$, with $x_1$, $x_2$, $x_3 \in \mathbb{F}$ distinct, and $a = U$. From our analysis we obtain the following Bennequin-type inequalities. \begin{proposition}\label{prop:Bennequin-type} Let $B$ be a braid representing a knot $K$. Given $\omega$ as above, denote by $\omega_{1} = \omega_{\vert U = 1}$.
Then the following inequalities hold \begin{eqnarray} 2 \left[ sl(B) + c_{\omega,Ux_{i}}(B,U) \right] \leq j_{1}(K) \label{eq:inequality1}\\ 2 \left[ sl(B) + c_{\omega,Ux_{i}}(B,U)\right] \leq s_{\omega_1}(K) \label{eq:inequality2} \\ sl(B) + c_{\omega,Ux_{i}}(B,U) - 1 \leq 3 \tilde{s}_{\omega_1,x_{i}}(K) \label{eq:inequality3} \end{eqnarray} for each $i\in \{ 1,2,3 \}$, where $j_{1}$ (\cite{Lobb12, Wu09}, see also \cite{LewarkLobb17}), $s_{\omega_1}(K)$, and $\tilde{s}_{\omega_1,x_{i}}(K)$ (\cite{LewarkLobb17}) are concordance invariants (see also Theorem \ref{thm:jis} and the subsequent lines for a very quick overview). \end{proposition} \subsection*{Acknowledgments} The author wishes to thank Prof. Paolo Lisca for his advice, the helpful conversations, and his continuous support. Moreover, the author also wishes to thank M. Mackaay and P. Vaz for letting him use the images in Figures \ref{fig:secondRmcoer} and \ref{fig:invterzasl3a}. Finally, the author wishes to thank the referees for their valuable comments and suggestions. This paper is partially excerpted from the author's PhD thesis. During his PhD the author was supported by a PhD scholarship ``Firenze-Perugia-Indam''. \section{Webs and Foams}\label{sec:webetfoams} In this section we briefly review the definition of webs and foams. These objects play the same role as closed $1$-dimensional manifolds and surfaces in Bar-Natan's geometric description of Khovanov homology (and its deformations). \subsection{Webs} Webs were originally introduced by Greg Kuperberg in \cite{Kuperberg96} as a tool to study the representation theory of rank $2$ Lie algebras, and were used by Khovanov in \cite{Khovanov03} to define a categorification of the $\mathfrak{sl}_3$-Jones polynomial.
\begin{definition} A \emph{web} $W$ is a directed trivalent planar graph embedded in $\mathbb{R}^{2}$, possibly with components without vertices (\emph{loops}), satisfying the following properties: \begin{enumerate}[(a)] \item $W$ has a finite number of vertices and a finite number of loops; \item there are two types of edges in $W$, the \emph{thin edges} and the \emph{thick edges}, and for each vertex $v\in V(W)$ there is a unique thick edge incident to $v$; \item each vertex of $W$ is either a source\footnote{A vertex $v$ of a directed graph is a \emph{source} if all edges incident to $v$ are directed outwards from $v$.} or a sink\footnote{A vertex $v$ of a directed graph is a \emph{sink} if all edges incident to $v$ are directed towards $v$.}. \end{enumerate} For technical reasons, the empty set is also considered a web (the \emph{empty web}). A trivalent directed (abstract) graph $\Gamma$ satisfying (a) and (b) shall be called an \emph{abstract web}. \end{definition} The distinction between thick and thin edges is necessary to keep track of crossings after their resolution (cf. Subsection \ref{subsection:homologies}). Whenever necessary the thick edges shall be drawn thicker and coloured purple; otherwise no distinction shall be made. \subsection{Foams} Roughly speaking, foams are decorated branched surfaces which are singular along a smooth $1$-dimensional manifold of triple points. Let us put aside the decorations and start by defining the underlying topological structure of a foam. A (\emph{topological}) \emph{pre-foam} $\Sigma$ is a compact topological space such that each point has a neighbourhood homeomorphic to one of the four local models in Figure \ref{fig:Localmodelsingcurvefoam}. A point $p\in \Sigma$ is called \emph{regular} if it has a neighbourhood which is homeomorphic to either (C) or (D). Non-regular points are called \emph{singular}, and the set of singular points is denoted by $Sing(\Sigma)$.
The connected components of $\Sigma \setminus Sing(\Sigma) \subseteq \Sigma$ are called \emph{regular regions} of $\Sigma$. Finally, a \emph{boundary} point for $\Sigma$ is a point which does not have a neighbourhood homeomorphic to either (A) or (C). A topological pre-foam with empty boundary is called \emph{closed}. \begin{figure}[H] \centering \begin{tikzpicture} \draw[dashed] ( .866,.5) -- +(-.25,-.5) -- (-1.116,-1)--( -.866,-.5) ; \draw[fill, color = white, opacity = .75] ( -.866,-.5) -- (-.116,-1) -- (1.614,0) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (-.116,-1) -- (1.614,0) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[->, red] ( 1.5,1) .. controls +(-.25 , 0) and +(.25,.25) .. (.5,.5); \node at (2.75,1) {\textcolor{red}{singular points}}; \begin{scope}[shift ={+(6,0)}] \draw[dashed] ( .866,.5) -- +(-.25,-.5) -- (-1.116,-1)--( -.866,-.5) ; \draw[very thick] ( -.866,-.5) -- (-1.116,-1); \draw[fill, color = white, opacity = .75] ( -.866,-.5) -- (-.116,-1) -- (1.614,0) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (-.116,-1) -- (1.614,0) -- (.866,.5) -- cycle; \draw[very thick] ( -.866,-.5) -- (-.116,-1); \draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5)-- cycle; \draw[very thick ] ( -.866,-.5) -- (-.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \end{scope} \draw[->, red] (4,1) .. controls +(.25 , 0) and +(-.25,.25) .. 
(6.1,.25); \draw[dashed] (-.9,-2) rectangle (1.6,-3); \draw[dashed] (7.6,-3) -- (7.6,-2) -- (5.1,-2) -- (5.1,-3); \draw[very thick] (7.6,-3) -- (5.1,-3); \node at (0.25,-1.5) {\small{(A)}}; \node at (6.25,-1.5) {\small{(B)}}; \node at (0.25,-3.5) {\small{(C)}}; \node at (6.25,-3.5) {\small{(D)}}; \end{tikzpicture} \caption{Local models for a pre-foam.} \label{fig:Localmodelsingcurvefoam} \end{figure} \begin{rem*} The singular locus of a pre-foam is the disjoint union of circles and arcs, which are called \emph{singular circles} and \emph{singular arcs}. The singular boundary points correspond to the boundary points of the singular arcs. \end{rem*} The choice of an atlas\footnote{We mean an open cover of $\Sigma$ together with a homeomorphism of each element of the cover with one of the local models in Figure \ref{fig:Localmodelsingcurvefoam}.} on a pre-foam determines an atlas on each regular region and also on $Sing(\Sigma)$. A \emph{smooth pre-foam} is a topological pre-foam with a chosen topological atlas such that the induced atlases on the regular regions and on the singular locus are smooth atlases. Similarly, given a pre-foam $\Sigma$, an \emph{orientation} on $\Sigma$ is the choice of an orientation of the closure of each regular region in such a way that the orientations induced on the intersection of any two closed regions agree. \begin{rem*} If we choose an orientation on an orientable pre-foam, this induces an orientation on the singular locus. \end{rem*} \begin{rem*} The boundary of a topological pre-foam is a (possibly empty) finite trivalent graph whose vertices correspond to singular boundary points. If the pre-foam is oriented, its boundary is a directed graph. Moreover, each vertex of the boundary graph of an oriented pre-foam is either a sink or a source. In other words, the boundary of an oriented pre-foam is an abstract web.
\end{rem*} \begin{definition} A \emph{decorated pre-foam} is an oriented pre-foam $\Sigma$ together with the following data: \begin{enumerate}[(a)] \item a finite number (possibly zero) of marked points, called \emph{dots}, in the interior of each regular region of $\Sigma$; \item a cyclic order on the regular regions incident to a singular arc or circle. \end{enumerate} \end{definition} It is possible to define a category $\mathbf{PreFoam}$ whose objects are abstract webs and whose morphisms are formal $R$-linear combinations of the triples $(\Sigma,\partial_{0}\Sigma ,\partial_{1}\Sigma)$ satisfying the following properties \begin{itemize} \item[$\triangleright$] $\Sigma$ is a decorated pre-foam; \item[$\triangleright$] $\partial_{0}\Sigma,\: \partial_{1}\Sigma \in Obj(\mathbf{PreFoam})$, $\partial_{0}\Sigma$ is the source object and $\partial_{1} \Sigma$ the target object of the morphism; \item[$\triangleright$] $\partial\Sigma = \partial_{0} \Sigma \sqcup -\partial_1 \Sigma$, where the minus sign denotes the reversal of the orientation; \item[$\triangleright$] the triple is seen up to boundary fixing isotopies which do not change the regular regions of the dots and preserve the ordering of the components near each singular arc. \end{itemize} Finally, the composition of two triples $(\Sigma,\partial_{0}\Sigma ,\partial_{1}\Sigma)$ and $(\Sigma^\prime,\partial_{1}\Sigma ,\partial_{1}\Sigma^\prime)$ is defined as the triple $(\Sigma^{\prime\prime},\partial_{0}\Sigma ,\partial_{1}\Sigma^{\prime})$, where $\Sigma^{\prime\prime}$ is obtained by glueing $\Sigma$ and $\Sigma^\prime$ along $\partial_{1}\Sigma$. \begin{definition} A \emph{foam} is a decorated pre-foam properly and smoothly\footnote{That is in such a way that the restriction of the embedding to each regular region and to the singular locus is smooth.} embedded in $\mathbb{R}^{2} \times I$. 
Moreover, we require the cyclic order of the regular regions at each singular arc or circle to coincide with the cyclic order induced by rotating clockwise\footnote{We suppose fixed an orientation of $\mathbb{R}^{2}\times I$.} around a singular arc or circle. Foams will be considered up to ambient isotopies of $\mathbb{R}^{2}\times I$ which fix the boundary of the foam and do not change the regular regions of the dots. \end{definition} Given two webs $W_{0}$ and $W_{1}$, a \emph{foam between $W_{0}$ and $W_{1}$} is a foam $F$ such that \[ F \cap \mathbb{R}^{2} \times \{ 0 \} = W_{0}\quad \text{and}\quad F \cap \mathbb{R}^{2} \times \{ 1 \} = -W_{1}.\] The category $\mathbf{Foam}$ is the category whose objects are webs, whose morphisms are $R$-linear combinations of foams between two webs, and whose composition is defined as the glueing of two foams along the shared boundary. For technical reasons, the empty foam is also included in $\mathbf{Foam}$ as an element of $Hom_{\mathbf{Foam}}(\emptyset,\emptyset)$. \subsection{Local relations} Local relations are equalities among (linear combinations of) foams which are identical except inside a small ball. The relations we are concerned with can be divided into two types: \begin{itemize} \item[$\triangleright$] \emph{reduction relations} (cf. Figure \ref{fig:localrel1}), \item[$\triangleright$] \emph{evaluation relations} (cf. Figure \ref{fig:localrel2}). \end{itemize} The reduction relations depend on the choice of a polynomial $\omega(x)\in R [x]$ (the \emph{potential}) of the form \[ \omega(x) = x^{3} + a_2x^{2} + a_1x + a_0. \] We will henceforth suppose $\omega$ fixed. In the case where $R$ is a graded ring, one may require the coefficients $a_2$, $a_1$ and $a_0$ and the polynomial $\omega$ to be homogeneous in order to obtain a graded theory. Let us get back to the local relations and postpone the matter of the gradings.
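Before turning to the relations themselves, it may help to keep in mind two standard specialisations of the potential (the examples, and the label ``Lee--Gornik-type'', are ours):
\[
\omega(x) = x^{3} \qquad \text{and} \qquad \omega(x) = x^{3} - x = x(x-1)(x+1).
\]
The first, corresponding to $a_2 = a_1 = a_0 = 0$, is homogeneous and recovers Khovanov's original graded $\mathfrak{sl}_3$-homology; the second has the three distinct simple roots $0$ and $\pm 1$ and, over a field, yields a filtered Lee--Gornik-type deformation (cf. Proposition \ref{prop:vanishing}, where the multiplicity of the roots plays the decisive role).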
The reduction relations allow one to reduce either the number of handles (\emph{genus reduction relation} (GR)) or the number of dots (\emph{dot reduction relation} (DR)) at the expense of trading a single foam for a linear combination of foams. \begin{figure}[H] \centering \begin{tikzpicture}[scale =.275] \begin{scope}[shift = {(-2,0)}] \draw (-3.5,18)circle ( 1 and .5); \draw (-2.5,13) arc (0:-180:1 and .5); \draw[dashed, gray] (-2.5,13) arc (0:180:1 and .5); \draw (-2.5,13) .. controls +(-.5,1.5) and +(-.5,-1.5) .. (-2.5,18); \draw (-4.5,13) .. controls +(.5,1.5) and +(.5,-1.5) .. (-4.5,18); \draw (4.5,18)circle ( 1 and .5); \draw (5.5,13) arc (0:-180:1 and .5); \draw[dashed, gray] (5.5,13) arc (0:180:1 and .5); \draw (5.5,13) .. controls +(0,1.5) and +(0,1.5) .. (3.5,13); \draw (5.5,18) .. controls +(0,-1.5) and +(0,-1.5) .. (3.5,18); \draw (9.5,18)circle ( 1 and .5); \draw (10.5,13) arc (0:-180:1 and .5); \draw[dashed,gray] (10.5,13) arc (0:180:1 and .5); \draw (10.5,13) .. controls +(0,1.5) and +(0,1.5) .. (8.5,13); \draw (10.5,18) .. controls +(0,-1.5) and +(0,-1.5) .. (8.5,18); \draw (14.5,18)circle ( 1 and .5); \draw (15.5,13) arc (0:-180:1 and .5); \draw[dashed,gray] (15.5,13) arc (0:180:1 and .5); \draw (15.5,13) .. controls +(0,1.5) and +(0,1.5) .. (13.5,13); \draw (15.5,18) .. controls +(0,-1.5) and +(0,-1.5) .. (13.5,18); \draw (21.5,18)circle ( 1 and .5); \draw (22.5,13) arc (0:-180:1 and .5); \draw[dashed,gray] (22.5,13) arc (0:180:1 and .5); \draw (22.5,13) .. controls +(0,1.5) and +(0,1.5) .. (20.5,13); \draw (22.5,18) .. controls +(0,-1.5) and +(0,-1.5) .. (20.5,18); \draw (26.5,18)circle ( 1 and .5); \draw (27.5,13) arc (0:-180:1 and .5); \draw[dashed,gray] (27.5,13) arc (0:180:1 and .5); \draw (27.5,13) .. controls +(0,1.5) and +(0,1.5) .. (25.5,13); \draw (27.5,18) .. controls +(0,-1.5) and +(0,-1.5) .. 
(25.5,18); \draw (33.5,18)circle ( 1 and .5); \draw (34.5,13) arc (0:-180:1 and .5); \draw[dashed,gray] (34.5,13) arc (0:180:1 and .5); \draw (34.5,13) .. controls +(0,1.5) and +(0,1.5) .. (32.5,13); \draw (34.5,18) .. controls +(0,-1.5) and +(0,-1.5) .. (32.5,18); \node at (-8,15.5) {(GR)}; \node at (0.5,15.5) {=}; \node at (-5,15.5) {$-$}; \node at (7 ,15.5) {$+$}; \node at (12 ,15.5) {$+$}; \node at (18 ,15.5) {$+\: a_2\:$}; \draw (20,18.5) .. controls +(-.75,0) and +(-.75,0) .. (20,12.5); \draw (28,18.5) .. controls +(.75,0) and +(.75,0) .. (28,12.5); \node at (24 ,15.5) {$+$}; \node at (30.5 ,15.5) {$+\: a_1$}; \draw[fill] (4.8,13) circle (0.15) ; \draw[fill] (4.2,13) circle (0.15) ; \draw[fill] (9.5,13) circle (0.15) ; \draw[fill] (9.5,17.25) circle (0.15) ; \draw[fill] (14.8,17.25) circle (0.15) ; \draw[fill] (14.2,17.25) circle (0.15) ; \draw[fill] (21.5,13) circle (0.15) ; \draw[fill] (26.5,17.25) circle (0.15) ; \end{scope} \draw[dashed] (0,7.5) circle (2); \draw[dashed] (7,7.5) circle (2); \draw[dashed] (14,7.5) circle (2); \draw[dashed] (21,7.5) circle (2); \node at (3.5,7.5) {= $a_2$}; \node at (10.5,7.5) {+ $a_1$}; \node at (17.5,7.5) {+ $a_0$}; \node at (-3,7.5) {$-$}; \node at (-10,7.5) {(DR)}; \draw[fill] (7.5,7.5) circle (0.15) ; \draw[fill] (6.5,7.5) circle (0.15) ; \draw[fill] (0,7.5) circle (0.15) ; \draw[fill] (0.75,7.5) circle (0.15) ; \draw[fill] (-.75,7.5) circle (0.15) ; \draw[fill] (14,7.5) circle (0.15) ; \end{tikzpicture} \caption{Reduction relations.} \label{fig:localrel1} \end{figure} There are two types of evaluation relations. The first type are called \emph{sphere relations} (S), and concern spheres with less than $4$ dots. The second type are called \emph{theta foam relations} ($\mathrm{\Theta}$), and concern theta foams (that is spheres with a disk glued along the equator) with $2$ dots or less on each region. 
\begin{figure}[H] \centering \begin{tikzpicture}[scale = .3] \draw (0,0) circle (2); \draw (7,0) circle (2); \draw (19,0) circle (2); \draw[gray,dashed] (-2,0) arc (-180:0: 2 and 1); \draw[gray,dashed,opacity = .5] (-2,0) arc (180:0: 2 and 1); \draw[gray,dashed] (5,0) arc (-180:0: 2 and 1); \draw[gray,dashed,opacity = .5] (5,0) arc (180:0: 2 and 1); \draw[gray,dashed] (17,0) arc (-180:0: 2 and 1); \draw[gray,dashed,opacity = .5] (17,0) arc (180:0: 2 and 1); \draw[fill] (7,-.75) circle (0.15) ; \draw[fill] (19.5,-.75) circle (0.15) ; \draw[fill] (18.5,-.75) circle (0.15) ; \node at (3.5,0) {=}; \node at (10.5,0) {=}; \node at (12.5,0) {0}; \node at (22.5,0) {=}; \node at (24,0) {-1}; \node at (-5,0) {(S)}; \node at (-5,-7) {($\Theta$)}; \draw (0,-7) circle (2); \draw[pattern color = red, pattern= north west lines, opacity = .45] (0,-7) circle (2 and 1); \draw[thick,red] (0,-8) arc (-90:-180: 2 and 1); \draw[->,thick,red] (2,-7) arc (0:-90: 2 and 1); \draw[dashed,red] (-2,-7) arc (180:0: 2 and 1); \draw (8.5,-7) circle (2); \draw[pattern color = red, pattern= north west lines, fill, opacity = .45] (8.5,-7) circle (2 and 1); \draw[<-,thick,red] (8.5,-8) arc (-90:-180: 2 and 1); \draw[thick,red] (10.5,-7) arc (0:-90: 2 and 1); \draw[dashed,red] (6.5,-7) arc (180:0: 2 and 1); \node at (4.5,-7) {$=\:-$}; \node at (26,-7) {\small{$=\begin{cases} 1 & ( n_1, n_2, n_3 ) = (1,2,0)\:\text{or a cyclic permutation},\\ -1 & ( n_1, n_2, n_3 ) = (2,1,0)\:\text{or a cyclic permutation},\\% 0 & n_1,n_2,n_3 \leq 2\text{, and not in the previous cases}\\\end{cases}$}}; \node at (8.5,-5.5) {\small{$n_1$}}; \node at (8.5, -7) {\small{$n_2$}}; \node at (8.5,-8.6) {\small{$n_{3}$}}; \node at (0,-5.5) {\small{$n_1$}}; \node at (0, -7) {\small{$n_2$}}; \node at (0,-8.6) {\small{$n_{3}$}}; \end{tikzpicture} \caption{Evaluation relations. 
The numbers $n_1$, $n_2$ and $n_3$ on the theta foam indicate the number of dots in the corresponding region.} \label{fig:localrel2} \end{figure} Modulo the local relations, each closed foam is equivalent to a multiple of the empty foam; using the reduction relations one reduces the closed foam to an $R$-linear combination of (disjoint unions of) spheres and theta foams with fewer than three dots in each regular region. Finally, one uses the evaluation relations to obtain a multiple of the empty foam. For each pair of webs $W_0$ and $W_1$, and each foam $F\in Hom_{\mathbf{Foam}}(W_0, W_1)$, there is a well-defined $R$-bilinear pairing \[ \langle \cdot \vert \cdot \rangle_F: Hom_{\mathbf{Foam}}(\emptyset, W_0) \otimes Hom_{\mathbf{Foam}}(W_1, \emptyset) \longrightarrow R (= R \langle \emptyset \rangle) \] given by ``capping off'' $F$ with an element of $Hom_{\mathbf{Foam}}(\emptyset, W_0)$ and an element of $Hom_{\mathbf{Foam}}(W_1, \emptyset)$, and evaluating it. Now we can define the category $\mathbf{Foam}_{/\ell}$ as the category whose objects are the same as those of $\mathbf{Foam}$, but in which two morphisms are considered equal if the corresponding bilinear forms are equal. Using the local relations it is possible to prove the following result. The reader is referred to \cite{Mackaayvaz07} for a proof.
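Before stating it, here is a small worked instance of the evaluation procedure just described (the computation is ours, and follows the sign conventions of Figures \ref{fig:localrel1} and \ref{fig:localrel2}). Consider a sphere carrying three dots on its unique regular region. The dot reduction relation (DR) trades three dots for lower powers,
\[
[\text{sphere with $3$ dots}] = -a_2\,[\text{sphere with $2$ dots}] - a_1\,[\text{sphere with $1$ dot}] - a_0\,[\text{undotted sphere}],
\]
and the sphere relations (S) then evaluate the right-hand side: the once-dotted and undotted spheres vanish, while the twice-dotted sphere equals $-1$, so a sphere with three dots evaluates to $a_2$ times the empty foam.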
\begin{proposition}[Mackaay-Vaz, \cite{Mackaayvaz07}]\label{proposition:Mackaayvazrel} The following local relations hold in $\mathbf{Foam}_{/\ell}$ \begin{figure}[H] \centering \usetikzlibrary{patterns} \begin{tikzpicture}[scale = .25] \draw[] (3,8) arc (-90:90:.5 and 1); \draw (-3,8) arc (-90:90:.5 and 1); \draw[dashed] (3,10) arc (90:270:.5 and 1); \draw (-3,10) arc (90:270:.5 and 1); \draw (-1,5) arc (180:360:1 and .5); \draw[dashed] (1,5) arc (0:180:1 and .5); \draw (-3,8) -- (3,8); \draw (-3,10) -- (3,10); \draw (-1,5) arc (180:0:1 and 1); \draw[pattern = north west lines, pattern color = red] (0,9) circle (.5 and 1); \draw[color =white] (0,9) circle (.5 and 1); \draw[thick, ->] (0,8) arc (-90:0:.5 and 1); \draw[ thick] (.5,9) arc (0:90:.5 and 1); \draw[dashed, thick] (0,10) arc (90:270:.5 and 1); \draw[fill] (0,9) circle (.15); \node at (15,7) {$=$}; \begin{scope}[shift = {+(20,0)}] \draw[] (3,8) arc (-90:90:.5 and 1); \draw (-3,8) arc (-90:90:.5 and 1); \draw[dashed] (3,10) arc (90:270:.5 and 1); \draw (-3,10) arc (90:270:.5 and 1); \draw (-1,5) arc (180:360:1 and .5); \draw[dashed] (1,5) arc (0:180:1 and .5); \draw (-3,8) -- (3,8); \draw (-3,10) -- (3,10); \draw (-1,5) arc (180:0:1 and 1); \draw[pattern = north west lines, pattern color = red] (0,9) circle (.5 and 1); \draw[color =white] (0,9) circle (.5 and 1); \draw[thick, ->] (0,8) arc (-90:0:.5 and 1); \draw[ thick] (.5,9) arc (0:90:.5 and 1); \draw[dashed, thick] (0,10) arc (90:270:.5 and 1); \draw[fill] (0,5) circle (.15); \end{scope} \node at (5,7) {$+$}; \begin{scope}[shift = {+(10,0)}] \draw[] (3,8) arc (-90:90:.5 and 1); \draw (-3,8) arc (-90:90:.5 and 1); \draw[dashed] (3,10) arc (90:270:.5 and 1); \draw (-3,10) arc (90:270:.5 and 1); \draw (-1,5) arc (180:360:1 and .5); \draw[dashed] (1,5) arc (0:180:1 and .5); \draw (1,5) .. controls +(0,1) and +(-1.5,0) .. (3,8); \draw (-1,5) .. controls +(0,2) and +(-3,0) .. 
(3,10); \draw (-3,10) arc (90:-90:1 and 1); \end{scope} \node at (25,7) {$+$}; \begin{scope}[shift = {+(30,0)}] \draw[] (3,8) arc (-90:90:.5 and 1); \draw (-3,8) arc (-90:90:.5 and 1); \draw[dashed] (3,10) arc (90:270:.5 and 1); \draw (-3,10) arc (90:270:.5 and 1); \draw (-1,5) arc (180:360:1 and .5); \draw[dashed] (1,5) arc (0:180:1 and .5); \draw (-1,5) .. controls +(0,1) and +(1.5,0) .. (-3,8); \draw (1,5) .. controls +(0,2) and +(3,0) .. (-3,10); \draw (3,10) arc (90:270:1 and 1); \end{scope} \end{tikzpicture} \begin{tikzpicture}[scale = .25] \draw[white] (-1,6)--(1,6); \draw (-1,.5) arc (-180:0:1 and .5); \draw[dashed] (-1,.5) arc (180:0:1 and .5); \draw (0,4) circle (-1 and .5); \draw[pattern = north west lines, pattern color = red] (3,4) - - (1,4) -- (1,.5) - - (3,.5); \draw[pattern = north west lines, pattern color = red] (-3,4) - - (-1,4) -- (-1,.5) - - (-3,.5); \draw (3,4) - - (1,4) (1,.5) - - (3,.5); \draw[thick, <-] (-1,2.25) - - (-1,.5); \draw[thick, ->] (1,4) - - (1,2.25); \draw[thick] (-1,4) - - (-1,.5); \draw[thick] (1,4) - - (1,.5); \draw (-3,4) - - (-1,4) (-1,.5) - - (-3,.5); \draw (3,4) - - (1,4) (1,.5) - - (3,.5); \node at (5,2.25) {=}; \begin{scope}[shift = {+(10,0)}] \draw[pattern = north west lines, pattern color = red] (-3,4) -- (-1,4) arc (-180:0:1) -- (3,4) -- (3,.5) -- (1,.5) arc (0:180:1) -- (-3,.5) -- cycle; \draw[white, thick] (-3,4) -- (-1,4) arc (-180:0:1) -- (3,4) -- (3,.5) -- (1,.5) arc (0:180:1) -- (-3,.5) -- cycle; \draw (-1,.5) arc (-180:0:1 and .5); \draw[dashed] (-1,.5) arc (180:0:1 and .5); \draw[thick, ->] (-1,.5) arc (180:0:1); \draw (0,4) circle (-1 and .5); \draw[thick, <-] (-1,4) arc (-180:0:1); \draw (-3,4) - - (-1,4) (-1,.5) - - (-3,.5); \draw (3,4) - - (1,4) (1,.5) - - (3,.5); \draw[fill] (0,4) circle (.15); \end{scope} \node at (15,2.25) {+}; \begin{scope}[shift = {+(20,0)}] \draw[pattern = north west lines, pattern color = red] (-3,4) -- (-1,4) arc (-180:0:1) -- (3,4) -- (3,.5) -- (1,.5) arc (0:180:1) -- (-3,.5) -- 
cycle; \draw[white, thick] (-3,4) -- (-1,4) arc (-180:0:1) -- (3,4) -- (3,.5) -- (1,.5) arc (0:180:1) -- (-3,.5) -- cycle; \draw (-1,.5) arc (-180:0:1 and .5); \draw[dashed] (-1,.5) arc (180:0:1 and .5); \draw[thick, ->] (-1,.5) arc (180:0:1); \draw (0,4) circle (-1 and .5); \draw[thick, <-] (-1,4) arc (-180:0:1); \draw (-3,4) - - (-1,4) (-1,.5) - - (-3,.5); \draw (3,4) - - (1,4) (1,.5) - - (3,.5); \draw[fill] (0,.5) circle (.15); \end{scope} \begin{scope}[shift ={+(-18,0)}] \draw (-1,.5) arc (-180:0:1 and .5); \draw[dashed] (-1,.5) arc (180:0:1 and .5); \draw (0,4) circle (-1 and .5); \draw[pattern = north west lines, pattern color = red] (0,2.25) circle (1 and .5); \draw[white, thick] (0,2.25) circle (1 and .5); \draw[dashed, thick] (-1,2.25) arc (180:0:1 and .5); \draw[thick] (0,1.75) arc (270:360:1 and .5); \draw[->, thick] (-1,2.25) arc (180:270:1 and .5); \draw[] (-1,4) - - (-1,.5); \draw[] (1,4) - - (1,.5); \node at (2.5,2.25) {=}; \begin{scope}[shift = {+(5,0)}] \draw (-1,.5) arc (-180:0:1 and .5); \draw[dashed] (-1,.5) arc (180:0:1 and .5); \draw[] (-1,.5) arc (180:0:1); \draw (0,4) circle (-1 and .5); \draw[] (-1,4) arc (-180:0:1); \draw[fill] (0,4) circle (.125); \end{scope} \node at (7.5,2.25) {$-$}; \begin{scope}[shift = {+(10,0)}] \draw (-1,.5) arc (-180:0:1 and .5); \draw[dashed] (-1,.5) arc (180:0:1 and .5); \draw[] (-1,.5) arc (180:0:1); \draw (0,4) circle (-1 and .5); \draw[] (-1,4) arc (-180:0:1); \draw[fill] (0,.5) circle (.125); \end{scope} \end{scope} \end{tikzpicture} \end{figure} \begin{figure}[H] \begin{tikzpicture}[scale=.5] \draw[dashed] ( .866,.5) -- +(-.25,-.75) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (-.5,.125) circle (0.075); 
\begin{scope}[shift ={+(5,0)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (0,-.25) circle (0.075); \end{scope} \begin{scope}[shift ={+(10,0)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (-.75,-.75) circle (0.075); \end{scope} \begin{scope}[shift ={+(15,0)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \end{scope} \node at (2.75,0) {+}; \node at (7.75,0) {+}; \node at (13,0) {= $\ -a_2$}; \begin{scope}[shift={(0,-3)}] \draw[dashed] ( .866,.5) -- +(-.25,-.75) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (-.5,.125) circle (0.075); \draw[fill] (-.75,-.75) circle (0.075); \begin{scope}[shift ={+(5,0)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; 
\draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (-.5,.125) circle (0.075); \draw[fill] (0,-.25) circle (0.075); \end{scope} \begin{scope}[shift ={+(10,0)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (0,-.25) circle (0.075); \draw[fill] (-.75,-.75) circle (0.075); \end{scope} \begin{scope}[shift ={+(15,0)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \end{scope} \node at (2.75,0) {+}; \node at (7.75,0) {+}; \node at (13,0) {= $\ a_1$}; \end{scope} \begin{scope}[shift ={+(5,-6)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- (-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \draw[fill] (-.75,-.75) circle (0.075); \draw[fill] (0,-.25) circle (0.075); \draw[fill] (-.5,.125) circle (0.075); \end{scope} \begin{scope}[shift ={+(10,-6)}] \draw[dashed] ( .866,.5) -- (.616,-.25) -- 
(-1.116,-1.25)--( -.866,-.5) ; \draw[fill, color = white, opacity = .76] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle; \draw[dashed] ( -.866,-.5) -- (.116,-.75) -- (1.73,.25) -- (.866,.5) -- cycle;\draw[dashed] ( -.866,-.5) -- (-.866,.5) -- ( .866,1.5) -- (.866,.5); \draw[thick, red] ( -.866,-.5) -- (.866,.5); \end{scope} \node at (8,-6) {= $\ -a_0$}; \node at (-2.5,0) {(DP1)}; \node at (-2.5,-3) {(DP2)}; \node at (-2.5,-6) {(DP3)}; \end{tikzpicture} \end{figure} where (DP1), (DP2) and (DP3) are also called \emph{dot permutation relations}.\qed \end{proposition} \section{The $\mathfrak{sl}_3$-link homologies via Foams} In this section we shall review the construction of a link homology theory via webs and foams. This construction consists of three steps. First we need some machinery coming from category theory, namely cubes and abstract complexes. Then, we shall describe how to build a cube from a link diagram, and apply the machinery developed to get a formal ``geometric'' complex of foams and webs. Finally, we make use of Bar-Natan's tautological functors to obtain an honest chain complex and a homology theory. \subsection{Cubes in categories and abstract complexes}\label{subs:fromcubestcomplexes} Let us review the construction of the complex in the category of webs and foams associated to an oriented link diagram. To define this complex, a bit of abstract nonsense is necessary. Denote by $Q_{n}$ the standard $n$-dimensional cube $[0,1]^{n}$. Orient each edge of $Q_n$ from the vertex with the lowest number of $1$s to the vertex with the highest number of $1$s. Let $R$ be a ring. An \emph{$R$-linear category} is a (small) category $\mathbf{C}$ such that, for each pair of objects $A$, $B$, the set of morphisms $Hom_{\mathbf{C}}(A,B)$ has the structure of an $R$-module, and the composition is bilinear with respect to this structure. A $\mathbb{Z}$-linear category is often called a \emph{pre-additive category}.
\begin{definition} An \emph{$n$-cube in a category $\mathbf{C}$}\index{Cube in a category} is the assignment of an object $O_{v}$ to each vertex $v$ of $Q_{n}$ and a morphism $F(v,v^{\prime})\in Hom_{\mathbf{C}}(O_v,O_{v^\prime})$ to the edge from $v$ to $v^\prime$, for each (ordered) pair of vertices $v$, $v^\prime$. An $n$-cube in an $R$-linear category $\mathbf{C}$ is \emph{commutative} (resp. \emph{skew-commutative}) if for each square the composition of the morphisms on the edges commutes (resp. anti-commutes). \end{definition} Given a commutative cube, it is easy to prove that it is always possible to change the signs of the morphisms on the edges in such a way that the cube becomes skew-commutative (\cite{Khovanov00}). Now, we wish to assign to a skew-commutative cube a formal chain complex. To do so, one must first give a meaning to a complex over a category. \begin{definition} Let $\mathbf{C}$ be an $R$-linear category. The \emph{category $\mathbf{Com}(\mathbf{C})$ of complexes over $\mathbf{C}$} is the category defined as follows: \begin{enumerate} \item the objects of $\mathbf{Com}(\mathbf{C})$ are ordered collections of pairs $(C_{i},d_{i})_{i\in\mathbb{Z}}$ where $C_{i} \in Obj(\mathbf{C})$ and $d_{i} \in Hom_{\mathbf{C}}(C_{i},C_{i+1})$ such that \[ d_{i+1} \circ d_{i} = 0_{Hom_{\mathbf{C}}(C_{i},C_{i+2})};\] \item the morphisms between two objects $(C_{i},d_{i})$ and $(C^\prime_{j},d^\prime_{j})$ of $\mathbf{Com}(\mathbf{C})$ are collections of maps $(F_{i})_{i\in \mathbb{Z}}$ such that \[ \forall\: i\in\mathbb{Z}\ :\ F_{i} \in Hom_{\mathbf{C}}(C_{i},C^\prime_{i+k}) \quad\text{and}\quad F_{i+1} \circ d_{i} = d^\prime_{i+k} \circ F_{i},\] for a fixed $k\in \mathbb{Z}$, called the \emph{degree of $(F_{i})_{i\in\mathbb{Z}}$}; \item the composition of two morphisms $(F_{i})_{i\in \mathbb{Z}}$ and $(G_{j})_{j\in\mathbb{Z}}$ is defined as $(G_{i+k} \circ F_{i})_{i\in\mathbb{Z}}$, where $k$ is the degree of $(F_{i})_{i\in\mathbb{Z}}$.
\end{enumerate} \end{definition} In general, an $R$-linear category does not have kernels and co-kernels. So, even though we can define chain complexes, we cannot define their homology. However, it is possible to define chain homotopy equivalences. \begin{definition} Two morphisms $F$ and $G$ between two objects in $\mathbf{Com}(\mathbf{C})$, say $C_{\bullet} = (C_{i},d_{i})$ and $D_{\bullet} = (D_{i},\partial_{i})$, are (\emph{chain}) \emph{homotopy equivalent} if there exists a morphism $H\in Hom_{\mathbf{Com}(\mathbf{C})}(D_{\bullet},C_{\bullet})$ such that \[ F - G = d \circ H \pm H \circ \partial. \] Two objects $C_{\bullet},\ D_{\bullet}\in Obj(\mathbf{Com}(\mathbf{C}))$ are \emph{homotopy equivalent} if there exist two morphisms \[ F \in Hom_{\mathbf{Com}(\mathbf{C})}(D_{\bullet},C_{\bullet})\quad \text{and}\quad G\in Hom_{\mathbf{Com}(\mathbf{C})}(C_{\bullet},D_{\bullet})\] such that the compositions $F \circ G$ and $G \circ F$ are homotopy equivalent to the identity morphisms of $D_\bullet$ and $C_\bullet$, respectively. We will denote by $\mathbf{Com}_{/h}(\mathbf{C})$ the category of complexes over $\mathbf{C}$ and morphisms of $\mathbf{Com}(\mathbf{C})$ up to homotopy equivalence. \end{definition} To conclude the abstract construction of a complex from a skew-commutative cube we need one more definition. \begin{definition} Given an $R$-linear category $\mathbf{C}$, the \emph{matrix category over $\mathbf{C}$}\index{Category!- matrix} is the category whose objects are formal direct sums of objects in $\mathbf{C}$ and whose morphisms are matrices with entries in the morphisms of $\mathbf{C}$. The composition of two morphisms in the matrix category over $\textbf{C}$ is given by the usual matrix multiplication rule. \end{definition} Denote by $\mathbf{Kom(C)}$ the category of complexes over the matrix category over $\mathbf{C}$, where $\mathbf{C}$ is an arbitrary $R$-linear category.
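Composition in the matrix category is ordinary matrix multiplication, and skew-commutativity of a cube is exactly what makes its totalization a chain complex. A minimal sketch in Python of the smallest nontrivial case, a $2$-cube in which every object is $\mathbb{Z}$ and every edge map is multiplication by an integer (all names and sample values here are ours, purely illustrative):

```python
# Toy 2-cube in the matrix category over free abelian groups: each vertex
# carries the object Z, each edge a multiplication-by-integer map.
a, b = 2, 3           # edge maps out of vertex (0,0)
c, d = 3, -2          # edge maps into vertex (1,1)
assert c * a + d * b == 0            # the single square is skew-commutative

# Totalization: C0 = Z (vertex 00), C1 = Z^2 (vertices 01, 10), C2 = Z (11)
d0 = [[a], [b]]                       # matrix of d_0 : C0 -> C1
d1 = [[c, d]]                         # matrix of d_1 : C1 -> C2

def matmul(A, B):
    """Composition in the matrix category = usual matrix multiplication."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Skew-commutativity forces d_{i+1} o d_i = 0 in the totalized complex.
assert matmul(d1, d0) == [[0]]
```

The assertion `matmul(d1, d0) == [[0]]` is precisely the condition $d_{i+1}\circ d_{i}=0$ from the definition of $\mathbf{Com}(\mathbf{C})$, specialized to this toy cube.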
Given a skew-commutative $n$-cube $Q$ in $\mathbf{C}$, define \[ C_{i}^{Q} = \bigoplus_{\vert v \vert = i} Q_{v},\qquad d^{Q}_{i} = \sum_{\vert v \vert = i} \bigoplus_{v^\prime} F(v,v^\prime),\] where $F(v,v^\prime)$ is defined to be zero if there is no edge from $v$ to $v^\prime$, and $\vert v \vert$ denotes the number of $1$'s in $v$. It is an easy verification that $(C_{i}^{Q},d_{i}^{Q})$ is an object in $\mathbf{Kom(C)}$. \subsection{The Khovanov-Kuperberg bracket and the geometric complex}\label{subsection:homologies} Fix a potential $\omega(x) \in R[x]$. Let $D$ be an oriented link diagram. Fix an order of the crossings of $D$, say $\{ c_{1} ,...,c_{k} \}$. Each crossing has two possible \emph{web resolutions}; see Figure \ref{fig:Webresolutions}. These resolutions come with an integer depending on the crossing and the type of resolution performed. \begin{figure}[] \centering \begin{tikzpicture}[scale = 2] \draw[->, thick] (2.2,-.2) .. controls +(-.125,.125) and +(-.125,-.125) .. (2.2,.2); \draw[->, thick] (1.8,-.2) .. controls +(.125,.125) and +(.125,-.125) ..
(1.8,.2); \draw[->, thick] (0,.1) -- (-.2,.2); \draw[->, thick] (0,.1) --(.2,.2); \draw[thick] (0,-.1) -- (-.2,-.2); \draw[thick, -<] (0,-.1) --(-.1,-.15); \draw[thick] (0,-.1) --(.2,-.2); \draw[thick, -<] (0,-.1) --(.1,-.15); \draw[very thick, purple, <-] (0,-.1) --(0, .1); \draw[<-, thick] (.8, 1.2) --(1.2,.8); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (.8, .8) --(1.2,1.2); \pgfsetlinewidth{.125*\pgflinewidth} \draw[->, thick] (.8, .8) --(1.2,1.2); \draw[<-, thick] (1.2, -.8) --(.8,-1.2); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (1.2, -1.2) --(.8,-.8); \pgfsetlinewidth{.125*\pgflinewidth} \draw[->, thick] (1.2, -1.2) --(.8,-.8); \draw[opacity = .5,->,thick] (.65, .85) -- (.1,.35); \draw[opacity = .5,->,thick] (1.35, .85) -- (1.95,.35); \draw[opacity = .5,->,thick] (.65, -.85) -- (.1,-.35); \draw[opacity = .5,->,thick] (1.35, -.85) -- (1.95,-.35); \node[above left ] at (.375,.6) {0}; \node[below left ] at (.375,-.6) {1}; \node[above right] at (1.625,.6) {1}; \node[below right] at (1.625,-.6) {0}; \end{tikzpicture} \caption{Web resolutions of positive (top) and negative (bottom) crossings.} \label{fig:Webresolutions} \end{figure} There is a natural bijection between the vertices of a $k$-dimensional cube $[0,1]^{k}$ and the web resolutions of the diagram\footnote{That is, the web obtained by replacing each crossing with a web resolution.} $D$; to $v\in \{ 0,1\}^{k}$ is associated the web $W(D,v)$ obtained by replacing, for each $i$, the crossing $c_{i}$ with its $v_{i}$-web resolution. To each oriented edge $v \to v^\prime$ of $Q_{k}$ is associated a foam $F(v, v^\prime)$ between the webs $W(D,v)$ and $W(D,v^\prime)$. The foam $F(v,v^\prime)$ is everywhere a cylinder, except above a disk where the two webs differ, where the cobordism looks like one of the two elementary web cobordisms depicted in Figure \ref{fig:Elementarwebcob}. \begin{figure}[h] \centering \begin{tikzpicture}[scale=.35, thick] \draw (.5,.25) -- (2,-1) -- (4,-1) -- (5.5,.25) -- (5.5,4) ..
controls +(-.5,-1) and +(.5,-1) ..(.5,4) --cycle; \draw[fill, white, opacity = .5] (0,-2) -- (2,-1) -- (4,-1) -- (6,-2) -- (6,2) .. controls +(-.5,1) and +(.5,1) ..(0,2) --cycle; \draw (0,-2) -- (2,-1) -- (4,-1) -- (6,-2) -- (6,2) .. controls +(-.5,1) and +(.5,1) ..(0,2) --cycle; \draw[pattern color = red, pattern= north west lines, opacity = .5] (4,-1) -- (2,-1) arc (180:0:1); \draw[very thick,red] (2,-1) arc (180:0:1); \begin{scope}[shift = {+(-2,2)}, rotate = 180] \draw (.5,-2) -- (2,-1) -- (4,-1) -- (5.5,-2) -- (5.5,2) .. controls +(-.5,1) and +(.5,1) ..(.5,2) --cycle; \draw[fill, white, opacity = .5] (0,.25) -- (2,-1) -- (4,-1) -- (6,.25) -- (6,4) .. controls +(-.5,-1) and +(.5,-1) ..(0,4) --cycle; \draw (0,.25) -- (2,-1) -- (4,-1) -- (6,.25) -- (6,4) .. controls +(-.5,-1) and +(.5,-1) ..(0,4) --cycle; \draw[pattern color = red, pattern= north west lines, opacity = .5] (4,-1) -- (2,-1) arc (180:0:1); \draw[very thick,red] (2,-1) arc (180:0:1); \node at (3,-3) {Un-Zip}; \end{scope} \node at (3,5) {Zip}; \end{tikzpicture} \caption{Elementary web cobordisms. The arc in red is a singular arc.} \label{fig:Elementarwebcob} \end{figure} The cube we defined depends on the choice of an ordering of the crossings of $D$. Moreover, it is easy to see that, by a Morse-theoretic argument, the cube associated to an oriented link diagram is commutative. We can turn this commutative cube into a skew-commutative cube $Q(D)$, although this involves a choice of signs for the edges. Finally, using the abstract construction described in Subsection \ref{subs:fromcubestcomplexes} we can associate a complex $\langle D \rangle_\omega$ in $\mathbf{Kom}(\mathbf{Foam}_{/\ell})$ to the skew-commutative cube $Q(D)$. \begin{rem*} The complex $\langle D \rangle_{\omega}$ does not depend on the potential \emph{per se}; what really depends on $\omega$ is the category $\mathbf{Foam}_{/\ell}$.
\end{rem*} The complex $\langle D \rangle_{\omega}$ is called the \emph{Khovanov-Kuperberg bracket of $D$} (with respect to $\omega$). Whenever $\omega$ is fixed or clear from the context we will remove it from the notation. With some standard machinery of homological algebra it is easy to show the following proposition. \begin{proposition}[\cite{Khovanov03, Mackaayvaz07}] The Khovanov-Kuperberg bracket of an oriented link diagram $D$ does not depend (up to isomorphism in $\mathbf{Kom}(\mathbf{Foam}_{/\ell})$) on the sign assignment or the order of the crossings used to obtain the cube $Q(D)$.\qed \end{proposition} The Khovanov-Kuperberg bracket is the $\mathfrak{sl}_3$-analogue of the Khovanov bracket introduced by Bar-Natan in \cite{BarNatan05cob}. Exactly as in the case of the Khovanov bracket, to turn the Khovanov-Kuperberg bracket into an invariant of links (and not framed links) we need to shift the homological grading. \begin{definition} Given an $R$-linear category $\mathbf{C}$ and $C_\bullet = (C_{i},d_{i})_{i\in\mathbb{Z}}\in \mathbf{Com}(\mathbf{C})$, the \emph{shift of $C_{\bullet}$ by $k\in \mathbb{Z}$} is the object $C_{\bullet}(k)$ in $ \mathbf{Com}(\mathbf{C})$ defined as follows: \[ C_\bullet(k) = (C_{i+k},d_{i+k})_{i\in\mathbb{Z}}.\] \end{definition} Finally, we can define \emph{the geometric $\mathfrak{sl}_{3}$-complex of $D$} (with respect to $\omega$) as follows \[\widetilde{C}_{\omega}^{\bullet}(D,R) = \langle D\rangle_{\omega}(-n_{+}(D)),\] where $n_+(D)$ (resp. $n_{-}(D)$) is the number of positive (resp. negative) crossings in $D$ (cf. Figure \ref{fig:Webresolutions}). This is a link invariant in the sense of the following theorem. \begin{theorem}(Mackaay-Vaz, \cite{Mackaayvaz07})\label{theorem:invarianceofthegeometriccomplex} Let $\omega$ be a potential.
If $D$ and $D^\prime$ are oriented link diagrams representing the same link, then $\widetilde{C}_{\omega}^{\bullet}(D,R)$ and $\widetilde{C}_{\omega}^{\bullet}(D^\prime,R)$ are chain homotopy equivalent. Furthermore, the assignment \[ \widetilde{C}_{\omega}^{\bullet}: \mathbf{Link} \longrightarrow \mathbf{Kom}_{/\pm h}(\mathbf{Foam}_{/\ell}),\] is a functor from the category $\mathbf{Link}$ (i.e. the category of links in $\mathbb{R}^3$, and properly embedded surfaces in $\mathbb{R}^3 \times [0,1]$ up to boundary-fixing isotopies), to the category $\mathbf{Kom}_{/\pm h}(\mathbf{Foam}_{/\ell})$ (i.e. the category obtained from $\mathbf{Kom}_{/h}(\mathbf{Foam}_{/\ell})$ by considering the morphisms up to sign). \qed \end{theorem} \subsection{Tautological functors and the $\mathfrak{sl}_3$-homology} The category $\mathbf{Mat}(\mathbf{Foam}_{/\ell})$ is not an Abelian category. Thus, it is not possible to define the homology of $\widetilde{C}_{\omega}^{\bullet}(D,R)$ directly. There are different ways to turn the geometric complex into an honest chain complex. Out of the different possibilities, following \cite{Mackaayvaz07}, we pursue the approach via tautological functors. \begin{definition} The \emph{tautological functor} is the functor \[T: \mathbf{Foam}_{/\ell} \longrightarrow R-\mathbf{Mod}\] defined on an object $W^\prime\in Obj(\mathbf{Foam}_{/\ell})$ by \[ T(W^\prime) = Hom_{\mathbf{Foam}_{/\ell}}(\emptyset,W^\prime)\] and on morphisms by composition on the left, that is \begin{align*} T(F):\ T(&W^\prime) \longrightarrow\quad T(W^{\prime\prime})\\ & G\quad \longmapsto\quad F \circ G \end{align*} for each $F\in Hom_{\mathbf{Foam}_{/\ell}}(W^\prime,W^{\prime\prime})$ and $W^\prime,\: W^{\prime\prime}\in Obj(\mathbf{Foam}_{/\ell})$. \end{definition} Note that, for a disjoint union of webs $W^\prime$ and $W^{\prime\prime}$, we have \[T (W^\prime \sqcup W^{\prime\prime}) \simeq T (W^\prime) \otimes_{R} T(W^{\prime\prime})\] as $R$-modules.
Before proceeding further let us show in an example how the functor $T$ works. This example will be useful later on. \begin{example}\label{ex:Tcircle} Let us compute $T(\bigcirc)$ (i.e.\ find its isomorphism class as an $R$-module). By definition $ T(W^\prime) = Hom_{\mathbf{Foam}_{/\ell}}(\emptyset,W^{\prime})$ is the $R$-module generated by all foams bounding $W^\prime$ (modulo local relations). All closed components of such a foam evaluate to elements of $R$. Thus, $T(\bigcirc)$ is generated (as an $R$-module) by connected foams. Since these foams must bound the circle $\bigcirc$, which is made of regular boundary points, we can use the genus reduction relation and write any connected foam bounding $\bigcirc$ as an $R$-linear combination of disks marked with at most two dots. It follows that $T(\bigcirc )$ is generated by dotted disks. Now, consider the epimorphism of $R$-modules \[ \Phi : R[x] \longrightarrow Hom_{\mathbf{Foam}_{/\ell}}\left(\emptyset,\bigcirc\right)\] mapping $x^k$ to the disk with $k$ dots. The dot reduction relation tells us that $\omega (x)$ is in the kernel. On the other hand, the disk with two dots, the disk with a single dot and the disk with no dots are linearly independent over $R$ (use the pairing to get a linear system). Finally, an easy application of the Euclidean division algorithm (which works in any ring, assuming that the polynomial by which we are dividing has invertible leading term, see \cite[Theorem 1.1 Section IV]{Lang05}) shows that $Ker(\Phi)$ is exactly $(\omega(x))$, and thus \[ T(\bigcirc) \simeq \frac{R[x]}{(\omega (x))}.\] \end{example} There is a natural way to extend the tautological functor to the category $\mathbf{Kom}(\mathbf{Foam}_{/\ell})$ (cf. \cite[Section 9]{BarNatan05cob}). With an abuse of notation we shall denote the extended functor also by $T$.
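The reduction of dotted disks used in Example \ref{ex:Tcircle} is just division with remainder by $\omega(x)$: the dot reduction relation replaces $x^{3}$ by $-(a_{2}x^{2}+a_{1}x+a_{0})$. A minimal computational sketch (the function name and the sample potential are ours, purely illustrative):

```python
def reduce_dots(k, a2, a1, a0):
    """Coefficients [c0, c1, c2] with x^k ≡ c0 + c1*x + c2*x^2
    modulo ω(x) = x^3 + a2*x^2 + a1*x + a0 (dot reduction)."""
    if k < 3:
        out = [0, 0, 0]
        out[k] = 1
        return out
    # x^k = x * x^(k-1): reduce x^(k-1), multiply by x, reduce x^3 once more
    c0, c1, c2 = reduce_dots(k - 1, a2, a1, a0)
    # x*(c0 + c1 x + c2 x^2) = c0 x + c1 x^2 + c2 x^3
    #                        ≡ c0 x + c1 x^2 - c2*(a2 x^2 + a1 x + a0)
    return [-c2 * a0, c0 - c2 * a1, c1 - c2 * a2]

# Sample potential ω(x) = x^3 - x (so a2 = 0, a1 = -1, a0 = 0):
# a disk with three dots equals a disk with one dot, x^3 ≡ x.
assert reduce_dots(3, 0, -1, 0) == [0, 1, 0]
assert reduce_dots(4, 0, -1, 0) == [0, 0, 1]
```

This reflects the fact that $T(\bigcirc)\simeq R[x]/(\omega(x))$ is a free $R$-module of rank three, with the (at most doubly) dotted disks as a basis.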
\begin{definition} The \emph{$\mathfrak{sl}_{3}$-complex} (with respect to $\omega$) of an oriented link diagram $D$ is \[ C_{\omega}^{\bullet}(D,R) = T\left(\widetilde{C}_{\omega}^{\bullet}(D,R) \right)\in Obj(\mathbf{Kom}(R-\mathbf{Mod})).\] \end{definition} The following proposition is an immediate consequence of Theorem \ref{theorem:invarianceofthegeometriccomplex}. \begin{proposition} The isomorphism class (as $R$-module) of the homology $H_{\omega}^\bullet(D,R)$ of $C_{\omega}^{\bullet}(D,R)$ is a link invariant, so we may write $H_{\omega}^\bullet(L,R)$ where $L$ is the oriented link represented by $D$. Moreover, $H_{\omega}^{\bullet}$ defines a functor between the category $\mathbf{Link}$ and the category $R$-$\mathbf{Mod}_{gr}$ of graded $R$-modules. \qed \end{proposition} The homology of the $\mathfrak{sl}_3$-complex will be called the \emph{$\mathfrak{sl}_{3}$-homology of $L$} (with respect to $\omega$). \subsection{Graded and filtered $\mathfrak{sl}_3$-homologies}\label{sec:grading} To conclude the background material, we wish to describe how to define a second grading or a filtration on the $\mathfrak{sl}_3$-complex. Suppose that $R$ is a graded ring; for the sake of simplicity we shall assume $R$ to be graded over the non-negative integers. By setting $deg(x) = 2$, the grading on $R$ induces a grading on $R[x]$. The choice of a homogeneous potential $\omega(x)$ (i.e. $deg(a_{i}) = 2(3 - i)$) induces a graded structure on the category $\mathbf{Foam}_{/\ell}$ (see \cite[Definition 6.1]{BarNatan05cob} and \cite[Section 2]{Mackaayvaz07}); that is, the modules $ Hom_{\mathbf{Foam}_{/\ell}}(W^\prime, W)$ (and thus also $T(W)$) become graded $R$-modules. This structure is defined by setting \[deg_{\mathbf{Foam}}(F) = -2\chi(F) + \chi(\partial F) + 2 d,\] for each foam $F$ with $d$ dots. In particular, it follows that the $\mathfrak{sl}_{3}$-complex associated to an oriented diagram $D$ and a potential $\omega$ is graded.
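As a consistency check of the degree formula (our computation, using only the definitions above): for a disk $\mathbb{D}$ with $d$ dots bounding a single circle, we have $\chi(\mathbb{D}) = 1$ and $\chi(\partial \mathbb{D}) = \chi(S^{1}) = 0$, so

```latex
\[
  deg_{\mathbf{Foam}}(\mathbb{D})
    = -2\chi(\mathbb{D}) + \chi(\partial \mathbb{D}) + 2d
    = -2 + 0 + 2d,
\]
```

in agreement with the degree shift $(-2)$ in the graded isomorphism $T(\bigcirc) \simeq R[x]/(\omega(x))(-2)$ when the disk with $d$ dots is sent to $x^{d}$ and $deg(x)=2$.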
Furthermore, the differential respects the \emph{quantum degree} $qdeg$, which is defined as follows \[ qdeg(x) = deg_{\mathbf{Foam}}(x) - \vert v \vert + 3n_{+}(D) - 2n_{-}(D), \] where $x$ is a homogeneous element of $T(W(D,v))$, and $v$ a vertex of the cube $Q_{n_{+}+n_{-}}$. Now, suppose that $R$ is trivially graded (i.e. supported in degree $0$). In this case, the unique homogeneous potential is $x^{3}$ (which corresponds to the original theory due to Khovanov). For all the other potentials the quantum grading is not well defined (because the reduction relations are not homogeneous). Nonetheless, one may define a filtered structure on $ Hom_{\mathbf{Foam}_{/\ell}}(W^\prime, W)$ by setting \[\mathscr{F}_{i} Hom_{\mathbf{Foam}_{/\ell}}(W^\prime, W) = \langle\: F,\ deg_{\mathbf{Foam}}(F) \leq i \rangle_{R},\] where $deg_{\mathbf{Foam}}$ is defined as above. This filtered structure extends to $\mathbf{Kom}(\mathbf{Foam}_{/\ell})$, and also to the $\mathfrak{sl}_3$-complex, and the differential does not increase the filtration level (shifted as in the case of the quantum degree). This (shifted) filtration on the $\mathfrak{sl}_3$-complex is called the \emph{quantum filtration}. \begin{rem*}\label{rem:gradingcircle}% One can prove that the isomorphism in Example \ref{ex:Tcircle} induces the following isomorphism of graded (resp. filtered) $R$-modules \[ T\left( \bigcirc \right) \simeq \frac{R[x]}{(\omega(x))}(-2),\] where $deg(x) = 2$, and in the filtered case the filtration on the left-hand side is induced by the degree.% \end{rem*} To conclude this section we state the following result, due to Mackaay and Vaz (\cite[Lemma 2.9]{Mackaayvaz07}; compare also with \cite{Khovanov03}). \begin{proposition}(Khovanov-Kuperberg relations)\label{proposition:KhovKuprelgraded} We have the following isomorphisms of graded (filtered) $R$-modules.
\begin{itemize} \item[] $\text{(circle removal)}\qquad\ T(W^\prime \sqcup \bigcirc) \simeq T(\bigcirc) \otimes T(W^\prime)$ \item[] $\text{(digon removal)}\qquad\ T(W_1 ) \simeq T(W_2)(-1) \oplus T(W_2)(1)$ \item[] $\text{(square removal)}\qquad T(W^\prime_1) \simeq T(W^\prime_2) \oplus T(W^\prime_3)$ \end{itemize} where $W_1$ and $W_2$ (resp. $W^\prime_1$, $W^\prime_2$ and $W^\prime_3$) are two (resp. three) webs which are identical but in a small ball where they are as depicted in Figure \ref{fig:KhovanovKuperbergrel}, and $(\cdot)$ indicates the degree (filtration) shift.\qed \end{proposition} \begin{figure}[H] \centering \begin{tikzpicture}[scale = .5] \begin{scope}[shift = {+(1.75,2)}] \draw[thick,->] (1.5,0) -- (2,0); \draw[thick] (2,0) .. controls +(.25,.25) and +(-.25,0) .. (3,.45); \draw[thick,<-] (3,.45) .. controls +(.25,0) and +(-.25,.25) .. (4,0); \draw[thick] (2,0) .. controls +(.25,-.25) and +(-.25,0) .. (3,-.45); \draw[thick,<-] (3,-.45) .. controls +(.25,0) and +(-.25,-.25) .. (4,0); \draw[thick,->] (4,0) -- (4.5,0); \end{scope} \draw[thick,->] (7.5,2) -- (10.5,2); \draw[dashed] (9,2) circle (1.5); \node at (9,0) {$W_{2}$}; \draw[dashed] (4.75,2) circle (1.5); \node at (4.75,0) {$W_{1}$}; \draw[thick,->] (2,-1) -- (2.5,-1.5); \draw[thick,->] (3.5,-1.5) -- (4,-1); \draw[thick,<-] (2,-3) -- (2.5,-2.5); \draw[thick,<-] (3.5,-2.5) -- (4,-3); \draw[thick,->] (3.5,-1.5)-- (3,-1.5); \draw[thick] (3,-1.5) -- (2.5,-1.5); \draw[thick] (3.5,-2) -- (3.5,-2.5); \draw[thick,->] (3.5,-1.5) -- (3.5,-2); \draw[thick,<-] (2.5,-2) -- (2.5,-2.5); \draw[thick] (2.5,-1.5) -- (2.5,-2); \draw[thick,<-] (3,-2.5) -- (2.5,-2.5); \draw[thick] (3.5,-2.5) -- (3,-2.5); \draw[dashed] (3,-2) circle (1.5); \draw[thick,<-] (5.75,-3) .. controls +(.5,.5) and +(.5,-.5) .. (5.75,-1); \draw[thick,->] (7.75,-3) .. controls +(-.5,.5) and +(-.5,-.5) .. (7.75,-1); \draw[dashed] (6.75,-2) circle (1.5); \draw[thick,->] (9.5,-1) .. controls +(.5,-.5) and +(-.5,-.5) .. 
(11.5,-1); \draw[thick,<-] (9.5,-3) .. controls +(.5,.5) and +(-.5,.5) .. (11.5,-3); \draw[dashed] (10.5,-2) circle (1.5); \node at (3,-4.25) {$W^\prime_{1}$}; \node at (6.75,-4.25) {$W^\prime_{2}$}; \node at (10.5,-4.25) {$W^\prime_{3}$}; \end{tikzpicture} \caption{Webs involved in the Khovanov-Kuperberg relations.} \label{fig:KhovanovKuperbergrel} \end{figure} \section{Transverse invariants and the universal $\mathfrak{sl}_3$-theory} Let $R$ be an integral domain, and fix a potential $\omega (x)\in R[x]$. In this section we define a family of transverse braid invariants in $C_{\omega}^{\bullet}(\overline{B},R)$, where $B$ is a closed braid diagram and the overline denotes the mirror. The elements of this family are in bijection with the (distinct) roots of $\omega$ in $R$. From now on, unless otherwise stated, all tensor products are assumed to be taken over $R$ and all the isomorphisms are assumed to be isomorphisms of $R$-modules. \subsection{The $\beta$-chains} Assume that $\omega(x)$ has a root $x_1$ in $R$. It follows that \[\omega(x) = x^3 + a_2 x^2 + a_1 x + a_0 = (x- x_1)(x^2 + a_1^\prime x + a_0^\prime),\] for some $a_1^\prime$, $a_{0}^\prime \in R$ such that \begin{equation} a_2 = a_1^\prime - x_1 \qquad a_1 = a_0^\prime - x_1 a_1^\prime \qquad a_{0} = -x_1 a_{0}^\prime . \label{eq:relationcoeff} \end{equation} Let $D$ be an oriented link diagram. Define the \emph{oriented web resolution} $\underline{w}_{D}$ to be the web resolution where each positive crossing is replaced by its $1$-web resolution, and every negative crossing is replaced by its $0$-web resolution. In other words, the oriented web resolution is the web resolution where both \ocrosstextp and \ocrosstextn are replaced by \orisplittext . It follows that the oriented web resolution is a collection of loops. Moreover, these loops have a natural orientation induced by the orientation of $D$. \begin{definition} Let $D$ be an oriented link diagram.
Consider a family of disjoint unknotted disks $\{ \mathbb{D}_{\gamma}\}_{\gamma\in \underline{w}_{D}}$ properly embedded in $(\mathbb{R}^{2}\times \{0\})\times [0,1] \subseteq \mathbb{R}^3 \times [0,1]$, obtained by pushing the Jordan disks bounding $\underline{w}_{D}$ in $\mathbb{R}^2\times \{ 0 \}$. Denote by $\mathbb{D}^{k}_{\gamma}$ the disk $\mathbb{D}_{\gamma}$ with $k$ dots on it. The \emph{$\beta$-chain (with respect to $\omega$) associated to the root $x_{1}$} is defined as follows \[ \beta_{\omega,x_1}(D)= \sum_{S \subseteq \underline{w}_{D}} \sum_{S^\prime \subseteq \underline{w}_{D}\setminus S} (a_{1}^{\prime})^{\# S^\prime }(a_{0}^{\prime})^{\# S }\left(\bigsqcup_{\gamma\in \underline{w}_{D}\setminus( S \cup S^\prime )} \mathbb{D}_{\gamma}^{2} \sqcup \bigsqcup_{\gamma\in S^\prime} \mathbb{D}_{\gamma}^{1} \sqcup \bigsqcup_{\gamma\in S} \mathbb{D}_{\gamma}^{0}\right)\in T(\underline{w}_{D}),\] where $\# S$ denotes the number of elements in $S$. \end{definition} By definition of the $\mathfrak{sl}_3$-complex, to each web resolution $\underline{w}$ of $D$ corresponds a direct summand $T(\underline{w})$ in $C_{\omega}^{\bullet}(D,R)$ in homological degree $ - n_{+}(D) + \vert \underline{w} \vert$ where $n_{+}(D)$ is the number of positive crossings in $D$, and $\vert \underline{w} \vert $ is the number of $1$-web resolutions in the web resolution $\underline{w}$. In particular, we have \[ T(\underline{w}_{D}) \subseteq C_{\omega}^{0}(D,R).\] Furthermore, the filtered quantum degree of $\beta_{\omega,x_{1}}$ can be easily computed. It suffices\footnote{We are using the fact that the disks with dots are a filtered basis for $T(\bigcirc \cdots \bigcirc)$. This follows immediately from the observation that the isomorphism (cf. 
Equation \eqref{eq:algdefofb3}) \[T(\underbrace{\bigcirc \cdots \bigcirc}_{k}) \simeq \frac{R[x_{1},...,x_{k}]}{(\omega(x_{1}), \dots , \omega(x_{k}))}(-2k),\] given by sending the disk with $d$ dots bounding the $i$-th circle into $x_{i}^{d}$, is an isomorphism of filtered modules. Here the filtration on $R[x_{1},...,x_{k}]/(\omega(x_{1}), \dots , \omega(x_{k}))$ is the one induced by the total grading, $deg(x_{i})=2$ for each $i$, and $(-2k)$ denotes a shift of $-2k$ in the filtered degree (cf. Remark \ref{rem:gradingcircle}).} to look at the maximal quantum degree of the summands in the definition of $\beta_{\omega,x_{1}}$. The maximal degree is achieved by the summand where the disks have two dots each. Thus, the filtered quantum degree of $\beta_{\omega,x_{1}}(D)$ is $2 \# \text{disks in } \underline{w}_{D} + 2 w(D)$. In the special case $D = \widehat{B}$, we have that \[2 \# \text{disks in } \underline{w}_{\widehat{B}} + 2 w(B) = 2( b(B)- w (\overline{B})) = -2sl(\overline{B}),\] where the last equality is due to the fact that $b(B)= b(\overline{B})$. The same computation works in the graded case. From the Khovanov-Kuperberg relations and from Example \ref{ex:Tcircle}, it follows that, as $R$-modules, \begin{equation} T(\underline{w}_{D}) \simeq \bigotimes_{\gamma \in \underline{w}_{D}} \frac{R[x_{\gamma}]}{(\omega (x_\gamma))} \simeq \frac{ R[x_\gamma\:\vert\:\gamma\in\underline{w}_{D}]}{\left(\omega (x_{\gamma})\right)_{\gamma\in\underline{w}_{D}}}\label{eq:algdefofb3} \end{equation} where $\gamma\in \underline{w}_{D}$ should be read as ``$\gamma$ is a circle in $\underline{w}_{D}$''. It is easy to see that the isomorphism in \eqref{eq:algdefofb3} maps $\beta_{\omega,x_1}(D)$ to \[ \prod_{\gamma\in \underline{w}_{D}} [x_{\gamma}^{2} + a_{1}^\prime x_{\gamma} + a_{0}^{\prime}]\in T(\underline{w}_{D}) \subseteq C_{\omega}^{0}(D,R).\] In the rest of the paper we shall freely switch between these two representations of the chains $\beta_{\omega,x_1}(D)$.
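The coefficient relations \eqref{eq:relationcoeff} amount to expanding the factorization $\omega(x) = (x-x_{1})(x^{2}+a_{1}^{\prime}x+a_{0}^{\prime})$; a quick mechanical check (pure-Python polynomial convolution with sample integer values; all names are ours, purely illustrative):

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

# Sample root and quadratic factor: x1 = 2, x^2 + 5x - 7
x1, a1p, a0p = 2, 5, -7
# The relations: a2 = a1' - x1, a1 = a0' - x1*a1', a0 = -x1*a0'
a2, a1, a0 = a1p - x1, a0p - x1 * a1p, -x1 * a0p

omega = polymul([-x1, 1], [a0p, a1p, 1])   # (x - x1)(x^2 + a1' x + a0')
assert omega == [a0, a1, a2, 1]            # equals x^3 + a2 x^2 + a1 x + a0
```

Running the same check with any other root and quadratic factor verifies the relations formally, since both sides are polynomial identities in $x_{1}$, $a_{1}^{\prime}$, $a_{0}^{\prime}$.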
\begin{rem*} The multiplication of $\beta_{\omega,x_1}(D)$ by $x_{\gamma}$, which is an algebraic operation, corresponds ``geometrically'' to adding a dot to the disk $\mathbb{D}_{\gamma}$ in each summand of the ``geometric expression'' of $\beta_{\omega,x_1}(D)$. \end{rem*} \begin{figure}[H] \centering \begin{tikzpicture}[scale =.25, thick] \draw[dashed] (6,0) arc (0:180:6 and 1); \draw (6,0) arc (0:-180:6 and 1); \draw (-4,8) circle (2 and .333); \draw (4,8) circle (2 and .333); \draw[pattern = north west lines, pattern color =red, dashed, opacity =.8] (-1.55291427061512,0.965925826289068) .. controls +(0,3) and +(-.5,0) .. (0,5) .. controls +(.75,0) and +(0,4.5) .. (1.55291427061512,-0.965925826289068) --cycle; \draw[white] (-1.55291427061512,0.965925826289068) .. controls +(0,3) and +(-.5,0) .. (0,5) .. controls +(.75,0) and +(0,4.5) .. (1.55291427061512,-0.965925826289068) --cycle; \draw[thick, dashed ,red] (1.55291427061512,-0.965925826289068) -- (-1.55291427061512,0.965925826289068) .. controls +(0,3) and +(-.5,0) .. (0,5); \draw[thick,red] (0,5) .. controls +(.75,0) and +(0,4.5) .. (1.55291427061512,-0.965925826289068) ; \draw (2,8) .. controls +(0,-1) and +(1,0) .. (0,5) .. controls +(-1,0) and +(0,-1) .. (-2,8); \draw (-6,8) -- (-6,0); \draw (6,8) -- (6,0); \end{tikzpicture} \caption{The foam $F$.}\label{fig:foamlemma} \end{figure} \begin{lemma}\label{lemma:dbetavanish} Let $F$ be the foam in Figure \ref{fig:foamlemma}, and consider the morphism of $R$-modules \[T(F): T(\bigcirc \sqcup \bigcirc ) \longrightarrow T(W_\theta),\] where $W_{\theta}$ is the theta web (i.e. the closure of the web $W_{1}$ in Figure \ref{fig:KhovanovKuperbergrel}). Define $\beta$ as follows \[ \beta = (x^{2} + a_{1}^\prime x + a_0^\prime) (y^{2} + a_{1}^\prime y + a_0^\prime) \in \frac{R[x,y]}{(\omega(y),\omega(x))} \simeq T(\bigcirc \sqcup \bigcirc).\] Then, we have \[T(F)(\beta) = 0. 
\] \end{lemma} \begin{proof} Our aim is to prove that $T(F)(\beta)$ can be written as follows \[ T(F)(\beta) = \sum_{j=1}^{k} c_j(F_{j}^{\prime\prime\prime} + a_2 F_{j}^{\prime\prime} + a_1 F_{j}^\prime + a_{0}F_{j}),\quad c_j \in R \] where $F_{j},\: F_{j}^\prime,\: F_{j}^{\prime\prime},\: F_{j}^{\prime\prime\prime}$ are of the form shown in Figure \ref{fig:prefoam}, and are identical except in a small region where they differ as shown in Figure \ref{fig:F_j}. \begin{figure}[H] \centering \begin{tikzpicture}[scale = .25] \draw[dashed] (0,7.5) circle (2); \draw[dashed] (7,7.5) circle (2); \draw[dashed] (14,7.5) circle (2); \draw[dashed] (21,7.5) circle (2); \node at (21,3.5) {$F_{j}$}; \node at (14,3.5) {$F_{j}^{\prime}$}; \node at (7,3.5) {$F_{j}^{\prime\prime}$}; \node at (0,3.5) {$F_j^{\prime\prime\prime}$}; \draw[fill] (7.5,7.5) circle (0.15) ; \draw[fill] (6.5,7.5) circle (0.15) ; \draw[fill] (0,7.5) circle (0.15) ; \draw[fill] (0.75,7.5) circle (0.15) ; \draw[fill] (-.75,7.5) circle (0.15) ; \draw[fill] (14,7.5) circle (0.15) ; \end{tikzpicture} \caption{The local difference between the foams $F_{j},\: F_{j}^\prime,\: F_{j}^{\prime\prime},\: F_{j}^{\prime\prime\prime}$.} \label{fig:F_j} \end{figure} Since, for each $j$, $(F_{j}^{\prime\prime\prime} + a_2 F_{j}^{\prime\prime} + a_1 F_{j}^\prime + a_{0}F_{j})$ is trivial in $\mathbf{Foam}_{/\ell}$ by (DR), the claim will follow. To avoid graphical calculus we make use of polynomials. So, let us denote by the monomial $A^{r}B^{s}C^{t}$ the foam (in $(\mathbb{R}^{2}\times \{0\} )\times [0,1]$) shown in Figure \ref{fig:prefoam}, where $r$, $s$ and $t$ indicate the number of dots in the regions $A$, $B$ and $C$ respectively. 
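In this monomial notation, the claimed decomposition can be rephrased as follows: it amounts to writing $T(F)(\beta)$ as an $R$-linear combination of elements of the form

```latex
\[
\left(A^{3} + a_{2}A^{2} + a_{1}A + a_{0}\right)A^{r}B^{s}C^{t}
  = A^{r+3}B^{s}C^{t} + a_{2}\,A^{r+2}B^{s}C^{t}
    + a_{1}\,A^{r+1}B^{s}C^{t} + a_{0}\,A^{r}B^{s}C^{t},
\]
% or of the analogous form with the extra dots placed on the region B or C;
% each such combination is trivial in Foam_{/l} by (DR).
```
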
By definition $T(F)(\beta)$ can be written as follows \begin{small} \[T(F)(\beta) = A^{2}B^{2} + a_{1}^{\prime}(A^2B + AB^2) + (a_{1}^\prime)^2 AB + a_{0}^\prime(A^{2} + B^{2}) + a_{1}^\prime a_{0}^\prime (A + B)+(a_{0}^\prime)^{2}.\] \end{small} With this notation we can write the dot permutation relations (DP1), (DP2) and (DP3) described in Proposition \ref{proposition:Mackaayvazrel} as follows: \begin{equation} \tag{DP1} A + B + C = - a_2 \label{eq:lineareq} \end{equation} \begin{equation} \tag{DP2} AC + BC + AB = a_1 \label{eq:quadraticeq} \end{equation} \begin{equation} \tag{DP3} ABC = -a_0 \label{eq:cubiceq} \end{equation} Notice that these relations assert precisely that $A$, $B$ and $C$ behave as the three roots of the polynomial $t^{3} + a_2t^{2} + a_1t + a_0$. Since all foam relations are local, and since we are allowed to move the dots inside regions, the formal products above satisfy associativity. Using Relations \eqref{eq:lineareq}, \eqref{eq:quadraticeq} and \eqref{eq:cubiceq} we obtain \begin{small} \[ T(F)(\beta) = B(A^3 + a_2A^2 +a_1 A + a_0) (A - a_1^\prime + 1) = \] \[ = \left(BA^4 + a_2 BA^3 +a_1 BA^2 + a_0BA\right) + (- a_1^\prime + 1)\left(BA^3 + a_2BA^2 +a_1 BA + a_0B\right), \] \end{small} which is the desired decomposition of $T(F)(\beta)$. \end{proof} \begin{figure}[h] \centering \begin{tikzpicture}[scale=.65, thick] \draw[dashed] (-6,0) arc (180:55:3) arc (55:6:3); \draw (-6,0) arc (180:55:3); \draw (-6,0) arc (-180:-82:3 and .5); \draw[dashed] (-6,0) arc (180:22:3 and .5); \draw (3,-.25) arc (0:128:3); \draw (3,-.25) arc (0:-150:3 and .5); \draw[fill, white, opacity =.5, dashed] (0,.2) .. controls +(-.45,1.5) and +(.3,0).. (-1.85,2.11) .. controls +(-.3,0) and +(-.45,1.5) .. (-2.6,-.5) -- cycle; \draw[pattern color=red, pattern = north east lines, opacity =.5, dashed] (0,.2) .. controls +(-.45,1.5) and +(.3,0).. (-1.85,2.11) .. controls +(-.3,0) and +(-.45,1.5) .. (-2.6,-.5) -- cycle; \draw[white, thick] (0,.2) .. controls +(-.45,1.5) and +(.3,0).. (-1.85,2.11) .. controls +(-.3,0) and +(-.45,1.5) .. (-2.6,-.5); \draw[red, thick, dashed] (0,.2) .. controls +(-.45,1.5) and +(.3,0).. 
(-1.85,2.11); \draw[red, thick] (-1.85,2.11) .. controls +(-.3,0) and +(-.45,1.5) .. (-2.6,-.5); \draw[dashed] (3,-.25) arc (0:91:3 and .5); \node at (0,2) {B}; \node at (-3.25,2.25) {A}; \node at (-1.5,1) {C}; \draw[fill, white] (-4,.7) rectangle (-3.4,-.3); \node at (-3.75,.75) {$\underbrace{\bullet\ ...\ \bullet}_{r}$}; \draw[fill, white] (1.8,.7) rectangle (1.2,-.3); \node at (1.5,.75) {$\underbrace{\bullet\ ...\ \bullet}_{s}$}; \draw[fill, white] (-1.8,-.15) rectangle (-1.2,-.5); \node at (-1.5,0) {$\underbrace{\textcolor{red}{\bullet\ ...\ \bullet}}_{t}$}; \end{tikzpicture} \caption{The foam $A^{r}B^{s}C^{t}$.}\label{fig:prefoam} \end{figure} \begin{rem*} In the proof of Lemma \ref{lemma:dbetavanish} we identified $T(W_{\theta})$ with the $R$-module \[ M = \frac{R[A,B,C]}{\big(\text{\eqref{eq:lineareq}, \eqref{eq:quadraticeq}, \eqref{eq:cubiceq}}\big)}.\] \begin{enumerate} \item The above identification of $T(W_\theta)$ with $M$ completely disregards the quantum grading (resp. filtration). Taking into account the quantum degree (resp. filtration), there is a shift one has to consider. More precisely, we have an isomorphism of graded (resp. filtered) $R$-modules \[T(W_{\theta}) \simeq \frac{R[A,B,C]}{\big(\text{\eqref{eq:lineareq}, \eqref{eq:quadraticeq}, \eqref{eq:cubiceq}}\big)}(-3),\] where $deg(A) = deg(B) = deg(C) = 2$. \item By Proposition \ref{proposition:KhovKuprelgraded}, Example \ref{ex:Tcircle}, and Remark \ref{rem:gradingcircle} we have the following isomorphisms of graded (resp. filtered) $R$-modules \[T\left(W_{\theta}\right) \simeq T(\bigcirc) (-1) \oplus T(\bigcirc)(1) \simeq \frac{R[x]}{(\omega(x))}(-3) \oplus \frac{R[y]}{(\omega(y))}(-1),\] where $deg(x) = deg(y) = 2$. In what follows, we shall make use of this representation of $T(W_{\theta})$ rather than $M(-3)$. Of course the two representations are isomorphic as graded (resp. filtered) $R$-modules. 
An explicit isomorphism is given by: \[ M(-3) \longrightarrow \frac{R[x]}{(\omega(x))}(-3) \oplus \frac{R[y]}{(\omega(y))}(-1)\::\: \left\lbrace\begin{matrix}A^k\phantom{B} \mapsto x^k & k = 0,1,2\\ A^kB \mapsto y^k & k = 0,1,2\end{matrix}\right.\] \end{enumerate} \end{rem*} Now, we are ready to prove the following result. \begin{proposition}\label{proposition:betaiscycle} Let $D$ be an oriented link diagram. Then, $\beta_{\omega,x_1}(D)$ is a cycle. \end{proposition} \begin{proof} First, notice that the oriented web resolution is bipartite, exactly as the oriented resolution; that is, if two arcs in $\underline{w}_{D}$ were connected by a crossing in $D$, then they belong to different circles in $\underline{w}_{D}$. Let $\underline{w^\prime}$ be a web resolution which is obtained from $\underline{w}_{D}$ by replacing a $0$-web resolution with a $1$-web resolution, and denote by $E$ the set of such resolutions. Notice that each $\underline{w^\prime} \in E$ is the disjoint union of circles and a theta web. In particular, we have the following isomorphism of $R$-modules \[ T(\underline{w^\prime}) \simeq \bigotimes_{\gamma\in \underline{w}_{D}\setminus \{\gamma_{1},\gamma_{2}\}} \frac{R[x_{\gamma}]}{(\omega(x_{\gamma}))} \otimes \left( \frac{R[y]}{(\omega (y))} \oplus\frac{R[x]}{(\omega(x))}\right), \] where $\gamma_{1} = \gamma_{1}(\underline{w^\prime})$ and $\gamma_{2}= \gamma_{2}(\underline{w^\prime})$ are the two circles in $\underline{w}_{D}$ which are merged into the theta web in $\underline{w^\prime}$, and the circles in $\underline{w^\prime}$ are identified with the corresponding circles in $\underline{w}_{D}$. 
By definition, the differential $d_{geo}$ of the geometric complex is of the form \[ d_{geo\ \vert\:\underline{w}_{D}} = \sum_{\underline{w^\prime}\in E} \pm F(\underline{w^\prime}),\] where each $F(\underline{w^\prime})$ is a disjoint union of cylinders and a copy of the foam $F$ in Figure \ref{fig:foamlemma}, and the boundary of $F(\underline{w^\prime})$ is $\underline{w^\prime} \sqcup \underline{w}_{D}$. Applying the tautological functor $T$, we get \[ d (T(\underline{w}_{D})) \subseteq \bigoplus_{\underline{w^\prime}\in E} \left(\left( \frac{R[y]}{(\omega(y))} \oplus\frac{R[x]}{(\omega(x))}\right)\otimes \bigotimes_{\gamma\in \underline{w}_{D}\setminus \{\gamma_{1}(\underline{w^\prime}),\gamma_{2}(\underline{w^\prime})\}} \frac{R[x_{\gamma}]}{(\omega(x_{\gamma}))} \right),\] and \[d_{\vert\:T(\underline{w}_{D})} = \bigoplus_{\underline{w^\prime}\in E} \pm \left(T(F) \otimes \bigotimes_{\gamma\in \underline{w}_{D}\setminus \{\gamma_{1}(\underline{w^\prime}),\gamma_{2}(\underline{w^\prime})\}} Id_{R[x_{\gamma}]/(\omega(x_{\gamma}))}\right). \] The statement now follows immediately from Lemma \ref{lemma:dbetavanish}. \end{proof} \subsection{The transverse invariance of the $\beta$-chains} Now, let us analyse the behaviour of the $\beta$-chains under the maps induced by some Reidemeister moves. These moves include the closures of the mirror images of the transverse Markov moves. During the proofs in this section the homological degree and the quantum degree (or filtration) will be disregarded. We remark that the chain homotopy equivalences associated to the Reidemeister moves described here are of (filtered) degree $0$ with respect to the quantum grading (resp. filtration) once the appropriate shifts are taken into account. \subsubsection*{Negative first Reidemeister move} Let $D$ be an oriented link diagram, and denote by $D_{-}$ the oriented link diagram obtained from $D$ via a negative first Reidemeister move on a given arc \textbf{a} (Figure \ref{reidmoves12}). 
\begin{figure}[H] \centering \begin{tikzpicture}[scale = .75, thick] \draw[->] (-1,0) .. controls +(.5,.5) and +(-.5,.5) .. (1,0); \node at (0,-.5) {$D$}; \node at (1,.35) {\textbf{a}}; \node at (4,-.5) {$D_+$}; \node at (-4,-.5) {$D_-$}; \node at (-2,1) {$R_1^-$}; \node at (-2,.5) {$\leftrightharpoons$}; \node at (2,1) {$R_1^+$}; \node at (2,.5) {$\rightleftharpoons$}; \draw[<-] (-3,0) .. controls +(-.5,.5) and +(.25,-.5) .. (-4.5,1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (-5,0) .. controls +(.5,.5) and +(-.25,-.5) .. (-3.5,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw[<-] (-4.5,1) .. controls +(-.25,.5) and +(.25,.5) .. (-3.5,1); \draw[->] (-5,0) .. controls +(.5,.5) and +(-.25,-.5) .. (-3.5,1); \begin{scope}[shift= {(8,0)}] \draw[<-] (-4.5,1) .. controls +(-.25,.5) and +(.25,.5) .. (-3.5,1); \draw[->] (-5,0) .. controls +(.5,.5) and +(-.25,-.5) .. (-3.5,1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (-3,0) .. controls +(-.5,.5) and +(.25,-.5) .. (-4.5,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw[<-] (-3,0) .. controls +(-.5,.5) and +(.25,-.5) .. (-4.5,1); \end{scope} \node at (4.75,.5) {$c_+$}; \node at (-4.75,.5) {$c_-$}; \end{tikzpicture} \caption{The negative (left) and positive (right) versions of the first Reidemeister move.} \label{reidmoves12} \end{figure} In Figure \ref{fig:firstRmove} there is a description of the map associated to a negative Reidemeister move between the geometric complexes (cf. \cite[Section 2.2]{Mackaayvaz07}). The figure should be read as follows: the foams are all embedded in $(\mathbb{R}^{2}\times\{ 0 \}) \times [0,1]$ and are cylinders except in a small cylinder above the arc $\mathbf{a}$, where they look like the ones depicted in Figure \ref{fig:firstRmove}. \begin{figure}[h] \begin{tikzpicture}[scale=.3] \draw[dashed] (0,7.5) circle (2) ; \draw[dashed] (0,-7.5) circle (2) ; \draw[dashed] (15,7.5) circle (2) ; \node at (15,-7.5) {0} ; \draw[thick,<-] (-1.414,7.5 +1.414) .. controls +(.75,-.75) and +(.75,.75) .. 
(-1.414,7.5-1.414); \draw[thick,->] (0,7.5) arc (180:-180:.75); \draw[thick,<-] (15-1.414,7.5 +1.414) -- (15-0.707,7.5 +0.707)-- (15-0.707,7.5 -0.707)-- (15-1.414,7.5-1.414); \draw[thick,->] (15-0.707,7.5 +0.707) .. controls +(2,2) and +(2,-2) .. (15-0.707,7.5 -0.707); \draw[very thick, purple, ->] (15-0.707,7.5 +0.707)-- (15-0.707,7.5 -0.707); \draw[thick,<-] (-1.414,-7.5 +1.414) .. controls +(.75,-.75) and +(.75,.75) .. (-1.414,-7.5-1.414); \draw[thick, ->](4,7.5)--(11,7.5); \draw[thick, ->](4,-7.5)--(11,-7.5); \draw[thick, ->](-.5,-4)--(-.5,4); \draw[thick,<-](.5,-4)--(.5,4); \draw[thick, ->](15.5,-4)--(15.5,4); \draw[thick,<-](14.5,-4)--(14.5,4); \draw (-3.5,0) -- (-3,-1) -- (-3,3)-- (-3.5,4) -- cycle; \draw (-2.75,-.5) arc (180:0:.75) arc (0:-180:.75 and .25); \draw[dashed] (-1.25,-.5) arc (0:180:.75 and .25); \begin{scope}[shift ={(-5.5,0)}] \draw (-3.5,0) -- (-3,-1) -- (-3,3)-- (-3.5,4) -- cycle; \draw (-2.75,-.5) arc (180:0:.75) arc (0:-180:.75 and .25); \draw[dashed] (-1.25,-.5) arc (0:180:.75 and .25); \end{scope} \begin{scope}[shift ={(-11.75,0)}] \draw (-3.5,0) -- (-3,-1) -- (-3,3)-- (-3.5,4) -- cycle; \draw (-2.75,-.5) arc (180:0:.75) arc (0:-180:.75 and .25); \draw[dashed] (-1.25,-.5) arc (0:180:.75 and .25); \end{scope} \begin{scope}[shift ={(10,13)}, yscale = -1] \draw (-4,1) -- (-2,-1) -- (-2,3)-- (-4,5) -- cycle; \draw[pattern =north east lines, pattern color = red, dashed] (-3.5,4.5) .. controls +(0,-1.5) and +(0,-1.5) .. (-2.5,3.5) -- cycle; \draw[] (-3,2.85) -- (-2,2); \draw[] (-3.5,4.5) .. controls +(1.5,0) and +(1.5,0) .. (-2.5,3.5) -- cycle; \draw[] (-1.25,.5)arc (-180:0:.75 and .25); \draw[] (.25,.5) arc (0:180:.75 and .25); \draw (-2,2) .. controls +(.5,-.5) and +(0,.5) .. (-1.25,.5); \draw (-1.75,3.8) .. controls +(.5,-.5) and +(0,.5) .. (.25,.5); \draw[thick, red] (-3.5,4.5) .. controls +(0,-1.5) and +(0,-1.5) .. 
(-2.5,3.5) -- cycle ; \end{scope} \begin{scope}[shift = {+(7,-1)}] \node[left] at (-3.75,2) {$-$}; \draw (-3.5,0) -- (-3,-1) -- (-3,3)-- (-3.5,4) -- cycle; \draw (-2.75,3.5) arc (-180:0:.75) arc (0:360:.75 and .25); \draw[dashed] (-1.25,3.5) arc (0:180:.75 and .25); \end{scope} \node at (-4.25,1.5) {$a_1$}; \node at (-5.5,1.5) {$-$}; \node at (-11.75,1.5) {$-a_{2}\: \sum_{i=0}^{1}$}; \node at (-17.5,1.5) {$F=\:\sum_{i=0}^{2}\quad $}; \node at (-12.25,-.5) {$i$}; \node at (-13.5,3.5) {$2-i$}; \node at (-6.25,-.5) {$i$}; \node at (-7.5,3.5) {$1-i$}; \node at (-6.25,7.5) {$\langle D_{-} \rangle_{\omega}\ :$}; \node at (-6.25,-7.5) {$\langle D \rangle_{\omega}\ :$}; \node at (6.5,1) {$=G$}; \node at (0,-7) {$\mathbf{a}$}; \end{tikzpicture} \caption{Schematic description of the maps encoding a negative first Reidemeister move. The numbers next to the foams (which are drawn in $(\mathbb{R}^{2}\times\{0\})\times [0,1]$ and should be read top to bottom) indicate the number of dots. The horizontal maps are the differentials. Notice that there is a difference in the sign of the maps $F$ and $G$ with respect to \cite{Mackaayvaz07}.} \label{fig:firstRmove} \end{figure} \begin{rem*} The maps $F$ and $G$ described in Figure \ref{fig:firstRmove} have the opposite sign with respect to the corresponding maps defined in \cite[Section 2.2]{Mackaayvaz07}. \end{rem*} Before proceeding we need the following lemma. \begin{lemma}\label{lemma:techlemmaforfirstmove} Let $P(x) \in R[x]$ be the polynomial $x^2 + a_{1}^\prime x + a_{0}^\prime$. Then, \begin{small} \[[x^2P(x)]\otimes [1] + [xP(x)]\otimes [y] + a_2[xP(x)]\otimes [1] - x_1[P(x)]\otimes[y] -x_1 a_1^\prime [P(x)]\otimes[1]\] \end{small} is zero in \[\frac{R[x]}{(\omega (x))}\otimes \frac{R[y]}{(\omega (y))}\] \end{lemma} \begin{proof} First, let us point out that \begin{equation} x P(x) = x_{1} P (x)\quad mod\ \omega(x). 
\label{eq:basesl3} \end{equation} Using the equality $a_{2} = a_1^\prime - x_1$, we get \[[x^2 P(x)]\otimes [1] + [x P(x)]\otimes [y] + a_2[x P(x)]\otimes [1] =\] \[ = [x^2P (x)]\otimes [1] + [x P (x)]\otimes [y] + a_1^\prime [x P (x)]\otimes [1] - x_1 [xP(x)]\otimes [1] = \] \[ = x_1[P(x)]\otimes [y] + a_1^\prime x_1[P(x)]\otimes [1], \] where the last equality follows from Equation \eqref{eq:basesl3}. \end{proof} Denote by \[\Phi_{1}:C_{\omega}^{\bullet}(D) \longrightarrow C_{\omega}^{\bullet}(D_{-})\] the map associated to the (linear combination of) foam(s) $F$ in Figure \ref{fig:firstRmove}, and by \[\Psi_{1}:C_{\omega}^{\bullet}(D_{-}) \longrightarrow C_{\omega}^{\bullet}(D),\] the map associated to the foam denoted by $G$ in the same figure. \begin{proposition}\label{proposition:betaandr1sl3} Let $D$ be an oriented link diagram, and let $D_{-}$ be the diagram obtained from $D$ via a negative first Reidemeister move. Then, \[\Psi_{1}(\beta_{\omega, x_1}(D_{-})) = \beta_{\omega, x_1}(D)\quad\text{and}\quad \Phi_{1}(\beta_{\omega, x_1}(D)) = \beta_{\omega, x_1}(D_{-}).\] \end{proposition} \begin{proof} Notice that $\underline{w}_{D_{-}}$ is mapped to $\underline{w}_{D}$ by $\Psi_{1}$ and that $\underline{w}_{D_-}$ can be identified with $\underline{w}_{D} \sqcup \bigcirc$. 
By the Khovanov-Kuperberg circle removal relation, Example \ref{ex:Tcircle} and the sphere relation we can identify $\Psi_{1\: \vert \underline{w}_{D}}$ with the map \[\frac{R[x_{\gamma^\prime}]}{(\omega (x_{\gamma^\prime}))} \otimes \bigotimes_{\gamma\in\underline{w}_{D}} \frac{R[x_{\gamma}]}{(\omega (x_\gamma))} \longrightarrow \bigotimes_{\gamma\in\underline{w}_{D}} \frac{R[x_{\gamma}]}{(\omega(x_\gamma))}\] given by \[ q_{\gamma^\prime}(x_{\gamma^\prime}) \otimes \bigotimes_{\gamma\in\underline{w}_D} q_{\gamma}(x_{\gamma}) \longmapsto -\epsilon(q_{\gamma^\prime}(x_{\gamma^\prime}))\bigotimes_{\gamma\in\underline{w}_{D}} q_{\gamma}(x_{\gamma}), \] where $\gamma^\prime$ indicates the circle in $\underline{w}_{D_-}\setminus \underline{w}_{D}$ and \[\epsilon: \frac{R[x]}{(\omega (x))} \longrightarrow R\: :\: [ax^{2} + bx + c] \mapsto -a.\] Since in the case of $\beta_{\omega, x_1}(D_{-})$ we have $q_\gamma (x) = q_{\gamma^\prime}(x) = [P(x)] = [x^{2} + a_1^\prime x + a_0^\prime]$, the first part of the statement follows. 
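Spelling out the last step: since $P(x) = x^{2} + a_{1}^\prime x + a_{0}^\prime$ is monic, we have $\epsilon([P(x_{\gamma^\prime})]) = -1$, and therefore

```latex
\[
\Psi_{1}\big(\beta_{\omega, x_1}(D_{-})\big)
  = -\epsilon\big([P(x_{\gamma^\prime})]\big)
    \bigotimes_{\gamma\in\underline{w}_{D}} [P(x_{\gamma})]
  = \prod_{\gamma\in\underline{w}_{D}}
    [x_{\gamma}^{2} + a_{1}^\prime x_{\gamma} + a_{0}^\prime]
  = \beta_{\omega, x_1}(D).
\]
```
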
Similarly to what has been done with $\Psi_1$, we can identify $\Phi_1$ with the map \[ \bigotimes_{\gamma\in\underline{w}_{D}} \frac{R[x_{\gamma}]}{(\omega (x_\gamma))} \longrightarrow \frac{R[x_{\gamma^\prime}]}{(\omega (x_{\gamma^\prime}))} \otimes\bigotimes_{\gamma\in\underline{w}_{D}} \frac{R[x_{\gamma}]}{(\omega (x_\gamma))}\] mapping $ \bigotimes_{\gamma\in\underline{w}_D} q_{\gamma}(x_{\gamma})$ to \begin{small} \[-\left(\sum_{i=0}^{2}(x_{\gamma_{\mathbf{a}}}^{2-i}q_{\gamma_{\mathbf{a}}} \otimes x_{\gamma^\prime}^{i})+ a_{2} \sum_{i=0}^{1}(x_{\gamma_{\mathbf{a}}}^{1-i}q_{\gamma_{\mathbf{a}}}\otimes x_{\gamma^\prime}^{i}) + a_{1}(q_{\gamma_{\mathbf{a}}}\otimes 1)\right)\otimes \bigotimes_{\gamma\in\underline{w}_{D}\setminus \{ \gamma_{\mathbf{a}}\}} q_{\gamma}(x_{\gamma}), \] \end{small} where $\gamma_{\mathbf{a}}$ is the circle in $\underline{w}_{D}$ containing the arc $\mathbf{a}$ (see Figure \ref{fig:firstRmove}) and $q_{\gamma_{\mathbf{a}}} = q_{\gamma_{\mathbf{a}}}(x_{\gamma_{\mathbf{a}}})$. It is now easy to see that \begin{small} \begin{align*} \Phi_1 (\beta_{\omega, x_1}(D)) = &\left( [x_{\gamma_{\mathbf{a}}}^2 P(x_{\gamma_{\mathbf{a}}})]\otimes [1] + [x_{\gamma_{\mathbf{a}}}P(x_{\gamma_{\mathbf{a}}})]\otimes [x_{\gamma^\prime}] + a_2[x_{\gamma_{\mathbf{a}}}P(x_{\gamma_{\mathbf{a}}})]\otimes [1] \right. \\ & - x_1[P(x_{\gamma_{\mathbf{a}}})]\otimes[x_{\gamma^\prime}]-x_1 a_1^\prime [P(x_{\gamma_{\mathbf{a}}})]\otimes[1] \Big) \otimes \bigotimes_{\gamma\in\underline{w}_{D}\setminus \{ \gamma_{\mathbf{a}}\}} P(x_{\gamma}) + \beta_{\omega, x_1}(D_-) =\\ = &\ \beta_{\omega, x_1}(D_-) \end{align*} \end{small} where the last equality is due to Lemma \ref{lemma:techlemmaforfirstmove}. \end{proof} \subsubsection*{Second Reidemeister move} Now, let us turn to the coherent version of the second Reidemeister move. Let $D$ be an oriented link diagram. Let \textbf{a} and \textbf{b} be two (unknotted) arcs of $D$ lying in a small ball. 
Performing a second Reidemeister move on these arcs inserts two adjacent crossings, say $c_1$ and $c_2$, of opposite types. Recall that a Reidemeister move is coherent if it can be obtained by rotating or taking the mirror image of the one in Figure \ref{fig:coherentR22}. Denote by $D^{\prime}$ the link diagram obtained from $D$ by performing a coherent second Reidemeister move. Finally, denote by $\underline{w}^\prime_{D^{\prime}}$ the web resolution of $D^{\prime}$ where all crossings but $c_1$ and $c_{2}$ are resolved as in the oriented web resolution. \begin{figure}[H] \centering \begin{tikzpicture}[scale = .15, thick] \ocross{-15}{-2} \icross{-6}{-2} \draw (-6,2) .. controls +(-1.5,1.5) and +(1.5,1.5) .. (-11,2); \draw (-11,-2) .. controls +(1.5,-1.5) and +(-1.5,-1.5) .. (-6,-2); \draw[->] (-15,2) -- (-16,3); \draw[->] (-15,-2) -- (-16,-3); \draw (-2,2) -- (-1,3); \draw (-2,-2) -- (-1,-3); \node at (3,0) {$\rightleftharpoons$}; \draw[->] (21,3) .. controls +(-2,-2) and +(2,-2) .. (8,3); \draw[<-] (8,-3) .. controls +(2,2) and +(-2,2) .. (21,-3); \node at (14.5,-5) {$D$}; \node at (-8.5,-5) {$D^{\prime}$}; \node at (-13,3.5) {$c_1$}; \node at (-3.5,3.5) {$c_2$}; \node at (20,3.5) {$\mathbf{a}$}; \node at (20,-3.5) {$\mathbf{b}$}; \end{tikzpicture} \caption{A coherent version of the second Reidemeister move. All other coherent second Reidemeister moves are obtained by rotating or taking the mirror image of the one in the figure.} \label{fig:coherentR22} \end{figure} The map associated to the second Reidemeister move at the level of geometric complexes was defined by Mackaay and Vaz as in Figure \ref{fig:secondRmcoer}. Denote by $\Phi_2$ and $\Psi_2$ the two maps \[\Phi_2 : C_{\omega}^\bullet(D,R)\longrightarrow C_{\omega}^\bullet(D^{\prime},R)\qquad \Psi_2 : C_{\omega}^\bullet(D^{\prime},R)\longrightarrow C_{\omega}^\bullet(D,R)\] associated to the coherent second Reidemeister move. 
\begin{figure}[h] \centering \includegraphics[scale=.35]{ReidIIa.pdf} \caption{Map associated to the coherent second Reidemeister move.}\label{fig:secondRmcoer} \end{figure} \begin{proposition}\label{proposition:secondcoersl3} Let $D$ be an oriented link diagram and let $D^{\prime}$ be the diagram obtained from $D$ by performing a coherent second Reidemeister move. Then, \[\Phi_2 (\beta_{\omega, x_1}(D)) = \beta_{\omega, x_1}(D^{\prime})\quad\text{and}\quad\Psi_2 (\beta_{\omega, x_1}(D^{\prime})) = \beta_{\omega, x_1}(D)\] \end{proposition} \begin{proof} First notice that $\underline{w}_{D}$ can be easily identified with $\underline{w}_{D^{\prime}}$. With this identification we have that $\Psi_{2\: \vert T(\underline{w}_{D^\prime})}$ behaves as the identity map (cf. Figure \ref{fig:secondRmcoer}), and the second part of the statement follows. The map $\Phi_{2\:\vert T(\underline{w}_{D})}$ sends $T(\underline{w}_{D})$ to $T(\underline{w}_{D}) \oplus T(\underline{w}^\prime_{D^{\prime}})$. More precisely, we have \[ \Phi_{2\:\vert \underline{w}_{D}} = Id_{\underline{w}_{D}} \oplus T(F^\prime),\] where $F^\prime$ is the foam drawn in Figure \ref{fig:cobFprime}. To conclude it suffices to prove that: \[T(F^\prime) (\beta_{\omega, x_1}(D)) = 0.\] This is immediate from Lemma \ref{lemma:dbetavanish}, once one notices that the foam $F^\prime$ is the composition of the foam $F$ in Figure \ref{fig:foamlemma} and a foam $G$ (see Figure \ref{fig:cobFprime}). \begin{figure}[h!] 
\centering \begin{tikzpicture}[scale=.4, rotate = 180] \draw[pattern color = red ,pattern = north east lines, thick] (0,0) arc (0:180:2)--(-6,0) -- (-6,4) -- (2,4) -- (2,0) -- cycle; \draw[dashed] (0,0) arc (0:180:2 and 1); \draw[thick] (0,0) arc (0:-180:2 and 1); \draw[dashed] (-6,0) arc (0:90:1 and .5); \draw[thick] (-6,0) arc (0:-90:2 and 1); \draw[dashed] (2,0) arc (180:90:1 and .5); \draw[thick] (2,0) arc (180:270:2 and 1); \draw[dashed] (-6,4) arc (0:90:1 and .5); \draw[thick] (-6,4) arc (0:-90:2 and 1); \draw[dashed] (2,4) arc (180:90:1 and .5); \draw[thick] (2,4) arc (180:270:2 and 1); \draw[pattern color = red ,pattern = north east lines, thick] (2,4) arc (0:180:4 and 2.5) -- cycle; \draw (-8,8) -- (4,8); \draw (-7,9.5) -- (3,9.5); \draw (-8,8) -- (-8,-1); \draw (4,8) -- (4,-1); \draw (-7,9.5) -- (-7,8); \draw (3,9.5) -- (3,8); \draw[dashed] (-7,.5) -- (-7,8); \draw[dashed] (3,.5) -- (3,8); \draw[gray, dashed] (-9,5.5) -- (5,5.5) -- (8,2.5) -- (-12,2.5) -- cycle; \draw[gray, dashed] (-9,10) -- (5,10) -- (8,7) -- (-12,7) -- cycle; \draw[gray, dashed] (-9,1.5) -- (5,1.5) -- (8,-1.5) -- (-12,-1.5) -- cycle; \node at (-12,-2) {$(\mathbb{R}^2\times \{ 0 \})\times \{ 1 \}$}; \node at (-12,2) {$(\mathbb{R}^2\times \{ 0 \})\times \{ \frac{1}{2} \}$}; \node at (-12,6.5) {$(\mathbb{R}^2\times \{ 0 \})\times \{ 0 \}$}; \node at (9,0) {$G$}; \node at (9,5) {$F$}; \end{tikzpicture} \caption{The cobordism $F^\prime$ as a composition of the cobordism $F$ (on the bottom) and the cobordism $G$ (on the top).} \label{fig:cobFprime} \end{figure} \end{proof} \subsubsection*{Third Reidemeister move} Finally, we have to prove the invariance of the $\beta$-chains under braid-like third Reidemeister moves. Consider the version $R_{3}^\circ$ of the third Reidemeister move in Figure \ref{fig:thirdOmegatreb}. \begin{figure}[H] \centering \begin{tikzpicture}[thick] \draw[->] (-.5,.866) -- (.5,-.866); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (-1,0) .. 
controls +(.5,.75) and +(-.5,.75) .. (1,0); \pgfsetlinewidth{0.125*\pgflinewidth} \draw[->] (-1,0) .. controls +(.5,.75) and +(-.5,.75) .. (1,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (-.5,-.866) -- (.5,.866); \pgfsetlinewidth{0.125*\pgflinewidth} \draw[->] (-.5,-.866)-- (.5,.866); \node at (1.5,.5){$R_{3}^\circ$}; \node at (1.5,0){$\leftrightharpoons$}; \begin{scope}[shift = {+(3,0)}] \draw[->] (-.5,.866) -- (.5,-.866); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (-1,0) .. controls +(.5,-.75) and +(-.5,-.75) .. (1,0); \pgfsetlinewidth{0.125*\pgflinewidth} \draw[->] (-1,0) .. controls +(.5,-.75) and +(-.5,-.75) .. (1,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (-.5,-.866) -- (.5,.866); \pgfsetlinewidth{0.125*\pgflinewidth} \draw[->] (-.5,-.866)-- (.5,.866); \end{scope} \node at (0,-1.5){$L_1$}; \node at (3,-1.5){$L_2$}; \end{tikzpicture} \caption{A version of the Third Reidemeister move.} \label{fig:thirdOmegatreb} \end{figure} All the braid-like third Reidemeister moves can be deduced, via a sequence of coherent second Reidemeister moves, from the $R_3^\circ$ move (cf. \cite[Lemma 2.6]{Polyak10}). See Figure \ref{fig:fromRplustoRcirc} for an example. \begin{figure}[H] \centering \begin{tikzpicture}[scale =.5, thick] \draw (1,1) -- (0,1); \draw (1,0) -- (0,0); \draw (1,-1) -- (0,-1); \draw (1,-1) .. controls +(.5,0) and +(-.5,0).. (2,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (1,0) .. controls +(.5,0) and +(-.5,0).. (2,-1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (1,0) .. controls +(.5,0) and +(-.5,0).. (2,-1); \draw (1,1) -- (2,1); \draw (2,1) .. controls +(.5,0) and +(-.5,0).. (3,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \draw (3,-1) -- (2,-1); \draw (3,0) .. controls +(.5,0) and +(-.5,0).. (4,-1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (3,-1) .. controls +(.5,0) and +(-.5,0).. 
(4,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (3,-1) .. controls +(.5,0) and +(-.5,0).. (4,0); \draw (3,1) -- (4,1); \draw (4,1) -- (6,1); \draw (4,0) -- (6,0); \draw (4,-1) -- (6,-1); \begin{scope}[shift = {+(8,0)}] \draw (0,1) -- (1,1); \draw (0,0) -- (1,0); \draw (0,-1) -- (1,-1); \draw (1,-1) .. controls +(.5,0) and +(-.5,0).. (2,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (1,0) .. controls +(.5,0) and +(-.5,0).. (2,-1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (1,0) .. controls +(.5,0) and +(-.5,0).. (2,-1); \draw (1,1) -- (2,1); \draw (2,1) .. controls +(.5,0) and +(-.5,0).. (3,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \draw (3,-1) -- (2,-1); \draw (3,0) .. controls +(.5,0) and +(-.5,0).. (4,-1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (3,-1) .. controls +(.5,0) and +(-.5,0).. (4,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (3,-1) .. controls +(.5,0) and +(-.5,0).. (4,0); \draw (4,1) -- (3,1); \draw (4,1) .. controls +(.5,0) and +(-.5,0).. (5,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (4,0) .. controls +(.5,0) and +(-.5,0).. (5,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (4,0) .. controls +(.5,0) and +(-.5,0).. (5,1); \draw (5,0) .. controls +(.5,0) and +(-.5,0).. (6,1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (5,1) .. controls +(.5,0) and +(-.5,0).. (6,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (5,1) .. controls +(.5,0) and +(-.5,0).. (6,0); \draw (4,-1) -- (6,-1); \end{scope} \begin{scope}[shift = {+(8,-4)}] \draw (5,1) -- (6,1); \draw (5,0) -- (6,0); \draw (5,-1) -- (6,-1); \draw (0,1) -- (1,1); \draw (4,0) .. controls +(.5,0) and +(-.5,0).. (5,1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (4,1) .. controls +(.5,0) and +(-.5,0).. (5,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (4,1) .. controls +(.5,0) and +(-.5,0).. 
(5,0); \draw (5,-1) -- (4,-1); \draw (0,-1) .. controls +(.5,0) and +(-.5,0).. (1,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (0,0) .. controls +(.5,0) and +(-.5,0).. (1,-1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (0,0) .. controls +(.5,0) and +(-.5,0).. (1,-1); \draw (1,0) .. controls +(.5,0) and +(-.5,0).. (2,-1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (1,-1) .. controls +(.5,0) and +(-.5,0).. (2,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (1,-1) .. controls +(.5,0) and +(-.5,0).. (2,0); \draw (1,1) -- (2,1); \draw (2,1) .. controls +(.5,0) and +(-.5,0).. (3,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \draw (3,-1) -- (2,-1); \draw (3,0) .. controls +(.5,0) and +(-.5,0).. (4,-1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (3,-1) .. controls +(.5,0) and +(-.5,0).. (4,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (3,-1) .. controls +(.5,0) and +(-.5,0).. (4,0); \draw (3,1) -- (4,1); \end{scope} \begin{scope}[shift = {+(0,-4)}] \draw (0,1) -- (1,1); \draw (5,1) -- (6,1); \draw (5,0) -- (6,0); \draw (5,-1) -- (6,-1); \draw (4,0) .. controls +(.5,0) and +(-.5,0).. (5,1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (4,1) .. controls +(.5,0) and +(-.5,0).. (5,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (4,1) .. controls +(.5,0) and +(-.5,0).. (5,0); \draw (5,-1) -- (4,-1); \draw (0,0) -- (2,0); \draw (0,-1) --(2,-1); \draw (1,1) -- (2,1); \draw (2,1) .. controls +(.5,0) and +(-.5,0).. (3,0); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \pgfsetlinewidth{.125*\pgflinewidth} \draw (2,0) .. controls +(.5,0) and +(-.5,0).. (3,1); \draw (3,-1) -- (2,-1); \draw (3,0) .. controls +(.5,0) and +(-.5,0).. (4,-1); \pgfsetlinewidth{8*\pgflinewidth} \draw[white] (3,-1) .. controls +(.5,0) and +(-.5,0).. 
(4,0); \pgfsetlinewidth{.125*\pgflinewidth} \draw (3,-1) .. controls +(.5,0) and +(-.5,0).. (4,0); \draw (3,1) -- (4,1); \end{scope} \draw[dashed] (-.25,-5.25) rectangle (6.25,1.25); \node at (3,-2) {$\downharpoonleft\hspace{-2.5pt} \upharpoonright$}; \node at (7,0) {$\leftrightharpoons$}; \node at (7,-4) {$\leftrightharpoons$}; \node at (11,-2) {$\downharpoonleft\hspace{-2.5pt} \upharpoonright$}; \end{tikzpicture} \caption{How to recover another braid-like third Reidemeister move (in the dashed box) with an $R_3^\circ$ and two coherent second Reidemeister moves.} \label{fig:fromRplustoRcirc} \end{figure} Chain maps between the geometric complexes associated to $R_3^{\circ}$ have been described explicitly by Mackaay and Vaz. Each of these maps is defined as the composition of two maps. First, one defines an element $Q \in \mathbf{Kom}(\mathbf{Foam}_{/\ell})$ which is \emph{not} the geometric complex associated to a link. Then, one defines chain maps \[ F_i : \langle D_i \rangle_{\omega} \longrightarrow Q\qquad G_{i}: Q \longrightarrow \langle D_i \rangle_{\omega},\] where $ i\in \{1,2\}$, and $D_1$ and $D_2$ are the diagrams on each side of the $R_3^\circ$ move (Figure \ref{fig:thirdOmegatreb}), such that $F_i$ is the up-to-homotopy inverse of $G_i$. For a description of such maps the reader may refer to Figure \ref{fig:invterzasl3a} (cf. \cite{Mackaayvaz07}). The key point to keep in mind is that for each web resolution $\underline{w}$ of $D_i$ such that the crossings involved in $R_3^\circ$ are resolved as in the oriented resolution, there is the same direct summand in $Q$, and that the restriction of either $G_i$ or $F_i$ to $\underline{w}$ is minus the identity cobordism.
Finally, the maps associated to each direction of $R_3^\circ$ (between the Kuperberg brackets) \[\widetilde{\Psi}_{3}: \langle D_1 \rangle_{\omega} \longrightarrow \langle D_2 \rangle_{\omega} \qquad \widetilde{\Phi}_{3}: \langle D_2 \rangle_{\omega} \longrightarrow \langle D_1 \rangle_{\omega}\] are defined as follows \[ \widetilde{\Psi}_3 = G_2 \circ F_1 \qquad \widetilde{\Phi}_3 = G_1 \circ F_2.\] It is immediate that these two maps, when restricted to the oriented web resolutions, are cylinders. Denote by $\Psi_3$ and $\Phi_3$ the maps between $\mathfrak{sl}_3-$complexes associated to $\widetilde{\Psi}_3$ and $\widetilde{\Phi}_3$, respectively. Then, maps $\Psi_3$ and $\Phi_3$ behave as the identity maps between the summands associated to the oriented web resolutions. So the following proposition is immediate. \begin{proposition} Let $D_1$ and $D_2$ be two oriented link diagrams related by a coherent third Reidemeister move. Then, \[ \Psi_3 (\beta_{\omega,\: x_1}(D_1)) =\beta_{\omega,\: x_1}(D_2)\quad\text{and}\quad \Phi_3 (\beta_{\omega,\: x_1}(D_2)) = \beta_{\omega,\: x_1}(D_1). \]\qed \end{proposition} \begin{proof}[Proof of Theorem \ref{theorem:beta_3}] Theorem \ref{theorem:beta_3} follows immediately by putting together the results concerning the behaviour of $\beta_{\omega,\: x_1}$ under the maps induced by coherent Reidemeister moves and negative first Reidemeister moves, and the computation of the degrees after the definition of $\beta_{\omega,x_{i}}$. \end{proof} We will call $\beta_{\omega,\: x_1}(\overline{B})$ the \emph{$\beta_3$-invariant of $B$ associated to $(\omega, x_1)$}. \begin{rem*} The $\psi_3$-invariant introduced by Wu in \cite{Wu08} is a special case of our construction, more precisely $\psi_3$ is the homology class of the $\beta_3$-invariant associated to $(x^3, 0)$. 
\end{rem*} \section{Auxiliary invariants} In the previous section, we already proved the existence of transverse invariants in the $\mathfrak{sl}_3$-chain complex obtained from a factorisable potential (i.e. a potential admitting at least one root in the base ring). At this point a natural question arises: how much information do these invariants contain? That is, are these invariants effective? Unfortunately, we do not have an explicit answer to this question. This is partly due to the lack of sufficiently simple (non-trivial) examples on which to perform the computations, and also due to the fact that these invariants are, in some sense, difficult to handle. More precisely, they are chains in a diagram-dependent chain complex, and to prove that two braids have distinct invariants one has to prove that there \emph{does not exist} a map (induced by sequences of Reidemeister/Markov moves between the closures of the two braids) sending the $\beta_3$-invariants of one braid to the corresponding $\beta_3$-invariants of the other braid. Proving this is not easy in general. The most natural thing to do in this setting is to prove that the homology classes of the $\beta_3$-invariants associated to the two braids behave differently with respect to a given structure on the $\mathfrak{sl}_3$-homology, which is preserved by the maps induced by sequences of Reidemeister/Markov moves. In this section we shall present some ways to extract information from the homology classes of the $\beta_3$-invariants making use of the $R$-module structure of the $\mathfrak{sl}_3$-homology. \subsection{The vanishing of the homology class} The most basic structure on the $\mathfrak{sl}_3$-homology is that of an $R$-module. The simplest way to make use of this structure is to look at the vanishing of the homology class of the $\beta_{3}$-invariants.
That is, if one braid has a $\beta_3$-invariant whose homology class vanishes while the corresponding invariant of the other braid does not, then the two braids represent distinct transverse links. For the sake of simplicity, let us assume throughout this section that the potential is completely factorisable in $R$ (i.e. all its roots are in $R$), and that $R = \mathbb{F}$ is a field. Under these hypotheses we can prove Proposition \ref{prop:vanishing}. \begin{proof}[Proof of Proposition \ref{prop:vanishing}] Before going into the details of the proof, which is quite long, it is worth describing schematically the idea behind it. We wish to make use of the Mackaay-Vaz classification of the isomorphism types of $H_{\omega}^{\bullet}(L,\mathbb{F})$ depending on the multiplicity of the roots of $\omega$ (see \cite{Mackaayvaz07}). This classification, as well as its generalisation due to Rose and Wedrich in \cite{RoseWedrich16}, is some sort of generalisation of the Chinese remainder theorem (which can be thought of as the case $L= \bigcirc$). In each of the three cases we shall identify the image of the $\beta$-cycles under the isomorphism which describes the isomorphism type of $H^\bullet_{\omega}$. This identification will give us the desired result. If the potential admits distinct roots in $\mathbb{F}$, then the homology classes of the corresponding $\beta_3$-invariants are linearly independent by an argument essentially due to Gornik (cf. \cite{Gornik} and \cite[Section 3]{Mackaayvaz07}). More precisely, in this case the $\beta$-cycles are a rescaling of some of the so-called ``canonical generators''. Since the independence of the homology classes of the ``canonical generators'' was proved in \cite{Gornik} and \cite{Mackaayvaz07}, the claim follows in this case. \begin{rem*}\label{rem:gornillobbmackay} The proofs in \cite{Gornik} and \cite{Mackaayvaz07} only concern the case $\mathbb{F} = \mathbb{C}$.
However, the proof of the independence works over any integral domain $R$, provided that $\omega$ has all roots in $R$. \end{rem*} Let us now turn to the case when the potential $\omega$ has a double root $x_1$ and a simple root $x_2$; that is, $x_{1} = x_3$. In \cite[Theorem 3.18]{Mackaayvaz07} it was proved that \begin{equation} \label{eq:decomposition_of_Homega} H_{\omega}^{i}(L,\mathbb{F}) \cong \bigoplus_{L^\prime \subseteq L} Kh^{-i-lk(L^\prime,\: L\setminus L^\prime)}(\overline{L^\prime},\mathbb{F}), \end{equation} where $L^\prime \subseteq L$ means that $L^\prime$ ranges among the sub-links of $L$, and $Kh$ denotes the original ($\mathfrak{sl}_2$-)Khovanov homology. \begin{rem*} Some remarks are in order: \begin{enumerate}[(a)] \item the empty sub-link and $L$ itself are also counted among the sub-links; \item in \cite{Mackaayvaz07} the isomorphism in \eqref{eq:decomposition_of_Homega} is proved only for $\mathbb{F}=\mathbb{C}$; the proof works without change for any field, provided that $\omega$ has all roots in $\mathbb{F}$; \item the isomorphism above is \emph{not} canonical but depends on a number of choices; \item the isomorphism in \eqref{eq:decomposition_of_Homega} is a slight rephrasing of the statement of \cite[Theorem 3.18]{Mackaayvaz07}. More precisely, the statement of \cite[Theorem 3.18]{Mackaayvaz07} reads \[ U_{a,b,c}^{i}(L,\mathbb{F}) \cong \bigoplus_{L^\prime \subseteq L} KH^{i-lk(L^\prime,\: L\setminus L^\prime)}(L^\prime,\mathbb{F}),\] where $U_{a,b,c}^{i}$ denotes $H_{\omega}^{i}$ and $KH$ denotes a theory which is equivalent to Khovanov homology. To be precise, $KH$ denotes the Khovanov-Rozansky $\mathfrak{sl}_2$-homology with the homological degree reversed, that is $KH^{i,j}(K,\mathbb{F}) = Kh^{-i,j}(\overline{K},\mathbb{F})$. To see this, the reader can compare \eqref{eq:decomposition_of_Homega} and \cite[Theorem 3.18]{Mackaayvaz07} with the analogous (more general) result \cite[Theorem 1]{RoseWedrich16} and subsequent examples (cf.
the conventions in \cite{Khovanov03, KhovanovRozansky05a} and the results in \cite{Mackaayvaz07a}). \end{enumerate} \end{rem*} Items (1) and (2) in the statement shall follow from a careful inspection of the proof of \cite[Theorem 3.18]{Mackaayvaz07}. In order to keep the paper as self-contained as possible, we shall review the crucial steps of the proof. Let $D$ be an oriented diagram representing an oriented link $L$. A \emph{colouring of} $D$ is a function $\phi$ associating to each arc of $D$ (seen as a graph) a root of $\omega$. A colouring $\phi$ is \emph{compatible} with a web resolution $\underline{w}$ if we can colour all the thick edges of $\underline{w}$ in such a way that at each vertex the set of colours is the set of roots of $\omega$. Denote by $\mathbb{W}_{\phi}(D)$ the set of resolutions compatible with a colouring $\phi$. \begin{rem*} Given a web resolution $\underline{w}$ compatible with $\phi$, the colour of each thick edge is uniquely determined. \end{rem*} Mackaay and Vaz proved that one can associate a sub-complex $C_{\omega}^{\bullet}(\phi ,\mathbb{F})$ to each colouring $\phi$. Furthermore, the complex $C_{\omega}^\bullet (D,\mathbb{F})$ decomposes as the direct sum of these complexes. It turns out that the homology of $C_{\omega}^{\bullet}(\phi ,\mathbb{F})$ is trivial unless $\phi$ assigns the same colour to all arcs belonging to the same component (a \emph{proper colouring}). Finally, one proves that for such colourings \[ H_{\omega}^{\bullet}(\phi ,\mathbb{F}) \cong Kh^{-\bullet-lk(L_\phi,\: L\setminus L_\phi)}(\overline{L_{\phi}},\mathbb{F}),\] where $L_\phi$ is the sub-link of $L$ corresponding to the sub-diagram of $D$ obtained by deleting all arcs whose colour with respect to $\phi$ is $x_1$.
Furthermore, to each proper colouring $\phi$ and each $\underline{w}\in\mathbb{W}_{\phi}(D)$, it is possible to associate a cycle $\Sigma_{\phi}(\underline{w})\in T(\underline{w})$ called \emph{canonical generator}\footnote{Even though they are not canonically defined!}. For each proper colouring $\phi$ the set $\{[\Sigma_{\phi}(\underline{w})]\}_{\underline{w}\in\mathbb{W}_{\phi}(D)}$ generates the $\mathbb{F}$-vector space $H_{\omega}^{\bullet}(\phi ,\mathbb{F})$. Denote by $\phi_{i}$ the colouring of $D$ where all arcs are coloured with $x_i$, for $i\in \{1,2\}$. The sub-links associated to $\phi_1$ and $\phi_2$ are the whole link $L$ and the empty sub-link $\emptyset$, respectively. The sub-modules generated by $\beta_{\omega,x_{1}}(D)$ and $\beta_{\omega,x_{2}}(D)$ are contained in $C_{\omega}^\bullet (\phi_1,\mathbb{F})$ and $C_{\omega}^\bullet (\phi_2,\mathbb{F})$, respectively. Thus, the homology classes of $\beta_{\omega,x_{1}}(D)$ and $\beta_{\omega,x_{2}}(D)$ are mapped to $Kh^{0}(\overline{L},\mathbb{F})$ and $Kh^{0}(\emptyset,\mathbb{F}) = \mathbb{F}$, respectively, by the isomorphism in \eqref{eq:decomposition_of_Homega}. \begin{rem*} The above reasoning proves, in particular, that the homology classes of $\beta$-invariants corresponding to different roots (if non-vanishing) are always linearly independent. \end{rem*} By inspecting the argument in \cite{Mackaayvaz07}, it is not difficult to see that the image of the homology class of $\beta_{\omega,x_{2}}(D)$ is non-trivial. It is immediate from the definition of the canonical generators that $\beta_{\omega,x_{2}}(D)$ is $(x_1 - x_2)^r\Sigma_{\phi_2}(\underline{w}_{D})$ for a certain $r$, where $\underline{w}_D$ denotes the oriented web resolution of $D$. Thus, the homology class of $\beta_{\omega,x_{2}}(D)$ generates $H_{\omega}^{\bullet}(\phi_2,\mathbb{F}) \cong \mathbb{F}$, and this concludes the proof of item (1).
\begin{rem*} An alternative proof of the non-vanishing of $[\beta_{\omega,x_{2}}(D)]$ can be found in \cite[Proposition 1.3]{LewarkLobb18}; the cycle ``$\psi(D)$''\footnote{Not to be confused with Plamenevskaya's $\psi$-invariant.} defined in \cite{LewarkLobb18} (for $n=3$) coincides with $\beta_{x^3 - x^2,1}(D)$. Moreover, the same argument used in \cite{LewarkLobb18} to prove that $[\beta_{x^3 - x^2,1}(D)]$ is always non-trivial can be adapted to the case of an arbitrary (degree $3$) potential with a double and a single root (cf. third paragraph in \cite[Section 4]{LewarkLobb18}). \end{rem*} It takes a bit more care to prove that the image of $[\beta_{\omega,x_1}(D)]$ is (a non-zero multiple of) the Plamenevskaya invariant $\psi$. First notice that $\beta_{\omega,x_1}(B)$ is $P\cdot \Sigma_{\phi_1}(\underline{w}_{D})$ where \[P = \prod_{\gamma \in \underline{w}_{D}} (x_2 - x_1)(X_{\gamma} - x_1) \in R_{\phi_{1}}(\underline{w}_{D}) = \frac{\mathbb{F}[X_{\gamma}\: \vert\: \gamma \in \underline{w}_{D}]}{\langle \omega(X_{\gamma})\: \vert\: \gamma \in \underline{w}_{D} \rangle},\] and $X_{\gamma}$ acts on $T(\underline{w}_{D})$ by adding a dot on the regular region bounding $\gamma$. This is implied by the equality \[ (X-x_1)\left[ (x_2 - x_1)^2 - (X-x_1)^2 \right] \equiv (x_2 - x_1)(X-x_1)(X - x_2)\qquad \mathrm{mod} \:\: \omega(X)\] which is easily verified. Then, the chain map in \cite[Equation (17)]{Mackaayvaz07} which defines the isomorphism between $H_{\omega}^{\bullet}(\phi ,\mathbb{F})$ and $ Kh^{-\bullet-lk(L_\phi,\: L\setminus L_\phi)}(\overline{L_{\phi}},\mathbb{F})$ behaves as follows \[\beta_{\omega,x_1}(B) = P\cdot \Sigma_{\phi_1}(\underline{w}_{D}) \mapsto (x_2 - x_1)^{r} X \otimes \cdots \otimes X\quad \text{for some }r\in \mathbb{N}, \] where $X \otimes \cdots \otimes X =\widetilde{\psi}$ belongs to the summand associated to the oriented resolution in the Khovanov chain complex. Since $ [\widetilde{\psi}] = \psi$, item (2) follows.
Finally, assume the potential $\omega$ has a triple root. Consider the endofunctor $\Phi$ of $\mathbf{Foam}$ described in Figure \ref{fig:endofunctor}. $\Phi$ translates the local relations $\ell_1$ associated to the potential $(x-x_1)^3$ into the local relations $\ell_0$ associated to the potential $x^3$. It follows that $\Phi$ induces an equivalence of $\mathbb{F}$-linear categories between $\mathbf{Foam}_{/\ell_1}$ and $\mathbf{Foam}_{/\ell_0}$. This equivalence of categories induces an isomorphism between the two complexes $C_{\omega}^{\bullet}$ and $C_{x^{3}}^{\bullet}$. \begin{figure} \centering \begin{tikzpicture}[scale =.75] \draw[dashed] (0,0) circle (1); \draw[fill] (0,0) circle (.1); \node at (2,0) {$\longmapsto$}; \draw[dashed] (4,0) circle (1); \draw[fill] (4,0) circle (.1); \node at (5.5,0) {$-$}; \node at (6,0) {$x_1$}; \draw[dashed] (7.5,0) circle (1); \end{tikzpicture} \caption{}\label{fig:endofunctor} \end{figure} It is then immediate that this isomorphism sends the $\beta_3$-invariant to the $\psi_3$-invariant. \end{proof} Proposition \ref{prop:vanishing}, together with the results in \cite{TransFromKhType17}, implies the following. \begin{corollary} Let $\mathbb{F}$ be a field of characteristic different from $2$. If $\omega$ has a double root $x_{1}\in \mathbb{F}$, then the vanishing of $[\beta_{\omega,x_{1}}(\overline{B})]$ is a non-effective invariant for all $B$'s representing a knot with fewer than 12 crossings.\qed \end{corollary} \subsection{Divisibility, numerical invariants and Bennequin inequalities}\label{sec:div} There are also other ways to make use of the $R$-module structure of the $\mathfrak{sl}_3$-homology. Let us recall the definition of the $c$-invariants. Let $R$ be an integral domain and $a\in R\setminus\{ 0\}$ a non-unit element.
Given a potential $\omega$ and a root $x_1\in R$, the number \[c_{\omega, x_1}(B;a) = \max\left\{ k\:\vert\: \exists\: [y]\in H^{0}_{\omega}(\overline{B})\:\text{such that}\: a^k[y] = [\beta_{\omega, x_1}(\overline{B})]\right\}\in \mathbb{N}\cup \{ \infty \}\] is a well-defined transverse invariant, where $c_{\omega,x_1}(B;a)=\infty$ if and only if $[\beta_{\omega, x_1}(\overline{B})]$ is trivial or $a$-torsion. These invariants are particularly useful in the case where $\omega$ has only simple roots: in this case the homology classes of the $\beta_3$-invariants are non-trivial and non-torsion (cf. Remark \ref{rem:gornillobbmackay}). Now assume \begin{equation} R = \mathbb{F}[U]\quad \text{and}\quad \omega(x) = (x- U x_1)(x- U x_2)(x- Ux_3), \label{eq:setting} \end{equation} where $x_{i} \in \mathbb{F}$, for all $i$, and $x_{i} \ne x_{j}$, if $i\ne j$. By setting $\deg(U) = 2$ the theory becomes graded, and we also have the following exact sequences of complexes of $\mathbb{F}$-vector spaces \begin{equation} 0 \longrightarrow C_{\omega}^{\bullet,q}(\widehat{B},\mathbb{F}[U]) \overset{U\cdot}{\longrightarrow} C_{\omega}^{\bullet,q}(\widehat{B},\mathbb{F}[U]) \overset{\pi_0}{\longrightarrow} C_{x^{3}}^{\bullet,q}(\widehat{B},\mathbb{F}) \to 0 \label{eq:exactsequence1} \end{equation} for each $q\in \mathbb{Z}$, and \begin{equation} 0 \longrightarrow C_{\omega}^{\bullet}(\widehat{B},\mathbb{F}[U]) \overset{(U-1)\cdot}{\longrightarrow} C_{\omega}^{\bullet}(\widehat{B},\mathbb{F}[U]) \overset{\pi_1}{\longrightarrow} C_{\omega_{\vert U = 1}}^{\bullet}(\widehat{B},\mathbb{F}) \to 0. \label{eq:exactsequence2} \end{equation} Moreover, it is immediate that \[\pi_{0}(\beta_{\omega,x_{i}}(\overline{B})) = \beta_{x^{3},0}(\overline{B})\qquad\qquad \pi_{1}(\beta_{\omega,x_{i}}(\overline{B})) = \beta_{\omega_{\vert U = 1},x_{i}}(\overline{B}).\] The following proposition follows immediately from \eqref{eq:exactsequence1}.
\begin{proposition} Let $R$, $\omega$ and $x_{i}$ be as above, then the following are equivalent \begin{enumerate} \item $\psi_{3}(B) \ne 0$; \item $c_{\omega, Ux_{i}}(B;U) = 0$ for some choice of $\omega$ and $x_{i}$; \item $c_{\omega, Ux_{i}}(B;U) = 0$ for all choices of $\omega$ and $x_{i}$; \end{enumerate} for each braid $B$.\qed \end{proposition} Now let us recall a few facts about concordance invariants defined from $\mathfrak{sl}_{3}$-link homologies. These invariants were defined in the more general setting of the deformations of $\mathfrak{sl}_{n}$ Khovanov-Rozansky (KR) homologies. However, the statements here shall be restricted to the case $n=3$. It is worth noticing that the deformation of $\mathfrak{sl}_{3}$ KR homology corresponding to the potential $\omega$ coincides with the theory defined in this paper (cf. \cite{Mackaayvaz07a}). \begin{theorem}[Theorem 1.1, \cite{LewarkLobb17}]\label{thm:jis} Let $K$ be a knot. Given a potential $\omega\in\mathbb{F}[x]$ with distinct roots $x_1$, $x_2$ and $x_3\in \mathbb{F}$, we have the following isomorphism of bi-graded vector spaces: \[Gr^{*}H_{\omega}^{\bullet}(K,\mathbb{F}) \simeq \bigoplus_{i=1}^{3} \mathbb{F}(0,j_{i})\qquad\text{with } j_1 \leq j_2 \leq j_3,\] where $Gr^*H_{\omega}^\bullet$ is the associated graded object of $H_{\omega}^{\bullet}(K,\mathbb{F})$ (endowed with the quantum filtration), and $\mathbb{F}(h,q)$ is a copy of $\mathbb{F}$ generated in bi-degree $(h,q)$. Furthermore, the $j_{i}$'s are concordance invariants which provide lower bounds on the value of the slice genus.\qed \end{theorem} Lewark and Lobb in \cite{LewarkLobb17} defined two other concordance invariants. The first one is just a rescaled average of the $j_i$'s, that is \[ s_{\omega}(K) = \frac{j_1(K) + j_2(K) +j_3(K)}{12}\in \frac{1}{4}\mathbb{Z},\] which is a concordance quasi-homomorphism. The second invariant depends on the choice of a root $x_i$ and is denoted by $\tilde{s}_{\omega,x_{i}}(K)$.
This is a slice-torus knot invariant (cf. \cite{LivingstonNaik, Lewark14}), and in particular a concordance homomorphism. Since its definition involves constructions which we do not wish to introduce, we refer the reader to \cite{LewarkLobb17} for it. All these invariants are somehow related, as stated in the following proposition. \begin{proposition}[Proposition 2.12, \cite{LewarkLobb17}] Let $K$ be a knot. Given a potential $\omega\in\mathbb{F}[x]$ with distinct roots $x_1$, $x_2$ and $x_3\in \mathbb{F}$, order the roots in such a way that \[ \tilde{s}_{\omega,x_1}(K) \leq\tilde{s}_{\omega,x_2}(K) \leq\tilde{s}_{\omega,x_3}(K). \] Then, the following inequality holds \[ \vert j_{i} - 6 \tilde{s}_{\omega,x_i}(K)\vert \leq 2\] for each $i \in \{ 1,2,3\}$.\qed \end{proposition} To conclude this digression, we point out that $-j_{i}(\overline{K}) = j_{i}(K)$ (\cite[Proposition 2.13]{LewarkLobb17}). It follows that $s_{\omega}(K) = - s_{\omega}(\overline{K})$. Now we are ready to prove Proposition \ref{prop:Bennequin-type}. \begin{proof}[Proof of Proposition \ref{prop:Bennequin-type}] It follows directly from the definition of $s_{\omega_{i}}$ that Equation \eqref{eq:inequality1} implies Equation \eqref{eq:inequality2}. Furthermore, from \cite[Proposition 2.12]{LewarkLobb17} it is immediate that Equation \eqref{eq:inequality1} implies Equation \eqref{eq:inequality3}. So it is sufficient to prove \eqref{eq:inequality1}. We borrow the notation from the proof of Proposition \ref{prop:vanishing}. Denote by $[y]$ a homogeneous element of $ H_{\omega}^{0}(K,\mathbb{F})$ such that \[ U^{c_{\omega,x_{i}}(B,U)}[y] = [\beta_{\omega,x_{i}}(B)].\] It is immediate that $\pi_{1}([y])= \pi([\beta_{\omega,Ux_{i}}(B)])$, and the latter is a non-trivial multiple of the homology class of the canonical generator $\Sigma_{\phi_{i}}(\underline{w}_{D})$.
Since the quantum filtration is increasing, it follows that \[ -2sl(B) - 2 c_{\omega,Ux_{i}}(B,U) = qdeg([y]) \geq Fdeg([\Sigma_{\phi_{i}}(\underline{w}_{D})]) =: q_{i}(\overline{K}).\] It is easy to prove that the maximum filtered degree of the elements of a basis of a vector space equipped with an increasing filtration does not depend on the chosen basis. Thus, we obtain that \[-j_{1}(K) = j_{3}(\overline{K}) = \max_{i}\:\{ q_{i}(\overline{K}) \}\leq -2sl(B) - 2 c_{\omega,Ux_{i}}(B,U).\] Using the isomorphism induced by the endofunctor of $\mathbf{Foam}$ obtained from the one described in Figure \ref{fig:endofunctor} by replacing $x_1$ with $x_{i} - x_{j}$, it follows that $q_{i} = q_{j}$; and this concludes the proof. \end{proof} \begin{corollary} Let $K$ be a knot and $B$ a braid representing $K$. If either $j_{1}(K) = sl(B)$, $s_{\omega_{1}}(K) = sl(B)$ or $3 \tilde{s}_{\omega,x_{i}}(K) = sl(B)-1$, then $\psi_{3} (B) \neq 0$. In particular, if $B$ is a quasi-positive braid, then $\psi_{3} (B) \neq 0$. \end{corollary} \begin{proof} The only thing to prove is that quasi-positive braids are such that $3 \tilde{s}_{\omega,x_{i}}(K) = sl(B)-1$. This is true because $3\tilde{s}_{\omega,x_{i}}(K)$ is a slice-torus invariant, and thus its value on quasi-positive braids is exactly $sl(B)-1$ (see \cite{Lewark14}). \end{proof}
\section*{Unscreened potential} Let the dielectric constant of the environment be $\varepsilon_1$, that of the film be $\varepsilon_2$, and their ratio be denoted as $\varepsilon = \varepsilon_2/\varepsilon_1$. The $z$-axis is normal to the plane of the film (see Fig.~\ref{fig1}), and we consider the film to be infinite in the $xy$-plane. \begin{figure}[ht] \includegraphics[width=4 cm]{fig1.pdf} \caption{\label{fig1} [\textit{Schematic illustration of the system geometry.}] } \end{figure} The potential $\varphi(\vec{r},\vec{r}\,')$ at the position $\vec{r}$, created by the point charge located at $\vec{r}\,'$, satisfies the following equations in the regions 1, 2, and 3: \begin{align} \label{eq1} \nabla^2_{\vec{r}}\,\varphi_{1}(\vec{r},\vec{r}\,')&=0,\nonumber\\ \nabla^2_{\vec{r}}\,\varphi_{2}(\vec{r},\vec{r}\,')&=-\frac{4\pi e}{\varepsilon_2}\delta(\vec{r}-\vec{r}\,'),\\ \nabla^2_{\vec{r}}\,\varphi_{3}(\vec{r},\vec{r}\,')&=0\nonumber \end{align} with the boundary conditions \begin{align} \label{eq2} \text{at}~z=0,~\varphi_1=\varphi_2,~\frac{\partial \varphi_1}{\partial z}=\varepsilon\frac{\partial \varphi_2}{\partial z},\nonumber\\ \text{at}~z=L,~\varphi_2=\varphi_3,~\varepsilon\frac{\partial \varphi_2}{\partial z}=\frac{\partial \varphi_3}{\partial z}, \end{align} and the requirement of boundedness at infinity. Due to the homogeneity and isotropy of the system in the $xy$-plane of the film, $\varphi(\vec{r},\vec{r}\,')$ depends on $\left|\vec{\rho}-\vec{\rho}\,'\right|$, where $\vec{\rho}$ is the in-plane position vector. This allows us, without loss of generality, to set $\vec{\rho}\,'=0$, i.e., to place the point charge onto the $z$-axis, and to expand the potential $\varphi(\vec{r},\vec{r}\,')$ into a two-dimensional Fourier integral: \begin{align} \label{eq3} \varphi(\vec{r},\vec{r}\,') = \int \frac{d\vec{k}}{(2\pi)^2} e^{i \vec{k}\vec{\rho}} \varphi(k,z,z'). \end{align} The Fourier components $\varphi(k, z, z\,')$ depend only on the absolute value $k=|\vec{k}|$.
Inserting \eqref{eq3} into \eqref{eq1} and \eqref{eq2}, we obtain the following equations for the Fourier components $\varphi(k, z, z\,')$ of the potential: \begin{align} &\frac{\partial^2\varphi_1(k,z,z')}{\partial z^2} - k^2 \varphi_1(k,z,z')=0, \nonumber\\ &\frac{\partial^2\varphi_2(k,z,z')}{\partial z^2} - k^2 \varphi_2(k,z,z')= - \frac{4\pi e}{\varepsilon_2} \delta(z-z'), \label{eq4}\\ &\frac{\partial^2\varphi_3(k,z,z')}{\partial z^2} - k^2 \varphi_3(k,z,z')=0,\nonumber \end{align} with the boundary conditions~\eqref{eq2}. In the region of interest inside the film [\textit{region 2, see Fig.\,\ref{fig1}}], the solution of Eqs.~\eqref{eq4} has the following form: \begin{multline} \label{eq5} \varphi_2(k,z,z') = \frac{2\pi e}{\varepsilon_2 k} \left\{ e^{-k|z-z'|} \right. + \\ \left. \frac{2\delta}{e^{2kL} - \delta^2} [\delta \cosh{k(z-z')} + e^{kL} \cosh{k(z+z'-L)}] \right\}, \end{multline} where \begin{align} \delta=\frac{\varepsilon-1}{\varepsilon+1}. \nonumber \end{align} In order to obtain the asymptotic behavior of the potential $\varphi_2(\vec{r},\vec{r}\,')$ for distances $\rho \gg L$, it is sufficient to know its Fourier component $\varphi_2(k, z, z\,')$ for $kL \ll 1$. Assuming $kL \ll 1$ in Eq.\,\eqref{eq5}, we obtain an expression independent of $z$ and $z\,'$: \begin{equation} \label{eq6} \varphi(k) = \frac{2\pi e}{\varepsilon_2 k} \left[\frac{e^{kL}+\delta}{e^{kL}-\delta}\right]. \end{equation} The exponentials in \eqref{eq6} are not expanded into power series, since the absolute value of $\delta$ can be close to unity, in which case only one of the quantities $(1-\delta)$ or $(1+\delta)$ can be comparable to $kL$. In this way, for distances larger than the film thickness, the potential of the point charge does not depend on the coordinates $z$ and $z\,'$ and becomes ``two-dimensional''. Let us obtain the expressions for it in three limiting cases.
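Two properties of Eq.~\eqref{eq5} used here can be verified numerically: its $z$-derivative jumps by $-4\pi e/\varepsilon_2$ across $z=z'$ (so it is indeed a solution of Eqs.~\eqref{eq4}), and for $kL\ll 1$ it collapses to Eq.~\eqref{eq6}. The following is a minimal sketch of such a check, our own and not part of the original paper, in units $e=\varepsilon_2=1$ and with arbitrary test parameters:

```python
import math

def phi2(k, z, zp, L, delta, e=1.0, eps2=1.0):
    """Fourier component of the potential inside the film, Eq. (5)."""
    pref = 2 * math.pi * e / (eps2 * k)
    smooth = (2 * delta / (math.exp(2 * k * L) - delta**2)) * (
        delta * math.cosh(k * (z - zp))
        + math.exp(k * L) * math.cosh(k * (z + zp - L)))
    return pref * (math.exp(-k * abs(z - zp)) + smooth)

def phi_2d(k, L, delta, e=1.0, eps2=1.0):
    """The 'two-dimensional' limit of Eq. (6), valid for kL << 1."""
    ekL = math.exp(k * L)
    return (2 * math.pi * e / (eps2 * k)) * (ekL + delta) / (ekL - delta)

L, delta = 1.0, 0.9                  # delta = (eps - 1)/(eps + 1), close to 1

# (a) Green's-function property: d(phi2)/dz jumps by -4*pi*e/eps2 at z = z',
#     matching the source term of Eqs. (4); one-sided finite differences.
k, zp, h = 1.3, 0.4, 1e-5
d_right = (phi2(k, zp + 2 * h, zp, L, delta) - phi2(k, zp + h, zp, L, delta)) / h
d_left = (phi2(k, zp - h, zp, L, delta) - phi2(k, zp - 2 * h, zp, L, delta)) / h
assert abs((d_right - d_left) + 4 * math.pi) < 1e-2

# (b) for kL << 1 the full Eq. (5) loses its z, z' dependence and reduces
#     to Eq. (6), uniformly in the positions inside the film.
k_small = 1e-3
for z, zp in [(0.2, 0.8), (0.5, 0.5), (0.1, 0.95)]:
    assert abs(phi2(k_small, z, zp, L, delta) - phi_2d(k_small, L, delta)) \
        < 1e-3 * phi_2d(k_small, L, delta)
```

The tolerances only reflect the finite-difference step and the residual $O(kL)$ corrections; shrinking $h$ and $k$ tightens the agreement.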
\textbf{1.} $\varepsilon \gg 1$, i.e., the optical density\footnote{Note added in translation: We keep the authors' term ``optical density'' although the static screening is discussed throughout the paper.} of the film is larger than that of its surroundings. Then, $\delta=1-{2}/{\varepsilon}$ and the expression \eqref{eq6} assumes the form \begin{equation} \label{eq7} \varphi(k) = \frac{4\pi e}{\varepsilon_2 Lk\left(k + \frac{2}{\varepsilon L}\right)}. \end{equation} Inserting \eqref{eq7} into \eqref{eq3}, we find for $\rho \gg L$ \begin{align} \label{eq8} \varphi(\rho)&=\frac{2e}{\varepsilon_2 L}\int\limits_{0}^{\infty}{{dk}\frac{J_0(k\rho)}{k+\frac{2}{\varepsilon L}}}\nonumber\\ &=\frac{\pi e}{\varepsilon_2 L}\left[\mathbf H_0\left(\frac{2\rho}{\varepsilon L}\right)-N_0\left(\frac{2\rho}{\varepsilon L}\right)\right], \end{align} where $\mathbf H_0(x)$, $N_0(x)$, and $J_0(x)$ are the zero-order Struve, Neumann, and Bessel functions, respectively. At very large distances $\rho \gg \varepsilon L/2$, the expression \eqref{eq8} takes the form \begin{align} \label{eq9} \varphi(\rho)=\frac{e}{\varepsilon_1 \rho}, \end{align} i.e., the potential very far away from the point charge is as if the film did not exist at all. It is the Coulomb potential in a homogeneous dielectric medium with the permittivity $\varepsilon_1$. \textbf{2.} $\varepsilon \approx 1$, i.e., the optical densities of the film and the environment are roughly equal. In this case, $|\delta|\ll 1$ and \begin{align} \label{eq10} \varphi(k)=\frac{2\pi e}{\varepsilon_1 k}=\frac{2\pi e}{\varepsilon_2 k}.
\end{align} Since $\varepsilon_1\approx\varepsilon_2$, the system becomes nearly homogeneous, and by inserting \eqref{eq10} into \eqref{eq3}, for distances $\rho\gg L$, we obtain the Coulomb potential: \begin{align} \varphi(\rho)=\frac{e}{\varepsilon_2}\int\limits_{0}^{\infty}{J_0(k\rho)dk}=\frac{e}{\varepsilon_2 \rho}=\frac{e}{\varepsilon_1 \rho}.\nonumber \end{align} \textbf{3.} $\varepsilon \ll 1$, i.e., the optical density of the film is much smaller than that of the surroundings. Here, $\delta=-(1-2\varepsilon)$ and \begin{align} \varphi(k)=\frac{2\pi e \varepsilon}{\varepsilon_2 k}=\frac{2\pi e}{\varepsilon_1 k},\nonumber \end{align} i.e., also in this case, already for $\rho \gg L$, the potential is as if the film did not exist: \begin{align} \varphi(\rho)=\frac{e}{\varepsilon_1 \rho}.\nonumber \end{align} \section*{Screened potential} Let the film contain free charges (electrons) distributed with an average surface density $N$ and immobile charges of the opposite sign with the same average surface density. Overall, the film is charge neutral and macroscopically homogeneous in the $xy$-plane. The screened potential $\tilde{\varphi}(\vec{r},\vec{r}\,')$ of a point charge at the position $\vec{r}\,'$ in the film (as before, we set $\vec{\rho}\,'=0$) satisfies, in region 2, the equation \begin{align} \label{eq11} \nabla_{\vec{r}}^2 \,\tilde \varphi_2(\vec{r}, \vec{r}\,') = -\frac{4\pi e}{\varepsilon_2} \left[ \delta(\vec{r} - \vec{r}\,') - \Delta n(\vec{r},\vec{r}\,')\right], \end{align} where $\Delta n(\vec{r},\vec{r}\,')$ is the change of the free electron concentration at $\vec{r}$ under the influence of the field of a point charge located at $\vec{r}\,'$.

Under thermal equilibrium conditions, the electron concentration is a function of the electrochemical potential \begin{align} \mu=\mu_0+e\tilde{\varphi},\nonumber \end{align} where $\mu_0$ is the chemical potential of the electrons in the absence of a field.
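Both limiting statements above, the large-distance asymptotics of Eq.~\eqref{eq8} in case 1 and the Hankel-transform identity behind the Coulomb form in case 2, lend themselves to a quick numerical check. The sketch below is ours, not part of the paper, and assumes the mpmath library is available; the arguments are arbitrary test values:

```python
from mpmath import mp, besselj, bessely, struveh, quadosc, inf, pi

mp.dps = 15

# Case 1: at large argument x = 2*rho/(eps*L) the combination entering Eq. (8)
# behaves as H0(x) - Y0(x) -> 2/(pi*x), which turns Eq. (8) into the bare
# Coulomb potential phi = e/(eps1*rho) of Eq. (9).
x = mp.mpf(200)                      # plays the role of 2*rho/(eps*L) >> 1
ratio = (struveh(0, x) - bessely(0, x)) / (2 / (pi * x))
assert abs(ratio - 1) < 1e-3

# Case 2: the Hankel-transform identity int_0^oo J0(k*rho) dk = 1/rho behind
# the Coulomb form phi = e/(eps2*rho); quadosc handles the oscillatory tail,
# the n-th zero of J0(k*rho) lying near (n - 1/4)*pi/rho.
rho = mp.mpf('2.5')                  # arbitrary test distance
val = quadosc(lambda k: besselj(0, k * rho), [0, inf],
              zeros=lambda n: (n - 0.25) * pi / rho)
assert abs(val - 1 / rho) < 1e-8
```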
For sufficiently large distances, when $e\tilde{\varphi}\ll\mu_0$, it is possible to expand $\Delta n$ into a series in $\tilde{\varphi}$ and keep only the linear term \begin{align} \label{eq12} \Delta n = \frac{\partial n}{\partial \mu_0}e\tilde{\varphi}. \end{align} Inserting \eqref{eq12} into \eqref{eq11} and expanding $\tilde{\varphi}(\rho, z, z\,')$ into a Fourier integral as exemplified in \eqref{eq3}, we obtain the equations for the Fourier components \begin{widetext} \begin{align} &\frac{\partial^2\tilde\varphi_1(k,z,z')}{\partial z^2} - k^2 \tilde\varphi_1(k,z,z')=0, \nonumber\\ &\frac{\partial^2\tilde\varphi_2(k,z,z')}{\partial z^2} - \left[ k^2 + \frac{4\pi e^2}{\varepsilon_2} \frac{\partial n}{\partial \mu_0}\right]\tilde \varphi_2(k,z,z')= - \frac{4\pi e}{\varepsilon_2} \delta(z-z'), \label{eq13}\\ &\frac{\partial^2\tilde\varphi_3(k,z,z')}{\partial z^2} - k^2 \tilde\varphi_3(k,z,z')=0,\nonumber \end{align} \end{widetext} with the boundary conditions for $\tilde{\varphi}$ the same as those in \eqref{eq2} for $\varphi$. For $\tilde{\varphi}_2$, we have obtained a linear equation with a variable coefficient, since ${\partial n}/{\partial \mu_0}$ depends on $z$. We note that the characteristic distance over which ${\partial n}/{\partial \mu_0}$ changes significantly equals $L$. If the expression ${\partial n}/{\partial \mu_0}$ in the equation for $\tilde{\varphi}_2$ is replaced by its average value \begin{align} \left<\frac{\partial n}{\partial \mu_0}\right>=\frac{1}{L} \int_0^L \frac{\partial n}{\partial \mu_0} dz = \frac{1}{L} \frac{\partial N}{\partial \mu_0},\nonumber \end{align} the equation assumes the form \begin{align} \label{eq14} \frac{\partial^2\tilde\varphi_2(k,z,z')}{\partial z^2} - \tilde k^2 \tilde \varphi_2(k,z,z')= - \frac{4\pi e}{\varepsilon_2} \delta(z-z'), \end{align} where \begin{align} \label{eq15} \tilde{k}=\sqrt{k^2+\frac{4 \pi e^2}{\varepsilon_2 L}\frac{\partial N}{\partial \mu_0}}.
\end{align} The characteristic distance over which the solution of Eq.~\eqref{eq14} changes significantly is $\tilde{k}^{-1}$. If it is large compared to $L$, i.e., $\tilde{k} L\ll 1$, then $\tilde{\varphi}_2(k, z, z\,')$ is approximately equal to the solution of Eq.~\eqref{eq14} with the averaged coefficient. Replacing the second equation in the set~\eqref{eq13} with Eq.~\eqref{eq14} and solving the resulting set of equations, we obtain for $\tilde{\varphi}_2(k, z, z\,')$ an expression formally similar to \eqref{eq5}: \begin{multline} \label{eq16} \tilde \varphi_2(k,z,z') = \frac{2\pi e}{\varepsilon_2 \tilde k} \left\{ e^{-\tilde k|z-z'|} \right. + \\ \left. \frac{2\tilde \delta}{e^{2\tilde kL} - \tilde \delta^2} [\tilde \delta \cosh{\tilde k(z-z')} + e^{\tilde kL} \cosh{\tilde k(z+z'-L)}] \right\}, \end{multline} where $\tilde{k}$ is defined as in \eqref{eq15} and \begin{align} \tilde{\delta}=\frac{\varepsilon\tilde{k}-k}{\varepsilon\tilde{k}+k}. \nonumber \end{align} Since $\tilde{k} L\ll 1$, we obtain from \eqref{eq16}, as in Eq.~\eqref{eq6}, a ``two-dimensional'' expression \begin{align} \label{eq17} \tilde \varphi(k) = \frac{2\pi e}{\varepsilon_2 \tilde k} \left[\frac{e^{\tilde kL}+\tilde \delta}{e^{\tilde kL}-\tilde \delta}\right]. \end{align} The inequality $\tilde{k} L\ll 1$ can be satisfied only if the two inequalities \begin{align} kL\ll 1 \nonumber \end{align} and \begin{align} \label{eq18} \frac{4 \pi e^2{L}}{\varepsilon_2}\frac{\partial N}{\partial \mu_0} \ll 1 \end{align} are fulfilled simultaneously. The first inequality means that the expression \eqref{eq17} describes the behavior of the potential $\tilde{\varphi}(\rho)$ at large distances $\rho \gg L$. Let us elucidate the physical meaning of the inequality \eqref{eq18}.
For the approximations assumed at the beginning of the paper [\textit{it follows then that}] \begin{align} \mu_0=E_1+k_B T\,\ln\left(e^{\frac{\pi N \hbar^2}{m k_{B} T}}-1\right),\nonumber \end{align} where $E_1$ is the energy of the first quantized state [(\textit{i.e., first subband})] and \begin{align} \label{eq19} \frac{\partial N}{\partial \mu_0}=\frac{m}{\pi \hbar^2}\left(1-e^{-\frac{\pi N \hbar^2}{m k_{B} T}}\right). \end{align} Inserting \eqref{eq19} into \eqref{eq18}, we obtain \begin{align} \label{eq20} \frac{4L}{a}\left(1-e^{-\frac{\pi N \hbar^2}{m k_{B} T}}\right)\ll 1, \end{align} where $a=\varepsilon_2\hbar^2/me^2$ is the radius of the first Bohr orbit in the dielectric environment with permittivity $\varepsilon_2$. For the [\textit{two}] limiting cases of a degenerate Fermi distribution of electrons in the first subband, ${\pi N \hbar^2}/(m k_{B} T)\gg 1$, and of the non-degenerate Boltzmann distribution, ${\pi N \hbar^2}/(m k_{B} T)\ll 1$, the inequality~\eqref{eq20} assumes the form \begin{align} \label{eq21} \frac{4L}{a}\ll 1,~\frac{4L}{a}{\frac{\pi N \hbar^2}{m k_{B} T}}\ll 1. \end{align} To understand the physical meaning of the requirement \eqref{eq20}, let us recall the expression for the Debye radius $r_0$ in a bulk crystal: \begin{equation} \label{eq22} r_0^{-1} = \begin{cases} 2\left(\dfrac{3}{\pi}\right)^{1/6} \sqrt{\dfrac{me^2}{\varepsilon_2 \hbar^2}n^{1/3}} ~~~\mbox{(Fermi case)}\\ \sqrt{\dfrac{4\pi n e^2}{\varepsilon_2 k_B T}}~~~\mbox{(Boltzmann case)}. \end{cases} \end{equation} Let us [\textit{further}] consider the ratio $\alpha={L}/{r_0}$, taking into account that $n={N}/{L}$.
\begin{equation} \label{eq23} \alpha = \frac{L}{r_0} = \begin{cases} 2\left(\dfrac{3}{\pi}\right)^{1/6} ( NL^2)^{1/6} \left(\dfrac{4L}{a}\right)^{1/2}~~~\mbox{(Fermi case)}\\ \sqrt{\dfrac{4\pi N \hbar^2}{m k_B T}}\left(\dfrac{4L}{a}\right)^{1/2}~~~\mbox{(Boltzmann case)} \end{cases} \end{equation} Comparing the expressions \eqref{eq23} for the limiting cases with the left-hand sides of the inequalities \eqref{eq21}, we see that the requirement \eqref{eq20}, which provides the ``two-dimensionality'' of the screening potential at large distances, coincides with the requirement $\alpha \ll 1$. This means that the film thickness must be smaller than the Debye screening radius in the [\textit{corresponding}] bulk crystal. As in the previous section, let us obtain the expressions for the two-dimensional screened potential $\tilde{\varphi}(\rho)$ for the three limiting cases of the relation between $\varepsilon_1$ and $\varepsilon_2$. We will consider the case of degeneracy, when \begin{align} \label{eq24} \tilde{k}=\sqrt{k^2+\frac{4}{aL}}, \end{align} keeping in mind that, according to \eqref{eq21}, it is easy to proceed to the non-degenerate case by replacing $a$ with $a{m k_{B} T}/{(\pi N \hbar^2)}$. \textbf{1.} $\varepsilon \gg 1$. In this case, $1+\tilde{\delta}=2,~1-\tilde{\delta}={2k}/{\varepsilon\tilde{k}}$ and we obtain from \eqref{eq17} (the same result was obtained using the diagram technique in N.S. Rytova, Proc. USSR Academy of Sciences {\bf 163}, 1118 (1965)): \begin{align} \label{eq25} \tilde{\varphi}(k)=\frac{4\pi e}{\varepsilon_2 L\left[k\left(k+\frac{2}{\varepsilon L}\right)+\frac{4}{aL}\right]}. \end{align} If, in this case, the following also holds: \[ \frac{1}{\varepsilon^2}\ll\frac{4L}{a}, \] then the linear term can be neglected in the denominator of Eq.
~\eqref{eq25}, [\textit{with the result}] \begin{align} \tilde{\varphi}(k)=\frac{4\pi e}{\varepsilon_2 L\left(k^2+\frac{4}{aL}\right)}.\nonumber \end{align} Then, for $\rho\gg L$ we obtain \begin{align} \label{eq26} \tilde\varphi(\rho) = \frac{2e}{{\varepsilon_2} L} K_0 \left( \frac{2\rho}{\sqrt{La}}\right), \end{align} where $K_0(x)$ is the zero-order Macdonald function [\textit{modified Bessel function of the second kind}]. For distances $\rho > \rho_0=\frac{1}{2}\sqrt{L a}$, Eq.~\eqref{eq26} assumes the form \[ \tilde\varphi(\rho) = \frac{2e}{{\varepsilon_2} L} {\sqrt{\frac{\pi\rho_0}{2 \rho}}} e^{-\rho/\rho_0}, \] from which it follows that the screened potential is exponentially small beyond the circle with radius $\rho_0$, where \[ \rho_0 = \begin{cases} \dfrac{1}{2} \sqrt{La}~~~(\mbox{Fermi case})\\ \dfrac{1}{2} \sqrt{La \dfrac{mk_B T}{\pi N\hbar^2}} = \sqrt{\dfrac{L\varepsilon_2 k_B T}{4\pi N e^2}}~~~(\mbox{Boltzmann case}). \end{cases} \] In the case of the Boltzmann distribution, the two-dimensional screening radius $\rho_0$ has the same structure as the three-dimensional one (cf. Eq.~\eqref{eq22}). For the Fermi distribution, $\rho_0$ does not depend on the concentration, as a fundamental consequence of [\textit{the Fermi}] statistics in two dimensions. \textbf{2.} $\varepsilon \approx 1$. Inserting $1+\tilde{\delta}={2\tilde{k}}/({k+\tilde{k}})$ and $1-\tilde{\delta}={2{k}}/({k+\tilde{k}})$ into \eqref{eq17}, we obtain \begin{align} \label{eq27} \tilde \varphi(k) = \frac{2\pi e}{\varepsilon_2 (k+2/a)} \end{align} and, for $\rho \gg L$, it follows that \begin{align} \label{eq28} \tilde \varphi(\rho) = \frac{e}{\varepsilon_2} \left\{ \frac{1}{\rho} -\frac{\pi}{a} \left[ \mathbf H_0\left(\frac{2\rho}{a}\right) - N_0 \left(\frac{2\rho}{a}\right)\right]\right\}. \end{align} [\textit{Here $\mathbf H_0$ and $N_0$ denote the Struve and Neumann functions, respectively.}] For large distances where $\rho \gg a/2$, Eq.
\eqref{eq28} assumes the form \begin{equation} \label{eq29} \tilde \varphi(\rho) = \begin{cases} \dfrac{e a^2}{4\varepsilon_2\rho^3}~~~(\mbox{Fermi case})\\ \dfrac{e a^2}{4\varepsilon_2\rho^3} \left(\dfrac{mk_B T}{\pi N\hbar^2}\right)^2~~~(\mbox{Boltzmann case}). \end{cases} \end{equation} In this case, the screening radius does not exist. At large distances, the potential decays as the inverse cube of the distance.\footnote{Note added in translation: This result has been derived about the same time independently in F. Stern, Phys. Rev. Lett. {\bf 18}, 546 (1967); F. Stern, W. E. Howard, Phys. Rev. {\bf 163}, 816 (1967).} \textbf{3.} $\varepsilon \ll 1$. Inserting $1+\tilde{\delta}={2\varepsilon\tilde{k}}/({\varepsilon\tilde{k}+k})$ and $1-\tilde{\delta}={2{k}}/({\varepsilon\tilde{k}+k})$ into \eqref{eq17} and excluding the $k$-independent term (since it does not contribute to the potential at large distances), we obtain \begin{align} \label{eq30} \tilde \varphi(k) = \frac{2\pi e \varepsilon}{\varepsilon_2\left(k+\frac{2\varepsilon}{a}\right)} = \frac{2\pi e}{\varepsilon_1\left(k+\frac{2me^2}{\varepsilon_1 \hbar^2}\right)}. \end{align} The result is equivalent to the expression \eqref{eq27} for case \textbf{2}, with $\varepsilon_2$ replaced by $\varepsilon_1$. Equations~\eqref{eq28} and \eqref{eq29} also hold under the same substitution. The expressions for the screened potential in cases \textbf{2} and \textbf{3} depend neither on the film thickness $L$ nor on its permittivity $\varepsilon_2$ (in case \textbf{2}, $\varepsilon_2=\varepsilon_1$) and can thus be obtained by passing to the limit of a neutral conducting plane (see below). The expression \eqref{eq26}, applicable in case \textbf{1}, contains the finite film thickness $L$ and its permittivity $\varepsilon_2$. In this case, there is fundamentally no passage to the limit of the conducting plane from a film of finite thickness.
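The exponential smallness of the screened potential beyond $\rho_0$ in case \textbf{1} rests on the standard large-argument asymptotics of the Macdonald function, $K_0(x)\approx\sqrt{\pi/(2x)}\,e^{-x}$. A minimal numerical sketch (not part of the original text) checking this asymptotic with SciPy:

```python
import math
from scipy.special import k0  # Macdonald function K_0 (modified Bessel, 2nd kind)

# Compare K_0(x) with its large-argument asymptotic sqrt(pi/(2x)) * exp(-x),
# which underlies the exponential screening beyond rho_0 in Eq. (26);
# here x plays the role of rho/rho_0.
for x in (5.0, 10.0, 20.0):
    asym = math.sqrt(math.pi / (2.0 * x)) * math.exp(-x)
    rel_err = abs(k0(x) - asym) / k0(x)
    print(f"x = {x:4.0f}: K0 = {k0(x):.4e}, asymptotic = {asym:.4e}, rel. error = {rel_err:.1e}")
```

The relative error of the asymptotic form decays like $1/(8x)$, so already at a few screening radii the exponential formula is accurate to a percent or better.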
\subsection*{Screened potential of a point charge in a conducting plane\footnote{This part was typeset in a small print in the original.}} {\small The plane $z=0$ is neutral and contains free charges with an average surface concentration $N$. The permittivity of the environment is denoted by $\varepsilon_1$. Let us calculate the potential created in the $z=0$ plane by a point charge $+e$ placed at the origin of coordinates.} {\small In the regions 1 ($z<0$) and 2 ($z>0$), the potential $\varphi(r)$ satisfies the Laplace equation \begin{align} \label{eq31} \nabla^2\varphi=0. \end{align} The potential is bounded at infinity, and the following conditions hold at the $z=0$ boundary: \begin{align} \varphi_1 &= \varphi_2, \nonumber\\ \frac{\partial \varphi_1}{\partial z} - \frac{\partial \varphi_2}{\partial z} &= \frac{4\pi}{\varepsilon_1} \sigma(\rho), \label{eq32} \end{align} where $\sigma(\rho)$ is the surface charge density in the $z=0$ plane, composed of the point charge at the origin of coordinates and the charge induced by the rearrangement of the free electrons in the field of the point charge, \begin{align} \sigma(\rho)=e\delta(\vec{\rho})-e\Delta N(\rho).
\nonumber \end{align}} {\small Let us expand $\varphi({\vec{r}})$ into a double Fourier integral: \begin{align} \varphi(\vec{r}) = \int \frac{d\vec{k}}{(2\pi)^2} e^{i \vec{k}\vec{\rho}} \varphi(k,z). \nonumber \end{align} From \eqref{eq31} and \eqref{eq32}, we obtain for the Fourier components the equation \begin{align} \label{eq33} \frac{\partial^2 \varphi}{\partial z^2} - k^2\varphi=0, \end{align} with the boundary conditions \begin{equation} \label{eq34} \mbox{at}~z=0\quad \begin{cases} \varphi_1 = \varphi_2\\~\\ \dfrac{\partial \varphi_1}{\partial z} - \dfrac{\partial \varphi_2}{\partial z}= \dfrac{4\pi e}{\varepsilon_1}(1-\Delta N(k)), \end{cases} \end{equation} and the requirement that the potential vanish at infinity.} {\small Inserting the solutions of equation \eqref{eq33}, $\varphi_1(k, z)=c_1 e^{kz}$ and $\varphi_2(k, z)=c_2 e^{-kz}$, into the boundary conditions \eqref{eq34}, we obtain an equation for the Fourier component of the potential in the $z=0$ plane \begin{align} \label{eq35} \tilde \varphi(k) \equiv \varphi(k,z=0) = \frac{2\pi e}{\varepsilon_1 k} [1-\Delta N(k)]. \end{align} At large distances, i.e., for small $k$, the quantity $\Delta N$ is linear in $\tilde{\varphi}$: \begin{align} \label{eq36} \Delta N = \frac{\partial N}{\partial \mu_0}e\tilde{\varphi}(k). \end{align} Inserting \eqref{eq36} into \eqref{eq35}, we find \begin{align} \tilde \varphi(k) = \frac{2\pi e}{\varepsilon_1 \left( k+ \frac{2\pi e^2}{\varepsilon_1} \frac{\partial N}{\partial \mu_0}\right)}. \nonumber \end{align} This result is equivalent to the results \eqref{eq27} and \eqref{eq30} for a film of finite thickness. }

The author would like to thank V. L. Bonch-Bruyevich for helpful comments. \end{document}
\section{Introduction} Let $L_{\theta}$ be the line through the origin in direction $\theta$, and let $Proj_{\theta}$ denote the orthogonal projection onto $L_{\theta}$. Given two Borel sets $A,B\subseteq \mathbb{R}$, analyzing the set $Proj_{\theta}(A\times B)$ is a central topic in geometric measure theory. The classical Marstrand theorem \cite{FG} states the following. \begin{Theorem} Let $A,B\subseteq \mathbb{R}$ be two Borel sets. \begin{itemize} \item [(1)] If $\dim_{H}(A)+\dim_{H}(B)\leq 1$, then for almost all $\theta\in[0, \pi)$, $$\dim_{H}(Proj_{\theta}(A\times B))=\dim_{H}(A)+\dim_{H}(B);$$ \item [(2)] If $\dim_{H}(A)+\dim_{H}(B)>1$, then for almost all $\theta\in[0, \pi)$, $Proj_{\theta}(A\times B)$ has positive Lebesgue measure. \end{itemize} \end{Theorem} Unfortunately, the Marstrand theorem does not offer any information for a specific angle $\theta.$ For self-similar sets, Peres and Shmerkin \cite{PS} and Hochman and Shmerkin \cite{Hochman2012} proved the following elegant result. \begin{Theorem}\label{Thm2} Let $K_1$ and $K_2$ be two self-similar sets with IFS's $\{f_i(x)=r_ix+a_i\}_{i=1}^{n}$ and $\{g_j(x)=r_j^{\prime}x+b_j\}_{j=1}^{m}$, respectively. If, for all $r_i, r_j^{\prime}$, $$\dfrac{\log |r_i| }{\log |r_j^{\prime}|}\notin \mathbb{Q},$$ then $$\dim_{H}(K_1+K_2)=\min\{\dim_{H}(K_1)+\dim_{H}(K_2),1\},$$ and $$\dim_{H}(K_1+K_2)=\dim_{P}(K_1+K_2)=\dim_{B}(K_1+K_2).$$ \end{Theorem} The condition in Theorem \ref{Thm2} is called the irrationality assumption. Note that $K_1+K_2$ is similar to $Proj_{\pi/4}(K_1\times K_2)$. Therefore, Theorem \ref{Thm2} states that, under the irrationality assumption, the Hausdorff dimension of the projection of the product of two self-similar sets at the angle $\pi/4$ does not decrease. Peres and Shmerkin \cite{PS} indeed proved a general result in $\mathbb{R}^2$, i.e.
if the group generated by the rotations of the IFS is dense in $[0,\pi)$, then for any angle $\theta\in[0,\pi)$, the Hausdorff dimension of the projection of the attractor coincides with the expected Hausdorff dimension. However, without the irrationality assumption, the dimension of $Proj_{\theta}(K_1\times K_2)$ may in general drop. In this paper, we consider the following class of similitudes: let $\beta>1$ and define \[S:=\left\{f_{i}(x)=\dfrac{x}{\beta^{n_i}}+a_i:n_i\in \mathbb{N}^{+}, a_i\in \mathbb{R}\right\}.\] Let $\mathcal{A}$ be the collection of all the self-similar sets generated by similitudes from $S$. In \cite{PS}, Peres and Shmerkin proved the following result. \begin{Theorem} For any $K_1, K_2\in \mathcal{A}$ whose Hausdorff dimensions coincide with the associated similarity dimensions, there exists some $\theta\in[0,\pi)$ such that $$\dim_{H}(Proj_{\theta}(K_1\times K_2))<\min\{1,\dim_{H}(K_1)+\dim_{H}(K_2)\}.$$ \end{Theorem} In general, the Hausdorff dimension of $Proj_{\theta}(K_1\times K_2)$ is difficult to calculate. The main aim of this paper is to analyze the set $Proj_{\theta}(K_1\times K_2)$ and to give an estimate of its Hausdorff dimension. The following are the main results of this paper. \begin{Theorem}\label{Main} For any $\theta\in[0,\pi)$ and any $K_1, K_2\in \mathcal{A}$, $Proj_{\theta}(K_1\times K_2)$ is similar to a self-similar set or to the attractor of some infinite iterated function system. \end{Theorem} As consequences of Theorem \ref{Main}, we have the following corollaries. \setcounter{Corollary}{4} \begin{Corollary} For any $\theta\in[0,\pi)$ and any $K_1, K_2\in \mathcal{A}$, $$\dim_{P}(Proj_{\theta}(K_1\times K_2))=\overline{\dim}_{B}(Proj_{\theta}(K_1\times K_2)).$$ \end{Corollary} \begin{Corollary}\label{Cor} Given $k\geq 1$. Suppose that $\beta$ is a Pisot number.
Let $K_1$ be the attractor of the IFS $$\left\{f_i(x)=\dfrac{x}{\beta^{k}}+a_i,1\leq i\leq n\right\},$$ and $K_2$ be the attractor of the IFS $$\left\{g_j(x)=\dfrac{x}{\beta^{l_jk}}+b_j,1\leq j\leq m\right\},$$ where $l_j\in\mathbb{N}^{+}$. If $a_i, b_j,\tan\theta\in \mathbb{Z}[\beta], 1\leq i\leq n,1\leq j\leq m$, then $Proj_{\theta}(K_1\times K_2)$ is similar to a self-similar set satisfying the finite type condition of \cite{NW}. Moreover, the Hausdorff dimension of $Proj_{\theta}(K_1\times K_2)$ can be calculated explicitly. \end{Corollary} \begin{Corollary}\label{IIFSS} Let $\theta\in[0,\pi)$ and $K_1, K_2\in \mathcal{A}$. Suppose that $Proj_{\theta}(K_1\times K_2)$ is similar to the attractor of some infinite iterated function system. Then there exist two attractors $J_1, J_2$ of infinite iterated function systems such that $$s_1(\theta)\leq \dim_{H}(Proj_{\theta}(K_1\times K_2)) \leq s_2(\theta),$$ where $s_1(\theta)$ is the Hausdorff dimension of $J_1$ and $s_2(\theta)$ is the similarity dimension of $J_2.$ \end{Corollary} In some cases, even though $Proj_{\theta}(K_1\times K_2)$ is similar to the attractor of an infinite iterated function system which does not satisfy the open set condition, we can still calculate the exact Hausdorff dimension of $Proj_{\theta}(K_1\times K_2)$. The following example is typical. \setcounter{Example}{7} \begin{Example}\label{Example} Let $K_1=K_2$ be the attractor of the IFS $$\left\{f_1(x)=\dfrac{x}{\beta^4},f_2(x)=\dfrac{x+\beta^8-1}{\beta^8}\right\}.$$ Suppose that $\beta>1.39$; then for any $\theta\in\left(\arctan \dfrac{\beta^{12}-\beta^{8}+1}{\beta^{12}-\beta^8-\beta^4}, \arctan \dfrac{\beta^{12}-2\beta^4}{\beta^{12}-\beta^8+1}\right)$, $$\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dfrac{\log \sqrt{\dfrac{1+\sqrt{5}}{2}}}{\log \beta}=\dim_{H}(K_1)+\dim_{H}(K_2).$$ Let $\theta=\arctan\dfrac{\beta^{8}-1}{\beta^8-\beta^4+1}$ and $\beta>1.41$.
Then $$\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dfrac{\log \gamma}{\log \beta}<\dim_{H}(K_1)+\dim_{H}(K_2),$$ where $\gamma\approx 1.2684$ is the largest real root of $$x^{20}-2x^{16}-2x^{12}+x^{8}+x^{4}-1=0.$$ \end{Example} We can construct examples similar to Example \ref{Example} and calculate the Hausdorff dimension of $Proj_{\theta}(K_1\times K_2)$ for some explicit angles $\theta$. This paper is arranged as follows. In Section 2, we give the proofs of the main results. In Section 3, we analyze Example \ref{Example}. Finally, we give some remarks. \section{Proof of Theorem \ref{Main}} \subsection{Preliminaries and some key lemmas} In this section, we shall prove that $Proj_{\theta}(K_1\times K_2)$ is similar to a self-similar set or to the attractor of an infinite iterated function system. First, we introduce some definitions and results. The definition of a self-similar set is due to Hutchinson \cite{Hutchinson}. Let $K$ be the self-similar set of the IFS $\{f_i\}_{i=1}^{m}$. For any $x \in K$, there exists a sequence $(i_n)_{n=1}^{\infty}\in\{1,\ldots,m\}^{\mathbb{N}}$ such that \[x=\lim_{n\to \infty}f_{i_1}\circ \cdots\circ f_{i_n}(0).\] We call $(i_n)$ a coding of $x$. We can define a surjective projection map from the symbolic space $\{1,\ldots, m\}^{\mathbb{N}}$ onto the self-similar set $K$ by $$\pi((i_n)_{n=1}^{\infty}):=\lim_{n\to \infty}f_{i_1} \circ\cdots \circ f_{i_n}(0).$$ Usually, the coding of $x$ is not unique \cite{DJKL,KarmaKan2}. Now let $K_1$ and $K_2$ be two self-similar sets from the class $\mathcal{A}$, and suppose that their IFS's are $\{f_{i}(x)=\frac{x}{\beta^{n_i}}+a_i\}_{i=1}^{n}$ and $\{g_{j}(x)=\frac{x}{\beta^{m_j}}+b_j\}_{j=1}^{m}$, respectively.
Note that \[f_{i}(x)=\frac{x}{\beta^{n_i}}+a_i=\frac{x+\beta^{n_i}a_i}{\beta^{n_i}}=\frac{x}{\beta^{n_i}}+\frac{0}{\beta}+\frac{0}{\beta^2}+\cdots+ \frac{0}{\beta^{n_i-1}}+\frac{\beta^{n_i}a_i}{\beta^{n_i}}.\] Therefore, we can identify $f_i(x)$ with a block $(\underbrace{000\cdots 0}_{n_i-1}a_i^{'} )$,\, where $a_i^{'}=\beta^{n_i}a_i$. Conversely, any block $(\underbrace{000\cdots 0}_{n_i-1}a_i^{'} )$ determines a unique similitude with respect to $\beta$. For simplicity, we denote this block by $\hat{P}_{i}=(\underbrace{000\cdots 0}_{n_i-1}a_i^{'} )$. In what follows, we identify $f_{i}$ with $f_{\hat{P}_{i}}$. Similarly, we may define blocks in terms of the IFS of $K_2$. Let $D_1=\{\hat{P}_{1},\, \hat{P}_{2},\,\cdots,\,\hat{P}_{n}\}$ and $D_2=\{\hat{Q}_{1},\, \hat{Q}_{2},\,\cdots,\,\hat{Q}_{m}\}$, where $\hat{P}_{i}=(\underbrace{000\cdots 0}_{n_i-1}a_i^{'} )$, $a_i^{'}=\beta^{n_i}a_i$, $\hat{Q}_{j}=(\underbrace{000\cdots 0}_{m_j-1}b_j^{'} )$ and $b_j^{'}=\beta^{m_j}b_j$. The following lemma is immediate. \begin{Lemma}\label{coding} \[K_{1}=\{x=\lim\limits_{n\to \infty}f_{\hat{P}_{i_1}}\circ f_{\hat{P}_{i_2}} \circ\cdots\circ f_{\hat{P}_{i_n}}(0):\hat{P}_{i_j}\in D_1\}.\] \[K_{2}=\{y=\lim\limits_{n\to \infty}g_{\hat{Q}_{i_1}}\circ g_{\hat{Q}_{i_2}} \circ\cdots\circ g_{\hat{Q}_{i_n}}(0):\hat{Q}_{i_j}\in D_2\}.\] We call the infinite concatenation $\hat{P}_{i_1}\ast \hat{P}_{i_2} \ast \cdots$ ($\hat{Q}_{i_1}\ast \hat{Q}_{i_2} \ast \cdots$) a coding of $x$ (respectively, of $y$).
\end{Lemma} \begin{Lemma}\label{conjugate} For any $\theta\in(0,\pi)\setminus\{\pi/2\}$, $Proj_{\theta}(K_1\times K_2)$ is similar to $K_1+ \tan(\theta)K_2$, and $Proj_{\pi/2}(K_1\times K_2)=K_2.$ \end{Lemma} \begin{proof} Note that $Proj_{\theta}(x,y)$ is the point on the line $L_{\theta}$ at signed distance $$x\cos\theta+y\sin\theta=(x+sy)\cos\theta$$ from the origin, where $s=\tan\theta.$ \end{proof} By Lemma \ref{conjugate}, if we want to analyze $Proj_{\theta}(K_1\times K_2)$, it suffices to consider the set $K_1+sK_2,$ where $s=\tan\theta.$ Infinite iterated function systems (IIFS) play a pivotal role in this paper; we first introduce some definitions and related results for this tool. There are two definitions of the invariant set of an IIFS; see, for example, \cite{HF}, \cite{MRD} and \cite{HM}. We adopt Fernau's definition \cite{HF}. \setcounter{Definition}{2} \begin{Definition}\label{IIIFS} Let $\mathcal{F}=\{\phi_{i}(x)=r_ix+a_i: i\in \mathbb{N}^{+},\,0<r_i<1, a_i\in \mathbb{R}\}$. Suppose that there exists a uniform $0<c<1$ such that for every $\phi_{i}\in \mathcal{F}$, $|\phi_{i}(x)-\phi_{i}(y)|\leq c |x-y|$; then we say that $\mathcal{F}$ is an infinite iterated function system. The unique non-empty compact set $J$ is called the attractor (or invariant set) of $\mathcal{F}$ if \[J=\overline{\bigcup_{i\in \mathbb{N}}\phi_{i}(J)},\] where $ \overline{A}$ denotes the closure of $A$. We call $s_0$, which is the unique solution of the equation $\sum_{i=1}^{\infty}r_i^s=1$, the similarity dimension of $J$. \end{Definition} \setcounter{Remark}{1} In \cite{MRD}, Mauldin and Urbanski gave another definition of the attractor of an IIFS, i.e. $$J_0\triangleq \bigcup\limits_{\{\phi_{i_n}\}\subseteq\mathcal{F}}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots\circ \phi_{i_n}([0,1]),$$ which yields $J_{0}=\bigcup_{i\in \mathbb{N}}\phi_{i}(J_{0})$. However, with this definition the attractor $J_{0}$ may not be unique or compact; see Example 1.3 in \cite{HF}.
Evidently, $\overline{J_0}=J$. In what follows, $J_0$ denotes the attractor in the sense of Mauldin and Urbanski, while $J$ refers to Fernau's definition. The following result can be found in \cite{MRD,M,HM}. We shall utilize this result to calculate the Hausdorff dimension of $Proj_{\theta}(K_1\times K_2).$ \setcounter{Theorem}{3} \begin{Theorem}\label{DimensionIIFS} Let $J_0$ be the attractor of some IIFS satisfying the open set condition. Then \[ \dim_{H}(J_0)=\inf\left\{t:\sum_{i\in \mathbb{N}}r_i^{t}\leq 1\right\}. \] \end{Theorem} The following notions are defined in a natural way. \setcounter{Definition}{4} \begin{Definition} Let $\Sigma=\{s_1,\cdots, s_p \}$, where $s_i\in \mathbb{R}, 1\leq i\leq p$. Let $d_1d_2\cdots d_k$ and $c_1c_2\cdots c_k$ be two blocks from $\{s_1,\cdots,s_p\}^{k}$. We say that the block $d_1d_2\cdots d_k$ is of length $k$. Define the concatenation of $d_1d_2\cdots d_k$ and $c_1c_2\cdots c_k$ by $$(d_1d_2\cdots d_k)* (c_1c_2\cdots c_k)=d_1d_2\cdots d_kc_1c_2\cdots c_k.$$ The sum of $d_1d_2\cdots d_k$ and $c_1c_2\cdots c_k$ is defined by $(d_1+c_1)(d_2+c_2)\cdots (d_k+c_k)$. The concatenation of $t\in \mathbb{N}$ copies of the block $\hat{P}_{1}=d_1d_2\cdots d_k $ is denoted by \[\hat{P}_{1}^{t}=\underbrace{\hat{P}_{1}*\hat{P}_{1}*\cdots*\hat{P}_{1}}_\text{$t$ times}.\] The value of the block $\hat{P}_{1}=d_1d_2\cdots d_k$ in base $\beta>1$ is defined as \[(d_1d_2\cdots d_k)_{\beta}= \dfrac{d_1}{\beta}+\dfrac{d_2}{\beta^2}+\cdots+\dfrac{d_k}{\beta^k}.\] Similarly, given $(d_n)\in \{s_1\cdots, s_p \}^{\mathbb{N}}$, define $$(d_n)_{\beta}=\sum\limits_{n=1}^{\infty}\dfrac{d_n}{\beta^n}.$$ \end{Definition} Recall that $D_2=\{\hat{Q}_{1},\, \hat{Q}_{2},\,\cdots,\,\hat{Q}_{m}\}$, where $\hat{Q}_{j}=(\underbrace{000\cdots 0}_{m_j-1}b_j^{'} )$ and $b_j^{'}=\beta^{m_j}b_j$.
We define a new set $D_2^{\prime}=\{s\hat{Q}_{1},\, s\hat{Q}_{2},\,\cdots,\,s\hat{Q}_{m}\}$, where $s\hat{Q}_{j}=(\underbrace{000\cdots 0}_{m_j-1}sb_j^{'} )$, $b_j^{'}=\beta^{m_j}b_j$, and $s=\tan\theta$ is as in Lemma \ref{conjugate}. The following definition was essentially given in \cite{SumKan}; we modify it slightly here. \begin{Definition}\label{Matching} Take $u$ blocks \[\hat{P}_{i_1},\,\hat{P}_{i_2},\,\hat{P}_{i_3},\,\cdots,\, \hat{P}_{i_{u}}\] from $D_1$ with lengths $p_{1},\,p_{2},\,p_{3},\,\cdots,\, p_{u}$, respectively. Pick $v$ blocks \[s\hat{Q}_{j_1},\,s\hat{Q}_{j_2},\,s\hat{Q}_{j_3},\,\cdots,\, s\hat{Q}_{j_{v}}\] from $D_2^{\prime}$ with lengths $q_{1},\,q_{2},\,q_{3},\,\cdots ,\,q_{v}$, respectively. If there exist integers $k_{1},\,k_{2},\,k_{3},\cdots, k_{u}$, $l_{1},\,l_{2},\,l_{3},\cdots, l_{v}$ such that \[ \sum_{i=1}^{u}k_{i}p_{i}=\sum_{j=1}^{v}l_{j}q_{j}, \] then we call $A+B$ a Matching with respect to $\beta$, where $$A =\hat{P}_{i_{t_1}}\ast \hat{P}_{i_{t_2}}\ast\cdots\ast \hat{P}_{i_{t_u}},$$ with precisely $k_p$ copies of $\hat{P}_{i_{p}}$ in the concatenation $\hat{P}_{i_{t_1}}\ast \hat{P}_{i_{t_2}}\ast\cdots\ast \hat{P}_{i_{t_u}}$, and $$B=s\hat{Q}_{j_{w_1}}\ast s\hat{Q}_{j_{w_2}}\ast\cdots\ast s\hat{Q}_{j_{w_v}},$$ with precisely $l_q$ copies of $s\hat{Q}_{j_{q}}$ in the concatenation $s\hat{Q}_{j_{w_1}}\ast s\hat{Q}_{j_{w_2}}\ast\cdots\ast s\hat{Q}_{j_{w_v}}$, where $1\leq p\leq u$, $t_i\in\{1,2,\cdots,u\}, 1\leq i\leq u$, $1\leq q\leq v$, $w_j\in\{1,2,\cdots,v\}, 1\leq j\leq v$. \end{Definition} \setcounter{Remark}{6} \begin{Remark} In \cite{SumKan}, the definition of a Matching is incorrect; it requires a small modification. By the condition $\sum_{i=1}^{u}k_{i}p_{i}=\sum_{j=1}^{v}l_{j}q_{j}$, the lengths of $A$ and $B$ coincide. Therefore, we can define the sum of these two concatenated blocks. A Matching is thus itself a block, namely the sum of concatenations of blocks from $D_1$ and $D_2^{\prime}$, respectively.
To avoid unnecessary Matchings, in what follows we always obey the following rule: if a new Matching can be obtained by concatenating old Matchings, then we do not count it as a Matching. \end{Remark} \setcounter{Example}{7} \begin{Example} Given $\beta>1$. Let $K_1=K_2$ be the attractor of the IFS $$\left\{f_1(x)=\dfrac{x}{\beta^4},f_2(x)=\dfrac{x+\beta^8-1}{\beta^8}\right\}.$$ Denote $A=\beta^8-1,B=sA, C=A+B, s=\tan\theta$. $$D_1=\{(0000), (0000000A)\}, D_2^{\prime}=\{(0000), (0000000B)\}.$$ The set of all Matchings generated by $D_1$ and $D_2^{\prime}$ is $$D=\{(0000), (0000000A), (0000000B),(0000000C), (0000000B000A),(0000000A000B),\cdots\}.$$ Note that in this example the lengths of the Matchings must be of the form $4k, k\in \mathbb{N}^{+}$, due to the lengths of the blocks from $D_1$ and $D_2^{\prime}$. Clearly, the block $$(00000000000A)=(0000)\ast(0000000A),$$ i.e., the block $(00000000000A)$ can be obtained by concatenating two other Matchings. In such a case, we do not take $(00000000000A)$ as a Matching. \end{Example} The following result can be found in \cite{SumKan}. \setcounter{Lemma}{8} \begin{Lemma}\label{GenerateMatchings} The set of Matchings generated by $D_1$ and $D_2^{\prime}$ is at most countable. \end{Lemma} Denote all the Matchings by \[D=\{\hat{R}_{1} ,\,\hat{R}_{2},\,\cdots,\,\hat{R}_{n-1},\, \hat{R}_{n},\cdots\}. \] Since each Matching determines a similitude with respect to $\beta$ (in the same way as we identified each similitude of $K_1$ with a block $\hat{P}_i$), it follows that $D$ uniquely determines a set of similitudes $\Phi^{\infty}\triangleq\{\phi_1,\,\phi_2,\,\phi_3,\,\phi_4,\cdots\} $. If the cardinality of $D$ is finite, then $K_1+sK_2$ is clearly a self-similar set. We will prove this fact in the next subsection.
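When $\sharp D$ is finite and the resulting IFS happens to satisfy the open set condition, the Hausdorff dimension of the self-similar set equals the similarity dimension, i.e., the unique root $s$ of $\sum_i r_i^{s}=1$. The following minimal numerical sketch (not part of the paper; it assumes the hypothetical value $\beta=2$ together with the contraction ratios $\beta^{-4},\beta^{-8}$ of the IFS in Example \ref{Example}) solves this equation by bisection:

```python
import math

def similarity_dimension(ratios, tol=1e-12):
    """Bisection for the root s of sum(r**s for r in ratios) = 1, with 0 < r < 1."""
    f = lambda t: sum(r ** t for r in ratios) - 1.0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:          # enlarge the bracket until sum(r**hi) < 1
        hi *= 2.0
    while hi - lo > tol:      # f is strictly decreasing, so bisection applies
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Ratios beta^{-4}, beta^{-8} from the IFS of Example 8, with an assumed beta = 2.
beta = 2.0
s = similarity_dimension([beta ** -4, beta ** -8])
# Closed form for comparison: t + t^2 = 1 with t = beta^{-4s}, so
# t = (sqrt(5)-1)/2 and s = log((1+sqrt(5))/2) / (4 log beta).
print(s, math.log((1 + math.sqrt(5)) / 2) / (4 * math.log(beta)))
```

The closed form used for comparison comes from substituting $t=\beta^{-4s}$ into $\beta^{-4s}+\beta^{-8s}=1$, which gives $t+t^2=1$, i.e., $t=(\sqrt{5}-1)/2$.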
If $\sharp D$ is infinite, then we define $$E\triangleq \bigcup\limits_{\{\phi_{i_n}\}\in\Phi^{\infty}}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots\circ \phi_{i_n}([0,1]),$$ and $E$ is a solution of the equation $E=\bigcup\limits_{i\in \mathbb{N}}\phi_{i}(E)$; see \cite{MRD}. \subsection{Proof of Theorem \ref{Main}} First, we assume that the set of all Matchings is countably infinite. In Lemma \ref{coding} we gave a new definition of the codings of $K_i$, $i=1,2$. For any $x+sy\in K_1+sK_2$, we denote the coding of $x+sy$ by $(x_n+sy_n)_{n=1}^{\infty}$, where $(x_n)$ and $(y_n)$ are the codings of $x$ and $y$, respectively. By Lemma \ref{coding}, we know that $(x_n)$ ($(sy_n)$) can be decomposed into infinitely many blocks from $D_1$ ($D_2^{\prime}$), namely, $(x_n)=X_1\ast X_2\ast \cdots$ and $(sy_n)=sY_1\ast sY_2\ast \cdots.$ Let $(a_n)_{n=1}^{\infty}=(x_n+sy_n)$ be a coding of some point $x+sy\in K_1+sK_2$, where $(x_n)^{\infty}_{n=1}$ and $(y_n)^{\infty}_{n=1}$ are the codings of $x\in K_1$ and $y\in K_2$, respectively. Given $k>0$, we call $(c_{i_1}c_{i_2}\cdots c_{i_k})$ a word of $(a_{i})_{i=1}^{\infty}$ with length $k$ if there exists some $j\geq 0$ such that $c_{i_1}c_{i_2}\cdots c_{i_k}=a_{j+1}\cdots a_{j+k}$. Let \begin{align*} C=\Big\{(a_n)=(x_n+sy_n): &\textrm{ there exists some } \,N\in \mathbb{N}^{+}\,\textrm{such that no word of}\\ & (a_{N+i})_{i=1}^{\infty} \textrm{ is a Matching}\Big\}. \end{align*} \setcounter{Lemma}{9} \begin{Lemma}\label{app} Let $(a_n)\in C$. For any $\epsilon>0$, we can find a coding $(b_n)_{n=1}^{\infty}$ which is the concatenation of infinitely many Matchings such that \[|(a_n)_{\beta}-(b_n)_{\beta} |< \epsilon.\] \end{Lemma} \begin{proof} Let $(a_n)\in C$. For any $\epsilon>0$, there exists some $n_0\in \mathbb{N}$ such that $\beta^{-n_0}< \epsilon$. We will define some $(b_n)_{n=1}^{\infty}$ such that its value in base $\beta$ is a point of $E$.
\textbf{Case 1.} Suppose that $a_1a_2a_3\cdots a_{n_0}$ is a Matching or a concatenation of some Matchings. Then we set $b_i=a_i$ for $1\leq i\leq n_0$ and choose any $(b_{n_0+i})_{i=1}^{\infty}$ that is the concatenation of infinitely many Matchings. Therefore, \[ |(a_n)_{\beta}-(b_n)_{\beta} |= |(a_{n_0+1}a_{n_0+2}a_{n_0+3}\cdots)_{\beta}-(b_{n_0+1}b_{n_0+2}b_{n_0+3}\cdots)_{\beta}|\leq M \sum_{i=n_0+1}^{\infty}\beta^{-i}=M^{\prime}\epsilon, \] where $M, M^{\prime}$ are positive constants. Since $\epsilon$ is arbitrary, we have thus proved that there exists some point $ b\in E$ such that \[|(a_n)_{\beta}-(b_n)_{\beta} |< M^{\prime}\epsilon.\] \textbf{Case 2.} If $a_1a_2a_3\cdots a_{n_0}$ is not a concatenation of some Matchings, then, by the definition of $(a_n)$, $(a_n)=(x_n+sy_n)$, where $(x_n)=(X_1\ast X_2\ast \cdots)$ and $(sy_n)=(sY_1\ast sY_2\ast \cdots) $ are the codings of some points in $K_1$ and $K_2$, respectively. Suppose that there exist $p, q$ such that $a_1a_2a_3\cdots a_{n_0}$ is a prefix of $(X_1\ast X_2\ast \cdots \ast X_p)+(sY_1\ast sY_2\ast \cdots \ast sY_q)$; the lengths of $X_1\ast X_2\ast \cdots \ast X_p$ and $sY_1\ast sY_2\ast \cdots \ast sY_q$ may not coincide. Nevertheless, we may still define the sum of their common prefixes. Assume that the lengths of $X_1\ast X_2\ast \cdots \ast X_p$ and $sY_1\ast sY_2\ast \cdots \ast sY_q$ are $k_1$ and $k_2$, respectively. Then $$(X_1\ast X_2\ast \cdots \ast X_p)^{k_2}+(sY_1\ast sY_2\ast \cdots \ast sY_q)^{k_1}$$ is a Matching or a concatenation of some Matchings, as the blocks $(X_1\ast X_2\ast \cdots \ast X_p)^{k_2}$ and $(sY_1\ast sY_2\ast \cdots \ast sY_q)^{k_1}$ have the same length $k_1k_2$. Moreover, the initial $n_0$ digits of $(X_1\ast X_2\ast \cdots \ast X_p)^{k_2}+(sY_1\ast sY_2\ast \cdots \ast sY_q)^{k_1}$ are $a_1a_2a_3\cdots a_{n_0}$. Now, we can make use of the idea in Case 1. \end{proof} \begin{Lemma}\label{Closure} $\overline{E}=K_1+sK_2$.
\end{Lemma} \begin{proof} For any $ \epsilon>0$ and any $ x+sy\in K_1+sK_2$, we can find a coding $(a_n)$ such that $x+sy=\sum\limits_{n=1}^{\infty}a_n\beta^{-n}$. If there exists a sequence of integers $n_{k}\to \infty$ such that $(a_1a_2a_3 \cdots a_{n_k})$ is always a concatenation of some Matchings, then by the definition of $$E= \bigcup\limits_{\{\phi_{i_n}\}\in\Phi^{\infty}}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots \phi_{i_n}([0,1])$$ it follows that $x+sy\in E$. If $(a_n)\in C$, by Lemma \ref{app} there exists $b\in E$ such that $|b-(x+sy)|<\epsilon$. \end{proof} \begin{Lemma}\label{IIFS} $\overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(K_1+sK_2)}=K_1+sK_2$. \end{Lemma} \begin{proof} Since $$E= \bigcup\limits_{\{\phi_{i_n}\}\in\Phi^{\infty}}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots \phi_{i_n}([0,1]),$$ it follows that $E=\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(E)$, which yields that \[ \overline{E}=\overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(E)}=\overline{\overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(E)}}\supseteq \overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\overline{\phi_{i}(E)} }=\overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(K_1+sK_2)}. \] Since $\overline{E}=K_1+sK_2$ by Lemma \ref{Closure}, we have \[\overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(K_1+sK_2)} \subseteq K_1+sK_2.\] Conversely, $E=\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(E)\subseteq \bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(K_1+sK_2)$, and by Lemma \ref{Closure} it follows that $$ K_1+sK_2\subset \overline{\bigcup\limits_{i\in \mathbb{N}^{+}}\phi_{i}(K_1+sK_2)}.$$ \end{proof} \begin{proof}[Proof of Theorem \ref{Main}:] Lemma \ref{GenerateMatchings} states that there are at most countably many Matchings generated by $D_1$ and $D_2^{\prime}$. Suppose that the cardinality of the set of Matchings is countably infinite; then by Lemma \ref{IIFS}, $K_1+sK_2$ is an attractor of $\Phi^{\infty}$. If the cardinality is finite, then $K_1+sK_2$ is a self-similar set.
The proof is similar to those of Lemmas \ref{app} and \ref{Closure}. In this case we do not need to approximate the coding of $x+sy \in K_1+sK_2$: we can directly find a coding which is the concatenation of infinitely many Matchings such that the value of this infinite coding is $x+sy$, i.e. $E=K_1+sK_2$. \end{proof} Therefore, by Mauldin and Urbanski's result \cite{MRD} and Lemmas \ref{IIFS} and \ref{conjugate}, we have \setcounter{Proposition}{12} \begin{Proposition}\label{P=B} For any $\theta\in[0,\pi)$, $$ \dim_{P}(Proj_{\theta}(K_1\times K_2))= \overline{\dim}_{B}(Proj_{\theta}(K_1\times K_2)).$$ \end{Proposition} The following results were proved in \cite{SumKan}. \setcounter{Lemma}{13} \begin{Lemma}\label{almostequal} If $C$ is countable, then for any $s\in \mathbb{R}$, $\dim_{H}(E)=\dim_{H}(K_1+sK_2)$. \end{Lemma} \begin{Lemma} Given any $k\in \mathbb{N}^{+}$, let $K_1$ be the attractor of the following IFS $$\left\{f_i(x)=\dfrac{x}{\beta^{k}}+a_i,1\leq i\leq n-1, f_n(x)=\dfrac{x}{\beta^{2k}}+a_n\right\},$$ and $K_2$ be the attractor of the following IFS $$\left\{g_j(x)=\dfrac{x}{\beta^{k}}+b_j,1\leq j\leq m-1, g_m(x)=\dfrac{x}{\beta^{2k}}+b_m\right\},$$ where $a_i, b_j\in \mathbb{R}, 1\leq i\leq n,1\leq j\leq m$. Then $C$ is countable. \end{Lemma} \subsection{Dimension of $K_1+sK_2$} In \cite{SumKan}, we also proved the following results. \begin{Lemma}\label{finitecc} If the similarity ratios of $K_1$ are homogeneous, denoted by $\beta^{-k}, k\in \mathbb{N}^{+}$, and the similarity ratios of $K_2$ have the form $\beta^{-kp_j}, 1\leq j\leq m, p_j\in \mathbb{N}^{+},$ then $\sharp D$ is finite. \end{Lemma} \begin{Lemma}\label{sss} If $\sharp D$ is finite, then $K_1+sK_2$ is a self-similar set. \end{Lemma} \begin{proof}[\textbf{Proof of Corollary \ref{Cor}}] Corollary \ref{Cor} follows from Lemmas \ref{finitecc}, \ref{sss} and Ngai and Wang's finite type condition \cite{NW}.
\end{proof} We are interested in the case when $K_1+sK_2$ is an attractor of some infinite iterated function system. For this case, we utilize Moran's idea \cite{Moran} and find a sub-infinite iterated function system such that the new IIFS satisfies the open set condition and the Hausdorff dimensions of the two attractors coincide. For convenience, we introduce the Vitali algorithm. Let $\Phi^{\infty}=\{\phi_1,\,\phi_2,\,\phi_3,\,\phi_4,\cdots\} $ be the IIFS generated from the set of all the Matchings. The attractor of this IIFS is $$E= \bigcup\limits_{\{\phi_{i_n}\}\in\Phi^{\infty}}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots\circ \phi_{i_n}([0,1]).$$ Define $$\Psi^{*}=\bigcup_{k=1}^{\infty}\left\{\phi_{i_1}\circ \phi_{i_2}\circ\cdots \circ \phi_{i_k}: i_1,\ldots,i_k\in\mathbb{N}^{+}\right\},$$ the set of all finite compositions of maps from $\Phi^{\infty}$. Clearly, $$\Psi^{*}(E)=\{\phi(E):\phi\in \Psi^{*}\}$$ is a Vitali class of $E$ (\cite{FG}). Now we implement the Vitali process. Take any $\phi_1\in \Psi^{*}$. If $\phi_n$ has been selected for $1\leq n\leq k$, then we pick $\phi_{k+1}$ from $\Psi^{*}$ satisfying the following conditions: \begin{itemize} \item [(1)] $\phi_{k+1}(E)\cap \phi_{i}(E)=\emptyset$ for $1\leq i\leq k$. \item [(2)] $|\phi_{k+1}(E)|\geq 2^{-1}\sup\{|\phi(E)|:\phi\in \Psi^{*} \mbox{ and } \phi(E)\cap \phi_i(E)=\emptyset, 1\leq i\leq k\}$, where $|A|$ denotes the diameter of $A$. \end{itemize} This process is finished if the selection of $\phi_{k+1}$ is no longer possible. Denote by $\Psi$ the set of all the similitudes selected in the Vitali process. Moran \cite{Moran} proved the following theorem.
\setcounter{Theorem}{17} \begin{Theorem}\label{Vitali} Let $$E= \bigcup\limits_{\{\phi_{i_n}\}\in\Phi^{\infty}}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots\circ \phi_{i_n}([0,1]),$$ and $$G= \bigcup\limits_{\{\phi_{i_n}\}\in\Psi}\bigcap\limits_{n=1}^{\infty}\phi_{i_1}\circ\phi_{i_2}\cdots\circ \phi_{i_n}([0,1]).$$ Then \begin{itemize} \item [(1)] $\mathcal{H}^s(E)=\mathcal{H}^s(G)$ for any $s$ satisfying $\sum_{\phi_i\in \Psi } r_i^s<\infty$, where $r_i$ is the similarity ratio of $\phi_i;$ \item [(2)] $\dim_{H}(E)=s$, where $s=\inf\left\{t:\sum_{\phi_i\in \Psi } r_i^t\leq 1\right\}$. \end{itemize} \end{Theorem} Therefore, by means of Lemma \ref{Closure} and Theorem \ref{Vitali}, it follows that $$\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dim_{H}(K_1+sK_2)\geq \dim_{H}(E)=\dim_{H}(G), $$ which gives a lower bound of $\dim_{H}(Proj_{\theta}(K_1\times K_2))$. For the upper bound, we use the similarity dimension of $E$. The following lemma is standard. \setcounter{Lemma}{18} \begin{Lemma}\label{similarity} $\dim_{H}(Proj_{\theta}(K_1\times K_2))\leq s_0$, where $s_0$ is the solution of $$\sum_{\phi_i\in \Phi^{\infty}}r_i^s=1.$$ \end{Lemma} \begin{proof} Let $\delta>0$. Then there exists some $k>0$ such that $$|\phi_{i_1\cdots i_k}(Cov(E))|\leq \delta \quad \text{for all } (i_1\cdots i_k)\in \mathbb{N}^k,$$ where $Cov(E)$ denotes the convex hull of $E$. By the definition of $E,$ it follows that $$E=\bigcup\limits_{i\in \mathbb{N}}\phi_{i}(E).$$ Then for any $k\geq 1$, $$\cup_{(i_1\cdots i_k)\in \mathbb{N}^k}\phi_{i_1\cdots i_k}(Cov(E))\supset \overline{E}=K_1+sK_2.$$ Therefore, $$\mathcal{H}^{s_0}_{\delta}(K_1+sK_2)\leq \sum_{(i_1\cdots i_k)\in \mathbb{N}^k}|\phi_{i_1\cdots i_k}(Cov(E)) |^{s_0}=\sum_{(i_1\cdots i_k)\in \mathbb{N}^k}r_{i_1}^{s_0}\cdots r_{i_k}^{s_0}|Cov(E) |^{s_0}.
$$ Note that $$\sum_{(i_1\cdots i_k)\in\mathbb{N}^k}r_{i_1}^{s_0}\cdots r_{i_k}^{s_0}|Cov(E) |^{s_0}\leq \left(\sum_{i=1}^{\infty}r_i^{s_0}\right)^k|Cov(E)|^{s_0}=|Cov(E)|^{s_0}<\infty.$$ Letting $\delta\to 0$, we obtain $\mathcal{H}^{s_0}(K_1+sK_2)\leq |Cov(E)|^{s_0}<\infty$, and hence $\dim_{H}(K_1+sK_2)\leq s_0$. \end{proof} \begin{proof}[Proof of Theorem \ref{IIFSS}] Theorem \ref{IIFSS} follows from Lemma \ref{similarity} and Theorem \ref{Vitali}. \end{proof} \section{One example} In this section, we give an example to illustrate how to find the Hausdorff dimension of $Proj_{\theta}(K_1\times K_2)$ in terms of Theorems \ref{DimensionIIFS} and \ref{Vitali}. \begin{Example} Let $K_1=K_2$ be the attractor of the IFS $$\left\{f_1(x)=\dfrac{x}{\beta^4},\ f_2(x)=\dfrac{x+\beta^8-1}{\beta^8}\right\}.$$ Suppose that $\beta>1.39$. Then for any $\theta\in\left(\arctan \dfrac{\beta^{12}-\beta^{8}+1}{\beta^{12}-\beta^8-\beta^4}, \arctan \dfrac{\beta^8-2\beta^4}{\beta^{12}-\beta^8+1}\right)$ $$\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dfrac{\log \sqrt{\dfrac{1+\sqrt{5}}{2}}}{\log \beta}=\dim_{H}(K_1)+\dim_{H}(K_2).$$ Let $\theta=\arctan\dfrac{\beta^{8}-1}{\beta^8-\beta^4+1}$ and $\beta>1.41$. Then $$\dim_{P}(Proj_{\theta}(K_1\times K_2))=\dfrac{\log \gamma}{\log \beta}<\dim_{H}(K_1)+\dim_{H}(K_2),$$ where $\gamma\approx 1.2684$ is the largest real root of $$x^{20}-2x^{16}-2x^{12}+x^{8}+x^{4}-1=0.$$ \end{Example} Denote $A=\beta^8-1$, $B=sA$, $C=A+B$.
Then $$D=\{(0000), (0000000A), (0000000B),(0000000C), (0000000B000A),(0000000A000B),\cdots\}$$ The associated IIFS of $D$ is $$\Phi^{\infty}=\{f(x), h_1(x), h_2(x), \phi_{2n}(x), \phi_{2n-1}(x), g(x), n\geq 1\},$$ where $$f(x)=\dfrac{x}{\beta^4}, h_{1}(x)=\dfrac{x}{\beta^{8}}+\dfrac{A}{\beta^8},h_{2}(x)=\dfrac{x}{\beta^{8}}+\dfrac{B}{\beta^8}, g(x)=\dfrac{x}{\beta^{8}}+\dfrac{A+B}{\beta^8} $$ $$\phi_{2n-1}(x)=\dfrac{x}{\beta^{4n+8}}+\dfrac{B}{\beta^8}+\dfrac{A}{\beta^{12}}+\dfrac{B}{\beta^{16}}+\cdots+\dfrac{c(n)A+e(n)B}{\beta^{4n+8}},$$ $$\phi_{2n}(x)=\dfrac{x}{\beta^{4n+8}}+\dfrac{A}{\beta^8}+\dfrac{B}{\beta^{12}}+\dfrac{A}{\beta^{16}}+\cdots+\dfrac{c(n)B+e(n)A}{\beta^{4n+8}},n\geq 1,$$ where \begin{equation*} c(n)=\left\lbrace\begin{array}{cc} 1& n \mbox{ is odd}\\ 0& n \mbox{ is even} \end{array}\right. \end{equation*} \begin{equation*} e(n)=\left\lbrace\begin{array}{cc} 1& n \mbox{ is even}\\ 0& n \mbox{ is odd} \end{array}\right. \end{equation*} Let $O=(0,1+s)$, and $I=[0,1+s]$. It is easy to check the following statements, see Figure 1. \begin{itemize} \item [(1)] $f(O)\cap h_1(O)=\emptyset$ if and only if $s<\beta^4-\beta^{-4}-1$; \item [(2)] $h_1(O)\cap h_2(O)=\emptyset$ if and only if $s>\dfrac{\beta^8}{\beta^8-2}$; \item [(3)] $h_2(O)\cap \phi_2(O)=\emptyset$ if and only if $s<\dfrac{\beta^{12}-2\beta^4}{\beta^{12}-\beta^8+1}$; \item [(4)] $\phi_{2n}(O)\cap \phi_1(O)=\emptyset$ if and only if $\dfrac{\beta^{12}-\beta^{8}+1}{\beta^{12}-\beta^8-\beta^4}<s$; \item [(5)] $\phi_{2n-1}(O)\cap g(O)=\emptyset$ if and only if $ s<\beta^{8}-\beta^4-1$, where $n\geq 1$; \item [(6)] $\phi_{2n}(O)\cap \phi_{2n+2}(O)=\emptyset$ and $\phi_{2n-1}(O)\cap \phi_{2n+1}(O)=\emptyset$ if and only if $$\dfrac{\beta^4}{\beta^8-\beta^4-1}<s<\beta^4-\beta^{-4}-1,$$ where $n\geq 1$. 
\end{itemize} \begin{figure}[h]\label{figure1} \centering \begin{tikzpicture}[scale=5] \draw(0,0)node[below]{\scriptsize $0$}--(3,0)node[below]{\scriptsize$1+s$}; \draw(0,-0.3)node[below]{\scriptsize $0$}--(0.3,-0.3); \node [label={[xshift=0.8cm, yshift=-1.5cm]$f(I)$}] {}; \draw(0.4,-0.3)--(0.55,-0.3); \node [label={[xshift=2.5cm, yshift=-1.5cm]$h_1(I)$}] {}; \draw(0.6,-0.3)--(0.75,-0.3); \node [label={[xshift=3.5cm, yshift=-1.5cm]$h_2(I)$}] {}; \draw(2.85,-0.3)--(3,-0.3)node[below]{\scriptsize$1+s$}; \node [label={[xshift=14.5cm, yshift=-1.5cm]$g(I)$}] {}; \draw(0.8,-0.3)--(0.9,-0.3); \node [label={[xshift=4.5cm, yshift=-1.5cm]$\phi_2(I)$}] {}; \draw(1,-0.3)--(1.05,-0.3); \node [label={[xshift=5.5cm, yshift=-1.5cm]$\phi_4(I)$}] {}; \draw(1.8,-0.3)--(1.9,-0.3); \node [label={[xshift=9.5cm, yshift=-1.5cm]$\phi_1(I)$}] {}; \draw(2.0,-0.3)--(2.05,-0.3); \node [label={[xshift=10.5cm, yshift=-1.5cm]$\phi_3(I)$}] {}; \node [label={[xshift=6.5cm, yshift=-1.9cm]$\cdots\cdots\cdots$}] {}; \node [label={[xshift=12.5cm, yshift=-1.9cm]$\cdots\cdots\cdots$}] {}; \end{tikzpicture}\caption{First iteration} \end{figure} Hence, if $\beta>1.39$ then the following inequalities hold $$\dfrac{\beta^4}{\beta^8-\beta^4-1}<\dfrac{\beta^8}{\beta^8-2}<\dfrac{\beta^{12}-\beta^{8}+1}{\beta^{12}-\beta^8-\beta^4}<\dfrac{\beta^{12}-2\beta^4}{\beta^{12}-\beta^8+1}<\beta^4-\beta^{-4}-1<\beta^8-\beta^{4}-1.$$ In other words, let $\theta\in\left(\arctan \dfrac{\beta^{12}-\beta^{8}+1}{\beta^{12}-\beta^8-\beta^4}, \arctan \dfrac{\beta^8-2\beta^4}{\beta^{12}-\beta^8+1}\right)$, then $\Phi^{\infty}$ satisfies the open set condition with the open set $(0,1+s)$. 
In terms of Theorem \ref{DimensionIIFS} and Lemma \ref{almostequal}, it follows that $$\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dfrac{\log \gamma^{*}}{\log \beta},$$ where $\gamma^{*}$ is the largest real root of $x^{12}-2x^8-2x^4+1=0.$ Indeed, substituting $y=x^4$ gives $y^3-2y^2-2y+1=(y+1)(y^2-3y+1)=0$, whose largest real root is $y=\dfrac{3+\sqrt{5}}{2}=\left(\dfrac{1+\sqrt{5}}{2}\right)^{2}$, so that $$\gamma^{*}=\sqrt{\dfrac{1+\sqrt{5}}{2}}.$$ For the second case, note that $s=\tan\theta=\dfrac{\beta^{8}-1}{\beta^8-\beta^4+1}$ if and only if $h_2\circ g=\phi_2\circ f.$ Moreover, if $\beta>1.41$, then $$\dfrac{\beta^8}{\beta^8-2}<\dfrac{\beta^{8}-1}{\beta^8-\beta^4+1}<\dfrac{\beta^{12}-\beta^{8}+1}{\beta^{12}-\beta^8-\beta^4}<\dfrac{\beta^{12}-2\beta^4}{\beta^{12}-\beta^8+1}.$$ In this case the IIFS does not satisfy the open set condition, see the first iteration in Figure 2. \begin{figure}[h]\label{figure2} \centering \begin{tikzpicture}[scale=5] \draw(0,0)node[below]{\scriptsize $0$}--(3,0)node[below]{\scriptsize$1+s$}; \draw(0,-0.3)node[below]{\scriptsize $0$}--(0.3,-0.3); \node [label={[xshift=0.8cm, yshift=-1.5cm]$f(I)$}] {}; \draw(0.4,-0.3)--(0.55,-0.3); \node [label={[xshift=2.5cm, yshift=-1.5cm]$h_1(I)$}] {}; \draw(0.68,-0.4)--(0.83,-0.4); \node [label={[xshift=3.7cm, yshift=-2.8cm]$h_2(I)$}] {}; \draw(2.85,-0.3)--(3,-0.3)node[below]{\scriptsize$1+s$}; \node [label={[xshift=14.5cm, yshift=-1.5cm]$g(I)$}] {}; \draw(0.8,-0.3)--(0.9,-0.3); \node [label={[xshift=4.5cm, yshift=-1.5cm]$\phi_2(I)$}] {}; \draw(1,-0.3)--(1.05,-0.3); \node [label={[xshift=5.5cm, yshift=-1.5cm]$\phi_4(I)$}] {}; \draw(1.8,-0.3)--(1.9,-0.3); \node [label={[xshift=9.5cm, yshift=-1.5cm]$\phi_1(I)$}] {}; \draw(2.0,-0.3)--(2.05,-0.3); \node [label={[xshift=10.5cm, yshift=-1.5cm]$\phi_3(I)$}] {}; \node [label={[xshift=6.5cm, yshift=-1.9cm]$\cdots\cdots\cdots$}] {}; \node [label={[xshift=12.5cm, yshift=-1.9cm]$\cdots\cdots\cdots$}] {}; \end{tikzpicture}\caption{First iteration} \end{figure} We make use of the Vitali process to find $\Psi$. It is not difficult to check that in $\Phi^{\infty}$ only for the pair $(h_2, \phi_2)$,
$h_2(O)\cap \phi_2(O)\neq \emptyset$. For any other pair of similitudes $S_1, S_2\in \Phi^{\infty}$ with $$(S_1, S_2)\neq (h_2, \phi_2),$$ we have $S_1(O)\cap S_2(O)=\emptyset$; see the first iteration in Figure 2. Hence, we implement the Vitali process and find all the similitudes of $\Psi$, i.e. $$\Psi=\left(\Phi^{\infty}\setminus\{\phi_2\}\right)\cup \bigcup_{k=1}^{\infty}\phi_{2^{k}}(\Phi^{\infty}\setminus\{\phi_2, f\}),$$ where $\phi_{2^{k}}(\Phi^{\infty}\setminus\{\phi_2, f\})=\{\phi_{2^k}\circ h:h\in \Phi^{\infty}\setminus\{\phi_2, f\}\}$ for any $k\geq 1.$ By Theorem \ref{Vitali} and Lemma \ref{almostequal}, it follows that $\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dfrac{\log \gamma}{\log \beta},$ where $\gamma\approx 1.2684$ is the largest real root of $$x^{20}-2x^{16}-2x^{12}+x^{8}+x^{4}-1=0.$$ \section{Final remarks} We can obtain the following stronger result. \begin{Theorem} Take any $K_1, K_2,\cdots, K_n\in \mathcal{A}$ and any real numbers $p_1,\cdots, p_n$. If there exist $1\leq i\neq j\leq n$ such that $p_{i}, p_j\neq 0$, then $$p_1K_1+p_2K_2+\cdots +p_nK_n=\left\{\sum_{i=1}^{n}p_ix_i:x_i\in K_i, 1\leq i\leq n\right\}$$ is a self-similar set or an attractor of some infinite iterated function system. \end{Theorem} The proof of this result is similar to that of Theorem \ref{Main}. Therefore, we can consider the set $$Proj_\theta(K_1\times K_2\times\cdots \times K_n),$$ and obtain a result similar to Theorem \ref{Main}. Finally, we pose the following question: \setcounter{Question}{1} \begin{Question} Take $K_1, K_2\in \mathcal{A}$ and $\theta\in[0,\pi)$. If $$\dim_{H}(Proj_{\theta}(K_1\times K_2))=\dim_{H}(K_1)+\dim_{H}(K_2), $$ then must the IFS (IIFS) of the attractor, which is similar to $Proj_{\theta}(K_1\times K_2)$, satisfy the open set condition? \end{Question} \section*{Acknowledgements} The work is supported by the National Natural Science Foundation of China (Nos. 11701302, 11671147). The work is also supported by the K.C. Wong Magna Fund in Ningbo University.
\chapter{Preliminaries} \label{chapter:Preliminaries} \section{Some Basic Definitions} \label{section:Some Basic Definitions} Let $F$ be a field. An \textbf{algebra} $A$ over $F$ is an $F$-vector space together with a bilinear map $A \times A \rightarrow A, (x,y) \mapsto xy$, which we call the \textbf{multiplication} of $A$. $A$ is \textbf{associative} if the associative law $(xy)z = x(yz)$ holds for all $x,y,z \in A$. Our algebras are \textbf{nonassociative} in the sense that we do not assume this law and we call algebras in which the associative law fails \textbf{not associative}. $A$ is called \textbf{unital} if it contains a multiplicative identity $1$. We assume throughout this thesis that our algebras are unital without explicitly saying so. Given an algebra $A$, the \textbf{opposite algebra} $A^{\mathrm{op}}$ is the algebra with the same elements and addition operator as $A$, but where multiplication is performed in the reverse order. We say an algebra $0 \neq A$ is a \textbf{left} (resp. \textbf{right}) \textbf{division algebra}, if the left multiplication $L_a : x \mapsto ax$ (resp. the right multiplication $R_a : x \mapsto xa$) is bijective for all $0 \neq a \in A$ and $A$ is a \textbf{division algebra} if it is both a left and a right division algebra. A finite-dimensional algebra is a division algebra if and only if it has no non-trivial zero divisors \cite[p.~12]{schafer1966introduction}, that is $a b = 0$ implies $a = 0$ or $b = 0$ for all $a, b \in A$. Finite division algebras are also called \textbf{finite semifields} in the literature. The \textbf{associator} of three elements of an algebra $A$ is defined to be $[x,y,z] = (xy)z-x(yz)$. 
We then define the \textbf{left nucleus} to be $\mathrm{Nuc}_l (A) = \{ x \in A \ \vert \ [x,A,A] = 0 \}$, the \textbf{middle nucleus} to be $\mathrm{Nuc}_m (A) = \{ x \in A \ \vert \ [A,x,A] = 0 \}$ and the \textbf{right nucleus} to be $\mathrm{Nuc}_r (A) = \{ x \in A \ \vert \ [A,A,x] = 0 \}$; their intersection $\mathrm{Nuc}(A) = \{ x \in A \ \vert \ [A,A,x] = [A,x,A] = [x,A,A] = 0 \}$ is the \textbf{nucleus} of $A$. $\mathrm{Nuc}(A)$, $\mathrm{Nuc}_l(A)$, $\mathrm{Nuc}_m(A)$ and $\mathrm{Nuc}_r(A)$ are all associative subalgebras of $A$. The \textbf{commutator} of $A$ is the set of all elements which commute with every other element, $$\mathrm{Comm}(A) = \{ x \in A \ \vert \ xy = yx \text{ for all } y \in A \}.$$ The \textbf{center} of $A$ is $\mathrm{Cent}(A) = \mathrm{Nuc}(A) \cap \mathrm{Comm}(A).$ If $D$ is an associative division ring, an automorphism $\sigma: D \rightarrow D$ is called an \textbf{inner automorphism} if $\sigma = I_u : x \mapsto u x u^{-1}$ for some $u \in D^{\times}$. The \textbf{inner order} of an automorphism $\sigma$ of $D$, is the smallest positive integer $n$ such that $\sigma^n$ is an inner automorphism. If no such $n$ exists we say $\sigma$ has \textbf{infinite inner order}. A \textbf{left (right) principal ideal domain} is a domain $R$ such that every left (right) ideal in $R$ is of the form $Rf$ ($fR$) for some $f \in R$. We say $R$ is a \textbf{principal ideal domain}, if it is both a left and a right principal ideal domain. Let $R$ be a left principal ideal domain and $0 \neq a, b \in R$. Then there exists $d \in R$ such that $Ra + Rb = Rd$. This implies $a = c_1 d$ and $b = c_2 d$ for some $c_1, c_2 \in R$, so $d$ is a right factor of both $a$ and $b$. We denote this by writing $d \vert_r a$ and $d \vert_r b$. In addition, if $e \vert_r a$ and $e \vert_r b$, then $Ra \subset Re$, $Rb \subset Re$, hence $Rd \subset Re$ and so $e \vert_r d$. Therefore we call $d$ a \textbf{right greatest common divisor} of $a$ and $b$. 
Furthermore, there exists $g \in R$ such that $Ra \cap Rb = Rg$. Then $a \vert_r g$ and $b \vert_r g$. Moreover, if $a \vert_r n$ and $b \vert_r n$, then $Rn \subset Rg$ and so $g \vert_r n$. We call $g$ the \textbf{least common left multiple} of $a$ and $b$. The \textbf{field norm} $N_{K/F} : K \rightarrow F$ of a finite Galois field extension $K/F$ is given by $$N_{K/F}(k) = \prod_{\sigma \in \mathrm{Gal}(K/F)} \sigma(k).$$ In particular, if $K/F$ is a cyclic Galois field extension of degree $m$ with Galois group generated by $\sigma$, then the field norm has the form \begin{equation*} N_{K/F}(k) = k \sigma(k) \cdots \sigma^{m-1}(k). \end{equation*} \section{Skew Polynomial Rings} \label{section:Skew Polynomial Rings} Let $D$ be an associative ring, $\sigma$ be an injective endomorphism of $D$ and $\delta$ be a \textbf{left $\sigma$-derivation} of $D$, i.e. $\delta: D \rightarrow D$ is an additive map and satisfies $$\delta(ab) = \sigma(a) \delta(b) + \delta(a) b,$$ for all $a,b \in D$, in particular $\delta(1) = 0$. Furthermore, an easy induction yields \begin{equation} \label{eqn:goodearl} \delta(a^n) = \sum_{i=0}^{n-1} \sigma(a)^i \delta(a)a^{n-1-i}, \end{equation} for all $a \in D$, $n \in \mathbb{N}$ \cite[Lemma 1.1]{goodearl1992prime}. The set $\mathrm{Const}(\delta) = \{ d \in D \ \vert \ \delta(d) = 0 \}$ of \textbf{$\delta$-constants} forms a subring of $D$; moreover, if $D$ is a division ring, this is a division subring of $D$ \cite[p.~7]{jacobson1996finite}. The following definition is due to Ore \cite{ore1933theory}: \begin{definition} The \textbf{skew polynomial ring} $R = D[t;\sigma,\delta]$ is the set of left polynomials $a_0 + a_1t + a_2 t^2 + \ldots + a_m t^m$ with $a_i \in D$, where addition is defined term-wise, and multiplication by $ta = \sigma(a)t + \delta(a)$. \end{definition} This multiplication makes $R$ into an associative ring \cite[p.~2-3]{jacobson1996finite}.
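To illustrate how the rule $ta = \sigma(a)t + \delta(a)$ propagates through products, consider two polynomials of degree one (a routine computation, included here for concreteness):
\begin{align*}
(a_0 + a_1 t)(b_0 + b_1 t) &= a_0 b_0 + a_0 b_1 t + a_1 (t b_0) + a_1 (t b_1) t \\
&= \big( a_0 b_0 + a_1 \delta(b_0) \big) + \big( a_0 b_1 + a_1 \sigma(b_0) + a_1 \delta(b_1) \big) t + a_1 \sigma(b_1) t^2,
\end{align*}
since $t b_0 = \sigma(b_0) t + \delta(b_0)$ and $(t b_1) t = \sigma(b_1) t^2 + \delta(b_1) t$. In particular, the coefficients of a product involve both $\sigma$ and $\delta$ applied to the coefficients of the right-hand factor.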
If $\delta = 0$, then $D[t;\sigma,0] = D[t;\sigma]$ is called a \textbf{twisted polynomial ring}, and if $\sigma$ is the identity map, then $D[t;\mathrm{id},\delta] = D[t;\delta]$ is called a \textbf{differential polynomial ring}. For the special case that $\delta = 0$ and $\sigma = \mathrm{id}$, we obtain the usual left polynomial ring $D[t] = D[t;\mathrm{id},0]$. Skew polynomial rings are also called Ore extensions in the literature and their properties are well understood. For a thorough introduction to skew polynomial rings see for example \cite[Chapter 2]{cohn1995skew}, \cite[Chapter 1]{jacobson1996finite} and \cite{ore1933theory}. We briefly mention some definitions and properties of skew polynomials which will be useful to us: The associative and distributive laws in $R = D[t;\sigma,\delta]$ yield \begin{equation} \label{eqn:mult in S_f 1} t^n b = \sum_{j=0}^n S_{n,j}(b)t^j, \quad n \geq 0, \end{equation} and \begin{equation} \label{eqn:mult in S_f 2} (bt^n)(ct^m) = \sum_{j=0}^n b(S_{n,j}(c))t^{j+m}, \end{equation} for all $b, c \in D$, where the maps $S_{n,j}: D \rightarrow D$ are defined by the recursion formula \begin{equation} \label{eqn:mult in S_f 3} S_{n,j} = \delta S_{n-1,j} + \sigma S_{n-1,j-1}, \end{equation} with $S_{0,0} = \mathrm{id}_D, S_{1,0} = \delta$, $S_{1,1} = \sigma$ and $S_{n,0} = \delta^n$ \cite[p.~2]{jacobson1996finite}. This means $S_{n,j}$ is the sum of all monomials in $\sigma$ and $\delta$ that are of degree $j$ in $\sigma$ and of degree $n-j$ in $\delta$. In particular $S_{n,n} = \sigma^n$ and if $\delta = 0$ then $S_{n,j} = 0$ for $n \neq j$. We say $f(t) \in R$ is \textbf{right invariant} if $Rf$ is a two-sided ideal in $R$, $f(t)$ is \textbf{left invariant} if $fR$ is a two-sided ideal in $R$, and $f(t)$ is \textbf{invariant} if it is both right and left invariant. 
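Returning to the maps $S_{n,j}$, the recursion formula \eqref{eqn:mult in S_f 3} can be verified directly in the first nontrivial case: expanding $t^2 b$ twice by the rule $tb = \sigma(b)t + \delta(b)$ gives
$$t^2 b = t\big(\sigma(b)t + \delta(b)\big) = \sigma^2(b) t^2 + \big( \delta(\sigma(b)) + \sigma(\delta(b)) \big) t + \delta^2(b),$$
in agreement with $S_{2,2} = \sigma^2$, $S_{2,1} = \delta \sigma + \sigma \delta$ and $S_{2,0} = \delta^2$ obtained from \eqref{eqn:mult in S_f 3}.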
We define the \textbf{degree} of a polynomial $f(t) = a_m t^m + \ldots + a_1 t + a_0 \in R$ with $a_m \neq 0$ to be $\mathrm{deg}(f(t)) = m$ and $\mathrm{deg}(0) = - \infty$. We call $a_m$ the \textbf{leading coefficient} of $f(t)$. Then $\mathrm{deg}(g(t)h(t)) \leq \mathrm{deg}(g(t)) + \mathrm{deg}(h(t))$ for all $g(t), h(t) \in R$, with equality if $D$ is a domain. This implies that when $D$ is a domain, $D[t;\sigma,\delta]$ is also a domain. Henceforth we assume $D$ is a division ring and remark that every endomorphism of $D$ is necessarily injective. Then $R = D[t;\sigma,\delta]$ is a left principal ideal domain and there is a \textbf{right division algorithm} in $R$ \cite[p.~3]{jacobson1996finite}. That is, for all $f(t), g(t) \in R$ with $f(t) \neq 0$ there exist unique $r(t), q(t) \in R$ with $\mathrm{deg}(r(t)) < \mathrm{deg}(f(t))$ such that $g(t) = q(t) f(t) + r(t)$. Here $r(t)$ is the remainder after right division by $f(t)$, and if $r(t) = 0$ we say $f(t)$ \textbf{right divides} $g(t)$ and write $f(t) \vert_r g(t)$. A polynomial $f(t) \in R$ is \textbf{irreducible} if it is not a unit and has no proper factors, i.e. there do not exist $g(t), h(t) \in R$ with $\mathrm{deg}(g(t)), \ \mathrm{deg}(h(t)) < \mathrm{deg}(f(t))$ such that $f(t) = g(t)h(t)$. Two non-zero $f(t), g(t) \in R$ are \textbf{similar} if there exist unique $h, q, u \in R$ such that $1 = hf + qg$ and $u'f = gu$ for some $u' \in R$. Notice that if $f(t)$ is similar to $g(t)$, then $\mathrm{deg}(f(t)) = \mathrm{deg}(g(t))$ \cite[p.~14]{jacobson1996finite}. If $\sigma$ is a ring automorphism, then $R$ is also a right principal ideal domain (hence a principal ideal domain) \cite[Proposition 1.1.14]{jacobson1996finite}, and there exists a left division algorithm in $R$ \cite[p.~3 and Proposition 1.1.14]{jacobson1996finite}.
\label{sigma automorphism right polynomial ring} In this case any right invariant polynomial $f(t)$ is invariant \cite[p.~6]{jacobson1996finite}; furthermore, we can also view $R$ as the set of right polynomials with multiplication defined by $at = t \sigma^{-1}(a) - \delta(\sigma^{-1}(a))$ for all $a \in D$ \cite[(1.1.15)]{jacobson1996finite}. \section{Petit's Algebra Construction} \label{section:Petit's Algebra Construction} In this Section, we describe the construction of a family of nonassociative algebras $S_f$ built using skew polynomial rings. These algebras will be the focus of study of this thesis. They were first introduced in 1966 by Petit \cite{Petit1966-1967}, \cite{petit1968quasi}, and largely ignored until Wene \cite{wene2000finite} and more recently Lavrauw and Sheekey \cite{lavrauw2013semifields} studied them in the context of semifields. Let $D$ be an associative division ring with center $C$, $\sigma$ be an endomorphism of $D$ and $\delta$ be a left $\sigma$-derivation of $D$. \begin{definition}[Petit \cite{Petit1966-1967}] Let $f(t) \in R = D[t;\sigma,\delta]$ be of degree $m$ and $$R_m = \{ g \in R \ \vert \ \mathrm{deg}(g) < m \} .$$ Define a multiplication $\circ$ on $R_m$ by $a \circ b = ab \ \mathrm{mod}_r f,$ where the juxtaposition $ab$ denotes multiplication in $R$, and $\mathrm{mod}_r f$ denotes the remainder after right division by $f(t)$. Then $S_f = (R_m, \circ)$ is a nonassociative algebra over $F = \{ c \in D \ \vert \ c \circ h = h \circ c \text{ for all } h \in S_f \}$. We also call the algebras $S_f$ \textbf{Petit algebras} and denote them by $R/Rf$ if we want to make it clear which ring $R$ is used in the construction. \end{definition} W.l.o.g., we may assume $f(t)$ is monic, since the algebras $S_f$ and $S_{df}$ are equal for all $d \in D^{\times}$.
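To illustrate the multiplication $\circ$, let $f(t) = t^2 - a \in D[t;\sigma]$ and compute the product of two purely linear elements: in $R$ we have $(bt)(ct) = b\sigma(c)t^2$, and right division by $f(t)$ gives $b\sigma(c)t^2 = b\sigma(c) f(t) + b\sigma(c) a$, so that
$$(bt) \circ (ct) = b\sigma(c)t^2 \ \mathrm{mod}_r f = b \sigma(c) a,$$
for all $b, c \in D$.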
We obtain the following straightforward observations: \begin{remarks} \begin{itemize} \item[(i)] If $f(t)$ is right invariant, then $S_f$ is the associative quotient algebra obtained by factoring out the two-sided ideal $Rf$. \item[(ii)] If $\mathrm{deg}(g(t)) + \mathrm{deg}(h(t)) < m$, then the multiplication $g \circ h$ is the usual multiplication of polynomials in $R$. \item[(iii)] If $\mathrm{deg}(f(t)) = 1$ then $R_1 = D$ and $S_f \cong D$. We will assume throughout this thesis that $\mathrm{deg}(f(t)) \geq 2$. \end{itemize} \end{remarks} Note that $F$ is a subfield of $D$ \cite[(7)]{Petit1966-1967}. It is straightforward to see that $F = C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$. Indeed, if $c \in F$ then $c \in D$ and $c \circ h = h \circ c$ for all $h \in S_f$, in particular this means $c \in C$. Furthermore, we have $c \circ t = t \circ c$ so that $ct = \sigma(c) t + \delta(c)$, hence $\sigma(c) = c$ and $\delta(c) = 0$ and thus $F \subset C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$. Conversely, if $c \in C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$ and $h = \sum_{i=0}^{m-1} h_i t^i \in S_f$, then \begin{align*} hc &= \sum_{i=0}^{m-1} h_i t^i c = \sum_{i=0}^{m-1} h_i \sum_{j=0}^{i} S_{i,j}(c) t^j = \sum_{i=0}^{m-1} h_i c t^i = \sum_{i=0}^{m-1} c h_i t^i = ch, \end{align*} because $S_{i,j}(c) = 0$ for $i \neq j$ and $S_{i,i}(c) = \sigma^i(c) = c$. Therefore $c \in F$ as required. \label{page:F=Fix sigma} In the special case where $D$ is commutative and $\delta = 0$, we have $F = \mathrm{Fix}(\sigma)$. \begin{examples} \begin{itemize} \item[(i)] Let $f(t) = t^m - a \in D[t;\sigma]$ where $a \neq 0$. Then the multiplication in $S_f$ is given by $$(bt^i) \circ (ct^j) = \begin{cases} b \sigma^i (c) t^{i+j} & \text{ if } i + j < m, \\ b \sigma^i (c) \sigma^{i+j-m}(a) t^{i+j-m} & \text{ if } i + j \geq m, \end{cases}$$ for all $b, c \in D$ and $i,j \in \{ 0, \ldots, m-1 \}$, then linearly extended.
\item[(ii)] Let $f(t) = t^2 - a_1 t - a_0 \in D[t;\sigma]$. Then multiplication in $S_f$ is given by \begin{align*} (x + yt) \circ (u + vt) &= \big( x u + y \sigma(v) a_0 \big) + \big( x v + y \sigma(u) + y \sigma(v) a_1 \big) t, \end{align*} for all $x, y, u, v \in D$. By identifying $x + y t = (x,y)$ and $u + v t = (u,v)$, the multiplication in $S_f$ can also be written as $$(x + yt) \circ (u + vt) = (x , y) \begin{pmatrix} u & v \\ \sigma(v) a_0 & \sigma(u) + \sigma(v) a_1 \end{pmatrix}.$$ \item[(iii)] Let $f(t) = t^2 - a \in D[t;\sigma, \delta]$. Then multiplication in $S_f$ is given by $$(x + yt) \circ (u + vt) = (x , y) \begin{pmatrix} u & v \\ \sigma(v) a + \delta(u) & \sigma(u) + \delta(v) \end{pmatrix},$$ for all $x, y, u, v \in D$. \end{itemize} \end{examples} When $\sigma$ is an automorphism there is also a left division algorithm in $R$, and we can define a second algebra construction: Let $f(t) \in R$ be of degree $m$ and denote by $\mathrm{mod}_l f$ the remainder after left division by $f(t)$. Then $R_m$ together with the multiplication $a \prescript{}{f}\circ \ b = ab \ \mathrm{mod}_l f$ becomes a nonassociative algebra $\prescript{}{f}S$ over $F$, also denoted $R/fR$. It suffices to study the algebras $S_f$, as every algebra $\prescript{}{f}S$ is the opposite algebra of some Petit algebra: \begin{proposition} \label{prop:_fS opposite algebra of some S_g} (\cite[(1)]{Petit1966-1967}). Suppose $\sigma \in \mathrm{Aut}(D)$ and $f(t) \in D[t;\sigma,\delta]$.
The canonical anti-isomorphism $$\psi: D[t;\sigma,\delta] \rightarrow D^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}], \ \ \sum_{i=0}^{n} a_i t^i \mapsto \sum_{i=0}^{n} t^i a_i,$$ between the skew polynomial rings $D[t;\sigma,\delta]$ and $D^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}]$, induces an anti-isomorphism between $S_f = D[t;\sigma,\delta]/D[t;\sigma,\delta]f$, and $$_{\psi(f)}S = D^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}]/\psi(f)D^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}].$$ \end{proposition} \section{Relation of \texorpdfstring{$S_f$}{S\_f} to other Known Constructions} \label{section:Relation of S_f to Other Known Constructions} We now show connections between Petit algebras and some other known constructions of algebras. \subsection{Nonassociative Cyclic Algebras} \label{section:Nonassociative Cyclic Algebras} Let $K/F$ be a cyclic Galois field extension of degree $m$ with $\mathrm{Gal}(K/F) = \langle \sigma \rangle$ and $f(t) = t^m - a \in K[t;\sigma]$. Then $$A = (K/F,\sigma,a) = K[t;\sigma]/K[t;\sigma]f(t)$$ is called a \textbf{nonassociative cyclic algebra of degree $m$} over $F$. The multiplication in $A$ is associative if and only if $a \in F$, in which case $A$ is a classical associative cyclic algebra over $F$. Nonassociative cyclic algebras were studied in detail by Steele in his Ph.D. thesis \cite{AndrewPhD}. We remark that our definition of nonassociative cyclic algebras yields the opposite algebras to the ones studied by Steele in \cite{AndrewPhD}. Moreover, if $K$ is a finite field, then $A$ is an example of a Sandler semifield \cite{sandler1962autotopism}. Nonassociative cyclic algebras of degree $2$ are \textbf{nonassociative quaternion algebras}.
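For a concrete instance of this definition, take $K = \mathbb{C}$, $F = \mathbb{R}$ and let $\sigma$ be complex conjugation, so that $\mathrm{Gal}(\mathbb{C}/\mathbb{R}) = \langle \sigma \rangle$. Choosing any $a \in \mathbb{C} \setminus \mathbb{R}$ and $f(t) = t^2 - a \in \mathbb{C}[t;\sigma]$, the multiplication in $(\mathbb{C}/\mathbb{R},\sigma,a)$ becomes
$$(x + yt) \circ (u + vt) = (xu + y \bar{v} a) + (xv + y \bar{u}) t,$$
for all $x, y, u, v \in \mathbb{C}$, which gives a nonassociative quaternion algebra over $\mathbb{R}$.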
These algebras were first studied in 1935 by Dickson \cite{dickson1935linear} and subsequently by Althoen, Hansen and Kugler \cite{althoen1986c} over $\mathbb{R}$; however, the first systematic study was carried out by Waterhouse \cite{waterhouse}. Classical associative quaternion algebras of characteristic not $2$ are precisely associative cyclic algebras of degree $2$. That is, they have the form $(K/F,\sigma,a)$ where $K/F$ is a quadratic separable field extension with non-trivial automorphism $\sigma$, $\mathrm{char}(F) \neq 2$ and $a \in F^{\times}$. Thus the only difference in defining nonassociative quaternion algebras is that the element $a$ belongs to the larger field $K$. \subsection{(Nonassociative) Generalised Cyclic Algebras} \label{section:Generalised Cyclic Algebras} Let $D$ be an associative division algebra of degree $n$ over its center $C$ and $\sigma$ be an automorphism of $D$ such that $\sigma \vert_C$ has finite order $m$ and fixed field $F = C \cap \mathrm{Fix}(\sigma)$. A \textbf{nonassociative generalised cyclic algebra} of degree $mn$ is an algebra $S_f = D[t;\sigma]/D[t;\sigma]f(t)$ over $F$ with $f(t) = t^m-a \in D[t;\sigma]$, $a \in D^{\times}$. We denote this algebra $(D,\sigma,a)$ and note that it has dimension $n^2 m^2$ over $F$. Note that when $D = K$ is a field and $K/F$ is a cyclic field extension with Galois group generated by $\sigma$, we obtain the nonassociative cyclic algebra $(K/F,\sigma,a)$. In the special case where $a \in F^{\times}$, the polynomial $f(t) = t^m-a \in D[t;\sigma]$ is right invariant and $(D,\sigma,a)$ is the associative \textbf{generalised cyclic algebra} defined by Jacobson \cite[p.~19]{jacobson1996finite}. \subsection{(Nonassociative) Generalised Differential Algebras} Let $C$ be a field of characteristic $p$ and $D$ be an associative central division algebra over $C$ of degree $n$.
Suppose $\delta$ is a derivation of $D$ such that $\delta \vert_C$ is algebraic, that is, there exists a $p$-polynomial $$g(t) = t^{p^e} + c_1 t^{p^{e-1}} + \ldots + c_e t \in F[t],$$ where $F = C \cap \mathrm{Const}(\delta)$, such that $g(\delta) = 0$. Suppose $g(t)$ is chosen with $e$ minimal. Given $d \in D$ and $f(t) = g(t) - d \in D[t;\delta]$, the algebra $S_f = D[t;\delta]/D[t;\delta]f(t)$ is called a \textbf{(nonassociative) generalised differential algebra} and also denoted $(D,\delta,d)$ \cite{pumpluen2016nonassociative}. $(D,\delta,d)$ is a nonassociative algebra over $F$ of dimension $p^{2e} n^2$; moreover, $(D,\delta,d)$ is associative if and only if $d \in F$. When $d \in F$, $(D,\delta,d)$ is central simple over $F$ and is called the \textbf{generalised differential extension} of $D$ in \cite[p.~23]{jacobson1996finite}. \vspace*{4mm} \noindent For the remainder of this Section we consider examples of finite-dimensional division algebras over finite fields. These are called (finite) semifields in the literature. \subsection{Hughes-Kleinfeld and Knuth Semifields} \label{section:Hughes-Kleinfeld and Knuth Semifields} Let $K$ be a finite field and $\sigma$ be a non-trivial automorphism of $K$. Choose $a_0, a_1 \in K$ such that the equation $w \sigma(w) + a_1 w - a_0 = 0$ has no solution $w \in K$. In \cite[p.~215]{knuth1965finite}, Knuth defined four classes of semifields two-dimensional over $K$ with unit element $(1,0)$. Their multiplications are defined by \begin{align*} & (\text{K}1): (x,y)(u,v) = \big( x u + a_0 \sigma^{-2}(y) \sigma(v) , \ x v + y \sigma(u) + a_1 \sigma^{-1}(y) \sigma(v) \big) , \\ & (\text{K}2): (x,y)(u,v) = \big( x u + a_0 \sigma^{-2}(y) \sigma^{-1}(v) , \ x v + y \sigma(u) + a_1 \sigma^{-1}(y) v \big) , \\ & (\text{K}3): (x,y)(u,v) = \big( x u + a_0 y \sigma^{-1}(v) , \ x v + y \sigma(u) + a_1 y v \big) , \\ & (\text{K}4): (x,y)(u,v) = \big( x u + a_0 y \sigma(v) , \ x v + y \sigma(u) + a_1 y \sigma(v) \big) .
\end{align*} The class of semifields defined by the multiplication $(\text{K}4)$ was first discovered by Hughes and Kleinfeld \cite{hughes1960seminuclear}; these semifields are called \textbf{Hughes-Kleinfeld semifields}. The classes of semifields defined by the multiplications $(\text{K}2)$ and $(\text{K}4)$ can be obtained using Petit's construction: \begin{theorem} \label{thm:S_f as Hughes Kleinfeld and Knuth Semifield} (\cite[Theorem 5.15]{sheekey2011rank}). Let $f(t) = t^2 - a_1 t - a_0 \in K[t;\sigma]$ where $w \sigma(w) + a_1w - a_0 \neq 0$ for all $w \in K$. \begin{itemize} \item[(i)] $\prescript{}{f}S$ is isomorphic to the semifield $(\text{K}2)$. \item[(ii)] $S_f$ is isomorphic to the semifield $(\text{K}4)$. \end{itemize} \end{theorem} There is a small mistake in \cite[p.~63 (5.1) and (5.2)]{sheekey2011rank}, where the multiplication of two elements of $S_f$ is stated incorrectly. We give the full proof to avoid confusion. \begin{proof} \begin{itemize} \item[(i)] The multiplication in $\prescript{}{f}S$ is given by \begin{align*} (x + y t) \prescript{}{f}\circ &(u + v t) = x u + x v t + y \sigma(u) t + t^2 \sigma^{-2}(y)\sigma^{-1}(v) \\ &= x u + x v t + y \sigma(u) t + (a_1 t + a_0) \sigma^{-2}(y)\sigma^{-1}(v) \\ &= x u + x v t + y \sigma(u) t + a_1 \sigma^{-1}(y) v t + a_0 \sigma^{-2}(y)\sigma^{-1}(v) \\ &= \big( x u + a_0 \sigma^{-2}(y)\sigma^{-1}(v) \big) + \big( x v + y \sigma(u) + a_1 \sigma^{-1}(y) v \big) t, \end{align*} for all $x, y, u, v \in K$. Therefore the map $\psi: \prescript{}{f}S \rightarrow (\text{K}2), \ x + y t \mapsto (x,y)$ can easily be seen to be an isomorphism. \item[(ii)] The multiplication in $S_f$ is given by \begin{align*} (x + y t) \circ (u + vt) &= x u + x v t + y \sigma(u) t + y \sigma(v) t^2 \\ &= x u + x v t + y \sigma(u) t + y \sigma(v) (a_1 t + a_0) \\ &= \big( x u + y \sigma(v) a_0 \big) + \big( x v + y \sigma(u) + y \sigma(v) a_1 \big) t, \end{align*} for all $x, y, u, v \in K$.
Therefore the map $\phi: S_f \rightarrow (\text{K}4), \ x + y t \mapsto (x,y)$ can readily be seen to be an isomorphism. \end{itemize} \end{proof} \subsection{Jha-Johnson Semifields} Jha-Johnson semifields, also called cyclic semifields, are built using irreducible semilinear transformations and generalise the Sandler and Hughes-Kleinfeld semifields. \begin{definition} Let $K$ be a field. An additive map $T: V \rightarrow V$ on a vector space $V = K^m$ is called a \textbf{semilinear transformation} if there exists $\sigma \in \mathrm{Aut}(K)$ such that $T(\lambda v) = \sigma(\lambda)T(v),$ for all $\lambda \in K, v \in V$. The set of invertible semilinear transformations on $V$ forms a group called the \textbf{general semilinear group}, denoted by $\Gamma \text{L}(V)$. An element $T \in \Gamma \text{L}(V)$ is said to be \textbf{irreducible} if the only $T$-invariant subspaces of $V$ are $V$ and $\{ 0 \}$. \end{definition} Suppose now $K$ is a finite field. \begin{definition}[\cite{jha1989analog}] Let $T \in \Gamma \text{L}(V)$ be irreducible and fix a $K$-basis $\{ e_0, \ldots , e_{m-1} \}$ of $V$. Define a multiplication $\cdot$ on $V$ by $$a \cdot b = a(T)b = \sum_{i=0}^{m-1} a_i T^i(b),$$ where $a = \sum_{i=0}^{m-1} a_i e_i$. Then $\mathbb{S}_T = (V, \cdot)$ defines a \textbf{Jha-Johnson semifield}. \end{definition} Let $L_{t,f}$ denote the semilinear transformation $v \mapsto tv \ \mathrm{mod}_r f$. \begin{theorem} \label{thm:Jha-Johnson_is_S_f} (\cite[Theorems 15 and 16]{lavrauw2013semifields}). If $\sigma$ is an automorphism of $K$ and $f(t) \in K[t;\sigma]$ is irreducible then $S_f \cong \mathbb{S}_{L_{t,f}}$.
Conversely, if $T$ is any irreducible element of $\Gamma \text{L}(V)$ with automorphism $\sigma$, $\mathbb{S}_T$ is isotopic to $S_f$ for some irreducible $f(t) \in K[t;\sigma]$. \end{theorem} This means that every Petit algebra $S_f$ with $f(t) \in K[t;\sigma]$ irreducible is a Jha-Johnson semifield, and every Jha-Johnson semifield is isotopic to some $S_f$. \chapter{The Structure of Petit Algebras} \label{chapter:The Structure of Petit Algebras} In the following, let $D$ be an associative division ring with center $C$, $\sigma$ be an endomorphism of $D$, $\delta$ be a left $\sigma$-derivation of $D$ and $f(t) \in R = D[t;\sigma,\delta]$. Recall $S_f$ is a nonassociative algebra over $F = C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$. \section{Some Structure Theory} \label{section:Some Structure Theory} In this Section we investigate the structure theory of Petit algebras. We begin by summarising some of the structure results stated by Petit in \cite{Petit1966-1967}: \begin{theorem} \label{thm:Properties of S_f petit} (\cite[(2), (5), (1), (14), (15)]{Petit1966-1967}). Let $f(t) \in R$ be of degree $m$. \begin{itemize} \item[(i)] If $S_f$ is not associative then $$\mathrm{Nuc}_l(S_f) = \mathrm{Nuc}_m(S_f) = D,$$ and $$\mathrm{Nuc}_r(S_f) = \{ g \in R \ \vert \ \mathrm{deg}(g) < m \text{ and } fg \in Rf \}.$$ \item[(ii)] The powers of $t$ are associative if and only if $t^m \circ t = t \circ t^m$, if and only if $t \in \mathrm{Nuc}_r(S_f)$. \item[(iii)] $S_f$ is associative if and only if $f(t)$ is right invariant. \item[(iv)] Suppose $\delta = 0$. Then $\mathrm{Comm}(S_f)$ contains the set \begin{equation} \label{eqn:Comm(S_f)} \Big\{ \sum_{i=0}^{m-1} c_i t^i \ \vert \ c_i \in \mathrm{Fix}(\sigma) \text{ and } d c_i = c_i \sigma^i(d) \text{ for all } d \in D, \ i = 0, \ldots, m-1 \Big\}. \end{equation} If $t$ is left invertible, the two sets are equal. \item[(v)] Suppose $\delta = 0$ and $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in D[t;\sigma]$.
Then $f(t)$ is right invariant if and only if $\sigma^m (z) a_i = a_i \sigma^i(z)$ and $\sigma(a_i) = a_i$ for all $z \in D$, $i \in \{ 0, \ldots, m-1 \}$. \end{itemize} \end{theorem} The nuclei of $S_f$ were also calculated for special cases by Dempwolff in \cite[Proposition 3.3]{dempwolff2011autotopism}. \begin{remark} (\cite[Remark 9]{pumplun2015finite}). If $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in D[t;\sigma]$, then $t$ being left invertible is equivalent to $a_0 \neq 0$. Indeed, if $a_0 = 0$ and there exist $g(t) \in R_m$ and $q(t) \in R$ such that $g(t)t = q(t)f(t)+1$, then the left side of the equation has constant term $0$, while the right hand side has constant term $1$, a contradiction. Conversely, if $a_0 \neq 0$ then defining $g(t) = a_0^{-1} t^{m-1} - \sum_{i=0}^{m-2} a_0^{-1} a_{i+1} t^i,$ we conclude $g(t)t = a_0^{-1}f(t) + 1$, therefore $g(t) \circ t = 1$ and $t$ is left invertible in $S_f$. \end{remark} \begin{corollary} \label{cor:Comm(S_f) = F} Suppose $\sigma \in \mathrm{Aut}(D)$ is such that $\sigma \vert_C$ has order at least $m$ or infinite order, $f(t) \in D[t;\sigma]$ has degree $m$ and $t \in S_f$ is left invertible. Then $\mathrm{Comm}(S_f) = F = C \cap \mathrm{Fix}(\sigma).$ \end{corollary} \begin{proof} $\mathrm{Comm}(S_f)$ is equal to the set \eqref{eqn:Comm(S_f)} by Theorem \ref{thm:Properties of S_f petit}(iv), in particular $F = C \cap \mathrm{Fix}(\sigma) \subseteq \mathrm{Comm}(S_f)$. Now let $\sum_{i=0}^{m-1} c_i t^i \in \mathrm{Comm}(S_f)$ and suppose, for contradiction, $c_j \neq 0$ for some $j \in \{ 1, \ldots, m-1 \}$. Then $b c_j = c_j \sigma^j(b)$ for all $b \in D$, thus $(b-\sigma^j(b))c_j = 0$ for all $b \in C$ and so $b - \sigma^j(b) = 0$ for all $b \in C$, a contradiction since $\sigma \vert_C$ has order $\geq m$. Therefore $\sum_{i=0}^{m-1} c_i t^i = c_0$ and $c_0 \in F$ by \eqref{eqn:Comm(S_f)}.
\end{proof} \begin{proposition} Let $L$ be a division subring of $D$ such that $\sigma \vert_L$ is an endomorphism of $L$ and $\delta \vert_L$ is a $\sigma \vert_L$-derivation of $L$. If $f(t) \in L[t;\sigma \vert_L,\delta \vert_L]$ then $L[t;\sigma \vert_L, \delta \vert_L] / L[t;\sigma \vert_L, \delta \vert_L]f(t)$ is a subring of $S_f = D[t;\sigma,\delta]/D[t;\sigma,\delta]f(t)$. \end{proposition} \begin{proof} Clearly $L[t;\sigma \vert_L, \delta \vert_L] / L[t;\sigma \vert_L, \delta \vert_L]f(t)$ is a subset of $S_f$ and is a ring in its own right. Additionally, $L[t;\sigma \vert_L, \delta \vert_L] / L[t;\sigma \vert_L, \delta \vert_L]f(t)$ inherits the multiplication in $S_f$ by the uniqueness of right division in $L[t;\sigma \vert_L, \delta \vert_L]$ and in $D[t;\sigma,\delta]$. \end{proof} Given $f(t) \in R = D[t;\sigma,\delta]$ of degree $m$, the \textbf{idealizer} $I(f) = \{ g \in R \ \vert \ fg \in Rf \}$ is the largest subalgebra of $R$ in which $Rf$ is a two-sided ideal. We then define the \textbf{eigenring} of $f(t)$ as the quotient $E(f) = I(f)/Rf$. Therefore the eigenring $$E(f) = \{ g \in R \ \vert \ \mathrm{deg}(g) < m \text{ and } fg \in Rf \}$$ is equal to the right nucleus $\mathrm{Nuc}_r(S_f)$ by Theorem \ref{thm:Properties of S_f petit}(i), which, as the right nucleus, is an associative subalgebra of $S_f$. By Theorem \ref{thm:Properties of S_f petit}(i) we obtain: \begin{corollary} \label{cor:E(f)=S_f iff associative, S_f central} Let $f(t) \in R$. \begin{itemize} \item[(i)] $E(f) = S_f$ if and only if $f(t)$ is right invariant, if and only if $S_f$ is associative. \item[(ii)] If $f(t)$ is not right invariant then $\mathrm{Cent}(S_f) = F$. \end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] If $f(t)$ is not right invariant then $E(f) = \mathrm{Nuc}_r(S_f) \neq S_f$ by Theorem \ref{thm:Properties of S_f petit}(i).
On the other hand, if $f(t)$ is right invariant, then $Rf$ is a two-sided ideal, hence $f \vert_r fg$ for all $g \in R_m$ and so $E(f) = S_f$. \item[(ii)] We have \begin{align*} \mathrm{Cent}(S_f) &= \mathrm{Comm}(S_f) \cap \mathrm{Nuc}(S_f) = \mathrm{Comm}(S_f) \cap D \cap E(f) \\ &= F \cap E(f) = F, \end{align*} by Theorem \ref{thm:Properties of S_f petit}. \end{itemize} \end{proof} \begin{example} If $K$ is a finite field, $\sigma$ is an automorphism of $K$ and $f(t) \in K[t;\sigma]$ is irreducible of degree $m$, then $$E(f) = \{ u \ \mathrm{mod}_r f \ \vert \ u \in \mathrm{Cent}(K[t;\sigma]) \} \cong \mathbb{F}_{q^m},$$ where $\mathbb{F}_q = \mathrm{Fix}(\sigma)$, by \cite[p.~9]{lavrauw2013semifields}. \end{example} \begin{theorem} \label{thm:Fix(sigma) right nucleus} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in D[t;\sigma,\delta]$ be such that $f(t)$ is not right invariant and $a_j \in \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$ for all $j \in \{ 0, \ldots, m-1 \}$. Then the set \begin{equation} \label{eqn:Fix(sigma) right nucleus} \Big\{ \sum_{i=0}^{m-1} k_i t^i \ \vert \ k_i \in F = C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta) \Big\}, \end{equation} is contained in $\mathrm{Nuc}_r(S_f)$. \end{theorem} \begin{proof} Clearly $F \subseteq \mathrm{Nuc}_r(S_f)$; since $\mathrm{Nuc}_r(S_f)$ is a subalgebra of $S_f$, it therefore suffices to show $t \in \mathrm{Nuc}_r(S_f)$ for \eqref{eqn:Fix(sigma) right nucleus} to be contained in $\mathrm{Nuc}_r(S_f)$. To this end, we calculate $$t^m \circ t = \Big( \sum_{i=0}^{m-1} a_i t^i \Big) \circ t = \sum_{i=0}^{m-2} a_i t^{i+1} + a_{m-1} \sum_{i=0}^{m-1} a_i t^i,$$ and \begin{align*} t \circ t^m &= t \circ \Big( \sum_{i=0}^{m-1} a_i t^i \Big) = t \circ a_{m-1} t^{m-1} + t \circ \sum_{i=0}^{m-2} a_i t^i \\ &= \big( \sigma(a_{m-1})t + \delta(a_{m-1}) \big) \circ t^{m-1} + \sum_{i=0}^{m-2} \big( \sigma(a_i)t + \delta(a_i) \big) t^i \\ &= a_{m-1} \sum_{i=0}^{m-1} a_i t^i \ + \ \sum_{i=0}^{m-2} a_i t^{i+1}, \end{align*} since $a_0, a_1, \ldots, a_{m-1} \in \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$.
Therefore $t^m \circ t = t \circ t^m$, which yields $t \in \mathrm{Nuc}_r(S_f)$ by Theorem \ref{thm:Properties of S_f petit}(ii). \end{proof} If we assume additionally $a_j \in C$ in Theorem \ref{thm:Fix(sigma) right nucleus}, i.e. we assume $f(t) \in F[t]$, then we obtain: \begin{corollary} \label{cor:F right nucleus} Let $f(t) \in F[t] = F[t;\sigma,\delta] \subset D[t;\sigma,\delta]$ be of degree $m$ and not right invariant. Then the set \eqref{eqn:Fix(sigma) right nucleus} is a commutative subalgebra of $\mathrm{Nuc}_r(S_f)$. Here \eqref{eqn:Fix(sigma) right nucleus} equals $F[t]/F[t]f(t)$. Furthermore, if $f(t)$ is irreducible in $F[t]$, the set \eqref{eqn:Fix(sigma) right nucleus} is a field. \end{corollary} \begin{proof} $S_f$ contains the commutative subalgebra $F[t]/F[t]f(t)$ which is isomorphic to \eqref{eqn:Fix(sigma) right nucleus} because $f(t) \in F[t]$. Now, \eqref{eqn:Fix(sigma) right nucleus} is contained in $\mathrm{Nuc}_r(S_f)$ by Theorem \ref{thm:Fix(sigma) right nucleus} and thus if $f(t)$ is irreducible in $F[t]$, then $F[t]/F[t]f(t)$ is a field. \end{proof} When $\sigma = \mathrm{id}$, Corollary \ref{cor:F right nucleus} is precisely \cite[Proposition 2]{pumpluen2016nonassociative}. \begin{remark} Suppose $K$ is a finite field, $\sigma$ is an automorphism of $K$ and $F = \mathrm{Fix}(\sigma)$. If $f(t) \in F[t] \subset K[t;\sigma]$ is irreducible and not right invariant, then \eqref{eqn:Fix(sigma) right nucleus} is equal to $\mathrm{Nuc}_r(S_f)$ \cite[Theorem 3.2]{wene2000finite}. \end{remark} \section{So-called Right Semi-Invariant Polynomials} \label{section:Right Semi-Invariant Polynomials} As in the previous Section, suppose $D$ is a division ring with center $C$, $\sigma$ is an endomorphism of $D$, $\delta$ is a left $\sigma$-derivation of $D$ and $f(t) \in R = D[t;\sigma,\delta]$. 
We now investigate conditions for $D$ to be contained in the right nucleus of $S_f$; in this case either $S_f$ is associative or $\mathrm{Nuc}(S_f) = D$ by Theorem \ref{thm:Properties of S_f petit}(i). We do this by looking at so-called right semi-invariant polynomials: \begin{definition} (\cite{lam1988algebraic}, \cite{lam1989invariant}). A polynomial $f(t) \in R$ is called \textbf{right semi-invariant} if $f(t)D \subseteq Df(t)$. Similarly, $f(t)$ is \textbf{left semi-invariant} if $Df(t) \subseteq f(t)D$. \end{definition} Note that $f(t)$ is right semi-invariant if and only if $df(t)$ is right semi-invariant for all $d \in D^{\times}$ \cite[p.~8]{lam1988algebraic}. For this reason it suffices to consider only monic $f(t)$. Furthermore, if $\sigma$ is an automorphism, then $f(t)$ is right semi-invariant if and only if it is left semi-invariant, if and only if $f(t)D = Df(t)$ \cite[Proposition 2.7]{lam1988algebraic}. For a thorough background on right semi-invariant polynomials we refer the reader to \cite{lam1988algebraic} and \cite{lam1989invariant}. Our interest in right semi-invariant polynomials stems from the following result: \begin{theorem} \label{thm:semi-invariant iff D contained in E(f)} $f(t) \in R$ is right semi-invariant if and only if $D \subseteq \mathrm{Nuc}_r(S_f)$. In particular, if $f(t)$ is right semi-invariant, then either $\mathrm{Nuc}(S_f) = D$ or $S_f$ is associative. \end{theorem} \begin{proof} If $f(t) \in R$ is right semi-invariant, then $f(t)D \subseteq Df(t) \subseteq Rf(t)$ and hence $D \subseteq E(f) = \mathrm{Nuc}_r(S_f)$. Conversely, if $D \subseteq \mathrm{Nuc}_r(S_f) = E(f)$ then for all $d \in D$, there exists $q(t) \in R$ such that $f(t)d = q(t)f(t)$. Comparing degrees, we see $q(t) \in D$ and thus $f(t)D \subseteq Df(t)$. The second assertion follows by Theorem \ref{thm:Properties of S_f petit}(i).
\end{proof} \noindent The following result on the existence of a non-constant right semi-invariant polynomial is due to Lemonnier \cite{lemonnier1978dimension}: \begin{proposition} \label{prop:lemonnier semi-invariant} (\cite[(9.21)]{lemonnier1978dimension}). Suppose $\sigma$ is an automorphism of $D$, then the following are equivalent: \begin{itemize} \item[(i)] There exists a non-constant right semi-invariant polynomial in $R$. \item[(ii)] $R$ is not simple. \item[(iii)] There exist $b_0, \ldots, b_n \in D$ with $b_n \neq 0$ such that $b_0 \delta_{c, \theta} + \sum_{i=1}^{n} b_i \delta^i = 0$, where $\theta$ is an endomorphism of $D$ and $\delta_{c,\theta}$ denotes the $\theta$-derivation of $D$ sending $x \in D$ to $c x - \theta(x) c$. \end{itemize} \end{proposition} Combining Theorem \ref{thm:semi-invariant iff D contained in E(f)} and Proposition \ref{prop:lemonnier semi-invariant} we conclude: \begin{corollary} \label{cor:simple and semi-invariant implies invariant} Suppose $\sigma$ is an automorphism of $D$ and $R$ is simple. Then there are no nonassociative algebras $S_f$ with $D \subseteq \mathrm{Nuc}_r(S_f)$. In particular there are no nonassociative algebras $S_f$ with $D \subseteq \mathrm{Nuc}(S_f)$. \end{corollary} \begin{proof} $R$ is not simple if and only if there exists a non-constant right semi-invariant polynomial in $R$ by Proposition \ref{prop:lemonnier semi-invariant}, and hence the assertion follows by Theorem \ref{thm:semi-invariant iff D contained in E(f)}. \end{proof} Theorem \ref{thm:semi-invariant iff D contained in E(f)} allows us to rephrase some of the results on semi-invariant polynomials in \cite{lam1988algebraic} and \cite{lam1989invariant}, in terms of the right nucleus of $S_f$: \begin{theorem} \label{thm:right semi invariant conditions} (\cite[Lemma 2.2, Corollary 2.12, Propositions 2.3 and 2.4]{lam1988algebraic}, \cite[Corollary 2.6]{lam1989invariant}). Let $f(t) = \sum_{i=0}^{m} a_i t^i \in R$ be monic of degree $m$. 
\begin{itemize} \item[(i)] $D \subseteq \mathrm{Nuc}_r(S_f)$ if and only if $f(t)c = \sigma^m(c) f(t)$ for all $c \in D$, if and only if \begin{equation} \label{eqn:right semi-invariant 1} \sigma^m(c)a_j = \sum_{i=j}^{m} a_i S_{i,j}(c) \end{equation} for all $c \in D$ and $j \in \{ 0, \ldots, m-1 \}$, where $S_{i,j}$ is defined as in \eqref{eqn:mult in S_f 3}. \item[(ii)] Suppose $\sigma$ is an automorphism of $D$ of infinite inner order. Then $D \subseteq \mathrm{Nuc}_r(S_f)$ implies $S_f$ is associative. \item[(iii)] Suppose $\delta = 0$. Then $D \subseteq \mathrm{Nuc}_r(S_f)$ if and only if \begin{equation} \label{eqn:right semi-invariant 2} \sigma^m(c) = a_j \sigma^j(c) a_j^{-1} \end{equation} for all $c \in D$ and all $j \in \{ 0, \ldots, m-1 \}$ with $a_j \neq 0$. Furthermore, $S_f$ is associative if and only if $f(t)$ satisfies \eqref{eqn:right semi-invariant 2} and $a_j \in \mathrm{Fix}(\sigma)$ for all $j \in \{ 0, \ldots, m-1 \}$. \item[(iv)] Suppose $\delta = 0$ and $\sigma$ is an automorphism of $D$ of finite inner order $k$, i.e. $\sigma^k = I_u$ for some $u \in D^{\times}$. The polynomials $g(t) \in D[t;\sigma]$ such that $D \subseteq \mathrm{Nuc}_r(S_g)$ are precisely those of the form \begin{equation} \label{eqn:right semi-invariant 4} b \sum_{j=0}^{n} c_j u^{n-j}t^{jk}, \end{equation} where $n \in \mathbb{N}$, $c_n = 1$, $c_j \in C$ and $b \in D^{\times}$. Furthermore, $S_g$ is associative if and only if $g(t)$ has the form \eqref{eqn:right semi-invariant 4} and $c_j u^{n-j} \in \mathrm{Fix}(\sigma)$ for all $j \in \{ 0, \ldots, n \}$. \item[(v)] Suppose $\sigma = \mathrm{id}$. Then $D \subseteq \mathrm{Nuc}_r(S_f)$ is equivalent to \begin{equation} \label{eqn:right semi-invariant 3} c a_j = \sum_{i=j}^{m} \binom{i}{j} a_i \delta^{i-j}(c), \end{equation} for all $c \in D$, $j \in \{ 0, \ldots, m-1 \}$. 
Furthermore, $S_f$ is associative if and only if $f(t)$ satisfies \eqref{eqn:right semi-invariant 3} and $a_j \in \mathrm{Const}(\delta)$ for all $j \in \{ 0, \ldots, m-1 \}$. \end{itemize} \end{theorem} Theorem \ref{thm:right semi invariant conditions}(iii) provides us with an alternate proof of \cite[Corollary 3.2.6]{AndrewPhD} about the nucleus of nonassociative cyclic algebras: \begin{corollary} \label{cor:Nucleus of Nonassociative cyclic algebra} (\cite[Corollary 3.2.6]{AndrewPhD}). Let $A = (K/F,\sigma,a)$ be a nonassociative cyclic algebra of degree $m$ for some $a \in K \setminus F$. Then $\mathrm{Nuc}(A) = K$. \end{corollary} \begin{proof} Notice $A = K[t;\sigma]/K[t;\sigma](t^m -a)$ and $t^m -a$ is right semi-invariant by Theorem \ref{thm:right semi invariant conditions}(iii). Hence $K \subseteq \mathrm{Nuc}_r(A)$ by Theorem \ref{thm:semi-invariant iff D contained in E(f)}. Since $a \notin F$, $A$ is not associative, so $\mathrm{Nuc}_l(A) = \mathrm{Nuc}_m(A) = K$ by Theorem \ref{thm:Properties of S_f petit}(i) and therefore $\mathrm{Nuc}(A) = K$. \end{proof} Let $L$ be a division subring of $D$. Then we can look for conditions for $L \subseteq \mathrm{Nuc}_r(S_f)$ by generalising the definition of right semi-invariant polynomials as follows: We say $f(t) \in D[t;\sigma,\delta]$ is \textbf{$L$-weak semi-invariant} if $f(t)L \subseteq D f(t)$. Clearly any right semi-invariant polynomial is also $L$-weak semi-invariant for every division subring $L$ of $D$. Moreover we obtain: \begin{proposition} \label{prop:L-weak semi-invariant iff L subset Nuc_r(S_f)} $f(t)$ is $L$-weak semi-invariant if and only if $L \subseteq E(f) = \mathrm{Nuc}_r(S_f)$. If $f(t)$ is $L$-weak semi-invariant but not right invariant, then $L \subseteq \mathrm{Nuc}(S_f) \subseteq D$. \end{proposition} \begin{proof} If $f(t) \in R$ is $L$-weak semi-invariant, then $f(t)L \subseteq Df(t) \subseteq Rf(t)$ and hence $L \subseteq E(f)$. Conversely, if $L \subseteq E(f)$ then for all $l \in L$, there exists $q(t) \in R$ such that $f(t)l = q(t)f(t)$. Comparing degrees, we see $q(t) \in D$ and thus $f(t)L \subseteq Df(t)$.
Hence if $f(t)$ is $L$-weak semi-invariant but not right invariant, then $$L \subseteq \mathrm{Nuc}(S_f) = E(f) \cap D \subseteq D$$ by Theorem \ref{thm:Properties of S_f petit}, which yields the second assertion. \end{proof} \begin{example} Let $K$ be a field, $\sigma$ be a non-trivial automorphism of $K$, $L = \mathrm{Fix}(\sigma^j)$ be the fixed field of $\sigma^j$ for some $j >1$ and $f(t) = \sum_{i=0}^{n} a_i t^{ij} \in K[t;\sigma]$. Then \begin{align*} f(t)l &= \sum_{i=0}^{n} a_i t^{ij} l = \sum_{i=0}^{n} a_i \sigma^{ij}(l) t^{ij} = \sum_{i=0}^{n} a_i l t^{ij} = l f(t), \end{align*} for all $l \in L$ and hence $f(t) L \subseteq L f(t)$. In particular, $f(t)$ is $L$-weak semi-invariant. \end{example} It turns out that results similar to Theorem \ref{thm:right semi invariant conditions}(i), (iii) and (v) also hold for $L$-weak semi-invariant polynomials: \begin{proposition} \label{prop:L-weak semi invariant conditions} Let $f(t) = \sum_{i=0}^{m} a_i t^i \in D[t;\sigma,\delta]$ be monic of degree $m$ and $L$ be a division subring of $D$. \begin{itemize} \item[(i)] $f(t)$ is $L$-weak semi-invariant if and only if $f(t)c = \sigma^m(c)f(t)$ for all $c \in L$, if and only if \begin{equation} \label{eqn:L-weak semi invariant conditions 1} \sigma^m(c) a_j = \sum_{i=j}^{m} a_i S_{i,j}(c) \end{equation} for all $c \in L$, $j \in \{ 0, \ldots, m-1 \}$. \item[(ii)] Suppose $\delta = 0$. Then $f(t)$ is $L$-weak semi-invariant if and only if $\sigma^m(c) a_j = a_j \sigma^j(c)$ for all $c \in L$, $j \in \{ 0, \ldots, m-1 \}$. \item[(iii)] Suppose $\sigma = \mathrm{id}$. Then $f(t)$ is $L$-weak semi-invariant if and only if \begin{equation} \label{eqn:L-weak semi invariant conditions 2} c a_j = \sum_{i=j}^{m} \binom{i}{j} a_i \delta^{i-j}(c) \end{equation} for all $c \in L$, $j \in \{ 0, \ldots, m-1 \}$. 
\end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] We have \begin{equation} \label{eqn:L-weak semi invariant conditions 3} f(t)c = \sum_{i=0}^{m} a_i t^i c = \sum_{i=0}^{m} a_i \sum_{j=0}^{i} S_{i,j}(c) t^j = \sum_{j=0}^{m} \sum_{i=j}^{m} a_i S_{i,j}(c) t^j \end{equation} for all $c \in L$, hence the $t^m$ coefficient of $f(t)c$ is $S_{m,m}(c) = \sigma^m(c)$, and so $f(t)$ is $L$-weak semi-invariant if and only if $f(t)c = \sigma^m(c)f(t)$ for all $c \in L$. Comparing the $t^j$ coefficients of \eqref{eqn:L-weak semi invariant conditions 3} and $\sigma^m(c)f(t)$ for all $j \in \{ 0, \ldots, m-1 \}$ yields \eqref{eqn:L-weak semi invariant conditions 1}. \item[(ii)] When $\delta = 0$, $S_{i,j} = 0$ unless $i = j$, in which case $S_{j,j} = \sigma^j$. Therefore \eqref{eqn:L-weak semi invariant conditions 1} simplifies to $\sigma^m(c) a_j = a_j \sigma^j(c)$ for all $c \in L$, $j \in \{ 0, \ldots, m-1 \}$. \item[(iii)] When $\sigma = \mathrm{id}$ we have $$t^i c = \sum_{j=0}^{i} \binom{i}{j} \delta^{i-j}(c) t^j$$ for all $c \in D$ by \cite[(1.1.26)]{jacobson1996finite} and thus \begin{equation} \label{eqn:L-weak semi invariant conditions 4} f(t) c = \sum_{i=0}^{m} a_i t^i c = \sum_{i=0}^{m} a_i \sum_{j=0}^{i} \binom{i}{j} \delta^{i-j}(c) t^j = \sum_{j=0}^{m} \sum_{i=j}^{m} \binom{i}{j} a_i \delta^{i-j}(c) t^j \end{equation} for all $c \in L$. Furthermore, since $\sigma = \mathrm{id}$, $f(t)$ being $L$-weak semi-invariant is equivalent to $f(t) c = c f(t)$ for all $c \in L$ by (i). Comparing the $t^j$ coefficients of \eqref{eqn:L-weak semi invariant conditions 4} and $c f(t) = \sum_{i=0}^{m} c a_i t^i$ for all $c \in L$, $j \in \{ 0, \ldots, m-1 \}$ yields \eqref{eqn:L-weak semi invariant conditions 2}. \end{itemize} \end{proof} \section{When are Petit Algebras Division Algebras?} \label{section:When is S_f a Division Algebra?} In this Section we look at conditions for Petit algebras to be right or left division algebras.
This is closely linked to whether the polynomial $f(t)$ used in their construction is irreducible. Given $f(t) \in R = D[t;\sigma,\delta]$, recall $S_f$ is a \textbf{right} (resp. \textbf{left}) \textbf{division algebra}, if the right multiplication $R_a : S_f \rightarrow S_f, \ x \mapsto x \circ a$, (resp. the left multiplication $L_a : S_f \rightarrow S_f, \ x \mapsto a \circ x$), is bijective for all $0 \neq a \in S_f$. Furthermore $S_f$ is a \textbf{division algebra} if it is both a right and a left division algebra. If $S_f$ is finite-dimensional over $F$, then $S_f$ is a division algebra if and only if it has no zero divisors \cite[p.~12]{schafer1966introduction}. We say $f(t) \in R$ is \textbf{bounded} if there exists $0 \neq f^* \in R$ such that $Rf^* = f^* R$ is the largest two-sided ideal of $R$ contained in $Rf$. The element $f^*$ is determined by $f$ up to multiplication on the left by elements of $D^{\times}$. The link between factors of $f(t)$ and zero divisors in the eigenring $E(f)$ is well-known: \begin{proposition} Let $f(t) \in R$. \begin{itemize} \item[(i)] (\cite[Proposition 4]{gomez2014basic}). If $f(t)$ is irreducible then $E(f)$ has no non-trivial zero divisors. \item[(ii)] (\cite[Proposition 4]{gomez2014basic}). Suppose $\sigma$ is an automorphism and $f(t)$ is bounded. Then $f(t)$ is irreducible if and only if $E(f)$ has no non-trivial zero divisors. \item[(iii)] (\cite[Theorem 3.3]{giesbrecht1998factoring}). If $D = \mathbb{F}$ is a finite field and $\delta = 0$, all polynomials are bounded and hence $f(t)$ is irreducible if and only if $E(f)$ is a finite field. \end{itemize} \end{proposition} In general, the statement $f(t)$ is irreducible if and only if $E(f)$ has no non-trivial zero divisors is not true. Examples of reducible skew polynomials whose eigenrings are division algebras are given in \cite[Example 3]{gomez2014basic} and \cite{singer1996testing}. 
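Over a finite field the eigenring can be enumerated directly. The sketch below is my own illustration (the $\mathbb{F}_9$-representation and the skew-polynomial helper functions are assumptions, not from the text): it computes $E(f) = \{ g \ \vert \ \mathrm{deg}(g) < 2, \ fg \in Rf \}$ for the irreducible $f(t) = t^2 - i \in K[t;\sigma]$, $K = \mathbb{F}_9$, and confirms that $E(f)$ consists exactly of the $9$ constants, a finite field; a reducible choice of $f$ instead produces zero divisors in $S_f$.

```python
# Sketch (assumed concrete setup, not from the text): brute-force computation
# of the eigenring E(f) = { g | deg(g) < deg(f), fg in Rf } in R = K[t;sigma],
# with K = GF(9) = GF(3)(i), i^2 = -1, sigma the Frobenius x -> x^3, and the
# irreducible f(t) = t^2 - i.  Elements of K are pairs (x0, x1) = x0 + x1*i.
from itertools import product

def k_add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def k_sub(x, y): return ((x[0] - y[0]) % 3, (x[1] - y[1]) % 3)
def k_mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
def sigma(x):    return (x[0], (-x[1]) % 3)

ZERO, ONE = (0, 0), (1, 0)

def sp_mul(f, g):
    # product in K[t;sigma]: (a t^i)(b t^j) = a sigma^i(b) t^(i+j);
    # polynomials are coefficient lists, index = degree
    res = [ZERO] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        twisted = g
        for _ in range(i):
            twisted = [sigma(c) for c in twisted]
        for j, cj in enumerate(twisted):
            res[i + j] = k_add(res[i + j], k_mul(fi, cj))
    return res

def sp_mod(g, f):
    # remainder of g after right division by the monic polynomial f
    g = list(g)
    while len(g) >= len(f):
        if g[-1] != ZERO:
            q = [ZERO] * (len(g) - len(f)) + [g[-1]]
            g = [k_sub(gi, qi) for gi, qi in zip(g, sp_mul(q, f))]
        g.pop()
    return g + [ZERO] * (len(f) - 1 - len(g))

K = list(product(range(3), repeat=2))
f = [(0, 2), ZERO, ONE]    # t^2 - i  (constant coefficient -i = 2i mod 3)
E = [(g0, g1) for g0 in K for g1 in K
     if sp_mod(sp_mul(f, [g0, g1]), f) == [ZERO, ZERO]]

# f is irreducible, so E(f) has no non-trivial zero divisors; here it is
# exactly the 9 constants, i.e. E(f) = Nuc_r(S_f) = K, a field of 9 elements.
assert len(E) == 9 and all(g1 == ZERO for g0, g1 in E)

# By contrast, f2(t) = t^2 - 1 = (t - 1)(t + 1) is reducible, and the factors
# become zero divisors in the corresponding quotient algebra:
f2 = [(2, 0), ZERO, ONE]   # t^2 - 1
assert sp_mod(sp_mul([(2, 0), ONE], [(1, 0), ONE]), f2) == [ZERO, ZERO]
```

Both checks are consistent with the proposition above: for the irreducible $f$ the eigenring is a finite field, while the reducible $f_2$ forces zero divisors.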
We prove the following result, stated but not proved by Petit in \cite[p.~13-07]{Petit1966-1967}: \begin{proposition} \label{prop:f irreducible implies E(f) division} If $f(t) \in R$ is irreducible then $E(f)$ is a division ring. \end{proposition} \begin{proof} Let $\mathrm{End}_R(R/Rf)$ denote the endomorphism ring of the left $R$-module $R/Rf$, that is $\mathrm{End}_R(R/Rf)$ consists of all maps $\phi: R/Rf \rightarrow R/Rf$ such that $\phi(rh + r'h') = r \phi(h) + r' \phi(h')$ for all $r,r' \in R$, $h, h' \in R/Rf$. Now $f(t)$ irreducible implies $R/Rf$ is a simple left $R$-module \cite[p.~15]{gomez2014basic}, therefore $\mathrm{End}_R(R/Rf)$ is an associative division ring by Schur's Lemma \cite[p.~33]{lam2013first}. Finally $E(f)$ is isomorphic to the ring $\mathrm{End}_R(R/Rf)$ \cite[p.~18-19]{gomez2014basic} and thus $E(f)$ is also an associative division ring. \end{proof} We now look at conditions for $S_f$ to be a right division algebra. \begin{lemma} \label{lem:f(t) reducible implies S_f not division} If $f(t) \in R$ is reducible, then $S_f$ contains zero divisors. In particular, $S_f$ is neither a left nor right division algebra. \end{lemma} \begin{proof} Suppose $f(t) = g(t) h(t)$ for some $g(t), h(t) \in R$ with $\mathrm{deg}(g(t))$, $\mathrm{deg}(h(t)) < \mathrm{deg}(f(t))$, then $g(t) \circ h(t) = g(t)h(t) \ \mathrm{mod}_r f = 0$. \end{proof} Notice $S_f$ is a free left $D$-module of finite rank $m = \mathrm{deg}(f(t))$ and let $0 \neq a \in S_f$. Then $R_a(y+z) = (y+z) \circ a = (y \circ a) + (z \circ a) = R_a(y) + R_a(z)$ and \begin{equation*} \label{eqn:R_a left D-linear} R_a(k \circ z) = (k \circ z) \circ a = k \circ (z \circ a) = k \circ R_a(z), \end{equation*} for all $k \in D$, $y, z \in S_f$, since either $S_f$ is associative or has left nucleus equal to $D$ by Theorem \ref{thm:Properties of S_f petit}. Thus $R_a$ is left $D$-linear. 
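Since $R_a$ is left $D$-linear and $S_f$ is a free left $D$-module of finite rank, each right multiplication can be represented by a matrix over $D$. As a sketch (assuming the concrete representation $K = \mathbb{F}_9 = \mathbb{F}_3(i)$, $\sigma$ the Frobenius, $f(t) = t^2 - i$ irreducible; all identifier names are my own), the $2 \times 2$ matrices of the maps $R_a$ all have nonzero determinant, so every $R_a$ with $a \neq 0$ is bijective and this $S_f$ is a right division algebra.

```python
# Sketch (assumed concrete setup, not from the text): the matrix of each right
# multiplication R_a on S_f for K = GF(9) = GF(3)(i), sigma the Frobenius and
# the irreducible f(t) = t^2 - i.  K-elements are pairs (x0, x1) = x0 + x1*i.
from itertools import product

def k_add(x, y): return ((x[0] + y[0]) % 3, (x[1] + y[1]) % 3)
def k_sub(x, y): return ((x[0] - y[0]) % 3, (x[1] - y[1]) % 3)
def k_mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % 3, (x[0]*y[1] + x[1]*y[0]) % 3)
def sigma(x):    return (x[0], (-x[1]) % 3)

ZERO, ONE = (0, 0), (1, 0)
a0 = (0, 1)   # f(t) = t^2 - i; N(w) = w sigma(w) lies in GF(3), never i

def mul(p, q):
    # multiplication in S_f: (x+yt)(u+vt) = (xu + y sigma(v) a0) + (xv + y sigma(u)) t
    (x, y), (u, v) = p, q
    return (k_add(k_mul(x, u), k_mul(k_mul(y, sigma(v)), a0)),
            k_add(k_mul(x, v), k_mul(y, sigma(u))))

e1, e2 = (ONE, ZERO), (ZERO, ONE)   # the basis {1, t} of S_f over K

def r_matrix(a):
    # rows are the images of 1 and t under R_a : z -> z o a, using that
    # R_a((x, y)) = x * R_a(1) + y * R_a(t) by left K-linearity
    return (mul(e1, a), mul(e2, a))

def k_det(m):
    (p, q), (r, s) = m
    return k_sub(k_mul(p, s), k_mul(q, r))

K = list(product(range(3), repeat=2))
nonzero = [(x, y) for x in K for y in K if (x, y) != (ZERO, ZERO)]

# K is commutative, so R_a is bijective iff its matrix has nonzero
# determinant; this holds for every nonzero a, as f is irreducible.
assert all(k_det(r_matrix(a)) != ZERO for a in nonzero)
```

Over a commutative $K$ the determinant test is exactly the injectivity criterion supplied by rank-nullity.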
We will require the following well-known Rank-Nullity Theorem: \begin{theorem} \label{thm:Rank-Nullity} (See for example \cite[Chapter IV, Corollary 2.14]{hungerford1980algebra}). Let $S$ be a free left (resp. right) $D$-module of finite rank $m$ and $\phi:S \rightarrow S$ be a left (resp. right) $D$-linear map. Then \begin{equation*} \mathrm{dim}(\mathrm{Ker}(\phi)) + \mathrm{dim}(\mathrm{Im}(\phi)) = m, \end{equation*} in particular, $\phi$ is injective if and only if it is surjective. \end{theorem} \begin{theorem} \label{thm:f(t) irreducible iff S_f right division} (\cite[(6)]{Petit1966-1967}). Let $f(t) \in R$ have degree $m$ and $0 \neq a \in S_f$. Then $R_a$ is bijective if and only if $1$ is a right greatest common divisor of $f(t)$ and $a$. In particular, $f(t)$ is irreducible if and only if $S_f$ is a right division algebra. \end{theorem} \begin{proof} Let $0 \neq a \in S_f$. Since $S_f$ is a free left $D$-module of finite rank $m$ and $R_a$ is left $D$-linear, the Rank-Nullity Theorem \ref{thm:Rank-Nullity} implies $R_a$ is bijective if and only if it is injective, which is equivalent to $\mathrm{Ker}(R_a) = \{ 0 \}$. Now $R_a(z) = z \circ a = 0$ is equivalent to $za \in Rf$, which means we can write $$\mathrm{Ker}(R_a) = \{ z \in R_m \ \vert \ za \in Rf \}.$$ Furthermore, $R$ is a left principal ideal domain, which implies $za \in Rf$ if and only if $za \in Ra \cap Rf = Rg = Rha,$ where $g = ha$ is the least common left multiple of $a$ and $f$. Therefore $za \in Rf$ is equivalent to $z \in Rh$, and hence $\mathrm{Ker}(R_a) \neq \{ 0 \}$ if and only if there exists a non-zero polynomial of degree strictly less than $m$ in $Rh$, which is equivalent to $\mathrm{deg}(h) \leq m-1$. Let $b \in R$ be a right greatest common divisor of $a$ and $f$.
Then $$\mathrm{deg}(f) + \mathrm{deg}(a) = \mathrm{deg}(g) + \mathrm{deg}(b) = \mathrm{deg}(ha) + \mathrm{deg}(b),$$ by \cite[Proposition 1.3.1]{jacobson1996finite}, and so $\mathrm{deg}(b) = \mathrm{deg}(f) - \mathrm{deg}(h).$ Thus $\mathrm{deg}(h) \leq m-1$ if and only if $\mathrm{deg}(b) \geq 1$, so we conclude $\mathrm{Ker}(R_a) = \{ 0 \}$ if and only if $\mathrm{deg}(b) = 0$, if and only if $1$ is a right greatest common divisor of $f(t)$ and $a$. In particular, this implies $S_f$ is a right division algebra if and only if $R_a$ is bijective for all $0 \neq a \in S_f$, if and only if $1$ is a right greatest common divisor of $f(t)$ and $a$ for all $0 \neq a \in S_f$, if and only if $f(t)$ is irreducible. \end{proof} We wish to determine when $S_f$ is also a left division algebra, hence when it is a division algebra. \begin{proposition} \label{prop:S_f associative division iff irreducible} If $f(t) \in R$ is right invariant, then $f(t)$ is irreducible if and only if $S_f$ is a division algebra. \end{proposition} \begin{proof} Suppose $f(t)$ is right invariant so that $S_f$ is associative by Theorem \ref{thm:Properties of S_f petit}. If $f(t)$ is reducible then $S_f$ is not a division algebra by Lemma \ref{lem:f(t) reducible implies S_f not division}. Conversely, if $f(t)$ is irreducible the maps $R_b$ are bijective for all $0 \neq b \in S_f$ by Theorem \ref{thm:f(t) irreducible iff S_f right division}. This implies the maps $L_b$ are also bijective for all $0 \neq b \in S_f$ by \cite[Lemma 1B]{bruck1946contributions}, and so $S_f$ is a division algebra. \end{proof} \begin{lemma} \label{lem:L_a injective but not nec surjective} If $f(t)$ is irreducible then $L_a$ is injective for all $0 \neq a \in S_f$. \end{lemma} \begin{proof} If $f(t)$ is irreducible then $L_a(z) = a \circ z = R_z(a) = 0$ is impossible for $0 \neq z \in S_f$, as $R_z$ is injective by Theorem \ref{thm:f(t) irreducible iff S_f right division}. Thus $L_a$ is also injective. 
\end{proof} In general $L_a$ is neither left nor right $D$-linear. Therefore, when $f(t)$ is irreducible we cannot apply the Rank-Nullity Theorem to conclude $L_a$ is surjective, as we did for $R_a$ in the proof of Theorem \ref{thm:f(t) irreducible iff S_f right division}. In fact, the following Theorem shows that $L_a$ may not be surjective even if $f(t)$ is irreducible: \begin{theorem} \label{thm:L_t surjective iff sigma surjective} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in D[t;\sigma]$ where $a_0 \neq 0$. Then for every $j \in \{ 1, \ldots, m-1 \}$, $L_{t^j}$ is surjective if and only if $\sigma$ is surjective. In particular, if $\sigma$ is not surjective then $S_f$ is not a left division algebra. \end{theorem} \begin{proof} We first prove the result for $j = 1$: Given $z = \sum_{i=0}^{m-1} z_i t^i \in S_f$, we have \begin{equation} \label{eqn:L_t surjective iff sigma surjective 1} \begin{split} L_t(z) &= t \circ z = \sum_{i=0}^{m-2} \sigma(z_{i})t^{i+1} + \sigma(z_{m-1})t \circ t^{m-1} \\ &= \sum_{i=1}^{m-1} \sigma(z_{i-1})t^i + \sigma(z_{m-1}) \sum_{i=0}^{m-1} a_i t^i. \end{split} \end{equation} \begin{itemize} \item[($\Rightarrow$)] Suppose $L_t$ is surjective, then given any $b \in D$ there exists $z \in S_f$ such that $t \circ z = b$. The $t^0$-coefficient of $L_t(z)$ is $\sigma(z_{m-1}) a_0$ by \eqref{eqn:L_t surjective iff sigma surjective 1}, and thus for all $b \in D$ there exists $z_{m-1} \in D$ such that $\sigma(z_{m-1}) a_0 = b$. Therefore $\sigma$ is surjective. \item[($\Leftarrow$)] Suppose $\sigma$ is surjective and let $g = \sum_{i=0}^{m-1} g_i t^i \in S_f$. Define $$z_{m-1} = \sigma^{-1}(g_0 a_0^{-1} ), \ z_{i-1} = \sigma^{-1}(g_i) - z_{m-1} \sigma^{-1}(a_i)$$ for all $i \in \{ 1, \ldots , m-1 \}$. 
Then \begin{equation*} \begin{split} L_t(z) &= \sigma(z_{m-1}) a_0 + \sum_{i=1}^{m-1} \big( \sigma(z_{i-1}) + \sigma(z_{m-1}) a_i \big) t^i = \sum_{i=0}^{m-1} g_i t^i = g, \end{split} \end{equation*} by \eqref{eqn:L_t surjective iff sigma surjective 1}, which implies $L_t$ is surjective. \end{itemize} Hence $L_t$ is surjective if and only if $\sigma$ is surjective. To prove the result for all $j \in \{ 1, \ldots, m-1 \}$ we show that \begin{equation} \label{eqn:L_t surjective iff sigma surjective 2} L_{t^j} = L_t^j, \end{equation} for all $j \in \{ 1, \ldots, m-1 \}$; it then follows that $\sigma$ is surjective if and only if $L_t$ is surjective, if and only if $L_t^j = L_{t^j}$ is surjective. In the special case when $D = \mathbb{F}_q$ is a finite field, $\sigma$ is an automorphism and $f(t)$ is monic and irreducible, the equality \eqref{eqn:L_t surjective iff sigma surjective 2} is proven in \cite[p.~12]{lavrauw2013semifields}. A similar proof also works more generally in our context: suppose inductively that $L_{t^j} = L_t^j$ for some $j \in \{ 1, \ldots, m-2 \}$. Then $L_t^j(b) = t^j b \ \mathrm{mod}_r f$ for all $b \in R_m$. Let $L_t^j(b) = b'$ so that $t^j b = qf + b'$ for some $q \in R$. We have \begin{align*} L_t^{j+1}(b) &= L_t(L_t^j(b)) = L_t(b') = L_t(t^j b - q f) = t \circ (t^j b - q f) \\ &= (t^{j+1} b - tqf) \ \mathrm{mod}_r f = t^{j+1}b \ \mathrm{mod}_r f = L_{t^{j+1}}(b), \end{align*} hence \eqref{eqn:L_t surjective iff sigma surjective 2} follows by induction. \end{proof} We can use Theorems \ref{thm:f(t) irreducible iff S_f right division} and \ref{thm:L_t surjective iff sigma surjective} to find examples of Petit algebras which are right but not left division algebras: \begin{corollary} \label{cor:S_f right but not left division algebra} Suppose $\sigma$ is not surjective and $f(t) \in D[t;\sigma]$ is irreducible. Then $S_f$ is a right division algebra but not a left division algebra.
\end{corollary} \begin{example} \label{example:S_f right but not left division algebra} Let $K$ be a field, $y$ be an indeterminate and define $\sigma: K(y) \rightarrow K(y)$ by $\sigma \vert_K = \mathrm{id}$ and $\sigma(y) = y^2$. Then $\sigma$ is an injective but not surjective endomorphism of $K(y)$ \cite[p.~123]{berrick2000introduction}. For $a(y) \in K[y]$ denote by $\mathrm{deg}_y(a(y))$ the degree of $a(y)$ as a polynomial in $y$. Let $f(t) = t^2 - a(y) \in K(y)[t;\sigma]$ where $0 \neq a(y) \in K[y]$ is such that $3 \nmid \mathrm{deg}_y(a(y))$. We will show later in Corollary \ref{cor:t^2-a(y) in K(y)[t;sigma] irreducibility} that $f(t)$ is irreducible in $K(y)[t;\sigma]$, hence $S_f$ is a right, but not a left division algebra by Corollary \ref{cor:S_f right but not left division algebra}. \end{example} The following result was stated but not proved by Petit \cite[(7)]{Petit1966-1967}: \begin{theorem} \label{thm:S_f_division_iff_irreducible} (\cite[(7)]{Petit1966-1967}). Let $f(t) \in D[t; \sigma, \delta]$ be such that $S_f$ is a finite-dimensional $F$-vector space or a right $\mathrm{Nuc}_r(S_f)$-module, which is free of finite rank. Then $S_f$ is a division algebra if and only if $f(t)$ is irreducible. \end{theorem} \begin{proof} When $S_f$ is associative the assertion follows by Proposition \ref{prop:S_f associative division iff irreducible} so suppose $S_f$ is not associative. If $f(t)$ is reducible, $S_f$ is not a division algebra by Lemma \ref{lem:f(t) reducible implies S_f not division}. Conversely, suppose $f(t)$ is irreducible so that $S_f$ is a right division algebra by Theorem \ref{thm:f(t) irreducible iff S_f right division}. Let $0 \neq a \in S_f$ be arbitrary, then $L_a$ is injective for all $0 \neq a \in S_f$ by Lemma \ref{lem:L_a injective but not nec surjective}. We prove $L_a$ is surjective, hence $S_f$ is also a left division algebra: \begin{itemize} \item[(i)] Suppose $S_f$ is a finite-dimensional $F$-vector space. 
Then since $F \subseteq \mathrm{Nuc}(S_f)$, we have $$L_a(k \circ z) = a \circ (k \circ z) = (a \circ k) \circ z = (k \circ a) \circ z = k \circ (a \circ z) = k \circ L_a(z)$$ and $$L_a(z \circ k) = a \circ (z \circ k) = (a \circ z) \circ k = L_a(z) \circ k,$$ for all $k \in F$, $z \in S_f$. Therefore $L_a$ is $F$-linear, and thus $L_a$ is surjective by the Rank-Nullity Theorem \ref{thm:Rank-Nullity}. \item[(ii)] Suppose $S_f$ is a free right $\mathrm{Nuc}_r(S_f)$-module of finite rank, then $E(f)$ is a division ring by Proposition \ref{prop:f irreducible implies E(f) division}. Furthermore, we have $$L_a(z \circ k) = a \circ (z \circ k) = (a \circ z) \circ k = L_a(z) \circ k$$ for all $k \in \mathrm{Nuc}_r(S_f)$, $z \in S_f$ and so $L_a$ is right $\mathrm{Nuc}_r(S_f)$-linear. Therefore $L_a$ is surjective by the Rank-Nullity Theorem \ref{thm:Rank-Nullity}. \end{itemize} \end{proof} \begin{theorem} \label{thm:L weak irreducible iff division} Let $\sigma$ be an automorphism of $D$, $L$ be a division subring of $D$ such that $D$ is a free right $L$-module of finite rank, and $f(t) \in D[t;\sigma,\delta]$ be $L$-weak semi-invariant. Then $S_f$ is a division algebra if and only if $f(t)$ is irreducible. In particular if $\sigma$ is an automorphism of $D$ and $f(t)$ is right semi-invariant then $S_f$ is a division algebra if and only if $f(t)$ is irreducible. \end{theorem} \begin{proof} If $f(t)$ is reducible then $S_f$ is not a division algebra by Lemma \ref{lem:f(t) reducible implies S_f not division}. Conversely, suppose $f(t)$ is irreducible. Then $S_f$ is a right division algebra by Theorem \ref{thm:f(t) irreducible iff S_f right division} so we are left to show $S_f$ is also a left division algebra. Let $0 \neq a \in S_f$ be arbitrary and recall $L_a$ is injective by Lemma \ref{lem:L_a injective but not nec surjective}. 
Since $f(t)$ is $L$-weak semi-invariant, $L \subseteq \mathrm{Nuc}_r(S_f)$, which implies $$L_a(z \circ \lambda) = a \circ (z \circ \lambda) = (a \circ z) \circ \lambda = L_a(z) \circ \lambda,$$ for all $z \in S_f$, $\lambda \in L$. Hence $L_a$ is right $L$-linear. $S_f$ is a free right $D$-module of rank $m = \mathrm{deg}(f)$ because $\sigma$ is an automorphism. Since $D$ is a free right $L$-module of finite rank, $S_f$ is also a free right $L$-module of finite rank. Thus the Rank-Nullity Theorem \ref{thm:Rank-Nullity} implies $L_a$ is bijective as required. \end{proof} \section{Semi-Multiplicative Maps} \label{section:Semi-Multiplicative Maps} \begin{definition} A \textbf{map of degree $m$} over a field $F$ is a map $M: V \rightarrow W$ between two finite-dimensional vector spaces $V$ and $W$ over $F$, such that $M(\alpha v) = \alpha^m M(v)$ for all $\alpha \in F$, $v \in V$, and such that the map $M: V \times \cdots \times V \rightarrow W$ defined by $$M(v_1, \ldots , v_m) = \sum_{1 \leq i_1 < \cdots < i_l \leq m} (-1)^{m-l} M(v_{i_1} + \ldots + v_{i_l}),$$ $(1 \leq l \leq m)$ is $m$-linear over $F$. A map $M: V \rightarrow F$ of degree $m$ is called a \textbf{form of degree $m$} over $F$. \end{definition} \begin{definition} Consider a finite-dimensional nonassociative algebra $A$ over a field $F$ containing a subalgebra $D$. A map $M : A \rightarrow D$ of degree $m$ is called \textbf{left semi-multiplicative} if $M(d g) = M(d) M(g)$, for all $d \in D$, $g \in A$. \textbf{Right semi-multiplicative} maps are defined similarly. \end{definition} As before let $D$ be a division ring with center $C$, $\sigma$ be an endomorphism of $D$, $\delta$ be a left $\sigma$-derivation of $D$, and $f(t) \in D[t;\sigma,\delta]$ be of degree $m$. In his Ph.D. thesis \cite[\S 4.2]{AndrewPhD}, Steele defined and studied a left semi-multiplicative map on nonassociative cyclic algebras.
In this Section, we show that when $D$ is commutative and $S_f$ is finite-dimensional over $F = C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$, then we can similarly define a left semi-multiplicative map $M_f$ for $S_f$. In the classical theory of associative central simple algebras of degree $n$, the reduced norm is a multiplicative form of degree $n$. The maps $M_f$ can be seen as a generalisation of the reduced norm. \vspace*{4mm} Consider $S_f$ as a free left $D$-module of rank $m = \mathrm{deg}(f(t))$ with basis $\{ 1,t,\ldots, t^{m-1} \}$, and recall the right multiplication $R_g: S_f \rightarrow S_f, \ h \mapsto h \circ g$ is left $D$-linear for all $0 \neq g \in S_f$ by the argument on page \pageref{eqn:R_a left D-linear}. Define \begin{equation*} \lambda: S_f \rightarrow \mathrm{End}_D(S_f), \ g \mapsto R_g, \end{equation*} which induces a map \begin{equation*} \lambda: S_f \rightarrow \mathrm{Mat}_m(D), \ g \mapsto W_g, \end{equation*} where $W_g \in \mathrm{Mat}_m(D)$ is the matrix representing $R_g$ with respect to the basis $\{ 1,t,\ldots, t^{m-1} \}$. If we represent $h = h_0 + h_1t + \ldots + h_{m-1} t^{m-1} \in S_f$ as the row vector $(h_0, h_1, \ldots, h_{m-1})$ with entries in $D$, then we can write the product of two elements in $S_f$ as $h \circ g = h W_g$. When $D$ is commutative, define $M_f: S_f \rightarrow D$ by $M_f(g) = \mathrm{det}(W_g)$. Notice this definition does not make sense unless $D$ is commutative, otherwise $W_g$ is a matrix with entries in the noncommutative ring $D$, and as such we cannot take its determinant. \begin{proposition} \label{prop:f(t) two-sided then M_f is multiplicative} Suppose $f(t)$ is right invariant, i.e. $S_f$ is associative, then $W_g W_h = W_{g \circ h}$ for all $g,h \in S_f$. In particular, if $D$ is commutative then $M_f$ is multiplicative. 
\end{proposition} \begin{proof} We have $$y W_{g \circ h} = y \circ (g \circ h) = (y \circ g) \circ h = (y W_g) W_h = y (W_g W_h)$$ for all $y, g, h \in S_f$, where we have used the associativity in $S_f$ and the associativity of matrix multiplication. This means $W_g W_h = W_{g \circ h}$ for all $g, h \in S_f$. If $D$ is commutative, then \begin{align*} M_f(g \circ h) &= \mathrm{det}(W_{g \circ h}) = \mathrm{det}(W_g W_h) = \mathrm{det}(W_g) \mathrm{det}(W_h) = M_f(g) M_f(h) \end{align*} for all $g, h \in S_f$, therefore $M_f$ is multiplicative. \end{proof} In general, $W_{g \circ h} \neq W_g W_h$ for $g, h \in S_f$ unless $S_f$ is associative since the map $g \mapsto W_g$ is not an $F$-algebra homomorphism. Nevertheless we obtain: \begin{proposition} \label{prop:W_d W_g = W_dg} $W_d W_g = W_{d \circ g}$ for all $d \in D$, $g \in S_f$. In particular, if $D$ is commutative and $S_f$ is finite-dimensional over $F$, then $M_f$ is left semi-multiplicative. \end{proposition} \begin{proof} Consider $d \in D$ as an element of $S_f$ so that $d = d + 0 t + \ldots + 0 t^{m-1}$. When $S_f$ is associative the assertion follows by Proposition \ref{prop:f(t) two-sided then M_f is multiplicative}, otherwise $D = \mathrm{Nuc}_m(S_f)$ by Theorem \ref{thm:Properties of S_f petit} and so \begin{align*} y W_{d \circ g} = y \circ (d \circ g) = (y \circ d) \circ g = (y W_d) W_g = y (W_d W_g) \end{align*} for all $d \in D$, $y, g \in S_f$. Thus $W_d W_g = W_{d \circ g}$. If $D$ is commutative and $S_f$ is finite-dimensional over $F$, then \begin{align*} M_f(d \circ g) &= \mathrm{det}(W_{d \circ g}) = \mathrm{det}(W_d W_g) = \mathrm{det}(W_d) \mathrm{det}(W_g) = M_f(d) M_f(g) \end{align*} for all $d \in D$, $g \in S_f$ and so $M_f$ is left semi-multiplicative. \end{proof} \begin{examples} \begin{itemize} \item[(i)] Let $f(t) = t^2 - a_1 t - a_0 \in D[t;\sigma,\delta]$. 
Given $g = g_0 + g_1t \in S_f$ with $g_0, g_1 \in D$, the matrix $W_g$ has the form $$\begin{pmatrix} g_0 & g_1 \\ \sigma(g_1)a_0 + \delta(g_0) & \sigma(g_0) + \sigma(g_1)a_1 + \delta(g_1) \end{pmatrix}.$$ \item[(ii)] Let $f(t) = t^m - a \in D[t; \sigma]$, then given $g = g_0 + g_1 t + \ldots + g_{m-1}t^{m-1} \in S_f$, $g_i \in D$, the matrix $W_g$ has the form \begin{equation*} W_g = \begin{pmatrix} g_0 & g_1 & g_2 & \cdots & g_{m-1} \\ \sigma(g_{m-1})a & \sigma(g_0) & \sigma(g_1) & \cdots & \sigma(g_{m-2}) \\ \sigma^2(g_{m-2})a & \sigma^2(g_{m-1}) \sigma(a) & \sigma^2(g_0) & \cdots & \sigma^2(g_{m-3}) \\ \sigma^3(g_{m-3})a & \sigma^3(g_{m-2}) \sigma(a) & \sigma^3(g_{m-1}) \sigma^2(a) & \cdots & \sigma^3(g_{m-4}) \\ \vdots & \vdots & \vdots & & \vdots \\ \sigma^{m-1}(g_1) a & \sigma^{m-1}(g_2) \sigma(a) & \sigma^{m-1}(g_3) \sigma^2(a) & \cdots & \sigma^{m-1}(g_0) \end{pmatrix}. \end{equation*} In other words, $W_g = (W_{ij})_{i,j = 0, \ldots, m-1}$, where \begin{equation*} W_{ij} = \begin{cases} \sigma^i(g_{j-i}) & \text{ if } i \leq j, \\ \sigma^i(g_{m-i+j}) \sigma^j(a) & \text{ if } i > j. \end{cases} \end{equation*} If $D$ is a finite field, the matrix $W_g$ is the $(\sigma,a)$-circulant matrix $M_a^{\sigma}$ in \cite{fogarty2015circulant}. In this case, Proposition \ref{prop:W_d W_g = W_dg} is \cite[Remark 3.2(b)]{fogarty2015circulant}. \end{itemize} \end{examples} We now look at the connection between $M_f$ and zero divisors in $S_f$: \begin{theorem} \label{thm:semi-mult division} Suppose $D$ is commutative. \begin{itemize} \item[(i)] Let $0 \neq g \in S_f$. If $g$ is not a right zero divisor in $S_f$ then $M_f(g) \neq 0$. \item[(ii)] $S_f$ has no non-trivial zero divisors if and only if $M_f(g) \neq 0$ for all $0 \neq g \in S_f$, if and only if $S_f$ is a right division algebra. 
\item[(iii)] If $S_f$ is a finite-dimensional left $F$-vector space or a free right $\mathrm{Nuc}_r(S_f)$-module of finite rank, then $S_f$ is a division algebra if and only if $M_f(g) \neq 0$ for all $0 \neq g \in S_f$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] If $W_g$ is a singular matrix, then the equation $$h \circ g = (h_0, \ldots, h_{m-1}) W_g = 0,$$ has a non-trivial solution $(h_0, \ldots, h_{m-1}) \in D^{m}$, contradicting the assumption that $g$ is not a right zero divisor in $S_f$. \item[(ii)] Suppose $M_f(g) \neq 0$ for all $0 \neq g \in S_f$. If $g, h \in S_f$ are non-zero and $h \circ g = h W_g = 0$ then $h = 0 W_g^{-1} = 0$, a contradiction. Hence $S_f$ has no non-trivial zero divisors. Conversely, if $S_f$ has no non-trivial zero divisors then $M_f(g) \neq 0$ for all $0 \neq g \in S_f$ by (i). Additionally, $S_f$ contains no non-trivial zero divisors if and only if the right multiplication map $R_g : S_f \rightarrow S_f, \ x \mapsto x \circ g$ is injective for all non-zero $g \in S_f$, if and only if $S_f$ is a right division algebra by the proof of Theorem \ref{thm:f(t) irreducible iff S_f right division}. \item[(iii)] Follows from (ii) and Theorem \ref{thm:S_f_division_iff_irreducible}. \end{itemize} \end{proof} \chapter{Irreducibility Criteria in Skew Polynomial Rings} \label{chapter:Irreducibility Criteria for Polynomials in a Skew Polynomial Ring} Let $D$ be a division ring with center $C$, $\sigma$ be an endomorphism of $D$ and $\delta$ be a left $\sigma$-derivation. Throughout this Chapter we assume without loss of generality $f(t) \in R = D[t;\sigma,\delta]$ is monic. In Section \ref{section:When is S_f a Division Algebra?}, we saw that whether $S_f$ is a division algebra or not is closely linked to whether the polynomial $f(t)$ used in its construction is irreducible.
For instance, $S_f$ is a right division algebra if and only if $f(t)$ is irreducible by Theorem \ref{thm:f(t) irreducible iff S_f right division}. This motivates the study of factorisation and irreducibility of skew polynomials, which we undertake in the present Chapter. The results we obtain, in conjunction with Theorems \ref{thm:f(t) irreducible iff S_f right division} and \ref{thm:S_f_division_iff_irreducible}, yield criteria for some Petit algebras to be (right) division algebras. It is well-known that a skew polynomial can always be factored as a product of irreducible skew polynomials. This factorisation is in general not unique; however, the degrees of the factors are unique up to permutation: \begin{theorem} \label{thm:Ore factorisation is similar in pairs} (\cite[Theorem 1]{ore1933theory}). Every non-zero polynomial $f(t) \in R$ factorises as $f(t) = f_1(t) \cdots f_n(t)$ where $f_i(t) \in R$ is irreducible for all $i \in \{ 1, \ldots, n \}$. Furthermore, if $f(t) = g_1(t) \cdots g_s(t)$ is any other factorisation of $f(t)$ as a product of irreducible $g_i \in R$, then $s = n$ and there exists a permutation $\pi: \{ 1, \ldots, n \} \rightarrow \{ 1, \ldots, n \}$ such that $f_i$ is similar to $g_{\pi(i)}$. In particular, $f_i$ and $g_{\pi(i)}$ have the same degree for all $i \in \{ 1, \ldots, n \}$. \end{theorem} We first restrict our attention to the case where $\delta = 0$. \section{Irreducibility Criteria in \texorpdfstring{$R = D[t;\sigma]$}{R = D[t;sigma]}} \label{section:Irreducibility Criteria in D[t;sigma]} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R = D[t;\sigma]$. In order to study when $f(t)$ is irreducible, we first determine the remainder after dividing $f(t)$ on the right by $(t-b), \ b \in D$.
By \cite[p.~15]{jacobson1996finite} we have the identity \begin{equation} \label{eqn:Right division identity} \begin{split} t^i &- \sigma^{i-1}(b) \sigma^{i-2}(b) \cdots b \\ &= \Big(t^{i-1} + \sigma^{i-1}(b)t^{i-2} + \ldots + \sigma^{i-1}(b) \cdots \sigma(b) \Big)(t-b), \end{split} \end{equation} for all $i \in \mathbb{N}$. Multiplying \eqref{eqn:Right division identity} on the left by $a_i$ and summing over $i$ yields $$f(t) = q(t)(t-b) + N_m (b) - \sum_{i=0}^{m-1} a_i N_i(b),$$ for some $q(t) \in R$, where $N_i (b) = \sigma^{i-1} (b) \cdots \sigma(b) b$ for $i > 0$ and $N_0 (b) = 1$. Therefore the remainder after dividing $f(t)$ on the right by $(t-b)$ is $N_m (b) - \sum_{i=0}^{m-1} a_i N_i(b)$, and we conclude: \begin{proposition} \label{prop:rightdivdegree1} (\cite[p.~16]{jacobson1996finite}). $(t-b) \vert_r f(t)$ if and only if $$N_m(b) - \sum_{i=0}^{m-1} a_i N_i (b) = 0.$$ \end{proposition} When $\sigma$ is an automorphism of $D$, we can also determine the remainder after dividing $f(t)$ on the left by $(t-b), \ b \in D$: Similarly to \eqref{eqn:Right division identity} we have the identity \begin{equation} \label{eqn:Left division identity} \begin{split} t^i - b \sigma^{-1}(b)& \cdots \sigma^{1-i}(b) = (t-b) \Big(t^{i-1} + \sigma^{-1}(b)t^{i-2} \\ &+ \sigma^{-1}(b) \sigma^{-2}(b)t^{i-3} + \ldots + \sigma^{-1}(b) \sigma^{-2}(b) \cdots \sigma^{1-i}(b)\Big) \end{split} \end{equation} for all $i \in \mathbb{N}$. Multiplying \eqref{eqn:Left division identity} on the right by $\sigma^{-i}(a_i)$, and using $a_i t^i = t^i \sigma^{-i} (a_i)$ gives \begin{equation*} \begin{split} &a_i t^i - b \sigma^{-1}(b) \cdots \sigma^{1-i}(b) \sigma^{-i}(a_i) \\ &= (t-b) \Big(t^{i-1} + \sigma^{-1}(b)t^{i-2} + \ldots + \sigma^{-1}(b) \sigma^{-2}(b) \cdots \sigma^{1-i}(b) \Big) \sigma^{-i}(a_i).
\end{split} \end{equation*} Summing over $i$, we obtain $$f(t) = (t-b)q(t) + M_m (b) - \sum_{i=0}^{m-1} M_i(b) \sigma^{-i}(a_i),$$ for some $q(t) \in R$ where $M_i(b)$ are defined by $M_0 (b) = 1$, $M_1(b) = b$ and $M_i (b) = b \sigma^{-1}(b) \cdots \sigma^{1-i}(b)$ for $i \geq 2$. We immediately conclude: \begin{proposition} \label{prop:leftdivdegree1} Suppose $\sigma$ is an automorphism of $D$. Then $(t-b) \vert_l f(t) \ $ if and only if $ \ M_m (b) - \sum_{i=0}^{m-1} M_i(b) \sigma^{-i}(a_i) = 0.$ \end{proposition} A careful reading of Propositions \ref{prop:rightdivdegree1} and \ref{prop:leftdivdegree1} yields the following: \begin{corollary} \label{cor:left iff right divisor} Suppose $\sigma$ is an automorphism and $f(t) = t^m - a \in D[t;\sigma]$. Then $f(t)$ has a left linear divisor if and only if it has a right linear divisor. \end{corollary} \begin{proof} Let $b \in D$, then $(t-b) \vert_r f(t)$ is equivalent to $\sigma^{m-1}(b) \cdots \sigma(b) b = a$ by Proposition \ref{prop:rightdivdegree1}, if and only if $c \sigma^{-1}(c) \cdots \sigma^{1-m}(c) = a$ where $c = \sigma^{m-1}(b)$, if and only if $(t-c) \vert_l f(t)$ by Proposition \ref{prop:leftdivdegree1}. \end{proof} Using Propositions \ref{prop:rightdivdegree1} and \ref{prop:leftdivdegree1} we obtain criteria for some skew polynomials of degree two or three to be irreducible. The following was stated but not proven by Petit in \cite[(17), (18)]{Petit1966-1967}: \begin{theorem} \label{thm:Petit_factor} \begin{itemize} \item[(i)] Suppose $\sigma$ is an endomorphism of $D$. Then $f(t) = t^2 - a_1 t - a_0 \in D[t;\sigma]$ is irreducible if and only if \begin{equation*} \label{eqn:Petit (17)} \sigma(b)b - a_1 b - a_0 \neq 0, \end{equation*} for all $b \in D.$ \item[(ii)] Suppose $\sigma$ is an automorphism. 
Then $f(t) = t^3 - a_2 t^2 - a_1 t - a_0 \in D[t;\sigma]$ is irreducible if and only if \begin{equation*} \label{eqn:Petit (18) 1} \sigma^2 (b) \sigma(b) b - \sigma^2 (b)\sigma(b) a_2 - \sigma^2 (b) \sigma(a_1) - \sigma^2 (a_0) \neq 0, \end{equation*} and \begin{equation*} \label{eqn:Petit (18) 2} \sigma^2 (b) \sigma(b) b - a_2 \sigma(b) b - a_1 b - a_0 \neq 0, \end{equation*} for all $b \in D.$ \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] Since $\mathrm{deg}(f(t)) = 2$, $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ for all $b \in D$, if and only if $$N_2 (b) - a_1 N_1 (b) - a_0 N_0(b) = \sigma(b) b - a_1 b - a_0 \neq 0,$$ for all $b \in D$ by Proposition \ref{prop:rightdivdegree1}. \item[(ii)] Here $\mathrm{deg}(f(t)) = 3$ and so $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ and $(t-b) \nmid_l f(t)$ for all $b \in D$, if and only if \begin{equation*} \label{eqn:Petit_factor 1} \sigma^2 (b) \sigma(b) b - a_2 \sigma(b)b - a_1 b - a_0 \neq 0, \end{equation*} and \begin{equation} \label{eqn:Petit_factor 2} b \sigma^{-1}(b) \sigma^{-2}(b) - b \sigma^{-1}(b) \sigma^{-2}(a_2) - b \sigma^{-1}(a_1) - a_0 \neq 0, \end{equation} for all $b \in D$ by Propositions \ref{prop:rightdivdegree1} and \ref{prop:leftdivdegree1}. Applying $\sigma^2$ to \eqref{eqn:Petit_factor 2} we obtain the assertion. \end{itemize} \end{proof} When $f(t)$ has the form $f(t) = t^3 - a \in D[t;\sigma]$, we obtain the following simplification of Theorem \ref{thm:Petit_factor}(ii): \begin{corollary} \label{cor:Irreducibility t^3-a} Suppose $\sigma$ is an automorphism of $D$. Then $f(t) = t^3 - a \in D[t;\sigma]$ is irreducible if and only if $\sigma^2 (b) \sigma(b) b \neq a$ for all $b \in D$. \end{corollary} \begin{proof} Recall $f(t)$ has a right linear divisor if and only if it has a left linear divisor by Corollary \ref{cor:left iff right divisor}.
Therefore $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ for all $b \in D$, if and only if $\sigma^2 (b) \sigma(b) b \neq a$ for all $b \in D$ by Proposition \ref{prop:rightdivdegree1}. \end{proof} \begin{corollary} \label{cor:t^2-a(y) in K(y)[t;sigma] irreducibility} Let $K$ be a field, $y$ be an indeterminate and define $\sigma: K(y) \rightarrow K(y)$ by $\sigma \vert_K = \mathrm{id}$ and $\sigma(y) = y^2$. For $a(y) \in K[y]$ denote by $\mathrm{deg}_y(a(y))$ the degree of $a(y)$ as a polynomial in $y$. Let $f(t) = t^2 - a(y) \in K(y)[t;\sigma]$ where $0 \neq a(y) \in K[y]$ is such that $3 \nmid \mathrm{deg}_y(a(y))$. Then $f(t)$ is irreducible in $K(y)[t;\sigma]$. \end{corollary} \begin{proof} Note that $\sigma$ is an injective but not surjective endomorphism of $K(y)$ by \cite[p.~123]{berrick2000introduction}. By Theorem \ref{thm:Petit_factor}(i), $f(t)$ is irreducible if and only if $\sigma(b(y))b(y) \neq a(y)$ for all $b(y) \in K(y)$. Given $0 \neq b(y) \in K(y)$, write $b(y) = c(y) / d(y)$ for some non-zero $c(y), d(y) \in K[y]$. If $\sigma(b(y))b(y) \notin K[y]$ then $\sigma(b(y))b(y) \neq a(y)$ because $a(y) \in K[y]$. Now suppose $\sigma(b(y))b(y) \in K[y]$ and let $c(y) = \sum_{i=0}^{l} \lambda_i y^i$, \ $d(y) = \sum_{j=0}^{n} \mu_j y^j$ for some $\lambda_i, \mu_j \in K$ with $\lambda_l, \mu_n \neq 0$. Then $\sigma(c(y)) = \sum_{i=0}^{l} \lambda_i y^{2i}$, \quad $\sigma(d(y)) = \sum_{j=0}^{n} \mu_j y^{2j}$, and so \begin{align*} \sigma(b(y))b(y) &= \frac{\sigma(c(y))c(y)}{\sigma(d(y))d(y)} = \frac{\sum_{i=0}^{l} \lambda_i y^{2i} \sum_{k=0}^{l} \lambda_k y^{k}}{\sum_{j=0}^{n} \mu_j y^{2j} \sum_{s=0}^{n} \mu_s y^{s}} = \frac{\sum_{i=0}^{l} \sum_{k=0}^{l} \lambda_i \lambda_k y^{2i+k}}{\sum_{j=0}^{n} \sum_{s=0}^{n} \mu_j \mu_s y^{2j+s}}.
\end{align*} This means $\mathrm{deg}_y \big( \sigma(b(y))b(y) \big) = 3l - 3n$ is a multiple of $3$, thus if $3 \nmid \mathrm{deg}_y(a(y))$ then $\sigma(b(y))b(y) \neq a(y)$ and $f(t)$ is irreducible. \end{proof} Consider the field extension $\mathbb{C} / \mathbb{R}$ where $\mathrm{Gal}(\mathbb{C} / \mathbb{R}) = \{ \mathrm{id} , \sigma \}$ and $\sigma$ denotes complex conjugation. By \cite[Corollary 6]{pumplun2015factoring}, any non-constant $g(t) \in \mathbb{C}[t;\sigma]$ decomposes into a product of linear and irreducible quadratic skew polynomials; in particular, every polynomial of degree $\geq 3$ is reducible. As a Corollary of Theorem \ref{thm:Petit_factor}(i) we now give some irreducibility criteria for $f(t) \in \mathbb{C}[t;\sigma]$ of degree $2$: \begin{corollary} \label{cor:Irreducibility_degree_four_complex} \begin{itemize} \item[(i)] Let $f(t) = t^2 - a \in \mathbb{C}[t;\sigma]$. Then $f(t)$ is irreducible if and only if $a \in \mathbb{C} \setminus \mathbb{R}$ or $a \in \mathbb{R}^{-} = \{ r \in \mathbb{R} \ \vert \ r < 0 \}$. \item[(ii)] \cite[Corollary 2.6]{bergen2015factorizations} Let $f(t) = t^2 - a_1 t - a_0 \in \mathbb{C}[t;\sigma]$ with $a_0, a_1 \in \mathbb{R}$. Then $f(t)$ is irreducible in $\mathbb{C}[t;\sigma]$ if and only if $a_1^2 + 4a_0 < 0$ if and only if $f(t)$ is irreducible in $\mathbb{R}[t]$. Moreover, if $f(t)$ is reducible, then the factorisation of $f(t)$ into monic linear polynomials is unique when $a_1 \neq 0$, whereas $f(t)$ factors in infinitely many ways into monic linear factors when $a_1 = 0$. \item[(iii)] Let $f(t) = t^2 - a_1 t - a_0 \in \mathbb{C}[t;\sigma]$ where $a_0 \in \mathbb{R}$ and $a_1 = \lambda + \mu i$ for some $\lambda , \mu \in \mathbb{R}^{\times}$. Then $f(t)$ is irreducible if and only if $a_0 < -(\lambda^2 + \mu^2)/4$. In particular, if $a_0 \geq 0$ then $f(t)$ is reducible.
\end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] By Theorem \ref{thm:Petit_factor}(i), $f(t)$ is reducible if and only if $\sigma(b)b = a$ for some $b = c + d i \in \mathbb{C}$, which is equivalent to $c^2 + d^2 = a$ for some $c, d \in \mathbb{R}$. Therefore if $0 > a \in \mathbb{R}$ or $a \in \mathbb{C} \setminus \mathbb{R}$, then $f(t)$ must be irreducible. On the other hand, if $0 \leq a \in \mathbb{R}$ then setting $b = \sqrt{a}$ gives $\sigma(b)b = b^2=a$ as required. \item[(iii)] We have $f(t)$ is reducible if and only if $\sigma(b)b - a_1 b - a_0 = 0$ for some $b = c + d i \in \mathbb{C}$ by Theorem \ref{thm:Petit_factor}(i), if and only if \begin{align*} c^2 + d^2 - \lambda c + \mu d - a_0 - (\lambda d + \mu c)i = 0, \end{align*} if and only if $c^2 + d^2 - \lambda c + \mu d - a_0 = 0$ and $\lambda d + \mu c = 0$ for some $c, d \in \mathbb{R}$. Therefore $f(t)$ is reducible if and only if $$\Big( 1 + \frac{\mu^2}{\lambda^2} \Big) c^2 + \Big( - \lambda - \frac{\mu^2}{\lambda} \Big) c - a_0 = 0,$$ for some $c \in \mathbb{R}$, if and only if $$\Big( - \lambda - \frac{\mu^2}{\lambda} \Big) ^2 + 4 a_0 \Big( 1 + \frac{\mu^2}{\lambda^2} \Big) \geq 0,$$ if and only if $a_0 \geq - (\lambda^2 + \mu^2)/4$. \end{itemize} \end{proof} \begin{lemma} \label{lem:f(bt)=q(bt)g(bt)} Let $f(t) \in R = D[t;\sigma]$ and suppose $f(t) = q(t) g(t)$ for some $q(t), g(t) \in R$. Then $f(bt) = q(bt) g(bt)$ for all $b \in F = C \cap \mathrm{Fix}(\sigma)$. \end{lemma} \begin{proof} Write $q(t) = \sum_{i=0}^{l} q_i t^i$, \ $g(t) = \sum_{j=0}^{n} g_j t^j$, then $$f(t) = q(t) g(t) = \sum_{i=0}^{l} \sum_{j=0}^{n} q_i t^i g_j t^j = \sum_{i=0}^{l} \sum_{j=0}^{n} q_i \sigma^i(g_j) t^{i+j},$$ and so \begin{align*} q(bt) g(bt) &= \sum_{i=0}^{l} q_i (bt)^i \sum_{j=0}^{n} g_j (bt)^j = \sum_{i=0}^{l} \sum_{j=0}^{n} q_i \sigma^i(g_j) b^{i+j} t^{i+j} \\ &= \sum_{i=0}^{l} \sum_{j=0}^{n} q_i \sigma^i(g_j) (bt)^{i+j} = f(bt), \end{align*} for all $b \in F$.
\end{proof} The following result was stated as an exercise by Bourbaki in \cite[p.~344]{bourbaki1973elements} and proven in the special case where $\sigma$ is an automorphism of order $m$ in \cite[Proposition 3.7.5]{cohn1995skew}: \begin{theorem} \label{thm:bourbaki} Let $\sigma$ be an endomorphism of $D$, $f(t) = t^m - a \in R = D[t;\sigma]$ and suppose $F = C \cap \mathrm{Fix}(\sigma)$ contains a primitive $m^{\text{th}}$ root of unity. If $g(t) \in R$ is a monic irreducible polynomial dividing $f(t)$ on the right, then the degree $d$ of $g(t)$ divides $m$ and $f(t)$ is the product of $m/d$ polynomials of degree $d$. \end{theorem} \begin{proof} Let $g(t) \in R$ be a monic irreducible polynomial of degree $d$ dividing $f(t)$ on the right, and $\omega \in F$ be a primitive $m^{\text{th}}$ root of unity. Define $g_{i} (t) = g(\omega^i t)$ for all $i \in \{ 0, \ldots, m-1 \}$. Then $\bigcap_{i=0}^{m-1} R g_{i}(t)$ is a left ideal of $R$, and since $R$ is a left principal ideal domain, we have \begin{equation} \label{eqn:bourbaki 1} Rh(t) = \bigcap_{i=0}^{m-1} R g_{i}(t), \end{equation} for a suitably chosen $h(t) \in R$. Furthermore, we may assume $h(t)$ is monic, otherwise if $h(t)$ has leading coefficient $c \in D^{\times}$, then $Rh(t) = R(c^{-1}h(t))$. We show $f(t) \in Rh(t)$: As $g(t)$ right divides $f(t)$, we can write $f(t) = q(t)g(t)$ for some $q(t) \in R$. In addition, we have $(\omega t)^i = \omega^i t^i$ for all $i \in \{ 0, \ldots, m-1 \}$ because $\omega \in F$, therefore $$f(\omega^i t) = \omega^{mi} t^m -a = t^m - a = f(t) = q(\omega^i t) g(\omega^i t),$$ by Lemma \ref{lem:f(bt)=q(bt)g(bt)} and so $g_{i}(t)$ right divides $f(t)$ for all $i \in \{ 0, \ldots, m-1 \}$. This means $$f(t) \in \bigcap_{i=0}^{m-1} R g_{i}(t) = Rh(t),$$ in particular, $Rh(t)$ is not the zero ideal. We next show $h(\omega^i t) = h(t)$ for all $i \in \{ 0, \ldots , m-1 \}$: For simplicity we only do this for $i = 1$; the other cases are similar.
Notice $h(t) \in \bigcap_{j=0}^{m-1} R g_{j}(t)$ by \eqref{eqn:bourbaki 1} and thus there exist $q_0(t), \ldots, q_{m-1}(t) \in R$ such that $h(t) = q_j(t) g_{j}(t)$, for all $j \in \{ 0, \ldots , m-1 \}$. Therefore $$h(\omega t) = q_{m-1}(\omega t) g_{m-1}(\omega t) = q_{m-1}(\omega t) g_0(t),$$ and $$h(\omega t) = q_j(\omega t) g_{j}(\omega t) = q_j(\omega t) g_{j+1}(t) \in Rg_{j+1}(t),$$ for all $j \in \{ 0, \ldots , m-2 \}$ by Lemma \ref{lem:f(bt)=q(bt)g(bt)}, which implies $$h(\omega t) \in \bigcap_{j=0}^{m-1} R g_{j}(t) = Rh(t).$$ As a result $h(\omega t) = k (t) h(t)$ for some $k(t) \in R$, and by comparing degrees, we conclude $0 \neq k(t) = k \in D$. Suppose $h(t)$ has degree $l$ and write $$h(t) = a_0 + \ldots + a_{l-1}t^{l-1} + t^l, \ a_j \in D.$$ Here $Rg(t) \supseteq Rh(t)$ and $f(t) \in Rh(t)$, which yields $\mathrm{deg}(g(t)) = d \leq l \leq m$. Since $h(\omega t) = k h(t)$, we have $k t^l = (\omega t)^l = \big( \prod_{j=0}^{l-1} \sigma^j(\omega) \big) t^l = \omega^l t^l$ which implies $k = \omega^l$. Clearly, the coefficients $a_j$ must be zero for all $j \in \{ 1, \ldots, l-1 \}$, otherwise $a_j (\omega t)^j = k a_j t^j = \omega^l a_j t^j$ giving $\omega^j = \omega^l$, a contradiction as $\omega$ is a primitive $m^{\text{th}}$ root of unity. This means $h(t) = t^l + a_0$, and with $$\omega^l t^l + a_0 = h(\omega t) = k h(t) = \omega^l (t^l + a_0) = \omega^l t^l + \omega^l a_0,$$ we obtain $\omega^l = 1$. This implies $l = m$ and $k = \omega^m = 1$, hence $h(\omega t) = h(t)$. We next prove $h(t) = f(t)$: Now $f(t) \in Rh(t)$ implies $f(t) = t^m - a = p(t)(t^m + a_0)$ for some $p \in R$. Comparing degrees we see $p \in D^{\times}$, thus $t^m - a = p(t^m + a_0) = pt^m + p a_0$ which yields $p = 1$, $a_0 = -a$ and $f(t) = h(t)$. Finally, $\bigcap_{i=0}^{m-1} R g_{i}(t) = Rf(t)$ is equivalent to $f(t)$ being the least common left multiple of the $g_{i}(t)$, $i \in \{ 0, \ldots, m-1 \}$ \cite[p.~10]{jacobson1996finite}.
As a result, we can write $$f(t) = q_{i_r}(t) q_{i_{r-1}}(t) \cdots q_{i_1}(t),$$ by \cite[p.~496]{ore1933theory}, where $i_1 = 0 < i_2 < \ldots < i_r \leq m-1$ and each $q_{i_s}(t) \in R$ is similar to $g_{i_s}(t)$. Similar polynomials have the same degree \cite[p.~14]{jacobson1996finite}, so $r = m/d$ and $f(t)$ factorises into $m/d$ irreducible polynomials of degree $d$. \end{proof} Theorem \ref{thm:bourbaki} implies the following result, which improves \cite[(19)]{Petit1966-1967} by showing that the condition $\sigma^{m-1}(a) \neq \sigma^{m-1} (b) \cdots \sigma(b) b$ for all $b \in D$ is superfluous: \begin{theorem} \label{thm:Petit(19)} Suppose $m$ is prime, $\sigma$ is an endomorphism of $D$ and $F$ contains a primitive $m^{\text{th}}$ root of unity. Then $f(t) = t^m - a \in D[t;\sigma]$ is irreducible if and only if it has no right linear divisors, if and only if $$a \neq \sigma^{m-1} (b) \cdots \sigma(b) b$$ for all $b \in D$. \end{theorem} \begin{proof} Let $g(t) \in D[t;\sigma]$ be an irreducible polynomial of degree $d$ dividing $f(t)$ on the right. Without loss of generality $g(t)$ is monic, otherwise if $g(t)$ has leading coefficient $c \in D^{\times}$, then $c^{-1}g(t)$ is monic and also right divides $f(t)$. Thus $d$ divides $m$ by Theorem \ref{thm:bourbaki} and since $m$ is prime, either $d = m$, in which case $g(t) = f(t)$, or $d = 1$, which means $f(t)$ can be written as a product of $m$ linear factors. Therefore $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ for all $b \in D$, if and only if $a \neq \sigma^{m-1} (b) \cdots \sigma(b) b$ for all $b \in D$ by Proposition \ref{prop:rightdivdegree1}. \end{proof} \begin{lemma} \label{quantum plane sigma(b(y))=b(qy)} Let $K$ be a field, $y$ be an indeterminate, and $\sigma$ be the automorphism of $K(y)$ such that $\sigma \vert_K = \mathrm{id}$ and $\sigma(y) = qy$ for some $1 \neq q \in K^{\times}$. Let $b(y) \in K(y)$ and write $b(y) = c(y)/d(y)$ for some $c(y), d(y) \in K[y]$ with $d(y) \neq 0$.
Then $\sigma^j(b(y)) = c(q^j y)/ d(q^jy)$ for all $j \in \mathbb{N}$. \end{lemma} \begin{proof} Write $c(y) = \lambda_0 + \lambda_1 y + \ldots + \lambda_l y^l$ and $d(y) = \mu_0 + \mu_1 y + \ldots + \mu_n y^n$ for some $\lambda_i, \mu_j \in K$, then \begin{align*} \sigma&^j(b(y)) = \frac{\sigma^j(c(y))}{\sigma^j(d(y))} = \frac{\sigma^j(\lambda_0) + \sigma^j(\lambda_1 y) + \ldots + \sigma^j(\lambda_l y^l)}{\sigma^j(\mu_0) + \sigma^j(\mu_1 y) + \ldots + \sigma^j(\mu_n y^n)} \\ &= \frac{\lambda_0 + \lambda_1 \sigma^j(y) + \ldots + \lambda_l \sigma^j(y^l)}{\mu_0 + \mu_1 \sigma^j(y) + \ldots + \mu_n \sigma^j(y^n)} = \frac{\lambda_0 + \lambda_1 q^j y + \ldots + \lambda_l (q^j y)^l}{\mu_0 + \mu_1 q^j y + \ldots + \mu_n (q^j y)^n} = b(q^j y). \end{align*} \end{proof} \noindent For $a(y) \in K[y]$ denote by $\mathrm{deg}_y (a(y))$ the degree of $a(y)$ as a polynomial in $y$. \begin{corollary} \label{cor:t^m-a(y) in K(y)[t;sigma] irreducible} Let $K(y)$ and $\sigma$ be as in Lemma \ref{quantum plane sigma(b(y))=b(qy)}. Suppose $m$ is prime, $K$ contains a primitive $m^{\text{th}}$ root of unity, and $f(t) = t^m - a(y) \in K(y)[t;\sigma]$ where $0 \neq a(y) \in K[y]$ is such that $m \nmid \mathrm{deg}_y (a(y))$. Then $f(t)$ is irreducible in $K(y)[t;\sigma]$. \end{corollary} \begin{proof} We have $f(t)$ is irreducible if and only if $$N_m(b(y)) = \sigma^{m-1}(b(y)) \cdots \sigma(b(y)) b(y) \neq a(y)$$ for all $b(y) \in K(y)$ by Theorem \ref{thm:Petit(19)}. Given $b(y) \in K(y)$, write $b(y) = c(y)/d(y)$ for some $c(y), d(y) \in K[y]$ with $d(y) \neq 0$, then \begin{align*} N_m(b(y)) &= \frac{c(q^{m-1}y) \cdots c(qy) c(y)}{d(q^{m-1}y) \cdots d(qy) d(y)} \end{align*} by Lemma \ref{quantum plane sigma(b(y))=b(qy)}. If $N_m(b(y)) \notin K[y]$, we immediately conclude $N_m(b(y)) \neq a(y)$ because $a(y) \in K[y]$. If, on the other hand, $N_m(b(y)) \in K[y]$, then $$\mathrm{deg}_y(N_m(b(y))) = m \mathrm{deg}_y(c(y)) - m \mathrm{deg}_y(d(y))$$ for all $0 \neq b(y) \in K(y)$.
Therefore $\mathrm{deg}_y(N_m(b(y)))$ is a multiple of $m$, thus $N_m(b(y)) \neq a(y)$ for all $b(y) \in K(y)$, and so $f(t)$ is irreducible. \end{proof} Recall $N_{K/F}(K^{\times}) \subseteq F^{\times}$ for any finite field extension $K/F$, therefore we obtain the following Corollary of Theorems \ref{thm:Petit_factor} and \ref{thm:Petit(19)}: \begin{corollary} \label{cor:nonassociative cyclic algebra is division} Let $K/F$ be a cyclic Galois field extension of degree $m$ with $\mathrm{Gal}(K/F) = \langle \sigma \rangle$. \begin{itemize} \item[(i)] If $m=2$ then $f(t) = t^2 - a \in K[t;\sigma]$ is irreducible for all $a \in K \setminus F$. \item[(ii)] If $m=3$ then $f(t) = t^3 - a \in K[t;\sigma]$ is irreducible for all $a \in K \setminus F$. \item[(iii)] If $m$ is prime and $F$ contains a primitive $m^{\text{th}}$ root of unity then $f(t) = t^m - a \in K[t;\sigma]$ is irreducible for all $a \in K \setminus F$. \end{itemize} \end{corollary} \begin{proof} We have $\sigma^{m-1}(b) \cdots \sigma(b)b = N_{K/F}(b) \in F$, for all $b \in K$, hence the result follows by Corollary \ref{cor:Irreducibility t^3-a} and Theorems \ref{thm:Petit_factor} and \ref{thm:Petit(19)}. \end{proof} Recently in \cite[Theorem 3.1]{bergen2015factorizations}, it was shown that in the special case where $K$ is an algebraically closed field and $\sigma$ is an automorphism of $K$ of order $n \geq 2$, every non-constant reducible skew polynomial in $K[t;\sigma]$ can be written as a product of irreducible skew polynomials of degree less than or equal to $n$. Notice that in this case, the Artin-Schreier Theorem implies $\mathrm{char}(K) = 0$ and $n=2$ \cite[p.~242]{lam2013first}. Therefore we can immediately improve \cite[Theorem 3.1]{bergen2015factorizations} to the following: \begin{theorem} \label{thm:algebraically closed factorisation} Let $K$ be an algebraically closed field and $\sigma$ be an automorphism of $K$ of order $n \geq 2$.
Then $\mathrm{char}(K) = 0$, $n = 2$ and every non-constant reducible skew polynomial in $K[t;\sigma]$ can be written as a product of linear and irreducible quadratic skew polynomials. \end{theorem} \vspace*{4mm} We now extend some of our previous arguments to find criteria for skew polynomials of degree $4$ to be irreducible. We will see that the conditions for $f(t)$ to be irreducible become complicated when $f(t)$ has degree $4$. Suppose $\sigma$ is an automorphism of $D$ and $f(t) = t^4 - a_3 t^3 - a_2 t^2 - a_1 t - a_0 \in R = D[t;\sigma]$. Then either $f(t)$ is irreducible, $f(t)$ is divisible by a linear factor from the right, from the left, or $f(t) = g (t) h(t)$ for some $g(t), h(t) \in R$ of degree $2$. In Propositions \ref{prop:rightdivdegree1} and \ref{prop:leftdivdegree1} we computed the remainders after dividing $f(t)$ by a linear polynomial on the right and the left. In order to obtain irreducibility criteria for $f(t)$, we wish to find the remainder after dividing $f(t)$ by $t^2 - c t - d \ , \ (c, d \in D)$ on the right. To do this we use the identities \begin{equation} \label{eqn:degree 4 t^2 identity 1} t^2 = (t^2 - c t - d) + (c t + d), \end{equation} \begin{equation} \label{eqn:degree 4 t^2 identity 2} t^3 = (t + \sigma(c)) \big( t^2 - c t - d \big) + \big( \sigma(d) + \sigma(c)c \big) t + \sigma(c) d, \end{equation} and \begin{equation} \label{eqn:degree 4 t^2 identity 3} \begin{split} t^4 &= \big( t^2 + \sigma^2 (c) t + \sigma^2 (d) + \sigma^2 (c) \sigma(c) \big) \big( t^2 - c t - d \big) \\ &+ \big( \sigma^2 (c) \sigma(c) c + \sigma^2 (d)c + \sigma^2 (c) \sigma(d) \big) t + \sigma^2 (d)d + \sigma^2 (c) \sigma(c) d. 
\end{split} \end{equation} If we define $$M_0 (c , d)(t) = 1, \ M_1 (c , d)(t) = t, \ M_2 (c , d)(t) = c t + d,$$ $$M_3 (c , d)(t) = \big( \sigma(d) + \sigma(c)c \big)t + \sigma(c) d,$$ $$M_4 (c ,d)(t) = \big( \sigma^2 (c) \sigma(c) c + \sigma^2 (d)c + \sigma^2 (c) \sigma(d) \big) t + \sigma^2 (d)d + \sigma^2 (c) \sigma(c) d,$$ then multiplying \eqref{eqn:degree 4 t^2 identity 1}, \eqref{eqn:degree 4 t^2 identity 2} and \eqref{eqn:degree 4 t^2 identity 3} on the left by $a_i$ and summing over $i$ yields $$f(t) = q(t) \big( t^2 - c t - d \big) + M_4 (c , d)(t) - \sum_{i=0}^{3} a_i M_i (c , d )(t)$$ for some $q(t) \in R$. This means the remainder after dividing $f(t)$ on the right by $(t^2 - c t - d)$ is $$M_4 (c ,d )(t) - \sum_{i=0}^3 a_i M_i (c , d )(t),$$ which evidently implies: \begin{proposition} \label{prop:degree 4 right divide by quadratic} $(t^2 - c t - d) \vert_r f(t)$ is equivalent to $$\sigma^2 (c) \sigma(c)c + \sigma^2 (d)c + \sigma^2 (c) \sigma(d) - a_3 \big( \sigma(d) + \sigma(c)c \big) - a_2 c - a_1 = 0,$$ and $$\sigma^2 (d)d + \sigma^2 (c) \sigma(c) d - a_3 \sigma(c)d - a_2 d - a_0 = 0.$$ \end{proposition} Together Propositions \ref{prop:rightdivdegree1}, \ref{prop:leftdivdegree1} and \ref{prop:degree 4 right divide by quadratic} yield: \begin{theorem} \label{thm:degree 4 irreducibility criteria} $f(t)$ is irreducible if and only if \begin{equation} \label{eqn:degree 4 irreducible 1} \sigma^3 (b) \sigma^2 (b) \sigma(b) b - a_3 \sigma^2 (b) \sigma(b) b - a_2 \sigma(b) b - a_1 b - a_0 \neq 0, \end{equation} and \begin{equation} \label{eqn:degree 4 irreducible 2} \begin{split} &\sigma^3 (b) \sigma^2 (b) \sigma (b) b - \sigma^3 (b) \sigma^2 (b) \sigma (b) a_3 \\ &- \sigma^3 (b) \sigma^2 (b) \sigma (a_2) - \sigma^3 (b) \sigma^2 (a_1) - \sigma^3 (a_0) \neq 0, \end{split} \end{equation} for all $b \in D$, and for every $ c , d \in D$, we have \begin{equation} \label{eqn:degree 4 irreducible 3} \sigma^2 (c) \sigma(c)c + \sigma^2 (d)c + \sigma^2 (c) \sigma(d)
- a_3 (\sigma(d) + \sigma(c)c) - a_2 c - a_1 \neq 0, \end{equation} or \begin{equation} \label{eqn:degree 4 irreducible 4} \sigma^2 (d)d + \sigma^2 (c) \sigma(c) d - a_3 \sigma(c)d - a_2 d - a_0 \neq 0. \end{equation} That is, $f(t)$ is irreducible if and only if \eqref{eqn:degree 4 irreducible 1}, \eqref{eqn:degree 4 irreducible 2} and (\eqref{eqn:degree 4 irreducible 3} or \eqref{eqn:degree 4 irreducible 4}) hold. \end{theorem} \begin{proof} $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ for all $b \in D$, $(t-b) \nmid_l f(t)$ for all $b \in D$ and $(t^2 - c t - d) \nmid_r f(t)$ for all $c, d \in D$. Therefore the result follows from Propositions \ref{prop:rightdivdegree1}, \ref{prop:leftdivdegree1} and \ref{prop:degree 4 right divide by quadratic}. \end{proof} We briefly consider the special case where $f(t)$ has the form $f(t) = t^4 - a \in R$: \begin{lemma} \label{lem:t^4-afactorisation1} Let $f(t) = t^4 - a \in R$. Suppose $(t-b) \vert_r f(t)$, then $$f(t) = (t + \sigma^3(b))(t^2 + \sigma^2(b) \sigma(b))(t - b),$$ and $$f(t) = (t^2 + \sigma^3(b) \sigma^2(b))(t + \sigma(b))(t-b),$$ are factorisations of $f(t)$. In particular $(t + \sigma(b))(t-b) = t^2 -\sigma(b)b$ also right divides $f(t)$. \end{lemma} \begin{proof} Multiplying out these factorisations gives $t^4 - \sigma^3(b) \sigma^2(b) \sigma(b) b$, which is equal to $f(t)$ since $(t-b) \vert_r f(t)$ implies $a = \sigma^3(b) \sigma^2(b) \sigma(b) b$ by Proposition \ref{prop:rightdivdegree1}. \end{proof} Lemma \ref{lem:t^4-afactorisation1} implies that if $f(t) = t^4 - a$ has a right linear divisor then it also has a right quadratic divisor.
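To illustrate Lemma \ref{lem:t^4-afactorisation1} in the simplest case, consider $D = \mathbb{C}$ with $\sigma$ complex conjugation, so that $\sigma^2 = \mathrm{id}$. Taking $b = 1 + i$, Proposition \ref{prop:rightdivdegree1} gives $(t-b) \vert_r f(t)$ for $f(t) = t^4 - a$ with $$a = \sigma^3(b) \sigma^2(b) \sigma(b) b = \bar{b} b \bar{b} b = \vert b \vert^4 = 4,$$ and since $\sigma^3(b) = \sigma(b) = \bar{b} = 1 - i$ and $\sigma^2(b)\sigma(b) = \sigma^3(b)\sigma^2(b) = b \bar{b} = 2$, the two factorisations of Lemma \ref{lem:t^4-afactorisation1} become $$t^4 - 4 = \big( t + (1-i) \big) \big( t^2 + 2 \big) \big( t - (1+i) \big) = \big( t^2 + 2 \big) \big( t + (1-i) \big) \big( t - (1+i) \big).$$ In particular $(t + \sigma(b))(t-b) = t^2 - \sigma(b)b = t^2 - 2$ right divides $t^4 - 4$.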
Therefore in this case Theorem \ref{thm:degree 4 irreducibility criteria} simplifies to: \begin{theorem} \label{thm:t^4-a irreducibility criteria delta=0} $f(t) = t^4 - a \in R$ is reducible if and only if $$\sigma^2 (c) \sigma(c)c + \sigma^2 (d)c + \sigma^2 (c) \sigma(d) = 0 \quad \text{and} \quad \sigma^2 (d)d + \sigma^2 (c) \sigma(c) d = a,$$ for some $c, d \in D.$ \end{theorem} \begin{proof} Recall $f(t)$ has a right linear divisor if and only if it has a left linear divisor by Corollary \ref{cor:left iff right divisor}. Moreover if $f(t)$ has a right linear divisor then it also has a quadratic right divisor by Lemma \ref{lem:t^4-afactorisation1}, therefore $f(t)$ is reducible if and only if $(t^2 - c t - d) \vert_r f(t)$ for some $c, d \in D$. The result now follows from Proposition \ref{prop:degree 4 right divide by quadratic}. \end{proof} \section{Irreducibility Criteria in Skew Polynomial Rings over Finite Fields} Let $K = \mathbb{F}_{p^h}$ be a finite field of order $p^h$ for some prime $p$ and $\sigma$ be a non-trivial $\mathbb{F}_p$-automorphism of $K$. This means $\sigma: K \rightarrow K, \ k \mapsto k^{p^r}$, for some $r \in \{ 1, \ldots, h-1 \}$, is a power of the Frobenius automorphism. Here $\sigma$ has order $n = h/ \mathrm{gcd}(r,h)$. Algorithms for efficiently factorising polynomials in $K[t;\sigma]$ exist, see \cite{giesbrecht1998factoring} or more recently \cite{caruso2017new}; however, our methods are purely algebraic and employ the previously developed theory. All of our previous results from Section \ref{section:Irreducibility Criteria in D[t;sigma]} hold in $K[t;\sigma]$. In this Section we focus on polynomials of the form $f(t) = t^m - a \in K[t;\sigma]$. We will require the following well-known result: \begin{lemma} \label{lem:gcd number theory result} $\mathrm{gcd}(p^h-1,p^r-1) = p^{\mathrm{gcd}(h,r)}-1.$ \end{lemma} \begin{proof} Let $d = \mathrm{gcd}(r,h)$ so that $h = dn$.
We have $$p^{h}-1 = (p^d-1)(p^{d(n-1)} + \ldots + p^d+1),$$ therefore $p^h-1$ is divisible by $p^d-1$. A similar argument shows $(p^d-1) \vert (p^r-1)$. Suppose that $c$ is a common divisor of $p^h-1$ and $p^r-1$; this means $p^h \equiv p^r \equiv 1 \ \mathrm{mod} \ (c)$. Write $d = hx + ry$ for some integers $x,y$, then we have $$p^d = p^{hx + ry} = (p^h)^x (p^r)^y \equiv 1 \ \mathrm{mod} \ (c)$$ which implies $c \vert (p^d-1)$ and hence $p^d-1 = \mathrm{gcd}(p^h-1,p^r-1)$. \end{proof} Given $k \in K^{\times}$, we have $k \in \mathrm{Fix}(\sigma)$ if and only if $k^{p^r-1} = 1$, if and only if $k$ is a $(p^r-1)^{\mathrm{th}}$ root of unity. It is well-known that there are $\mathrm{gcd}(p^r-1,p^h-1)$ such roots of unity in $K$, see for example \cite[Proposition II.2.1]{koblitz1994course}, thus $$\vert \mathrm{Fix}(\sigma) \vert = \mathrm{gcd}(p^r-1,p^h-1) + 1 = p^{\mathrm{gcd}(r,h)}$$ by Lemma \ref{lem:gcd number theory result} and so $\mathrm{Fix}(\sigma) \cong \mathbb{F}_{q}$ where $q = p^{\mathrm{gcd}(r,h)}$. \label{page:Fix(sigma) isomorphic to} \begin{proposition} \begin{itemize} \item[(i)] Suppose $n \in \{ 2,3 \}$, then $f(t) = t^n - a \in K[t;\sigma]$ is irreducible if and only if $a \in K \setminus \mathrm{Fix}(\sigma)$. In particular there are precisely $p^{h} - q$ irreducible polynomials in $K[t;\sigma]$ of the form $t^n-a$ for some $a \in K$. \item[(ii)] Suppose $n$ is a prime and $n \vert (q-1)$. Then $f(t) = t^n - a \in K[t;\sigma]$ is irreducible if and only if $a \in K \setminus \mathrm{Fix}(\sigma)$. In particular there are precisely $p^{h} - q$ irreducible polynomials in $K[t;\sigma]$ of the form $t^n-a$ for some $a \in K$. \end{itemize} \end{proposition} \begin{proof} Here $\sigma$ has order $n$ and $K/\mathrm{Fix}(\sigma)$ is a cyclic Galois field extension of degree $n$ with Galois group generated by $\sigma$.
\begin{itemize} \item[(i)] $f(t)$ is irreducible if and only if $\prod_{l=0}^{n-1} \sigma^l(b) = N_{K/\mathrm{Fix}(\sigma)}(b) \neq a$ for all $b \in K$ by Theorem \ref{thm:Petit_factor} or Corollary \ref{cor:Irreducibility t^3-a}, where $N_{K/\mathrm{Fix}(\sigma)}$ is the field norm. It is well-known that as $K$ is a finite field, $N_{K/\mathrm{Fix}(\sigma)}: K^{\times} \rightarrow \mathrm{Fix}(\sigma)^{\times}$ is surjective and so $f(t)$ is irreducible if and only if $a \notin \mathrm{Fix}(\sigma)$. There are $p^{h} - q$ elements in $K \setminus \mathrm{Fix}(\sigma)$, hence there are precisely $p^{h} - q$ irreducible polynomials of the form $t^n-a$ for some $a \in K$. \item[(ii)] Notice $\mathrm{Fix}(\sigma) \cong \mathbb{F}_{q}$ contains a primitive $n^{\mathrm{th}}$ root of unity because $n \vert (q-1)$ \cite[Proposition II.2.1]{koblitz1994course}. The rest of the proof is similar to (i) but using Theorem \ref{thm:Petit(19)}. \end{itemize} \end{proof} Let $a, b \in K$ and recall $(t-b) \vert_r (t^m-a)$ is equivalent to $$a = \sigma^{m-1}(b) \cdots \sigma(b)b = b^s$$ by Proposition \ref{prop:rightdivdegree1} where $s = \sum_{j=0}^{m-1}p^{rj} = (p^{mr}-1)/(p^r-1)$. Suppose $z$ is a \textbf{primitive element} of $K$, that is $z$ generates the multiplicative group $K^{\times}$. Writing $b = z^l$ for some $l \in \mathbb{Z}$ yields $(t-b) \vert_r (t^m-a)$ if and only if $a = z^{ls}$. This implies the following: \begin{proposition} \label{prop:finite fields irreducibility primitive element} Let $f(t) = t^m-a \in K[t;\sigma]$ and write $a \in K$ as $a = z^u$ for some $u \in \{ 0, \ldots, p^h-2 \}$. 
\begin{itemize} \item[(i)] $(t-b) \nmid_r f(t)$ for all $b \in K$ if and only if $u \notin \mathbb{Z} s \ \mathrm{mod} \ (p^h-1).$ \item[(ii)] If $m \in \{2, 3\}$ then $f(t)$ is irreducible if and only if $u \notin \mathbb{Z} s \ \mathrm{mod} \ (p^h-1).$ \item[(iii)] Suppose $m$ is a prime divisor of $(q-1)$, then $f(t)$ is irreducible if and only if $u \notin \mathbb{Z} s \ \mathrm{mod} \ (p^h-1).$ \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] $(t-b) \nmid_r f(t)$ for all $b \in K$ if and only if $a = z^u \neq z^{l s}$ for all $l \in \mathbb{Z}$, if and only if $u \notin \mathbb{Z} s \ \mathrm{mod} \ (p^h-1)$. \item[(ii)] $f(t)$ has a left linear divisor if and only if it has a right linear divisor by Corollary \ref{cor:left iff right divisor}. Therefore if $m \in \{2, 3 \}$ then $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ for all $b \in K$ and so the assertion follows by (i). \item[(iii)] If $m$ is a prime divisor of $(q-1)$ then $\mathrm{Fix}(\sigma) \cong \mathbb{F}_{q}$ contains a primitive $m^{\mathrm{th}}$ root of unity. Therefore the result follows by (i) and Theorem \ref{thm:Petit(19)}. \end{itemize} \end{proof} \begin{corollary} \label{cor:Finite field t^m-a irreducibility criteria} \begin{itemize} \item[(i)] There exists $a \in K$ such that $(t-b) \nmid_r (t^m-a)$ for all $b \in K$ if and only if $\mathrm{gcd}(s,p^h-1) > 1.$ \item[(ii)] \cite[(22)]{Petit1966-1967} Suppose $m \in \{ 2,3 \}$ or $m$ is a prime divisor of $(q-1)$. Then there exists $a \in K^{\times}$ such that $t^m-a \in K[t;\sigma]$ is irreducible if and only if $\mathrm{gcd}(s,p^h-1) > 1.$ \end{itemize} \end{corollary} \begin{proof} There exists $u \in \{ 0, \ldots, p^h-2 \}$ such that $u \notin \mathbb{Z}s \ \mathrm{mod} \ (p^h-1)$, if and only if $s$ does not generate $\mathbb{Z}_{p^h-1}$, if and only if $\mathrm{gcd}(s,p^h-1) > 1.$ Hence the result follows by Proposition \ref{prop:finite fields irreducibility primitive element}. 
\end{proof} When $p \equiv 1 \ \mathrm{mod} \ m$, it becomes simpler to apply Corollary \ref{cor:Finite field t^m-a irreducibility criteria}: \begin{corollary} \label{cor:p=1mod m finite field irreducibility criteria} Suppose $p \equiv 1 \ \mathrm{mod} \ m$. \begin{itemize} \item[(i)] There exists $a \in K$ such that $(t-b) \nmid_r (t^m-a)$ for all $b \in K$. \item[(ii)] If $p$ is an odd prime, then there exists $a \in K^{\times}$ such that $t^2-a \in K[t;\sigma]$ is irreducible. \item[(iii)] If $m = 3$, then there exists $a \in K^{\times}$ such that $t^3-a \in K[t;\sigma]$ is irreducible. \item[(iv)] Suppose $m$ is a prime divisor of $(q-1)$, then there exists $a \in K^{\times}$ such that $t^m-a \in K[t;\sigma]$ is irreducible. \end{itemize} \end{corollary} \begin{proof} We have $$s \ \mathrm{mod} \ m = \sum_{i=0}^{m-1} (p^{ri} \ \mathrm{mod} \ m) \ \mathrm{mod} \ m = (\sum_{i=0}^{m-1} 1) \ \mathrm{mod} \ m = 0,$$ and $p^h \equiv 1 \ \mathrm{mod} \ m$. This means $m \vert s$ and $m \vert (p^h-1)$, therefore $\mathrm{gcd}( s,p^h-1) \geq m$ and so the assertion follows by Corollary \ref{cor:Finite field t^m-a irreducibility criteria}. \end{proof} \section{Irreducibility Criteria in \texorpdfstring{$R = D[t;\sigma,\delta]$}{R = D[t;sigma,delta]}} Let $D$ be a division ring with center $C$, $\sigma$ be an endomorphism of $D$ and $\delta$ be a left $\sigma$-derivation of $D$. In this Section we investigate irreducibility criteria in $R = D[t;\sigma,\delta]$ generalising some of our results from Section \ref{section:Irreducibility Criteria in D[t;sigma]}. Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$ and define a sequence of maps $N_i : D \rightarrow D, \ i \geq 0$, recursively by $$N_{i+1}(b) = \sigma(N_i(b))b + \delta(N_i(b)), \ N_0 (b) = 1,$$ e.g. 
$N_0 (b) = 1, \ N_1(b) = b, \ N_2(b) = \sigma(b)b + \delta(b), \ldots$ Let $r \in D$ be the unique remainder after right division of $f(t)$ by $(t-b)$, then $$r = N_m(b) - \sum_{i=0}^{m-1} a_i N_i (b),$$ by \cite[Lemma 2.4]{lam1988vandermonde}. This evidently implies: \begin{proposition} \label{prop:rightlinearfactordelta} $(t-b) \vert_r f(t)$ is equivalent to $N_m(b) - \sum_{i=0}^{m-1} a_i N_i (b) = 0$. \end{proposition} Now suppose $\sigma$ is an automorphism of $D$. We wish to find the remainder after left division of $f(t)$ by $(t-b)$. Define a sequence of maps $M_i:D \rightarrow D$, $i \geq 0$, recursively by $$M_{i+1}(b) = b \sigma^{-1}(M_i(b)) - \delta(\sigma^{-1}(M_i(b))), \ M_0(b) = 1,$$ for example $M_0(b) = 1$, \ $M_1(b) = b$, \ $M_2(b) = b \sigma^{-1}(b) - \delta(\sigma^{-1}(b))$, \ldots Recall from page \pageref{sigma automorphism right polynomial ring} that since $\sigma$ is an automorphism, we can also view $R$ as a right polynomial ring. In particular this means we can write $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$ in the form $f(t) = t^m - \sum_{i=0}^{m-1} t^i a_i'$ for some uniquely determined $a_i' \in D$. \begin{proposition} \label{prop:leftlinearfactordelta} $(t-b) \vert_l f(t)$ is equivalent to $M_m(b) - \sum_{i=0}^{m-1} M_i(b) a_i' = 0$. In particular, $(t-b) \vert_l (t^m - a)$ if and only if $M_m(b) = a$. \end{proposition} \begin{proof} We first show $t^n - M_n(b) \in (t-b)R$ for all $b \in D$ and $n \geq 0$: If $n = 0$ then $t^0 - M_0(b) = 1 - 1 = 0 \in (t-b)R$ as required.
Suppose inductively $t^n - M_n(b) \in (t-b)R$ for some $n \geq 0$, then \begin{align*} &t^{n+1} - M_{n+1}(b) = t^{n+1} - b \sigma^{-1}(M_n(b)) + \delta(\sigma^{-1}(M_n(b))) \\ &= t^{n+1} + (t-b) \sigma^{-1}(M_n(b)) - t \sigma^{-1}(M_n(b)) + \delta(\sigma^{-1}(M_n(b))) \\ &= t^{n+1} + (t-b) \sigma^{-1}(M_n(b)) - M_n(b) t - \delta(\sigma^{-1}(M_n(b))) + \delta(\sigma^{-1}(M_n(b))) \\ &= (t-b) \sigma^{-1}(M_n(b)) + (t^n - M_n(b))t \in (t-b)R, \end{align*} as $t^n-M_n(b) \in (t-b)R$. Therefore $t^n - M_n(b) \in (t-b)R$ for all $b \in D$, $n \geq 0$ by induction. As a result, there exist $q_i(t) \in R$ such that $t^i = (t-b) q_i(t) + M_i(b)$, for all $i \in \{ 0, \ldots, m \}$. Multiplying on the right by $a_i'$ and summing over $i$ yields $$f(t) = (t-b)q(t) + M_m(b) - \sum_{i=0}^{m-1} M_i(b) a_i',$$ for some $q(t) \in R$. \end{proof} Using Propositions \ref{prop:rightlinearfactordelta} and \ref{prop:leftlinearfactordelta} we obtain criteria for skew polynomials of degree $2$ and $3$ to be irreducible: \begin{theorem} \label{thm:irredcriteriadelta} \begin{itemize} \item[(i)] Suppose $\sigma$ is an endomorphism of $D$, then $f(t) = t^2 - a_1 t - a_0 \in R$ is irreducible if and only if $\sigma(b)b + \delta(b) - a_1 b - a_0 \neq 0$ for all $b \in D$. \item[(ii)] Suppose $\sigma$ is an automorphism of $D$ and $f(t) = t^3 - a_2 t^2 - a_1 t - a_0 \in R$. Write $f(t) = t^3 - t^2 a_2' - t a_1' - a_0'$ for some unique $a_0', a_1', a_2' \in D$; then $f(t)$ is irreducible if and only if \begin{equation} \label{eqn:irredcriteriadelta 1} N_3(b) - \sum_{i=0}^{2}a_i N_i(b) \neq 0, \end{equation} and \begin{equation} \label{eqn:irredcriteriadelta 2} M_3(b) - \sum_{i=0}^{2} M_i(b) a_i' \neq 0, \end{equation} for all $b \in D$.
\end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] We have $f(t)$ is irreducible if and only if it has no right linear factors, if and only if $$N_2 (b) - a_1 N_1(b) - a_0 N_0 (b) = \sigma(b)b + \delta(b) - a_1 b - a_0 \neq 0,$$ for all $b \in D$ by Proposition \ref{prop:rightlinearfactordelta}. \item[(ii)] We have $f(t)$ is irreducible if and only if it has no left or right linear factors, if and only if \eqref{eqn:irredcriteriadelta 1} and \eqref{eqn:irredcriteriadelta 2} hold for all $b \in D$ by Propositions \ref{prop:rightlinearfactordelta} and \ref{prop:leftlinearfactordelta}. \end{itemize} \end{proof} We now prove an analogous result to Theorem \ref{thm:Petit(19)}: \begin{theorem} \label{thm:Petit(19), delta not 0} Suppose $\sigma$ is an endomorphism of $D$, $m$ is prime, $\mathrm{Char}(D) \neq m$ and $C \cap \mathrm{Fix}(\sigma)$ contains a primitive $m^{\text{th}}$ root of unity $\omega$. Then $f(t) = t^m - a \in R$ is irreducible if and only if $N_m(b) \neq a$ for all $b \in D$. \end{theorem} \begin{proof} Recall $$\delta(b^n) = \sum_{i=0}^{n-1} \sigma(b)^i \delta(b)b^{n-1-i},$$ for all $b \in D$, $n \geq 1$ by \eqref{eqn:goodearl} and so \begin{align*} 0 &= \delta(1) = \delta(\omega^m) = \sum_{i=0}^{m-1} \sigma(\omega)^i \delta(\omega) \omega^{m-1-i} = \sum_{i=0}^{m-1} \omega^i \delta(\omega) \omega^{m-1-i} \\ &= \sum_{i=0}^{m-1} \delta(\omega) \omega^{m-1} = \delta(\omega) \omega^{m-1} m, \end{align*} where we have used $\omega \in C \cap \mathrm{Fix}(\sigma)$. Therefore $\omega \in \mathrm{Const}(\delta)$ because $\mathrm{Char}(D) \neq m$, hence also $\omega^i \in \mathrm{Const}(\delta)$ and so $(\omega t)^i = \omega^i t^i$ for all $i \in \{ 1, \ldots , m \}$. Furthermore if $b \in D$, then $(t-b) \nmid_r f(t)$ is equivalent to $N_m(b) \neq a$ by Proposition \ref{prop:rightlinearfactordelta}. The proof now follows exactly as in Theorems \ref{thm:bourbaki} and \ref{thm:Petit_factor}.
\end{proof} Setting $m = 3$ and $\sigma = \mathrm{id}$ in Theorem \ref{thm:Petit(19), delta not 0} yields: \begin{corollary} \label{cor:Petit(19), delta not 0, m=3} Suppose $\mathrm{Char}(D) \neq 3$, $\sigma = \mathrm{id}$ and $C$ contains a primitive $3^{\text{rd}}$ root of unity. Then $f(t) = t^3 - a \in D[t; \delta]$ is irreducible if and only if $$N_3(b) = b^3 + 2 \delta(b)b + b\delta(b) + \delta^2(b) \neq a,$$ for all $b \in D$. \end{corollary} \section{Irreducibility Criteria in \texorpdfstring{$D[t;\delta]$}{D[t;delta]} where \texorpdfstring{$\mathrm{Char}(D) = p$}{Char(D) = p}} \label{section:Irreducibility Criteria in D[t;delta] where Char(D) = p} Suppose $\mathrm{Char}(D) = p \neq 0$, $\sigma = \mathrm{id}$ and $$f(t) = t^{p^e} - a_{1} t^{p^{e-1}} - \ldots - a_e t - d \in D[t;\delta].$$ In $D[t;\delta]$ we have the equalities \begin{equation} \label{p-power formula char p} (t-b)^p = t^p - V_p(b), \qquad V_p(b) = b^p + \delta^{p-1}(b) + * \in D, \end{equation} for all $b \in D$, with $*$ a sum of commutators of $b, \delta(b), \ldots, \delta^{p-2}(b)$ \cite[p.~17-18]{jacobson1996finite}. E.g. $V_2(b) = b^2 + \delta(b)$ and $V_3(b) = b^3 + \delta^2(b) + \delta(b) b - b \delta(b)$. In particular, if $D$ is commutative or $b$ commutes with all of its derivatives, then $* = 0$ and the formula simplifies to \begin{equation} \label{eqn:V_p(b) form when D commutative} V_p(b) = b^p + \delta^{p-1}(b). \end{equation} We can iterate \eqref{p-power formula char p} to obtain \begin{equation} \label{p-power formula char p iterated} (t-b)^{p^i} = t^{p^i} - V_{p^i}(b), \end{equation} for all $i \in \mathbb{N}$ where $V_{p^i}(b) = V_p^i(b) = V_p(V_p( \cdots (V_p(b)) \cdots ))$. We thus have $a_i t^{p^i} = a_i (t-b)^{p^i} + a_i V_{p^i}(b)$ and by summing over $i$ we conclude: \begin{proposition} \label{prop:(t-b) right divides f(t) in Char p} (\cite[Proposition 1.3.25]{jacobson1996finite}).
$(t-b) \vert_r f(t)$ is equivalent to \begin{equation} \label{eqn:(t-b) right divides f(t) in Char p} V_{p^e}(b) - a_{1} V_{p^{e-1}}(b) - \ldots - a_e b - d = 0. \end{equation} \end{proposition} Looking instead at left division of $f(t)$ by a linear polynomial gives: \begin{proposition} \label{prop:(t-b) left divides f(t) in Char p} $(t-b) \vert_l f(t)$ is equivalent to \begin{equation*} \begin{split} V_{p^e}(b) - \big( V_{p^{e-1}}(b)& a_{1} - \delta^{p^{e-1}}(a_{1}) \big) - \ldots - \big( V_p(b)a_{e-1} - \delta^p(a_{e-1}) \big) \\& - (b a_e - \delta(a_e)) - d = 0. \end{split} \end{equation*} \end{proposition} \begin{proof} We have $t^{p^k} a = (t-b)^{p^k}a + V_{p^k}(b)a,$ for all $a, b \in D$ and $k \geq 1$ by \eqref{p-power formula char p iterated}. Moreover, iterating the relation $ta = at + \delta(a)$ yields \begin{equation*} t^{p^k} a = \sum_{i=0}^{p^k} \binom{p^k}{i} \delta^{p^k - i}(a) t^i, \end{equation*} for all $a \in D$, $k \geq 1$ \cite[(1.1.26)]{jacobson1996finite}. This implies $t^{p^k} a = \delta^{p^k}(a) + at^{p^k}$ for all $a \in D$, $k \geq 1$ because $\binom{p^k}{i} = 0$ for all $i \in \{ 1, \ldots, p^k -1 \}$. Therefore $$at^{p^k} = (t-b)^{p^k}a + V_{p^k}(b)a - \delta^{p^k}(a),$$ and hence \begin{equation*} \begin{split} f(t) &= t^{p^e} - a_{1} t^{p^{e-1}} - \ldots - a_e t - d \\ &= (t-b)q(t) + V_{p^e}(b) - \big( V_{p^{e-1}}(b) a_{1} - \delta^{p^{e-1}}(a_{1}) \big) \\ & \qquad \qquad - \ldots - \big( V_p(b)a_{e-1} - \delta^p(a_{e-1}) \big) - ba_e + \delta(a_e) - d. \end{split} \end{equation*} \end{proof} \begin{corollary} \label{cor:right iff left division in Char p} Suppose $\mathrm{Char}(D) = p \neq 0$ and $f(t) = t^p - a_1 t - a_0 \in D[t;\delta]$, where $a_1 \in C \cap \mathrm{Const}(\delta)$. Then $(t-b) \vert_r f(t)$ if and only if $(t-b) \vert_l f(t)$. 
\end{corollary} \begin{proof} Recall $(t-b) \vert_r f(t)$ if and only if $V_p(b) - a_1 b - a_0 = 0$ by Proposition \ref{prop:(t-b) right divides f(t) in Char p}, and $(t-b) \vert_l f(t)$ if and only if $V_p(b) - b a_1 + \delta(a_1) - a_0 = 0$ by Proposition \ref{prop:(t-b) left divides f(t) in Char p}. When $a_1 \in C \cap \mathrm{Const}(\delta)$, these two conditions are equivalent. \end{proof} When $p=3$, Propositions \ref{prop:(t-b) right divides f(t) in Char p} and \ref{prop:(t-b) left divides f(t) in Char p} yield the following: \begin{corollary} \label{cor:t^3-a_1t-a_0 Char 3 irreducibility} Let $\mathrm{Char}(D) = 3$ and $f(t) = t^3 - a_1 t - a_0 \in D[t;\delta]$. \begin{itemize} \item[(i)] $f(t)$ is irreducible if and only if $$V_3(b) - a_1 b - a_0 \neq 0 \ \text{ and } \ V_3(b) - b a_1 + \delta(a_1) - a_0 \neq 0,$$ for all $b \in D$. \item[(ii)] Let $a_1 \in C \cap \mathrm{Const}(\delta)$, then $f(t)$ is irreducible if and only if $V_3(b) - a_1 b - a_0 \neq 0$ for all $b \in D$. \item[(iii)] Suppose $D$ is commutative, then $f(t)$ is irreducible if and only if $$b^3 + \delta^2(b) - a_1 b - a_0 \neq 0 \ \text{ and } \ b^3 + \delta^2(b) - b a_1 + \delta(a_1) - a_0 \neq 0,$$ for all $b \in D$. \end{itemize} \end{corollary} \begin{proof} Notice $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ and $(t-b) \nmid_l f(t)$ for all $b \in D$ and thus (i) follows by Propositions \ref{prop:(t-b) right divides f(t) in Char p} and \ref{prop:(t-b) left divides f(t) in Char p}. (ii) follows from (i) and Corollary \ref{cor:right iff left division in Char p} and (iii) follows from (i) and \eqref{eqn:V_p(b) form when D commutative}. \end{proof} \begin{proposition} Suppose $\mathrm{Char}(D) = p \neq 0$, $D$ is commutative and $\delta$ is a non-trivial derivation of $D$ such that $\delta^p = 0$. Let $f(t) = t^p - a \in D[t;\delta]$ where $a \notin \mathrm{Const}(\delta)$, then \begin{itemize} \item[(i)] $(t-b) \nmid_r f(t)$ and $(t-b) \nmid_l f(t)$ for all $b \in D$. 
\item[(ii)] If $p = 3$ then $f(t)$ is irreducible. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] Recall $(t-b) \vert_r f(t)$ is equivalent to $(t-b) \vert_l f(t)$ by Corollary \ref{cor:right iff left division in Char p}, and that $(t-b) \vert_r f(t)$ if and only if $V_p(b) = a$ if and only if $b^p + \delta^{p-1}(b) = a$ by Proposition \ref{prop:(t-b) right divides f(t) in Char p}. Now $b^p \in \mathrm{Const}(\delta)$ for all $b \in D$ as $D$ is commutative \cite[p.~60]{kolchin1973differential}, also $\delta^{p-1}(b) \in \mathrm{Const}(\delta)$ as $\delta^p = 0$. Therefore if $a \notin \mathrm{Const}(\delta)$ then $b^p + \delta^{p-1}(b) \neq a$ for all $b \in D$. \item[(ii)] If $p = 3$, then $f(t)$ is irreducible if and only if $(t-b) \nmid_r f(t)$ and $(t-b) \nmid_l f(t)$ for all $b \in D$, hence the assertion follows by (i). \end{itemize} \end{proof} When $f(t)$ has the form $f(t) = t^p - t - a \in D[t;\delta]$, we have: \begin{theorem} (\cite[Lemmas 4 and 6]{amitsur1954non}). For any $a \in D$, the polynomial $f(t) = t^p - t - a \in D[t;\delta]$ is either a product of commuting linear factors or irreducible. Furthermore, $f(t)$ is irreducible if and only if $V_p(b) - b - a \neq 0,$ for all $b \in D$. In particular, if $D$ is commutative then $f(t)$ is irreducible if and only if $$b^p + \delta^{p-1}(b) - b - a \neq 0,$$ for all $b \in D$. \end{theorem} \chapter{Isomorphisms Between Petit Algebras} \label{chapter:Isomorphisms Between some Petit Algebras} We now investigate isomorphisms between some Petit Algebras. The results we obtain in this Chapter will then be applied in Chapter \ref{chapter:Automorphisms of S_f} to study the automorphism groups of Petit algebras. Let $D$ be an associative division ring with center $C$, $\sigma$ be an automorphism of $D$ and $\delta$ be a left $\sigma$-derivation of $D$. Suppose $D'$, $C'$, $\sigma'$ and $\delta'$ are defined similarly. 
Let $f(t) \in R = D[t;\sigma,\delta]$, \ $g(t) \in R' = D'[t;\sigma',\delta']$, and recall $S_f = R/Rf$ is an algebra over $F = C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$ and $S_g = R'/R'g$ is an algebra over $F' = C' \cap \mathrm{Fix}(\sigma') \cap \mathrm{Const}(\delta')$. Denote by $\circ_f$ the multiplication in $S_f$ and by $\circ_g$ the multiplication in $S_g$. If $f(t)$ and $g(t)$ are not right invariant and $F = F'$, we have the following necessary conditions for $S_f$ to be $F$-isomorphic to $S_g$: \begin{proposition} \label{prop:isomorphism_necessity} Suppose $f(t)$, $g(t)$ are not right invariant, $F = F'$ and $S_f$ is $F$-isomorphic to $S_g$. Then \begin{itemize} \item[(i)] $D \cong D'$. \item[(ii)] $\mathrm{Nuc}_r(S_f) \cong \mathrm{Nuc}_r(S_g)$. \item[(iii)] $S_f$ is a division algebra if and only if $S_g$ is a division algebra. \item[(iv)] If $S_f$ is a finite-dimensional left $F$-vector space, then $\mathrm{deg}(f(t)) = \mathrm{deg}(g(t))$. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] We have $\mathrm{Nuc}_l(S_f) = D$ and $\mathrm{Nuc}_l(S_g) = D'$ by Theorem \ref{thm:Properties of S_f petit}(i) because $S_f$ and $S_g$ are not associative. Any isomorphism preserves the left nucleus and so $D \cong D'$. \item[(ii)] Any isomorphism also preserves the right nucleus, thus $\mathrm{Nuc}_r(S_f) \cong \mathrm{Nuc}_r(S_g)$. \item[(iii)] Let $L_{a,f}: S_f \rightarrow S_f, \ x \mapsto a \circ_f x$ denote the left multiplication in $S_f$ and $L_{b,g}: S_g \rightarrow S_g, \ x \mapsto b \circ_g x$ the left multiplication in $S_g$. Similarly denote the right multiplication maps $R_{a,f}$ and $R_{b,g}$. Suppose $\phi:S_f \rightarrow S_g$ is an $F$-isomorphism and $S_f$ is a division algebra, so that $L_{a,f}$ and $R_{a,f}$ are bijective for all non-zero $a \in S_f$. 
Furthermore we have $$L_{b,g}(x) = \phi(L_{\phi^{-1}(b),f}(\phi^{-1}(x))) \text{ and } R_{b,g}(x) = \phi(R_{\phi^{-1}(b),f}(\phi^{-1}(x))),$$ for all $x \in S_g$, $0 \neq b \in S_g$. These imply $L_{b,g}$ and $R_{b,g}$ are bijective for all non-zero $b \in S_g$ because $\phi, \phi^{-1}, L_{\phi^{-1}(b),f}$ and $R_{\phi^{-1}(b),f}$ are all bijective, hence $S_g$ is a division algebra. The reverse implication is proven analogously. \item[(iv)] $S_f$ and $S_g$ must have the same dimension as left $F$-vector spaces, and since $D \cong D'$, this implies $\mathrm{deg}(f(t)) = \mathrm{deg}(g(t))$. \end{itemize} \end{proof} \section{Automorphisms of \texorpdfstring{$R = D[t;\sigma,\delta]$}{R = D[t;sigma,delta]}} Henceforth we assume $D = D'$, $\sigma = \sigma'$, $\delta = \delta'$ and $f(t), g(t) \in R = D[t;\sigma,\delta]$ have degree $m$. Automorphisms of $R$ can be used to define isomorphisms between Petit algebras: \begin{theorem} \label{thm:lavrauw isomorphic petit algebras generalisation} Let $\Theta$ be an $F$-automorphism of $R$. Given $f(t) \in R$, define $g(t) = l \Theta(f(t)) \in R$ for some $l \in D^{\times}$. Then $\Theta$ induces an $F$-isomorphism $S_f \cong S_g$. \end{theorem} \begin{proof} $\Theta(t)$ has degree $1$ by the argument in \cite[p.~4]{lam1992homomorphisms}, therefore $\Theta$ preserves the degree of elements of $R$ and thus $\Theta \vert_{R_m}: R_m \rightarrow R_m$ is well defined, hence bijective and $F$-linear. Let $a, b \in R_m$, then there exist unique $q(t), r(t) \in R$ with $\mathrm{deg}(r(t)) < m$ such that $ab = q(t)f(t) + r(t)$. We have \begin{align*} \Theta (a \circ_f b) &= \Theta(ab - qf) = \Theta(a)\Theta(b) - \Theta(q)\Theta(f) \\ &= \Theta(a)\Theta(b) - \Theta(q) l^{-1} l \Theta(f) = \Theta(a) \circ_{g} \Theta(b), \end{align*} and so $\Theta \vert_{R_m}: S_f \rightarrow S_{g}$ is an $F$-isomorphism between algebras. 
\end{proof} When $D = K$ is a finite field and $f(t) \in K[t;\sigma]$ is irreducible, then \cite[Theorem 7]{lavrauw2013semifields} states that an automorphism $\Theta$ of $K[t;\sigma]$ restricts to an isomorphism between $S_f$ and $S_{\Theta(f)}$. Therefore Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation} is a generalisation of \cite[Theorem 7]{lavrauw2013semifields}. In order to employ Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}, we first investigate what the automorphisms of $R$ look like: Given $\tau \in \mathrm{Aut}_{F}(D)$ and $p(t) \in R$, the map \begin{equation*} \Theta: R \rightarrow R, \ \sum_{i=0}^{n} b_i t^i \mapsto \sum_{i=0}^{n} \tau(b_i) p(t)^i, \end{equation*} is an $F$-homomorphism if and only if \begin{equation} \label{eqn:isomorphism between skew polynomial rings} p(t) \tau(b) = \tau(\sigma(b))p(t) + \tau(\delta(b)), \end{equation} for all $b \in D$ by \cite[p.~4]{lam1992homomorphisms}. Furthermore, by a simple degree argument we see $\Theta$ is injective if and only if $p(t)$ has degree $\geq 1$, and $\Theta$ is bijective if and only if $p(t)$ has degree $=1$ \cite[p.~4]{lam1992homomorphisms}. Therefore, by \eqref{eqn:isomorphism between skew polynomial rings} we conclude: \begin{proposition} \label{prop:isomorphisms between skew polynomial rings} Let $c \in D$, $d \in D^{\times}$, then \begin{equation} \label{eqn:isomorphisms between skew polynomial rings 0} \Theta_{\tau,c,d}: R \rightarrow R, \ \sum_{i=0}^n b_i t^i \mapsto \sum_{i=0}^n \tau(b_i) (c + d t)^i, \end{equation} is an $F$-automorphism of $R$ if and only if \begin{equation} \label{eqn:isomorphisms between skew polynomial rings 1} c \tau(b) + d \delta(\tau(b)) = \tau(\sigma(b))c + \tau(\delta(b)), \end{equation} and \begin{equation} \label{eqn:isomorphisms between skew polynomial rings 2} d \sigma(\tau(b)) = \tau(\sigma(b))d, \end{equation} for all $b \in D$. 
\end{proposition} \begin{proof} Let $p(t) = c + dt$, then $\Theta_{\tau,c,d}: R \rightarrow R$ is bijective because $p(t)$ has degree $1$, furthermore $\Theta_{\tau,c,d}$ is an $F$-automorphism if and only if $$(c + dt) \tau(b) = c \tau(b) + d \sigma(\tau(b))t + d \delta(\tau(b)) = \tau(\sigma(b))(c + dt) + \tau(\delta(b)),$$ by \eqref{eqn:isomorphism between skew polynomial rings}. Comparing the coefficients of $t^0$ and $t$ yields \eqref{eqn:isomorphisms between skew polynomial rings 1} and \eqref{eqn:isomorphisms between skew polynomial rings 2} as required. \end{proof} Looking closely at the conditions \eqref{eqn:isomorphisms between skew polynomial rings 1} and \eqref{eqn:isomorphisms between skew polynomial rings 2} yields the following: \begin{corollary} \label{cor:Automorphisms of D[t;sigma,delta]} \begin{itemize} \item[(i)] Suppose $c \in D$, $d \in D^{\times}$. Then $\Theta_{\mathrm{id},c,d}$ is an $F$-automorphism of $R$ if and only if $d \in C^{\times}$ and $$cb + d \delta(b) = \sigma(b) c + \delta(b)$$ for all $b \in D$. \item[(ii)] $\Theta_{\tau,0,1}$ is an $F$-automorphism of $R$ if and only if $\sigma \circ \tau = \tau \circ \sigma$ and $\delta \circ \tau = \tau \circ \delta$. \item[(iii)] Let $c \in D$, then $\Theta_{\tau,c,1}$ is an $F$-automorphism of $R$ if and only if $\sigma \circ \tau = \tau \circ \sigma$ and \begin{equation} \label{eqn:Automorphisms of D[t;sigma,delta] 1} c \tau(b) + \delta(\tau(b)) = \tau(\sigma(b)) c + \tau(\delta(b)), \end{equation} for all $b \in D$. \item[(iv)] Suppose $c \in D^{\times}, 1 \neq d \in C^{\times}$ and $\delta$ is the inner $\sigma$-derivation \begin{equation} \label{eqn:Automorphisms of D[t;sigma,delta] 2} \delta: D \rightarrow D, \ b \mapsto c(1-d)^{-1} b - \sigma(b) c(1-d)^{-1}. \end{equation} Then $\Theta_{\mathrm{id},c,d}$ is an $F$-automorphism of $R$. 
\end{itemize} \end{corollary} \begin{proof} (ii) and (iii) follow immediately from \eqref{eqn:isomorphisms between skew polynomial rings 1} and \eqref{eqn:isomorphisms between skew polynomial rings 2}. \begin{itemize} \item[(i)] Setting $\tau = \mathrm{id}$ in \eqref{eqn:isomorphisms between skew polynomial rings 2} yields $d \sigma(b) = \sigma(b) d$ for all $b \in D$, which is equivalent to $d \in C^{\times}$. The result follows by setting $\tau = \mathrm{id}$ in \eqref{eqn:isomorphisms between skew polynomial rings 1}. \item[(iv)] If $\delta$ has the form \eqref{eqn:Automorphisms of D[t;sigma,delta] 2} and $\tau = \mathrm{id}$, then \begin{align*} c b &+ d \delta(b) = c b + d \big( c (1-d)^{-1} b - \sigma(b) c (1-d)^{-1} \big) \\ &= (1-d)^{-1} \big( c b(1-d) + d c b - d \sigma(b) c \big) = (1-d)^{-1}(c b - d \sigma(b) c) \\ &= (1-d)^{-1} \big( \sigma(b) c (1-d) + c b - \sigma(b) c \big) = \sigma(b) c + \delta(b), \end{align*} and $d \sigma(b) = \sigma(b) d$, for all $b \in D$ because $d \in C^{\times}$. Therefore $\Theta_{\mathrm{id},c,d}$ is an $F$-automorphism of $R$ by Proposition \ref{prop:isomorphisms between skew polynomial rings}. \end{itemize} \end{proof} We can use Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation} together with Proposition \ref{prop:isomorphisms between skew polynomial rings} and Corollary \ref{cor:Automorphisms of D[t;sigma,delta]}, to find isomorphisms between some Petit algebras: \begin{corollary} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$. \begin{itemize} \item[(i)] Suppose $\tau \in \mathrm{Aut}_{F}(D)$ and $c \in D$, $d \in D^{\times}$ are such that \eqref{eqn:isomorphisms between skew polynomial rings 1} and \eqref{eqn:isomorphisms between skew polynomial rings 2} hold for all $b \in D$. Let $g(t) = (c+dt)^m - \sum_{i=0}^{m-1} \tau(a_i) (c+dt)^i \in R,$ then $S_f \cong S_g$. 
\item[(ii)] Suppose $\tau \in \mathrm{Aut}_{F}(D)$ is such that $\sigma \circ \tau = \tau \circ \sigma$ and $\delta \circ \tau = \tau \circ \delta$ and define $g(t) = t^m - \sum_{i=0}^{m-1} \tau(a_i)t^i \in R$, then $S_f \cong S_g$. \item[(iii)] Suppose $\tau \in \mathrm{Aut}_{F}(D)$ and $c \in D$ are such that $\sigma \circ \tau = \tau \circ \sigma$ and \eqref{eqn:Automorphisms of D[t;sigma,delta] 1} holds for all $b \in D$. If $g(t) = (c+t)^m - \sum_{i=0}^{m-1} \tau(a_i)(c+t)^i \in R$, then $S_f \cong S_g$. \item[(iv)] Suppose $\delta$ is the inner $\sigma$-derivation given by \eqref{eqn:Automorphisms of D[t;sigma,delta] 2} for some $c \in D^{\times}, 1 \neq d \in C^{\times}$. If $g(t) = (c + dt)^m - \sum_{i=0}^{m-1} a_i (c + dt)^i \in R$, then $S_f \cong S_g$. \end{itemize} \end{corollary} \begin{proof} In (i), (ii), (iii) and (iv) the maps $\Theta_{\tau,c,d}$, $\Theta_{\tau,0,1}$, $\Theta_{\tau,c,1}$ and $\Theta_{\mathrm{id},c,d}$ resp. are $F$-automorphisms of $R$ by Proposition \ref{prop:isomorphisms between skew polynomial rings} and Corollary \ref{cor:Automorphisms of D[t;sigma,delta]}. Applying these automorphisms to $f(t)$ gives $g(t)$ in each case, thus the assertion follows by Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}. \end{proof} \section{Isomorphisms Between \texorpdfstring{$S_f$}{S\_f} and \texorpdfstring{$S_g$}{S\_g} when \texorpdfstring{$f(t), g(t) \in D[t;\delta]$}{f(t), g(t) in D[t;delta]}} Let $D$ be an associative division ring with center $C$, $\delta$ be a derivation of $D$ and $F = C \cap \mathrm{Const}(\delta)$. We briefly look into the isomorphisms between some Petit algebras $S_f$ and $S_g$ in the special case where $f(t), g(t) \in R = D[t;\delta]$. 
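Since we work repeatedly with the multiplication rule $ta = at + \delta(a)$ and with the formula \eqref{p-power formula char p}, a quick computational sanity check may be helpful. The following sketch is purely illustrative and not part of the argument: it works over the commutative domain $\mathbb{F}_3[x]$ with $\delta = \mathrm{d}/\mathrm{d}x$ (a division ring is not needed to check the identity), verifying $(t-b)^3 = t^3 - \big(b^3 + \delta^2(b)\big)$ as in \eqref{eqn:V_p(b) form when D commutative}; all function names are ad hoc.

```python
# Sketch: verify (t - b)^p = t^p - V_p(b), V_p(b) = b^p + delta^{p-1}(b),
# in the differential polynomial ring F_3[x][t; d/dx].  The coefficient
# ring F_3[x] is commutative, so the commutator term * vanishes.
P = 3

def trim(a):               # normalise a coefficient list over F_3
    a = [c % P for c in a]
    while a and a[-1] == 0:
        a.pop()
    return a

def add(a, b):             # addition in F_3[x]
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([x + y for x, y in zip(a, b)])

def mul(a, b):             # multiplication in F_3[x]
    out = [0] * (len(a) + len(b) + 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return trim(out)

def delta(a):              # the derivation d/dx
    return trim([i * a[i] for i in range(1, len(a))])

def t_times(g):            # left multiplication by t, using t*a = a*t + delta(a)
    out = [[]] + [c[:] for c in g]
    for j, c in enumerate(g):
        out[j] = add(out[j], delta(c))
    return out

def skew_mul(f, g):        # product in F_3[x][t; d/dx]
    out = [[] for _ in range(len(f) + len(g))]
    h = g                  # h = t^i * g at step i
    for a in f:
        for j, c in enumerate(h):
            out[j] = add(out[j], mul(a, c))
        h = t_times(h)
    return out

b = [1, 0, 1]                                   # b = 1 + x^2
t_minus_b = [trim([-c for c in b]), [1]]        # t - b
cube = skew_mul(skew_mul(t_minus_b, t_minus_b), t_minus_b)

V3 = add(mul(mul(b, b), b), delta(delta(b)))    # b^3 + delta^2(b)
expected = [trim([-c for c in V3]), [], [], [1]]  # t^3 - V_3(b)
ok = [trim(c) for c in cube[:4]] == expected and all(c == [] for c in cube[4:])
```

For $b = 1 + x^2$ this gives $(t-b)^3 = t^3 + 2x^6$, matching $V_3(b) = b^3 + \delta^2(b) = x^6$ in $\mathbb{F}_3[x]$.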
Here Proposition \ref{prop:isomorphisms between skew polynomial rings} becomes: \begin{corollary} \label{cor:Automorphisms of D[t;delta]} Let $c, d \in D$ and $\tau \in \mathrm{Aut}_{F}(D)$. Then the map $\Theta_{\tau,c,d}$ defined by \eqref{eqn:isomorphisms between skew polynomial rings 0} is an $F$-automorphism of $R$ if and only if $d \in C^{\times}$ and \begin{equation} \label{eqn:isomorphisms between skew polynomial rings sigma=id} c \tau(b) + d \delta(\tau(b)) = \tau(b)c + \tau(\delta(b)). \end{equation} In particular, if $c \in C$ then $\Theta_{\tau,c,1}$ is an $F$-automorphism if and only if $\delta \circ \tau = \tau \circ \delta$. \end{corollary} \begin{proof} With $\sigma = \mathrm{id}$, \eqref{eqn:isomorphisms between skew polynomial rings 2} becomes $d \tau(b) = \tau(b) d$ for all $b \in D$, i.e. $d \in C$. Finally, setting $\sigma = \mathrm{id}$ in \eqref{eqn:isomorphisms between skew polynomial rings 1} yields \eqref{eqn:isomorphisms between skew polynomial rings sigma=id}, hence the result follows by Proposition \ref{prop:isomorphisms between skew polynomial rings}. \end{proof} Recall from \eqref{p-power formula char p} that when $D$ has characteristic $p \neq 0$, we have $$(t-b)^p = t^p - V_p(b), \ V_p(b) = b^p + \delta^{p-1}(b) + *$$ for all $b \in D$, where $*$ is a sum of commutators of $b, \delta(b), \ldots, \delta^{p-2}(b)$. An iteration yields $(t-b)^{p^e} = t^{p^e} - V_{p^e}(b)$, for all $b \in D$ with $V_{p^e}(b) = V_p^e(b) = V_p(\ldots(V_p(b))\ldots)$. We use Corollary \ref{cor:Automorphisms of D[t;delta]}, together with Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation} to obtain sufficient conditions for some Petit algebras to be isomorphic: \begin{corollary} \label{cor:Isomorphisms of Petit algebras D[t;delta]} \begin{itemize} \item[(a)] Suppose $D$ has arbitrary characteristic and $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R = D[t;\delta]$.
\begin{itemize} \item[(i)] Let $\tau \in \mathrm{Aut}_{F}(D)$ and $g(t) = (t-c)^m - \sum_{i=0}^{m-1} \tau(a_i) (t-c)^i \in R$ for some $c \in C$. If $\tau \circ \delta = \delta \circ \tau$, then $S_f \cong S_{g}$. \item[(ii)] Let $g(t) = (t-c)^m - \sum_{i=0}^{m-1} a_i (t-c)^i \in R$ for some $c \in C$, then $S_f \cong S_{g}$. \end{itemize} \item[(b)] Suppose $\mathrm{Char}(D) = p \neq 0$ and $f(t) = t^{p^e} + a_1 t^{p^{e-1}} + \ldots + a_e t + d \in R$. \begin{itemize} \item[(i)] Let $\tau \in \mathrm{Aut}_{F}(D)$. If $\tau \circ \delta = \delta \circ \tau$ and \begin{align*} g(t) &= t^{p^e} + \tau(a_1) t^{p^{e-1}} + \ldots + \tau(a_e) t + \tau(d) - V_{p^e}(c) \\ & \qquad - \tau(a_1) V_{p^{e-1}}(c) - \ldots - \tau(a_e)c \in R \end{align*} for some $c \in C$, then $S_f \cong S_{g}$. \item[(ii)] Let $$g(t) = f(t) - V_{p^e}(c) - a_1 V_{p^{e-1}}(c) - \ldots - a_e c \in R$$ for some $c \in C$, then $S_f \cong S_{g}$. \end{itemize} \end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] The maps $\Theta_{\tau,-c,1}$ are automorphisms of $R$ for all $c \in C$ by Corollary \ref{cor:Automorphisms of D[t;delta]}. In (a) we have $g(t) = \Theta_{\tau,-c,1}(f(t))$ and so $S_f \cong S_{g}$ by Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}. In (b), since $\mathrm{Char}(D) = p \neq 0$, we have \begin{align*} \Theta_{\tau,-c,1}&(f(t)) = (t-c)^{p^e} + \tau(a_1)(t-c)^{p^{e-1}} + \ldots + \tau(a_e)(t-c) + \tau(d) \\ &= t^{p^e} - V_{p^e}(c) + \tau(a_1) \big( t^{p^{e-1}} - V_{p^{e-1}}(c) \big) + \ldots + \tau(a_e)(t-c) + \tau(d)\\ &= g(t), \end{align*} and hence $S_f \cong S_{g}$ by Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}. \item[(ii)] follows from (i) by setting $\tau = \mathrm{id}$. 
\end{itemize} \end{proof} \section{Isomorphisms Between \texorpdfstring{$S_f$}{S\_f} and \texorpdfstring{$S_g$}{S\_g} when \texorpdfstring{$f(t), g(t) \in D[t;\sigma]$}{f(t), g(t) in D[t;sigma]}} Let $D$ be an associative division ring with center $C$ and $\sigma$ be an automorphism of $D$. Suppose $\delta = 0$ so that $$f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i, \ g(t) = t^m - \sum_{i=0}^{m-1} b_i t^i \in R = D[t;\sigma],$$ then $S_f$ and $S_g$ are nonassociative algebras over $F = C \cap \mathrm{Fix}(\sigma)$. Throughout this Section, whenever we assume that $\sigma$ has order at least $m-1$, this includes the possibility that $\sigma$ has infinite order. We begin by looking at the automorphisms of $R$. Here Proposition \ref{prop:isomorphisms between skew polynomial rings} becomes: \begin{corollary} \label{cor:Automorphisms of D[t;sigma]} Let $c \in D$, $d \in D^{\times}$ and $\tau \in \mathrm{Aut}_{F}(D)$. Then $\Theta_{\tau,c,d}$ is an $F$-automorphism of $R$ if and only if $c \tau(b) = \tau(\sigma(b))c$ and $d \sigma(\tau(b)) = \tau(\sigma(b)) d$ for all $b \in D$. In particular, if $\sigma \circ \tau = \tau \circ \sigma$ then $\Theta_{\tau,0,d}$ is an $F$-automorphism if and only if $d \in C^{\times}$. \end{corollary} We employ Corollary \ref{cor:Automorphisms of D[t;sigma]}, together with Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}, to find sufficient conditions for $S_f$ and $S_g$ to be $F$-isomorphic. When $f(t)$ and $g(t)$ are not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$, and $\sigma \vert_C$ has order at least $m-1$, these conditions are also necessary: \begin{theorem} \label{thm:general_isomorphism} \begin{itemize} \item[(i)] Suppose there exists $k \in C^{\times}$ such that \begin{equation} \label{eqn:isomorphism necessity division case Q_id,k} a_i = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) b_i, \end{equation} for all $i \in \{ 0, \ldots, m-1 \}$. Then $S_f \cong S_g$.
Furthermore, for every such $k \in C^{\times}$ the maps $Q_{\mathrm{id},k}: S_f \rightarrow S_g$, \begin{equation} \label{eqn:form of Q_id,k isomorphism} Q_{\mathrm{id},k}: \sum_{i=0}^{m-1} x_i t^i \mapsto x_0 + \sum_{i=1}^{m-1} x_i \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i, \end{equation} are $F$-isomorphisms between $S_f$ and $S_g$. \item[(ii)] Suppose there exists $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ such that $\sigma$ commutes with $\tau$ and \begin{equation} \label{eqn:isomorphism necessity division case} \tau(a_i) = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) b_i, \end{equation} for all $i \in \{ 0, \ldots, m-1 \}$. Then $S_f \cong S_g$. Furthermore, for every such $\tau$ and $k$ the maps $Q_{\tau,k}: S_f \rightarrow S_g$, \begin{equation} \label{eqn:isomorphism form of Q_tau,k} Q_{\tau,k}: \sum_{i=0}^{m-1} x_i t^i \mapsto \tau(x_0) + \sum_{i=1}^{m-1} \tau(x_i) \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i, \end{equation} are $F$-isomorphisms between $S_f$ and $S_g$. \item[(iii)] Suppose $f(t)$, $g(t)$ are not right invariant, $\sigma \vert_C$ has order at least $m-1$, and $\sigma$ commutes with all $F$-automorphisms of $D$. Then $S_f \cong S_g$ if and only if there exists $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ such that \eqref{eqn:isomorphism necessity division case} holds for all $i \in \{ 0, \ldots, m-1 \}$. Every such $\tau$ and $k$ gives rise to an $F$-isomorphism $Q_{\tau,k}: S_f \rightarrow S_g$ and these are the only $F$-isomorphisms between $S_f$ and $S_g$. 
\end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] Recall $\Theta_{\mathrm{id},0,k}$ is an $F$-automorphism of $R$ by Corollary \ref{cor:Automorphisms of D[t;sigma]}, moreover \begin{align*} \Theta_{\mathrm{id},0,k}(f(t)) &= (kt)^m - \sum_{i=0}^{m-1} a_i (kt)^i \\ &= \big( \prod_{l=0}^{m-1} \sigma^l(k) \big) t^m - a_0 - \sum_{i=1}^{m-1} a_i \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i \\ &= \big( \prod_{l=0}^{m-1} \sigma^l(k) \big) \Big( t^m - \sum_{i=0}^{m-1} b_i t^i \Big) = \big( \prod_{l=0}^{m-1} \sigma^l(k) \big) g(t), \end{align*} by \eqref{eqn:isomorphism necessity division case Q_id,k} and thus $\Theta_{\mathrm{id},0,k}$ restricts to an isomorphism between $S_f$ and $S_g$ by Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}. This restriction is $Q_{\mathrm{id},k}$ by a straightforward calculation. \item[(ii)] The proof is similar to (i) using that $\Theta_{\tau,0,k} \in \mathrm{Aut}_{F}(R)$ by Corollary \ref{cor:Automorphisms of D[t;sigma]}. \item[(iii)] We are left to prove that $S_f \cong S_g$ implies there exists $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ such that \eqref{eqn:isomorphism necessity division case} holds for all $i \in \{ 0, \ldots, m-1 \}$, and that all $F$-isomorphisms between $S_f$ and $S_g$ have the form $Q_{\tau,k}$ where $\tau$ and $k$ satisfy \eqref{eqn:isomorphism necessity division case}. Suppose $S_f \cong S_g$ and let $Q: S_f \rightarrow S_g$ be an $F$-isomorphism. $f(t)$ and $g(t)$ are not right invariant which is equivalent to $S_f$ and $S_g$ not being associative, therefore $\mathrm{Nuc}_l(S_f) = \mathrm{Nuc}_l(S_g) = D$ by Theorem \ref{thm:Properties of S_f petit}(i). Since any isomorphism preserves the left nucleus, $Q(D) = D$ and so $Q \vert_D = \tau$ for some $\tau \in \mathrm{Aut}_{F}(D)$.
Suppose $Q(t) = \sum_{i=0}^{m-1} k_i t^i$ for some $k_i \in D$, then we have \begin{equation} \label{eqn:general_isomorphism_theoremI} Q(t \circ_f z) = Q(t) \circ_g Q(z) = \Big( \sum_{i=0}^{m-1} k_i t^i \Big) \circ_g \tau(z) = \sum_{i=0}^{m-1} k_i \sigma^{i}(\tau(z)) t^i, \end{equation} and \begin{equation} \label{eqn:general_isomorphism_theoremII} Q(t \circ_f z) = Q(\sigma(z)t) = \sum_{i=0}^{m-1} \tau(\sigma(z)) k_i t^i \end{equation} for all $z \in D$. Comparing the coefficients of $t^i$ in \eqref{eqn:general_isomorphism_theoremI} and \eqref{eqn:general_isomorphism_theoremII} we obtain \begin{equation} \label{eqn:general_isomorphism_theoremIII} k_i \sigma^i(\tau(z)) = k_i \tau(\sigma^i(z)) = \tau(\sigma(z)) k_i, \end{equation} for all $i \in \{ 0, \ldots, m-1 \}$ and all $z \in D$ as $\sigma$ and $\tau$ commute. In particular \begin{equation*} k_i \tau(\sigma^i(z)) = k_i \tau(\sigma(z)), \end{equation*} for all $i \in \{ 0, \ldots, m-1 \}$ and all $z \in C$. This means $$k_i \tau \big( \sigma^i(z) - \sigma(z) \big) = 0,$$ for all $i \in \{ 0, \ldots, m-1 \}$ and all $z \in C$, i.e. $k_i = 0$ or $\sigma \vert_C = \sigma^i \vert_C$ for all $i \in \{0, \ldots, m-1 \}$. Now, $\sigma \vert_C$ has order at least $m-1$ or infinite order which means $\sigma^i \vert_C \neq \sigma \vert_C$ for all $1 \neq i \in \{ 0, \ldots, m-1 \}$ and thus $k_i = 0$ for all $1 \neq i \in \{ 0, \ldots, m-1 \}$. Therefore $Q(t) = kt$ for some $k \in D^{\times}$ such that $k \tau(\sigma(z)) = \tau(\sigma(z))k$ for all $z \in D$ by \eqref{eqn:general_isomorphism_theoremIII}. Hence $k \in C^{\times}$. Furthermore, we have $$Q(z t^i) = Q(z) \circ_g Q(t)^i = \tau(z) (kt)^i = \tau(z) \Big( \prod_{l=0}^{i-1} \sigma^l(k) \Big) t^i,$$ for all $i \in \{ 1, \ldots, m-1 \}$ and all $z \in D$. Thus $Q$ has the form $$Q_{\tau,k}: \sum_{i=0}^{m-1} x_i t^i \mapsto \tau(x_0) + \sum_{i=1}^{m-1} \tau(x_i) \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i,$$ for some $k \in C^{\times}$.
Moreover, with $t^m = t \circ_f t^{m-1}$, also \begin{equation} \label{eqn:general_isomorphism_theoremV} \begin{split} Q(t^m) &= Q \big( \sum_{i=0}^{m-1} a_i t^i \big) = \sum_{i=0}^{m-1} Q(a_i) \circ_g Q(t)^i \\ &= \tau(a_0) + \sum_{i=1}^{m-1} \tau(a_i) \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i, \end{split} \end{equation} and $Q(t \circ_f t^{m-1}) = Q(t) \circ_g Q(t)^{m-1}$, i.e. \begin{equation} \label{eqn:general_isomorphism_theoremVI} Q(t^m) = Q(t) \circ_g Q(t)^{m-1} = \Big( \prod_{l=0}^{m-1} \sigma^l(k) \Big) t^m = \Big( \prod_{l=0}^{m-1} \sigma^l(k) \Big) \sum_{i=0}^{m-1} b_i t^i. \end{equation} Comparing \eqref{eqn:general_isomorphism_theoremV} and \eqref{eqn:general_isomorphism_theoremVI} gives $\tau(a_i) = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) b_i$, for all $i \in \{ 0, \ldots, m-1 \}$, and hence $Q$ has the form $Q_{\tau,k}$ where $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ satisfy \eqref{eqn:isomorphism necessity division case}. \end{itemize} \end{proof} \begin{remark} Suppose $D = \mathbb{F}_{p^n}$ is a finite field of order $p^n$, $\sigma: \mathbb{F}_{p^n} \rightarrow \mathbb{F}_{p^n}, \ k \mapsto k^p$ is the Frobenius automorphism and $f(t), g(t) \in \mathbb{F}_{p^n}[t;\sigma]$ are irreducible of degree $m \in \{ 2, 3 \}$. Then $S_f$ and $S_g$ are isomorphic semifields if and only if there exist $\tau \in \mathrm{Aut}(\mathbb{F}_{p^n})$ and $k \in \mathbb{F}_{p^n}^{\times}$ such that \eqref{eqn:isomorphism necessity division case} holds for all $i \in \{ 0, \ldots, m-1 \}$ by \cite[Theorems 4.2 and 5.4]{wene2000finite}. Therefore Theorem \ref{thm:general_isomorphism}(iii) can be seen as a generalisation of \cite[Theorems 4.2 and 5.4]{wene2000finite}. \end{remark} We obtain the following Corollaries of Theorem \ref{thm:general_isomorphism}: \begin{corollary} Let $f(t) = t^m -a \in R$ and define $g(t) = t^m - k^m a \in R$ for some $k \in F^{\times}$. Then $S_f \cong S_g$.
\end{corollary} \begin{proof} Since $k \in F^{\times} \subseteq \mathrm{Fix}(\sigma)$, we have $\Big( \prod_{l=0}^{m-1} \sigma^l(k^{-1}) \Big) (k^m a) = k^{-m} k^m a = a$, so \eqref{eqn:isomorphism necessity division case Q_id,k} holds with $k^{-1}$ in place of $k$, and $S_f \cong S_g$ by Theorem \ref{thm:general_isomorphism}(i). \end{proof} We next take a closer look at the equation \eqref{eqn:isomorphism necessity division case} to obtain necessary conditions for some $S_f$ and $S_g$ to be isomorphic. \begin{corollary} Suppose $f(t)$ and $g(t)$ are not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$. If $S_f \cong S_g$ then $a_i = 0$ is equivalent to $b_i = 0$ for all $i \in \{ 0, \ldots, m-1 \}$. \end{corollary} \begin{proof} Suppose exactly one of $a_i$, $b_i$ is zero. Then $\tau(a_i) \neq \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) b_i$ for all $\tau \in \mathrm{Aut}_{F}(D)$ and all $k \in C^{\times}$, since precisely one side of the equation is zero, and so $S_f \ncong S_g$ by Theorem \ref{thm:general_isomorphism}(iii), a contradiction. \end{proof} Now suppose $C/F$ is a proper field extension of finite degree, $D$ is finite-dimensional as an algebra over $C$, and $D$ is also finite-dimensional when considered as an algebra over $F$. Let $N_{D/C}$ denote the norm of $D$ considered as an algebra over $C$, $N_{D/F}$ the norm of $D$ considered as an algebra over $F$ and $N_{C/F}$ the norm of the field extension $C/F$. We have $N_{D/F}(z) = N_{C/F}(N_{D/C}(z))$ and $N_{D/F}(\tau(z)) = N_{D/F}(z)$, for all $\tau \in \mathrm{Aut}_{F}(D)$ and all $z \in D$ \cite[p.~547, 548]{magurn2002algebraic}. \begin{corollary} \label{cor:norm isomorphism} Suppose $f(t)$ and $g(t)$ are not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$. \begin{itemize} \item[(i)] If $b_0 \neq 0$ and $N_{D/F}(a_0 b_0^{-1}) \notin F^{\times m}$ then $S_f \ncong S_g$. \item[(ii)] If there exists $i \in \{ 0, \ldots, m-1 \}$ such that $b_i \neq 0$ and $N_{D/F}(a_i b_i^{-1}) \notin F^{\times (m-i)}$ then $S_f \ncong S_g$.
\end{itemize} \end{corollary} \begin{proof} We prove (ii) since setting $i = 0$ in (ii) yields (i). Suppose, for a contradiction, that $S_f \cong S_g$. Then there exists $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ such that $\tau(a_i) = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) b_i$ by Theorem \ref{thm:general_isomorphism}(iii). Applying $N_{D/F}$ we obtain \begin{equation*} \begin{split} N_{D/F}(\tau(a_i)) &= N_{D/F}(a_i) = N_{D/F} \big( \big( \prod_{l=i}^{m-1} \sigma^l(k) \big) b_i \big) = N_{D/F}(k)^{m-i} N_{D/F}(b_i). \end{split} \end{equation*} This implies $N_{D/F}(a_i b_i^{-1}) = N_{D/F}(k)^{m-i}$ by the multiplicativity of the norm, but here $N_{D/F}(k)^{m-i} \in F^{\times (m-i)}$, a contradiction. \end{proof} In \cite{AndrewPhD}, isomorphisms between two nonassociative cyclic algebras were briefly investigated. We show Theorem \ref{thm:general_isomorphism}(iii) specialises to \cite[Proposition 3.2.8]{AndrewPhD} and \cite[Corollary 6.2.5]{AndrewPhD}. Let $K/F$ be a cyclic Galois field extension of degree $m$ with $\mathrm{Gal}(K/F) = \langle \sigma \rangle$ and $a, b \in K \setminus F$. Then Theorem \ref{thm:general_isomorphism}(iii) shows when the nonassociative cyclic algebras $(K/F,\sigma,a)$ and $(K/F,\sigma,b)$ are isomorphic: \begin{corollary} \label{cor:isomorphisms of nonassociative cyclic algebras} (\cite[Proposition 3.2.8]{AndrewPhD}). $(K/F,\sigma,a) \cong (K/F,\sigma,b)$ if and only if there exists $k \in K^{\times}$ and $j \in \{ 0, \ldots, m-1 \}$ such that $\sigma^j(a) = \big( \prod_{l=0}^{m-1} \sigma^l(k) \big) b = N_{K/F}(k) b$. \end{corollary} Now suppose additionally $K$ and $F$ are finite fields.
It is well-known that the norm $N_{K/F}:K^{\times} \rightarrow F^{\times}$ is surjective for finite extensions of finite fields and so by Corollary \ref{cor:isomorphisms of nonassociative cyclic algebras}, we conclude: \begin{corollary} \label{cor:isomorphisms between nonassociative cyclic algebras over finite fields} (\cite[Corollary 6.2.5]{AndrewPhD}). $(K/F,\sigma,a) \cong (K/F,\sigma,b)$ if and only if $\sigma^j(a) = k b$ for some $j \in \{ 0, \ldots, m-1 \}$ and some $k \in F^{\times}$. \end{corollary} \chapter{Automorphisms of Petit Algebras} \label{chapter:Automorphisms of S_f} Let $D$ be an associative division ring with center $C$, $\sigma$ be a ring automorphism of $D$, $\delta$ be a left $\sigma$-derivation of $D$ and $f(t) \in R = D[t;\sigma,\delta]$. Here $S_f$ is a nonassociative algebra over $F = C \cap \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$. Since $S_f = S_{df}$ for all $d \in D^{\times}$, we may assume w.l.o.g. that $f(t)$ is monic: if $f(t)$ has leading coefficient $d \in D^{\times}$, we consider $d^{-1}f(t)$ instead. In this Chapter we study the automorphism groups of Petit algebras, building upon our results in Chapter \ref{chapter:Isomorphisms Between some Petit Algebras}. We obtain partial results in the most general case where $\sigma$ is not necessarily the identity and $\delta$ is not necessarily $0$; however, most of our attention is given to the cases where $f(t)$ is either a differential or twisted polynomial (see Sections \ref{section:Automorphisms of S_f, f(t) in D[t;delta]} and \ref{section:Automorphisms of S_f when f in D[t;sigma]} respectively). Later in Chapter \ref{chapter:Automorphisms of Nonassociative Cyclic Algebras} we go on to study the automorphism groups of nonassociative cyclic algebras.
\vspace*{4mm} If $f(t) \in R$ is not right invariant, then $S_f$ is not associative and any automorphism of $S_f$ extends automorphisms of $\mathrm{Nuc}_l(S_f)$, $\mathrm{Nuc}_m(S_f)$ and $\mathrm{Nuc}_r(S_f)$ because the nuclei are invariant under automorphisms. Since $\mathrm{Nuc}_l(S_f) = \mathrm{Nuc}_m(S_f) = D$ and $\mathrm{Nuc}_r(S_f) = E(f)$ by Theorem \ref{thm:Properties of S_f petit}, we conclude: \begin{lemma} \label{lem:automorphism restricts to right nucleus} If $f(t) \in R$ is not right invariant, any $F$-automorphism of $S_f$ extends an $F$-automorphism of $D$ and an $F$-automorphism of $E(f)$. \end{lemma} By Theorem \ref{thm:lavrauw isomorphic petit algebras generalisation}, automorphisms $\Theta: R \rightarrow R$ induce isomorphisms between $S_f$ and $S_{\Theta(f(t))}$. This means if $\Theta(f(t)) = l f(t)$ for some $l \in D^{\times}$, then $\Theta$ induces an isomorphism $S_f \cong S_{\Theta(f(t))} = S_{l f(t)} = S_f$, i.e. $\Theta$ induces an automorphism of $S_f$: \begin{theorem} \label{thm:automorphism of skew polynomial induces automorphism of S_f} If $\Theta$ is an $F$-automorphism of $R$ and $\Theta(f(t)) = l f(t)$ for some $l \in D^{\times}$, then $\Theta(b \circ c) = \Theta(b) \circ \Theta(c)$ for all $b, c \in S_f$, i.e. $\Theta$ induces an $F$-automorphism of $S_f$. \end{theorem} Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f} allows us to find automorphisms of Petit algebras which are induced by automorphisms of $R$, later focusing on the special cases where $R$ is a differential or twisted polynomial ring.
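To illustrate Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f}, we sketch a special case which is treated in full generality in Section \ref{section:Automorphisms of S_f when f in D[t;sigma]}: let $K/F$ be a cyclic Galois field extension of degree $m$ with $\mathrm{Gal}(K/F) = \langle \sigma \rangle$, $R = K[t;\sigma]$ and $f(t) = t^m - a \in R$ with $a \in F^{\times}$. For $k \in K^{\times}$, the map $$\Theta: R \rightarrow R, \ \sum_{i=0}^{n} b_i t^i \mapsto \sum_{i=0}^{n} b_i (kt)^i,$$ is an $F$-automorphism of $R$ (cf. Corollary \ref{cor:Automorphisms of D[t;sigma,delta]}), and since $$(kt)^m = \Big( \prod_{l=0}^{m-1} \sigma^l(k) \Big) t^m = N_{K/F}(k) t^m,$$ we obtain $\Theta(f(t)) = N_{K/F}(k) t^m - a$. In particular, whenever $N_{K/F}(k) = 1$ we have $\Theta(f(t)) = f(t)$, so $\Theta$ induces an $F$-automorphism of $S_f$ by Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f}.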
More generally, when $\sigma$ is not necessarily the identity, and $\delta$ is not necessarily $0$, we can still use Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f} to obtain non-trivial automorphisms of $S_f$ for some quadratic $f(t)$: \begin{proposition} \label{prop:aut of S_f from automorphism of D[t;sigma,delta]} Suppose $1 \neq d \in C^{\times}$ is such that $d \sigma(d) = 1$ and $\delta$ is the inner $\sigma$-derivation given by $$\delta: D \rightarrow D, \ b \mapsto c(1-d)^{-1} b - \sigma(b) c(1-d)^{-1},$$ for some $c \in D^{\times}$. If $f(t) = t^2 - (c - d \sigma(c))(1 - d)^{-1} t - a \in R,$ then the map $$H_{\mathrm{id},c,d}: S_f \rightarrow S_f, \ x_0 + x_1t \mapsto x_0 + x_1(c + d t),$$ is a non-trivial $F$-automorphism of $S_f$. Moreover, if additionally $d$ is a primitive $n^{\text{th}}$ root of unity for some $n > 1$, then $\langle H_{\mathrm{id},c,d} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_{F}(S_f)$ of order $n$. \end{proposition} \begin{proof} Recall $$\Theta :R \rightarrow R, \ \sum_{i=0}^{n} b_i t^i \mapsto \sum_{i=0}^n b_i (c + d t)^i,$$ is an $F$-automorphism by Corollary \ref{cor:Automorphisms of D[t;sigma,delta]}. We have \begin{align*} \Theta(f(t)) &= (c + dt)^2 - (c - d \sigma(c))(1-d)^{-1} (c + d t) - a \\ &= d \sigma(d) t^2 + \Big( c d + d \sigma(c) + d \delta(d) - (c - d \sigma(c)) (1-d)^{-1} d \Big) t \\ & \qquad + \Big( c^2 + d \delta(c) - (c - d \sigma(c)) (1-d)^{-1} c - a \Big) \\ &= t^2 - (c - d \sigma(c))(1-d)^{-1} t - a = f(t), \end{align*} where we have used that $d \sigma(d) = 1$, hence the first assertion follows by Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f}. 
To prove the final assertion we first show $H_{\mathrm{id},c,d}^n$ has the form \begin{equation} \label{eqn:form of H^n contain primitive nth root} H_{\mathrm{id},c,d}^n(x_0 + x_1t) = x_0 + x_1 c (1-d^n)(1-d)^{-1} + x_1 d^n t, \end{equation} for all $n \in \mathbb{N}$ by induction: For $n = 1$ we have $$x_0 + x_1 c (1-d^n)(1-d)^{-1} + x_1 d^n t = x_0 + x_1 c + x_1 d t = H_{\mathrm{id},c,d}(x_0 + x_1t)$$ as required. Assume as induction hypothesis that \eqref{eqn:form of H^n contain primitive nth root} holds for some $n \geq 1$, then \begin{align*} H_{\mathrm{id},c,d}&^{n+1}(x_0 + x_1t) = H_{\mathrm{id},c,d} \big( H_{\mathrm{id},c,d}^n(x_0 + x_1 t) \big) \\ &= H_{\mathrm{id},c,d} \big( x_0 + x_1 c (1-d^n)(1-d)^{-1} + x_1 d^n t \big) \\ &= x_0 + x_1 c (1-d^n)(1-d)^{-1} + x_1 d^n (c + d t) \\ &= x_0 + x_1 c \sum_{j=0}^{n-1} d^j + x_1 c d^n + x_1 d^{n+1}t = x_0 + x_1 c \sum_{j=0}^{n} d^j + x_1 d^{n+1}t \\ &= x_0 + x_1 c (1-d^{n+1})(1-d)^{-1} + x_1 d^{n+1} t, \end{align*} and thus \eqref{eqn:form of H^n contain primitive nth root} holds by induction. In particular \eqref{eqn:form of H^n contain primitive nth root} implies $H_{\mathrm{id},c,d}^n = \mathrm{id}$ if and only if $d^n = 1$, therefore if $d$ is a primitive $n^{\text{th}}$ root of unity, $\langle H_{\mathrm{id},c,d} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_{F}(S_f)$ of order $n$. \end{proof} Setting $d = -1$ in Proposition \ref{prop:aut of S_f from automorphism of D[t;sigma,delta]} gives: \begin{corollary} Suppose $\mathrm{Char}(D) \neq 2$ and $\delta$ is the inner $\sigma$-derivation given by $$\delta: D \rightarrow D, \ b \mapsto \frac{c}{2}b - \sigma(b) \frac{c}{2},$$ for some $c \in D^{\times}$. Then for $$f(t) = t^2 - \frac{1}{2}(c + \sigma(c)) t - a \in R,$$ the map $H_{\mathrm{id},c,-1}$ is an $F$-automorphism of $S_f$. Moreover $\{ \mathrm{id}, H_{\mathrm{id},c,-1} \}$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. 
\end{corollary} \begin{example} Suppose $K = F(i)$, where $i^2 = -1$, is a quadratic separable extension of $F$ with non-trivial automorphism $\sigma$ and let $\delta$ be the inner $\sigma$-derivation $$\delta:K \rightarrow K, \ b \mapsto \frac{c}{1-i}(b-\sigma(b)),$$ for some $c \in K^{\times}$. Then $i \sigma(i) = -i^2 = 1$ and $i$ is a primitive $4^{\text{th}}$ root of unity, so for $$f(t) = t^2 - \frac{c - i \sigma(c)}{1-i}t - a \in K[t;\sigma,\delta],$$ the map $H_{\mathrm{id},c,i}$ is an automorphism of $S_f$ of order $4$ by Proposition \ref{prop:aut of S_f from automorphism of D[t;sigma,delta]}. \end{example} \begin{proposition} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in F[t] = F[t;\sigma,\delta] \subset R$. Then for all $\tau \in \mathrm{Aut}_{F}(D)$ such that $\sigma \circ \tau = \tau \circ \sigma$ and $\delta \circ \tau = \tau \circ \delta$, the maps $$H_{\tau,0,1}:S_f \rightarrow S_f, \ \sum_{i=0}^{m-1} b_i t^i \mapsto \sum_{i=0}^{m-1} \tau(b_i) t^i,$$ are $F$-automorphisms of $S_f$. \end{proposition} \begin{proof} Let $\tau \in \mathrm{Aut}_{F}(D)$, then $$\Theta: R \rightarrow R, \ \sum_{i=0}^n b_i t^i \mapsto \sum_{i=0}^n \tau(b_i) t^i,$$ is an $F$-automorphism if and only if $\sigma \circ \tau = \tau \circ \sigma$ and $\delta \circ \tau = \tau \circ \delta$ by Corollary \ref{cor:Automorphisms of D[t;sigma,delta]}. If all the coefficients of $f(t)$ are contained in $F$, a straightforward computation yields $\Theta(f(t)) = f(t)$, hence $H_{\tau,0,1} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f}. \end{proof} Given a nonassociative $F$-algebra $A$, an element $0 \neq c \in A$ has a \textbf{left inverse} $c_l \in A$ if $R_c(c_l) = c_l c = 1$, and a \textbf{right inverse} $c_r \in A$ if $L_c(c_r) = c c_r = 1$. We say $c$ is \textbf{invertible} if it has both a left and a right inverse.
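For instance, if $A$ is a finite-dimensional division algebra over $F$ (in particular, a finite semifield), then for every $0 \neq c \in A$ the left and right multiplication maps $L_c, R_c: A \rightarrow A$ are injective $F$-linear maps, hence bijective, so every non-zero element of $A$ is invertible with $$c_l = R_c^{-1}(1) \quad \text{and} \quad c_r = L_c^{-1}(1);$$ note that $c_l$ and $c_r$ need not coincide when $A$ is not associative.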
In \cite[p.~233]{wene2010inner} the definition of inner automorphisms of associative algebras is generalised to finite nonassociative division algebras, also called finite semifields. We can generalise this definition further to arbitrary nonassociative algebras: \begin{definition} An automorphism $G \in \mathrm{Aut}_F(A)$ is an \textbf{inner automorphism} if there is an element $0 \neq c \in A$ with left inverse $c_l$, such that $G(x) = (c_l x)c$ for all $x \in A$. \end{definition} Equivalently, an automorphism $G$ is inner if there is an element $0 \neq c \in A$ with left inverse $c_l$, such that $G(x) = R_c(L_{c_l}(x))$ for all $x \in A$. When $A$ is a finite semifield, every non-zero element of $A$ is left invertible and our definition reduces to the definition of inner automorphisms given in \cite{wene2010inner}. Given an inner automorphism $G \in \mathrm{Aut}_F(A)$ and some $H \in \mathrm{Aut}_F(A)$, a straightforward calculation shows that $H^{-1} \circ G \circ H$ is the inner automorphism $A \rightarrow A, \ x \mapsto \big( H^{-1}(c_l)x \big) H^{-1}(c)$. If $c \in \mathrm{Nuc}(A)$ is invertible, then $c_l = c_r$ as $\mathrm{Nuc}(A)$ is an associative algebra. We denote this element $c^{-1}$. Let $G_c$ denote the map $G_c: A \rightarrow A, \ x \mapsto (c^{-1} x)c$ for all invertible $c \in \mathrm{Nuc}(A)$. We remark that if $c \in D^{\times}$ and $f(t) \in D[t;\sigma,\delta]$ is not right invariant, then multiplying $1 = c_l c$ on the right by $c_r$ and using $c \in \mathrm{Nuc}_m(S_f)$ yields $c_r = (c_l c)c_r = c_l (c c_r) = c_l = c^{-1}$. \begin{proposition} \label{prop:Inner automorphisms nucleus} $Q = \{ G_c \ \vert \ c \in \mathrm{Nuc}(A) \text{ is invertible} \}$ is a subgroup of $\mathrm{Aut}_F(A)$ consisting of inner automorphisms.
\end{proposition} \begin{proof} We have \begin{align*} G_c(x) G_c(y) &= \big( (c^{-1} x)c \big) \big( (c^{-1} y)c \big) = (c^{-1} x) [ c ( (c^{-1} y)c ) ] = (c^{-1} x) [ [ c c^{-1} y ] c ] \\ &= (c^{-1} x) (yc) = \big( (c^{-1} x)y \big) c = (c^{-1} (xy))c = G_c(xy), \end{align*} for all $x, y \in A$, where we have used $c^{-1} \in \mathrm{Nuc}(A)$. Hence $G_c$ is multiplicative. Now $G_c$ is clearly bijective and $F$-linear, and so the maps $G_c$ are $F$-automorphisms of $A$ for all invertible $c \in \mathrm{Nuc}(A)$. We now prove $Q$ is a subgroup of $\mathrm{Aut}_F(A)$: Clearly $\mathrm{id}_A = G_1 \in Q$. Let $G_c, G_d \in Q$ for some invertible $c, d \in \mathrm{Nuc}(A)$, then $$G_d(G_c(x)) = \big( d^{-1}((c^{-1}x)c) \big) d = (d^{-1} c^{-1} x) cd = (cd)^{-1}xcd = G_{cd}(x)$$ for all $x \in A$ and so $Q$ is closed under composition. Finally $G_c \circ G_{c^{-1}} = G_1 = \mathrm{id}$, therefore $Q$ is also closed under inverses since $c^{-1} \in \mathrm{Nuc}(A)$. Thus $Q$ is a subgroup of $\mathrm{Aut}_F(A)$. \end{proof} In the case where $A$ is a finite semifield, Proposition \ref{prop:Inner automorphisms nucleus} was proven in \cite[2 Lemma]{wene2010inner}. By Proposition \ref{prop:Inner automorphisms nucleus} we conclude: \begin{corollary} \label{cor:inner automorphisms of S_f, D[t;sigma,delta]} Given $f(t) \in R = D[t;\sigma,\delta]$, the maps $G_c(x) = (c^{-1} \circ x) \circ c$ are inner automorphisms of $S_f$ for all invertible $c \in \mathrm{Nuc}(S_f)$. In particular, if $f(t)$ is right semi-invariant the maps $G_c$ are inner automorphisms of $S_f$ for all $c \in D^{\times}$. \end{corollary} \begin{proof} The first assertion follows immediately from Proposition \ref{prop:Inner automorphisms nucleus}. By Theorem \ref{thm:semi-invariant iff D contained in E(f)}, $f(t) \in R$ is right semi-invariant if and only if $D \subseteq \mathrm{Nuc}_r(S_f)$, thus either $S_f$ is associative or $\mathrm{Nuc}(S_f) = D \cap \mathrm{Nuc}_r(S_f) = D$.
Therefore the maps $G_c$ are inner automorphisms of $S_f$ for all $c \in D^{\times}$. \end{proof} \section{Automorphisms of \texorpdfstring{$S_f$}{S\_f}, \texorpdfstring{$f(t) \in R = D[t;\delta]$}{f(t) in R = D[t;delta]}} \label{section:Automorphisms of S_f, f(t) in D[t;delta]} Now suppose $D$ is an associative division ring of characteristic $p \neq 0$ and center $C$, $\sigma = \mathrm{id}$ and $0 \neq \delta$ is a derivation of $D$. In this Section we investigate the automorphisms of $S_f$ in the special case where $$f(t) = t^{p^e} + a_1 t^{p^{e-1}} + \ldots + a_e t + d \in R = D[t;\delta].$$ Here $S_f$ is a nonassociative algebra over $F = C \cap \mathrm{Const}(\delta)$. Recall from \eqref{p-power formula char p} that as $D$ has characteristic $p$, we can write $(t-b)^p = t^p - V_p(b)$, where $V_p(b) = b^p + \delta^{p-1}(b) + *$ for all $b \in D$, with $*$ a sum of commutators of $b, \delta(b), \ldots, \delta^{p-2}(b)$. In particular, if $b \in C$ then $* = 0$ and $V_p(b) = b^p + \delta^{p-1}(b)$. An iteration yields $(t-b)^{p^e} = t^{p^e} - V_{p^e}(b)$, for all $b \in D$ with $V_{p^e}(b) = V_p^e(b) = V_p(\ldots(V_p(b))\ldots)$. The automorphisms of $R$ are described in Corollary \ref{cor:Automorphisms of D[t;delta]}. Together with Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f}, this leads us to the following result, which generalises \cite[Proposition 7]{pumpluen2016nonassociative} in which $\tau = \mathrm{id}$: \begin{corollary} \label{cor:auto of S_f D[t;delta] from auto of D[t;delta]} Suppose $\tau \in \mathrm{Aut}_{F}(D)$ commutes with $\delta$ and $$f(t) = t^{p^e} + a_1 t^{p^{e-1}} + \ldots + a_e t + d \in R,$$ where $a_1, \ldots, a_e \in \mathrm{Fix}(\tau)$.
Then for any $b \in C$ such that \begin{equation} \label{eqn:auto of S_f D[t;delta] from auto of D[t;delta]} V_{p^e}(b) + a_1 V_{p^{e-1}}(b) + \ldots + a_e b + d = \tau(d), \end{equation} the map \begin{equation} \label{eqn:form of H_tau,-b,1 sigma=id} H_{\tau,-b,1}: S_f \rightarrow S_f, \ \sum_{i=0}^{p^e-1} x_i t^i \mapsto \sum_{i=0}^{p^e-1} \tau(x_i)(t-b)^i, \end{equation} is an $F$-automorphism of $S_f$. \end{corollary} \begin{proof} The map $$\Theta: R \rightarrow R, \ \sum_{i=0}^n b_i t^i \mapsto \sum_{i=0}^n \tau(b_i) (t-b)^i,$$ is an $F$-automorphism for all $b \in C$ by Corollary \ref{cor:Automorphisms of D[t;delta]}. Furthermore, a close inspection of the proof of Corollary \ref{cor:Isomorphisms of Petit algebras D[t;delta]} shows $\Theta(f(t)) = f(t)$ if and only if \eqref{eqn:auto of S_f D[t;delta] from auto of D[t;delta]} holds, thus the assertion follows by Theorem \ref{thm:automorphism of skew polynomial induces automorphism of S_f}. \end{proof} Let $f(t) = t^p - t - d \in R$, then $H_{\tau,-b,1} \in \mathrm{Aut}_{F}(S_f)$ for all $b \in C$, $\tau \in \mathrm{Aut}_{F}(D)$ such that $\tau \circ \delta = \delta \circ \tau$ and $$\tau(d) = b - V_p(b) + d = b + d - b^p - \delta^{p-1}(b)$$ by Corollary \ref{cor:auto of S_f D[t;delta] from auto of D[t;delta]}. In addition, if $f(t)$ is not right invariant, $\delta$ commutes with all $F$-automorphisms of $D$ and $F \subsetneq C$, these are the only $F$-automorphisms of $S_f$: \begin{theorem} \label{thm:automorphisms of S_f t^p-t-a derivation} Let $f(t) = t^p - t - d \in R$. Then for all $\tau \in \mathrm{Aut}_{F}(D)$ and $b \in C$ such that $\tau$ commutes with $\delta$ and \begin{equation} \label{eqn:automorphisms of S_f t^p-t-a derivation} \tau(d) = b + d - b^p - \delta^{p-1}(b), \end{equation} the maps $H_{\tau,-b,1}$ given by \eqref{eqn:form of H_tau,-b,1 sigma=id} are $F$-automorphisms of $S_f$. 
Moreover, if $f(t)$ is not right invariant, $\delta$ commutes with all $F$-automorphisms of $D$ and $F \subsetneq C$, these are all the automorphisms of $S_f$. \end{theorem} \begin{proof} Suppose $f(t)$ is not right invariant, $F \subsetneq C$ and $H \in \mathrm{Aut}_F(S_f)$. By Corollary \ref{cor:auto of S_f D[t;delta] from auto of D[t;delta]} we are left to show $H$ has the form $H_{\tau,-b,1}$ for some $\tau \in \mathrm{Aut}_{F}(D)$ and $b \in C$ satisfying \eqref{eqn:automorphisms of S_f t^p-t-a derivation}. Since $f(t)$ is not right invariant, $S_f$ is not associative and $\mathrm{Nuc}_l(S_f) = D$ by Theorem \ref{thm:Properties of S_f petit}. Any automorphism of $S_f$ must preserve the left nucleus, thus $H(D) = D$ and so $H \vert_D = \tau$ for some $\tau \in \mathrm{Aut}_{F}(D)$. Write $H(t) = \sum_{i=0}^{p-1} b_i t^i$ for some $b_i \in D$, then we have \begin{equation} \label{eqn:automorphisms of S_f t^p-t-a derivation 1} H(t \circ z) = H(t) \circ H(z) = \sum_{i=0}^{p-1} b_i t^i \circ \tau(z) = \sum_{i=0}^{p-1} \sum_{j=0}^i \binom{i}{j} b_i \delta^{i-j}(\tau(z))t^j, \end{equation} and \begin{equation} \label{eqn:automorphisms of S_f t^p-t-a derivation 2} H(t \circ z) = H(zt + \delta(z)) = \tau(z) \sum_{i=0}^{p-1} b_i t^i + \tau(\delta(z)), \end{equation} for all $z \in D$. Comparing the coefficients of $t^{p-2}$ in \eqref{eqn:automorphisms of S_f t^p-t-a derivation 1} and \eqref{eqn:automorphisms of S_f t^p-t-a derivation 2} we obtain $$\tau(z) b_{p-2} = b_{p-2} \tau(z) + \binom{p-1}{p-2} b_{p-1} \delta(\tau(z)),$$ for all $z \in D$. In particular, $$\binom{p-1}{p-2} b_{p-1} \delta(\tau(z)) = 0,$$ for all $z \in C$, therefore $b_{p-1} = 0$ since $\binom{p-1}{p-2} \neq 0$ and $\delta(\tau(z)) = \tau(\delta(z)) \neq 0$ for all $z \in C$ with $z \notin \mathrm{Const}(\delta)$. Such a $z$ exists because $F \subsetneq C$. 
Similarly, looking in turn at the coefficients of $t^{p-3}, \ldots, t^2, t$ in \eqref{eqn:automorphisms of S_f t^p-t-a derivation 1} and \eqref{eqn:automorphisms of S_f t^p-t-a derivation 2} yields $b_{p-1} = b_{p-2} = \ldots = b_2 = 0$, and comparing the coefficients of $t^0$ in \eqref{eqn:automorphisms of S_f t^p-t-a derivation 1} and \eqref{eqn:automorphisms of S_f t^p-t-a derivation 2} we get \begin{equation} \label{eqn:automorphisms of S_f t^p-t-a derivation 3} \tau(z) b_0 + \tau(\delta(z)) = b_0 \tau(z) + b_1 \delta(\tau(z)), \end{equation} for all $z \in D$. In particular, this means $\tau(\delta(z)) = b_1 \tau(\delta(z))$ for all $z \in C$ since $\tau$ and $\delta$ commute. Hence $b_1 = 1$ as $F \subsetneq C$. We conclude $\tau(z) b_0 = b_0 \tau(z)$ for all $z \in D$ by \eqref{eqn:automorphisms of S_f t^p-t-a derivation 3}, thus $b_0 \in C$ and so $H(t) = t - b$ for some $b \in C$. We now show $H$ has the form \eqref{eqn:form of H_tau,-b,1 sigma=id}: We have $$H(z \circ t^i) = H(z) \circ H(t)^i = \tau(z)(t-b)^i,$$ for all $i \in \{ 1, \ldots, p-1 \}$, $z \in D$, and so $H$ has the form $H_{\tau,-b,1}$ for some $b \in C$. Moreover, with $t^p = t \circ t^{p-1}$, also \begin{equation} \label{eqn:automorphisms of S_f t^p-t-a derivation 4} H(t^p) = H(t + d) = t - b + \tau(d), \end{equation} and $H(t \circ t^{p-1}) = H(t) \circ H(t)^{p-1}$, i.e. \begin{align} \label{eqn:automorphisms of S_f t^p-t-a derivation 5} \begin{split} H(t^p) &= H(t) \circ H(t)^{p-1} = (t-b)^p \ \mathrm{mod}_r \ f \\ &= t^p - V_p(b) \ \mathrm{mod}_r \ f = t + d - V_p(b). \end{split} \end{align} Comparing \eqref{eqn:automorphisms of S_f t^p-t-a derivation 4} and \eqref{eqn:automorphisms of S_f t^p-t-a derivation 5} gives \begin{equation} \label{eqn:tau(d) = d - V_p(b) + b} \tau(d) = d - V_p(b) + b = b + d - b^p - \delta^{p-1}(b), \end{equation} and thus $H$ has the form $H_{\tau,-b,1}$, where $\tau \in \mathrm{Aut}_{F}(D)$ and $b \in C$ satisfy \eqref{eqn:tau(d) = d - V_p(b) + b}.
\end{proof} When $f(t) = t^p - t - d \in R$, we see $H_{\mathrm{id}, -1,1} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphisms of S_f t^p-t-a derivation}. Additionally, a straightforward calculation shows $H_{\mathrm{id}, -1,1}^i(t) = t-i$ for all $i \in \mathbb{N}$, therefore since $\mathrm{Char}(D) = p$, we have $H_{\mathrm{id}, -1,1}^i \neq \mathrm{id}$ for all $i \in \{ 1, \ldots, p-1 \}$. Furthermore $$\Theta : R \rightarrow R: \sum_{i=0}^{n} b_i t^i \mapsto \sum_{i=0}^{n} b_i (t-1)^i$$ is an automorphism of order $p$ \cite[p.~90]{amitsur1954non}, thus $\langle H_{\mathrm{id}, -1,1} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_{F}(S_f)$ of order $p$ \cite[Lemma 9]{pumpluen2016nonassociative}. \vspace*{4mm} Notice the equation \eqref{eqn:automorphisms of S_f t^p-t-a derivation} in Theorem \ref{thm:automorphisms of S_f t^p-t-a derivation} is remarkably similar to \eqref{eqn:(t-b) right divides f(t) in Char p} in Proposition \ref{prop:(t-b) right divides f(t) in Char p}. Comparing them yields a connection between the automorphisms of $S_f$ and factors of certain differential polynomials: \begin{proposition} \label{prop:automorphisms right linear division f(t)=t^p-t-a} Let $\tau \in \mathrm{Aut}_{F}(D)$, $f(t) = t^p - t - d \in R$ and $g(t) = t^p - t - (d - \tau(d)) \in R$. \begin{itemize} \item[(i)] Suppose $\tau$ commutes with $\delta$. If $b \in C^{\times}$ is such that $(t-b) \vert_r g(t)$, then $H_{\tau,-b,1} \in \mathrm{Aut}_{F}(S_f)$. \item[(ii)] Given $b \in C^{\times}$, if $(t-b) \vert_r (t^p-t)$ then $H_{\mathrm{id},-b,1} \in \mathrm{Aut}_{F}(S_f)$. \item[(iii)] Suppose $f(t)$ is not right invariant, $\delta$ commutes with all $F$-automorphisms of $D$ and $F \subsetneq C$. Then $H_{\tau,-b,1} \in \mathrm{Aut}_{F}(S_f)$ if and only if $b \in C^{\times}$ and $(t-b) \vert_r g(t).$ In particular, if $b \in C^{\times}$ then $H_{\mathrm{id},-b,1} \in \mathrm{Aut}_{F}(S_f)$ if and only if $(t-b) \vert_r (t^p-t)$. 
\item[(iv)] Suppose $f(t)$ is not right invariant, $\delta$ commutes with all $F$-automorphisms of $D$ and $F \subsetneq C$. If $g(t)$ is irreducible then $H_{\tau,-b,1} \notin \mathrm{Aut}_F(S_f)$ for all $b \in C^{\times}$. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] We have $(t-b) \vert_r g(t)$ is equivalent to $V_p(b) - b - d + \tau(d) = 0$ by Proposition \ref{prop:(t-b) right divides f(t) in Char p}. As $\tau$ commutes with $\delta$ and $b \in C^{\times}$, this yields $H_{\tau,-b,1} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphisms of S_f t^p-t-a derivation}. \item[(ii)] follows by setting $\tau = \mathrm{id}$ in (i). \item[(iii)] If $f(t)$ is not right invariant and $F \subsetneq C$, then $H_{\tau,-b,1} \in \mathrm{Aut}_{F}(S_f)$ if and only if $\tau$ commutes with $\delta$, $b \in C^{\times}$ and $\tau(d) = b - V_p(b) + d$ by Theorem \ref{thm:automorphisms of S_f t^p-t-a derivation}. Now $(t-b) \vert_r g(t)$ is equivalent to $V_p(b) - b - d + \tau(d) = 0$ by Proposition \ref{prop:(t-b) right divides f(t) in Char p} and the assertion follows. \item[(iv)] If $g(t)$ is irreducible then in particular, $(t-b) \nmid_r g(t)$ for all $b \in C^{\times}$ and the assertion follows by (iii). \end{itemize} \end{proof} \section{Automorphisms of \texorpdfstring{$S_f$}{S\_f}, \texorpdfstring{$f(t) \in R = D[t;\sigma]$}{f(t) in R = D[t;sigma]}} \label{section:Automorphisms of S_f when f in D[t;sigma]} Now suppose $D$ is an associative division ring with center $C$, $\mathrm{Char}(D)$ is arbitrary, $\sigma$ is a non-trivial automorphism of $D$ and $f(t) \in R = D[t;\sigma]$. Thus $S_f$ is a nonassociative algebra over $F = C \cap \mathrm{Fix}(\sigma)$. Throughout this Section, if we assume $\sigma$ has order $\geq m-1$ we include infinite order. From Theorem \ref{thm:general_isomorphism} we obtain: \begin{theorem} \label{thm:automorphism_of_S_f_division_case} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$. 
\begin{itemize} \item[(i)] For every $k \in C^{\times}$ such that \begin{equation} \label{eqn:automorphism necessary id} a_i = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) a_i, \end{equation} for all $i \in \{ 0, \ldots, m-1 \}$, the maps \begin{equation} H_{\mathrm{id},k}: \sum_{i=0}^{m-1} x_i t^i \mapsto x_0 + \sum_{i=1}^{m-1} x_i \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i, \end{equation} are $F$-automorphisms of $S_f$. Furthermore $$\{ H_{\mathrm{id},k} \ \vert \ k \in C^{\times} \text{ satisfies } \eqref{eqn:automorphism necessary id} \text{ for all } i \in \{ 0, \ldots, m-1 \} \}$$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \item[(ii)] For every $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ such that $\tau$ commutes with $\sigma$ and \begin{equation} \label{eqn:automorphism necessary} \tau(a_i) = \Big( \prod_{l=i}^{m-1}\sigma^l(k) \Big) a_i, \end{equation} for all $i \in \{ 0, \ldots, m-1 \}$, the maps \begin{equation} \label{automorphism_of_Sf form of H} H_{\tau , k} : \sum_{i=0}^{m-1} x_i t^i \mapsto \tau(x_0) + \sum_{i=1}^{m-1} \tau(x_i) \big( \prod_{l=0}^{i-1} \sigma^l(k) \big) t^i, \end{equation} are $F$-automorphisms of $S_f$. Moreover \begin{align*} \{ H_{\tau,k} \ \vert \ \tau \in \mathrm{Aut}_{F}(D), \ \tau \circ \sigma = \sigma \circ \tau, \ k \in C^{\times} \text{ satisfies } \eqref{eqn:automorphism necessary} \text{ for all } i \} \end{align*} is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \item[(iii)] Suppose $f(t)$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$. A map $H: S_f \rightarrow S_f$ is an $F$-automorphism of $S_f$ if and only if $H$ has the form $H_{\tau,k}$, where $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ are such that \eqref{eqn:automorphism necessary} holds for all $i \in \{ 0, \ldots, m-1 \}$.
\end{itemize} \end{theorem} \begin{proof} Note that the inverse of $H_{\tau,k}$ is $H_{\tau^{-1},\tau^{-1}(k^{-1})}$ and $H_{\tau,k} \circ H_{\rho,b} = H_{\tau \rho, \tau(b)k}$. The rest of the proof follows directly from Theorem \ref{thm:general_isomorphism}. \end{proof} The automorphisms $H_{\tau,k}$ in Theorem \ref{thm:automorphism_of_S_f_division_case} are restrictions of automorphisms $$\Theta: R \rightarrow R, \ \sum_{i=0}^{n} b_i t^i \mapsto \sum_{i=0}^{n} \tau(b_i) (kt)^i,$$ by the proof of Theorem \ref{thm:general_isomorphism}. If $f(t)$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$, then Theorem \ref{thm:automorphism_of_S_f_division_case}(iii) shows that all $F$-automorphisms of $S_f$ are restrictions of automorphisms of $R$. On the other hand, when $\sigma \vert_C$ has order $< m-1$ and $\sigma$ commutes with all $F$-automorphisms of $D$, the automorphisms $H_{\tau,k}$ are restrictions of automorphisms of $R$ and form a subgroup of $\mathrm{Aut}_F(S_f)$. Moreover we have: \begin{proposition} \label{prop:automorphism_of_Sf_division_caseII} Suppose $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order $n < m-1$. Let $H \in \mathrm{Aut}_{F}(S_f)$ and $N = \mathrm{Nuc}_r(S_f)$. Then $H \vert_D = \tau$ for some $\tau \in \mathrm{Aut}_{F}(D)$, $H \vert_N \in \mathrm{Aut}_F(N)$ and $H(t) = g(t)$ with \begin{equation} \label{eqn:H(t) form when ord(sigma) < m-1} g(t) = k_1t + k_{1+n}t^{1+n} + k_{1+2n}t^{1+2n} + \ldots + k_{1+sn} t^{1+sn}, \end{equation} for some $k_{1+ln}\in D$. \end{proposition} \begin{proof} Let $H: S_f \rightarrow S_f$ be an automorphism. Then $H \vert_N \in \mathrm{Aut}_F(N)$ and $H \vert_D = \tau$ for some $\tau \in \mathrm{Aut}_{F}(D)$ by Lemma \ref{lem:automorphism restricts to right nucleus}. Suppose $H(t) = \sum_{i=0}^{m-1} k_i t^i$ for some $k_i \in D$.
Comparing the coefficients of $t^i$ in $H(t \circ z) = H(t) \circ H(z)= H(\sigma(z)t)$ we obtain $k_i = 0$ or $\sigma^{i}(z) = \sigma(z)$, for all $i \in\{ 0, \ldots, m-1\}$ and all $z \in C$. Now, since $\sigma \vert_C$ has order $n < m-1$, $\sigma^i(z) = \sigma(z)$ for all $z \in C$ if and only if $i = 1 + nl$ for some $l \in \mathbb{Z}$. Therefore $k_i = 0$ for every $i \neq 1 + nl$, $l \in \mathbb{N} \cup \{ 0 \}$, $i \in \{ 0, \ldots, m-1 \}$ and hence $H(t)$ has the form \eqref{eqn:H(t) form when ord(sigma) < m-1} for some $s$ with $sn < m-1$. \end{proof} Suppose $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$. Then the automorphism groups of the algebras $S_f$ with $f(t) = t^m-a \in R$ not right invariant are essential to understanding the automorphism groups of all the algebras $S_g$, since for every nonassociative $S_g$ with $g(t) = t^m - \sum_{i=0}^{m-1} b_i t^i \in R$ and $b_0 = a$, $\mathrm{Aut}_F(S_g)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$: \begin{theorem} \label{thm:Aut(S_f) subgroup} Suppose $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$. Let $g(t) = t^m - \sum_{i=0}^{m-1} b_i t^i \in R$ not be right invariant. \begin{itemize} \item[(i)] If $f(t) = t^m - b_0 \in R$ is not right invariant, then $\mathrm{Aut}_{F}(S_g)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \item[(ii)] If $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$ is not right invariant and $a_j \in \{ 0 , b_j \}$ for all $j \in \{ 0, \ldots , m-1 \}$, then $\mathrm{Aut}_{F}(S_g)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \end{itemize} If additionally all automorphisms of $S_g$ are extensions of the identity, i.e. have the form $H_{\mathrm{id},k}$ for some $k \in C^{\times}$, then $\mathrm{Aut}_{F}(S_g)$ is a normal subgroup of $\mathrm{Aut}_{F}(S_f)$.
\end{theorem} \begin{proof} \begin{itemize} \item[(i)] Let $H \in \mathrm{Aut}_{F}(S_g)$, then $H$ has the form $H_{\tau,k}$ where $\tau \in \mathrm{Aut}_{F}(D)$ and $k \in C^{\times}$ satisfy $\tau(b_i) = \big( \prod_{l=i}^{m-1} \sigma^l(k) \big) b_i$ for all $i \in \{ 0, \ldots, m-1 \}$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(iii). In particular, $\tau(b_0) = \Big( \prod_{l=0}^{m-1} \sigma^l(k) \Big) b_0,$ thus $H_{\tau,k}$ is also an automorphism of $S_f$, again by Theorem \ref{thm:automorphism_of_S_f_division_case}(iii). Therefore $\mathrm{Aut}_{F}(S_g) \leq \mathrm{Aut}_{F}(S_f)$. Suppose additionally that all automorphisms of $S_g$ are extensions of the identity. Let $H_{\rho,k} \in \mathrm{Aut}_{F}(S_f)$, $H_{\mathrm{id},z} \in \mathrm{Aut}_{F}(S_g)$ for some $\rho \in \mathrm{Aut}_{F}(D)$ and $k, z \in C^{\times}$. The inverse of $H_{\rho,k}$ is $H_{\rho^{-1},\rho^{-1}(k^{-1})}$, furthermore \begin{align*} H_{\rho,k} \Big( &H_{\mathrm{id},z} \Big( H_{\rho^{-1},\rho^{-1}(k^{-1})} \Big( \sum_{i=0}^{m-1} x_i t^i \Big) \Big) \Big) \\ &= H_{\rho,k} \Big( H_{\mathrm{id},z} \Big( \rho^{-1}(x_0) + \sum_{i=1}^{m-1} \rho^{-1}(x_i) \big( \prod_{l=0}^{i-1} \sigma^l(\rho^{-1}(k^{-1})) \big) t^i \Big) \Big) \\ &= H_{\rho,k} \Big( \rho^{-1}(x_0) + \sum_{i=1}^{m-1} \rho^{-1}(x_i) \big( \prod_{l=0}^{i-1} \sigma^l(\rho^{-1}(k^{-1})) \big) \big( \prod_{l=0}^{i-1} \sigma^l(z) \big) t^i \Big) \\ &= x_0 + \sum_{i=1}^{m-1} x_i \big( \prod_{l=0}^{i-1} \rho(\sigma^l(z)) \big) t^i = x_0 + \sum_{i=1}^{m-1} x_i \big( \prod_{l=0}^{i-1} \sigma^l(\rho(z)) \big) t^i \\ &= H_{\mathrm{id}, \rho(z)} \Big( \sum_{i=0}^{m-1} x_i t^i \Big), \end{align*} because $\sigma$ and $\rho$ commute. Therefore if we show $H_{\mathrm{id}, \rho(z)} \in \mathrm{Aut}_{F}(S_g)$, then indeed $\mathrm{Aut}_{F}(S_g)$ is a normal subgroup of $\mathrm{Aut}_{F}(S_f)$. 
As $H_{\mathrm{id},z} \in \mathrm{Aut}_{F}(S_g)$, Theorem \ref{thm:automorphism_of_S_f_division_case}(iii) implies $\prod_{l=i}^{m-1} \sigma^l(z) = 1$, for all $i \in \{0, \ldots, m-1 \}$ such that $b_i \neq 0$. Applying $\rho$ and using that $\sigma$ and $\rho$ commute, we obtain $$\rho(1) = 1 = \rho \Big( \prod_{l=i}^{m-1} \sigma^l(z) \Big) = \prod_{l=i}^{m-1} \sigma^l(\rho(z)),$$ for all $i \in \{0, \ldots, m-1 \}$ such that $b_i \neq 0$. Thus $$b_i = \Big( \prod_{l=i}^{m-1} \sigma^l(\rho(z)) \Big) b_i,$$ for all $i \in \{ 0,\ldots, m-1 \}$ and hence $H_{\mathrm{id}, \rho(z)} \in \mathrm{Aut}_{F}(S_g)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}. \item[(ii)] The proof is analogous to (i). \end{itemize} \end{proof} \subsection{Cyclic Subgroups of \texorpdfstring{$\mathrm{Aut}_F(S_f)$}{Aut(S\_f)}} We now give some conditions for $\mathrm{Aut}_F(S_f)$ to have cyclic subgroups of certain order. In the special case when the coefficients of $f(t)$ are contained in $F$ we obtain the following: \begin{proposition} \label{prop:Aut(S_f) Coefficients in F 2} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in F[t;\sigma] \subseteq R$. \begin{itemize} \item[(i)] If $\sigma$ has finite order $n$, then $\langle H_{\sigma,1} \rangle \cong \mathbb{Z}/n \mathbb{Z}$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \item[(ii)] Suppose $D = K$ is a cyclic Galois field extension of $F$ of prime degree $m$, $\mathrm{Gal}(K/F) = \langle \sigma \rangle$, $a_0 \neq 0$ and not all $a_1, \ldots, a_{m-1}$ are zero. Then $\mathrm{Aut}_{F}(S_f) = \langle H_{\sigma,1} \rangle \cong \mathbb{Z}/m \mathbb{Z}$. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] Since $a_i \in F$, we have $\sigma(a_i) = a_i = \Big( \prod_{l=i}^{m-1} \sigma^l(1) \Big) a_i,$ for all $i \in \{ 0, \ldots, m-1 \}$ and so $H_{\sigma,1} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(ii). 
Furthermore, $H_{\sigma^j,1} \circ H_{\sigma^l,1} = H_{\sigma^{l+j},1}$ and $H_{\sigma^n,1} = H_{\mathrm{id},1}$, hence $\langle H_{\sigma,1} \rangle = \{ H_{\mathrm{id},1}, H_{\sigma,1}, \ldots, H_{\sigma^{n-1}, 1} \} \cong \mathbb{Z}/n \mathbb{Z}$ is a cyclic subgroup of order $n$. \item[(ii)] The automorphisms of $S_f$ are exactly the maps $H_{\sigma^j,k}$ for some $j \in \{ 0, \ldots, m-1 \}$ and $k \in K^{\times}$ such that \begin{equation} \label{eqn:Aut(S_f) Coefficients in F, field caseIII} \sigma^j(a_i) = a_i = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) a_i, \end{equation} for all $i \in \{ 0, \ldots, m-1 \}$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(iii). The maps $H_{\sigma^j,1}$ are therefore automorphisms of $S_f$ for all $j \in \{ 0, \ldots, m-1 \}$. We show that these are all the automorphisms of $S_f$: We have $N_{K/F}(k) = 1$ because $a_0 \neq 0$, hence by Hilbert's Theorem 90, there exists $\alpha \in K$ such that $k = \sigma(\alpha)/\alpha$. Let $q \in \{ 1, \ldots, m-1 \}$ be such that $a_q \neq 0$, then $$1 = \prod_{l=q}^{m-1} \sigma^l(k) = \prod_{l=q}^{m-1} \sigma^l \big( \frac{\sigma(\alpha)}{\alpha} \big) = \frac{\prod_{l=q+1}^{m} \sigma^l(\alpha)}{\prod_{l=q}^{m-1} \sigma^l(\alpha)} = \frac{\alpha}{\sigma^q(\alpha)},$$ by \eqref{eqn:Aut(S_f) Coefficients in F, field caseIII}. This means $\alpha \in \mathrm{Fix}(\sigma^q) = F$ as $m$ is prime. Therefore $k = \sigma(\alpha)/\alpha = \alpha/\alpha = 1$ as required. \end{itemize} \end{proof} If $F$ contains a primitive $n^{\text{th}}$ root of unity for some $n \geq 2$, then there exist algebras $S_f$ whose automorphism groups contain a cyclic subgroup of order $n$: \begin{theorem} \label{thm:Primitive root then subgroup of order m} Let $f(t) = t^m - a \in R$. If $n \vert m$ and $F$ contains a primitive $n^{\text{th}}$ root of unity $\omega$, then $\mathrm{Aut}_{F}(S_f)$ contains a cyclic subgroup of order $n$ generated by $H_{\mathrm{id},\omega}$. 
\end{theorem} \begin{proof} We have $\omega \sigma(\omega) \cdots \sigma^{m-1}(\omega) = \omega^{m} = 1$, since $\omega \in F$ is fixed by $\sigma$ and $n \vert m$, and thus $H_{\mathrm{id},\omega} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(i). Notice $H_{\mathrm{id},\omega^j}$ is not the identity automorphism for any $j \in \{ 1, \ldots, n-1 \}$, while $H_{\mathrm{id}, \omega^n} = H_{\mathrm{id},1}$ is the identity. Furthermore, \begin{align*} H_{\mathrm{id},\omega^j} \Big( H_{\mathrm{id},\omega^l} \Big( \sum_{i=0}^{m-1} x_i t^i \Big) \Big) &= H_{\mathrm{id},\omega^j} \Big( \sum_{i=0}^{m-1} x_i \omega^{li} t^i \Big) = \sum_{i=0}^{m-1} x_i \omega^{li} \omega^{ji} t^i \\ &= H_{\mathrm{id},\omega^{j+l}} \Big( \sum_{i=0}^{m-1} x_i t^i \Big), \end{align*} thus $H_{\mathrm{id},\omega^j} \circ H_{\mathrm{id},\omega^l} = H_{\mathrm{id},\omega^{j+l}}$ for all $j, l \in \{ 0, \ldots, n-1 \}$. This implies $\langle H_{\mathrm{id},\omega} \rangle = \{ H_{\mathrm{id},1}, H_{\mathrm{id},\omega}, \ldots, H_{\mathrm{id},\omega^{n-1}} \}$ is a cyclic subgroup of order $n$. \end{proof} \begin{proposition} \label{prop:primitive root of unity t^ml - sum a_im t^mi} Suppose $F$ contains a primitive $n^{\text{th}}$ root of unity $\omega$ and let $f(t) = t^{nm} - \sum_{i=0}^{m-1} a_{in} t^{in} \in R$. Then $\mathrm{Aut}_{F}(S_f)$ contains a cyclic subgroup of order $n$ generated by $H_{\mathrm{id},\omega}$. \end{proposition} \begin{proof} We have $$\Big( \prod_{l=in}^{nm-1} \sigma^l(\omega) \Big) a_{in} = \omega^{mn-in} a_{in} = a_{in},$$ for all $i \in \{ 0, \ldots, m-1 \}$, since $n \vert (mn-in)$, which implies $H_{\mathrm{id},\omega} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(i). The rest of the proof is similar to that of Theorem \ref{thm:Primitive root then subgroup of order m}. \end{proof} $F$ contains a primitive $2^{\text{nd}}$ root of unity whenever $\mathrm{Char}(F) \neq 2$, namely $-1$.
Therefore setting $n = 2$ in Proposition \ref{prop:primitive root of unity t^ml - sum a_im t^mi} yields: \begin{corollary} If $\mathrm{Char}(F) \neq 2$ and $f(t) = t^{2m} - \sum_{i=0}^{m-1} a_{2i} t^{2i} \in R$, then $\{ H_{\mathrm{id},1}, H_{\mathrm{id},-1} \}$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$ of order $2$. \end{corollary} \subsection{Inner Automorphisms} In this Subsection we consider the case where $\sigma \vert_C$ has finite order $m$, and look at the inner automorphisms of $S_f$. \begin{proposition} \label{prop:automorphisms sigma vertC order m} Suppose $\sigma \vert_C$ has finite order $m$ and $f(t) = t^m - a \in R$. Then the maps $$G_c: S_f \rightarrow S_f, \ \sum_{i=0}^{m-1} x_i t^i \mapsto \sum_{i=0}^{m-1} x_i c^{-1} \sigma^i(c) t^i,$$ are inner automorphisms for all $c \in C^{\times}$. Furthermore, $\{ G_c \ \vert \ c \in C^{\times} \}$ is a non-trivial subgroup of $\mathrm{Aut}_F(S_f)$. \end{proposition} \begin{proof} Let $c \in C^{\times}$; then $\prod_{l=0}^{i-1} \sigma^l \big( c^{-1}\sigma(c) \big) = c^{-1} \sigma^i(c)$ for all $i \geq 1$, thus $G_c = H_{\mathrm{id},k}$ where $k = c^{-1} \sigma(c)$. Moreover, we have $$\prod_{l=0}^{m-1} \sigma^l(c^{-1}\sigma(c)) = c^{-1} \sigma^m(c) = c^{-1}c = 1,$$ and hence $G_c = H_{\mathrm{id}, k} \in \mathrm{Aut}_F(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(i). A simple calculation shows $G_c \Big( \sum_{i=0}^{m-1} x_i t^i \Big) = \Big( c^{-1} \sum_{i=0}^{m-1} x_i t^i \Big) c,$ and so $G_c$ is an inner automorphism for every $c \in C^{\times}$. A straightforward calculation shows $G_c \circ G_d = G_{cd}$ for all $c,d \in C^{\times}$, therefore $\{ G_c \ \vert \ c \in C^{\times} \}$ is closed under composition. Additionally, we have $G_1 = \mathrm{id}_{S_f}$ and $G_c \circ G_{c^{-1}} = G_1$, hence $\{ G_c \ \vert \ c \in C^{\times} \}$ forms a subgroup of $\mathrm{Aut}_F(S_f)$. Finally, $G_c$ is not the identity for any $c \in C \setminus F$, which yields the assertion.
\end{proof} \begin{example} We use the same set-up as in Hanke \cite[p.~200]{hanke2005twisted}: Let $K = \mathbb{Q}(\alpha)$ where $\alpha$ is a root of $x^3 + x^2 - 2x - 1 \in \mathbb{Q}[x]$. Then $K / \mathbb{Q}$ is a cyclic Galois field extension of degree $3$, and its Galois group is generated by $\tilde{\sigma}: \alpha \mapsto \alpha^2 - \alpha + 1$. Let $L = K(\beta)$ where $\beta$ is a root of $x^3 + (\alpha-2)x^2 - (\alpha+1)x + 1 \in K[x]$. Then $L/K$ is a cyclic Galois field extension of degree $3$ and there is $\tau \in \mathrm{Gal}(L/K)$ with $\tau(\beta) = \beta^2 + (\alpha-2)\beta - \alpha$. Define $\pi = \alpha^2 + 2\alpha - 1 \in K$; then $D = (L/K,\tau,2\pi)$ is an associative cyclic division algebra of degree $3$ and $\tilde{\sigma}$ extends to an automorphism $\sigma$ of $D$. Suppose $f(t) = t^3 - a \in D[t;\sigma]$ and note that $\mathrm{Cent}(D) = K$ and $\sigma \vert_K = \tilde{\sigma}$ has order $3$. Therefore $\{ G_c \ \vert \ c \in K^{\times} \}$ is a non-trivial subgroup of $\mathrm{Aut}_F(S_f)$ consisting of inner automorphisms by Proposition \ref{prop:automorphisms sigma vertC order m}. \end{example} \begin{corollary} \label{cor:G_c subgroup order j cyclic} Suppose $\sigma \vert_C$ has finite order $m$ and $f(t) = t^m - a \in R$. Let $c \in C^{\times}$ and suppose there exists $j \in \mathbb{N}$ such that $c^j \in F$; choose $j$ minimal. Then $\langle G_c \rangle \cong \mathbb{Z}/j \mathbb{Z}$ is a cyclic subgroup of $\mathrm{Aut}_{F}(S_f)$ consisting of inner automorphisms. \end{corollary} \begin{proof} The maps $G_c, G_{c^2}, \ldots$ are all automorphisms of $S_f$ by Proposition \ref{prop:automorphisms sigma vertC order m}; furthermore, a straightforward calculation shows $G_{c^i} \circ G_{c^l} = G_{c^{i+l}}$ for all $i, l \in \mathbb{N}$.
Notice $G_c^j = G_{c^j}$ is the identity automorphism if and only if $c^j \in F$; by the minimality of $j$ we conclude $\langle G_c \rangle = \{ G_c, G_{c^2}, \ldots, G_{c^{j-1}}, \mathrm{id} \}$ is a cyclic subgroup of order $j$. \end{proof} If $D$ is finite-dimensional over $C$, then since $\sigma \vert_C$ has finite order $m$, $\sigma$ has inner order $m$ by the Skolem-Noether Theorem. That is, $\sigma^m$ is an inner automorphism $I_u: x \mapsto u^{-1}xu$ for some $u \in D^{\times}$, where we can choose $u \in D^{\times}$ such that $\sigma(u) = u$ \cite[Theorem 1.1.22]{jacobson1996finite}. If $f(t) = \sum_{j=0}^n a_j u^{n-j} t^{jm} \in D[t;\sigma]$ with $a_n =1$ and $a_j \in C$, then $f(t)$ is right semi-invariant by Theorem \ref{thm:right semi invariant conditions}. Therefore, as a direct consequence of Corollary \ref{cor:inner automorphisms of S_f, D[t;sigma,delta]}, we obtain: \begin{corollary} Suppose $\sigma$ and $m$ are as above, and let $f(t) = \sum_{j=0}^n a_j u^{n-j} t^{jm} \in R$ where $a_n =1$ and $a_j \in C$. Then the maps $$G_c : S_f \rightarrow S_f, \ \sum_{i=0}^{mn-1} x_i t^i \mapsto \sum_{i=0}^{mn-1} c^{-1} x_i \sigma^i(c) t^i,$$ are inner automorphisms for all $c \in D^{\times}$. \end{corollary} \subsection{Necessary Conditions for \texorpdfstring{$H_{\tau,k} \in \mathrm{Aut}_F(S_f)$}{H\_\{tau,k\} to be an Automorphism of S\_f}} We next take a closer look at the equality \eqref{eqn:automorphism necessary} to obtain necessary conditions for $\tau \in \mathrm{Aut}_F(D)$ to extend to $H_{\tau,k} \in \mathrm{Aut}_F(S_f)$. The more non-zero coefficients $f(t)$ has, the more restrictive these conditions become. \begin{proposition} \label{prop:Conditions for automorphism group trivial} Let $\sigma, \tau \in \mathrm{Aut}_F(D)$ and $k \in C^{\times}$ be such that \eqref{eqn:automorphism necessary} holds for all $i \in \{ 0, \ldots, m-1 \}$, i.e. $\tau(a_i) = \big( \prod_{l=i}^{m-1} \sigma^l(k) \big) a_i$ for all $i \in \{ 0, \ldots, m-1 \}$.
\begin{itemize} \item[(i)] If $a_{m-1} \neq 0$ then $$\tau(a_i) = \big( \prod_{l=i}^{m-1} \sigma^{l-m+1} \big( \tau(a_{m-1})a_{m-1}^{-1} \big) \big) a_i$$ for all $i \in \{ 0, \ldots, m-1 \}$. \item[(ii)] If two consecutive $a_s, a_{s+1} \in \mathrm{Fix}(\tau)^{\times}$ then $k = 1$. \item[(iii)] If $a_{m-1} \in \mathrm{Fix}(\tau)^{\times}$ then $k=1$. \item[(iv)] If there is $i \in \{ 0, \ldots, m-1 \}$ such that $a_i \in \mathrm{Fix}(\tau)^{\times}$ then $1 = \prod_{l=i}^{m-1} \sigma^l(k).$ \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] Since $a_{m-1} \neq 0$, \eqref{eqn:automorphism necessary} implies $\tau(a_{m-1}) = \sigma^{m-1}(k) a_{m-1}$, hence $k = \sigma^{-m+1} \big( \tau(a_{m-1}) a_{m-1}^{-1} \big)$. Substituting this back into \eqref{eqn:automorphism necessary} yields the assertion. \item[(ii)] If there are two consecutive $a_s, a_{s+1} \in \mathrm{Fix}(\tau)^{\times}$, then by \eqref{eqn:automorphism necessary} we conclude $\prod_{l=s}^{m-1} \sigma^l(k) = 1 = \prod_{l=s+1}^{m-1} \sigma^l(k)$, thus cancelling gives $\sigma^s(k) = 1$, i.e. $k = 1$. The proofs of (iii) and (iv) are similar to \cite[Proposition 9]{brownautomorphism2017}, but we need not assume $D$ is a field: \item[(iii)] Since $a_{m-1} \in \mathrm{Fix}(\tau)^{\times}$, \eqref{eqn:automorphism necessary} yields $\tau(a_{m-1}) = a_{m-1} = \sigma^{m-1}(k) a_{m-1}$, thus $\sigma^{m-1}(k) = 1$ and so $k = 1$. \item[(iv)] We have $\tau(a_i) = a_i = \prod_{l=i}^{m-1} \sigma^l(k) a_i$ by \eqref{eqn:automorphism necessary}, hence $1 = \prod_{l=i}^{m-1} \sigma^l(k)$. \end{itemize} \end{proof} The condition \eqref{eqn:automorphism necessary} heavily restricts the choice of available $k$ to $k=1$ in many cases.
Therefore in many instances we conclude $\mathrm{Aut}_F(S_f)$ is isomorphic to a subgroup of $\mathrm{Aut}_F(D)$ or is trivial: \begin{corollary} Suppose $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma$ has order at least $m-1$. Suppose also that one of the following holds: \begin{itemize} \item[(i)] $a_{m-1} \neq 0$ and there exists $j \in \{ 0, \ldots, m-2 \}$ such that $$\tau(a_j) \neq \big( \prod_{l=j}^{m-1} \sigma^{l-m+1} \big( \tau(a_{m-1})a_{m-1}^{-1} \big) \big) a_j,$$ for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_F(D)$. \item[(ii)] $a_{m-1} \in F^{\times}$ and for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_F(D)$ there exists $j \in \{ 0, \ldots, m-2 \}$ such that $a_j \notin \mathrm{Fix}(\tau)$. \item[(iii)] There are two consecutive $a_s, a_{s+1} \in F^{\times}$ and for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_F(D)$ there exists $j \in \{ 0, \ldots, m-1 \}$ such that $a_j \notin \mathrm{Fix}(\tau)$. \end{itemize} Then $\mathrm{Aut}_F(S_f)$ is trivial. \end{corollary} \begin{proof} Suppose $H \in \mathrm{Aut}_F(S_f)$, then $H = H_{\tau,k}$ for some $\tau \in \mathrm{Aut}_F(D)$ and $k \in C^{\times}$ satisfying \eqref{eqn:automorphism necessary} by Theorem \ref{thm:automorphism_of_S_f_division_case}. \begin{itemize} \item[(i)] We have $\tau = \mathrm{id}$ by Proposition \ref{prop:Conditions for automorphism group trivial}(i), therefore \eqref{eqn:automorphism necessary} implies $a_{m-1} = \sigma^{m-1}(k) a_{m-1}$. This means $k = 1$ and $H = H_{\mathrm{id},1}$ is trivial. \item[(ii)] We have $k = 1$ by Proposition \ref{prop:Conditions for automorphism group trivial}(iii), therefore \eqref{eqn:automorphism necessary} implies $\tau(a_i) = a_i$ for all $i \in \{ 0,\ldots, m-1 \}$ and thus $\tau = \mathrm{id}$. Therefore $H = H_{\mathrm{id},1}$ and $\mathrm{Aut}_F(S_f)$ is trivial. 
\item[(iii)] Proposition \ref{prop:Conditions for automorphism group trivial}(ii) yields $k=1$, therefore by \eqref{eqn:automorphism necessary} we have $\tau(a_i) = a_i$ for all $i \in \{ 0,\ldots, m-1 \}$, hence $\tau = \mathrm{id}$ and $H = H_{\mathrm{id},1}$. \end{itemize} \end{proof} Denote by $\mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma)$ the centralizer of $\sigma$ in $\mathrm{Aut}_{F}(D)$. If the coefficients of $f(t)$ are all in $F$, we have: \begin{proposition} \label{prop:Aut(S_f) Coefficients in F} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in F[t] = F[t;\sigma] \subset R$. \begin{itemize} \item[(i)] $\{ H_{\tau,1} \ \vert \ \tau \in \mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma) \} \cong \mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \item[(ii)] Suppose $f(t)$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$, $a_{m-1} \neq 0$ and $\sigma \vert_C$ has order at least $m-1$. Then $$\mathrm{Aut}_{F}(S_f) = \{ H_{\tau,1} \ \vert \ \tau \in \mathrm{Aut}_{F}(D) \} \cong \mathrm{Aut}_{F}(D).$$ \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] We have $\tau(a_i) = a_i = \Big( \prod_{l=i}^{m-1} \sigma^l(1) \Big) a_i$, for all $i \in \{ 0, \ldots, m-1 \}$, $\tau \in \mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma)$, therefore $\{ H_{\tau,1} \ \vert \ \tau \in \mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma) \}$ is a subset of $\mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(ii). Furthermore, $H_{\tau,1} \circ H_{\rho,1} = H_{\tau \rho,1}$ for all $\tau, \rho \in \mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma)$, hence $\{ H_{\tau,1} \ \vert \ \tau \in \mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma) \}$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$ because $\mathrm{Cent}_{\mathrm{Aut}(D)}(\sigma)$ is a group. 
\item[(ii)] In this case we prove the subgroup in (i) is all of $\mathrm{Aut}_F(S_f)$: Let $H \in \mathrm{Aut}_{F}(S_f)$, then $H$ has the form $H_{\tau,k}$ for some $\tau \in \mathrm{Aut}_{F}(D)$, $k \in C^{\times}$ such that $\tau(a_i) = a_i = \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) a_i$, for all $i \in \{ 0, \ldots, m-1 \}$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(iii). In particular, $a_{m-1} = \sigma^{m-1}(k) a_{m-1}$ which implies $k = 1$ since $a_{m-1}\neq 0$. Thus $H = H_{\tau,1}$ as required. \end{itemize} \end{proof} Now suppose $C/F$ is a proper field extension of finite degree, and $D$ is finite-dimensional as an algebra over $C$ and over $F$. Let $N_{D/C}$ denote the norm of $D$ considered as an algebra over $C$, $N_{D/F}$ denote the norm of $D$ considered as an algebra over $F$, and $N_{C/F}$ denote the norm of the field extension $C/F$. Recall $N_{D/F}(z) = N_{C/F}(N_{D/C}(z))$ for all $z \in D$ \cite[\S 7.4]{jacobson1985basic} and $N_{D/F}(\tau(z)) = N_{D/F}(z)$ for all $\tau \in \mathrm{Aut}_{F}(D)$ and all $z \in D$ \cite[p.~547]{magurn2002algebraic}. Applying $N_{D/C}$ to \eqref{eqn:automorphism necessary} yields a necessary condition for $H_{\tau,k}$ to be an automorphism of $S_f$: \begin{proposition} \label{prop:automorphism division norm argument} Suppose $\sigma$ commutes with all $F$-automorphisms of $D$, $\sigma \vert_C$ has order at least $m - 1$ and $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in D[t;\sigma]$ is not right invariant. If $H_{\tau,k} \in \mathrm{Aut}_{F}(S_f)$, then $N_{C/F}(k)$ is a $[D:C](m-i)$th root of unity for all $i \in \{ 0, \ldots, m-1 \}$ such that $a_i \neq 0$. In particular, if $D$ is commutative and $a_0 \neq 0$ then $N_{C/F}(k)$ is an $m$th root of unity. 
\end{proposition} \begin{proof} Applying $N_{D/F}$ to \eqref{eqn:automorphism necessary} we obtain \begin{equation*} \begin{split} N_{D/F}&(\tau(a_i)) = N_{D/F}(a_i) = N_{D/F} \Big( \Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) a_i \Big) \\ &= N_{D/F}(k)^{m-i} N_{D/F}(a_i) = N_{C/F}(k)^{[D:C](m-i)} N_{D/F}(a_i), \end{split} \end{equation*} for all $i \in \{ 0, \ldots, m-1 \}$ since $N_{D/F}(k) = N_{C/F}(N_{D/C}(k)) = N_{C/F}(k)^{[D:C]}$ for all $k \in C^{\times}$. This yields $N_{C/F}(k)^{[D:C](m-i)} = 1$ or $a_i = 0$ for every $i \in \{ 0, \ldots, m-1 \}$. \end{proof} \subsection{Connections Between Automorphisms of \texorpdfstring{$S_f$}{S\_f}, \texorpdfstring{$f(t) = t^m-a \in D[t;\sigma]$}{f(t) = t\^{}m - a in D[t;sigma]}, and Factors of Skew Polynomials} Suppose $D$ is an associative division ring with center $C$, $\sigma$ is a non-trivial ring automorphism of $D$ and $f(t) = t^m-a \in R = D[t;\sigma]$. We compare Theorem \ref{thm:automorphism_of_S_f_division_case} and Proposition \ref{prop:rightdivdegree1} to obtain connections between the automorphisms of $S_f$ and factors of certain skew polynomials: \begin{proposition} \label{prop:automorphisms right divisors of t^m-tau(a)a^-1} \begin{itemize} \item[(i)] Suppose $\tau \in \mathrm{Aut}_{F}(D)$ commutes with $\sigma$ and $k \in C^{\times}$, then $H_{\tau,k} \in \mathrm{Aut}_{F}(S_f)$ if $(t-k) \vert_r (t^m - \tau(a) a^{-1})$. \item[(ii)] Suppose $(t-k) \vert_r (t^m-1)$ for some $k \in C^{\times}$, then $H_{\mathrm{id},k} \in \mathrm{Aut}_{F}(S_f)$. \item[(iii)] Suppose $f(t)$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma \vert_C$ has order at least $m-1$. If $k \in C^{\times}$ and $\tau \in \mathrm{Aut}_{F}(D)$, then $H_{\tau,k} \in \mathrm{Aut}_{F}(S_f)$ if and only if $(t-k) \vert_r (t^m - \tau(a)a^{-1})$. In particular, $H_{\mathrm{id},k} \in \mathrm{Aut}_{F}(S_f)$ if and only if $(t-k) \vert_r (t^m - 1)$. 
\end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] If $(t-k) \vert_r (t^m - \tau(a) a^{-1})$ then $\prod_{l=0}^{m-1} \sigma^l(k) = \tau(a)a^{-1}$ by Proposition \ref{prop:rightdivdegree1}, thus $H_{\tau,k} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(i). \item[(ii)] follows by setting $\tau = \mathrm{id}$ in (i). \item[(iii)] If $(t-k) \vert_r (t^m - \tau(a) a^{-1})$ then $H_{\tau,k} \in \mathrm{Aut}_{F}(S_f)$ by (i). Conversely if $H_{\tau,k} \in \mathrm{Aut}_{F}(S_f)$, then $\tau(a) = \big( \prod_{l=0}^{m-1} \sigma^l(k) \big) a$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(iii), and thus $(t-k) \vert_r (t^m - \tau(a) a^{-1})$ by Proposition \ref{prop:rightdivdegree1}. \end{itemize} \end{proof} When $D$ is commutative, Proposition \ref{prop:automorphisms right divisors of t^m-tau(a)a^-1} shows there is an injective map between the monic linear right divisors of $t^m-1$ in $R$ and the automorphisms of $S_f$ of the form $H_{\mathrm{id},k}$. If additionally $f(t)$ is not right invariant, $\sigma$ commutes with all $F$-automorphisms of $D$ and $\sigma$ has order at least $m-1$, then there is a bijection between the monic linear right divisors of $t^m-1$ in $R$ and the automorphisms of $S_f$ of the form $H_{\mathrm{id},k}$. \begin{corollary} \label{cor:H_id,k automorphism link to right divisors} Suppose $\sigma$ commutes with all $F$-automorphisms of $D$, $f(t)$ is not right invariant, and $\sigma \vert_C$ has order at least $m-1$. \begin{itemize} \item[(i)] If all $F$-automorphisms of $S_f$ have the form $H_{\mathrm{id},k}$ for some $k \in C^{\times}$, then $(t-b) \nmid_r (t^m-\tau(a)a^{-1}),$ for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$, $b \in C^{\times}$. In addition, if $D$ is commutative, $m$ is prime and $\mathrm{Fix}(\sigma)$ contains a primitive $m^{\text{th}}$ root of unity, then $t^m - \tau(a)a^{-1} \in R$ is irreducible for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$. 
\item[(ii)] If $t^m - \tau(a)a^{-1} \in R$ is irreducible for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$, then $$\mathrm{Aut}_{F}(S_f) = \Big\{ H_{\mathrm{id},k} \ \vert \ k \in C^{\times} \text{ such that } \prod_{l=0}^{m-1} \sigma^l(k) = 1 \Big\} .$$ \end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] The first assertion follows immediately from Proposition \ref{prop:automorphisms right divisors of t^m-tau(a)a^-1}. If $D$ is commutative this means $(t-b) \nmid_r (t^m-\tau(a)a^{-1})$ for all $b \in D^{\times}$ and all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$. If also $\mathrm{Fix}(\sigma)$ contains a primitive $m^{\text{th}}$ root of unity, then $t^m-\tau(a)a^{-1} \in R$ is irreducible for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$ by Theorem \ref{thm:Petit(19)}. \item[(ii)] Suppose $t^m - \tau(a)a^{-1} \in R$ is irreducible for all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$, then in particular, $(t-b) \nmid_r (t^m-\tau(a)a^{-1})$ for all $b \in C^{\times}$, $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$. Therefore $H_{\tau,b} \notin \mathrm{Aut}_{F}(S_f)$ for all $b \in C^{\times}$, and all $\mathrm{id} \neq \tau \in \mathrm{Aut}_{F}(D)$ by Proposition \ref{prop:automorphisms right divisors of t^m-tau(a)a^-1} and so $$\mathrm{Aut}_{F}(S_f) = \Big\{ H_{\mathrm{id},k} \ \vert \ k \in C^{\times} \text{ such that } \prod_{l=0}^{m-1} \sigma^l(k) = 1 \Big\}$$ by Theorem \ref{thm:automorphism_of_S_f_division_case}. \end{itemize} \end{proof} \section[Automorphisms of some Jha-Johnson Semifields]{Automorphisms of Jha-Johnson Semifields Obtained from Skew Polynomial Rings} \label{section:Automorphisms of Jha-Johnson Semifields Obtained from Skew Polynomial Rings} In this Section we study the automorphism groups of the Jha-Johnson semifields which arise from Petit's algebra construction. 
Let $K = \mathbb{F}_{p^h}$ be a finite field of order $p^h$ for some prime $p$, and let $\sigma$ be a non-trivial $\mathbb{F}_p$-automorphism of $K$, i.e. a power $\sigma: K \rightarrow K, \ k \mapsto k^{p^r}$, $r \in \{ 1, \ldots, h-1 \}$, of the Frobenius automorphism. Notice that $F = \mathrm{Fix}(\sigma) \cong \mathbb{F}_{q}$ where $q = p^{\mathrm{gcd}(r,h)}$, $\sigma$ has order $n = h/ \mathrm{gcd}(r,h)$, and $\sigma$ commutes with all $\mathbb{F}_q$-automorphisms of $K$. Suppose $f(t) \in R = K[t;\sigma]$ is monic and irreducible, so that $S_f$ is a Jha-Johnson semifield \cite[Theorem 15]{lavrauw2013semifields} (see Theorem \ref{thm:Jha-Johnson_is_S_f}). Recall that when $f(t) = t^m - a \in K[t;\sigma]$ is irreducible with $a \in K \setminus F$ and $n \geq m$, $S_f$ is called a Sandler semifield \cite{sandler1962autotopism}. The automorphism groups of Sandler semifields are particularly relevant: for every Jha-Johnson semifield $S_g$ with $g(t) = t^m - \sum_{i=0}^{m-1} b_i t^i \in K[t;\sigma]$ irreducible and $b_0 = a$, $\mathrm{Aut}_{F}(S_g)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:Aut(S_f) subgroup}. We study the automorphisms of Sandler semifields in Section \ref{section:Automorphisms of Sandler Semifields}. When $m = n$, Sandler semifields are precisely the nonassociative cyclic algebras over $F$, and their automorphism groups are studied in detail in Chapter \ref{chapter:Automorphisms of Nonassociative Cyclic Algebras}. We remark that the results in this Section hold more generally when $f(t) \in R$ is not necessarily irreducible; in this case $S_f$ is still a nonassociative algebra over $F$, but it is not a semifield unless $f(t)$ is irreducible (Theorem \ref{thm:S_f_division_iff_irreducible}).
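To fix ideas, we record a small concrete instance of this set-up (the parameters are chosen purely for illustration):

\begin{example}
Take $p = 2$, $h = 4$ and $r = 1$, so $K = \mathbb{F}_{16}$, $\sigma: k \mapsto k^2$ is the Frobenius automorphism of order $n = 4$, and $F = \mathrm{Fix}(\sigma) = \mathbb{F}_2$. For any $m \leq 4$ and any irreducible $f(t) = t^m - a \in K[t;\sigma]$ with $a \in K \setminus \mathbb{F}_2$, the algebra $S_f$ is then a Sandler semifield of order $16^m$, since $S_f$ has dimension $m$ as a left $K$-vector space.
\end{example}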
Theorem \ref{thm:automorphism_of_S_f_division_case} becomes: \begin{theorem} \label{thm:automorphism_of_S_f finite field case} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in K[t;\sigma]$ and define $s_i = (p^{rm}-p^{ri})/(p^r-1).$ \begin{itemize} \item[(i)] Suppose $k \in K^{\times}$ is an $s_i$th root of unity for all $i \in \{ 0, \ldots, m - 1 \}$ with $a_i \neq 0$. Then $H_{\mathrm{id},k} \in \mathrm{Aut}_{F}(S_f)$. \item[(ii)] Suppose $k \in K^{\times}$ and $j \in \{ 0, \ldots, n-1 \}$ are such that $\sigma^j(a_i) = k^{s_i} a_i$ for all $i \in \{ 0 , \ldots, m-1 \}$; then $H_{\sigma^j,k} \in \mathrm{Aut}_{F}(S_f)$. \item[(iii)] Suppose $f(t)$ is not right invariant and $n \geq m-1$. Then $H \in \mathrm{Aut}_{F}(S_f)$ if and only if $H = H_{\sigma^j,k}$ for some $k \in K^{\times}$, $j \in \{ 0, \ldots, n-1 \}$ such that $\sigma^j(a_i) = k^{s_i} a_i$ for all $i \in \{ 0 , \ldots, m-1 \}$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] We have $$\Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) a_i = \Big( \prod_{l=i}^{m-1} k^{p^{rl}} \Big) a_i = k^{s_i} a_i = a_i,$$ for all $i \in \{ 0, \ldots, m-1 \}$ and thus $H_{\mathrm{id},k} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(i). \item[(ii)] Similarly to (i) we have $$\Big( \prod_{l=i}^{m-1} \sigma^l(k) \Big) a_i = \Big( \prod_{l=i}^{m-1} k^{p^{rl}} \Big) a_i = k^{s_i} a_i = \sigma^j(a_i)$$ for all $i \in \{ 0, \ldots, m-1 \}$, and thus $H_{\sigma^j,k} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(ii). \item[(iii)] follows by (ii) and Theorem \ref{thm:automorphism_of_S_f_division_case}(iii). \end{itemize} \end{proof} As a direct consequence of Theorem \ref{thm:Aut(S_f) subgroup}, we have: \begin{corollary} Suppose $n \geq m-1$ and that $g(t) = t^m - \sum_{i=0}^{m-1} b_i t^i \in R$ is not right invariant.
\begin{itemize} \item[(i)] If $f(t) = t^m - b_0 \in R$ is not right invariant, then $\mathrm{Aut}_{F}(S_g)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \item[(ii)] If $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R$ is not right invariant and $a_j \in \{ 0 , b_j \}$ for all $j \in \{ 0, \ldots , m-1 \}$, then $\mathrm{Aut}_{F}(S_g)$ is a subgroup of $\mathrm{Aut}_{F}(S_f)$. \end{itemize} \end{corollary} Proposition \ref{prop:Aut(S_f) Coefficients in F 2} becomes: \begin{corollary} Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in F[t;\sigma] \subseteq K[t;\sigma]$. Then $\langle H_{\sigma,1} \rangle \cong \mathbb{Z}/n \mathbb{Z}$ is a cyclic subgroup of $\mathrm{Aut}_{F}(S_f)$. \end{corollary} The Hughes-Kleinfeld semifields can be written in the form $S_f$ for irreducible $f(t) \in K[t;\sigma]$ of degree $2$ by Theorem \ref{thm:S_f as Hughes Kleinfeld and Knuth Semifield}. Setting $m=2$ in Theorem \ref{thm:automorphism_of_S_f finite field case} gives a description of the automorphisms of Hughes-Kleinfeld semifields. In particular, we obtain \cite[Proposition 8.3.1]{AndrewPhD} and \cite[Corollary 8.3.6]{AndrewPhD} as straightforward corollaries of Theorem \ref{thm:automorphism_of_S_f_division_case}(iii). \subsection{Automorphisms of Sandler Semifields} \label{section:Automorphisms of Sandler Semifields} In this Subsection we study the automorphism group of $S_f$ when $f(t) = t^m-a \in K[t;\sigma]$. In particular, when $a \in K \setminus F$ and $\sigma$ has order $n \geq m$, this yields results on the automorphisms of Sandler semifields. Let $$s = \frac{p^{rm}-1}{p^r-1}.$$ Theorem \ref{thm:automorphism_of_S_f finite field case} implies: \begin{corollary} \label{cor:automorphisms of Sandler semifields} Let $f(t) = t^m - a \in K[t;\sigma]$. \begin{itemize} \item[(i)] The set $\{ H_{\mathrm{id},k} \ \vert \ k \in K, \ k^s=1 \}$ is a cyclic subgroup of $\mathrm{Aut}_F(S_f)$ of order $\mathrm{gcd}(s, p^h-1)$.
\item[(ii)] If $p \equiv 1 \ \mathrm{mod} \ m$, then there are at least $m$ automorphisms of $S_f$ of the form $H_{\mathrm{id},k}$. In particular, $\mathrm{Aut}_{F}(S_f)$ is not trivial. \item[(iii)] Suppose $h$ is even and at least one of $r, m$ is even. If $p \equiv -1 \ \mathrm{mod} \ m$, then there are at least $m$ automorphisms of $S_f$ of the form $H_{\mathrm{id},k}$. In particular, $\mathrm{Aut}_{F}(S_f)$ is not trivial. \item[(iv)] Suppose $p$ is an odd prime and $m = 2$. Then $\mathrm{Aut}_{F}(S_f)$ is not trivial. \end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] If $k \in K^{\times}$ is an $s^{\text{th}}$ root of unity, then $H_{\mathrm{id},k} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f finite field case}(i). Moreover, a straightforward calculation shows $H_{\mathrm{id},k} \circ H_{\mathrm{id},l} = H_{\mathrm{id},kl} \in \{ H_{\mathrm{id},k} \ \vert \ k^s=1 \}$ for all $s^{\text{th}}$ roots of unity $k, l \in K^{\times}$. This means $\{ H_{\mathrm{id},k} \ \vert \ k^s=1 \}$ is isomorphic to the cyclic subgroup of $K^{\times}$ consisting of all $s^{\text{th}}$ roots of unity. There are precisely $\mathrm{gcd}(s, p^h-1)$ $s^{\text{th}}$ roots of unity in $K$ by \cite[Proposition II.2.1]{koblitz1994course}, which yields the assertion. \item[(ii)] We have $\mathrm{gcd}(s, p^h-1) \geq m$ by the proof of Corollary \ref{cor:p=1mod m finite field irreducibility criteria}, therefore there are at least $m$ automorphisms of $S_f$ of the form $H_{\mathrm{id},k}$ by (i). \item[(iii)] Since $p \equiv -1 \ \mathrm{mod} \ m$ and $h$ is even, we have $p^h \equiv (-1)^h \equiv 1 \ \mathrm{mod} \ m$. If $r$ is even then $p^r \equiv 1 \ \mathrm{mod} \ m$ and \begin{align*} s \ \mathrm{mod} \ m & \equiv \Big( \sum_{i=0}^{m-1} (p^{ri} \ \mathrm{mod} \ m) \Big) \ \mathrm{mod} \ m \equiv \big( \sum_{i=0}^{m-1} 1 \big) \ \mathrm{mod} \ m \equiv 0 \ \mathrm{mod} \ m.
\end{align*} On the other hand, if $r$ is odd then $m$ must be even, therefore $p^r \equiv -1 \ \mathrm{mod} \ m$ and \begin{align*} s \ \mathrm{mod} \ m & \equiv \Big( \sum_{i=0}^{m-1} (p^{ri} \ \mathrm{mod} \ m) \Big) \ \mathrm{mod} \ m \\ & \equiv \big( \sum_{i=0}^{m-1} (-1)^i \big) \ \mathrm{mod} \ m \equiv 0 \ \mathrm{mod} \ m. \end{align*} In either case, $m \vert (p^h-1)$ and $m \vert s$. Hence $\mathrm{gcd}(s, p^h-1) \geq m$, so there are at least $m$ automorphisms of $S_f$ of the form $H_{\mathrm{id},k}$ by (i). \item[(iv)] Since $p$ is odd, $p \equiv 1 \ \mathrm{mod} \ 2$ and the result follows by (ii). \end{itemize} \end{proof} Corollary \ref{cor:automorphisms of Sandler semifields}(i) implies that if $\mathrm{Aut}_F(S_f)$ is trivial for a single $f(t) = t^m-a \in K[t;\sigma]$, then all $t^m-c \in K[t;\sigma]$, $c \in K$, are reducible: Indeed, if $\mathrm{Aut}_{F}(S_f)$ is trivial, then $\mathrm{gcd}(s, p^h-1) = 1$ by Corollary \ref{cor:automorphisms of Sandler semifields}(i), and so $t^m - c \in K[t;\sigma]$ is reducible for all $c \in K$ by Corollary \ref{cor:Finite field t^m-a irreducibility criteria}. \begin{proposition} \label{prop:inner automorphisms of S_f over finite fields} Let $f(t) = t^m-a \in K[t;\sigma]$. \begin{itemize} \item[(i)] If $c \in K^{\times}$ is a $(p^{rm}-1)^{\text{th}}$ root of unity, then the map $$G_c : S_f \rightarrow S_f, \ \sum_{i=0}^{m-1} x_i t^i \mapsto \Big( c^{-1} \sum_{i=0}^{m-1} x_i t^i \Big) c,$$ is an inner automorphism of $S_f$. \item[(ii)] If $p^{\mathrm{gcd}(rm,h)} - p^{\mathrm{gcd}(r,h)} > 0$, then there exists a non-trivial inner automorphism of $S_f$ of the form $G_c$ for some $c \in K^{\times}$ which is a $(p^{rm}-1)^{\text{th}}$ root of unity, but not a $(p^r-1)^{\text{th}}$ root of unity. \end{itemize} \end{proposition} \begin{proof} Let $k = c^{p^r-1}$. \begin{itemize} \item[(i)] We have $k^{(p^{rm}-1)/(p^r-1)} = 1$ and so $H_{\mathrm{id},k} \in \mathrm{Aut}_{F}(S_f)$ by Theorem \ref{thm:automorphism_of_S_f finite field case}(i).
Furthermore, \begin{align*} H_{\mathrm{id},k} &\big( \sum_{i=0}^{m-1} x_i t^i \big) = x_0 + \sum_{i=1}^{m-1} x_i \Big( \prod_{l=0}^{i-1} \sigma^l(k) \Big) t^i = x_0 + \sum_{i=1}^{m-1} x_i \Big( \prod_{l=0}^{i-1} k^{p^{rl}} \Big) t^i \\ &= x_0 + x_1 k t + \sum_{i=2}^{m-1} x_i k^{(p^{ri}-1)/(p^r-1)} t^i = \sum_{i=0}^{m-1} x_i c^{-1} \sigma^i(c) t^i \\ &= \Big( c^{-1} \sum_{i=0}^{m-1} x_i t^i \Big) c, \end{align*} so we conclude $H_{\mathrm{id},k} = G_c$ is an inner automorphism. \item[(ii)] Notice $H_{\mathrm{id},k}$ is the identity if and only if $k = 1$, i.e. if and only if $c$ is a $(p^r-1)^{\text{th}}$ root of unity. Moreover, every $(p^r-1)^{\text{th}}$ root of unity $c$ is also a $(p^{rm}-1)^{\text{th}}$ root of unity because $(p^r-1) \vert (p^{rm}-1)$. Therefore if the number of $(p^{rm}-1)^{\text{th}}$ roots of unity in $K$ is strictly greater than the number of $(p^{r}-1)^{\text{th}}$ roots of unity, then there exists a non-trivial automorphism of $S_f$ of the form $G_c$ for some $c \in K^{\times}$ which is a $(p^{rm}-1)^{\text{th}}$ root of unity, but not a $(p^r-1)^{\text{th}}$ root of unity. Finally, the number of $(p^{rm}-1)^{\text{th}}$ roots of unity in $K$ is strictly greater than the number of $(p^{r}-1)^{\text{th}}$ roots of unity if and only if \begin{align*} \mathrm{gcd}&(p^{rm}-1,p^h-1) - \mathrm{gcd}(p^r-1,p^h-1) > 0, \end{align*} if and only if $$p^{\mathrm{gcd}(rm,h)} - 1 - (p^{\mathrm{gcd}(r,h)}-1) = p^{\mathrm{gcd}(rm,h)} - p^{\mathrm{gcd}(r,h)} > 0$$ by Lemma \ref{lem:gcd number theory result}. \end{itemize} \end{proof} When $K$ contains a primitive $(p^{rm}-1)^{\text{th}}$ root of unity, Proposition \ref{prop:inner automorphisms of S_f over finite fields} leads to: \begin{corollary} \label{cor:inner automorphism cyclic subgroup over finite fields} Suppose $f(t) = t^m-a \in K[t;\sigma]$ and $(rm) \vert h$. Then $K$ contains a primitive $(p^{rm}-1)^{\text{th}}$ root of unity $c$ and $\mathrm{Aut}_{F}(S_f)$ contains a cyclic subgroup of inner automorphisms of order $(p^{rm}-1)/(p^r-1)$ generated by $G_c$.
\end{corollary} \begin{proof} $K$ contains a primitive $(p^{rm}-1)^{\text{th}}$ root of unity if and only if $(p^{rm}-1) \vert (p^h-1)$ by \cite[Proposition II.2.1]{koblitz1994course}, if and only if $$p^{rm}-1 = \mathrm{gcd}(p^{rm}-1, p^h-1) = p^{\mathrm{gcd}(rm,h)}-1$$ by Lemma \ref{lem:gcd number theory result}, if and only if $rm = \mathrm{gcd}(rm,h)$, if and only if $(rm) \vert h$. Let $c \in K^{\times}$ be a primitive $(p^{rm}-1)^{\text{th}}$ root of unity; then $G_c$ is a non-trivial inner automorphism of $S_f$ by Proposition \ref{prop:inner automorphisms of S_f over finite fields}. Additionally, notice $k = c^{p^r-1}$ is a primitive $\big((p^{rm}-1)/(p^r-1)\big)^{\text{th}}$ root of unity and $G_c = H_{\mathrm{id},k}$ by the proof of Proposition \ref{prop:inner automorphisms of S_f over finite fields}. Thus $\langle G_c \rangle = \{ G_c, G_{c^2}, \ldots, \mathrm{id} \}$ is a cyclic subgroup of $\mathrm{Aut}_{F}(S_f)$ of order $(p^{rm}-1)/(p^r-1)$. \end{proof} \begin{proposition} Let $f(t) = t^m-a \in K[t;\sigma]$ and $l \in \mathbb{N}$ be such that $l \vert m$ and $l \vert (p^{\mathrm{gcd}(r,h)}-1)$. Then $F$ contains a primitive $l^{\text{th}}$ root of unity $\omega$ and $\mathrm{Aut}_{F}(S_f)$ contains a cyclic subgroup of order $l$ generated by $H_{\mathrm{id},\omega}$. \end{proposition} \begin{proof} That $F$ contains a primitive $l^{\text{th}}$ root of unity is equivalent to $l \vert (p^{\mathrm{gcd}(r,h)}-1)$, so the result follows by Theorem \ref{thm:Primitive root then subgroup of order m}. \end{proof} \chapter{Automorphisms of Nonassociative Cyclic Algebras} \label{chapter:Automorphisms of Nonassociative Cyclic Algebras} Throughout this Chapter, let $K/F$ be a cyclic Galois field extension of degree $m$ with $\mathrm{Gal}(K/F) = \langle \sigma \rangle$, and let $$A = (K/F,\sigma,a) = K[t;\sigma]/K[t;\sigma](t^m-a)$$ be a nonassociative cyclic algebra of degree $m$ for some $a \in K \setminus F$. Here $A$ is not associative by Theorem \ref{thm:Properties of S_f petit}(v).
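The non-associativity can be verified concretely on powers of $t$ alone; the short computation below is a sketch using only the defining relations $t k = \sigma(k) t$ and $t^m \equiv a$ of the quotient construction.

```latex
% Associativity already fails on powers of t whenever a \in K \setminus F:
(t^{m-1} \cdot t) \cdot t = a \cdot t = a t,
\qquad
t^{m-1} \cdot (t \cdot t) \equiv t^{m+1} \ \mathrm{mod} \ (t^m - a) = \sigma(a)\, t,
% and \sigma(a) \neq a precisely because a \notin F = \mathrm{Fix}(\sigma).
```

Thus associativity fails on these elements exactly when $a \notin F$, matching the standing assumption $a \in K \setminus F$.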
In this Chapter we investigate the automorphisms of nonassociative cyclic algebras using results in Chapter \ref{chapter:Automorphisms of S_f} and those by Steele in \cite[\S 6.3]{AndrewPhD}. In particular, we will prove there always exist non-trivial inner automorphisms of a nonassociative cyclic algebra and find conditions for all automorphisms to be inner. These conditions are closely related to whether or not the field $F$ contains a primitive $m^{\text{th}}$ root of unity. We pay special attention to the case where $K/F$ is an extension of finite fields in Section \ref{section:Automorphisms of Nonassociative Cyclic Algebras over Finite Fields}, and we completely determine the automorphism group of nonassociative cyclic algebras of prime degree different from $\mathrm{Char}(F)$. The field norm $N_{K/F}:K \rightarrow F$ is given by $N_{K/F}(k) = \prod_{l=0}^{m-1} \sigma^l(k)$. Recall Hilbert's Theorem 90, which states $N_{K/F}(b) = 1$ if and only if $b = \sigma(c)c^{-1}$ for some $c \in K^{\times}$, thus $\mathrm{Ker}(N_{K/F}) = \Delta^\sigma (1)$, where $\Delta^\sigma (l) = \{ \sigma(c) l c^{-1} \ \vert \ c \in K^{\times} \}$ is the \textbf{$\sigma$-conjugacy class} of $l \in K^{\times}$ \cite{lam1988vandermonde}. In this case Theorem \ref{thm:automorphism_of_S_f_division_case}(iii) immediately becomes: \begin{corollary} \label{cor:t^m-a automorphism field result} (\cite[Corollary 3.2.9]{AndrewPhD}). A map $H: A \rightarrow A$ is an $F$-automorphism of $A$ if and only if $H = H_{\sigma^j, k}$ for some $j \in \{ 0, \ldots, m-1 \}$ and $k \in K^{\times}$ such that $\sigma^j(a) = N_{K/F}(k)a$, where $H_{\sigma^j, k}$ is defined as in \eqref{automorphism_of_Sf form of H}. 
\end{corollary} Corollary \ref{cor:t^m-a automorphism field result} leads us to the following Theorem on the inner automorphisms of nonassociative cyclic algebras: \begin{theorem} \label{thm:t^m-a_automorphism_field} \begin{itemize} \item[(i)] The maps \begin{equation} \label{eqn:form of G_c t^m-a automorphism} G_c: A \rightarrow A, \ \sum_{i=0}^{m-1} x_i t^i \mapsto \sum_{i=0}^{m-1} x_i c^{-1} \sigma^i(c) t^i, \end{equation} are inner automorphisms for all $c \in K^{\times}$. \item[(ii)] Let $c \in K^{\times}$, then $G_c = H_{\mathrm{id},k}$ where $k = c^{-1} \sigma(c)$. Furthermore every $H_{\mathrm{id}, l} \in \mathrm{Aut}_F(A)$ is inner and can be written in the form $G_c$ for some $c \in K^{\times}$. \item[(iii)] Let $c, d \in K^{\times}$, then $G_c = G_d$ if and only if $c^{-1} \sigma(c) = d^{-1}\sigma(d)$. \item[(iv)] There exists a non-trivial inner automorphism of $A$. Therefore the automorphism group of $A$ is not trivial. \item[(v)] $N = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F})$ is an abelian normal subgroup of $\mathrm{Aut}_F(A)$. In particular, $H \circ G_c \circ H^{-1}$ is inner for all $c \in K^{\times}$ and all $H \in \mathrm{Aut}_F(A)$. \item[(vi)] If $\sigma^j(a) a^{-1} \notin N_{K/F}(K^{\times})$ for all $j \in \{ 1, \ldots, m-1 \}$ then $$\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F}),$$ and all automorphisms of $A$ are inner. In particular, $\mathrm{Aut}_F(A)$ is abelian. \item[(vii)] Let $c \in K \setminus F$ and suppose there exists $j \in \mathbb{N}$ such that $c^j \in F^{\times}$. Let $j$ be minimal. Then $\langle G_c \rangle \cong \mathbb{Z}/j \mathbb{Z}$ is a cyclic subgroup of $\mathrm{Aut}_F(A)$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] A straightforward calculation shows $$G_c \Big( \sum_{i=0}^{m-1} x_i t^i \Big) = \Big( c^{-1} \sum_{i=0}^{m-1} x_i t^i \Big) c,$$ for all $c \in K^{\times}$. 
Furthermore, $D = \mathrm{Nuc}(A)$ by Corollary \ref{cor:Nucleus of Nonassociative cyclic algebra} and hence $G_c$ are inner automorphisms by Corollary \ref{cor:inner automorphisms of S_f, D[t;sigma,delta]}. \item[(ii)] If $k = c^{-1} \sigma(c)$, then \begin{align*} H_{\mathrm{id}, k} \big( \sum_{i=0}^{m-1} x_i t^i \big) &= x_0 + \sum_{i=1}^{m-1} x_i \Big( \prod_{l=0}^{i-1} \sigma^l(k) \Big) t^i \\ &= x_0 + \sum_{i=1}^{m-1} x_i c^{-1} \sigma^i(c) t^i = G_c \big( \sum_{i=0}^{m-1} x_i t^i \big), \end{align*} and thus $G_c = H_{\mathrm{id},k}$. Suppose $H_{\mathrm{id},l} \in \mathrm{Aut}_F(A)$ for some $l \in K^{\times}$, then $N_{K/F}(l) = 1$ by Corollary \ref{cor:t^m-a automorphism field result} and so there exists $c \in K^{\times}$ such that $l = c^{-1}\sigma(c)$ by Hilbert 90. This means $H_{\mathrm{id},l} = G_c$ by the above calculation. \item[(iii)] Let $k = c^{-1} \sigma(c)$ and $l = d^{-1} \sigma(d)$, so that $G_c = H_{\mathrm{id},k}$ and $G_d = H_{\mathrm{id},l}$ by (ii). Therefore $G_c = G_d$ if and only if $H_{\mathrm{id},k} = H_{\mathrm{id},l}$ if and only if $k = l$. \item[(iv)] $G_c$ is a non-trivial inner automorphism of $A$ for all $c \in K \setminus F$. \item[(v)] Note $N = \{ H_{\mathrm{id},k} \ \vert \ k \in \mathrm{Ker}(N_{K/F}) \}$ by (ii) and Corollary \ref{cor:t^m-a automorphism field result} and $N$ is a subgroup of $\mathrm{Aut}_F(A)$ by Theorem \ref{thm:automorphism_of_S_f_division_case}(i). Furthermore, a straightforward calculation shows $G_c \circ G_d = G_{cd} = G_{dc} = G_d \circ G_c$, for all $c, d \in K^{\times}$, i.e. $N$ is abelian. We are left to prove $N$ is a normal subgroup of $\mathrm{Aut}_F(A)$: Let $c \in K^{\times}$ and $H \in \mathrm{Aut}_F(A)$. Then $H = H_{\sigma^j,k}$ for some $j \in \{ 0, \ldots, m-1 \}$, $k \in K^{\times}$ such that $\sigma^j(a) = N_{K/F}(k)a$ by Corollary \ref{cor:t^m-a automorphism field result}. 
Additionally the inverse of $H_{\sigma^j,k}$ is $H_{\sigma^{-j},\sigma^{-j}(k^{-1})}$, and we have \begin{align*} H_{\sigma^j,k} \Big( &G_c \Big( H_{\sigma^j,k}^{-1} \Big( \sum_{i=0}^{m-1} x_i t^i \Big) \Big) \Big) \\ &= H_{\sigma^j,k} \Big( G_c \Big( \sigma^{-j}(x_0) + \sum_{i=1}^{m-1} \sigma^{-j}(x_i) \big( \prod_{l=0}^{i-1} \sigma^{l-j}(k^{-1}) \big) t^i \Big) \Big) \\ &= H_{\sigma^j,k} \Big( \sigma^{-j}(x_0) + \sum_{i=1}^{m-1} \sigma^{-j}(x_i) \big( \prod_{l=0}^{i-1} \sigma^{l-j}(k^{-1}) \big) c^{-1} \sigma^i(c) t^i \Big) \\ &= x_0 + \sum_{i=1}^{m-1} x_i \big( \prod_{l=0}^{i-1} \sigma^{l}(k^{-1}) \big) \sigma^j(c^{-1}) \sigma^{i+j}(c) \big( \prod_{l=0}^{i-1} \sigma^{l}(k) \big) t^i \\ &= x_0 + \sum_{i=1}^{m-1} x_i \sigma^j(c^{-1}) \sigma^{j+i}(c) t^i = G_{\sigma^j(c)} \Big( \sum_{i=0}^{m-1} x_i t^i \Big), \end{align*} hence $H_{\sigma^j,k} \circ G_c \circ H_{\sigma^j,k}^{-1} = G_{\sigma^j(c)} \in N$ which yields the assertion. \item[(vi)] Here $\mathrm{Aut}_{F}(A) = \{ H_{\mathrm{id},k} \ \vert \ k \in \mathrm{Ker}(N_{K/F}) \}$ by Corollary \ref{cor:t^m-a automorphism field result} and hence the result follows by (ii) and (iii). \item[(vii)] This is just Corollary \ref{cor:G_c subgroup order j cyclic}. \end{itemize} \end{proof} In some instances, $\mathrm{Aut}_F(A)$ is a non-abelian group: \begin{proposition} Suppose $H_{\sigma^j,k} \in \mathrm{Aut}_F(A)$ for some $j \in \{ 1, \ldots, m-1 \}$ and $k \in K^{\times}$. If there exists $c \in K^{\times}$ and $i \in \{ 1, \ldots, m-1 \}$ such that $c^{-1} \sigma^i(c) \notin \mathrm{Fix}(\sigma^j)$, then $\mathrm{Aut}_F(A)$ is a non-abelian group. 
\end{proposition} \begin{proof} We have $$H_{\sigma^j,k}(G_c(t^i)) = H_{\sigma^j,k} \big( c^{-1} \sigma^i(c)t^i \big) = \sigma^j \big( c^{-1} \sigma^i(c) \big) \Big( \prod_{l=0}^{i-1} \sigma^l(k) \Big) t^i,$$ and $$G_c(H_{\sigma^j,k}(t^i)) = G_c \Big( \Big( \prod_{l=0}^{i-1} \sigma^l(k) \Big) t^i \Big) = \Big( \prod_{l=0}^{i-1} \sigma^l(k) \Big) c^{-1} \sigma^i(c) t^i,$$ for all $i \in \{ 1, \ldots, m-1 \}$. Thus if there exists $i \in \{ 1, \ldots, m-1 \}$ such that $c^{-1} \sigma^i(c) \notin \mathrm{Fix}(\sigma^j)$, then $G_c \circ H_{\sigma^j,k} \neq H_{\sigma^j,k} \circ G_c$ and $\mathrm{Aut}_F(A)$ is not abelian. \end{proof} We now take a closer look at the inner automorphisms in the case where $F$ does not contain certain primitive roots of unity: \begin{theorem} \label{thm:Aut(S_f) F does not contain primitive root of unity} Suppose $a \in K^{\times}$ does not lie in any proper subfield of $K$ and $F$ does not contain a non-trivial $m^{\text{th}}$ root of unity. Then $$\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F}),$$ and all automorphisms of $A$ are inner. \end{theorem} \begin{proof} We first prove every automorphism of $A$ has the form $H_{\mathrm{id},k}$, then $$\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F}),$$ by Theorem \ref{thm:t^m-a_automorphism_field} and all automorphisms of $A$ are inner. Suppose, for a contradiction, that there exists $j \in \{ 1, \ldots, m-1 \}$ and $k \in K^{\times}$ such that $H_{\sigma^j,k} \in \mathrm{Aut}_{F}(A)$. This implies $H_{\sigma^j,k}^2 = H_{\sigma^j,k} \circ H_{\sigma^j,k} \in \mathrm{Aut}_{F}(A)$, and \begin{equation} \label{eqn:Aut(S_f) F does not contain primitive root of unity 1} \begin{split} H_{\sigma^j,k}^2 & \Big( \sum_{i=0}^{m-1} x_i t^i \Big) = \sigma^{2j}(x_0) + \sum_{i=1}^{m-1} \sigma^{2j}(x_i) \Big( \prod_{q=0}^{i-1} \sigma^{j+q}(k) \sigma^q(k) \Big) t^i. 
\end{split} \end{equation} Now $H_{\sigma^j,k}^2$ must have the form $H_{\sigma^{2j},l}$ for some $l \in K^{\times}$ by Corollary \ref{cor:t^m-a automorphism field result}, and comparing \eqref{automorphism_of_Sf form of H} and \eqref{eqn:Aut(S_f) F does not contain primitive root of unity 1} yields $l = k \sigma^j(k)$. Similarly, $H_{\sigma^j,k}^3 = H_{\sigma^{3j},s} \in \mathrm{Aut}_{F}(A)$ where $s = k \sigma^j(k) \sigma^{2j}(k)$. Continuing in this manner we conclude the maps $H_{\sigma^j,k}, H_{\sigma^{2j},l}, H_{\sigma^{3j},s}, \ldots$ are all $F$-automorphisms of $A$, therefore \begin{align} \label{eqn:Aut(S_f) F does not contain primitive root of unity 2} \begin{split} \sigma^j(a) &= N_{K/F}(k) a, \\ \sigma^{2j}(a) &= N_{K/F}(k \sigma^j(k)) a = N_{K/F}(k)^2 a, \\ \vdots & \qquad \qquad \vdots \\ a = \sigma^{n j}(a) &= N_{K/F}(k)^{n} a, \end{split} \end{align} by Corollary \ref{cor:t^m-a automorphism field result}, where $n = m/\mathrm{gcd}(j,m)$ is the order of $\sigma^j$. Note that $\sigma^{ij}(a) \neq a$ for all $i \in \{ 1, \ldots, n-1 \}$ since $a$ is not contained in any proper subfield of $K$. Therefore $N_{K/F}(k)^{n} = 1$ and $N_{K/F}(k)^i \neq 1$ for all $i \in \{1, \ldots, n - 1 \}$ by \eqref{eqn:Aut(S_f) F does not contain primitive root of unity 2}, i.e. $N_{K/F}(k)$ is a primitive $n^{\text{th}}$ root of unity. As $n \vert m$, it is thus also a non-trivial $m^{\text{th}}$ root of unity in $F$, a contradiction. \end{proof} \begin{example} \label{ex:cubic cyclic extension inner automorphisms} Let $F = \mathbb{Q}$ and $K = \mathbb{Q}(\theta)$ where $\theta$ is a root of $T(y) = y^3 + y^2 - 2y - 1 \in F[y]$. Then $K/F$ is a cubic cyclic Galois field extension whose Galois group is generated by $\sigma$, where $\sigma(\theta) = - \theta^2 - \theta +1$ \cite[p.~199]{hanke2005twisted}. Suppose $A = (K/F,\sigma,a)$ for some $a \in K \setminus F$.
As $F$ does not contain a non-trivial $3^{\text{rd}}$ root of unity, Theorem \ref{thm:Aut(S_f) F does not contain primitive root of unity} implies $\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F}),$ and all automorphisms of $A$ are inner. \end{example} We now investigate the automorphisms of $A$ in the case when $F$ contains a primitive $m^{\text{th}}$ root of unity. It is well-known that if $F$ contains a primitive $m^{\text{th}}$ root of unity and $K/F$ is cyclic of degree $m$ (where $m$ and $\mathrm{Char}(F)$ are coprime), then $K = F(d)$ where $d$ is a root of the irreducible polynomial $x^m - e \in F[x]$ for some $e \in F^{\times}$ \cite[\S VI.6]{lang2002algebra}. \begin{lemma} \label{lem:eigenvalues and eigenvectors of cyclic extension automorphisms} Suppose $F$ contains a primitive $m^{\text{th}}$ root of unity and either $F$ has characteristic $0$ or $\mathrm{gcd}(\mathrm{Char}(F),m)=1$. Write $K = F(d)$ where $d$ is a root of the irreducible polynomial $x^m - e$ for some $e \in F^{\times}$. Then $\sigma^j(\lambda d^i) = \omega^{ij} \lambda d^i$, for all $\lambda \in F$ and $i, j \in \{ 0, \ldots, m-1 \}$ with $\omega \in F^{\times}$ a primitive $m^{\text{th}}$ root of unity. Furthermore, if $m$ is prime then $\lambda d^i$ are the only possible eigenvectors of $\sigma^j$. \end{lemma} \begin{proof} When $m$ is prime this is \cite[Lemma 6.2.7]{AndrewPhD}. We are left to prove the first assertion when $m$ is not necessarily prime; this is similar to the first part of the proof of \cite[Lemma 6.2.7]{AndrewPhD}: We have $\sigma(d^m) = \sigma(e) = e = d^m$, therefore the action of $\sigma$ on $d$ is given by $\sigma(d) = \omega d$ where $\omega$ is a primitive $m^{\text{th}}$ root of unity. Thus $\sigma^j(d) = \omega^jd$ and $\sigma^j(d^i) = \omega^{ij}d^i$ for all $i,j \in \{ 0, \ldots, m-1 \}$.
\end{proof} When $m$ is prime, Corollary \ref{cor:t^m-a automorphism field result} and Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms} yield: \begin{proposition} \label{prop:contains rot of unity Ker} Suppose $m$ is prime, $F$ has characteristic different from $m$, and $F$ contains a primitive $m^{\text{th}}$ root of unity. Write $K = F(d)$ where $d$ is a root of the irreducible polynomial $x^m-e \in F[x]$ for some $e \in F^{\times}$. \begin{itemize} \item[(i)] If $a \neq \lambda d^i$ for all $\lambda \in F^{\times}$, $i \in \{ 1, \ldots, m-1 \}$ then $\mathrm{Aut}_F(A) \cong \mathrm{Ker}(N_{K/F})$ and all automorphisms of $A$ are inner. \item[(ii)] Suppose $a = \lambda d^i$ for some $\lambda \in F^{\times}$, $i \in \{ 1, \ldots, m-1 \}$. If there exists $j \in \{ 1, \ldots, m-1 \}$ and $k \in K^{\times}$ such that $N_{K/F}(k) = \omega^{ij}$ where $\omega \in F^{\times}$ is the primitive $m^{\text{th}}$ root of unity satisfying $\sigma(d) = \omega d$, then $\langle H_{\sigma^j,k} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_F(A)$ of order $m^2$. Otherwise $\mathrm{Aut}_F(A) \cong \mathrm{Ker}(N_{K/F})$ and all automorphisms of $A$ are inner. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(i)] If $a \neq \lambda d^i$ for all $\lambda \in F^{\times}$, $i \in \{ 1, \ldots, m-1 \}$ then $\sigma^j(a) \neq la$ for all $l \in F^{\times}$, $j \in \{ 1, \ldots, m-1 \}$ by Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms}. In particular, this means $\sigma^j(a) \neq N_{K/F}(k)a$ for all $k \in K^{\times}$ and so $H_{\sigma^j,k}$ is not an automorphism of $A$ for all $j \in \{ 1, \ldots, m-1 \}$, $k \in K^{\times}$ by Corollary \ref{cor:t^m-a automorphism field result}.
Therefore $\mathrm{Aut}_F(A) = \{ H_{\mathrm{id},k} \ \vert \ N_{K/F}(k)=1 \}$ again by Corollary \ref{cor:t^m-a automorphism field result} and $\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F})$ by Theorem \ref{thm:t^m-a_automorphism_field}, hence all automorphisms of $A$ are inner. \item[(ii)] Suppose there exists $j \in \{ 1, \ldots, m-1 \}$ and $k \in K^{\times}$ such that $N_{K/F}(k) = \omega^{ij}$ where $\omega \in F^{\times}$ is a primitive $m^{\text{th}}$ root of unity. Then $$\sigma^j(a) = \omega^{ij}a = N_{K/F}(k)a,$$ by Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms} which implies $H_{\sigma^j,k} \in \mathrm{Aut}_F(A)$ by Corollary \ref{cor:t^m-a automorphism field result}. Since $m$ is prime, $\sigma^j$ has order $m$, and $H_{\sigma^j,k} \circ \ldots \circ H_{\sigma^j,k}$ ($m$ times) becomes $H_{\mathrm{id},b}$ where $b = \omega^{ij} = N_{K/F}(k)$. As $b$ is a primitive $m^{\text{th}}$ root of unity, $H_{\mathrm{id},b}$ has order $m$ so the subgroup generated by $H_{\sigma^j,k}$ has order $m^2$. On the other hand, if $N_{K/F}(k) \neq \omega^{ij}$ for all $k \in K^{\times}$, $j \in \{ 1, \ldots, m-1 \}$, then $\sigma^j(a) \neq N_{K/F}(k) a$ for all $k \in K^{\times}$, $j \in \{ 1, \ldots, m-1 \}$ by Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms}, and hence $H_{\sigma^j,k} \notin \mathrm{Aut}_{F}(A)$ for all $j \in \{ 1,\ldots, m-1 \}$, $k \in K^{\times}$ by Corollary \ref{cor:t^m-a automorphism field result}. Therefore $\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F})$ by Theorem \ref{thm:t^m-a_automorphism_field}. \end{itemize} \end{proof} \section{Automorphisms of Nonassociative Quaternion Algebras} \label{section:Automorphisms of Nonassociative Quaternion Algebras} We now study the automorphisms of nonassociative cyclic algebras of degree $2$, i.e. nonassociative quaternion algebras.
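For $m = 2$ it is convenient to have the multiplication of $A$ written out once and for all; the closed formula below is a sketch derived from the defining relations $t^2 = a$ and $t k = \sigma(k) t$, and it underlies the computations in this Section.

```latex
% Multiplication in a nonassociative quaternion algebra A = (K/F,\sigma,a):
(x_0 + x_1 t) \circ (y_0 + y_1 t)
  = x_0 y_0 + x_1 \sigma(y_1)\, a + \big( x_0 y_1 + x_1 \sigma(y_0) \big) t,
\qquad x_0, x_1, y_0, y_1 \in K.
```

In particular $t \circ k = \sigma(k) t$ for all $k \in K$, which is used repeatedly when evaluating candidate automorphisms on $K$ in the proof of Theorem \ref{thm:Automorphisms of nonassociative quaternion algebras}.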
Suppose $\mathrm{Char}(F) \neq 2$, $K/F$ is a quadratic separable field extension with non-trivial automorphism $\sigma$, and write $K = F(\sqrt{b})$ for some $b \in F^{\times}$. Let $A = (K/F,\sigma,a)$, $a \in K \setminus F$, be a nonassociative quaternion algebra. \begin{theorem} \label{thm:Automorphisms of nonassociative quaternion algebras} \begin{itemize} \item[(i)] A map $H$ is an automorphism of $A$ if and only if $H = H_{\sigma^j,k}$ for some $j \in \{ 0,1 \}$ and $k \in K^{\times}$ such that $\sigma^j(a) = N_{K/F}(k)a$. \item[(ii)] The map $H_{\mathrm{id},c^{-1}\sigma(c)} = G_c$ defined as in \eqref{eqn:form of G_c t^m-a automorphism} is an inner automorphism of $A$ for all $c \in K^{\times}$. Moreover every automorphism of $A$ of the form $H_{\mathrm{id},k}$ can also be written in the form $G_c$ for some $c \in K^{\times}$. \item[(iii)] $G_c$ is not trivial if and only if $c \in K \setminus F$. In particular there exists a non-trivial inner automorphism of $A$. \item[(iv)] $\{ G_c \ \vert \ c \in K^{\times} \}$ are the only inner automorphisms of $A$. \end{itemize} \end{theorem} \begin{proof} (i), (ii) and (iii) follow immediately from Corollary \ref{cor:t^m-a automorphism field result} and Theorem \ref{thm:t^m-a_automorphism_field}. The proof of (iv) is similar to \cite[Lemmas 2 and 3]{wene2006auto} with $\alpha = \mathrm{id}$: Suppose, for a contradiction, that $\{ G_c \ \vert \ c \in K^{\times} \}$ are not the only inner automorphisms of $A$. This means there exists an element $0 \neq r + st \in A$ with left inverse $u+vt$ such that $s \neq 0$ and $$H: A \rightarrow A, \ x_0 + x_1t \mapsto [(u + vt) \circ (x_0 + x_1t)] \circ (r + s t),$$ is an automorphism.
We have \begin{align} \label{eqn:t^m-a_automorphism_field 0} \begin{split} 1 &= (u + vt) \circ (r + st) = u r + v \sigma(s)a + \big( u s + v \sigma(r) \big) t, \end{split} \end{align} and comparing the coefficients of $t$ in \eqref{eqn:t^m-a_automorphism_field 0} yields \begin{equation} \label{eqn:t^m-a_automorphism_field 1} u s + v \sigma(r) = 0. \end{equation} Any automorphism must preserve the left nucleus, so $H(K) = K$ and $H \vert_K = \sigma^j$ for some $j \in \{ 0, 1 \}$. This implies \begin{align*} H(k) &= [(u + vt) \circ k] \circ (r + st) = (uk + v \sigma(k)t) \circ (r + s t) \\ &= u k r + v \sigma(k) \sigma(s) a + (u k s + v \sigma(k)\sigma(r))t = \sigma^j(k) \end{align*} for all $k \in K$, in particular \begin{equation} \label{eqn:t^m-a_automorphism_field 2} k u s + \sigma(k) v \sigma(r) = 0. \end{equation} Therefore $k u s = \sigma(k) u s$ for all $k \in K$ by \eqref{eqn:t^m-a_automorphism_field 1}, \eqref{eqn:t^m-a_automorphism_field 2}, hence $u = 0$ since $s \neq 0$ and $\sigma$ is not trivial. Furthermore $v \sigma(r) = 0$ by \eqref{eqn:t^m-a_automorphism_field 1} which means $r = 0$ because $\sigma$ is injective and $v \neq 0$. Now $A$ is a division algebra by Corollary \ref{cor:nonassociative cyclic algebra is division} and Theorem \ref{thm:S_f_division_iff_irreducible} (see also \cite[p.~369]{waterhouse}), moreover $$\frac{1}{\sigma(s)a}t \circ st = 1,$$ and so the left inverse of $st$ is unique and equal to $(1/(\sigma(s)a)) t$. 
We conclude $H$ has the form \begin{align*} H(x_0 + x_1t) &= \big[ \frac{1}{\sigma(s)a}t \circ (x_0 + x_1t) \big] \circ s t = \Big( \frac{\sigma(x_0)}{\sigma(s)a}t + \frac{\sigma(x_1)}{\sigma(s)} \Big) \circ st \\ &= \sigma(x_0) + \frac{\sigma(x_1)s}{\sigma(s)}t, \end{align*} that is $H = H_{\sigma,l}$ where $l = \sigma(s)^{-1} s$, and $$\sigma(a) = \sigma(s)^{-1} s \sigma^2(s)^{-1} \sigma(s) a = s \sigma^2(s)^{-1} a = s s^{-1} a = a$$ by Corollary \ref{cor:t^m-a automorphism field result}, a contradiction because $a \in K \setminus F$. \end{proof} Theorem \ref{thm:Automorphisms of nonassociative quaternion algebras}(i) is also proven by Waterhouse in \cite[p.~370-371]{waterhouse}. Setting $m = 2$ in Theorem \ref{thm:t^m-a_automorphism_field}(vi) and Proposition \ref{prop:contains rot of unity Ker}(i) yields: \begin{corollary} \label{cor:inner automorphisms of nonassociative quaternion algebras} \begin{itemize} \item[(i)] If $a = \lambda_0 + \lambda_1 \sqrt{b}$ for some $\lambda_0, \lambda_1 \in F^{\times}$, then $\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F})$ and all automorphisms of $A$ are inner. \item[(ii)] If $a = \lambda \sqrt{b}$ for some $\lambda \in F^{\times}$ and $N_{K/F}(k) \neq -1$ for all $k \in K^{\times}$, then $\mathrm{Aut}_F(A) = \{ G_c \ \vert \ c \in K^{\times} \} \cong \mathrm{Ker}(N_{K/F})$ and all automorphisms of $A$ are inner. \end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] $-1 \in F^{\times}$ is a primitive $2^{\text{nd}}$ root of unity so the result follows by Proposition \ref{prop:contains rot of unity Ker}(i). \item[(ii)] We have $\sigma(a)a^{-1} = -a a^{-1} = -1$ so the assertion follows by Theorem \ref{thm:t^m-a_automorphism_field}(vi).
\end{itemize} \end{proof} \begin{example} \label{ex:automorphisms_of_nonassociative_quaternion_algebras} Consider the nonassociative quaternion algebra $A = (\mathbb{C}/\mathbb{R},\sigma,a)$ where $a \in \mathbb{C} \setminus \mathbb{R}$ and $\sigma$ denotes complex conjugation. Given $k = k_0 + k_1 i \in \mathbb{C}$, $k_0, k_1 \in \mathbb{R}$, we have $$N_{\mathbb{C}/\mathbb{R}}(k) = k \sigma(k) = (k_0 + k_1 i)(k_0 - k_1 i) = k_0^2 + k_1^2 \neq -1,$$ and so $\mathrm{Aut}_{\mathbb{R}}(A) = \{ G_c \ \vert \ c \in \mathbb{C}^{\times} \} \cong \mathrm{Ker}(N_{\mathbb{C}/\mathbb{R}})$ by Corollary \ref{cor:inner automorphisms of nonassociative quaternion algebras}. Now since $\mathrm{Ker}(N_{\mathbb{C}/\mathbb{R}}) = \{ k_0 + k_1 i \in \mathbb{C} \ \vert \ k_0^2 + k_1^2 = 1 \},$ we conclude $\mathrm{Aut}_{\mathbb{R}}(A)$ is isomorphic to the unit circle in $\mathbb{C}$. \end{example} Recall the \textbf{dicyclic group} of order $4l$ has the presentation \begin{equation} \label{eqn:dicyclic group presentation} \mathrm{Dic}_l = \langle x, y \ \vert \ y^{2l}=1, \ x^2 = y^l, \ x^{-1}yx = y^{-1} \rangle. \end{equation} The \textbf{semidirect product} $\mathbb{Z} / s \mathbb{Z} \rtimes \mathbb{Z} / n \mathbb{Z}$ between cyclic groups $\mathbb{Z} / s \mathbb{Z}$ and $\mathbb{Z} / n \mathbb{Z}$ corresponds to a choice of integer $l$ such that $l^n \equiv 1 \ \mathrm{mod} \ s$. It can be described by the presentation \begin{equation} \label{eqn:semidirect product of cyclic algebras} \mathbb{Z} / s \mathbb{Z} \rtimes_l \mathbb{Z} / n \mathbb{Z} = \langle x,y \ \vert \ x^s = 1, \ y^n= 1, \ yxy^{-1} = x^l \rangle. \end{equation} \begin{theorem} \label{thm:semidirect and dicyclic m=2} Let $a = \lambda \sqrt{b}$ for some $\lambda \in F^{\times}$ and suppose there exists $k \in K^{\times}$ such that $N_{K/F}(k) = -1$. Let $c \in K \setminus F$ be such that $c^j \in F^{\times}$ for some positive integer $j$, and let $j$ be minimal with this property.
\begin{itemize} \item[(i)] If $j$ is even then $\mathrm{Aut}_F(A)$ contains a subgroup isomorphic to the dicyclic group of order $2j$. \item[(ii)] If $j$ is odd then $\mathrm{Aut}_F(A)$ contains a subgroup isomorphic to the semidirect product $\mathbb{Z} / j \mathbb{Z} \rtimes_{j-1} \mathbb{Z} / 4 \mathbb{Z}.$ \end{itemize} \end{theorem} \begin{proof} Since $\sigma(a) = -a = N_{K/F}(k)a$, $H_{\sigma,k} \in \mathrm{Aut}_F(A)$ by Corollary \ref{cor:t^m-a automorphism field result}. $\langle G_c \rangle$ is a cyclic subgroup of $\mathrm{Aut}_F(A)$ of order $j$ by Theorem \ref{thm:t^m-a_automorphism_field}; furthermore, a straightforward calculation shows that \begin{equation} \label{eqn:semidirect and dicyclic m=2 I} \langle H_{\sigma,k} \rangle = \{ H_{\sigma,k}, H_{\mathrm{id},-1}, H_{\sigma,-k}, H_{\mathrm{id}, 1} \}. \end{equation} \begin{itemize} \item[(i)] Suppose $j$ is even and write $j = 2l$. We prove first that $G_{c^l} = H_{\mathrm{id}, -1}$. Write $c^l = \mu_0 + \mu_1 \sqrt{b}$ for some $\mu_0, \mu_1 \in F$. Then $$c^j = c^{2l} = \mu_0^2 + \mu_1^2 b + 2 \mu_0 \mu_1 \sqrt{b} \in F,$$ which implies $2 \mu_0 \mu_1 = 0$, i.e. $\mu_0 = 0$ or $\mu_1 = 0$. By the minimality of $j$, $c^l \notin F$ thus $\mu_0 = 0$ and hence $c^l = \mu_1 \sqrt{b}$. We have \begin{align*} G_{c^l}(x_0 + x_1t) &= x_0 + x_1 (\mu_1 \sqrt{b})^{-1} \sigma(\mu_1 \sqrt{b})t = x_0 + x_1 \mu_1^{-1} b^{-1} \sqrt{b}(- \mu_1 \sqrt{b})t \\ &= x_0 - x_1t = H_{\mathrm{id}, -1}(x_0 + x_1t), \end{align*} that is $G_{c^l} = H_{\mathrm{id}, -1}$. Next we prove $(H_{\sigma,k})^{-1} \circ G_c \circ H_{\sigma,k} = G_c^{-1}$: Simple calculations show $(H_{\sigma,k})^{-1} = H_{\sigma,-k}$ and $G_c^{-1} = G_{\sigma(c)}$.
Furthermore \begin{align*} H_{\sigma,-k} \big( G_c \big( H_{\sigma,k} \big( x_0 + x_1t \big) \big) \big) &= H_{\sigma,-k} \big( G_c \big( \sigma(x_0) + \sigma(x_1)kt \big) \big) \\ &= H_{\sigma,-k} \big( \sigma(x_0) + \sigma(x_1)k c^{-1}\sigma(c) t \big) \\ &= x_0 - x_1 \sigma(k) \sigma(c^{-1})ckt \\ &= x_0 + x_1 \sigma(c^{-1})c t = G_{\sigma(c)}\big( x_0 + x_1t \big), \end{align*} that is $(H_{\sigma,k})^{-1} \circ G_c \circ H_{\sigma,k} = G_c^{-1}$. We conclude $H_{\sigma,k}^2 = H_{\mathrm{id}, -1} = G_{c^l} = G_c^l$, $G_c^{2l} = \mathrm{id}$ and $(H_{\sigma,k})^{-1} \circ G_c \circ H_{\sigma,k} = G_c^{-1}$, hence $\langle H_{\sigma,k}, G_c \rangle$ has the presentation \eqref{eqn:dicyclic group presentation} as required. \item[(ii)] Suppose $j$ is odd, then $\langle G_c \rangle$ is cyclic of order $j$ so does not contain $H_{\mathrm{id}, -1}$, because $H_{\mathrm{id}, -1}$ has order $2$. This implies $\langle H_{\sigma,k} \rangle \cap \langle G_c \rangle = \{ \mathrm{id} \}$ by \eqref{eqn:semidirect and dicyclic m=2 I}. Furthermore, $(H_{\sigma,k})^{-1} \circ G_c \circ H_{\sigma,k} = G_c^{-1} = G_c^{j-1} = G_{c^{j-1}},$ similarly to the argument in (i). Notice that $$(j-1)^4 = j^4 - 4j^3 + 6j^2 - 4j + 1 \equiv 1 \text{ mod }(j),$$ thus $\mathrm{Aut}_F(A)$ contains the subgroup $$\langle G_c \rangle \rtimes_{j-1} \langle H_{\sigma,k} \rangle \cong \mathbb{Z} / j \mathbb{Z} \rtimes_{j-1} \mathbb{Z} / 4 \mathbb{Z}$$ as required. \end{itemize} \end{proof} In particular if we choose $c = \sqrt{b}$ in Theorem \ref{thm:semidirect and dicyclic m=2} then clearly $j = 2$ which means $\mathrm{Aut}_F(A)$ contains the dicyclic group of order $4$, which is the cyclic group of order $4$: \begin{corollary} If $a = \lambda \sqrt{b}$ for some $\lambda \in F^{\times}$ and there exists $k \in K^{\times}$ such that $k \sigma(k) = -1$ then $\mathrm{Aut}_F(A)$ contains a subgroup isomorphic to $\mathbb{Z}/4\mathbb{Z}$. 
\end{corollary} \begin{examples} \begin{itemize} \item[(i)] Suppose $F = \mathbb{Q}(i)$, $K = F(\sqrt{-3})$ and $\sigma: K \rightarrow K$ is the $F$-automorphism sending $\sqrt{-3}$ to $-\sqrt{-3}$. Let $A = (K/F,\sigma, \lambda \sqrt{-3})$ for some $\lambda \in F^{\times}$, $c = 1 + \sqrt{-3} \in K$, and notice $i \sigma(i) = i^2 = -1$. We have $c^2 = -2+2\sqrt{-3}$ and $c^3 = -8$ so that $c, c^2 \in K \setminus F$ and $c^3 \in F$. This means $\mathrm{Aut}_F(A)$ contains a subgroup isomorphic to the semidirect product $\mathbb{Z} / 3\mathbb{Z} \rtimes_2 \mathbb{Z} / 4 \mathbb{Z}$ by Theorem \ref{thm:semidirect and dicyclic m=2}. \item[(ii)] Suppose $F = \mathbb{Q}(i)$, $K = F(\sqrt{-1/12})$ and $\sigma: K \rightarrow K$ the $F$-automorphism sending $\sqrt{-1/12}$ to $-\sqrt{-1/12}$. Let $$A = (K/F,\sigma,\lambda \sqrt{-1/12})$$ for some $\lambda \in F^{\times}$, $c = 1 + 2 \sqrt{-1/12} \in K$ and notice $i \sigma(i) = i^2 = -1$. Then $$c^2 = \frac{2}{3} + \frac{2i}{\sqrt{3}}, \ \ c^3 = \frac{8i}{3\sqrt{3}}, \ \ c^4 = \frac{-8}{9} + \frac{8i}{3 \sqrt{3}},$$ $$c^5 = \frac{-16}{9} + \frac{16i}{9\sqrt{3}} \ \text{ and } \ c^6 = \frac{-64}{27}.$$ Hence $c, c^2, c^3, c^4, c^5 \in K \setminus F$ and $c^6 \in F$. Therefore $\mathrm{Aut}_F(A)$ contains the dicyclic group of order $12$ by Theorem \ref{thm:semidirect and dicyclic m=2}. \item[(iii)] Suppose $F = \mathbb{Q}(i)(\sqrt{5})$, $K = F \big( \sqrt{2\sqrt{5}-5} \big)$ and $\sigma: K \rightarrow K$ is the $F$-automorphism sending $\sqrt{2\sqrt{5}-5}$ to $-\sqrt{2\sqrt{5}-5}$. Let $$A = (K/F,\sigma,\lambda \sqrt{2\sqrt{5}-5})$$ for some $\lambda \in F^{\times}$, $c = 1 + \sqrt{2\sqrt{5}-5} \in K$ and notice $i \sigma(i) = i^2 = -1$. 
Then \begin{align*} c^2 &= -4 + 2 \sqrt{5} + 2\sqrt{2\sqrt{5}-5} \in K \setminus F, \\ c^3 &= -14 + 6 \sqrt{5} - 2 \sqrt{2\sqrt{5}-5} + 2 \sqrt{5} \sqrt{2\sqrt{5}-5} \in K \setminus F, \\ c^4 &= 16 - 8 \sqrt{5} - 16 \sqrt{2\sqrt{5}-5} + 8 \sqrt{5} \sqrt{2\sqrt{5}-5} \in K \setminus F, \\ c^5 &= 176 - 80 \sqrt{5} \in F, \end{align*} so we conclude $\mathrm{Aut}_F(A)$ contains a subgroup isomorphic to the semidirect product $\mathbb{Z} / 5\mathbb{Z} \rtimes_4 \mathbb{Z} / 4 \mathbb{Z}$ by Theorem \ref{thm:semidirect and dicyclic m=2}. \end{itemize} \end{examples} \section{Automorphisms of Nonassociative Cyclic Algebras over Finite Fields} \label{section:Automorphisms of Nonassociative Cyclic Algebras over Finite Fields} In \cite[p.~88-92]{AndrewPhD}, the automorphisms of nonassociative cyclic algebras over finite fields were briefly investigated. In this Section we continue this investigation. In particular, we completely determine the automorphism group of nonassociative cyclic algebras of prime degree $p$ where $\mathrm{Char}(F) \neq p$; this was done only for $p = 2$ in \cite{AndrewPhD}. Now let $F = \mathbb{F}_q$ be a finite field of order $q$ and $K = \mathbb{F}_{q^m}$ for some $m \geq 2$. Then $K/F$ is a cyclic Galois extension of degree $m$, say $\mathrm{Gal}(K/F) = \langle \sigma \rangle$. Suppose $A = (K/F,\sigma,a)$ is a nonassociative cyclic algebra for some $a \in K \setminus F$. Let $\alpha$ be a primitive element of $K$, i.e. $K^{\times} = \langle \alpha \rangle$. We recall the well-known fact that since $K$ and $F$ are finite fields, the field norm $N_{K / F} : K^{\times} \rightarrow F^{\times}$ is surjective. Therefore by Corollary \ref{cor:t^m-a automorphism field result}, the problem of determining for which $j \in \{ 0, \ldots, m-1 \}$ there exists $H_{\sigma^j,k} \in \mathrm{Aut}_F(A)$ for some $k \in K^{\times}$ reduces to determining for which $j \in \{0 , \ldots, m-1 \}$ we have $\sigma^j(a) a^{-1} \in F^{\times}$.
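As a sanity check of this reduction, consider the smallest case $q = 4$, $m = 2$, so $K = \mathbb{F}_{16}$, $\sigma(x) = x^4$ and $a \in \mathbb{F}_{16} \setminus \mathbb{F}_4$; the computation below is an illustrative sketch.

```latex
% For j = 1 the condition \sigma(a)a^{-1} \in F^{\times} reads a^3 \in \mathbb{F}_4^{\times}:
\sigma(a)a^{-1} = a^{q-1} = a^3, \qquad
a^3 \in \mathbb{F}_4^{\times} \iff (a^3)^3 = 1 \iff a^{\mathrm{gcd}(9,15)} = a^3 = 1
\iff a \in \mathbb{F}_4,
% which is excluded, so no H_{\sigma,k} is an automorphism in this case.
```

Hence every $F$-automorphism here has the form $H_{\mathrm{id},k}$; this is consistent with Theorem \ref{thm:t^m-a_automorphism_field finite}(iii) below, since $\mathrm{gcd}(m,q-1) = \mathrm{gcd}(2,3) = 1$ and $a$ lies in no proper subfield of $\mathbb{F}_{16}$, and $\mathrm{Aut}_F(A)$ is cyclic of order $(16-1)/(4-1) = 5$.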
Let $s = (q^m-1)/(q-1)$ and notice $s = \sum_{i=0}^{m-1}q^i$. \begin{lemma} \label{lem:number_of_solutions_to_norm_equation} (\cite[Proposition 6.3.1]{AndrewPhD}). For every $b \in F^{\times}$ there are precisely $s$ elements $k \in K^{\times}$ such that $N_{K/F}(k) = b$. Furthermore, $\mathrm{Ker}(N_{K/F}) \cong \mathbb{Z} / s \mathbb{Z}$ is a cyclic subgroup of the multiplicative group $K^{\times}$. \end{lemma} If $\mathrm{gcd}(m,q-1) = 1$ then $F$ does not contain a non-trivial $m^{\text{th}}$ root of unity by \cite[p.~42]{koblitz1994course}. Therefore Theorems \ref{thm:t^m-a_automorphism_field} and \ref{thm:Aut(S_f) F does not contain primitive root of unity} become: \begin{theorem} \label{thm:t^m-a_automorphism_field finite} \begin{itemize} \item[(i)] $\langle G_{\alpha} \rangle$ is a cyclic normal subgroup of $\mathrm{Aut}_F(A)$ of order $s$. Moreover every automorphism of $A$ of the form $H_{\mathrm{id}, k}$ for some $k \in K^{\times}$, is contained in $\langle G_{\alpha} \rangle$. \item[(ii)] All automorphisms contained in $\langle G_{\alpha} \rangle$ are inner, and if $m = 2$ these are all the inner automorphisms of $A$. \item[(iii)] If $\mathrm{gcd}(m,q-1) = 1$ and $a$ is not contained in any proper subfield of $K$, then $\mathrm{Aut}_F(A) = \langle G_{\alpha} \rangle$ is a cyclic group of order $s$ and all automorphisms of $A$ are inner. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] If $K^{\times} = \langle \alpha \rangle$ then $F^{\times} = \langle\alpha^s \rangle$ by \cite[p.~303]{nicholson2012introduction}. This means $\alpha^s \in F^{\times}$ but $\alpha^j \notin F$ for all $j \in \{ 1, \ldots, s-1 \}$ which implies $\langle G_{\alpha} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_F(A)$ of order $s$ by Theorem \ref{thm:t^m-a_automorphism_field}. 
Since $\mathrm{Ker}(N_{K/F}) \cong \mathbb{Z} / s \mathbb{Z}$ by Lemma \ref{lem:number_of_solutions_to_norm_equation}, every automorphism $H_{\mathrm{id}, k}$ is contained in $\langle G_{\alpha} \rangle$ by Theorem \ref{thm:t^m-a_automorphism_field}. \item[(ii)] follows by (i) and Theorem \ref{thm:t^m-a_automorphism_field} and (iii) follows by (i) and Theorem \ref{thm:Aut(S_f) F does not contain primitive root of unity}. \end{itemize} \end{proof} \begin{lemma} \label{lem:n divides s} If $m \vert (q-1)$, then $m \vert s$. \end{lemma} \begin{proof} We first prove \begin{equation} \label{eqn:Lemma:n divides s I} (q-1) \vert \Big( \big( \sum_{i=0}^{m-1} q^i \big) - m \Big) \end{equation} for all $m \geq 2$ by induction: Clearly \eqref{eqn:Lemma:n divides s I} holds for $m = 2$. Suppose \eqref{eqn:Lemma:n divides s I} holds for some $m \geq 2$; then \begin{equation} \label{eqn:Lemma:n divides s II} \big( \sum_{i=0}^{m} q^i \big) - (m+1) = \big( \sum_{i=0}^{m-1} q^i \big) - m + q^m - 1 = \big( \sum_{i=0}^{m-1} q^i \big) - m + \big( \sum_{i=0}^{m-1} q^i \big) (q-1). \end{equation} Now, $(q-1) \vert \Big( \big( \sum_{i=0}^{m-1} q^i \big) - m \Big)$ by the induction hypothesis, thus $(q-1)$ divides \eqref{eqn:Lemma:n divides s II} which implies \eqref{eqn:Lemma:n divides s I} holds by induction. In particular, since $m \vert (q-1)$, \eqref{eqn:Lemma:n divides s I} yields $$m \vert \Big( \big( \sum_{i=0}^{m-1} q^i \big) - m \Big),$$ therefore $m$ divides $\big( \sum_{i=0}^{m-1} q^i \big) - m + m = s$ as required. \end{proof} \begin{lemma} \label{lem:n^2 does not divide s} Suppose $m \vert (q-1)$. \begin{itemize} \item[(i)] If $m$ is odd then $m^2 \nmid (ls)$ for all $l \in \{ 1, \ldots, m-1 \}$. \item[(ii)] If $(q-1)/m$ is even then $m^2 \nmid (ls)$ for all $l \in \{ 1, \ldots, m-1 \}$.
\end{itemize} \end{lemma} \begin{proof} Write $q = 1 + rm$ for some $r \in \mathbb{N}$, then \begin{align*} q^j &= (1+rm)^j = \sum_{i=0}^j \binom{j}{i} (rm)^i \equiv \sum_{i=0}^1 \binom{j}{i} (rm)^i \ \mathrm{ mod} \ (m^2) \\ & \equiv (1 + jrm) \ \mathrm{ mod} \ (m^2) \end{align*} for all $j \geq 1$. Therefore \begin{align*} ls &= l \sum_{j=0}^{m-1} q^j \equiv l \big( 1 + \sum_{j=1}^{m-1}(1+jrm) \big) \ \mathrm{ mod} \ (m^2) \\ & \equiv \Big( lm + lrm \frac{(m-1)m}{2} \Big) \ \mathrm{ mod} \ (m^2), \end{align*} for all $l \in \{ 1, \ldots, m-1 \}$. If $m$ is odd or $r = (q-1)/m$ is even, then $$\frac{lr(m-1)}{2} \in \mathbb{Z}$$ and so $$lrm \frac{(m-1)m}{2} \ \mathrm{ mod} \ (m^2) \equiv 0$$ for all $l \in \{ 1, \ldots, m-1 \}$. This means $$ls \equiv lm \ \mathrm{ mod} \ (m^2) \not\equiv 0 \ \mathrm{ mod} \ (m^2),$$ that is, $m^2 \nmid (ls)$ for all $l \in \{ 1, \ldots, m-1 \}$. \end{proof} \begin{theorem} \label{thm:Automorphisms of nonassociative cyclic algebras over finite fields} Suppose $\mathrm{gcd}(\mathrm{Char}(F),m) = 1$ and $m \vert (q-1)$. Then we can write $K = F(d)$ where $d$ is a root of the irreducible polynomial $x^m - e \in F[x]$ as in Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms}. Let $A = (K/F,\sigma,\lambda d^i)$ for some $i \in \{ 1 , \ldots, m-1 \}$, $\lambda \in F^{\times}$. If $m$ is odd or $(q-1)/m$ is even, then $\mathrm{Aut}_F(A)$ is a group of order $ms$ and contains a subgroup isomorphic to the semidirect product of cyclic groups \begin{equation} \label{eqn:Automorphisms of nonassociative cyclic algebras over finite fields semidirect not nec prime} \mathbb{Z} / \Big( \frac{s}{m} \Big) \mathbb{Z} \rtimes_{q} \mathbb{Z} / (m \mu) \mathbb{Z}, \end{equation} where $\mu = m/\mathrm{gcd}(i,m)$. 
Moreover if $i$ and $m$ are coprime, then \begin{equation} \label{eqn:Automorphisms of nonassociative cyclic algebras over finite fields semidirect not nec prime 2} \mathrm{Aut}_F(A) \cong \mathbb{Z} / \Big( \frac{s}{m} \Big) \mathbb{Z} \rtimes_{q} \mathbb{Z} / (m^2) \mathbb{Z}. \end{equation} \end{theorem} \begin{proof} Let $\tau: K \rightarrow K, \ k \mapsto k^q$, then $\tau^j(\lambda d^i) = \omega^{ij} \lambda d^i$, for all $j \in \{ 0, \ldots, m-1 \}$ where $\omega \in F^{\times}$ is a primitive $m^{\text{th}}$ root of unity by Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms}. As $\tau$ generates $\mathrm{Gal}(K/F)$, the automorphisms of $A$ are precisely the maps $H_{\tau^j,k}$, where $j \in \{ 0, \ldots, m-1 \}$ and $k \in K^{\times}$ are such that $\tau^j(\lambda d^i) = N_{K/F}(k) \lambda d^i$ by Corollary \ref{cor:t^m-a automorphism field result}. Moreover there are exactly $s$ elements $k \in K^{\times}$ with $N_{K/F}(k) = \omega^{ij}$ by Lemma \ref{lem:number_of_solutions_to_norm_equation}, and each of these elements corresponds to a unique automorphism of $A$. Therefore $\mathrm{Aut}_F(A)$ is a group of order $ms$. Choose $k \in K^{\times}$ such that $N_{K/F}(k) = \omega^i$ so that $H_{\tau,k} \in \mathrm{Aut}_F(A)$. As $\tau$ has order $m$, $H_{\tau,k} \circ \ldots \circ H_{\tau,k}$ ($m$-times) becomes $H_{\mathrm{id},b}$ where $b = \omega^i = N_{K/F}(k)$. Notice $\omega^i$ is a primitive $\mu^{\text{th}}$ root of unity where $\mu = m/\mathrm{gcd}(i,m)$, therefore $H_{\mathrm{id},b}$ has order $\mu$, and thus the subgroup of $\mathrm{Aut}_F(A)$ generated by $H_{\tau,k}$ has order $m \mu$. $\langle G_{\alpha} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_F(A)$ of order $s$ by Theorem \ref{thm:t^m-a_automorphism_field finite} where $\alpha$ is a primitive element of $K$. Furthermore, $m \vert s$ by Lemma \ref{lem:n divides s} and so $\langle G_{\alpha^m} \rangle$ is a cyclic subgroup of $\mathrm{Aut}_F(A)$ of order $s/m$. 
We will prove $\mathrm{Aut}_F(A)$ contains the semidirect product $\langle G_{\alpha^m} \rangle \rtimes_q \langle H_{\tau,k} \rangle$, by showing it can be written in the presentation \eqref{eqn:semidirect product of cyclic algebras}: The inverse of $H_{\tau,k}$ in $\mathrm{Aut}_F(A)$ is $H_{\tau^{-1},\tau^{-1}(k^{-1})}$ and so \begin{align*} H_{\tau,k} \Big( & G_{\alpha^m} \Big( H_{\tau,k}^{-1} \Big( \sum_{j=0}^{m-1} x_j t^j \Big) \Big) \Big) \\ &= H_{\tau,k} \Big( G_{\alpha^m} \Big( \tau^{-1}(x_0) + \sum_{j=1}^{m-1} \tau^{-1}(x_j) \big( \prod_{l=0}^{j-1} \sigma^{l}(\tau^{-1}(k^{-1})) \big) t^j \Big) \Big) \\ &= H_{\tau,k} \Big( \tau^{-1}(x_0) + \sum_{j=1}^{m-1} \tau^{-1}(x_j) \big( \prod_{l=0}^{j-1} \sigma^{l}(\tau^{-1}(k^{-1})) \big) \alpha^{-m} \sigma^j(\alpha^m) t^j \Big) \\ &= x_0 + \sum_{j=1}^{m-1} x_j \tau(\alpha^{-m}) \tau(\sigma^{j}(\alpha^m)) t^j = x_0 + \sum_{j=1}^{m-1} x_j \tau(\alpha^{-m}) \sigma^{j}(\tau(\alpha^m)) t^j \\ &= G_{\tau(\alpha^m)} \Big( \sum_{j=0}^{m-1} x_j t^j \Big) = G_{\alpha^{mq}} \Big( \sum_{j=0}^{m-1} x_j t^j \Big), \end{align*} that is $H_{\tau,k} \circ G_{\alpha^m} \circ H_{\tau,k}^{-1} = G_{\alpha^{mq}} = (G_{\alpha^m})^q.$ Notice $q^m = qs -s + 1$, i.e. $q^m \equiv 1 \text{ mod } (s)$, and so $q^{m \mu} \equiv 1 \text{ mod } (s)$. Then $m \vert s$ by Lemma \ref{lem:n divides s}, hence $q^{m \mu} \equiv 1 \text{ mod } (s/m)$. In order to prove $\mathrm{Aut}_F(A)$ contains $\langle G_{\alpha^m} \rangle \rtimes_q \langle H_{\tau,k} \rangle$, we are left to show that $\langle H_{\tau,k} \rangle \cap \langle G_{\alpha^m} \rangle = \{ \mathrm{id} \}$. Suppose, for a contradiction, that $\langle H_{\tau,k} \rangle \cap \langle G_{\alpha^m} \rangle \neq \{ \mathrm{id} \}$. Then $H_{\mathrm{id},\omega^l} \in \langle G_{\alpha^m} \rangle$ for some $l \in \{ 1, \ldots, m-1 \}$. 
Therefore $\langle G_{\alpha^m} \rangle$ contains a subgroup of order $m/\mathrm{gcd}(l,m)$ generated by $H_{\mathrm{id}, \omega^l}$ and so $(m/\mathrm{gcd}(l,m)) \vert (s / m)$. This means $m^2 \vert (s \mathrm{gcd}(l,m))$, a contradiction by Lemma \ref{lem:n^2 does not divide s}. Therefore $\mathrm{Aut}_F(A)$ contains the subgroup $$\langle G_{\alpha^m} \rangle \rtimes_q \langle H_{\tau,k} \rangle \cong \mathbb{Z} / \Big( \frac{s}{m} \Big) \mathbb{Z} \rtimes_{q} \mathbb{Z} / (m \mu) \mathbb{Z}.$$ If $\mathrm{gcd}(i,m) = 1$ this subgroup has order $ms$ and since $|\mathrm{Aut}_F(A)| = ms$, this is all of $\mathrm{Aut}_F(A)$. \end{proof} We can completely determine the automorphism group of nonassociative cyclic algebras of prime degree $m$ different from $\mathrm{Char}(F)$. If $F$ does not contain a primitive $m^{\text{th}}$ root of unity, i.e. if $m \nmid (q-1)$, then $\mathrm{Aut}_F(A) = \langle G_{\alpha} \rangle \cong \mathbb{Z}/ s \mathbb{Z}$ by Theorem \ref{thm:t^m-a_automorphism_field finite}(iii), and all automorphisms of $A$ are inner. Otherwise we have: \begin{theorem} \label{thm:Automorphisms of nonassociative cyclic algebras over finite fields prime} Suppose $m$ is prime and $m \vert (q-1)$. Then we can write $K = F(d)$ where $d$ is a root of the irreducible polynomial $x^m - e \in F[x]$ as in Lemma \ref{lem:eigenvalues and eigenvectors of cyclic extension automorphisms}. Let $A = (K/F,\sigma,a)$ for some $a \in K \setminus F$. \begin{itemize} \item[(i)] If $a \neq \lambda d^i$ for any $i \in \{ 0, \ldots, m-1 \}$, $\lambda\in F^\times$, then $\mathrm{Aut}_F(A) = \langle G_{\alpha} \rangle \cong \mathbb{Z}/ s \mathbb{Z}$ and all automorphisms of $A$ are inner. \item[(ii)] (\cite[Theorem 6.3.5]{AndrewPhD}). If $m = 2$ and $a = \lambda d$ for some $\lambda \in F^{\times}$, then $\mathrm{Aut}_F(A)$ is the dicyclic group of order $2q + 2$. 
\item[(iii)] If $m > 2$ and $a = \lambda d^i$ for some $i \in \{ 1 , \ldots, m-1 \}$, $\lambda \in F^{\times}$, then $$\mathrm{Aut}_F(A) \cong \mathbb{Z} / \Big( \frac{s}{m} \Big) \mathbb{Z} \rtimes_{q} \mathbb{Z} / (m^2) \mathbb{Z}.$$ \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] This is \cite[Corollary 6.3.3]{AndrewPhD} together with Theorem \ref{thm:t^m-a_automorphism_field finite}(i). \item[(iii)] follows immediately from Theorem \ref{thm:Automorphisms of nonassociative cyclic algebras over finite fields}. \end{itemize} \end{proof} \chapter{Generalisation of the \texorpdfstring{$S_f$}{S\_f} Construction} \label{chapter:Generalisation of the S_f Construction} Until now, we have studied the construction $S_f = D[t;\sigma,\delta]/D[t;\sigma,\delta]f$ in the case where $D$ is an associative division ring. In this Chapter we generalise this construction using the skew polynomial ring $S[t;\sigma,\delta]$, where $S$ is any associative unital ring, $\sigma$ is an injective endomorphism of $S$ and $\delta$ is a $\sigma$-derivation of $S$. While $S[t;\sigma,\delta]$ is in general neither left nor right Euclidean (unless $S$ is a division ring), we are still able to right divide by polynomials $f(t) \in S[t;\sigma,\delta]$ whose leading coefficient is a unit. Moreover, if $\sigma$ is an automorphism we are also able to left divide by such $f(t)$. Therefore, when $f(t)$ has an invertible leading coefficient, it is possible to generalise the construction of the algebras $S_f$ to this setting. \vspace{5mm} In the following, let $S$ be a unital associative ring, $\sigma$ be an injective endomorphism of $S$ and $\delta$ be a left $\sigma$-derivation of $S$. An element $0 \neq b \in S$ is called \textbf{right-invertible} if there exists $b_r \in S$ such that $b b_r = 1$, and \textbf{left-invertible} if there exists $b_l \in S$ such that $b_l b = 1$. 
An element $b$ which is both left and right invertible is called \textbf{invertible} (or a \textbf{unit}), and $b_l = b_r$ is called the \textbf{inverse} of $b$ and denoted $b^{-1}$ \cite[p.~4]{lam2013first}. We say a non-zero ring is a \textbf{domain} if it has no non-trivial zero divisors. A commutative domain is called an \textbf{integral domain}. Recall from \eqref{eqn:mult in S_f 3} that $S_{n,j}$ denotes the sum of all monomials in $\sigma$ and $\delta$ that are of degree $j$ in $\sigma$ and degree $n - j$ in $\delta$. The equality \begin{equation} (bt^n)(ct^m) = \sum_{j=0}^n b(S_{n,j}(c))t^{j+m}, \tag{\ref{eqn:mult in S_f 2}} \end{equation} for all $b,c \in S$, holds more generally for $S$ any unital associative ring by \cite[p.~4]{leroy2013sigma}. The degree function satisfies \begin{equation} \label{eqn:generalised degree function} \mathrm{deg} \big( g(t)h(t) \big) \leq \mathrm{deg}(g(t)) + \mathrm{deg}(h(t)) \end{equation} for all $g(t), h(t) \in S[t;\sigma,\delta]$. In general \eqref{eqn:generalised degree function} is not an equality unless $S$ is a domain, or $g(t)$ has an invertible leading coefficient, or $h(t)$ has an invertible leading coefficient: Indeed, if $S$ is a domain this is \cite[p.~12]{gomez2014basic}, and the equality in \eqref{eqn:generalised degree function} implies $S[t;\sigma,\delta]$ is also a domain. Suppose now $S$ is not necessarily a domain, $g(t)$ has degree $l$ and leading coefficient $g_l$ and $h(t)$ has degree $n$ and leading coefficient $h_n$. If $h_n$ is invertible then $\sigma^l(h_n)$ is also invertible and $$g(t)h(t) = g_{l} \sigma^{l}(h_{n}) t^{l + n} + \text{ lower degree terms}.$$ Here $g_{l} \sigma^{l}(h_{n}) \neq 0$ since invertible elements are not zero divisors. Similarly, if $g_l$ is invertible then $g_{l} \sigma^{l}(h_{n}) \neq 0$. In either case \eqref{eqn:generalised degree function} is an equality. 
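A minimal example (ours, not from the source) showing the inequality in \eqref{eqn:generalised degree function} can be strict when $S$ has zero divisors and neither leading coefficient is invertible: take $S = \mathbb{Z}/4\mathbb{Z}$ with $\sigma = \mathrm{id}$ and $\delta = 0$, so $S[t;\sigma,\delta]$ is the ordinary polynomial ring $S[t]$.

```latex
% The degree drops: both leading coefficients are the zero divisor 2.
g(t) = h(t) = 2t \in (\mathbb{Z}/4\mathbb{Z})[t], \qquad g(t)h(t) = 4t^2 = 0.
```

Hence $\mathrm{deg}(g(t)h(t)) < 2 = \mathrm{deg}(g(t)) + \mathrm{deg}(h(t))$; the product of two non-zero polynomials can even vanish.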
\vspace{4mm} $S[t;\sigma,\delta]$ is in general neither left nor right Euclidean, nor a left or right principal ideal domain. Nevertheless, we can still perform right division by a polynomial whose leading coefficient is a unit. Additionally, when $\sigma$ is an automorphism we can also left divide by such a polynomial. When $\delta = 0$ and $\sigma \in \mathrm{Aut}(S)$ this was proven for special cases of $S$, for instance in \cite[p.~4]{ducoat2015skew}, \cite[p.~4]{jitman2010skew} and \cite[p.~391]{mcdonald1974finite}: \begin{theorem} \label{thm:generalised S_f euclidean division} Let $f(t) = a_m t^m - \sum_{i=0}^{m-1} a_i t^i \in R = S[t;\sigma,\delta]$ and suppose $a_m$ is invertible. \begin{itemize} \item[(i)] For all $g(t) \in R$, there exist uniquely determined $r(t), q(t) \in R$ with $\mathrm{deg}(r(t)) < m$, such that $g(t) = q(t) f(t) + r(t)$. \item[(ii)] Suppose additionally $\sigma$ is an automorphism of $S$. Then for all $g(t) \in R$, there exist uniquely determined $r(t), q(t) \in R$ with $\mathrm{deg}(r(t)) < m$, such that $g(t) = f(t) q(t) + r(t)$. \end{itemize} \end{theorem} \begin{proof} Let $g(t) = \sum_{i=0}^{l} g_i t^i \in R$ have degree $l$. \begin{itemize} \item[(i)] If $l < m$, then $g(t) = 0 \cdot f(t) + g(t)$. Moreover, we have $$\mathrm{deg}(q(t)f(t)) = \mathrm{deg}(q(t)) + \mathrm{deg}(f(t)) \geq m,$$ for all $0 \neq q(t) \in R$ as $f(t)$ has an invertible leading coefficient. Therefore if $g(t) = q(t)f(t) + r(t)$ then $q(t) = 0$ and so $r(t) = g(t)$ as required.
Now suppose $l \geq m$, then \begin{align*} g(t) - g_l \sigma^{l-m}(&a_m^{-1}) t^{l-m}f(t) = g(t) - g_l \sigma^{l-m}(a_m^{-1}) t^{l-m} \Big( a_m t^m - \sum_{i=0}^{m-1} a_i t^i \Big) \\ &= g(t) - g_l \sigma^{l-m}(a_m^{-1}) t^{l-m} a_m t^m + \sum_{i=0}^{m-1} g_l \sigma^{l-m}(a_m^{-1}) t^{l-m} a_i t^i \\ &= g(t) - g_l \sigma^{l-m}(a_m^{-1}) \Big( \sum_{j=0}^{l-m} S_{l-m,j} (a_m) t^j \Big) t^m \\ & \qquad \qquad + \sum_{i=0}^{m-1} g_l \sigma^{l-m}(a_m^{-1}) \Big( \sum_{j=0}^{l-m} S_{l-m,j}(a_i) t^j \Big) t^i \\ &= g(t) - g_l \sigma^{l-m}(a_m^{-1}) S_{l-m,l-m} (a_m) t^l \\ & \qquad \qquad - g_l \sigma^{l-m}(a_m^{-1}) \sum_{0 \leq j \leq l-m-1} S_{l-m,j} (a_m) t^{j+m} \\ & \qquad \qquad + \sum_{i=0}^{m-1} \sum_{j=0}^{l-m} g_l \sigma^{l-m}(a_m^{-1}) S_{l-m,j}(a_i)t^{i+j} \\ &= g(t) - g_l \sigma^{l-m}(a_m^{-1}) \sigma^{l-m} (a_m) t^l \\ & \qquad \qquad - \sum_{0 \leq j \leq l-m-1} g_l \sigma^{l-m}(a_m^{-1}) S_{l-m,j} (a_m) t^{j+m} \\ & \qquad \qquad + \sum_{i=0}^{m-1} \sum_{j=0}^{l-m} g_l \sigma^{l-m}(a_m^{-1}) S_{l-m,j}(a_i)t^{i+j} \\ &= g(t) - g_l t^l - \sum_{0 \leq j \leq l-m-1} g_l \sigma^{l-m}(a_m^{-1}) S_{l-m,j} (a_m) t^{j+m} \\ & \qquad \qquad + \sum_{i=0}^{m-1} \sum_{j=0}^{l-m} g_l \sigma^{l-m}(a_m^{-1}) S_{l-m,j}(a_i)t^{i+j}, \end{align*} where we have used $S_{l-m,l-m}(a_m) = \sigma^{l-m}(a_m)$. Hence the polynomial $g(t) - g_l \sigma^{l-m}(a_m^{-1}) t^{l-m}f(t)$ has degree $< l$ and by iteration of this argument, we find $q(t), r(t) \in R$ with $\mathrm{deg}(r(t)) < m$, such that $g(t) = q(t)f(t) + r(t).$ We now prove uniqueness of $r(t)$ and $q(t)$. Suppose $$g(t) = q_1(t) f(t) + r_1(t) = q_2(t) f(t) + r_2(t),$$ and so $\big( q_1(t) - q_2(t) \big) f(t) = r_2(t) - r_1(t).$ If $q_1(t) - q_2(t) \neq 0$, then the left hand side of the equation has degree $\geq m$ since $f(t)$ has an invertible leading coefficient, while the right hand side has degree $< m$. Thus $q_1(t) = q_2(t)$ and so $r_1(t) = r_2(t)$. 
\item[(ii)] If we assume $\sigma$ is an automorphism, then (ii) is proven similarly to (i) by showing $g(t) - f(t) \sigma^{-m}(a_m^{-1}) \sigma^{-m}(g_l) t^{l-m}$ has degree $<l$ and iterating this argument. Uniqueness is proven analogously to (i). \end{itemize} \end{proof} Let $\mathrm{mod}_r f$ denote the remainder after right division by $f(t)$. Since remainders are uniquely determined by Theorem \ref{thm:generalised S_f euclidean division}(i), the skew polynomials of degree $< m$ canonically represent the elements of the left $S[t;\sigma,\delta]$-module $S[t;\sigma,\delta]/S[t;\sigma,\delta]f$. Similarly, when $\sigma$ is an automorphism, the skew polynomials of degree $< m$ canonically represent the elements of the right $S[t;\sigma,\delta]$-module $S[t;\sigma,\delta]/fS[t;\sigma,\delta]$. \begin{definition} Let $f(t) \in R = S[t;\sigma,\delta]$ be of degree $m$ with invertible leading coefficient $a_m \in S$ and $R_m = \{ g(t) \in R \ \vert \ \mathrm{deg}(g(t)) < m \}.$ \begin{itemize} \item[(i)] Then $R_m$ together with the multiplication $g \circ h = g h \ \mathrm{mod}_r f$ becomes a unital nonassociative ring $S_f = (R_m, \circ)$, also denoted $R/Rf$. \item[(ii)] Suppose additionally $\sigma$ is an automorphism and let $\mathrm{mod}_l f$ denote the remainder after left division by $f(t)$. Then $R_m$ together with the multiplication $g \prescript{}{f}\circ \ h = g h \ \mathrm{mod}_l f$ becomes a unital nonassociative ring $_f S = (R_m, \prescript{}{f}\circ)$, also denoted $R/fR$. \end{itemize} \end{definition} $S_f$ and $_f S$ are unital nonassociative algebras over $$S_0 = \{ c \in S \ \vert \ c \circ h = h \circ c \text{ for all } h \in S_f \} = \mathrm{Comm}(S_f) \cap S,$$ which is a commutative subring of $S$. If $S$ is a division ring then this construction is precisely Petit's algebra construction given in Chapter \ref{chapter:Preliminaries}. \begin{remarks} \begin{itemize} \item[(i)] Let $g(t), h(t) \in R_m$ be such that $\mathrm{deg}(g(t)h(t)) < m$.
Then the multiplication $g \circ h$ is the same as that in $R$. \item[(ii)] $S_f$ is associative if and only if $f(t)$ is right invariant \cite[Theorem 4(ii)]{pumplun2015finite}. \item[(iii)] We have $qf + r = (qd^{-1})(df) + r$, for all $q,r \in R$, and all invertible $d \in S$. It follows that $S_f = S_{df}$ for all invertible $d \in S$. \item[(iv)] If $\mathrm{deg}(f(t)) = m = 1$ then $R_m = S$ and $S_f \cong S$. \end{itemize} \end{remarks} Henceforth suppose $\mathrm{deg}(f(t)) = m \geq 2$. As before, we may assume w.l.o.g. that $f(t)$ is monic. Similarly to Proposition \ref{prop:_fS opposite algebra of some S_g}, it suffices to consider only the algebras $S_f$, since we still have the following anti-isomorphism: \begin{proposition} (\cite[Proposition 3]{pumplun2015finite}). Let $f(t) \in S[t;\sigma,\delta]$ where $\sigma \in \mathrm{Aut}(S)$ and $f(t)$ has invertible leading coefficient. The canonical anti-isomorphism $$\psi: S[t;\sigma,\delta] \rightarrow S^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}], \ \ \sum_{i=0}^{n} b_i t^i \mapsto \sum_{i=0}^{n} \Big( \sum_{j=0}^{i} S_{n,j}(b_i) \Big) t^i,$$ between the skew polynomial rings $S[t;\sigma,\delta]$ and $S^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}]$, induces an anti-isomorphism between the rings $S_f = S[t;\sigma,\delta] / S[t;\sigma,\delta]f$ and $$_{\psi(f)}S = S^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}] / \psi(f)S^{\mathrm{op}}[t;\sigma^{-1},-\delta \sigma^{-1}].$$ \end{proposition} We now take a closer look at $S_f$ in the special case where $\delta = 0$. \begin{proposition} \label{prop:f(t) two-sided delta=0 generalised} Suppose $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in R = S[t;\sigma]$. If $\sigma^m(z) a_i = a_i \sigma^i(z)$ and $\sigma(a_i) = a_i$ for all $z \in S$, $i \in \{ 0, \ldots, m-1 \}$, then $f(t)$ is right invariant and $S_f$ is associative.
\end{proposition} \begin{proof} We have \begin{align*} f(t) z t^j &= t^m z t^j - \sum_{i=0}^{m-1} a_i t^i z t^j = \sigma^m(z) t^{m+j} - \sum_{i=0}^{m-1} a_i \sigma^i(z) t^{i+j} \\ &= \sigma^m(z) t^j t^m - \sum_{i=0}^{m-1} \sigma^m(z) a_i t^j t^i = \sigma^m(z) t^j t^m - \sum_{i=0}^{m-1} \sigma^m(z) t^j a_i t^i \\ &= \sigma^m(z) t^j f(t) \in Rf, \end{align*} for all $z \in S$, $j \in \{ 0, \ldots, m-1 \}$ since $t^ja_i = \sigma^j(a_i)t^j = a_i t^j$ and $\sigma^m(z) a_i = a_i \sigma^i(z)$. Therefore by distributivity $fR \subseteq Rf$. This implies $bfc \in Rf$ for all $b,c \in R$, thus $Rf$ is a two-sided ideal and so $S_f$ is associative by \cite[Theorem 4(ii)]{pumplun2015finite}. \end{proof} \begin{proposition} \label{prop:Comm(S_f) delta=0 generalised} Suppose $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in S[t;\sigma]$. \begin{itemize} \item[(i)] (\cite[Theorem 8]{pumplun2015finite}). $\mathrm{Comm}(S_f)$ contains the set \begin{equation} \label{eqn:Comm(S_f) generalised} \Big\{ \sum_{i=0}^{m-1} c_i t^i \ \vert \ c_i \in \mathrm{Fix}(\sigma) \text{ and } b c_i = c_i \sigma^i(b) \text{ for all } b \in S \Big\}. \end{equation} \item[(ii)] If $S$ is a domain and $a_0 \neq 0$ the two sets are equal. \item[(iii)] If $a_0$ is invertible the two sets are equal. \end{itemize} \end{proposition} \begin{proof} \begin{itemize} \item[(ii)] Let $c = \sum_{i=0}^{m-1} c_i t^i \in \mathrm{Comm}(S_f)$, then in particular \begin{equation} \label{eqn:Comm(S_f) delta=0 generalised 1} t \circ c = t \circ \Big( \sum_{i=0}^{m-1} c_i t^i \Big) = \sum_{i=0}^{m-2} \sigma(c_i) t^{i+1} + \sigma(c_{m-1}) \sum_{i=0}^{m-1} a_i t^i, \end{equation} and \begin{equation} \label{eqn:Comm(S_f) delta=0 generalised 2} c \circ t = \Big( \sum_{i=0}^{m-1} c_i t^i \Big) \circ t = \sum_{i=0}^{m-2} c_i t^{i+1} + c_{m-1} \sum_{i=0}^{m-1} a_i t^i, \end{equation} must be equal. 
Comparing the $t^0$ coefficient in \eqref{eqn:Comm(S_f) delta=0 generalised 1} and \eqref{eqn:Comm(S_f) delta=0 generalised 2} yields \begin{equation} \label{eqn:Comm(S_f) delta=0 generalised 3} (\sigma(c_{m-1})-c_{m-1}) a_0 = 0, \end{equation} which implies $c_{m-1} \in \mathrm{Fix}(\sigma)$ because $S$ is a domain and $a_0 \neq 0$. Comparing the $t^j$ coefficients in \eqref{eqn:Comm(S_f) delta=0 generalised 1} and \eqref{eqn:Comm(S_f) delta=0 generalised 2} gives \begin{equation*} \sigma(c_{j-1}) + \sigma(c_{m-1}) a_j = c_{j-1} + c_{m-1} a_j, \end{equation*} for all $j \in \{ 1, \ldots, m-1 \}$, hence $c_{j-1} \in \mathrm{Fix}(\sigma)$ for all $j \in \{ 1, \ldots, m-1 \}$ because $c_{m-1} \in \mathrm{Fix}(\sigma)$. Since $c \in \mathrm{Comm}(S_f)$ we also have $b \circ c = c \circ b$ for all $b \in S$. As a result \begin{equation} \label{eqn:Comm(S_f) delta=0 generalised 4} b \circ c = \sum_{i=0}^{m-1} b c_i t^i, \end{equation} and \begin{equation} \label{eqn:Comm(S_f) delta=0 generalised 5} c \circ b = \sum_{i=0}^{m-1} c_i t^i b = \sum_{i=0}^{m-1} c_i \sigma^i(b) t^i \end{equation} must be equal. Comparing the coefficients of the powers of $t$ in \eqref{eqn:Comm(S_f) delta=0 generalised 4} and \eqref{eqn:Comm(S_f) delta=0 generalised 5} yields $b c_i = c_i \sigma^i(b)$ for all $b \in S$, $i \in \{ 0, \ldots, m-1 \}$ as required. \item[(iii)] The proof is similar to (ii), but \eqref{eqn:Comm(S_f) delta=0 generalised 3} implies $c_{m-1} \in \mathrm{Fix}(\sigma)$ because $a_0$ is invertible, hence not a zero divisor. \end{itemize} \end{proof} \begin{corollary} \label{cor:Comm(S_f) = F generalised} Suppose $S$ is a central simple algebra over a field $C$ and $\sigma \in \mathrm{Aut}(S)$ is such that $\sigma \vert_C$ has order at least $m$. Let $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in S[t;\sigma]$ where $a_0 \in S$ is invertible. Then $\mathrm{Comm}(S_f) = C \cap \mathrm{Fix}(\sigma)$. 
\end{corollary} \begin{proof} $\mathrm{Comm}(S_f)$ is equal to the set \eqref{eqn:Comm(S_f) generalised} by Proposition \ref{prop:Comm(S_f) delta=0 generalised}, in particular $C \cap \mathrm{Fix}(\sigma) \subseteq \mathrm{Comm}(S_f)$. Let $\sum_{i=0}^{m-1} c_i t^i \in \mathrm{Comm}(S_f)$ and suppose, for a contradiction, that $c_j \neq 0$ for some $j \in \{ 1, \ldots, m-1 \}$. Then $bc_j = c_j \sigma^j(b)$ for all $b \in S$, in particular $(b - \sigma^j(b))c_j = 0$ for all $b \in C$. We have $0 \neq b - \sigma^j(b)$ for some $b \in C$ since $\sigma \vert_C$ has order $\geq m$. Moreover $b - \sigma^j(b) \in C$, therefore it is invertible and hence $(b - \sigma^j(b))c_j \neq 0$ for some $b \in C$ because invertible elements are not zero divisors, a contradiction. Thus $\sum_{i=0}^{m-1} c_i t^i = c_0 \in C \cap \mathrm{Fix}(\sigma)$ by \eqref{eqn:Comm(S_f) generalised}. \end{proof} \section{When does \texorpdfstring{$S_f$}{S\_f} Contain Zero Divisors?} We investigate when the algebras $S_f$ contain zero divisors. If the ring $S$ contains zero divisors then clearly so does $S_f$; therefore we only consider the case where $S$ is a domain. Furthermore, if $f(t) = g(t) h(t)$ for some $g(t), h(t) \in R_m$, then $g(t) \circ h(t) = 0$ and so $S_f$ contains zero divisors. When $S$ is commutative, i.e. an integral domain, it is well-known that we can associate to $S$ its field of fractions $D \supseteq S$, so that every element of $D$ has the form $rs^{-1}$ for some $r \in S$, $0 \neq s \in S$. On the other hand, if $S$ is a noncommutative domain then we cannot necessarily associate such a ``right division ring of fractions'' unless $S$ is a so-called right Ore domain: \begin{definition} A \textbf{right Ore domain} $S$ is a domain such that $aS \cap bS \neq \{ 0 \}$ for all $0 \neq a, b \in S$. The \textbf{ring of right fractions} of $S$ is a division ring $D$ containing $S$, such that every element of $D$ is of the form $rs^{-1}$ for some $r \in S$ and $0 \neq s \in S$.
\end{definition} Any integral domain is a right Ore domain; its ring of right fractions is equal to its quotient field. Let now $S$ be a right Ore domain with ring of right fractions $D$, $\sigma$ be an injective endomorphism of $S$ and $\delta$ be a $\sigma$-derivation of $S$. Then $\sigma$ and $\delta$ extend uniquely to $D$ by setting \begin{equation} \label{eqn:extend sigma delta to right ring of fractions} \sigma(rs^{-1}) = \sigma(r)\sigma(s)^{-1} \text{ and } \delta(rs^{-1}) = \delta(r)s^{-1} - \sigma(rs^{-1}) \delta(s) s^{-1}, \end{equation} for all $r \in S, \ 0 \neq s \in S$ by \cite[Lemma 1.3]{goodearl1992prime}. We conclude: \begin{theorem} \label{thm:right Ore domain, no zero divisors} Let $f(t) \in S[t;\sigma,\delta]$ have invertible leading coefficient and extend $\sigma$ and $\delta$ to $D$ as in \eqref{eqn:extend sigma delta to right ring of fractions}. If $f(t)$ is irreducible in $D[t;\sigma,\delta]$ then $S_f = S[t;\sigma,\delta]/S[t;\sigma,\delta]f(t)$ contains no zero divisors. \end{theorem} \begin{proof} If $f(t)$ is irreducible in $D[t;\sigma,\delta]$, then $D[t;\sigma,\delta]/D[t;\sigma,\delta]f$ is a right division algebra by Theorem \ref{thm:f(t) irreducible iff S_f right division}, therefore it contains no zero divisors. $S_f = S[t;\sigma,\delta]/S[t;\sigma,\delta]f$ is contained in $D[t;\sigma,\delta]/D[t;\sigma,\delta]f$, so also contains no zero divisors. \end{proof} Let $K$ be a field and $y$ be an indeterminate. Then $K[y]$ is an integral domain which is not a division algebra. Given $a(y) \in K[y]$, denote by $\mathrm{deg}_y(a(y))$ the degree of $a(y)$ as a polynomial in $y$.
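As an aside (our verification, assuming the left $\sigma$-derivation rule $\delta(bc) = \sigma(b)\delta(c) + \delta(b)c$ used throughout), the formula for $\delta$ in \eqref{eqn:extend sigma delta to right ring of fractions} is forced: applying the product rule to $r = (rs^{-1})s$ gives

```latex
% Product rule applied to r = (r s^{-1}) s in D:
\delta(r) \;=\; \sigma(rs^{-1})\,\delta(s) \;+\; \delta(rs^{-1})\,s,
```

and multiplying on the right by $s^{-1}$ and solving for $\delta(rs^{-1})$ recovers exactly $\delta(rs^{-1}) = \delta(r)s^{-1} - \sigma(rs^{-1})\delta(s)s^{-1}$.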
We obtain the following Corollaries of Theorem \ref{thm:right Ore domain, no zero divisors} using Corollaries \ref{cor:t^2-a(y) in K(y)[t;sigma] irreducibility} and \ref{cor:t^m-a(y) in K(y)[t;sigma] irreducible} in Chapter \ref{chapter:Irreducibility Criteria for Polynomials in a Skew Polynomial Ring}: \begin{corollary} Define the injective endomorphism $\sigma: K[y] \rightarrow K[y]$ by $\sigma \vert_K = \mathrm{id}$ and $\sigma(y) = y^2$. If $f(t) = t^2 - a(y) \in K[y][t;\sigma]$ where $0 \neq a(y) \in K[y]$ is such that $3 \nmid \mathrm{deg}_y(a(y))$, then $S_f$ contains no zero divisors. Here $S_f$ is an infinite-dimensional algebra over $S_0 = \mathrm{Fix}(\sigma) = K$. \end{corollary} \begin{proof} Extend $\sigma$ to an endomorphism of $K(y)$, the field of fractions of $K[y]$ as in \eqref{eqn:extend sigma delta to right ring of fractions}. Then $f(t)$ is irreducible in $K(y)[t;\sigma]$ by Corollary \ref{cor:t^2-a(y) in K(y)[t;sigma] irreducibility} and hence $S_f$ contains no zero divisors by Theorem \ref{thm:right Ore domain, no zero divisors}. \end{proof} \begin{corollary} \label{cor:K[y] zero divisors generalised} Let $\sigma$ be the automorphism of $K[y]$ defined by $\sigma \vert_K = \mathrm{id}$ and $\sigma(y) = qy$ for some $1 \neq q \in K^{\times}$. Suppose $m$ is prime, $K$ contains a primitive $m^{\text{th}}$ root of unity and $f(t) = t^m - a(y) \in K[y][t;\sigma]$ where $0 \neq a(y) \in K[y]$ is such that $m \nmid \mathrm{deg}_y (a(y))$. Then $S_f$ contains no zero divisors. \end{corollary} \begin{proof} Extend $\sigma$ to an automorphism of $K(y)$ as in \eqref{eqn:extend sigma delta to right ring of fractions}. Then $f(t)$ is irreducible in $K(y)[t;\sigma]$ by Corollary \ref{cor:t^m-a(y) in K(y)[t;sigma] irreducible}, which implies $S_f$ contains no zero divisors by Theorem \ref{thm:right Ore domain, no zero divisors}. \end{proof} \begin{remark} Consider the set-up in Corollary \ref{cor:K[y] zero divisors generalised}. 
If $q$ is not a root of unity then $\sigma(y^i) = q^i y^i \neq y^i$ for all $i \geq 1$, therefore $S_f$ is an infinite-dimensional algebra over $S_0 = \mathrm{Fix}(\sigma) = K$. Otherwise $q$ is a primitive $n^{\text{th}}$ root of unity for some $n \in \mathbb{N}$; then $\sigma(y^i) = q^i y^i = y^i$ if and only if $i = ln$ for some positive integer $l$. Thus $S_f$ is an algebra over $S_0 = \mathrm{Fix}(\sigma) = K[y^n]$, and since $K[y]$ is finite-dimensional over $K[y^n]$, $S_f$ is also finite-dimensional over $K[y^n]$. \end{remark} \section{The Nucleus} In this Section we study the nuclei of $S_f$, generalising some of our results from Chapter \ref{chapter:The Structure of Petit Algebras}. \begin{theorem} \label{thm:nucleus of S_f generalised} Let $f(t) \in R = S[t;\sigma,\delta]$ be of degree $m$. Then $$S \subseteq \mathrm{Nuc}_l(S_f) \ , \ S \subseteq \mathrm{Nuc}_m(S_f)$$ and $$\mathrm{Nuc}_r(S_f) = \{ u \in R_m \ \vert \ fu \in Rf \} = E(f).$$ \end{theorem} \begin{proof} Let $b, c, d \in R_m$ and write $bc = q_1 f + r_1$ and $cd = q_2 f + r_2$ for some uniquely determined $q_1, q_2, r_1, r_2 \in R$ with $\mathrm{deg}(r_1(t)), \ \mathrm{deg}(r_2(t)) < m$. This means $b \circ c = bc - q_1 f$ and $c \circ d = cd - q_2 f$. We have $$(b \circ c) \circ d = (bc - q_1 f) \circ d = (bc - q_1 f)d \ \mathrm{mod}_r f = (bcd - q_1fd) \ \mathrm{mod}_r f,$$ and $$b \circ (c \circ d) = b \circ (cd - q_2f) = b(cd - q_2f) \ \mathrm{mod}_r f = bcd \ \mathrm{mod}_r f,$$ and hence $(b \circ c) \circ d = b \circ (c \circ d)$ if and only if $q_1 f d \ \mathrm{mod}_r f = 0$ if and only if $q_1 f d \in Rf$. \begin{itemize} \item[(i)] If $b \in S$ then \begin{equation} \label{eqn:generalised Nucleus1} \mathrm{deg}(bc) \leq \mathrm{deg}(b) + \mathrm{deg}(c) \leq \mathrm{deg}(c) < m, \end{equation} but here $bc = q_1 f + r_1$ also means $q_1 = 0$, otherwise $\mathrm{deg}(q_1 f + r_1) \geq m$ as $f(t)$ has invertible leading coefficient, contradicting \eqref{eqn:generalised Nucleus1}.
Hence $q_1 f d \ \mathrm{mod}_r f = 0$ which implies $b \in \mathrm{Nuc}_l(S_f)$. \item[(ii)] $S \subseteq \mathrm{Nuc}_m(S_f)$ is proven similarly to (i). \item[(iii)] If $d \in E(f)$ then $fd \in Rf$ and thus $q_1 f d \in Rf$ for all $q_1 \in R$. This implies $d \in \mathrm{Nuc}_r(S_f)$ and $E(f) \subseteq \mathrm{Nuc}_r(S_f)$. To prove the opposite inclusion, let now $d \in \mathrm{Nuc}_r(S_f)$ and choose $b(t), c(t) \in R$ with invertible leading coefficients $b_l$ and $c_n$ respectively such that $\mathrm{deg}(b(t)) + \mathrm{deg}(c(t)) = m$, so that $\mathrm{deg}(b(t)c(t)) = m$. We have $\mathrm{deg}(q_1(t) f(t)) = \mathrm{deg}(q_1(t)) + m$, but using that $b(t)c(t) = q_1(t)f(t)+r_1(t)$, we conclude $\mathrm{deg}(q_1(t)) = 0$, so $q_1(t) = q_1 \in S$ is non-zero. If $\mathrm{deg}(b(t)) = l$ then the leading coefficient of $b(t)c(t)$ is $b_l \sigma^l(c_n)$ and the leading coefficient of $q_1 f(t)$ is $q_1$. Therefore $q_1 = b_{l} \sigma^{l}(c_{n})$ is invertible in $S$, being a product of invertible elements of $S$. Since $d \in \mathrm{Nuc}_r(S_f)$ implies $q_1 f d \in Rf$, this yields $f d \in Rf$. \end{itemize} \end{proof} By Theorem \ref{thm:nucleus of S_f generalised}(iii) we immediately conclude: \begin{corollary} $E(f) = S_f$ if and only if $S_f$ is associative. \end{corollary} \begin{proof} $\mathrm{Nuc}_r(S_f) = S_f$ if and only if $S_f$ is associative, so the result follows by Theorem \ref{thm:nucleus of S_f generalised}(iii). \end{proof} The following generalises \cite[(5)]{Petit1966-1967}: \begin{proposition} \label{prop:Petit (5) generalised} Let $f(t) \in R = S[t;\sigma,\delta]$ be of degree $m$. The powers of $t$ are associative if and only if $t \circ t^m = t^m \circ t$ if and only if $t \in \mathrm{Nuc}_r(S_f)$. \end{proposition} \begin{proof} Let $t \in \mathrm{Nuc}_r(S_f)$. Then also $t, \ldots, t^{m-1} \in \mathrm{Nuc}_r(S_f)$, as the right nucleus is closed under multiplication, giving $[t^i,t^j,t^k] = 0$ for all $i,j,k < m$, i.e. the powers of $t$ are associative.
In particular $(t \circ t^{m-1}) \circ t = t \circ (t^{m-1} \circ t)$, that is $t^m \circ t = t \circ t^m$. We are left to prove $t^m \circ t = t \circ t^m$ implies $t \in \mathrm{Nuc}_r(S_f)$. Suppose $t^m \circ t = t \circ t^m$, then \begin{equation} \label{eqn:Petit (5) generalised t^m circ t} t^m \circ t = (t^m - f(t)) \circ t = \big( t^{m+1} - f(t)t \big) \ \mathrm{mod}_r f, \end{equation} and \begin{equation} \label{eqn:Petit (5) generalised t circ t^m} t \circ t^m = t \circ (t^m - f(t)) = \big( t^{m+1} - t f(t) \big) \ \mathrm{mod}_r f = t^{m+1} \ \mathrm{mod}_r f, \end{equation} are equal. Comparing \eqref{eqn:Petit (5) generalised t^m circ t} and \eqref{eqn:Petit (5) generalised t circ t^m} yields $f(t)t \ \mathrm{mod}_r f = 0$, hence $f(t)t \in Rf$. This means $t \in \mathrm{Nuc}_r(S_f)$ by Theorem \ref{thm:nucleus of S_f generalised}. \end{proof} When $\delta = 0$ and $S$ is a domain, then either $S_f$ is associative or $S_f$ has left and middle nuclei equal to $S$: \begin{theorem} \label{thm:generalised left, middle nucleus=S} Let $S$ be a domain and $f(t) = t^m - \sum_{i=0}^{m-1} a_i t^i \in S[t;\sigma]$. If $S_f$ is not associative then $\mathrm{Nuc}_l(S_f) = \mathrm{Nuc}_m(S_f) = S$. \end{theorem} \begin{proof} We have $S \subseteq \mathrm{Nuc}_l(S_f)$ and $S \subseteq \mathrm{Nuc}_m(S_f)$ by Theorem \ref{thm:nucleus of S_f generalised}. Suppose $S_f$ is not associative. \begin{itemize} \item[(i)] We prove $\mathrm{Nuc}_l(S_f) \subseteq S$: Suppose first that $\sigma(a_i) = a_i$ for all $i \in \{ 0, \ldots, m-1 \}$. Let $0 \neq p = \sum_{i=0}^{m-1} p_i t^i \in \mathrm{Nuc}_l(S_f)$ be arbitrary and $j \in \{ 0, \ldots, m-1 \}$ be maximal such that $p_j \neq 0$. Suppose towards a contradiction $j > 0$. 
Then \begin{align*} (p \circ t^{m-j}) \circ c &= \big( \sum_{i=0}^{j} p_i t^i \circ t^{m-j} \big) \circ c = \big( \sum_{i=0}^{j-1} p_i t^{i+m-j} + p_j \sum_{i=0}^{m-1} a_i t^i \big) \circ c \\ &= \sum_{i=0}^{j-1} p_i \sigma^{i+m-j}(c) t^{i+m-j} + \sum_{i=0}^{m-1} p_j a_i \sigma^i(c) t^i, \end{align*} and \begin{align*} p \circ (t^{m-j} \circ c) &= \sum_{i=0}^{j} p_i t^i \circ \sigma^{m-j}(c) t^{m-j} \\ &= \sum_{i=0}^{j-1} p_i \sigma^{i+m-j}(c) t^{i+m-j} + \sum_{i=0}^{m-1} p_j \sigma^m(c) a_i t^i, \end{align*} must be equal for all $c \in S$. Comparing the coefficients of $t^i$ yields \begin{equation*} p_j (a_i \sigma^i(c) - \sigma^m(c) a_i) = 0 \end{equation*} for all $c \in S$, $i \in \{ 0, \ldots, m-1 \}$, thus $a_i \sigma^i(c) - \sigma^m(c) a_i = 0$ since $S$ is a domain and $p_j \neq 0$. This implies $S_f$ is associative by Proposition \ref{prop:f(t) two-sided delta=0 generalised}, a contradiction. Thus $p = p_0 \in S$. Now suppose $a_l \notin \mathrm{Fix}(\sigma)$ for some $l \in \{ 0, \ldots, m-1 \}$. As before let $0 \neq p = \sum_{i=0}^{m-1} p_i t^i \in \mathrm{Nuc}_l(S_f)$ be arbitrary and $j \in \{ 0, \ldots, m-1 \}$ be maximal such that $p_j \neq 0$. Suppose towards a contradiction $j > 0$. If $j = 1$ then \begin{align*} (p \circ t^{m-1}) \circ t &= \big( p_0 t^{m-1} + p_1 \sum_{i=0}^{m-1} a_i t^i \big) \circ t \\ &= p_0 \sum_{i=0}^{m-1} a_i t^i + \sum_{i=0}^{m-2} p_1 a_i t^{i+1} + p_1 a_{m-1} \sum_{i=0}^{m-1} a_i t^i \end{align*} and \begin{align*} p \circ (t^{m-1} \circ t) &= p_0 \sum_{i=0}^{m-1} a_i t^i + p_1 \sum_{i=0}^{m-2} \sigma(a_i) t^{i+1} + p_1 \sigma(a_{m-1}) \sum_{i=0}^{m-1} a_i t^i \end{align*} must be equal. 
Comparing the coefficients of $t^0$ yields \begin{equation} \label{eqn:generalised left, middle nucleus=S 1} p_1 a_{m-1}a_0 = p_1 \sigma(a_{m-1})a_0, \end{equation} and comparing the coefficients of $t^i$ yields \begin{equation} \label{eqn:generalised left, middle nucleus=S 2} p_1 a_{i-1} + p_1 a_{m-1} a_i = p_1 \sigma(a_{i-1}) + p_1 \sigma(a_{m-1}) a_i, \end{equation} for all $i \in \{ 1, \ldots, m-1 \}$. If $a_{m-1} \in \mathrm{Fix}(\sigma)$ then \eqref{eqn:generalised left, middle nucleus=S 2} implies $p_1(a_{i-1} - \sigma(a_{i-1})) = 0$, that is $a_{i} \in \mathrm{Fix}(\sigma)$ for all $i \in \{ 0, \ldots, m-2 \}$ because $S$ is a domain and $p_1 \neq 0$, a contradiction since $a_l \notin \mathrm{Fix}(\sigma)$. Therefore $a_{m-1} \notin \mathrm{Fix}(\sigma)$. Let $k \in \{ 0, \ldots, m-1 \}$ be minimal such that $a_k \neq 0$. If $k = 0$ then $p_1 = 0$ by \eqref{eqn:generalised left, middle nucleus=S 1} as $S$ is a domain, a contradiction. Otherwise \eqref{eqn:generalised left, middle nucleus=S 2} implies $p_1(a_{m-1}-\sigma(a_{m-1}))a_k = 0$, therefore $p_1=0$ as $S$ is a domain and $a_{m-1} \notin \mathrm{Fix}(\sigma)$, a contradiction. Suppose now $j \geq 2$, then \begin{align*} (p &\circ t^{m-j}) \circ t = \big( \sum_{i=0}^{j-1} p_i t^{i+m-j} + p_j \sum_{i=0}^{m-1} a_i t^i \big) \circ t \\ &= \sum_{i=0}^{j-2} p_i t^{i+m-j+1} + p_{j-1} \sum_{i=0}^{m-1} a_i t^i + p_j \sum_{i=0}^{m-2} a_i t^{i+1} + p_j a_{m-1} \sum_{i=0}^{m-1} a_i t^i \end{align*} and \begin{align*} p &\circ (t^{m-j} \circ t) = \sum_{i=0}^{j} p_i t^i \circ t^{m-j+1} \\ &= \sum_{i=0}^{j-2} p_i t^{i+m-j+1} + p_{j-1} \sum_{i=0}^{m-1} a_i t^i + p_j \sum_{i=0}^{m-2} \sigma(a_i) t^{i+1} + p_j \sigma(a_{m-1}) \sum_{i=0}^{m-1} a_i t^i \end{align*} must be equal. Comparing them gives \begin{equation} p_j a_{m-1} a_0 = p_j \sigma(a_{m-1}) a_0, \end{equation} and \begin{equation} p_j a_{i-1} + p_j a_{m-1} a_i = p_j \sigma(a_{i-1}) + p_j \sigma(a_{m-1}) a_i. 
\end{equation} This yields a contradiction similar to the $j=1$ case. Therefore $p = p_0 \in S$ and so $\mathrm{Nuc}_l(S_f) \subseteq S$. \item[(ii)] The proof that $\mathrm{Nuc}_m(S_f) \subseteq S$ is similar to (i), but we look at $(t^{m-j} \circ p) \circ c = t^{m-j} \circ (p \circ c)$ and $(t^{m-j} \circ p) \circ t = t^{m-j} \circ (p \circ t)$ instead. \end{itemize} \end{proof} When $S$ is a domain and $\delta$ is not necessarily $0$, we can prove a similar result to Theorem \ref{thm:generalised left, middle nucleus=S} for polynomials of degree $2$: \begin{proposition} \label{prop:t^2 - a_1 t - a_0 left middle nucleus generalised} Let $S$ be a domain and $f(t) = t^2 - a_1 t - a_0 \in S[t;\sigma,\delta]$ be such that one of the following holds: \begin{itemize} \item[(i)] $a_1 \in \mathrm{Fix}(\sigma)$ and $a_0 \notin \mathrm{Const}(\delta)$. \item[(ii)] $a_1 \in \mathrm{Fix}(\sigma)$, $a_0 \in \mathrm{Fix}(\sigma)$ and $a_1 \notin \mathrm{Const}(\delta)$. \item[(iii)] $a_1 \in \mathrm{Fix}(\sigma) \cap \mathrm{Const}(\delta)$ and $a_0 \notin \mathrm{Fix}(\sigma)$. \item[(iv)] $a_1 \notin \mathrm{Fix}(\sigma)$ and $a_0 \in \mathrm{Const}(\delta)$. \item[(v)] $a_1 \notin \mathrm{Fix}(\sigma)$, $a_0 \in \mathrm{Fix}(\sigma)$ and $a_1 \in \mathrm{Const}(\delta)$. \end{itemize} Then $\mathrm{Nuc}_l(S_f) = \mathrm{Nuc}_m(S_f) = S$. \end{proposition} \begin{proof} Recall $S \subseteq \mathrm{Nuc}_l(S_f)$ by Theorem \ref{thm:nucleus of S_f generalised}. 
We prove the reverse inclusion: Suppose $p = p_0 + p_1 t \in \mathrm{Nuc}_l(S_f)$ for some $p_0, p_1 \in S$, then \begin{align*} (p \circ t) \circ t &= ( p_0 t + p_1(a_1 t + a_0)) \circ t \\ &= p_0(a_1 t + a_0) + p_1 a_1(a_1 t + a_0) + p_1 a_0 t \\ &= p_0 a_0 + p_1 a_1 a_0 + \big( p_0 a_1 + p_1 a_1^2 + p_1 a_0 \big) t, \end{align*} and \begin{align*} p &\circ (t \circ t) = (p_0 + p_1 t) \circ (a_1 t + a_0) \\ &= p_0 a_1 t + p_0 a_0 + p_1(\sigma(a_0)t + \delta(a_0)) + p_1(\sigma(a_1)t + \delta(a_1)) \circ t \\ &= p_0 a_1 t + p_0 a_0 + p_1 \sigma(a_0)t + p_1 \delta(a_0) + p_1 \sigma(a_1)(a_1t + a_0) + p_1 \delta(a_1) t \\ &= p_0 a_0 + p_1 \sigma(a_1)a_0 + p_1 \delta(a_0) + \big( p_0 a_1 + p_1 \sigma(a_1)a_1 + p_1 \delta(a_1) + p_1 \sigma(a_0) \big) t, \end{align*} must be equal. Comparing the constant coefficients yields \begin{equation} \label{eqn:generalised left Nucleus 3 delta not 0} p_1 a_1 a_0 = p_1 \sigma(a_1) a_0 + p_1 \delta(a_0), \end{equation} and comparing the coefficients of $t$ yields \begin{equation} \label{eqn:generalised left Nucleus 4 delta not 0} p_1 a_1^2 + p_1 a_0 = p_1 \sigma(a_1) a_1 + p_1 \sigma(a_0) + p_1 \delta(a_1). \end{equation} It is a straightforward exercise to check using \eqref{eqn:generalised left Nucleus 3 delta not 0} and \eqref{eqn:generalised left Nucleus 4 delta not 0} that in each of our five cases we must have $p_1 = 0$, so $p = p_0 \in S$: for instance, in case (i) we have $a_1 \in \mathrm{Fix}(\sigma)$, so \eqref{eqn:generalised left Nucleus 3 delta not 0} reduces to $p_1 \delta(a_0) = 0$, and since $S$ is a domain and $\delta(a_0) \neq 0$, this forces $p_1 = 0$. The remaining cases follow analogously. We prove $\mathrm{Nuc}_m(S_f) \subseteq S$ similarly but consider instead $(t \circ p) \circ t = t \circ (p \circ t)$. \end{proof} \chapter{Solvable Crossed Product Algebras and Applications to G-Admissible Groups} \label{chapter:G-Admissible Groups and Crossed Products} In this Chapter, we show that if a finite-dimensional central simple algebra $A$ over a field $F$ contains a maximal subfield $M$ with non-trivial $G = \mathrm{Aut}_F(M)$, then $G$ is solvable if and only if $A$ contains a finite chain of subalgebras, which are generalised cyclic algebras over their centers, satisfying certain conditions.
This chain of subalgebras is closely related to a normal series of $G$ which exists when $G$ is solvable. In particular, we obtain that a crossed product algebra is solvable if and only if it has such a chain. Recall from Section \ref{section:Generalised Cyclic Algebras} that when $D$ is a finite-dimensional central division algebra over $C$, $\sigma \vert_C$ has finite order $m$ and $f(t) = t^m-a \in D[t;\sigma]$ is right invariant with $a \in \mathrm{Fix}(\sigma)^{\times}$, the associative quotient algebra $D[t;\sigma]/D[t;\sigma]f$ is also called a generalised cyclic algebra and denoted $(D,\sigma,a)$. For this Chapter, we extend the definition of a generalised cyclic algebra to the case where we do not require $D$ to be a division algebra: \begin{definition} Let $S$ be a finite-dimensional central simple algebra of degree $n$ over $C$ and $\sigma \in \mathrm{Aut}(S)$ be such that $\sigma \vert_C$ has finite order $m$. We define the \textbf{generalised cyclic algebra} $(S,\sigma,a)$ to be the associative algebra of the form $S_f = S[t;\sigma]/S[t;\sigma]f$ where $f(t) = t^m - a \in S[t;\sigma]$ is right invariant and $a \in \mathrm{Fix}(\sigma)^{\times}$. $(S,\sigma,a)$ is a central simple algebra over $F = \mathrm{Fix}(\sigma) \cap C$ of degree $mn$ \cite[p.~4]{brown2017solvable}. \end{definition} \begin{definition} Let $F$ be a field and $A$ be a central simple algebra over $F$ of degree $n$. $A$ is called a \textbf{$G$-crossed product algebra} or \textbf{crossed product algebra} if it contains a field extension $M/F$ which is Galois of degree $n$ with Galois group $G = \mathrm{Gal}(M/F)$.
Equivalently, we can define a $G$-crossed product algebra $(M,G,\mathfrak{a})$ over $F$ via factor sets, starting with a finite Galois field extension, as follows: Suppose $M/F$ is a finite Galois field extension of degree $n$ with Galois group $G$ and $\{ a_{\sigma,\tau} \ \vert \ \sigma, \tau \in G \}$ is a set of elements of $M^{\times}$ such that \begin{equation} \label{eqn:Crossed product criteria 1} a_{\sigma,\tau} a_{\sigma \tau, \rho} = a_{\sigma,\tau \rho} \sigma(a_{\tau,\rho}), \end{equation} for all $\sigma, \tau, \rho \in G$. Then the map $\mathfrak{a}: G \times G \rightarrow M^{\times}, \ (\sigma,\tau) \mapsto a_{\sigma,\tau}$, is called a \textbf{factor set} or \textbf{2-cocycle} of $G$. An associative multiplication is defined on the $F$-vector space $\bigoplus_{\sigma \in G} M x_{\sigma}$ by \begin{equation} \label{eqn:Crossed product criteria 2} x_{\sigma} m = \sigma(m) x_{\sigma}, \end{equation} \begin{equation} \label{eqn:Crossed product criteria 3} x_{\sigma} x_{\tau} = a_{\sigma, \tau} x_{\sigma \tau}, \end{equation} for all $m \in M$, $\sigma, \tau \in G$. This way $\bigoplus_{\sigma \in G} M x_{\sigma}$ becomes an associative central simple $F$-algebra that contains a maximal subfield isomorphic to $M$. This algebra is denoted $(M,G,\mathfrak{a})$ and is called a $G$-crossed product algebra over $F$. If $G$ is solvable then $A$ is also called a \textbf{solvable $G$-crossed product algebra} over $F$. \end{definition} \section{Crossed Product Subalgebras of Central Simple Algebras} Let $M/F$ be a field extension of degree $n$ and $G = \mathrm{Aut}_{F}(M)$ be the group of automorphisms of $M$ which fix the elements of $F$. Let $A$ be a central simple algebra of degree $n$ over $F$ and suppose $M$ is contained in $A$; this makes $M$ a maximal subfield of $A$ \cite[Lemma 15.1]{berhuy2013introduction}.
Such maximal subfields do not always exist for a general central simple algebra \cite[Remark 15.4]{berhuy2013introduction}, however they always exist, for example, when $A$ is a division algebra \cite[Corollary 15.6]{berhuy2013introduction}. We denote by $A^{\times}$ the set of invertible elements of $A$, and for a subset $B$ of $A$, we denote by $\mathrm{Cent}_{A}(B)$ the centralizer of $B$ in $A$. Some of the results in this Chapter, namely Lemma \ref{lem:Petit (26)}, Corollary \ref{cor:Petit (28)} and Theorems \ref{thm:Petit (27)}, \ref{thm:Petit (29)}, are stated for central division algebras $A$ over $F$ by Petit in \cite[\S 7]{Petit1966-1967}, and none of them are proved there. The following generalises \cite[(27)]{Petit1966-1967} to central simple algebras with a maximal subfield $M$ as above. The result was previously only stated for division algebras, and not in terms of crossed product algebras: \begin{theorem} \label{thm:Petit (27)} \begin{itemize} \item[(i)] $A$ contains a subalgebra $M(G)$ which is a crossed product algebra $(M,G,\mathfrak{a})$ of degree $|G|$ over $\mathrm{Fix}(G)$ with maximal subfield $M$. \item[(ii)] $A$ is equal to $M(G)$ if and only if $M$ is a Galois extension of $F$. In this case $A$ is a $G$-crossed product algebra over $F$. \item[(iii)] For any subgroup $H$ of $G$, there is an $F$-subalgebra $M(H)$ of both $M(G)$ and $A$, which is an $H$-crossed product algebra of degree $|H|$ over $\mathrm{Fix}(H)$ with maximal subfield $M$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] Define $M(G) = \mathrm{Cent}_{A}(\mathrm{Fix}(G))$; then $M(G)$ is a central simple algebra over $\mathrm{Fix}(G)$ by the Centralizer Theorem for central simple algebras \cite[Theorem III.5.1]{berhuy2013introduction}. Furthermore, since $M$ is a maximal subfield of $M(G)$ and $M/\mathrm{Fix}(G)$ is a Galois field extension with Galois group $G$, we conclude $M(G)$ is a $G$-crossed product algebra.
\item[(ii)] Notice $[M:F] = n$, $A$ has dimension $n^2$ over $F$, and $M(G)$ has a basis $\{ x_{\sigma} \ \vert \ \sigma \in G \}$ as a vector space over $M$. If $M$ is not a Galois extension of $F$, then $\vert G \vert < n$ and thus $\{ x_{\sigma} \ \vert \ \sigma \in G \}$ cannot be a set of generators for $A$ as a vector space over $M$. Conversely, if $M/F$ is a Galois extension, then $\vert G \vert = n$ and since $\{ x_{\sigma} \ \vert \ \sigma \in G \}$ is linearly independent over $M$, counting dimensions yields $M(G) = A$. The rest of the assertion is trivial. \item[(iii)] For any subgroup $H$ of $G$, let $M(H) = \mathrm{Cent}_A(\mathrm{Fix}(H))$. Since $\mathrm{Fix}(G) \subseteq \mathrm{Fix}(H)$, we have that $M(H) = \mathrm{Cent}_A(\mathrm{Fix}(H))$ is contained in $M(G) = \mathrm{Cent}_A(\mathrm{Fix}(G))$. The proof now follows exactly as in (i). \end{itemize} \end{proof} \begin{remark} When $A$ has prime degree over $F$ and $M/F$ is not a Galois extension then $M(G) = M$: $M(G)$ is a simple subalgebra of $A$ by Theorem \ref{thm:Petit (27)}(i), therefore $\mathrm{dim}_F(M(G))$ divides $\mathrm{dim}_F(A) = n^2$ by the Centralizer Theorem for central simple algebras \cite[Theorem III.5.1]{berhuy2013introduction}. Now $M(G)$ contains $M$ and $M(G)$ is equal to $A$ if and only if $M/F$ is a Galois extension by Theorem \ref{thm:Petit (27)}(ii). As $n$ is prime and $M/F$ is not Galois, this means $M(G) = M$. \end{remark} A close look at the proof of Theorem \ref{thm:Petit (27)} yields the following observations: \begin{lemma} \label{lem:Centralizer if Fix(H) in A} \begin{itemize} \item[(i)] Given any subgroup $H$ of $G$, $M(H)$ is an $H$-crossed product algebra over its center with $M(H) = (M,H,\mathfrak{a}_H)$, where $\mathfrak{a}_H$ denotes the factor set of the $G$-crossed product algebra $M(G) = (M,G,\mathfrak{a})$ restricted to the elements of $H$. \item[(ii)] For any subgroup $H$ of $G$, $M(H)$ is the centralizer of $\mathrm{Fix}(H)$ in $A$.
\end{itemize} \end{lemma} The following generalises \cite[(26)]{Petit1966-1967} to any central simple algebra $A$ with a maximal subfield $M$ as above. Again it was previously only stated and not proved for central division algebras: \begin{lemma} \label{lem:Petit (26)} \begin{itemize} \item[(i)] For any $\sigma \in G$ there exists $x_{\sigma} \in A^{\times}$ such that the inner automorphism $$I_{x_{\sigma}}: A \rightarrow A, \ y \mapsto x_{\sigma} y x_{\sigma}^{-1}$$ restricted to $M$ is $\sigma$. \item[(ii)] Given any $\sigma \in G$, we have $\{ x \in A^{\times} \ \vert \ I_{x} \vert_M = \sigma \} = M^{\times} x_{\sigma}.$ \item[(iii)] The set of cosets $\{ M^{\times} x_{\sigma} \ \vert \ \sigma \in G\}$ with multiplication given by \begin{equation*} \label{eqn:M^X x_sigma M^times x_tau = M^times x_sigma tau} M^{\times} x_{\sigma} M^{\times} x_{\tau} = M^{\times} x_{\sigma \tau}, \end{equation*} is a group isomorphic to $G$, where $\sigma$ and $M^{\times} x_{\sigma}$ correspond under this isomorphism. \end{itemize} \end{lemma} \begin{proof} $A$ contains the $G$-crossed product algebra $M(G)$ by Theorem \ref{thm:Petit (27)}, thus (i) and (iii) follow from \eqref{eqn:Crossed product criteria 2} and \eqref{eqn:Crossed product criteria 3}. For (ii) we have \begin{align*} I_{m x_{\sigma}}(y) &= (m x_{\sigma}) y (m x_{\sigma})^{-1} = (m x_{\sigma}) y (x_{\sigma}^{-1} m^{-1}) = m \sigma(y) m^{-1} = \sigma(y) \end{align*} for all $m \in M^{\times}$ and $y \in M$, and thus $M^{\times} x_{\sigma} \subset \{ x \in A^{\times} \ \vert \ I_{x} \vert_M = \sigma \}$. Suppose $u \in \{ x \in A^{\times} \ \vert \ I_{x} \vert_M = \sigma \}$. Then as $u$ and $x_{\sigma}$ are invertible, we can write $u = v x_{\sigma}$ for some $v \in A^{\times}$. We are left to prove that $v \in M^{\times}$.
We have $$\sigma(y) = I_{u}(y) = (v x_{\sigma}) y (v x_{\sigma})^{-1} = v x_{\sigma} y x_{\sigma}^{-1} v^{-1} = v \sigma(y) v^{-1},$$ for all $y \in M$, and so $\sigma(y) v = v \sigma(y)$ for all $y \in M$, that is $m v = v m$ for all $m \in M$ since $\sigma$ is bijective. Therefore $v$ is contained in the centralizer of $M$ in $A$, which is equal to $M$ because $M$ is a maximal subfield of $A$. \end{proof} The following generalises \cite[(28)]{Petit1966-1967}: \begin{corollary} \label{cor:Petit (28)} If $H$ is a cyclic subgroup of $G$ of order $h > 1$ generated by $\sigma$, then there exists $c \in \mathrm{Fix}(\sigma)^{\times}$ such that $$M(H) \cong (M/\mathrm{Fix}(\sigma), \sigma,c) = M[t;\sigma]/M[t;\sigma](t^h - c)$$ is a cyclic algebra of degree $h$ over $\mathrm{Fix}(\sigma)$. \end{corollary} \begin{proof} $M(H)$ is an $H$-crossed product algebra of degree $h$ over $\mathrm{Fix}(\sigma)$ by Theorem \ref{thm:Petit (27)}. Moreover $H$ is a cyclic group and so $M(H)$ is a cyclic algebra of degree $h$ over $\mathrm{Fix}(\sigma)$ (see for example \cite[p.~49]{saltman1999lectures}). This means there exists $c \in \mathrm{Fix}(\sigma)^{\times}$ such that $M(H) \cong (M/\mathrm{Fix}(\sigma), \sigma,c)$. \end{proof} In particular, we conclude that if a central division algebra $A$ over $F$ contains a maximal subfield $M$ and non-trivial $\sigma \in \mathrm{Aut}_F(M)$ of order $h$, then it contains a cyclic division algebra of degree $h$ (though not necessarily with center $F$). This is the case even if $A$ is a noncrossed product (i.e. if $A$ is not a crossed product algebra): \begin{theorem} \label{thm:central division algebra contains cyclic algebra} (\cite[Theorem 4]{brown2017solvable}). Let $A$ be a central division algebra of degree $n$ over $F$ with maximal subfield $M$ and non-trivial $\sigma \in \mathrm{Aut}_F(M)$ of order $h$.
Then $A$ contains a cyclic division algebra $(M/\mathrm{Fix}(\sigma), \sigma,c)$ of degree $h$ over $\mathrm{Fix}(\sigma)$ as a subalgebra. \end{theorem} \begin{proof} This follows immediately from Corollary \ref{cor:Petit (28)}. \end{proof} It is well-known that a central division algebra of prime degree over $F$ is a cyclic algebra if and only if it contains a cyclic subalgebra of prime degree (though not necessarily with center $F$) \cite[p.~2]{motiee2016note}. Together with Theorem \ref{thm:central division algebra contains cyclic algebra} this yields the following: \begin{corollary} \label{cor:central division algebras of prime degree} (\cite[Corollary 6]{brown2017solvable}). Let $A$ be a central division algebra over $F$ of prime degree $p$. Then either $A$ is a cyclic algebra or each of its maximal subfields $M$ has trivial automorphism group $\mathrm{Aut}_F(M)$. \end{corollary} \begin{proof} Suppose $G = \mathrm{Aut}_F(M)$ is non-trivial. Then there exists a non-trivial $\sigma \in \mathrm{Aut}_F(M)$ of finite order $h$. Thus $A$ contains a cyclic division algebra of degree $h$ over $\mathrm{Fix}(\sigma)$ as a subalgebra. Looking at the possible intermediate field extensions of $M/F$ yields that $[M:\mathrm{Fix}(\sigma)]$ is either $1$ or $p$. Since $G$ is not trivial, $[M:\mathrm{Fix}(\sigma)] = h = p$, thus $A$ contains a cyclic subalgebra of prime degree and so is itself a cyclic algebra. \end{proof} \section[CSA's Containing Max Subfield with Solvable Automorphism Group]{Central Simple Algebras Containing a Maximal Subfield \texorpdfstring{$M$}{M} with Solvable \texorpdfstring{$F$}{F}-Automorphism Group} Suppose $G$ is a finite solvable group. Then there exists a chain of subgroups \begin{equation} \label{eqn:Subnormal series} \{ 1 \} = G_0 < G_1 < \ldots < G_k = G, \end{equation} such that $G_{j}$ is normal in $G_{j+1}$ and $G_{j+1}/G_{j}$ is cyclic of prime order $q_{j}$ for all $j \in \{ 0, \ldots, k-1 \}$, i.e.
\begin{equation} G_{j+1}/G_{j} = \{ G_{j}, G_{j} \sigma_{j+1}, \ldots \}, \end{equation} for some $\sigma_{j+1} \in G_{j+1}$. Theorem \ref{thm:Petit (27)}, Corollary \ref{cor:Petit (28)} and Lemma \ref{lem:Petit (26)} lead us to the following generalisation of \cite[(29)]{Petit1966-1967}. As before, the result was previously only stated (and not proved) for central division algebras over $F$, and also without the connection to crossed product algebras: \begin{theorem} \label{thm:Petit (29)} Let $M/F$ be a field extension of degree $n$ with non-trivial solvable $G = \mathrm{Aut}_{F}(M)$, and $A$ be a central simple algebra of degree $n$ over $F$ with maximal subfield $M$. Then there exists a chain of subalgebras \begin{equation} \label{eqn:Petit (29) chain of subalgebras} M = A_0 \subset A_1 \subset \ldots \subset A_k = M(G) \subseteq A, \end{equation} of $A$ which are $G_i$-crossed product algebras over $Z_i = \mathrm{Fix}(G_i)$, and where \begin{equation} \label{eqn:Petit (29) chain of subalgebras 2} A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i), \end{equation} for all $i \in \{ 0, \ldots, k-1 \}$, such that \begin{itemize} \item[(i)] $q_i$ is the prime order of the factor group $G_{i+1}/G_i$ in the chain \eqref{eqn:Subnormal series}, \item[(ii)] $\tau_i$ is an $F$-automorphism of $A_i$ of inner order $q_i$ which restricts to $\sigma_{i+1} \in G_{i+1}$, whose coset generates $G_{i+1}/G_i$, and \item[(iii)] $c_i \in \mathrm{Fix}(\tau_i)$ is invertible. \end{itemize} \end{theorem} Note that the inclusion $M(G) \subseteq A$ in \eqref{eqn:Petit (29) chain of subalgebras} is an equality if and only if $M/F$ is a Galois extension by Theorem \ref{thm:Petit (27)}. In this case $A$ is a solvable $G$-crossed product algebra. \begin{proof} Define $A_i = M(G_i)$ for all $i \in \{ 1, \ldots, k \}$. $A_i$ is a $G_i$-crossed product algebra over $\mathrm{Fix}(G_i)$ by Theorem \ref{thm:Petit (27)}.
$G_1/G_0 \cong G_1$ is a cyclic subgroup of $G$ of prime order $q_0$ generated by some $\sigma_1 \in G$. Let $\tau_0 = \sigma_1$, then there exists $c_0 \in \mathrm{Fix}(\tau_0)^{\times}$ such that $A_1 = M(G_1)$ is $F$-isomorphic to $$M[t_0;\tau_0]/M[t_0;\tau_0](t_0^{q_0} - c_0),$$ by Corollary \ref{cor:Petit (28)}, which is a cyclic algebra of prime degree $q_0$ over $\mathrm{Fix}(\tau_0)$. Now $G_1 \triangleleft G_2$ and $G_2/G_1$ is cyclic of prime order $q_1$ with \begin{equation} \label{eqn:proof petit (29) 1} G_2/G_1 = \{ (G_1 \sigma_2)^i \ \vert \ i \in \mathbb{Z} \} = \{ G_1, G_1 \sigma_2, \ldots, G_1 \sigma_2^{q_1-1} \}, \end{equation} for some $\sigma_2 \in G_2$. Hence we can write $G_2 = \{ h \sigma_2^i \ \vert \ h \in G_1, 0 \leq i \leq q_1-1 \}$ and thus the crossed product algebra $A_2 = M(G_2)$ has a basis $$\{ x_{h \sigma_2^j} \ \vert \ h \in G_1, \ 0\leq j \leq q_1-1 \},$$ as an $M$-vector space. Recall $M^{\times} x_{h \sigma_2^j} = M^{\times} x_{h} x_{\sigma_2^j} = M^{\times} x_{h} x_{\sigma_2}^j$ for all $h \in G_1$ by Lemma \ref{lem:Petit (26)}, and $\{ 1, x_{\sigma_2}, \ldots, x_{\sigma_2}^{q_1-1} \}$ is a basis for $A_2$ as a left $A_1$-module, i.e. \begin{equation} \label{eqn:proof petit (29) 2} A_2 = A_1 + A_1 x_{\sigma_2} + \ldots + A_1 x_{\sigma_2}^{q_1-1}. \end{equation} We have $G_2 G_1 = G_1 G_2$ as $G_1$ is normal in $G_2$ and so for every $h \in G_1$, we get $\sigma_2 h = h' \sigma_2$ for some $h' \in G_1$. Choose the basis $\{ x_{h} \ \vert \ h \in G_1 \}$ of $A_1$ as a vector space over $M$. By \eqref{eqn:Crossed product criteria 3} we obtain \begin{equation} \label{eqn:proof petit (29) 3} x_{\sigma_2} x_{h} = a_{\sigma_2,h} x_{\sigma_2 h} = a_{\sigma_2,h} x_{h' \sigma_2} = a_{\sigma_2,h} (a_{h',\sigma_2})^{-1} x_{h'} x_{\sigma_2}. \end{equation} Recall $x_{\sigma_2}\in A^\times$ by Lemma \ref{lem:Petit (26)}. 
The inner automorphism $$\tau_1: A \rightarrow A,\ z \mapsto x_{\sigma_2} z x_{\sigma_2}^{-1}$$ restricts to $\sigma_2$ on $M$. Moreover, \begin{equation} \label{eqn:proof petit (29) 4} \begin{split} \tau_1(x_{h}) &= x_{\sigma_2} x_{h} x_{\sigma_2}^{-1} = a_{\sigma_2,h} (a_{h',\sigma_2})^{-1} x_{h'} x_{\sigma_2} x_{\sigma_2}^{-1} \\ &= a_{\sigma_2,h} (a_{h',\sigma_2})^{-1} x_{h'} \in A_1, \end{split} \end{equation} for all $h \in G_1$, i.e. $\tau_1 \vert_{A_1}(y) \in A_1$ for all $y \in A_1$ and so $\tau_1 \vert_{A_1}$ is an $F$-automorphism of $A_1$. Furthermore, $x_{\sigma_2} x_{h} = \tau_1 \vert_{A_1}(x_{h}) x_{\sigma_2},$ for all $h \in G_1$ by \eqref{eqn:proof petit (29) 3}, \eqref{eqn:proof petit (29) 4}, and $$x_{\sigma_2} m = \sigma_2(m) x_{\sigma_2} = \tau_1 \vert_{A_1}(m) x_{\sigma_2},$$ for all $m \in M$. We conclude that \begin{equation} \label{eqn:proof petit (29) 6} x_{\sigma_2} y = \tau_1 \vert_{A_1}(y) x_{\sigma_2} \end{equation} for all $y \in A_1$ by distributivity. Define $c_1 = x_{\sigma_2}^{q_1}$; then $\sigma_2^{q_1} \in G_1$ by \eqref{eqn:proof petit (29) 1} which implies $c_1 \in A_1$. Furthermore $c_1$ is invertible since $x_{\sigma_2}$ is invertible. Also, $\tau_1 \vert_{A_1}(c_1) = x_{\sigma_2} x_{\sigma_2}^{q_1} x_{\sigma_2}^{-1} = c_1$ which means $c_1 \in \mathrm{Fix}(\tau_1 \vert_{A_1})^\times$. Since $$x_{\sigma_2^{-q_1}} x_{\sigma_2^{q_1}} = a_{\sigma_2^{-q_1},\sigma_2^{q_1}} x_{\mathrm{id}} \in M^{\times},$$ it follows that $c_1^{-1} \in M^{\times}x_{\sigma_2^{-q_1}} \subseteq A_1$ as $\sigma_2^{-q_1} \in G_1$. Hence $\tau_1 \vert_{A_1}$ has inner order $q_1$, as indeed $(\tau_1 \vert_{A_1})^{q_1}: A_1 \rightarrow A_1, z \mapsto c_1 z c_1^{-1},$ is an inner automorphism.
Consider the algebra $$B_2 = A_1[t_1;\tau_1 \vert_{A_1}] /A_1[t_1;\tau_1 \vert_{A_1}](t_1^{q_1} - c_1)$$ with center \begin{align*} \mathrm{Cent}(B_2) & \supset \{ b \in A_1 \ \vert \ bh = hb \text{ for all } h \in B_2 \} = \mathrm{Cent}(A_1) \cap \mathrm{Fix}(\tau_1) \supset F. \end{align*} Now, using \eqref{eqn:proof petit (29) 2} and \eqref{eqn:proof petit (29) 6}, the map $$\phi: A_2 \rightarrow B_2, \ y x_{\sigma_2}^i \mapsto y t_1^i \qquad \qquad (y \in A_1),$$ can readily be seen to be an isomorphism between $F$-algebras. Indeed, it is clearly bijective and $F$-linear. In addition, we have \begin{align*} \phi \big( (y x_{\sigma_2}^i)(z x_{\sigma_2}^j) \big) &= \phi \big( y \tau_1 \vert_{A_1}^i(z) x_{\sigma_2}^i x_{\sigma_2}^j \big) \\ &= \begin{cases} \phi \big( y \tau_1 \vert_{A_1}^i(z) x_{\sigma_2}^{i+j} \big) & \text{ if } i+j < q_1, \\ \phi \big( y \tau_1 \vert_{A_1}^i(z) x_{\sigma_2}^{q_1} x_{\sigma_2}^{i+j-q_1} \big) & \text{ if } i+j \geq q_1, \end{cases} \\ &= \begin{cases} y \tau_1 \vert_{A_1}^i(z) t_1^{i+j} & \text{ if } i+j < q_1, \\ y \tau_1 \vert_{A_1}^i(z) c_1 t_1^{i+j-q_1} & \text{ if } i+j \geq q_1, \end{cases} \end{align*} by \eqref{eqn:proof petit (29) 6}, and \begin{align*} \phi(y x_{\sigma_2}^i) \circ \phi(z x_{\sigma_2}^j) &= (yt_1^i) \circ (zt_1^j) = \begin{cases} y \tau_1 \vert_{A_1}^i(z) t_1^{i+j} & \text{if } i+j < q_1, \\ y \tau_1 \vert_{A_1}^i(z) t_1^{i+j-q_1} c_1 & \text{if } i+j \geq q_1, \end{cases} \\ &= \begin{cases} y \tau_1 \vert_{A_1}^i(z) t_1^{i+j} & \text{ if } i+j < q_1, \\ y \tau_1 \vert_{A_1}^i(z) c_1 t_1^{i+j-q_1} & \text{ if } i+j \geq q_1, \end{cases} \end{align*} for all $y, z \in A_1$, $i, j \in \{ 0, \ldots, q_1-1 \}$. By distributivity we conclude $\phi$ is also multiplicative, thus an isomorphism between $F$-algebras. Continuing in this manner for $G_2 \triangleleft G_3$ etc. yields the assertion.
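For instance, if $G \cong S_3$ we may take the normal series $\{ 1 \} = G_0 \triangleleft G_1 \triangleleft G_2 = G$ with $G_1$ the alternating subgroup of order $3$, so that $k = 2$, $q_0 = 3$ and $q_1 = 2$; the construction above then yields the chain $$M = A_0 \subset A_1 = M(G_1) \subset A_2 = M(G) \subseteq A,$$ where $A_1 \cong M[t_0;\tau_0]/M[t_0;\tau_0](t_0^{3} - c_0)$ is a cyclic algebra of degree $3$ over $\mathrm{Fix}(G_1)$ and $A_2 \cong A_1[t_1;\tau_1 \vert_{A_1}]/A_1[t_1;\tau_1 \vert_{A_1}](t_1^{2} - c_1)$.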
\end{proof} \begin{remarks} \begin{itemize} \item[(i)] If $A$ is a division algebra, the algebras $A_i$ in Theorem \ref{thm:Petit (29)} are also division algebras, being finite-dimensional subalgebras of the division algebra $A$. \item[(ii)] The algebras $A_i$ in Theorem \ref{thm:Petit (29)} are associative, being subalgebras of the associative algebra $A$. Therefore $t_i^{q_i}-c_i \in A_i[t_i;\tau_i]$ are right invariant for all $i \in \{ 0, \ldots, k-1 \}$ by \cite[Theorem 4]{pumplun2015finite}. \end{itemize} \end{remarks} We obtain the following straightforward observations about Theorem \ref{thm:Petit (29)}: \begin{corollary} \label{cor:Observations on Theorem Petit(29)} Let $M/F$ be a field extension of degree $n$ with non-trivial solvable $G = \mathrm{Aut}_{F}(M)$, and $A$ be a central simple algebra of degree $n$ over $F$ with maximal subfield $M$. Consider the algebras $A_i = M(G_i)$ as in Theorem \ref{thm:Petit (29)}. \begin{itemize} \item[(i)] $A_i = \mathrm{Cent}_{A}(\mathrm{Fix}(G_i))$ for all $i \in \{ 0, \ldots, k-1 \}$. \item[(ii)] $A_i$ is a crossed product algebra over $Z_i = \mathrm{Fix}(G_i)$ of degree $|G_i| = \prod_{l=0}^{i-1} q_l$ for all $i \in \{ 1, \ldots, k-1 \}$. \item[(iii)] $M = Z_0 \supset \ldots \supset Z_{k-1} \supset Z_k \supset F$. \item[(iv)] $Z_{i-1} / Z_i$ has prime degree $q_{i-1}$ for all $i \in \{ 1, \ldots, k \}$. \item[(v)] $M/Z_i$ is a Galois field extension and $M$ is a maximal subfield of $A_i$ for all $i \in \{ 0, \ldots, k \}$. \item[(vi)] \cite[Corollary 10]{brown2017solvable} $A_i$ is a generalised cyclic algebra over $Z_i$ for all $i \in \{ 0, \ldots, k \}$. \item[(vii)] $A$ contains the cyclic algebra $$(M/\mathrm{Fix}(\sigma_1),\sigma_1,c_0) \cong M[t_0;\sigma_1]/M[t_0;\sigma_1](t_0^{q_0} - c_0),$$ of prime degree $q_0$ over $\mathrm{Fix}(\sigma_1)$. \end{itemize} \end{corollary} \begin{proof} \begin{itemize} \item[(i)] This is Lemma \ref{lem:Centralizer if Fix(H) in A}.
\item[(ii)] $A_i$ has degree $[M:\mathrm{Fix}(G_i)]$ which is equal to $|G_i|$ by the Fundamental Theorem of Galois Theory. \item[(iii)] Follows from the fact that $\{ 1 \} = G_0 \leq G_1 \leq \ldots \leq G_k = G$ and $Z_i = \mathrm{Fix}(G_i)$. \item[(iv)] We have $n = [M:F] = [M:Z_i][Z_i:F] = |G_i| [Z_i:F]$ for all $i$ by the Fundamental Theorem of Galois Theory, therefore $$[Z_{i-1}:Z_i] = \frac{[Z_{i-1}:F]}{[Z_i:F]} = \frac{|G_i|}{|G_{i-1}|} = q_{i-1},$$ for all $i \in \{1,\ldots, k\}$ as required. \item[(v)] $M/\mathrm{Fix}(G_i)$ is a Galois field extension with Galois group $G_i$ by Galois Theory. The rest of the assertion is trivial by Theorem \ref{thm:Petit (29)}. \item[(vii)] $G_1 = \langle \sigma_1 \rangle$ is a cyclic subgroup of $G$ of order $q_0$, therefore the result follows by Corollary \ref{cor:Petit (28)}. \end{itemize} \end{proof} \begin{corollary} \label{cor:non-central elements} (\cite[Corollaries 9, 11]{brown2017solvable}). Let $A$ be a central division algebra over $F$ containing a maximal subfield $M$ with non-trivial solvable $G = \mathrm{Aut}_F(M)$. \begin{itemize} \item[(i)] There is a non-central element $t_0 \in A$ such that $t_0^{q_0} \in \mathrm{Fix}(\sigma_1)^{\times}$ and $t_0^m \notin \mathrm{Fix}(\sigma_1)$ for all $m \in \{ 1, \ldots, q_0-1 \}$. \item[(ii)] $A$ contains a chain of generalised cyclic division algebras $A_i$ over intermediate fields $Z_i = \mathrm{Fix}(G_i)$ of $M/F$ as in \eqref{eqn:Petit (29) chain of subalgebras}. \end{itemize} \end{corollary} Our next result generalises \cite[(9)]{petit1968quasi} and characterises all the algebras with a maximal subfield $M/F$ that have a non-trivial solvable automorphism group $G = \mathrm{Aut}_{F}(M)$: \begin{theorem} \label{thm:Petit (29) Generalised} Let $M/F$ be a field extension of degree $n$ with non-trivial $G = \mathrm{Aut}_{F}(M)$, and $A$ be a central simple algebra of degree $n$ over $F$ containing $M$. 
Then $G$ is solvable if there exists a chain of subalgebras \begin{equation} \label{eqn:Petit (29) chain of subalgebras Galois} M = A_0 \subset A_1 \subset \ldots \subset A_k \subseteq A \end{equation} of $A$ which all have maximal subfield $M$, where $A_k$ is a $G$-crossed product algebra over $\mathrm{Fix}(G)$, and where \begin{equation} \label{eqn:Petit (29) chain of subalgebras Galois 2} A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i), \end{equation} for all $i \in \{ 0, \ldots, k-1 \}$, with \begin{itemize} \item[(i)] $q_i$ a prime, \item[(ii)] $\tau_i$ an $F$-automorphism of $A_i$ of inner order $q_i$ which restricts to an automorphism $\sigma_{i+1} \in G$, and \item[(iii)] $c_i \in \mathrm{Fix}(\tau_i)^\times$. \end{itemize} \end{theorem} \begin{proof} Suppose there exists a chain of algebras $A_i$, $i \in \{ 0, \ldots, k \}$ satisfying the above assumptions. Put $G_k = G$. Each $A_i$ has center $Z_i = Z_{i-1} \cap \mathrm{Fix}(\tau_{i-1})$ by Corollary \ref{cor:Comm(S_f) = F generalised}, so that by induction $$Z_i = \mathrm{Fix}(\tau_0) \cap \mathrm{Fix}(\tau_1) \cap \dots \cap \mathrm{Fix}(\tau_{i-1}) \supset F,$$ in particular $Z_i \subseteq M$. $M/Z_i$ is a Galois extension contained in $A_i$: Let $H_i$ be the subgroup of $G$ generated by $\sigma_1, \ldots, \sigma_i$, $i \in \{ 1, \ldots, k \}$. Then \begin{align*} Z_i &= \mathrm{Fix}(\tau_0) \cap \mathrm{Fix}(\tau_1) \cap \dots \cap \mathrm{Fix}(\tau_{i-1}) \\ &= M \cap \mathrm{Fix}(\tau_0) \cap \mathrm{Fix}(\tau_1) \cap \dots \cap \mathrm{Fix}(\tau_{i-1}) \\ &= \mathrm{Fix}(\sigma_1) \cap \dots \cap \mathrm{Fix}(\sigma_{i}) = \mathrm{Fix}(H_i), \end{align*} so $M/Z_i$ is a Galois field extension by Galois theory. Put $G_i = \mathrm{Gal}(M/Z_i)$, then each $A_i$ is a $G_i$-crossed product algebra. In particular, $G_i$ is a subgroup of $G_{i+1}$. We use induction to prove that each $G_i$, thus $G$, is a solvable group. 
For $i = 1$, $$A_1 \cong M[t_0;\sigma_1]/M[t_0;\sigma_1](t_0^{q_0}-c_0)$$ is a cyclic algebra of degree $q_0$ over $\mathrm{Fix}(\sigma_1)$. $G_1 = \langle \sigma_1 \rangle$ is a cyclic group of prime order $q_0$ and therefore solvable. We assume as induction hypothesis that if there exists a chain $$M = A_0 \subset \ldots \subset A_j$$ of algebras such that \eqref{eqn:Petit (29) chain of subalgebras Galois 2} holds for all $i \in \{ 0, \ldots, j-1 \}$, $j \geq 1$, then $G_j$ is solvable. For the induction step we take a chain of algebras $M = A_0 \subset \ldots \subset A_j\subset A_{j+1},$ $$A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} -c_i)$$ where $\tau_i$ is an automorphism of $A_i$ of inner order $q_i$ which induces an automorphism $\sigma_{i+1} \in G$, $c_i \in \mathrm{Fix}(\tau_i)$ is invertible and $q_i$ is prime, for all $i \in \{ 0, \ldots, j \}$. By the induction hypothesis, $G_j$ is a solvable group. We show that $G_{j+1}$ is solvable: $t_j$ is an invertible element of $$ A_{j+1}\cong A_j[t_j;\tau_j]/A_j[t_j;\tau_j](t_j^{q_j} - c_j),$$ with inverse $c_j^{-1}t_j^{q_j-1}$. $A_{j}$ is a $G_{j}$-crossed product algebra over $Z_{j}$ with maximal subfield $M$. The $F$-automorphism $\tau_j$ on $A_j$ satisfies $t_j l = \tau_j(l) t_j$ for all $l \in A_j$ which implies the inner automorphism $$I_{t_j}: A \rightarrow A, \ d \mapsto t_j d t_j^{-1}$$ restricts to $\tau_j$ on $A_j$ and so also restricts to $\sigma_{j+1}$ on $M$. For any $\sigma \in G$ there exists an invertible $x_{\sigma} \in A $ such that the inner automorphism $$I_{x_{\sigma}}: A \rightarrow A, \ y \mapsto x_{\sigma} y x_{\sigma}^{-1}$$ restricted to $M$ is $\sigma$ by Lemma \ref{lem:Petit (26)}. Hence we have $x_{\sigma_{j+1}} = t_j$ with $x_{\sigma_{j+1}}$ as defined in Lemma \ref{lem:Petit (26)}. We know that $\{ 1, t_j, \ldots, {t_j}^{q_j-1} \}$ is a basis for $A_{j+1}$ as a left $A_j$-module. 
By \eqref{eqn:Crossed product criteria 3} we have $x_{\sigma_{j+1}^2} = a_1{t_j}^2$, $x_{\sigma_{j+1}^3} = a_2 {t_j}^3, \dots$ for suitable $a_i \in M^\times$, so that w.l.o.g. $\{ 1, x_{\sigma_{j+1}}, \ldots, x_{\sigma_{j+1}^{q_j-1}} \}$ is a basis for $A_{j+1}$ as a left $A_j$-module. Since $A_j$ is a $G_j$-crossed product algebra, it has $\{ x_{\rho} \ \vert \ \rho \in G_j \}$ as $M$-basis, and hence $A_{j+1}$ has basis $$\{ x_{\rho} x_{\sigma_{j+1}^i} \ \vert \ \rho \in G_j, \ 0 \leq i \leq q_j - 1 \}$$ as a vector space over $M$. Additionally, $x_{\rho} x_{\sigma_{j+1}^i} \in M^{\times} x_{\rho \sigma_{j+1}^i}$ by Lemma \ref{lem:Petit (26)} (iii) and thus $A_{j+1}$ has the $M$-basis $$\{ x_{\rho \sigma_{j+1}^i} \ \vert \ \rho \in G_j, \ 0 \leq i \leq q_j-1 \}.$$ Now $A_{j+1}$ is a $G_{j+1}$-crossed product algebra and thus also has the $M$-basis $\{ x_{\sigma} \ \vert \ \sigma \in G_{j+1} \}$. We use these two bases to show that $G_{j+1} = G_{j} \langle \sigma_{j+1} \rangle$: Write $$x_{\rho \sigma_{j+1}^i} = \sum_{\sigma \in G_{j+1}} m_{\sigma} x_{\sigma}$$ for some $m_{\sigma} \in M$, not all zero. Then $$x_{\rho \sigma_{j+1}^i} m = \sum_{\sigma \in G_{j+1}} m_{\sigma} x_{\sigma} m = \sum_{\sigma \in G_{j+1}} m_{\sigma} \sigma(m) x_{\sigma},$$ and $$x_{\rho \sigma_{j+1}^i} m = \rho \sigma_{j+1}^i(m) x_{\rho \sigma_{j+1}^i} = \rho \sigma_{j+1}^i(m) \sum_{\sigma \in G_{j+1}} m_{\sigma} x_{\sigma},$$ for all $m \in M$. Let $\sigma \in G_{j+1}$ be such that $m_{\sigma} \neq 0$; then in particular $$m_{\sigma} \sigma(m) x_{\sigma} = \rho \sigma_{j+1}^i(m) m_{\sigma} x_{\sigma},$$ for all $m \in M$, that is, $\sigma = \rho \sigma_{j+1}^i$. This means that $\{ \rho \sigma_{j+1}^i \ | \ \rho \in G_j, \ 0 \leq i \leq q_j-1 \} \subseteq G_{j+1}$. Both sets have the same cardinality, so they must be equal, and we conclude $G_{j+1} = G_j \langle \sigma_{j+1} \rangle$.
Finally, we prove that $G_j$ is a normal subgroup of $G_{j+1}$: the inner automorphism $I_{x_{\sigma_{j+1}}}$ restricts to the $F$-automorphism $\tau_j$ of $A_j$. In particular, this implies $x_{\sigma_{j+1}} x_{\rho} x_{\sigma_{j+1}}^{-1} \in A_j,$ for all $\rho \in G_j$. Furthermore, $$x_{\sigma_{j+1} \rho \sigma_{j+1}^{-1}} \in M^{\times} x_{\sigma_{j+1}} x_{\rho} x_{\sigma_{j+1}^{-1}} = M^{\times} x_{\sigma_{j+1}} x_{\rho} x_{\sigma_{j+1}}^{-1} \subset A_j,$$ for all $\rho \in G_j$ by Lemma \ref{lem:Petit (26)}. Hence $\sigma_{j+1} \rho \sigma_{j+1}^{-1} \in G_j$ because $A_j$ is a $G_j$-crossed product algebra. Similarly, we see $\sigma_{j+1}^r \rho \sigma_{j+1}^{-r} \in G_j$ for all $r \in \mathbb{N}$. Let $g \in G_{j+1}$ be arbitrary and write $g = h \sigma_{j+1}^r$ for some $h \in G_j$, $r \in \{ 0, \ldots, q_j-1 \}$, which we can do because $G_{j+1} = G_j \langle \sigma_{j+1} \rangle$. Then \begin{align*} g \rho g^{-1} &= ( h \sigma_{j+1}^r) \rho ( h \sigma_{j+1}^r)^{-1} = h (\sigma_{j+1}^r \rho \sigma_{j+1}^{-r})h^{-1} \in G_j, \end{align*} for all $\rho \in G_j$, so $G_j$ is indeed normal. It is well known that if a group $G$ has a normal subgroup $H$ such that both $H$ and $G/H$ are solvable, then $G$ itself is solvable. Now $G_{j+1}/G_j$ is cyclic, hence solvable, and $G_j$ is solvable by the induction hypothesis, which implies $G_{j+1}$ is solvable as required. \end{proof} \section{Solvable Crossed Product Algebras} \label{section:Solvable Crossed Product Algebras} We now focus on the case when $M/F$ is a Galois field extension: Suppose $M/F$ is a finite Galois field extension of degree $n$ and $A$ is a central simple algebra of degree $n$ over $F$ with maximal subfield $M$; that is, $A$ is now a $G$-crossed product algebra where $G = \mathrm{Gal}(M/F)$.
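To illustrate the shape of the chain from Theorem \ref{thm:Petit (29)} in the smallest non-cyclic case, suppose $n = 4$ and $G \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ with $\{ 1 \} = G_0 \leq G_1 = \langle \sigma_1 \rangle \leq G_2 = G$, so that $k = 2$ and $q_0 = q_1 = 2$. The chain then reads
\begin{align*}
M = A_0 &\subset A_1 \cong M[t_0;\sigma_1]/M[t_0;\sigma_1](t_0^{2} - c_0) \\
&\subset A_2 \cong A_1[t_1;\tau_1]/A_1[t_1;\tau_1](t_1^{2} - c_1) = A,
\end{align*}
for suitable $c_0, c_1$ and $\tau_1$ as in Theorem \ref{thm:Petit (29)}, where $A_1$ is a quaternion algebra over $\mathrm{Fix}(\sigma_1)$ and $A$ is a $G$-crossed product algebra of degree $4$ over $F$; this is precisely the case treated by Albert \cite[p.~186]{albert1939structure}.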
We obtain the following as a special case of Theorem \ref{thm:Petit (29)} and Corollary \ref{cor:Observations on Theorem Petit(29)}: \begin{theorem} \label{thm:Petit (29) solvable crossed product algebra version} Let $A$ be a $G$-crossed product algebra of degree $n$ over $F$ with maximal subfield $M$ such that $M/F$ is a Galois field extension of degree $n$ with non-trivial solvable $G = \mathrm{Gal}(M/F)$. Then there exists a chain of subalgebras \begin{equation*} M = A_0 \subset A_1 \subset \ldots \subset A_k = M(G) = A \end{equation*} of $A$ which are generalised cyclic algebras \begin{equation*} A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i), \end{equation*} of degree $\prod_{l=0}^{i-1} q_l$ over $Z_i = \mathrm{Fix}(G_i)$ for all $i \in \{ 0, \ldots, k-1 \}$, such that \begin{itemize} \item[(i)] $q_i$ a prime, \item[(ii)] $\tau_i$ an $F$-automorphism of $A_i$ of inner order $q_i$ which restricts to an automorphism $\sigma_{i+1} \in G$, \item[(iii)] $c_i \in \mathrm{Fix}(\tau_i)^{\times}$, and \item[(iv)] $Z_{i-1}/Z_i$ has prime degree $q_{i-1}$ and $A_i$ is the centralizer of $\mathrm{Fix}(G_i)$ in $A$. \end{itemize} \end{theorem} \begin{theorem} \label{thm:crossed product division condition} In the set-up of Theorem \ref{thm:Petit (29) solvable crossed product algebra version}, $A$ is a division algebra if and only if \begin{equation} \label{eqn:crossed product division condition} b \tau_i(b) \cdots \tau_i^{q_i -1}(b) \neq c_i, \end{equation} for all $b \in A_i$, for all $i \in \{ 0, \ldots, k-1 \}$. \end{theorem} \begin{proof} Suppose $A_k = A$ is a division algebra; then every subalgebra of $A$ must also be a division algebra, as $A$ is finite-dimensional. Therefore the $A_i$ are division algebras for all $i \in \{ 0, \ldots, k \}$.
In particular, this means the $t_j^{q_j} - c_j \in A_j[t_j;\tau_j]$ are irreducible by Theorem \ref{thm:S_f_division_iff_irreducible}, thus \eqref{eqn:crossed product division condition} holds for all $b \in A_i$ by \cite[Theorem 1.3.16]{jacobson1996finite}. Conversely, suppose \eqref{eqn:crossed product division condition} holds for all $b \in A_i$, for all $i \in \{ 0, \ldots, k-1 \}$. We prove by induction that $A_j$ is a division algebra for all $j \in \{ 0, \ldots, k\}$; then in particular $A = A_k$ is a division algebra: Clearly $A_0 = M$ is a field, so in particular is a division algebra. Assume as induction hypothesis that $A_j$ is a division algebra for some $j \in \{ 0, \ldots, k-1 \}$. By the proof of Theorem \ref{thm:Petit (29)}, $\tau_j^{q_j}$ is the inner automorphism $z \mapsto c_j z c_j^{-1}$ on $A_j$ and $\tau_j$ has inner order $q_j$. Then $A_{j+1} \cong A_j[t_j;\tau_j]/A_j[t_j;\tau_j](t_j^{q_j} - c_j)$ is a division algebra if and only if $t_j^{q_j} - c_j \in A_j[t_j;\tau_j]$ is irreducible, if and only if $$b \tau_j(b) \cdots \tau_j^{q_j -1}(b) \neq c_j,$$ for all $b \in A_j$ by \cite[Theorem 1.3.16]{jacobson1996finite}. Thus $A_i$ is a division algebra for all $i \in \{ 0, \ldots, k \}$ by induction. \end{proof} The following result follows immediately from Theorem \ref{thm:Petit (29) Generalised}: \begin{corollary} \label{cor:Characterisation of solvable crossed product algebras} Let $A$ be a $G$-crossed product algebra of degree $n$ over $F$ with maximal subfield $M$ such that $M/F$ is a Galois field extension of degree $n$ with non-trivial $G = \mathrm{Gal}(M/F)$.
Then $G$ is solvable if there exists a chain of subalgebras \begin{equation*} M = A_0 \subset A_1 \subset \ldots \subset A_k = A \end{equation*} of $A$ which all have maximal subfield $M$, and are generalised cyclic algebras \begin{equation*} A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i), \end{equation*} over their centers for all $i \in \{ 0, \ldots, k-1 \}$, where $q_i$ is a prime, $\tau_i$ is an $F$-automorphism of $A_i$ of inner order $q_i$ which restricts to an automorphism $\sigma_{i+1} \in G$, and $c_i \in \mathrm{Fix}(\tau_i)^{\times}$. \end{corollary} \begin{remark} Let $M/F$ be a finite Galois field extension with non-trivial solvable Galois group $G$ and $A$ be a solvable crossed product algebra over $F$ with maximal subfield $M$. Careful reading of \cite[p.~182-187]{albert1939structure} shows that Albert constructs the same chain of algebras $$A_{i+1} = A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i}-c_i)$$ inside a solvable crossed product $A$ as we do in Theorem \ref{thm:Petit (29) solvable crossed product algebra version}. However, they are not explicitly identified there as quotient algebras of skew polynomial rings. We also obtain the converse of Albert's statement in Corollary \ref{cor:Characterisation of solvable crossed product algebras}. Furthermore, neither Theorems \ref{thm:Petit (29)}, \ref{thm:Petit (29) Generalised}, nor Corollaries \ref{cor:Observations on Theorem Petit(29)} and \ref{cor:non-central elements} require $M/F$ to be a Galois field extension, unlike Albert's result which requires $M/F$ to be Galois. \end{remark} \section{Some Applications to \texorpdfstring{$G$}{G}-Admissible Groups} \label{section:G-Admissible Groups} The following definition is due to Schacher \cite{schacher1968subfields}: \begin{definition} A finite group $G$ is called \textbf{admissible} over a field $F$ if there exists a $G$-crossed product division algebra over $F$.
\end{definition} Suppose $G$ is a finite solvable group; then there is a subnormal series $\{ 1 \} = G_0 \leq \ldots \leq G_k = G$ such that $G_j \triangleleft G_{j+1}$ and $G_{j+1}/G_j$ is cyclic of prime order $q_{j}$ for all $j \in \{ 0, \ldots, k-1 \}$. If $G$ is admissible over $F$, then Theorem \ref{thm:Petit (29)} implies that the subgroups $G_i$ of $G$ in the series are admissible over suitable intermediate fields of $M/F$: \begin{theorem} \label{thm:subgroups of G are admissible} Suppose $G$ is a finite solvable group which is admissible over a field $F$. Then each $G_i$ is admissible over the intermediate field $Z_i = \mathrm{Fix}(G_i)$ of $M/F$. Furthermore, $[Z_i:F] = \prod_{j=i}^{k-1} q_j$ for all $i \in \{ 1, \ldots, k \}$. In particular, $G_{k-1}$ is admissible over $Z_{k-1} = \mathrm{Fix}(G_{k-1})$, which has prime degree $q_{k-1}$ over $F$. \end{theorem} \begin{proof} As $G$ is $F$-admissible, there exists a $G$-crossed product division algebra $D$ over $F$. By Theorem \ref{thm:Petit (29) solvable crossed product algebra version}, there also exists a chain of $G_i$-crossed product division algebras $A_i$ over $Z_i$ with maximal subfield $M$, and $M/Z_i$ is a Galois field extension with $G_i = \mathrm{Gal}(M/Z_i)$. This means $G_i$ is $Z_i$-admissible. \end{proof} \begin{example} Let $G = \mathbf{S_4}$; then $G$ is $\mathbb{Q}$-admissible \cite[Theorem 7.1]{schacher1968subfields}, so there exists a finite-dimensional associative central division algebra $D$ over $\mathbb{Q}$, with maximal subfield $M$ such that $M/\mathbb{Q}$ is a finite Galois field extension and $\mathrm{Gal}(M/\mathbb{Q}) = G$. Furthermore, $G$ is a finite solvable group; indeed, we have the subnormal series $$\{ \mathrm{id} \} \lhd \langle (12)(34) \rangle \lhd \mathbf{K} \lhd \mathbf{A_4} \lhd \mathbf{S_4},$$ where $\mathbf{K}$ is the Klein four-group and $\mathbf{A_4}$ is the alternating group.
We have $$\mathbf{S_4}/\mathbf{A_4} \cong \mathbb{Z}/2\mathbb{Z}, \ \mathbf{A_4}/\mathbf{K} \cong \mathbb{Z}/3\mathbb{Z},$$ $$\mathbf{K}/ \langle (12)(34) \rangle \cong \mathbb{Z}/2\mathbb{Z} \quad \text{ and} \quad \langle (12)(34) \rangle /\{ \mathrm{id} \} \cong \mathbb{Z}/2\mathbb{Z}.$$ By Theorem \ref{thm:Petit (29) solvable crossed product algebra version}, there exists a corresponding chain of division algebras $$M = A_0 \subset A_1 \subset A_2 \subset A_3 \subset A_4 = D$$ over $\mathbb{Q}$, such that $$A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i),$$ for all $i \in \{ 0, 1, 2, 3 \}$, where $\tau_i$ is an automorphism of $A_i$, whose restriction to $M$ is $\sigma_{i+1} \in G$, $c_i \in \mathrm{Fix}(\tau_i)^{\times}$ and $\tau_i$ has inner order $2, 2, 3, 2$ for $i = 0, 1, 2, 3$ respectively. Moreover $q_0 = q_1 = q_3 = 2$ and $q_2 = 3$, and $A_i$ has degree $\prod_{l=0}^{i-1} q_l$ over its center $Z_i$ for all $i \in \{ 1, 2, 3, 4 \}$ by Corollary \ref{cor:Observations on Theorem Petit(29)}. In addition, by Theorem \ref{thm:subgroups of G are admissible} we conclude: \begin{itemize} \item[(i)] $\mathbf{A_4}$ is admissible over $Z_3$, where $Z_3 \subset M$ is a quadratic field extension of $\mathbb{Q}$. \item[(ii)] $\mathbf{K}$ is admissible over $Z_2\subset M$, where $Z_2$ is a simple field extension of $Z_3$ of degree $q_2 = 3$. Therefore $Z_2$ is a simple field extension of $\mathbb{Q}$ of degree $q_2 q_3 = 6$ as any finite field extension of $\mathbb{Q}$ is simple. \item[(iii)] $\langle (12)(34) \rangle$ is admissible over $Z_1 \subset M$, where $Z_1$ is a simple field extension of $Z_2$ of degree $q_1 = 2$. Therefore $Z_1$ is a simple extension of $\mathbb{Q}$ of degree $q_1 q_2 q_3 = 12$. \end{itemize} \end{example} Schacher proved that for every finite group $G$, there exists an algebraic number field $F$ such that $G$ is admissible over $F$ \cite[Theorem 9.1]{schacher1968subfields}. 
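The subnormal series and factor group orders in the example above can also be double-checked computationally. The following SymPy sketch is an illustrative aside (the realisation of the groups on the points $\{0,1,2,3\}$ is our choice, with $(12)(34)$ encoded as the permutation swapping $0 \leftrightarrow 1$ and $2 \leftrightarrow 3$):

```python
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import SymmetricGroup, AlternatingGroup

# Subnormal series {id} < <(12)(34)> < K < A4 < S4, on the points {0,1,2,3}.
S4 = SymmetricGroup(4)
A4 = AlternatingGroup(4)
K = PermutationGroup(Permutation(0, 1)(2, 3), Permutation(0, 2)(1, 3))
H = PermutationGroup(Permutation(0, 1)(2, 3))   # <(12)(34)>
E = PermutationGroup(Permutation([0, 1, 2, 3]))  # trivial group

chain = [E, H, K, A4, S4]
orders = [G.order() for G in chain]                      # group orders 1, 2, 4, 12, 24
primes = [orders[i + 1] // orders[i] for i in range(4)]  # q_0, q_1, q_2, q_3
normal = [chain[i].is_normal(chain[i + 1]) for i in range(4)]  # each step normal
```

Here `orders` comes out as `[1, 2, 4, 12, 24]`, `primes` recovers $q_0, q_1, q_2, q_3 = 2, 2, 3, 2$, each subgroup in the series is normal in the next, and `S4.is_solvable` confirms solvability.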
Combining Schacher's theorem with Theorem \ref{thm:Petit (29) solvable crossed product algebra version} we obtain: \begin{corollary} Let $G$ be a finite solvable group. Then there exists an algebraic number field $F$ and a $G$-crossed product division algebra $D$ over $F$. Furthermore, there exists a chain of crossed product division algebras $$M = A_0 \subset A_1 \subset \ldots \subset A_k = D$$ over $F$, such that $$A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i),$$ for all $i \in \{ 0, \ldots, k-1 \}$, and satisfying \begin{itemize} \item[(i)] $q_i$ is the prime order of the factor group $G_{i+1}/G_i$ in the subnormal series \eqref{eqn:Subnormal series} which exists because $G$ is solvable, \item[(ii)] $\tau_i$ is an automorphism of $A_i$ of inner order $q_i$ which restricts to $\sigma_{i+1} \in G$ on $M$, \item[(iii)] $c_i \in \mathrm{Fix}(\tau_i)$ is invertible. \end{itemize} \end{corollary} \begin{proof} Such a field $F$ and division algebra $D$ exist by \cite[Theorem 9.1]{schacher1968subfields}. The assertion then follows by Theorem \ref{thm:Petit (29) solvable crossed product algebra version}. \end{proof} In \cite[Theorem 1]{sonn1983admissibility}, Sonn proved that a finite solvable group is admissible over $\mathbb{Q}$ if and only if all its Sylow subgroups are metacyclic, i.e., every Sylow subgroup $H$ of $G$ has a cyclic normal subgroup $N$ such that $H/N$ is also cyclic. Combining this with Theorem \ref{thm:Petit (29) solvable crossed product algebra version} yields: \begin{corollary} \label{cor:Q-admissible solvable petit (29) construction} Let $G$ be a finite solvable group such that all its Sylow subgroups are metacyclic.
Then there exists a $G$-crossed product division algebra $D$ over $\mathbb{Q}$, and a chain of crossed product division algebras $$M = A_0 \subset A_1 \subset \ldots \subset A_k = D$$ over $\mathbb{Q}$, such that $$A_{i+1} \cong A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i} - c_i),$$ for all $i \in \{ 0, \ldots, k-1 \}$, satisfying \begin{itemize} \item[(i)] $q_i$ is the prime order of the factor group $G_{i+1}/G_i$ in the subnormal series \eqref{eqn:Subnormal series} which exists because $G$ is solvable, \item[(ii)] $\tau_i$ is an automorphism of $A_i$ of inner order $q_i$ which restricts to $\sigma_{i+1} \in G$ on $M$, \item[(iii)] $c_i \in \mathrm{Fix}(\tau_i)$ is invertible. \end{itemize} \end{corollary} \begin{proof} Such a division algebra $D$ exists by \cite[Theorem 1]{sonn1983admissibility}. The assertion then follows by Theorem \ref{thm:Petit (29) solvable crossed product algebra version}. \end{proof} \section[How to Construct Abelian Crossed Product Division Algebras]{How to Construct Crossed Product Division Algebras Containing a Given Abelian Galois Field Extension as a Maximal Subfield} \label{section:How to Construct Crossed Product Division Algebras Containing a Given Abelian Galois Field Extension as a Maximal Subfield} Let $M/F$ be a Galois field extension of degree $n$ with abelian Galois group $G = \mathrm{Gal}(M/F)$. We now show how to canonically construct crossed product division algebras of degree $n$ over $F$ containing $M$ as a subfield. This generalises a result by Albert in which $n = 4$ and $G \cong \mathbb{Z}_2 \times \mathbb{Z}_2$ \cite[p.~186]{albert1939structure}, cf. also \cite[Theorem 2.9.55]{jacobson1996finite}: For $n = 4$, every central division algebra containing a quartic abelian extension $M$ with Galois group $\mathbb{Z}_2 \times \mathbb{Z}_2$ can be obtained this way \cite[p.~186]{albert1939structure}, that is, as a generalised cyclic algebra $(D,\tau,c)$ with $D$ a quaternion algebra over its center.
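The smallest instance of such a generalised cyclic algebra can be checked by hand: take $M = \mathbb{Q}(i)$, $\tau$ complex conjugation (so the degree is $2$) and $c = -1$; these particular choices are ours, purely for illustration. The norm of $z = a + bi$ is $z\bar{z} = a^2 + b^2 \geq 0$, so condition \eqref{eqn:crossed product division condition} holds and $(\mathbb{Q}(i)/\mathbb{Q}, \tau, -1) \cong \mathbb{Q}(i)[t;\tau]/\mathbb{Q}(i)[t;\tau](t^2 + 1)$ is a division algebra, namely Hamilton's quaternions over $\mathbb{Q}$. A small SymPy sketch of the norm computation:

```python
from sympy import I, Rational, conjugate, expand

def norm(a, b):
    """Return N(a + b*i) = (a + b*i)(a - b*i) = a^2 + b^2 in Q(i).

    Since the result is a sum of squares of rationals, it is never -1;
    that is the division condition for the (illustrative) cyclic algebra
    (Q(i)/Q, conjugation, -1), i.e. Hamilton's quaternions over Q.
    """
    z = a + b * I
    return expand(z * conjugate(z))
```

For instance $N(\tfrac{3}{5} + \tfrac{4}{5}i) = 1$ and $N(2 + 3i) = 13$; every value is a non-negative rational, so the constant $-1$ is never attained.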
Another way to construct such a crossed product algebra is via generic algebras, using a process going back to Amitsur and Saltman \cite{amitsur1978generic}, described also in \cite[\S 4.6]{jacobson1996finite}. As $G$ is a finite abelian group, we have a chain of subgroups $$\{ 1 \} = G_0 \leq \ldots \leq G_k = G,$$ such that $G_{j} \triangleleft G_{j+1}$ and $G_{j+1} / G_{j}$ is cyclic of prime order $q_{j} > 1$ for all $j \in \{ 0, \ldots, k-1 \}$. We use this chain to construct the algebras we want: $G_1 = \langle \sigma_1 \rangle$ is cyclic of prime order $q_0 > 1$ for some $\sigma_1 \in G$. Let $\tau_0 = \sigma_1$. Choose any $c_0 \in F^{\times}$ that satisfies $z \tau_0(z) \cdots \tau_0^{q_0-1}(z) \neq c_0$ for all $z \in M$ and define $f(t_0)=t_0^{q_0}-c_0\in M[t_0;\tau_0].$ Since $\tau_0$ has order $q_0$, we have $\tau_0^{q_0}(z) c_0 = z c_0 = c_0 z$ for all $z \in M$, and so $f(t_0)$ is right invariant by Theorem \ref{thm:Properties of S_f petit}. Therefore we see that $$A_1 = M[t_0;\tau_0] /M[t_0;\tau_0](t_0^{q_0}-c_0)$$ is an associative algebra which is cyclic of degree $q_0$ over $\mathrm{Fix}(\tau_0)$. Moreover, $f(t_0)$ is irreducible by \cite[Theorem 2.6.20(i)]{jacobson1996finite} and therefore $A_1$ is a division algebra. Now $G_2/G_1$ is cyclic of prime order $q_1$, say $G_2/G_1 = \{\sigma_2^i G_1 \ \vert \ i \in \mathbb{Z} \}$ for some $\sigma_2 \in G_2$. As $\sigma_2^{q_1} \in G_1$, we have $\sigma_2^{q_1} = \sigma_1^{\mu}$ for some $\mu \in \{ 0, \ldots, q_0-1 \}$. Define $c_1 = l_1 t_0^{\mu}$ for some $l_1 \in F^{\times}$ and define the map $$\tau_1: A_1\rightarrow A_1, \ \sum_{i=0}^{q_0-1} m_i t_0^i \mapsto \sum_{i=0}^{q_0-1} \sigma_2(m_i) t_0^i,$$ which is an automorphism of $A_1$ by a straightforward calculation. Denote the multiplication in $A_1$ by $\circ$.
Then $$\tau_1(c_1) = \sigma_2(l_1) t_0^{\mu} = l_1 t_0^{\mu} = c_1.$$ We have \begin{align*} \tau_1^{q_1} \Big( \sum_{i=0}^{q_0-1} m_i t_0^i \Big) \circ c_1 &= \sum_{i=0}^{q_0-1} \sigma_2^{q_1}(m_i) t_0^i \circ l_1 t_0^{\mu} = \sum_{i=0}^{q_0-1} l_1 \sigma_1^{\mu}(m_i) t_0^i \circ t_0^{\mu} \end{align*} and \begin{align*} c_1 \circ \sum_{i=0}^{q_0-1} m_i t_0^i &= l_1 t_0^{\mu} \circ \sum_{i=0}^{q_0-1} m_i t_0^i = \sum_{i=0}^{q_0-1} l_1 \sigma_1^{\mu}(m_i) t_0^{\mu} \circ t_0^i \end{align*} for all $m_i \in M$. Hence $\tau_1^{q_1}(z) \circ c_1 = c_1 \circ z$ for all $z \in A_1$ and $\tau_1(c_1) = c_1$, thus $f(t_1)=t_1^{q_1}-c_1\in A_1[t_1;\tau_1]$ is right invariant by Proposition \ref{prop:f(t) two-sided delta=0 generalised} and $$A_2 = A_1[t_1;\tau_1]/A_1[t_1;\tau_1](t_1^{q_1}-c_1)$$ is a finite-dimensional associative algebra over $$\mathrm{Comm}(A_2) \cap A_1 = \mathrm{Fix}(\tau_1) \cap \mathrm{Cent}(A_1) = \mathrm{Fix}(\tau_1) \cap \mathrm{Fix}(\tau_0) \supset F$$ by Proposition \ref{prop:Comm(S_f) delta=0 generalised}(iii). Again, $G_3/G_2$ is cyclic of prime order $q_2$, say $G_3/G_2 = \{\sigma_3^i G_2 \ \vert \ i \in \mathbb{Z} \}$ for some $\sigma_3 \in G$ with $\sigma_3^{q_2} \in G_2$. Write $\sigma_3^{q_2} = \sigma_2^{\lambda_1} \sigma_1^{\lambda_0}$ for some $\lambda_1 \in \{ 0, \ldots, q_1-1 \}$ and $\lambda_0 \in \{ 0, \ldots, q_0-1 \}$. The map $$H_{\sigma_3}: A_1 \rightarrow A_1, \ \sum_{i=0}^{q_0-1} m_i t_0^i \mapsto \sum_{i=0}^{q_0-1} \sigma_3(m_i)t_0^i,$$ is an automorphism of $A_1$ by a straightforward calculation. Define $$\tau_2: A_2 \rightarrow A_2, \ \sum_{i=0}^{q_1-1} x_i t_1^i \mapsto \sum_{i=0}^{q_1-1} H_{\sigma_3}(x_i) t_1^i \ (x_i \in A_1).$$ Then a straightforward calculation using that $H_{\sigma_3}$ commutes with $\tau_1$ and $H_{\sigma_3}(c_1) = c_1$ shows that $\tau_2$ is an automorphism of $A_2$. Define $c_2 = l_2 t_0^{\lambda_0} t_1^{\lambda_1}$ for some $l_2 \in F^{\times}$. 
Denote the multiplication in $A_i$ by $\circ_{A_i}$ and let $x_i = \sum_{j=0}^{q_0-1} y_{ij} t_0^j \in A_1$, $y_{ij} \in M$, $i \in \{ 0, \ldots, q_1-1 \}$. Then $$\tau_2(c_2) = \tau_2(l_2 t_0^{\lambda_0} t_1^{\lambda_1}) = H_{\sigma_3}(l_2 t_0^{\lambda_0}) t_1^{\lambda_1} = l_2 t_0^{\lambda_0} t_1^{\lambda_1} = c_2.$$ Furthermore we have \begin{align*} \tau_2^{q_2} \Big( \sum_{i=0}^{q_1-1} x_i t_1^i \Big) \circ_{A_2} c_2 &= \sum_{i=0}^{q_1-1} H_{\sigma_3}^{q_2}(x_i) t_1^i \circ_{A_2} l_2 t_0^{\lambda_0} t_1^{\lambda_1} \\ &= \sum_{i=0}^{q_1-1} \sum_{j=0}^{q_0-1} \sigma_3^{q_2}(y_{ij}) t_0^j t_1^i \circ_{A_2} l_2 t_0^{\lambda_0} t_1^{\lambda_1} \\ &= \sum_{i=0}^{q_1-1} \sum_{j=0}^{q_0-1} \sigma_2^{\lambda_1}(\sigma_1^{\lambda_0}(y_{ij})) t_0^j t_1^i \circ_{A_2} l_2 t_0^{\lambda_0} t_1^{\lambda_1} \\ &= \sum_{i=0}^{q_1-1} \Big( \sum_{j=0}^{q_0-1} \sigma_2^{\lambda_1}(\sigma_1^{\lambda_0}(y_{ij})) t_0^j \circ_{A_1} \tau_1^i(l_2 t_0^{\lambda_0}) \Big) t_1^i \circ_{A_2} t_1^{\lambda_1} \\ &= \sum_{i=0}^{q_1-1} \Big( \sum_{j=0}^{q_0-1} l_2 \sigma_2^{\lambda_1}(\sigma_1^{\lambda_0}(y_{ij})) t_0^j \circ_{A_1} t_0^{\lambda_0} \Big) t_1^i \circ_{A_2} t_1^{\lambda_1}, \end{align*} and \begin{align*} c_2 \circ_{A_2} \sum_{i=0}^{q_1-1} x_i t_1^i &= l_2 t_0^{\lambda_0} t_1^{\lambda_1} \circ_{A_2} \sum_{i=0}^{q_1-1} x_i t_1^i = \sum_{i=0}^{q_1-1} \big( l_2 t_0^{\lambda_0} \circ_{A_1} \tau_1^{\lambda_1}(x_i) \big) t_1^{\lambda_1} \circ_{A_2} t_1^i \\ &= \sum_{i=0}^{q_1-1} \sum_{j=0}^{q_0-1} \big( l_2 t_0^{\lambda_0} \circ_{A_1} \sigma_2^{\lambda_1}(y_{ij}) t_0^j \big) t_1^{\lambda_1} \circ_{A_2} t_1^i \\ &= \sum_{i=0}^{q_1-1} \Big( \sum_{j=0}^{q_0-1} l_2 \sigma_1^{\lambda_0}(\sigma_2^{\lambda_1}(y_{ij})) t_0^{\lambda_0} \circ_{A_1} t_0^j \Big) t_1^{\lambda_1} \circ_{A_2} t_1^i. 
\end{align*} Hence $\tau_2^{q_2}(z) \circ_{A_2} c_2 = c_2 \circ_{A_2} z$ for all $z \in A_2$ and $\tau_2(c_2) = c_2$, therefore $f(t_2)=t_2^{q_2}-c_2\in A_2[t_2;\tau_2]$ is right invariant by Proposition \ref{prop:f(t) two-sided delta=0 generalised} and thus $$A_3 = A_2[t_2;\tau_2]/A_2[t_2;\tau_2](t_2^{q_2}-c_2)$$ is a finite-dimensional associative algebra over $$\mathrm{Comm}(A_3) \cap A_2 = \mathrm{Fix}(\tau_2) \cap \mathrm{Cent}(A_2) = \mathrm{Fix}(\tau_0) \cap \mathrm{Fix}(\tau_1) \cap \mathrm{Fix}(\tau_2) \supset F$$ by Proposition \ref{prop:Comm(S_f) delta=0 generalised}(iii). Continuing in this manner we obtain a chain $M = A_0 \subset \ldots \subset A_k$ of finite-dimensional associative algebras $$A_{i+1} = A_i[t_i;\tau_i]/A_i[t_i;\tau_i](t_i^{q_i}-c_i)$$ over $$\mathrm{Fix}(\tau_i) \cap \mathrm{Cent}(A_i) = \mathrm{Fix}(\tau_0) \cap \mathrm{Fix}(\tau_1) \cap \dots \cap \mathrm{Fix}(\tau_i) \supset F,$$ for all $i \in \{ 0, \ldots, k-1 \}$, where $\tau_0 = \sigma_1$ and $\tau_i$ restricts to $\sigma_{i+1}$ on $M$ for all $i \in \{ 0, \ldots,k-1 \}$. Moreover, $$[A_i:M] = [A_i: A_{i-1}] \cdots [A_1:M] =\prod_{l=0}^{i-1} q_l$$ hence $$[A_k:F] = \big( \prod_{l=0}^{k-1} q_l\big) n = n^2,$$ and $A_k$ contains $M$ as a subfield. Let us furthermore assume that each $c_i$ above, $i \in \{ 0, \ldots, k-1 \}$, is successively chosen such that \begin{equation} \label{eqn:last} z \tau_i(z) \cdots \tau_i^{q_i-1}(z)\neq c_i \end{equation} for all $z \in A_i$; then, using that $\tau_i$ has inner order $q_i$ (see Lemma \ref{lem:tau_i has inner order q_i} below) and $f(t_i) = t_i^{q_i} - c_i \in A_i[t_i;\tau_i]$ is an irreducible twisted polynomial, we conclude $A_{i+1}$ is a division algebra by \cite[Theorem 1.3.16]{jacobson1996finite}. \begin{lemma} \label{lem:tau_i has inner order q_i} For all $i \in \{ 0, \ldots, k-1 \}$, $\tau_i:A_i \rightarrow A_i$ has inner order $q_i$. \end{lemma} \begin{proof} The automorphism $\tau_0 = \sigma_1: M \rightarrow M$ has inner order $q_0$. Fix $i \in \{ 1, \ldots, k-1 \}$.
$A_i$ is finite-dimensional over $F$, so it is also finite-dimensional over its center $\mathrm{Cent}(A_i) \supset F$. Recall that $\tau_i^{q_i}(z) c_i = c_i z$ for all $z \in A_i$, in particular $\tau_i^{q_i}\vert_{\mathrm{Cent}(A_i)} = \mathrm{id}$. As $q_i$ is prime this means either $\tau_i \vert_{\mathrm{Cent}(A_i)} = \mathrm{id}$ or $\tau_i \vert_{\mathrm{Cent}(A_i)}$ has order $q_i>1$. Assume that $\tau_i \vert_{\mathrm{Cent}(A_i)} = \mathrm{id}$; then $\tau_i$ is an inner automorphism of $A_i$ by the Skolem-Noether Theorem, say $\tau_i(z) = uzu^{-1}$ for some invertible $u \in A_i$, for all $z \in A_i$. In particular, $\tau_i(m)= \sigma_{i+1}(m) = umu^{-1}$ for all $m \in M$. Write $u = \sum_{j=0}^{q_{i-1}-1} u_j t_{i-1}^j$ for some $u_j \in A_{i-1}$, thus \begin{align*} \sigma_{i+1}(m)u &= \sigma_{i+1}(m) \sum_{j=0}^{q_{i-1}-1} u_j t_{i-1}^j = \sum_{j=0}^{q_{i-1}-1} u_j t_{i-1}^j m \\ &= \sum_{j=0}^{q_{i-1}-1} u_j \tau_{i-1}^j(m) t_{i-1}^j = \sum_{j=0}^{q_{i-1}-1} u_j \sigma_{i}^j(m) t_{i-1}^j, \end{align*} for all $m \in M$. Choose $\eta_{i}$ with $u_{\eta_{i}} \neq 0$; then \begin{equation} \label{eqn:tau_i is not inner 1} \sigma_{i+1}(m) u_{\eta_{i}} = u_{\eta_{i}} \sigma_{i}^{\eta_{i}}(m), \end{equation} for all $m \in M$. If $i = 1$ we are done. If $i \geq 2$ then we can also write $u_{\eta_{i}} = \sum_{l=0}^{q_{i-2}-1} w_l t_{i-2}^l$ for some $w_l \in A_{i-2}$, therefore \eqref{eqn:tau_i is not inner 1} yields \begin{align*} \sigma_{i+1}(m) \sum_{l=0}^{q_{i-2}-1} w_l t_{i-2}^l &= \sum_{l=0}^{q_{i-2}-1} w_l t_{i-2}^l \sigma_{i}^{\eta_{i}}(m) = \sum_{l=0}^{q_{i-2}-1} w_l \tau_{i-2}^l(\sigma_{i}^{\eta_{i}}(m)) t_{i-2}^l \\ &= \sum_{l=0}^{q_{i-2}-1} w_l \sigma_{i-1}^l(\sigma_{i}^{\eta_{i}}(m)) t_{i-2}^l, \end{align*} for all $m \in M$. Choose $\eta_{i-1}$ with $w_{\eta_{i-1}} \neq 0$; then $$\sigma_{i+1}(m) w_{\eta_{i-1}} = w_{\eta_{i-1}} \sigma_{i-1}^{\eta_{i-1}}(\sigma_{i}^{\eta_{i}}(m)),$$ for all $m \in M$.
Continuing in this manner we see that there exists $s \in M^{\times}$ such that $$\sigma_{i+1}(m) s = s \sigma_1^{\eta_1}(\sigma_2^{\eta_2}(\cdots(\sigma_{i}^{\eta_{i}}(m))\cdots)),$$ for all $m \in M$, hence $$\sigma_{i+1}(m) = \sigma_1^{\eta_1}(\sigma_2^{\eta_2}(\cdots(\sigma_{i}^{\eta_{i}}(m))\cdots)),$$ for all $m \in M$, where $\eta_j \in \{ 0, \ldots, q_{j-1}-1 \}$ for all $j \in \{ 1, \ldots, i \}$. But $\sigma_{i+1} \notin G_i$ and thus $$\sigma_{i+1} \neq \sigma_1^{\eta_1} \circ \sigma_2^{\eta_2} \circ \cdots \circ \sigma_{i}^{\eta_{i}},$$ a contradiction. It follows that $\tau_i \vert_{\mathrm{Cent}(A_i)}$ has order $q_i>1$. By the Skolem-Noether Theorem the kernel of the restriction map $\mathrm{Aut}(A_i) \rightarrow \mathrm{Aut}(\mathrm{Cent}(A_i))$ is the group of inner automorphisms of $A_i$, and so $\tau_i$ has inner order $q_i$. \end{proof} \begin{proposition} $\mathrm{Cent}(A_k) = F$. \end{proposition} \begin{proof} $F \subset \mathrm{Cent}(A_k)$ by construction. Let now $$z = z_0 + z_1 t_{k-1} + \ldots + z_{q_{k-1}-1} t_{k-1}^{q_{k-1}-1} \in \mathrm{Cent}(A_k)$$ where $z_i \in A_{k-1}$. Then $z$ commutes with all $l \in A_{k-1}$, hence $l z_i = z_i \tau_{k-1}^i(l)$ for all $i \in \{ 0, \ldots, q_{k-1}-1 \}$. This implies $z_0 \in \mathrm{Cent}(A_{k-1})$ and $z_i = 0$ for all $i \in \{ 1, \ldots, q_{k-1}-1 \}$, since otherwise $z_i$ is invertible and $\tau_{k-1}^i$ is inner, contradicting Lemma \ref{lem:tau_i has inner order q_i}. Thus $z = z_0 \in \mathrm{Cent}(A_{k-1})$. A similar argument shows $z \in \mathrm{Cent}(A_{k-2})$, and continuing in this manner we conclude $z \in M = \mathrm{Cent}(A_0)$. Suppose, for a contradiction, that $z \notin F$. Then $\rho(z) \neq z$ for some $\rho \in G$. Since the $\sigma_{i+1}$ were chosen so that they generate the cyclic factor groups $ G_{i+1}/G_i,$ we can write $\rho = \sigma_1^{i_0} \circ\sigma_2^{i_1} \circ\cdots \circ \sigma_{k}^{i_{k-1}}$ for some $i_s \in \{ 0, \ldots, q_s-1 \}$.
We have \begin{align*} t_0^{i_0} t_1^{i_1} \cdots t_{k-1}^{i_{k-1}} z &= \sigma_1^{i_0}(\sigma_2^{i_1}(\cdots(\sigma_{k}^{i_{k-1}}(z) \cdots ) t_0^{i_0} t_1^{i_1} \cdots t_{k-1}^{i_{k-1}} \\ &= \rho(z) t_0^{i_0} t_1^{i_1} \cdots t_{k-1}^{i_{k-1}} \neq z t_0^{i_0} t_1^{i_1} \cdots t_{k-1}^{i_{k-1}}, \end{align*} contradicting that $z \in \mathrm{Cent}(A_k)$. Therefore $\mathrm{Cent}(A_k) \subset F$. \end{proof} This yields a recipe for constructing a $G$-crossed product division algebra $A = A_k$ over $F$ with maximal subfield $M$ provided it is possible to find suitable $c_i$'s satisfying \eqref{eqn:last}. \bibliographystyle{abbrv}
\section{Introduction} Collective behavior of motile elements is ubiquitous in a wide variety of living systems, ranging from the molecular and cellular levels up to groups of individual animals. In these hierarchical systems, each motile element dissipates energy and transduces it into motion \cite{Vicsek2012,Wu2009,Zhang2010,Peruani2012,Nishiguchi2017,Kawaguchi2017,Szabo2006,John2009a,Tennenbaum2015,Gerum2013,Lopez2012,Katz2011,Ballerini2008a,Helbing2000}. Groups of such objects can organize sustained collective motion at each level, as in cytoplasmic streaming driven by molecular motors, moving clusters in bacterial colonies, swirling in cell tissues, and flocking of birds. The interactions in these systems are diverse, including mechanical, chemical, and informational processes. Statistical mechanics tells us that, in the limit of large system size, details of the elements and interactions become irrelevant, and the global behavior is determined by a few factors such as the dimensionality and symmetries of the system, thanks to universality classes. It is therefore commonly believed that collectives of motile elements can be regarded theoretically as active matter systems, irrespective of whether the system is living or artificial \cite{Vicsek1995,Toner1995,Simha2002,Gregoire2004,Kraikivski2006,Ginelli2010,Yang2010,Abkenar2013,Narayan2007a,Bricard2013,Nishiguchi2018}. Nevertheless, subtle differences in interactions sometimes make all the difference. Such an example exists even in the fundamental motile systems consisting of molecular motors and bio-filaments, which underlie most complex motile behaviors in living systems. In the large parameter space of these systems, diverse patterns are observed. Despite great research efforts, a general understanding of which differences in the interactions of the constituents produce different patterns remains elusive.
In this article, we constructed a motility assay consisting of kinesin motors and microtubules in which the frequency of binary collisions can be controlled without using depletion or binding agents. By controlling the strength of the volume exclusion interaction and the density of filaments, we found distinct states: a disordered state, a long-range orientationally ordered (LOO) state, and a liquid-gas-like phase-separated (LGS) state, together with transitions between them. We found that the balance between crossing-over and aligning events between colliding filaments controls the transition from the disordered to the LOO state, while excessively strong volume exclusion leads to the clustered (LGS) state. This finding gives a unified view of seemingly different phases observed in independent bacterial suspension experiments, namely the cluster phase and the long-range nematically ordered phase, as the two limiting cases expected to appear when the excluded volume interaction is varied \cite{Zhang2010,Peruani2012,Nishiguchi2017}. Reconstituted cytoskeletal systems consist of the smallest motile elements, at the nanometer scale, and thus can potentially attain the largest system size (the ratio of the system size to the element size) among active matter systems. There are several such combinations. The first combination is actin filaments and myosin motors. High concentrations of actin filaments led to polar patterns \cite{Schaller2010,Butt2010,Hussain2013a}, loops \cite{Schaller2011}, swirls, clusters, and density waves \cite{Schaller2011a,Schaller2013,Suzuki2017}, where the emergence of ordered patterns could not be explained solely by alignment in binary collisions of filaments; instead, multi-filament collisions are required \cite{Suzuki2017}. The second combination is microtubules (MTs) and dynein motors \cite{Sumino2012}. In this case, the ratio of crossing-over events to all binary-collision events was about 30\%.
This suffices to give rise to an ordered pattern; however, the MTs eventually formed large ``vortices'' in which they aligned their orientations nematically with each other. The origin of the vortex pattern was attributed to the intrinsic curvature of MT trajectories and its memory effect. The third combination is MTs and kinesin motors, in which the following patterns were known to emerge \cite{HenryHess2005,Kawamura2008,Tamura2011,Liu2011b,Kabir2012,Inoue2013,Lam2014a,Inoue2015,Saito2017}. One is a tiny loop arising from the steric barrier between MT segments \cite{Kawamura2008,Tamura2011,Liu2011b,Kabir2012,Inoue2013}. Another pattern is a ``stream'', which shows global alignment appearing at high MT density \cite{Inoue2015,Saito2017}. In order to realize global order, either depletion agents (methylcellulose) or binding molecules for MTs were required \cite{Kakugo2011}. Also, asters and topological defects appear in MT and kinesin mixed solutions \cite{Surrey2001,DeCamp2015,Torisawa2016a}. In this paper, we focus on the reconstituted system of MTs and kinesins. When kinesin motors are immobilized on the substrate by non-specific binding, no significant pattern emerges in a motility assay composed of only MTs and kinesins, consistent with previous reports. This is presumably because a large fraction of the kinesins do not function owing to non-specific binding to the glass surface, and the height of the functioning kinesins is not uniform, which results in a lack of collision interactions between MTs. To solve this problem, we coated the glass surface with the surfactant Pluronic F-127 to avoid non-specific binding. F-127 is an ABA-type tri-block copolymer, with A being hydrophilic and B hydrophobic. We introduced a specific binding spot for kinesin into the two footing parts A, which stem from the central part B on the substrate.
The height of a Pluronic F-127 molecule is about 30 nm, the size of a kinesin motor bound to F-127 is about 10 nm, and the diameter of a MT gliding on kinesins is 25 nm. Because this procedure makes the height of the kinesins even, the microtubules are confined to two dimensions, so that MTs always collide and align without overlapping when all F-127 molecules are functionalized. Owing to this treatment promoting collision interactions, we found that dense gliding MTs form a ``liquid-gas-like phase separation (LGS)'' pattern. This LGS pattern is rather similar to the moving-cluster patterns of bacteria gliding on an agar surface \cite{Wu2009,Zhang2010,Peruani2012}. Considering that bacteria swimming highly confined in a quasi-two-dimensional thin chamber showed a global alignment pattern \cite{Nishiguchi2017}, overlapping of moving particles seems to enable them to generate non-cluster patterns in two dimensions. Also, numerical simulations of motile rods with varying strength of volume exclusion have shown the formation of moving clusters and lanes \cite{Abkenar2013}. These results suggested the possibility that MTs form other patterns if we could control the ratio between aligning and crossing-over events during collisions. To do this, we decreased the kinesin density by reducing the ratio of the concentration of functionalized Pluronic to that of non-functionalized Pluronic; kinesins can bind only at the sites of functionalized Pluronic. In the limit of high kinesin density, MTs are forbidden to overlap because of tight adhesion to the kinesin-coated surface at uniform height, which promotes collisions (Fig. \ref{fig:sketch}, right). In contrast, at lower kinesin density, MTs can overlap because the intervals between kinesins attached to a single MT are wide and the tip of the MT is free from the confinement of the kinesin-coated surface (Fig. \ref{fig:sketch}, left).
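As a rough consistency check (our own back-of-the-envelope estimate, not a calculation from the paper), the mean spacing between kinesins implied by the surface densities measured in Sec.~\ref{sec:MateMeth} can be compared with the 25 nm MT diameter; at the lower density the spacing is comparable to the MT height, while at the higher density it is well below it:

```python
import math

MT_HEIGHT_NM = 25.0  # diameter of a microtubule gliding on kinesins

def mean_kinesin_spacing_nm(density_per_um2):
    """Estimate the mean motor spacing (nm) as the side length of the
    average area occupied per motor at a given surface density."""
    area_per_motor_um2 = 1.0 / density_per_um2
    return math.sqrt(area_per_motor_um2) * 1000.0  # um -> nm

low = mean_kinesin_spacing_nm(2.5e3)   # lower-density assay: ~20 nm
high = mean_kinesin_spacing_nm(6.1e3)  # higher-density assay: ~13 nm
```

Under this crude estimate, only the lower-density assay gives a spacing close to the MT height, consistent with the picture that only there can a MT tip lift over a neighbor.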
With this lowered-kinesin-density assay, we found that gliding MTs form another new pattern, ``long-range orientational order (LOO)'', which is similar to the global alignment pattern of swimming bacteria highly confined in a quasi-two-dimensional thin chamber \cite{Nishiguchi2017}. MTs are about 5 $\mu$m long, and in this pattern they were aligned over at least a 1 mm $\times$ 1 mm region; this is the largest system showing global alignment. Whether objects can overlap or not can be interpreted as the strength of the volume exclusion. Stronger volume exclusion aligns colliding pairs of MTs more frequently, so one might expect it to promote global alignment. Surprisingly, our results suggest that weak volume exclusion forms the LOO state while strong volume exclusion forms the LGS state. In addition, as shown in Fig.~\ref{fig:global_plots}, we found that phase transitions occur depending on the MT density. With weak volume exclusion, {\it i.e.} at the lower kinesin density, increasing the MT density led to a phase transition from the disordered to the LOO pattern through a phase-coexistence regime. In contrast, with strong volume exclusion, {\it i.e.} at the higher kinesin density, it led to a phase transition from the disordered to the LGS pattern. Moreover, to confirm this tendency, we carried out numerical simulations with a particle-based model of self-propelled objects with volume exclusion. The numerical results followed our experimental observations. Consequently, these results suggest that the above-mentioned two patterns observed in a system with unidirectional motion and nematic alignment can be explained by the volume exclusion parameter. We also report striking behaviors: global rotation of the orientation at low kinesin density and loop formation at high kinesin density. \begin{figure}[t] \centering \includegraphics[width=.5\linewidth]{fig_sketch} \caption{Schematic images of the motility assay controlling kinesin density.
When the kinesin density is low, the intervals between the kinesins attached to a MT are wide enough for MTs to cross over (left). When it is high, MTs adhere to the glass surface through kinesin, preventing them from crossing over (right).} \label{fig:sketch} \end{figure} The organization of this paper is as follows: Section \ref{sec:MateMeth} is devoted to brief descriptions of the experimental preparation and methods. In Section \ref{sec:strength}, we show that the kinesin density can control the strength of the volume exclusion in our experimental system. We analyze the effect of volume exclusion on the global pattern in both experiment and numerical simulation in Section \ref{sec:global}. To explain the global behavior, we study the binary collision interaction in Section \ref{sec:binarycollision}. We discuss and then summarize our results in Sections \ref{sec:discussion} and \ref{sec:conclusions}. \section{Materials and Methods} \label{sec:MateMeth} \subsection{Preparation of MT and kinesin} Tubulins were purified from porcine brain using a high-molarity PIPES buffer (1 M PIPES, 20 mM EGTA, and 10 mM MgCl$_2$; pH adjusted to 6.8 using KOH) as described previously \cite{Castoldi2003}. Rhodamine-labeled tubulins and Alexa 488-labeled tubulins were prepared using 5-(and-6)carboxy-tetramethyl-rhodamine succinimidyl ester (Invitrogen, C1171) and Alexa Fluor 488 succinimidyl ester (Alexa Fluor 488-SE\textregistered; Invitrogen A20000), respectively, according to standard techniques \cite{Hyman1991}. These two differently labeled tubulins ($8\ \%$ labeled, $3.0$ mg/ml) were separately polymerized into MTs in the presence of guanosine-5'-[($\alpha,\beta$)-methyleno]triphosphate (GMPCPP) ($1$ mM) and BRB80 buffer (80 mM PIPES-KOH, 1 mM MgSO$_4$, 1 mM EGTA, pH 6.8) at $37\ ^\circ $C for $30$ min. The MT solution was diluted to $0.15 \ {\rm mg/ml} $ with taxol (final $50 \ {\rm \mu M}$) and left for at least 2 days.
The lengths of MTs prepared by this method ranged from 4 to 8$\ {\rm \mu m}$ (see Supplementary Information (SI) \cite{SI}). The final solution was made by mixing these two differently labeled MT solutions such that the ratio of the number of Alexa 488-labeled MTs to the total MT number ranged between 1 and $ 5\ \%$. The SNAP-tagged kinesin-1 (rat KIF5C truncated at 430 amino acids and fused with a SNAP-tag and a six-histidine tag at the C-terminus) was expressed in {\it Escherichia coli} Rosetta2 (DE3) and prepared as described previously \cite{Furuta2013}. \subsection{Synthesis of BG-functionalized Pluronic F-127} The terminal hydroxyl groups of Pluronic F-127 were functionalized with benzylguanine via an amine-terminated Pluronic F-127 \cite{Li1996}. Pluronic F-127 (2 g, 0.16 mmol; Sigma-Aldrich, P2443) was dissolved in 6 ml of benzene and slowly added into a stirred solution of 4-nitrophenyl chloroformate (192 mg, 0.954 mmol; Tokyo Chemical Industry, C1400) in 6 ml of benzene. The solution (12 ml) was continuously stirred at room temperature for 24 h under a nitrogen atmosphere. The resulting yellow-colored solution was mixed with 200 ml of ice-cold diethyl ether for 5 min using a magnetic stirrer. The precipitate was then collected by vacuum filtration (Millipore, model WP6110060; TGK, 0371430103) through a membrane filter 47 mm in diameter (Whatman, 1450-090; manually cut with a circle cutter). The precipitate was further washed by 4--5 additional cycles of dissolving and precipitating until it turned white and was then dried under vacuum overnight. The activated F-127 ($\approx$1 g) was dissolved in methanol. One milliliter of hydrazine monohydrate solution (Wako, 081-00893) was added drop-wise and stirred at room temperature for 16 h under a nitrogen atmosphere. The product was precipitated with 120 ml of diethyl ether and vacuum-filtered (Sansyo, 81-0106) through a 90-mm membrane filter. The precipitate was washed four times.
The vacuum-dried F-127-amine (6.4 mg) and an amine-reactive benzylguanine reagent (2 mg; New England Biolabs, BG-GLA-NHS, S9151S) were dissolved in 0.3 ml of anhydrous {\it N,N}-dimethylformamide, allowed to stand at room temperature overnight, and dried under vacuum for at least 4 hours. The product was then dissolved in 0.65 ml of Milli-Q water. The precipitated benzylguanine reagent was removed with a spin filter (Millipore, UltraFree 0.1 $\mu$m, UFC30VV25). The filtrate was further filtered through a Zeba spin filter (Thermo Scientific, 7k MWCO, 89882) to remove dissolved benzylguanine reagent. The filtrate was stored at $-$80${}^\circ$C. \subsection{Preparation of flow chamber} Teflon-coated coverslips were prepared as described previously \cite{Furuta2017}. A flow chamber was made of a $22 $ mm $\times 32$ mm Teflon-coated coverslip and an $18\ {\rm mm} \times 18\ {\rm mm}$ coverslip with parafilm as a spacer. The flow chamber was first filled with Pluronic F-127 functionalized with benzylguanine. After 10 minutes of incubation, the flow chamber was washed out with BRB80 (50 ml), and a 0.25 mg/ml SNAP-tagged kinesin-1 solution was introduced. After another 10 minutes of incubation and washing, we introduced the dual-colored MT solution described above into the chamber. The flow chamber was again incubated for 5--10 minutes to allow the MTs to bind to kinesins, and then washed out with assay buffer ($10\ \mu$M taxol, 25 mM glucose, $200\ {\rm\mu g/ml}$ glucose oxidase, $40\ \mu$g/ml catalase, and $140$ mM beta-mercaptoethanol in BRB80). Finally, we introduced an ATP solution with an ATP-regenerating system (10 mM ATP, 2 unit/ml pyruvate kinase/lactate dehydrogenase, and 2.5 mM phosphoenol-pyruvate in assay buffer) into the flow chamber. To control the kinesin density on the glass surface of a chamber, we changed the ratio of benzylguanine-functionalized Pluronic F-127 to non-functionalized F-127.
The mixing ratios of functionalized Pluronic F-127 to non-functionalized F-127 were adjusted to $5\%$ and $100\%$, which yielded kinesin densities of $(2.5 \pm 0.6) \times 10^3$ and $(6.1 \pm 0.7) \times 10^3$ molecules per ${\rm \mu m^2}$, respectively. \subsection{Microscopy and image capture} To observe the motility of MTs, samples were illuminated with a 100 W mercury lamp and visualized with a Leica DMI 6000B using an objective lens HCX PL APO 63x/1.40-0.60 OIL CS (Leica). Images were captured using an EMCCD camera (iXon Ultra, Andor) connected to a PC. For observations of binary collisions, snapshots were taken every 10 s for 3 hours, and for observations of patterns, we scanned and stitched multiple areas to obtain a wide snapshot. Images and movies of the MT motility assays captured by fluorescence microscopy were analyzed using the image analysis software ImageJ and algorithms we developed in Python with scikit-image and the Python Imaging Library. \section{Results} \label{sec:results} \subsection{Strength of volume exclusion depending on kinesin density} \label{sec:strength} \begin{figure}[tbhp] \centering \includegraphics[width=.6\linewidth]{fig_snapshots.eps} \caption{ (a-c) Snapshots of the three types of behavior after collision: crossing-over (a), alignment (b), anti-alignment (c). Scale bar, $8\ \mu$m. (d,e) Probability of each type in each angular bin at the lower kinesin density (d, number of collision events $n=147$) and at the higher kinesin density (e, $n=135$). Yellow circle markers represent crossing-over; red triangle markers represent alignment; blue square markers represent anti-alignment. } \label{fig:snaps} \end{figure} \clearpage To assess the effect of kinesin density on the volume exclusion, we observed the behavior of pairs of MTs during collisions at each kinesin density. The experiments were conducted under a dilute condition, $0.4\times 10^{-3}$ filaments per ${\rm \mu m^2}$.
MTs were in an isotropic and homogeneous state at this dilute density. The behavior after a collision can be classified into three types: crossing-over, alignment, and anti-alignment [Figs. \ref{fig:snaps} (a)-(c)]. We first define crossing-over as collision events having one or more snapshots in which the observed pair of MTs forms four branches [the third to fifth snapshots in Fig. \ref{fig:snaps} (a)]. Among the remaining events, alignment is defined as those in which the outgoing angle $\theta_{out}$ is smaller than $\pi/2$, and anti-alignment otherwise. The incoming and outgoing angles $\theta_{in}, \theta_{out}$ are defined as the angles at which the two MTs touch and detach, respectively. As mentioned in Sec.~\ref{sec:MateMeth}, we prepared and observed chambers at two different kinesin densities. At the higher kinesin density of $(6.1 \pm 0.7) \times 10^3$ molecules per ${\rm \mu m^2}$, no crossing-over was observed [Fig.\ref{fig:snaps} (e)]. On the other hand, $10\%$ of the events showed crossing-over at the lower kinesin density of $(2.5 \pm 0.6) \times 10^3$ molecules per ${\rm \mu m^2}$ [yellow circle markers in Fig.\ref{fig:snaps} (d)]. MTs can cross over only when the volume exclusion is weak, so these results indicate that the strength of the volume exclusion can be controlled via the kinesin density. Although the crossing-over ratio at the lower kinesin density differs only modestly from that at the higher kinesin density, the difference is enough to produce different behaviors. Whether crossing-over is possible depends on whether the tip of one MT can surmount the height of the other. The height of a MT is about 25 nm, which is almost the same as the average interval between kinesins at the lower kinesin density; this would generate the crossing-over events after collisions shown in Fig.\ref{fig:snaps} (d). In addition to the density-dependent behavior, Figs. \ref{fig:snaps} (d) and (e) clearly show angle-dependent behavior.
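For concreteness, the classification rule just described can be written as a short sketch (a hypothetical helper of our own, not the actual analysis code, which used ImageJ and Python as described in Sec.~\ref{sec:MateMeth}):

```python
import math

def classify_collision(theta_out, has_four_branch_frame):
    """Classify a binary MT collision following the rules in the text.

    theta_out: outgoing angle in radians (angle at which the pair detaches).
    has_four_branch_frame: True if any snapshot shows the colliding pair
        forming four branches, i.e., one filament passed over the other.
    """
    if has_four_branch_frame:
        return "crossing-over"
    # Among the remaining events, alignment if theta_out < pi/2,
    # anti-alignment otherwise.
    return "alignment" if theta_out < math.pi / 2 else "anti-alignment"
```

Applied to a tracked pair, `classify_collision(0.3, False)` yields `"alignment"`, while any event with a four-branch frame is `"crossing-over"` regardless of the outgoing angle.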
Alignment tends to occur at acute incoming angles, $\theta_{in}<\pi/2$, and anti-alignment tends to occur at obtuse incoming angles, $\theta_{in}>\pi/2$, at both kinesin densities [Figs.\ref{fig:snaps} (d) and (e)]. This implies that the velocity alignment is nematic in this experimental system. \subsection{Global patterns} \label{sec:global} \subsubsection{Appearing patterns depending on kinesin density} \label{sec:kinesindensitydep} \begin{figure*}[tbph] \includegraphics[width=.99\linewidth]{fig_global_snapshots.eps} \caption{Time evolution of MT patterns at the higher and the lower kinesin densities. Yellow arrows show the directions of motion of representative MTs. In the initial state at both kinesin densities, MTs were bound to the kinesin-coated surface randomly. After injection of the ATP solution, the MTs gradually formed patterns. At the higher kinesin density, MTs gathered and went into a ``liquid-gas-like phase separation (LGS)'' state without nematic orientational order (a). In contrast, at the lower kinesin density, the nematic orientational order increased with time and the pattern ended up in the ``long-range orientational order (LOO)'' state (b). The MT densities are 0.3 (a) and 0.1 (b) filaments per ${\rm \mu m^2}$. Bar is 10 ${\rm\mu m}$. } \label{fig:global_snaps} \end{figure*} As mentioned above, our experimental setup enabled us to control the strength of the volume exclusion (see Sec. \ref{sec:strength}). Using this setup, we found two different patterns depending on the kinesin density. Figures \ref{fig:global_snaps}(a) and (b) show the time evolution of the patterns at the higher and lower kinesin densities, respectively. At the higher kinesin density, MTs started to segregate spontaneously and went into the ``liquid-gas-like phase separation (LGS)'' state, forming clusters that moved around in random directions [Fig. \ref{fig:global_snaps}(a)].
In an individual cluster, MTs were aligned in parallel and almost all of them moved in the same direction, thus forming a polar cluster. While crawling around, the clusters often merged with others and sometimes split into small polar clusters. In contrast, at the lower kinesin density, gliding MTs eventually aligned their orientations and went into the ``long-range orientational ordered (LOO)'' state within 30 minutes [Fig. \ref{fig:global_snaps}(b)]. In this state, MTs move in both directions. \begin{figure*}[htbp] \centering \includegraphics[width = 1.\linewidth]{fig_global_plots_1.eps} \caption{ Phase diagram for various kinesin and MT densities. At the higher kinesin density, the pattern transitioned from the disordered to the ordered state at 0.01 filaments per ${\rm \mu m^2}$. Moreover, the ordered state at the higher kinesin density was categorized by the dynamics of the clusters that emerged in it. Clusters moved around below 0.07 filaments per ${\rm \mu m^2}$ (``moving cluster''), while above that MT density they merged into an ``aggregation''. At the lower kinesin density, when the filament density is lower than 0.032 per ${\rm \mu m^2}$, the disordered state is observed, whereas when the density is higher, coexistence of ordered and disordered patterns is observed. The border between ordered and disordered regions is represented by a yellow broken line in the bottom middle image, whose kinesin density is low and MT density is 0.036 filaments per ${\rm \mu m^2}$. Above 0.05 filaments per ${\rm \mu m^2}$, MTs showed the LOO pattern. Bar is 20 $\mu$m.} \label{fig:global_plots} \end{figure*} \subsubsection{Critical densities depending on kinesin density} \begin{figure*}[tbp] \centering \includegraphics[width=.99\linewidth]{fig_global_plots_2.eps} \caption{ (a) Degree of accumulation $A$ for various MT densities at the higher kinesin density. It began to increase at $0.01$ filaments per $\mu m^2$, where clusters start to form.
(b) Moving velocity of the structure $V$ at the higher kinesin density for various MT densities. $V$ drops abruptly at $\rho =0.07$ because ``aggregation'' of clusters becomes dominant. (c-1) Nematic orientational order $S$ (blue) and fraction of ordered area (red) for various MT densities at the lower kinesin density. Blue circle markers and vertical error bars represent the mean and standard deviation of $S$, respectively. The blue horizontal error bars are the standard deviations of the MT densities. The cyan line is a trend line of $S$ as a guide for the eye. Red triangle markers represent the fraction of local areas of $200\ {\rm \mu m} \times 200\ {\rm \mu m}$ with $S>0.5$. Red horizontal error bars represent the standard deviations of the MT densities. The orange broken line shows a trend line of the fraction of ordered area as a guide for the eye. Histograms show the distributions of the local orientational order for 0.032, 0.036, and 0.05 filaments per ${\rm \mu m^2}$ (c-2,3,4). At 0.036 filaments per ${\rm \mu m^2}$ (c-3), the distribution is bimodal, which indicates order-disorder phase coexistence. On the other hand, at 0.032 and 0.050 filaments per ${\rm \mu m^2}$ (c-2,4), the distribution is unimodal. (d) Coexistence state: an $825.5 \ {\rm \mu m} \times 825.5\ {\rm \mu m}$ snapshot of coexisting ordered and disordered regions, and a color map of the spatial distribution of the local orientation. Local orientations are calculated in a $200\ {\rm \mu m} \times 200\ {\rm \mu m}$ area, which is moved to cover the whole image as displayed on the left. Bar is 20 $\mu $m. } \label{fig:global_plots2} \end{figure*} Not only the kinesin density but also the MT density strongly affected the global patterns [Fig.\ref{fig:global_plots}]. Below a critical density $\rho_c$, the pattern was homogeneous and isotropic; in this paper, we call this the ``disordered state''. Above $\rho_c$, the pattern was the LOO or the LGS state, depending on the kinesin density.
The critical density $\rho_c$ also depended on the kinesin density, varying from 0.01 to 0.04 ${\rm\mu m ^{-2}}$, and it can be quantified from a degree of accumulation and an orientational order. A significant feature of the LGS state is density segregation, which is characterized by the degree of accumulation calculated in the following way. We assumed the intensity of each pixel at each time slice, $I({\bf r},t)$, increases linearly with the MT density. We calculated the static structure function $F_s(q)$ and the intermediate structure function $F_i(q,t)$ as follows: \begin{eqnarray} F_s(q) &=& \langle \tilde{I}({\bf q},0)\tilde{I}^*({\bf q},0) \rangle _{|{\bf q}|=q}\\ F_i(q,t) &=& \langle \tilde{I}({\bf q},t)\tilde{I}^*({\bf q},0) \rangle _{|{\bf q}|=q} \, \end{eqnarray} where $\tilde{I}$ is the Fourier transform of the intensity $I$, $\tilde{I}^*$ is its complex conjugate, and $\langle \cdot \rangle _{ |{\bf q}|=q }$ is the ensemble average over wave vectors satisfying $|{\bf q}|=q$. Figure S12 (a) shows $F_s(q)/F_s(0)$ for each MT density. The static structure function of isolated MTs and clusters is expected to decay as \begin{eqnarray} f(q)=\frac{A}{1+B^2q^2} \end{eqnarray} (see SI \cite{SI}). The fitting coefficient $B$ represents the typical cluster size, and $A$ represents the degree of accumulation. We found that the parameter $A$ increased drastically at $ 0.01$ filaments per ${\rm \mu m^2}$ [Fig. \ref{fig:global_plots2}(a)], which means the MT density was high enough to form clusters. We adopted 0.01 filaments per ${\rm \mu m^2}$ as the critical density for the LGS state, $\rho_c^{LGS}$. Next, we investigated the dynamics of the clusters in this state. Just above the critical density from the disordered to the LGS state, MTs form clusters that crawl around; we defined this type of cluster as a ``moving cluster''. As the MT density increases, cluster-cluster collisions in opposite directions become dominant and slow down the speed of a merging cluster.
We call this type of cluster an ``aggregation''. To characterize ``moving clusters'' and ``aggregations'', we compared the intermediate scattering function at wavenumber $q = 0.1\ {\rm \mu m^{-1}}$ for each MT density [Fig.\ref{fig:global_plots2}(b)]. In the moving cluster state, $F_i(q=0.1,t)$ decays quickly with time $t$. Each cluster moves with a certain velocity, which makes the intermediate function decay as \begin{eqnarray} f(t) = \frac{1}{4/Cq^2t+2/V^2q^2t^2}\ \end{eqnarray} (see SI \cite{SI}). The fitting parameter $V$ corresponds to the moving velocity of structures of size $1/q$. We define the density threshold for the aggregation state as the density at which the moving velocity $V$ decreases drastically. As shown in Fig. \ref{fig:global_plots2}(b), we found that the fitting parameter $V$ drops abruptly at $0.07$ filaments per ${\rm \mu m^2}$. At the lower kinesin density, the orientations of MTs tended to be parallel over a wide area. We characterized this by the orientational order $S$, calculated as \begin{eqnarray} S=\left \langle \left | \left \langle e^{2i \theta ({\bf r})} \right \rangle _{\bf r} \right | \right \rangle _j \ , \label{eq:S} \end{eqnarray} where $\left \langle \cdot \right \rangle_{\bf r}$ is the average over pixels in each ROI in the $j$-th image, and $\left \langle \cdot \right \rangle_{j}$ is the average over ROIs of size 200 ${\rm \mu m}\ \times$ 200 ${\rm \mu m}$ in the image. We calculated the orientation $\theta({\bf r})$ with OrientationJ (see SI for details \cite{SI}). As shown by the blue markers in Fig. \ref{fig:global_plots2}(c-1), $S$ increased significantly at the intermediate densities of 0.032 and 0.050 filaments per ${\rm \mu m^2}$. At these densities, we observed coexistence of ordered and disordered areas. A color map of the local orientational order in Fig.
\ref{fig:global_plots2}(d) clearly illustrates high- and low-order regions spreading over the bottom and top of the snapshot, respectively. Also, the histogram of the local orientational order for the same snapshot clearly shows bimodality, with peaks at $S= 0.4$ and 0.6 [Fig.\ref{fig:global_plots2}(c-3)]. The distributions at both lower and higher MT densities are unimodal [Figs. \ref{fig:global_plots2}(c-2) and (c-4)]. To assess the expansion of the ordered area, we measured the fraction of ordered area ($S>0.5$) in each snapshot at each MT density [red markers in Fig. \ref{fig:global_plots2}(c-1)]. The fraction was 0 below 0.036 filaments per ${\rm \mu m^2}$ and increased abruptly at the intermediate densities. Based on these results, coexistence of ordered and disordered areas appeared at $\rho^{CE}=0.036$ filaments per ${\rm \mu m^2}$. In a simulation study of the orientational order of particles with unidirectional motion and nematic interaction, it was reported that such coexistence appears as a nematic band \cite{Ginelli2010}. Above 0.05 filaments per $\mu m^2$, the fraction was 1, meaning that the ordered area extended over at least the image size of 1 mm $\times$ 1 mm. We adopted 0.05 filaments per ${\rm \mu m^2}$ as the critical density for the LOO state, $\rho_c^{LOO}$. \begin{figure*}[tbhp] \centering \includegraphics[width=.99\linewidth]{fig_longrange_2.eps} \caption{ Long-range orientational order at the low kinesin density. (a) A snapshot of long-range alignment of MTs at low kinesin density. Scale bar, $200 \ {\rm\mu m}$. (b) The spatial decay of the orientational order for various MT densities: $\rho = 0.032$ (blue diamonds), $0.036$ (magenta up-pointing triangles), $0.040$ (yellow squares), $0.050$ (orange down-pointing triangles), and $0.10$ (red circles) filaments per ${\rm \mu m^2}$. Each density has 2 data sets. All $S(R)$ decay toward their respective orientational order $S_0$.
(c) $S(R)-S_0$ vs $R$ for various MT densities: $S_0 =$ 0.13 and 0.235 ($\rho= 0.032$), $S_0 =$ 0.42 and 0.535 ($\rho= 0.036$), $S_0 =$ 0.396 and 0.54 ($\rho= 0.040$), $S_0 =$ 0.604 and 0.728 ($\rho= 0.05$), and $S_0 =$ 0.906 and 0.915 ($\rho= 0.10$). Colors and symbols correspond to (b). } \label{fig:longrange} \end{figure*} \subsubsection{Long-range order of the orientationally ordered pattern} \label{sec:LOOstate} As shown in Fig. \ref{fig:longrange}(a), MTs at low kinesin density exhibited long-range alignment across more than $1$ mm. To examine whether these patterns have long-range orientational order, we calculated the orientational order for various sizes of the averaging area as follows: \begin{eqnarray} S(R) = \left\langle \left | \left \langle e^{2i \theta({\bf r}')} \right \rangle_{(|{\bf r}-{\bf r}'|<R)} \right | \right\rangle_{{\bf r}} \ , \end{eqnarray} where $R$ runs from 4 to 400 $\mu$m. When $R$ is 200 $\mu m$, $S(R)$ coincides with Eq. (\ref{eq:S}). At all MT densities, $S(R)$ decayed toward an asymptotic value $S_0$ [Fig. \ref{fig:longrange}(b)]. Figure \ref{fig:longrange}(c) plots $S(R) - S_0$ at each MT density. As the MT density increases, the decay of $S(R) - S_0$ becomes steeper. All of these decay curves were close to exponential. Considering that the mean MT length is about $5 \ {\rm \mu m}$ and that, at densities above 0.050, $S(R)$ saturates at values larger than 0.6 before $R= 400\ {\rm\mu m}$, the MTs show long-range orientational order. It is worth noting that in the bacterial suspension experiment \cite{Nishiguchi2017} and the numerical simulation \cite{Ginelli2010}, in which true long-range order is believed to be observed for active systems with unidirectional motion and nematic alignment, $S(R)$ decays algebraically toward the saturated value $S_0$ instead of exponentially. In the present experiment, algebraic decay was observed only when the final pattern appeared more disordered.
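A minimal sketch of how $S(R)$ can be evaluated from a sampled orientation field (function and variable names are our own; the actual analysis used OrientationJ orientation maps as described above):

```python
import numpy as np

def nematic_order_S(points, theta, R, centers=None):
    """Radius-dependent nematic order parameter
    S(R) = < | < exp(2 i theta) >_{|r - r'| < R} | >_r .

    points : (N, 2) array of sample positions.
    theta  : (N,) array of local orientations in radians.
    R      : averaging radius.
    centers: positions r at which the local average is taken
             (defaults to the sample positions themselves).
    """
    if centers is None:
        centers = points
    phase = np.exp(2j * theta)  # factor 2 gives nematic (head-tail) symmetry
    S_local = []
    for c in centers:
        mask = np.linalg.norm(points - c, axis=1) < R
        S_local.append(np.abs(phase[mask].mean()))
    return float(np.mean(S_local))
```

Because of the factor of 2 in the phase, orientations differing by $\pi$ contribute identically, so a perfectly nematic field gives $S(R)=1$ while two orthogonal populations average to 0.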
The exponential decay toward a saturation value implies the presence of a characteristic length scale, which might be the length scale of the undulation of aligned bundles seen in Fig.~\ref{fig:longrange}(a). The mechanism of this undulation is not understood at present. \subsubsection{Slow rotation dynamics of the orientationally ordered pattern} \label{sec:rotation} We found that the direction of the LOO rotated at around 0.05 filaments per ${\rm \mu m^2}$ [Fig. \ref{fig:rotation}(a)]. Figure \ref{fig:rotation}(b) plots the following function representing the rotation: \begin{eqnarray} {\bf n}(t) = S(t) \left( \begin{array}{cc} \cos\Theta(t)\\ \sin\Theta(t) \end{array} \right) \ , \end{eqnarray} where $S(t)$ is $S(R=200 \ {\rm \mu m})$ at time $t$, and $\Theta(t)$ is the direction of the LOO at time $t$. While keeping a high orientational order $S(t)$, the global orientation of MTs $\Theta (t)$ rotates by about $\pi$ radians in 6 hours. The average velocity of MTs gliding at the lower kinesin density is about $0.2\ {\rm \mu m/s}$, so an MT glides about 4.3 mm while the direction of the LOO rotates by $\pi$. Such rotation was observed in every area of the sample, extending over more than 4 mm $\times$ 4 mm. This chiral symmetry breaking in a globally nematic ordered state could be attributed to the chirality of the microtubule filaments. It is known that 14-protofilament microtubules are dominant (96\%) among MTs polymerized in the presence of GMPCPP, as in our protocol \cite{Hyman1995}. The 14-protofilament type is also more abundant (61\%) than the normal 13-protofilament microtubules (32\%) in some standard protocols for polymerizing microtubules \cite{Ray1993}. The 14-protofilament MT has a left-handed chirality with a super-twisted protofilament lattice. If kinesin bound to the substrate moves along the protofilaments, then the microtubule must rotate counterclockwise when looking in the direction of motion \cite{Ray1993}.
MTs spinning counterclockwise during propulsion exhibit clockwise rotation on the substrate when viewed from the top, which corresponds to counterclockwise rotation in the inverted microscope used in the present experiment (shown in Fig.~\ref{fig:rotation}). If each MT experiences a clockwise torque at each point, then the overall nematic pattern should rotate in the clockwise (counterclockwise under the inverted microscope) direction synchronously over space. This is similar to the phenomenon in which swimming bacteria near a substrate exhibit clockwise rotating trajectories owing to the interaction between counterclockwise rotating flagella and the solid substrate \cite{Maeda1976,Frymier1995,Lauga2006,Hu2015}. It is also similar to the Lehmann rotation effect of chiral nematic liquid crystals under non-equilibrium conditions, in which the global pattern rotates because each molecule rotates at each position owing to the cross-coupling between the thermal gradient and the angular velocity through molecular chirality \cite{Oswald2008,Yamamoto2017,Yamamoto2018}. A weak chiral symmetry breaking of gliding MTs has also been observed in motility assays with dynein motors \cite{Sumino2012} and with kinesin \cite{Kim2018}. Although the hierarchical connection between the molecular-level chirality of MTs and the macroscopic level is still elusive in cell-chirality formation during the development of organisms, a similar connection is well studied in the macroscopic rotational motion of chiral liquid crystals due to the non-equilibrium cross-coupling effect \cite{Yamamoto2018}. \begin{figure}[bthp] \centering \includegraphics[width=.5\linewidth]{fig_rotation_.eps} \caption{ At about $0.05$ filaments per ${\rm \mu m^2}$, the global direction of the LOO pattern rotates by $\pi$ in the anticlockwise direction in $6$ hours. (a) Snapshots of the rotation of the LOO pattern every 2 hours. Bars in the white circle show the global orientation at each time. Their colors are the same as shown in (b).
(b) Time evolution of the direction and magnitude of the LOO. The angle between the position of each marker and the positive $x$-axis shows the global direction, and the distance from the center shows the magnitude of the orientational order $S$. Color represents the time since the MTs began to glide. Scale bar $20\ {\rm \mu m}$. } \label{fig:rotation} \end{figure} \subsubsection{Numerical simulations} \label{sec:numerical} To test whether the patterns are actually determined by the strength of volume exclusion, we carried out numerical simulations based on a particle-based stochastic model of self-driven objects with nematic alignment and various strengths of volume exclusion. A numerical model with a similar motivation was investigated in detail in Ref. \cite{Abkenar2013}, in which rod-shaped elements are explicitly assumed and alignment between the rods is individually induced by volume exclusion. Here we propose a new numerical model as follows. The main feature of our model is that the strengths of the alignment interaction and the volume exclusion can be controlled independently; namely, we consider the case in which, even when an object can easily cross over others, their directions can still align well with each other. Our model also assumes anisotropic mobility, so that each object hardly moves in the direction perpendicular to its polarity, which corresponds to the long axis of the MT in motility-assay experiments. The model is formalized mathematically in the next paragraph, and further details are given in the SI \cite{SI}. Let us assume $N$ objects in a square box with periodic boundaries in two dimensions. The location of the $j$-th object, ${\bm x}_j =(x_j,y_j)$ ($j=1, 2, \cdots, N$), evolves over time $t$ obeying \begin{equation} \label{eq:vdyn} {\bm Z} ({\bm q}_j) \frac{d {\bm x}_j}{dt} = v_0 {\bm q}_j + {\bm J_j^{v}} \ , \end{equation} where the vector ${\bm q}_j$ denotes the polarity of the $j$-th object.
The polarity is assumed to maintain a constant magnitude, {\it i.e.} ${\bm q}_j=(\cos \theta_j, \sin \theta_j)$, and its direction $\theta_j$ obeys \begin{equation} \label{eq:thetadyn} \frac{d \theta_j}{dt} = - {J_j^{q}}_x \sin \theta_j + {J_j^{q}}_y \cos \theta_j + \xi_j(t) \ . \end{equation} Anisotropic mobility is introduced through the anisotropic friction tensor ${\bm Z} ({\bm q}_j)$ on the left-hand side of Eq. (\ref{eq:vdyn}), which is defined by ${\bm Z} ({\bm q}_j)= {\bm q}_j \otimes {\bm q}_j + r_{\zeta}^{-1} ({\bm I} - {\bm q}_j \otimes {\bm q}_j)$, where $r_{\zeta}=\zeta_{\parallel}/\zeta_{\perp}$ is the ratio of the friction coefficients in the parallel ($\zeta_{\parallel}$) and perpendicular ($\zeta_{\perp}$) directions. Here, $\otimes$ denotes the tensor product, and ${\bm I}$ is the identity matrix. The first term on the right-hand side of Eq. (\ref{eq:vdyn}) means that each object moves along its polarity ${\bm q}_j$ with a constant speed $v_0$ in the absence of volume exclusion interactions. The second term represents the mechanical volume exclusion, given by \begin{equation} \label{eq:volumeexclusion} {\bm J_j^{v}} = - \beta \sum_{j'} \frac{r \Delta {\bm x}_{j,j'}}{|\Delta {\bm x}_{j,j'}|^2} \end{equation} when $|\Delta {\bm x}_{j,j'}|<r$ with $\Delta {\bm x}_{j,j'}= {\bm x}_{j'} - {\bm x}_{j}$, and ${\bm J_j^{v}}={\bm 0}$ otherwise. The coefficient $\beta$ indicates the strength of the volume exclusion, and $r$ is the interaction range. The first and second terms on the right-hand side of Eq. (\ref{eq:thetadyn}) represent the nematic interaction, given by \begin{equation} \label{eq:nematicint} {\bm J_j^{q}} = 2 \alpha \sum_{j'} \left( {\bm q}_{j} \cdot {\bm q}_{j'} \right) {\bm q}_{j'} \end{equation} when $|\Delta {\bm x}_{j,j'}|<r$ with $\Delta {\bm x}_{j,j'}= {\bm x}_{j'} - {\bm x}_{j}$, and ${\bm J_j^{q}}={\bm 0}$ otherwise.
The interaction range of the nematic interaction is assumed here to be identical to that of the mechanical volume exclusion, $r$. The coefficient $\alpha$ indicates the strength of the nematic alignment. The last term $\xi_j(t)$ in Eq. (\ref{eq:thetadyn}) is the noise on the polarity, assumed to be Gaussian white noise with $\langle \xi_j \rangle =0$ and \begin{equation} \langle \xi_j (t) \xi_{j'} (t') \rangle = 2 R \delta_{j,j'} \delta(t-t') \ , \label{eq:xidispersion} \end{equation} where $R$ is the noise strength, corresponding to the inverse correlation time of the polarity direction for an isolated object. The time evolution of ${\bm x}_j$ and $\theta_j$ is numerically calculated from Eqs. (\ref{eq:vdyn}) and (\ref{eq:thetadyn}) by Heun's method. Time $t$ is discretized into steps with the interval $dt = 0.004$, and the numerical integration is carried out up to $t=2,560$. The parameters are set to $v_0=1$, $r_{\zeta} =0.01$, $R=0.1$, $\alpha=5$ and $r=1$. The linear system size $L$ depends on the object density $\rho$ as $L=\sqrt{N/\rho}$ for both the $x$ and $y$ directions with the given number of objects $N$. \begin{figure*}[hbtp] \centering \includegraphics[width=.99\linewidth]{fig_numsimulation.eps} \caption{ Numerical simulation results for the collective motion of self-driven objects with volume exclusion effects, nematic alignment, and anisotropic mobility. (a) Phase diagram. The top and bottom rows display steady-state snapshots for the cases with weak and strong volume exclusion ($\beta=0.07$ and $\beta=5.0$), respectively. The horizontal axis indicates the object density, defined with the interaction range as the unit of length. The dotted and broken lines indicate the transition lines. (b) Degree of accumulation against object density for the case with strong volume exclusion ($\beta=5.0$). Error bars indicate the fitting error.
(c) Orientational order $S$ (blue broken line) and fraction of ordered regions (red solid line) against object density for the case with weak volume exclusion ($\beta=0.07$). Error bars indicate the standard error ($n=8$). (c-1, 2 and 3) Histograms of local orientational order for $\rho=0.1$, $0.5$ and $1.5$, respectively. The dotted and broken lines in (b) and (c) correspond to those in (a), respectively. The gray dash-dotted line is a guide to the eye showing the increase of the fraction of ordered ROIs, assuming a linear increase. } \label{fig:simulation} \end{figure*} \begin{figure*}[btp] \centering \includegraphics[width=.95\linewidth]{fig_numsimulation2.eps} \caption{Dependence of the steady-state structure and dynamics on the volume exclusion strength in the numerical results. (a) Fraction of ordered regions, rescaled variance of the orientational order $\tilde{\rm Var}(S)$, average speed in the ordered regions, and squared velocity correlated with orientational order $V^2_S$ against the strength $\beta$ of volume exclusion for $\rho=1.5$. The object number is fixed at $N=80,000$. The regimes I, II and III, which correspond to the LOO, phase-separated and confluent states, respectively, are identified by whether $\tilde{\rm Var}(S)$ is almost zero ($\tilde{\rm Var}(S)<0.02$) or not. (b,c,d) Snapshots at the final time step of numerical simulations for $\rho=1.5$. The strengths of volume exclusion are set to $\beta=0.02$ (b), $\beta=0.2$ (c) and $\beta=10$ (d). (e) The same variables as in (a) for the case of $\rho=1.0$, against the strength $\beta$ of volume exclusion. The object number is fixed at $N=80,000$. In this case, for $\beta \leq 0.1$, the LOO is realized. (f, g, h) Snapshots at the final time step of numerical simulations. The strengths of volume exclusion are set to $\beta=0.05$ (f), $\beta=0.5$ (g) and $\beta=10$ (h). (i) The same variables as in (a) for the case of $\rho=0.5$, against the strength $\beta$ of volume exclusion.
The object number is fixed at $N=80,000$. (j, k, l) Snapshots at the final time step of numerical simulations. The strengths of volume exclusion are set to $\beta=0.05$ (j), $\beta=0.5$ (k) and $\beta=10$ (l). See the SI for the details of the analysis and the more detailed $\rho$- and $\beta$-dependence \cite{SI}.} \label{fig:simulation2} \end{figure*} The results with $N=20,000$ objects recapitulate our experimental observations well. When the strength of volume exclusion is high enough, as for $\beta=5$, the LGS state with motionless aggregations is observed at high object density, and a disordered state is observed at low object density [Fig. \ref{fig:simulation}(a)]. The LGS state is quantified in Fig. \ref{fig:simulation}(b), which shows the degree of accumulation $A$ at the steady state against the object density; the tendency is similar to the experimental observation given in Fig. \ref{fig:global_plots2}(a). The typical size of an aggregation grows with increasing object density. Furthermore, as shown in Figs. S2 and S3, at $\rho > 1$ for $\beta \geq 5$ almost the entire space is filled with objects, and topological defects with slow pair-annihilation dynamics are observed [see the SI \cite{SI}; see also Fig. \ref{fig:simulation2}(d)]. In contrast, when objects had a low strength of volume exclusion, as for $\beta=0.07$, the LOO, nematic band and disordered states were observed depending on the object density. The LOO state is quantified through the global orientational order $S$, the fraction of ordered ROIs, and the histograms of local orientational order at the steady state against the object density in Fig. \ref{fig:simulation}(c), which agrees well with the experimental result in Fig. \ref{fig:global_plots}(c). Note that the orientational order is known to become zero for larger system sizes, since the bending mode of the stream is unstable in the phase-coexistence regime, i.e. the nematic band \cite{Ginelli2010}.
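For concreteness, the model dynamics of Eqs. (\ref{eq:vdyn})-(\ref{eq:xidispersion}) can be sketched as follows. This is an illustrative toy implementation, not the production code: it uses a simple Euler-Maruyama step and a small $N$, whereas the simulations above use Heun's method and far larger systems.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 200, 20.0                      # small system for illustration (rho = 0.5)
v0, r_zeta, R_noise = 1.0, 0.01, 0.1  # speed, friction ratio, noise strength
alpha, beta, r = 5.0, 0.07, 1.0       # alignment, volume exclusion, range
dt = 0.004

x = rng.uniform(0, L, (N, 2))
th = rng.uniform(0, 2 * np.pi, N)

def step(x, th):
    q = np.stack([np.cos(th), np.sin(th)], axis=1)
    d = x[None, :, :] - x[:, None, :]          # Delta x_{j,j'} = x_{j'} - x_j
    d -= L * np.round(d / L)                   # periodic boundaries
    dist = np.linalg.norm(d, axis=2)
    near = ((dist < r) & (dist > 0)).astype(float)
    safe = np.where(dist > 0, dist, 1.0)
    # volume exclusion J^v, Eq. (volumeexclusion)
    Jv = -beta * np.einsum("jk,jkd->jd", near * r / safe**2, d)
    # nematic interaction J^q, Eq. (nematicint)
    Jq = 2 * alpha * np.einsum("jk,kd->jd", near * (q @ q.T), q)
    # apply the inverse friction tensor: Z^{-1} = q q^T + r_zeta (I - q q^T)
    rhs = v0 * q + Jv
    par = np.sum(rhs * q, axis=1, keepdims=True) * q
    x_new = (x + (par + r_zeta * (rhs - par)) * dt) % L
    dth = (-Jq[:, 0] * np.sin(th) + Jq[:, 1] * np.cos(th)) * dt
    dth += np.sqrt(2 * R_noise * dt) * rng.standard_normal(N)
    return x_new, th + dth

for _ in range(100):
    x, th = step(x, th)
print(round(float(np.abs(np.mean(np.exp(2j * th)))), 3))  # global nematic order
```

The printed value is the global nematic order parameter of the final configuration; scanning $\beta$ and $\rho$ in such a loop reproduces, qualitatively, the role of the two independently tunable interaction strengths described above.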
To examine the crossover from the LOO to the LGS state as the volume exclusion effect increases, we investigated the steady-state structure and dynamics for various volume exclusion strengths $\beta$ in Fig. \ref{fig:simulation2}, for $\rho=1.5$, $\rho=1.0$ and $\rho=0.5$ (with $N=40,000$, $N=80,000$ and $N=80,000$, respectively). To quantify the structure, we evaluated the fraction of regions with high orientational order, $S>0.75$, and the variance of the order parameter, as shown in Fig. \ref{fig:simulation2}(a), first for $\rho=1.5$. The fraction of ordered regions is almost $1$ for $\beta \leq 0.1$, whereas for $\beta \geq 0.2$ it becomes smaller and deviates from $1$ [Fig. \ref{fig:simulation2}(a); red diamonds]. A further increase of $\beta$, for $\beta \geq 1$, leads to an increase of the fraction, and for $\beta \geq 5$ the ordered fraction becomes roughly $1$ again. Typical snapshots at each value of $\beta$ are shown in Figs. \ref{fig:simulation2}(b)-(d): for $\beta=0.05$, the LOO state is observed [Fig. \ref{fig:simulation2}(b)], the same pattern as for $\beta=0.07$ and object density 0.95 in Fig. \ref{fig:simulation}(a). In contrast, for $\beta=0.5$, density phase separation occurs [Fig. \ref{fig:simulation2}(c)]. Furthermore, for $\beta = 10$, the entire system becomes filled with objects [Fig. \ref{fig:simulation2}(d)] because the objects are unable to overlap with each other [Fig. \ref{fig:simulation2}(a); $\beta \geq 5$]. We refer to this state as the confluent state in this article. The same conclusion is obtained by evaluating the rescaled variance of the orientational order $\tilde{\rm Var}(S) = (\Delta S^{\rm ROI})^2 / \langle S^{\rm ROI} \rangle^2_{{\rm ROI},t}$, with the orientational order $S^{\rm ROI}$ defined in each ROI and its variance $(\Delta S^{\rm ROI})^2$ taken over ROIs and time [Fig. \ref{fig:simulation2}(a); black inverted triangles]. We identified the regimes I, II and III in Fig.
\ref{fig:simulation2}(a) by looking at whether $\tilde{\rm Var}(S)$ is almost zero or not. The regimes correspond to the LOO, phase-separated and confluent states, respectively. Next, we investigated the case of $\rho=1$, as shown in Figs. \ref{fig:simulation2}(e)-(h). We found that, in this case too, the homogeneous order is violated when $\beta$ is increased beyond $0.1$ [see the red circles in Fig. \ref{fig:simulation2}(e)]. However, the confluent state is not observed even at $\beta =10$ [Fig. \ref{fig:simulation2}(h)]. The same analyses were performed for $\rho=0.5$, as shown in Figs. \ref{fig:simulation2}(i)-(l). The fraction of ordered regions decreases with increasing $\beta$ for $\beta < 0.2$, whereas for $\beta \geq 0.5$ it stays around $0.3$. Again, note that the bending mode of this band is known to become unstable at much larger system sizes \cite{Ginelli2010}. When $\beta$ is increased up to $0.2$, the band becomes much narrower and the bending mode becomes unstable on a smaller length scale [Fig. \ref{fig:simulation2}(k)]. For larger $\beta$, such as $\beta=10$, the band itself disassembles [Fig. \ref{fig:simulation2}(l)]. We also investigated the dynamics, first by evaluating the average speed of objects in the ordered regions. In what follows, we focus on Figs. \ref{fig:simulation2}(i)-(l). As shown by the blue circles in Fig. \ref{fig:simulation2}(i), the average speed gradually decreases as the volume exclusion strength $\beta$ is increased. The average speed stays finite even for $\beta=10$, where the snapshot already shows an aggregated state. This may be because objects in an aggregation can slowly move in the direction perpendicular to that of the orientational order, since the anisotropy of friction $r_{\zeta}$ is finite here.
To see only the object motion along the direction of the orientational order of each aggregation, we also plotted the squared velocity correlated with orientational order, \begin{equation} V^2_S = \frac{\langle (1/N) \sum_{k {\rm : ROI}} \sum_{ij} S_{ij}^{\rm ROI} v_i^k v_j^k \rangle_{\rm ROI}}{\langle n^{\rm ROI} S^{\rm ROI} \rangle_{\rm ROI}} \end{equation} [magenta squares in Fig. \ref{fig:simulation2}(i)], with the object number fraction $n^{\rm ROI} \equiv (1/N) \sum_{k {\rm : ROI}} 1$, the tensor orientational order $S_{ij}^{\rm ROI} \equiv \sum_{k {\rm : ROI}} 2 [q_i q_j - (1/2) \delta_{ij}]$ ($i=x,y$, $j=x,y$) in each ROI, and the velocity vector $v_i^k$ ($i=x,y$) of the $k$-th object. The summation $\sum_{k {\rm : ROI}}$ ran over the objects in the ROI, and the average $\langle \cdot \rangle_{\rm ROI}$ was taken over the various ROI locations. Also, $S^{\rm ROI}$ is the orientational order defined in the ROI, which is identical to the positive eigenvalue of the tensor order parameter $S_{ij}^{\rm ROI}$. This variable $V^2_S$ also shows a gradual decrease as $\beta$ is increased and, for $\beta \gg 0.1$, asymptotically goes to zero with increasing $\beta$. Similar curve shapes are obtained in Figs. \ref{fig:simulation2}(a) and (e). \subsection{Statistics of binary collisions} \label{sec:binarycollision} In the previous subsections, we found various collective patterns of MTs, such as the LOO and LGS states, depending on the strength of volume exclusion. To understand the mechanism by which volume exclusion affects the emerging patterns, we examined the statistics of binary collisions between MTs. We decreased the MT density to $0.4\times 10^{-3}$ filaments per ${\rm \mu m^2}$ in order to focus experimentally on pair interactions between isolated MTs. When two filaments encountered each other, they exhibited three types of behavior: crossing-over, alignment or anti-alignment, as already shown in Figs. \ref{fig:snaps}(a)-(c).
To quantify the consequences of a binary collision, we measured the incoming and outgoing angles defined in Sec. \ref{sec:strength} and the duration time $\tau$ at the lower and higher kinesin densities. The duration time is defined as the time from when two MTs touch until they detach. In order to characterize the degree of alignment just after collisions, independently of other factors such as fluctuations of the moving direction, we defined perfect alignment and perfect anti-alignment events, assigned $\theta_{out}=0$ and $\pi$ respectively, as follows: (1) events in which one of the colliding MTs becomes parallel (anti-parallel) to the other; (2) events whose $\tau$ is longer than the time in which an MT glides 4 ${\rm\mu m}$ and whose incoming angle $\theta_{in}$ is smaller (larger) than $\pi/2$. As shown by the red lines in Fig. \ref{fig:binary}(a), which indicate the averages of the outgoing angles for every $\pi/9$ of incoming angle, binary collisions give rise to nematic alignment at the lower kinesin density. In contrast, at the higher kinesin density, binary collisions with obtuse incoming angles exhibited effective polar alignment rather than nematic alignment [Fig. \ref{fig:binary}(b)]. Based on this observation, we slightly modified the numerical model given in the previous subsection to recapitulate the effective polar alignment tendency seen in Fig. \ref{fig:binary}(b). Simulation results with this model are shown in Fig. S8, which indeed shows moving clusters. Comparing the situations at high and low kinesin densities, a clear difference appears in the distribution of the collision duration time $\tau$ defined above. At the higher kinesin density, acute incoming angles tend to show a longer duration time $\tau$ than obtuse incoming angles, while $\tau$ at the lower kinesin density did not exhibit a significant difference between acute and obtuse incoming angles [Fig. \ref{fig:binary}(c)].
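The duration-based criterion (2) for assigning perfect outgoing angles can be written compactly as below. The gliding speed $v$ and the 4-${\rm \mu m}$ threshold are taken from the text; criterion (1), observed parallelism, requires the full trajectories and is omitted from this sketch.

```python
import numpy as np

def classify_outgoing(theta_in, tau, v=0.05, d=4.0):
    """Criterion (2): assign a perfect outgoing angle from the contact
    duration tau [s] and the incoming angle theta_in in [0, pi].

    v : assumed gliding speed [um/s]; d : gliding-distance threshold [um].
    Returns 0.0 (perfect alignment), np.pi (perfect anti-alignment), or
    None (undecided; the measured outgoing angle is used instead).
    """
    t_glide = d / v                  # time for an MT to glide d um
    if tau > t_glide:
        return 0.0 if theta_in < np.pi / 2 else np.pi
    return None

# long contact at an acute angle -> perfect alignment (theta_out = 0)
print(classify_outgoing(np.pi / 6, tau=200.0))       # 0.0
# long contact at an obtuse angle -> perfect anti-alignment (theta_out = pi)
print(classify_outgoing(5 * np.pi / 6, tau=200.0))
# short contact -> undecided
print(classify_outgoing(np.pi / 6, tau=10.0))        # None
```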
To quantify this difference, we define the following quantity: \begin{eqnarray} \label{eq:t} T=\frac{\langle\tau(\{\theta_{in}\}:\theta_{in}\leq \pi/6)\rangle-\langle\tau(\{\theta_{in}\}:\theta_{in}\geq 5\pi/6)\rangle}{\langle\tau(\{\theta_{in}\})\rangle} \end{eqnarray} where $\tau(\{\theta_{in}\})$ is the mean collision duration time for a set of $\theta_{in}$. As clearly indicated by $T$ in Fig. \ref{fig:binary}(d), only $\tau$ at the higher kinesin density shows a significant difference. Based on these results, we assume that the asymmetry of the duration time $\tau$ produces the coherent motion of MTs in a single cluster. If an MT collides with a pair of MTs, the three MTs align in parallel, and then the middle one, sandwiched by the others, cannot escape from the group [Fig. \ref{fig:binary}(e)]. Thus, a condition for cluster formation can be estimated from the duration time $\tau$ and the collision frequency $z$: when $z$ is larger than $1/\tau$, clusters are formed. We assume the MT density $\rho$ is uniform in the initial state, so that $z$ is given by \begin{eqnarray} \label{eq:colfre} z = 2 \ell v_m \rho/\pi \ , \end{eqnarray} where $\ell$ is a typical length of an MT and $v_m$ is a typical velocity of an MT. One obtains the condition for cluster formation by replacing $z$ with the inverse duration time $1/\tau$ in Eq. (\ref{eq:colfre}), which gives the lowest density to form clusters as $\rho_c^{\rm est} = \pi/(2 \ell v_m \tau)$. Using the measured values $\tau \sim$ 300 s, $\ell \sim 5\ {\rm \mu m}$, and $v_m \sim 0.05 \ {\rm \mu m /s}$, we obtain $\rho_c^{\rm est} \sim 0.02$ filaments per ${\rm \mu m^2}$. This value of $\rho_c^{\rm est}$ is comparable with the experimental result, $\rho_c^{LGS}= 0.01$ filaments per ${\rm \mu m^2}$. It appears that a long accompaniment after a collision at an acute angle causes the polar motion of a cluster. Note that the kinesin density also affects the velocity of MTs.
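Setting $z=1/\tau$ in Eq. (\ref{eq:colfre}) gives $\rho_c^{\rm est} = \pi/(2 \ell v_m \tau)$; a one-line numerical check with the quoted values:

```python
import math

# rho_c follows from setting z = 2 l v_m rho / pi equal to 1 / tau
l, v_m, tau = 5.0, 0.05, 300.0   # um, um/s, s (values quoted in the text)
rho_c = math.pi / (2 * l * v_m * tau)
print(round(rho_c, 3))           # 0.021 filaments per um^2
```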
The speed at low kinesin density is 0.2 ${\rm \mu m/s}$, and that at high kinesin density is 0.05 ${\rm \mu m/s}$. Although MTs run slower as the kinesin density increases, this does not change the conclusion of this discussion. \begin{figure*}[thbp] \centering \includegraphics[width=.95\linewidth]{fig_binary.eps} \caption{(a,b) Effects of binary collisions of MTs. Angular relation between two MTs before and after a binary collision at the lower kinesin density (a, number of collision events $n=147$) and at the higher kinesin density (b, $n=135$). Each dot represents one collision event. The red line represents the mean and standard error of the outgoing angles for every $\pi/9$ of incoming angle. (c) Collision duration time $\tau$, from touch to detachment, for every $\pi/9$ of incoming angle. Red circles and blue squares represent the durations at the higher and lower kinesin densities, respectively. (d) Effective difference between the durations for acute ($\leq\pi/6$) and obtuse ($\geq5\pi/6$) incoming angles. (e) Schematic image of cluster formation.} \label{fig:binary} \end{figure*} \subsection{Loop formation}\label{sec:loop} As discussed in Sec. \ref{sec:binarycollision}, the clear difference in duration time between alignment and anti-alignment leads to the unidirectional motion of each cluster at high kinesin density. After clusters form, cluster-cluster collisions become dominant, as shown in the areas surrounded by blue lines in Fig. \ref{fig:defects} and SI movie 2 \cite{SI}. In addition, some moving clusters changed their direction of motion and then formed loops and arcs in which all belonging MTs rotate in the same direction [Figs. \ref{fig:defects}(a)-(c)]. The average inner diameter was 4.2 ${\rm \mu m}$. These loops are reminiscent of cytoskeletal arcs in the gliding dynamics of single filaments \cite{Kawamura2008,Ziebert2015,Liu2011b,Kabir2012} and bundles \cite{HenryHess2005,Schaller2011,Tamura2011,Inoue2013,Lam2014a}, and in collective motion \cite{Sumino2012}.
These can be characterized by the loop size and the rotation direction of filaments in a loop. Typical diameters of the arcs of single filaments, bundles and collective motion were about 2, 5 and 400 ${\rm \mu m}$, respectively. The rotation directions of single-filament and bundle loops were unidirectional, while that of collective motion was bidirectional. The loops in our experiments are similar to the bundle loops in size and rotation direction. Here, we show that the model proposed above can recapitulate this loop structure with a small modification. We added an interaction term expressing contact following to Eq. (\ref{eq:nematicint}), which is now rewritten as \begin{eqnarray} {\bm J^{q}}_j &=& 2 \alpha \sum_{j'} \left( {\bm q}_{j} \cdot {\bm q}_{j'} \right) {\bm q}_{j'} + \alpha_{\rm CF} \sum_{j'} \frac{1}{4} \left( 1+{\bm q}_{j} \cdot \frac{\Delta {\bm x}_{j,j'}}{|\Delta {\bm x}_{j,j'}|} \right) \nonumber \\ &\times& \left( 1+{\bm q}_{j'} \cdot \frac{\Delta {\bm x}_{j,j'}}{|\Delta {\bm x}_{j,j'}|} \right) \frac{\Delta {\bm x}_{j,j'}}{|\Delta {\bm x}_{j,j'}|} \ , \label{eq:nematicintwithCF} \end{eqnarray} reflecting the following observations. When two microtubules collided with each other, their direction after the collision did not always become the average of the incoming directions. Rather, the MT bumping into the rear of another MT tended to follow it. The result is demonstrated in Fig. \ref{fig:defects}(d) for $\alpha_{\rm CF}=1.0$, $\alpha=1.0$, $\beta=0.05$ and $\rho=0.5$ (the other parameters are the same as above), which indeed shows the loop structure. However, this loop structure appears to shrink and be unstable over longer times. A similar loop structure is observed transiently in numerical simulations of self-propelled systems with a vision cone \cite{Barberis2016,Peruani2017}.
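The contact-following contribution in Eq. (\ref{eq:nematicintwithCF}) is maximal when both polarities point from the bumping object toward the one ahead. A hypothetical single-pair sketch (the function and inputs are illustrative, not the simulation code):

```python
import numpy as np

def contact_following(q_j, q_k, dx, alpha_cf=1.0):
    """Contact-following term of Eq. (nematicintwithCF) for one pair.

    q_j, q_k : unit polarity vectors; dx = x_{j'} - x_j (assumed to be
    within the interaction range r); alpha_cf : coupling strength.
    """
    e = dx / np.linalg.norm(dx)
    weight = 0.25 * (1 + q_j @ e) * (1 + q_k @ e)
    return alpha_cf * weight * e

q = np.array([1.0, 0.0])
# follower directly behind a co-moving leader: maximal pull toward it
print(contact_following(q, q, np.array([0.5, 0.0])))          # [1. 0.]
# leader moving perpendicular to the line of centers: weaker following
print(contact_following(q, np.array([0.0, 1.0]), np.array([0.5, 0.0])))
```

With both polarities along the separation vector, the weight $(1/4)(1+{\bm q}_j\cdot\hat{e})(1+{\bm q}_{j'}\cdot\hat{e})$ reaches 1, pulling the follower toward the leader; it vanishes when either MT points away from the other.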
\begin{figure*}[thbp] \centering \includegraphics[width=.9\linewidth]{fig_defects_1.eps} \caption{ (a-c) Snapshots of moving clusters and loops at the higher kinesin density, $6.1 \times 10^3$ per ${\rm \mu m^2}$. The MT density is higher than the density threshold for the aggregation pattern, 0.22 filaments per ${\rm \mu m^2}$. This induces frequent cluster-cluster collisions [blue square in (a)] and results in the formation of loops whose rotation directions are random, as indicated by the yellow circles with arrows in (a). Scale bar is $20\ {\rm \mu m}$. Time evolution of the loops in the areas surrounded by the blue (b) and red (c) squares in the snapshot (a). A tip of a cluster bent to collide with itself and was caught into a loop with a 4 ${\rm \mu m}$ inner diameter. The snapshot (a) corresponds to 48 min after this loop formation. Scale bar is $10\ {\rm \mu m}$. (d) Snapshots from the numerical simulation showing loop formation. The numerical model with contact following was used. See the main text for details. } \label{fig:defects} \end{figure*} \section{Discussion} \label{sec:discussion} This study examined the effect of volume exclusion on collective motion whose symmetry is polar motion with nematic alignment, and found that excessively strong volume exclusion destroys orientational order and forms clusters. Our findings explain previous works \cite{Hussain2013a,Sumino2012,Inoue2015,Nishiguchi2017,Peruani2012,Zhang2010} in terms of the volume exclusion effect. The probability of crossing in the motility assay with actin and myosin by S. Hussain et al. \cite{Hussain2013a} is about $50\%$, that with MTs and dyneins by Y. Sumino et al. \cite{Sumino2012} is about $20\%$, and that with microtubules and kinesins with added methylcellulose by Inoue et al. \cite{Inoue2015} is about $50\%$. All these systems give high orientational order. Also, elongated {\it E.
coli} can cross over each other, showed true long-range orientational order \cite{Nishiguchi2017}. These results can be interpreted as a consequence of the fact that a high crossing-over probability combined with an alignment interaction can generate orientational order. In contrast, systems in which crossing-over is prohibited, such as the myxobacteria studied by F. Peruani et al. \cite{Peruani2012} and the {\it B. subtilis} studied by H. P. Zhang et al. \cite{Zhang2010}, formed moving-cluster patterns and did not exhibit any orientational order. Our results are consistent with these observations, {\it i.e.} orientational order emerges at low kinesin density, where MTs are able to cross over, and the LGS emerges at high kinesin density, where they cannot. It may seem strange that stronger excluded-volume effects, from which one might also expect a stronger alignment effect, produce lower orientational order. According to the statistics of binary collisions, however, the nematic interaction does not disappear even when the volume exclusion is weak. Based on this, our numerical model implements the strength of alignment independently of the volume exclusion strength. Furthermore, previous insights and our results in Fig. \ref{fig:simulation2}(d) suggest that highly packed self-propelled rods show high orientational order \cite{Samuel2012,Abkenar2013}. When the area fraction of particles is close to 1, every particle is surrounded by others and aligned in the same direction. The mechanism forming these ordered patterns is common to highly packed rods, including those that occasionally reverse their direction of motion, for example, fluidized granular rods, myxobacteria, and neural stem cells \cite{Narayan2007a,Wu2009,Kawaguchi2017}. Theoretically, this high orientational order is considered a quasi-long-range order \cite{Chate2006} for so-called ``active nematics'' with bi-directional motion and nematic interaction. The difference between uni- and bi-directional motion would appear in the regime of intermediate area fraction.
When there is enough room to move around, unidirectionally motile particles gather and form asters and vortices \cite{Nedelec1997,Kruse2005a,Backouche2006,Torisawa2016a}. On the other hand, bidirectionally motile particles can escape from such aster and vortex structures by reversing their direction of motion. The reversal time of the direction of motion would then be a parameter determining whether these structures are stable or not. The critical densities obtained in experiments can be compared with those in numerical simulations. The length scale of the experimental results is normalized by half of the typical MT length, $\ell/2 \sim 2.5 \ {\rm \mu m}$. As a result, the normalized critical densities from disorder to the ``liquid-gas-like phase separation (LGS)'' state, $\rho_c^{\rm LGS}$, and to the coexistence of ordered and disordered phases, $\rho_c^{\rm CO}$, become $0.06$ and $0.23$, respectively. These are of the same order of magnitude as those obtained in numerical simulations, $\rho_{cs}^{\rm LGS}=0.2$ and $\rho_{cs}^{\rm CO}=0.3$. The magnitude relation is also the same between them, $\rho_c^{\rm LGS}<\rho_c^{\rm CO}$ and $\rho_{cs}^{\rm LGS}<\rho_{cs}^{\rm CO}$. In the experiments, various striking patterns appeared. Loops and moving clusters could be due to contact following, which appears to act when volume exclusion is strong. Indeed, as shown in Sec.~\ref{sec:loop}, contact following can yield loops and moving clusters and maintain them for a while. In addition, loops are a typical structure of unidirectionally motile rods with strong volume exclusion at intermediate density. Comparing their size and rotation direction, the loops in our experiment are similar to the bundle loops in previous studies \cite{HenryHess2005,Schaller2011,Kakugo2011,Tamura2011,Inoue2013,Lam2014a}. However, the origins of the alignment interaction are different: previous studies used cross-linkers, while we instead increased the kinesin density without cross-linkers.
Although the interaction details are different, we believe that almost the same loop-forming mechanism is at work because the alignment behaviors of colliding filaments are common. \section{Conclusions} \label{sec:conclusions} In summary, we focused on the effect of volume exclusion on the global patterns of collective motion in a motility assay using microtubules (MTs) and kinesin, in order to give a unified view of the patterns reported in previous studies. We varied the strength of volume exclusion between MTs, i.e. the probability of MT overlapping in collisions, by changing the kinesin density. MTs at low kinesin density could cross over, whereas those at high kinesin density did not cross over and instead showed perfect alignment more frequently. Surprisingly, however, we found that orientational order emerged when volume exclusion was weak, whereas cluster patterns without orientational order emerged when volume exclusion was strong. We also found that the patterns depend on MT density. Firstly, near the critical density, the static appearance of MTs changes from disordered to patterned states, which include the nematic band, the coexistence of ordered and disordered phases, and the ``long-range orientational order (LOO)" and ``liquid-gas-like phase separation (LGS)" states. The critical density was found to depend on the kinesin density. Secondly, the dynamical appearance of MTs in the LGS state changed from ``moving clusters'' to ``aggregation" when we increased the MT density. Numerical simulation confirmed that whether the system is in a disordered or patterned state depends on the object density. Moreover, under a proper strength of volume exclusion, it could reproduce the characteristic patterns of the experiment, {\it i.e.} the coexistence of ordered and disordered phases and the LOO and LGS states. To the best of our knowledge, this study reports the first experimental results directly showing the effect of volume exclusion. 
It shows that controlling the strength of volume exclusion can produce both long-range alignment and clusters. Although our study focused on experimental systems whose elements move unidirectionally and interact nematically, we expect this approach to be applicable to other combinations of motion and interaction symmetries in future work. \section{Acknowledgment} We gratefully acknowledge the experimental work of past and present members of the Higuchi laboratory. We are grateful to Hideo Higuchi, Motoshi Kaya, Hugues Chate, Ken H Nagai, Daiki Nishiguchi and Natsuhiko Yoshinaga for helpful discussions and for their kind interest in this work. We would like to thank Zvonimir Dogic for practical advice on the experiments. This work is supported by JSPS KAKENHI Grant Numbers 25103004, 16H02212, JP16J06301, and JP16K17777.
\section{Introduction} The dynamics of interacting population species can be described macroscopically by cross-diffusion equations. A well-known model example is the deterministic Shigesada-Kawasaki-Teramoto population system \cite{SKT79}. It can be derived formally from a random-walk model on lattices with transition rates which depend linearly on the population densities \cite[Appendix A]{ZaJu17}. Generalized population cross-diffusion models are obtained when the dependence of the transition rates on the densities is nonlinear. The existence of global weak solutions to these deterministic models was proved for an arbitrary number of species in \cite{CDJ18}. In this paper, we allow for a random influence of the environment and prove the existence of global nonnegative martingale solutions to the corresponding stochastic cross-diffusion system. More precisely, we consider the cross-diffusion equations \begin{equation}\label{1.eq} du_i - \diver\bigg(\sum_{j=1}^n A_{ij}(u)\na u_j\bigg)dt = \sum_{j=1}^n\sigma_{ij}(u)dW_j(t) \quad\mbox{in }{\mathcal O},\ t>0,\ i=1,\ldots,n, \end{equation} with no-flux boundary and initial conditions \begin{equation}\label{1.bic} \sum_{j=1}^n A_{ij}(u)\na u_j\cdot\nu = 0\quad\mbox{on }\pa{\mathcal O},\ t>0, \quad u_i(0)=u_i^0\quad\mbox{in }{\mathcal O},\ i=1,\ldots,n, \end{equation} where ${\mathcal O}\subset\R^d$ with $d=2,3$ is a bounded domain with Lipschitz boundary, $\nu$ is the exterior unit normal vector to $\pa{\mathcal O}$, and $u_i^0$ is a possibly random initial datum. The component $u_i$ of the solution $u=(u_1,\ldots,u_n):{\mathcal O}\times[0,T]\times\Omega\to\R^n$ models the density of the $i^{\text{th}}$ population species, where $x\in{\mathcal O}$ represents the spatial variable, $t\in(0,T)$ the time, and $\omega\in\Omega$ the stochastic variable. The matrix $A(u)=(A_{ij}(u))$ is the diffusion matrix, $\sigma_{ij}(u)$ is a multiplicative noise term, and $W=(W_1,\ldots,W_n)$ is an $n$-dimensional cylindrical Wiener process. 
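For orientation (a sketch only, not used in the analysis below), in the two-species case $n=2$, system \eqref{1.eq} reads componentwise
\begin{align*}
du_1 - \diver\big(A_{11}(u)\na u_1 + A_{12}(u)\na u_2\big)dt &= \sigma_{11}(u)dW_1(t) + \sigma_{12}(u)dW_2(t), \\
du_2 - \diver\big(A_{21}(u)\na u_1 + A_{22}(u)\na u_2\big)dt &= \sigma_{21}(u)dW_1(t) + \sigma_{22}(u)dW_2(t),
\end{align*}
so each species is transported by the gradients of both densities (cross-diffusion) and driven by both noise components.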
Details on the stochastic framework will be given in section \ref{sec.frame}. The diffusion coefficients are given by \begin{equation}\label{1.A} A_{ij}(u) = \delta_{ij}\bigg(a_{i0}+\sum_{k=1}^n a_{ik}u_k^2\bigg) + 2a_{ij}u_iu_j, \quad i,j=1,\ldots,n, \end{equation} where $a_{i0}>0$ and $a_{ij}>0$. This model is derived from an on-lattice model with transition rates $p_i(u)$, which depend quadratically on the densities, i.e.\ $p_i(u)=a_{i0}+\sum_{k=1}^n a_{ik}u_k^2$ for $i=1,\ldots,n$ \cite{ZaJu17}. This quadratic structure is essential for our analysis. To understand this, we need to explain the entropy structure of equations \eqref{1.eq}. \subsection{Entropy structure} Generally, the diffusion matrix in \eqref{1.eq}, originating from general transition rates in the lattice model, is neither symmetric nor positive definite, which significantly complicates the analysis. However, the equations possess a formal gradient-flow or entropy structure under certain conditions. For the sake of simplicity, we sketch this structure in the deterministic context only and refer to \cite[Chapter~4]{Jue16} for details. By entropy structure, we mean that there exists a so-called entropy density $h:\R_+^n\to\R$ such that, still in the deterministic context, system \eqref{1.eq} in the entropy variables $w_i:=\pa h/\pa u_i$, $i=1,\ldots,n$, has a positive semi-definite diffusion matrix $B=(B_{ij})$, \begin{equation}\label{1.B} \pa_t u_i(w) - \diver\bigg(\sum_{j=1}^n B_{ij}\na w_j\bigg) = 0, \end{equation} where $B=A(u)h''(u)^{-1}$ is the product of $A(u)$ and the inverse of the Hessian of $h(u)$, and $u(w)=(h')^{-1}(w)$ is the back transformation. When the transition rates are given by $p_i(u)=a_{i0}+\sum_{k=1}^n a_{ik}u_k^s$ for some $s\ge 1$, the entropy density can be chosen as $h(u)=\sum_{i=1}^n \pi_i h_s(u_i)$, where $\pi_i>0$ are some numbers and $$ h_s(z) = \left\{\begin{array}{ll} z(\log z-1)+1 &\quad\mbox{for } s=1, \\ z^s/s &\quad\mbox{for }s\neq 1. \end{array}\right. 
$$ It was shown in \cite{ChJu04} that $B=(B_{ij})$ in \eqref{1.B} is positive semi-definite in the two-species case $n=2$ with $\pi_1=\pi_2=1$. This property generally does not hold for the $n$-species system. It turns out \cite{CDJ18} that $B$ is symmetric and positive semi-definite if the numbers $\pi_i$ are chosen such that $$ \pi_i a_{ij} = \pi_j a_{ji} \quad\mbox{for all }i,j=1,\ldots,n. $$ This condition is recognized as the detailed-balance condition for the Markov chain associated to $(a_{ij})$, and $(\pi_1,\ldots,\pi_n)$ is the reversible measure. The detailed-balance condition is sufficient but not necessary for the positive semi-definiteness of $B$; in fact, when self-diffusion dominates cross-diffusion (see \eqref{1.eta2} for the precise statement), then $B$ is still positive semi-definite. The entropy structure also yields a priori estimates. Indeed, let $H(u)=\int_{\mathcal O} h(u)dx$ be the so-called entropy. A computation shows that, still in the absence of the stochastic term, $$ \frac{dH}{dt} + \int_{\mathcal O}\sum_{i,j=1}^n \frac{\pa^2 h}{\pa u_i\pa u_j}(u)A_{ij}(u)\na u_i\cdot\na u_j dx = 0. $$ Since $B=A(u)h''(u)^{-1}$ is symmetric and positive semi-definite, so is $h''(u)A(u)=h''(u)Bh''(u)$, being congruent to $B$. Thus, taking into account the special structure of $A(u)$, this identity yields gradient estimates (see Lemma \ref{lem.pd} below). The gradient-flow structure is the key to the analysis of the deterministic analog of \eqref{1.eq}, but there are severe difficulties in the stochastic context. Indeed, neither semigroup techniques \cite{DaZa14,Kru14} nor monotonicity arguments \cite{LiRo13} can be applied because of the properties of the differential operator in \eqref{1.eq}. Stochastic Galerkin methods usually work in Hilbert spaces, and generally they cannot be used since the transformation to entropy variables is nonlinear. 
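To illustrate this difficulty, consider for a moment linear transition rates, i.e.\ $s=1$. Then the entropy variables
$$ w_i = \frac{\pa h}{\pa u_i} = \pi_i h_1'(u_i) = \pi_i\log u_i, \quad i=1,\ldots,n, $$
depend nonlinearly on $u_i$, so a Galerkin ansatz which is finite-dimensional in the variables $w$ is in general not finite-dimensional in the variables $u$.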
In order to overcome these difficulties, we consider quadratic transition rates with $s=2$, which makes the transformation to entropy variables linear, $$ w_i = \frac{\pa h}{\pa u_i} = \pi_i h_2'(u_i) = \pi_i u_i. $$ Still, the diffusion matrix $A(u)$ is not positive definite, but the new diffusion matrix $B=A(u)\operatorname{diag}(1/\pi_1,\ldots,1/\pi_n)$ is positive semi-definite; see Lemma \ref{lem.pd}. This allows us to combine entropy methods for diffusive equations and stochastic techniques. \subsection{State of the art} Before stating our main existence result, let us review the literature. Fundamental results on stochastic partial differential equations of monotone type were obtained already in the 1970s by Pardoux \cite{Par76}. More recently, abstract stochastic evolution equations with locally monotone nonlinearities \cite{LiRo13} or maximal monotone operators \cite{BaRo18} were analyzed. The existence of (mild or pathwise strong) solutions to quasilinear stochastic evolution equations was proved in, e.g., \cite{DeSt04,HoZh17}. For these solutions, the driving noise is given in advance. A weaker concept is that of martingale solutions, where the stochastic basis is not known a priori but is given as part of the solution. Existence proofs of such solutions to nonlinear stochastic evolution equations can be found in \cite{BrGa99,ChGo95}. Stochastic reaction-diffusion equations are a special class of evolution equations, and they have been investigated in many papers starting from the 1980s \cite{Fla91,FoMi86}. There are fewer results on {\em systems} of stochastic reaction-diffusion equations. In \cite{Cer03}, the existence and uniqueness of mild solutions with Lipschitz continuous multiplicative noise was shown. The result was generalized in \cite{Kun15} to H\"older continuous multiplicative noise. The existence of maximal pathwise solutions to stochastic reaction-diffusion systems with polynomial reaction terms was proved in \cite{NgPh16}. 
More general quasilinear systems were investigated recently in \cite{KuNe18}, proving the existence of local pathwise mild solutions, including for the Shigesada-Kawasaki-Teramoto cross-diffusion system. The local-in-time results are not surprising since, even in the deterministic case, certain reaction terms may lead to finite-time blow-up of solutions. The work \cite{MaMa02} also analyzes population systems and provides the existence of pathwise unique solutions, but only for two species and for Lipschitz continuous nonlinearities. To our knowledge, the population model \eqref{1.eq} with coefficients \eqref{1.A} has not been studied in the literature. In this paper, we prove the existence of global martingale solutions using the techniques of \cite{BrMo14,BrOn10}. We show that the solutions are nonnegative under a natural condition on the operators $\sigma_{ij}(u)$ using the stochastic maximum principle of \cite{CPT16}. Since even the uniqueness of weak solutions to the deterministic analog of \eqref{1.eq}-\eqref{1.A} is not known (see the partial result in \cite{JuZa16}), we cannot expect to obtain pathwise unique strong solutions. \subsection{Stochastic framework and main results}\label{sec.frame} Let $(\Omega,{\mathcal F},\mathbb{P})$ be a probability space endowed with a complete right-continuous filtration $\mathbb{F}=({\mathcal F}_t)_{t\ge 0}$ and let $H$ be a Hilbert space. The space $L^2({\mathcal O})$ is the vector space of all square-integrable functions $u:{\mathcal O}\to\R$ with the inner product $(\cdot,\cdot)_{L^2({\mathcal O})}$. We fix a Hilbert basis $(e_k)_{k\in\N}$ of $L^2({\mathcal O})$. The space $L^2(\Omega;H)$ consists of all $H$-valued random variables $u$ with $$ \mathbb{E}\|u\|_H^2 := \int_\Omega\|u(\omega)\|_H^2 \mathbb{P}(d\omega) < \infty. $$ Furthermore, the space $H^1({\mathcal O})$ contains all functions $u\in L^2({\mathcal O})$ such that the distributional derivatives $\pa u/\pa x_1,\ldots,\pa u/\pa x_d$ belong to $L^2({\mathcal O})$. 
Let $Y$ be a separable Hilbert space with orthonormal basis $(\eta_k)_{k\in\N}$. We denote by $$ {\mathcal L}_2(Y;L^2({\mathcal O})) = \bigg\{L:Y\to L^2({\mathcal O})\mbox{ linear continuous: } \sum_{k=1}^\infty\|L\eta_k\|_{L^2({\mathcal O})}^2<\infty\bigg\} $$ the space of Hilbert-Schmidt operators from $Y$ to $L^2({\mathcal O})$, endowed with the norm $$ \|L\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 := \sum_{k=1}^\infty\|L\eta_k\|_{L^2({\mathcal O})}^2. $$ Let $(\beta_{jk})_{j=1,\cdots,n,\, k\in\N}$ be a sequence of independent one-dimensional Brownian motions and, for $j=1,\ldots,n$, let $W_j(x,t,\omega)=\sum_{k\in\N}\eta_k(x)\beta_{jk}(t,\omega)$ be a cylindrical Brownian motion. If $Y_0\supset Y$ is a second auxiliary Hilbert space such that the embedding $Y\ni u\mapsto u\in Y_0$ is Hilbert-Schmidt, the series $W_j=\sum_{k\in\N}\eta_k\beta_{jk}$ converges in $L^2(\Omega;Y_0)$. The multiplicative noise terms $\sigma := \sigma_{ij}(u,t,\omega):L^2({\mathcal O})\times[0,T]\times\Omega \to {\mathcal L}_2(Y; L^2({\mathcal O}))$ are assumed to be ${\mathcal B}(L^2({\mathcal O}))\otimes{\mathcal B}([0,T])\otimes{\mathcal F}$-${\mathcal B}({\mathcal L}_2(Y;L^2({\mathcal O})))$-measurable and $\mathbb{F}$-adapted with the property that there exists a constant $C_\sigma>0$ such that for all $u$, $v\in L^2({\mathcal O})$ and $i,j=1,\ldots,n$, \begin{equation}\label{1.sigma} \begin{aligned} \|\sigma_{ij}(u)\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 &\le C_\sigma\big(1+\|u\|^2_{L^2({\mathcal O})}\big), \\ \|\sigma_{ij}(u)-\sigma_{ij}(v)\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 &\le C_\sigma\|u-v\|^2_{L^2({\mathcal O})}. \end{aligned} \end{equation} Here, the $L^2({\mathcal O})$ norm of the function $u=(u_1,\ldots,u_n)$ is understood as $\|u\|_{L^2({\mathcal O})}^2=\sum_{i=1}^n\|u_i\|_{L^2({\mathcal O})}^2$, and we use this notation also for other vector-valued or tensor-valued functions. 
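A prototypical example satisfying \eqref{1.sigma} (a sketch; our analysis does not rely on this specific choice) is linear multiplicative noise of the form
$$ \sigma_{ij}(u)\eta_k = \lambda_k\delta_{ij}u_i, \quad k\in\N, $$
with real numbers $\lambda_k$ satisfying $\sum_{k\in\N}\lambda_k^2<\infty$. Indeed, $\|\sigma_{ii}(u)\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 = \sum_{k\in\N}\lambda_k^2\|u_i\|_{L^2({\mathcal O})}^2$, which is bounded by a multiple of $\|u\|_{L^2({\mathcal O})}^2$, and the Lipschitz condition follows in the same way. This choice also fulfills condition \eqref{1.sigma2} below, which guarantees nonnegative densities.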
The expression $\sigma_{ij}(u)dW_j(t)$ formally means that \begin{equation}\label{1.sigmadW} \sigma_{ij}(u)dW_j(t) = \sum_{k,\ell\in\N}\sigma_{ij}^{k\ell}(u)e_\ell d\beta_{jk}(t), \quad\mbox{where } \sigma_{ij}^{k\ell}(u) := \big(\sigma_{ij}(u)\eta_k,e_\ell\big)_{L^2({\mathcal O})}. \end{equation} Next, we define our concept of solution. \begin{definition} Let $T>0$ be arbitrary. We say that the system $(\widetilde U,\widetilde W,\widetilde u)$ is a {\em global martingale solution} to \eqref{1.eq}-\eqref{1.A} if $\widetilde U=(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P},\widetilde\mathbb{F})$ is a stochastic basis with filtration $\widetilde\mathbb{F}=(\widetilde{\mathcal F}_t)_{t\in[0,T]}$, $\widetilde W$ is a cylindrical Wiener process, and $\widetilde u(t)=(\widetilde u_1(t),\ldots,\widetilde u_n(t))$ is an $\widetilde{\mathcal F}_t$-adapted stochastic process for all $t\in[0,T]$ such that for all $i=1,\ldots,n$, $$ \widetilde u_i\in L^2(\widetilde\Omega;C^0([0,T];L^2_w({\mathcal O})))\cap L^2(\widetilde\Omega;L^2(0,T;H^1({\mathcal O}))), $$ the law of $\widetilde u_i(0)$ is the same as for $u^0_i$, and $\widetilde u$ satisfies for all $\phi\in H^1({\mathcal O})$ and all $i=1,\ldots,n$, \begin{align*} (\widetilde u_i(t),\phi)_{L^2({\mathcal O})} &= (\widetilde u_{i}(0),\phi)_{L^2({\mathcal O})} - \sum_{j=1}^n\int_0^t\big\langle\diver\big(A_{ij}(\widetilde u(s)) \na\widetilde u_j(s)\big),\phi\big\rangle ds \\ &\phantom{xx}{}+ \bigg(\sum_{j=1}^n\int_0^t\sigma_{ij}(\widetilde u(s)) d\widetilde W_j(s),\phi\bigg)_{L^2({\mathcal O})}. \end{align*} \end{definition} The brackets $\langle\cdot,\cdot\rangle$ signify the duality pairing between $H^1({\mathcal O})'$ and $H^1({\mathcal O})$, i.e. $$ \big\langle\diver\big(A_{ij}(\widetilde u) \na\widetilde u_j\big),\phi\big\rangle = -\int_{\mathcal O} A_{ij}(\widetilde u) \na\widetilde u_j\cdot\na\phi dx. 
$$ As mentioned before, the new diffusion matrix $B$ in \eqref{1.B} is positive definite only under an additional assumption, namely either \begin{align} & \pi_ia_{ij} = \pi_ja_{ji}\mbox{ for }i\neq j\quad\mbox{and}\quad \alpha_1 := \min_{i=1,\ldots,n}\bigg(a_{ii}-\frac13\sum_{j=1,\,j\neq i}^n a_{ij} \bigg) > 0, \quad\mbox{or} \label{1.eta1} \\ & \alpha_2 := \min_{i=1,\ldots,n}\bigg(a_{ii} - \frac13\sum_{j=1,\,j\neq i}^n \big((a_{ij}+a_{ji}) - 2\sqrt{a_{ij}a_{ji}}\big)\bigg) > 0. \label{1.eta2} \end{align} Our main result is as follows. \begin{theorem}[Existence of a global martingale solution]\label{thm.ex} Let $T>0$ be arbitrary, $d\le 3$, and $u^0\in L^2({\mathcal O})$. Let $\sigma=(\sigma_{ij})_{i,j=1}^n$ with $\sigma_{ij}:L^2({\mathcal O})\times[0,T]\times\Omega\to{\mathcal L}_2(Y;L^2({\mathcal O}))$ satisfy \eqref{1.sigma}, let $a_{i0}>0$, $a_{ij}>0$ for $i,j=1,\ldots,n$, and let either \eqref{1.eta1} or \eqref{1.eta2} hold. Then there exists a global martingale solution to \eqref{1.eq}-\eqref{1.A}. If additionally $u_i^0\ge 0$ a.e.\ in ${\mathcal O}$, $\mathbb{P}$-a.s.\ for $i=1,\ldots,n$ and \begin{equation}\label{1.sigma2} \sum_{j=1}^n\|\sigma_{ij}(u)\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))} \le C\|u_i\|_{L^2({\mathcal O})}, \end{equation} then the population densities are nonnegative $\mathbb{P}$-a.s. \end{theorem} \begin{remark}[Discussion of the assumptions]\label{rem.assump}\rm (i) We can also choose random initial data; see Remark \ref{rem.ic}. We additionally need that $\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^p<\infty$ for $p=24/(4-d)$. This condition is needed to derive a higher-order estimate for $u_i$. It can be weakened to smaller values of $p$ by refining the Gagliardo-Nirenberg argument in the proof of Lemma \ref{lem.3}. (ii) Assumption \eqref{1.sigma} on $\sigma_{ij}$ seems to be quite natural. 
In \cite{Kun15}, the multiplicative noise was assumed to be only H\"older continuous, but the matrix $(\sigma_{ij}(u))$ was required to be diagonal, which we do not assume. Condition \eqref{1.sigma2} implies that $\sum_{j=1}^n\sigma_{ij}(u)=0$ if $u_i=0$, which is a natural condition to obtain the nonnegativity of $u_i$. (iii) The existence of solutions to the deterministic version of \eqref{1.eq}-\eqref{1.A} can be shown also for vanishing coefficients $a_{i0}=0$ \cite{CDJ18}. This seems not to be possible in the stochastic framework, since the condition $a_{i0}>0$ is needed to derive estimates for $\na u_i$ in $L^2({\mathcal O})$ $\mathbb{P}$-a.s., and these estimates are necessary to work in the Hilbert space $H^1({\mathcal O})$. (iv) Conditions \eqref{1.eta1} and \eqref{1.eta2} on the matrix coefficients are probably not optimal. For the local-in-time existence of solutions to the deterministic analog of \eqref{1.eq}, only the positivity of the real parts of the eigenvalues of $A(u)$ is needed \cite{Ama90}. This condition is generally not sufficient to ensure global solvability. A sufficient condition for global existence for general quasilinear evolution equations is provided by uniform $W^{1,p}({\mathcal O})$ bounds with $p>d$ \cite[Theorem 15.3]{Ama93}, but it is difficult to prove this regularity for solutions to cross-diffusion systems. Conditions \eqref{1.eta1} and \eqref{1.eta2} are currently the best available assumptions guaranteeing the existence of global solutions, even in the deterministic framework. \qed \end{remark} \subsection{Ideas of the proof of Theorem \ref{thm.ex}} We sketch the main steps of the proof. The full proof is given in section \ref{sec.ex}. First, we show the existence of a pathwise unique strong solution $u^{(N)}$ to a stochastic Galerkin approximation of \eqref{1.eq}-\eqref{1.A}, where $N\in\N$ is the Galerkin dimension. 
Estimates uniform in $N$ are derived from a stochastic version of the entropy inequality (which is made rigorous using It\^o's formula in section \ref{sec.unif}) \begin{align*} &\mathbb{E} H(u^{(N)}(t)) - \mathbb{E} H(u^{(N)}(0)) + \sum_{i,j=1}^n\mathbb{E}\int_0^t\int_{\mathcal O} \pi_iA_{ij}(u^{(N)})\na u_i^{(N)}\cdot \na u_j^{(N)} dxds \\ &\le \frac12\mathbb{E}\int_0^t\|P^{1/2}\Pi_N\sigma(u^{(N)})\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds + \sum_{i,j=1}^n\mathbb{E}\int_0^t\int_{\mathcal O}\pi_i u^{(N)}_i\sigma_{ij}(u^{(N)})dW_j(s) dx, \end{align*} where $\Pi_N$ is the projection on the finite-dimensional Galerkin space, $$ H(u) = \sum_{i=1}^n\int_{\mathcal O} \pi_i h_2(u_i)dx = \sum_{i=1}^n\frac{\pi_i}{2}\int_{\mathcal O} u_i^2 dx = \frac12\|P^{1/2}u\|_{L^2({\mathcal O})}^2 $$ is the quadratic entropy, and $P=\operatorname{diag}(\pi_1,\ldots,\pi_n)$, $P^{1/2}=\operatorname{diag}(\pi_1^{1/2},\ldots,\pi_n^{1/2})$. Since $PA(u^{(N)})$ is positive definite, the last term on the left-hand side yields uniform gradient estimates. The first integral on the right-hand side is bounded from above by the entropy $H$ (up to some additive constant), using assumption \eqref{1.sigma}, and the second integral is estimated using the Burkholder-Davis-Gundy inequality (see Proposition \ref{prop.bdg} in the appendix). Next, the tightness of the laws ${\mathcal L}(u^{(N)})$ in the topological space $Z_T$, defined in \eqref{3.Z} below, is proved by applying a criterion of Brze\'zniak, Goldys, and Jegaraj \cite{BGJ13}. Because of the low regularity properties of the solutions, $Z_T$ cannot be chosen to be a metric space and we cannot apply the Skorokhod representation theorem, as usually done in the literature (e.g.\ \cite{DHV16,NgPh16}). This problem is overcome by using Jakubowski's generalization of the Skorokhod theorem, which holds for topological spaces with a separating-points property (Theorem \ref{thm.skoro}). 
Then there exist a subsequence of $(u^{(N)})$ (not relabeled), another probability space, and random variables $(\widetilde u^{(N)},\widetilde W^{(N)})$ having the same law as $(u^{(N)},W)$ such that $(\widetilde u^{(N)},\widetilde W^{(N)})$ converges to $(\widetilde u,\widetilde W)$ in the topology of $Z_T$. Because of the gradient estimates, we conclude in particular the strong convergence $\widetilde u^{(N)}\to \widetilde u$ in $L^2({\mathcal O}\times(0,T))$ $\mathbb{P}$-a.s. This, together with further convergences resulting from the relative compactness in $Z_T$, allows us to pass to the limit $N\to\infty$ in the Galerkin approximation, showing that $(\widetilde u,\widetilde W)$ is a global martingale solution to \eqref{1.eq}. From the application viewpoint, we expect that the population densities $u_i(t)$ are nonnegative $\mathbb{P}$-a.s.\ if this holds initially. The problem is that, generally, maximum principle arguments cannot be applied to cross-diffusion systems. System \eqref{1.eq}, \eqref{1.A}, however, possesses a special structure. Indeed, we may write \eqref{1.eq} as $$ du_i - \diver\bigg(\bigg(a_{i0}+\sum_{k=1}^n a_{ik}u_k^2\bigg)\na u_i + u_iF_i[u]\bigg) = \sum_{j=1}^n\sigma_{ij}(u)dW_j(t), $$ where $F_i$ depends on $u_j$ and $\na u_j$ for $j\neq i$. The term $u_iF_i[u]$ can be interpreted as a drift term which vanishes if $u_i=0$. If we assume that $\sigma_{ij}(u)=0$ whenever $u_i=0$, then a maximum principle can be applied. More precisely, we employ the stochastic Stampacchia-type maximum principle due to Chekroun, Park, and Temam \cite{CPT16}. 
The idea is to regularize the test function $(\widetilde u_i^{(N)})^-=\max\{0,-\widetilde u_i^{(N)}\}$ by some smooth function $F_\eps(\widetilde u_i^{(N)})$, to apply the It\^o formula to $\mathbb{E}\int F_\eps(\widetilde u_i^{(N)})dx$, and then to pass to the limits $N\to\infty$ and $\eps\to 0$, leading to the inequality $$ \mathbb{E}\|\widetilde u_i(t)^-\|_{L^2({\mathcal O})}^2 \le \mathbb{E}\int_0^t\|\widetilde u_i(s)^-\|_{L^2({\mathcal O})}^2 ds. $$ Gronwall's lemma shows that $\widetilde u_i(t)^-=0$ a.e.\ in ${\mathcal O}$, which proves the nonnegativity of $\widetilde u_i$ $\mathbb{P}$-a.s. In order to make the manuscript accessible also to non-experts in stochastic partial differential equations, we recall in Appendix \ref{app} some known results from stochastic analysis used in this paper. As the tightness criterion of \cite{BGJ13} is probably less known, we present the details directly in the proof of Theorem \ref{thm.ex} in section \ref{sec.tight}. \section{Proof of the existence theorem}\label{sec.ex} \subsection{An algebraic property}\label{sec.alg} We recall the following result on the positive definiteness of the new diffusion matrix, taken from \cite[Lemma 3]{CDJ18} by choosing $s=2$. \begin{lemma}\label{lem.pd} Let $\pi_1,\ldots,\pi_n>0$ and $P=\operatorname{diag}(\pi_1,\ldots,\pi_n)\in\R^{n\times n}$. Let either condition \eqref{1.eta1} or \eqref{1.eta2} hold. Then $PA(u)$ is positive definite, i.e., for any $z=(z_1,\ldots,z_n)\in\R^n$ and $u=(u_1,\ldots,u_n)\in\R^n$, $$ \sum_{i,j=1}^n \pi_iA_{ij}(u)z_iz_j \ge \sum_{i=1}^n\pi_i a_{i0}z_i^2 + 3\alpha\sum_{i=1}^n \pi_i u_i^2z_i^2, $$ where $\alpha=\alpha_1$ if \eqref{1.eta1} holds and $\alpha=\alpha_2$ if \eqref{1.eta2} is satisfied. In the latter case, we may choose $\pi_i=1$ for all $i=1,\ldots,n$. 
\end{lemma} \subsection{Stochastic Galerkin approximation}\label{sec.gal} We fix an orthonormal basis $(e_k)_{k\ge 1}$ of $L^2({\mathcal O})$ and a number $N\in\N$ and set $H_N=\operatorname{span}\{e_1,\ldots,e_N\}$. We introduce the projection operator $\Pi_N:L^2({\mathcal O})\to H_N$, $$ \Pi_N(v) = \sum_{i=1}^N(v,e_i)_{L^2({\mathcal O})}e_i, \quad v\in L^2({\mathcal O}). $$ The approximate problem is the following system of stochastic differential equations, \begin{align} & du_i^{(N)} - \Pi_N\diver\bigg(\sum_{j=1}^n A_{ij}(u^{(N)})\na u_j^{(N)} \bigg) dt = \Pi_N\bigg(\sum_{j=1}^n\sigma_{ij}(u^{(N)})\bigg)dW_j(t), \label{2.approx1} \\ & u_i^{(N)}(0) = \Pi_N(u_i^0), \quad i=1,\ldots,n. \label{2.approx2} \end{align} \begin{lemma}\label{lem.ex} Let assumption \eqref{1.eta1} or \eqref{1.eta2} hold. Then there exists a pathwise unique strong solution to \eqref{2.approx1}-\eqref{2.approx2}. \end{lemma} \begin{proof} We apply Theorem \ref{thm.sde} in Appendix \ref{app} to \begin{equation}\label{2.pi} P\, du = a(u)dt + b(u)dW(t), \quad t>0, \quad u(0)=\Pi_N (u^0), \end{equation} where \begin{align*} & a=(a_1,\ldots,a_n):H_N\to\R^n, \quad a_i(u)=\Pi_N\diver\bigg(\sum_{j=1}^n \pi_iA_{ij}(u)\na u_j\bigg), \\ & b_{ij}:H_N \to{\mathcal L}_2(Y;H_N), \quad b_{ij}(u) = \pi_i\Pi_N\sigma_{ij}(u), \end{align*} and the numbers $\pi_1,\ldots,\pi_n>0$ are given by \eqref{1.eta1} (we may take $\pi_i=1$ if \eqref{1.eta2} holds). Observe that this problem is equivalent to \eqref{2.approx1} after componentwise division by $\pi_i$. It is sufficient to verify assumptions \eqref{2.ab1}-\eqref{2.ab2}. Let $R>0$, $T>0$, and $\omega\in\Omega$ and let $u$, $v\in H_N$ with $\|u\|_{H_N}$, $\|v\|_{H_N}\le R$. 
Then, using the positive definiteness of $PA$, according to Lemma \ref{lem.pd}, and the equivalence of norms on $H_N$, \begin{align*} (a(u)-a(v),u-v)_{H_N} &= -\sum_{i,j=1}^n\int_{{\mathcal O}}\pi_iA_{ij}(u)\na(u_i-v_i)\cdot\na(u_j-v_j)dx \\ &\phantom{xx}{} + \sum_{i,j=1}^n\int_{{\mathcal O}}\pi_i(A_{ij}(u)-A_{ij}(v))\na(u_i-v_i)\cdot\na v_j dx \\ &\le C\|A(u)-A(v)\|_{L^2({\mathcal O})}\|\na(u-v)\|_{L^2({\mathcal O})}\|\na v\|_{L^\infty({\mathcal O})} \\ &\le C(N,R)\|u-v\|_{H_N}^2, \end{align*} where the constant $C(N,R)>0$ depends on $N$ and $R$. In the last step we have used the fact that $A_{ij}(u)$ is locally Lipschitz continuous. Hence, together with assumption \eqref{1.sigma} on $\sigma$, the local weak monotonicity condition \eqref{2.ab1} holds. To verify the weak coercivity condition \eqref{2.ab2}, we take $u\in H_N$ with $\|u\|_{H_N}\le R$ and employ again the positive definiteness of $PA$: \begin{align*} (a(u),u)_{H_N} + \|b(u)\|_{{\mathcal L}_2(Y;H_N)}^2 &= -\sum_{i,j=1}^n\int_{{\mathcal O}}\pi_iA_{ij}(u)\na u_i\cdot\na u_j dx + \|P^{1/2}\sigma(u)\|_{{\mathcal L}_2(Y;H_N)}^2 \\ &\le C_\sigma(1+\|u\|_{H_N}^2), \end{align*} where we recall that $P^{1/2}=\operatorname{diag}(\pi_1^{1/2},\ldots,\pi_n^{1/2})$. Therefore, the lemma follows after applying Theorem \ref{thm.sde}. \end{proof} \subsection{Uniform estimates}\label{sec.unif} We prove some energy-type estimates uniform in $N$. \begin{lemma}[A priori estimates]\label{lem.est1} Let $T>0$ and let $u^{(N)}$ be the pathwise unique strong solution to \eqref{2.approx1}-\eqref{2.approx2} on $[0,T]$. 
Then there exists a constant $C_1>0$ which depends on $\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^2$, $C_\sigma$, and $T$ but not on $N$ such that \begin{align} \sup_{N\in\N}\mathbb{E}\bigg(\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^2\bigg) &\le C_1, \label{3.est1} \\ \sup_{N\in\N}\mathbb{E}\bigg(\int_0^T\|\na u^{(N)}\|_{L^2({\mathcal O})}^2 dt\bigg) &\le C_1, \label{3.est2} \\ \alpha\sup_{N\in\N}\mathbb{E}\bigg(\int_0^T\big\|\na (u^{(N)})^2\big\|_{L^2({\mathcal O})}^2 dt\bigg) &\le C_1, \label{3.est3} \end{align} and $\alpha=\alpha_1$ if \eqref{1.eta1} holds, $\alpha=\alpha_2$ if \eqref{1.eta2} holds. \end{lemma} We remark that \eqref{3.est1} shows that $(u^{(N)})$ is bounded in $L^2({\mathcal O}\times(0,T)\times\Omega)$, so together with \eqref{3.est2}, we infer a uniform bound for $u^{(N)}$ in $L^2((0,T)\times\Omega;H^1({\mathcal O}))$. \begin{proof} We apply the It\^o formula (Theorem \ref{thm.ito}) to the process $X(t)=u^{(N)}(t)$, where $u^{(N)}$ solves \eqref{2.pi}: \begin{align} \frac12\|&P^{1/2}u^{(N)}(t)\|_{L^2({\mathcal O})}^2 - \frac12\|\Pi_N(P^{1/2}u^0)\|_{L^2({\mathcal O})}^2 \nonumber \\ &= \sum_{i,j=1}^n\int_0^t\big(u_i^{(N)}(s), \Pi_N\diver(\pi_i A_{ij}(u^{(N)}(s)) \na u_j^{(N)}(s))\big)_{L^2({\mathcal O})}ds \nonumber \\ &\phantom{xx}{}+ \frac12\int_0^t\big\|\Pi_N(P^{1/2}\sigma(u^{(N)}(s))) \big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2ds \nonumber \\ &\phantom{xx}{} + \sum_{i,j=1}^n\int_0^t\big(u_i^{(N)}(s),\Pi_N(\pi_i\sigma_{ij}(u^{(N)}(s)))dW_j(s) \big)_{L^2({\mathcal O})} \nonumber \\ &= -\sum_{i,j=1}^n\int_0^t\big(\na u_i^{(N)}(s),\pi_iA_{ij}(u^{(N)}(s)) \na u_j^{(N)}(s)\big)_{L^2({\mathcal O})}ds \nonumber \\ &\phantom{xx}{}+ \frac12\int_0^t\big\|\Pi_N(P^{1/2}\sigma(u^{(N)}(s))) \big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2ds \nonumber \\ &\phantom{xx}{}+ \sum_{i,j=1}^n\int_0^t\pi_i \big(u_i^{(N)}(s), \sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}. 
\label{2.aux1} \end{align} The first term on the right-hand side can be estimated by using Lemma \ref{lem.pd}: \begin{align*} \sum_{i,j=1}^n&\big(\na u_i^{(N)}(s),\pi_iA_{ij}(u^{(N)}(s)) \na u_j^{(N)}\big)_{L^2({\mathcal O})} \\ &\ge \sum_{i=1}^n\pi_i a_{i0}\int_{{\mathcal O}}|\na u_i^{(N)}|^2dx + 3\alpha\sum_{i=1}^n\pi_i\int_{{\mathcal O}}|u_i^{(N)}|^2|\na u_i^{(N)}|^2 dx \\ &\ge C\|\na u^{(N)}\|_{L^2({\mathcal O})}^2 + C\alpha\|\na (u^{(N)})^2\|_{L^2({\mathcal O})}^2, \end{align*} where $(u^{(N)})^2=((u_1^{(N)})^2,\ldots,(u_n^{(N)})^2)$ and here and in the following, $C>0$ is a generic constant independent of $N$ with values changing from line to line. Therefore, \eqref{2.aux1} becomes \begin{align} \frac12\|&P^{1/2}u^{(N)}(t)\|_{L^2({\mathcal O})}^2 + C\int_0^t\|\na u^{(N)}(s)\|_{L^2({\mathcal O})}^2ds + C\alpha\int_0^t\|\na(u^{(N)}(s)^2)\|_{L^2({\mathcal O})}^2ds \nonumber \\ &\le \frac12\|P^{1/2}u^0\|_{L^2({\mathcal O})}^2 + \frac12\int_0^t\big\|P^{1/2}\sigma(u^{(N)}(s))\big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds \label{3.aux2} \\ &\phantom{xx}{}+ \sum_{i,j=1}^n\int_0^t\pi_i\big(u_i^{(N)}(s), \sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}. \nonumber \end{align} For the second integral on the right-hand side, we take into account assumption \eqref{1.sigma}: \begin{align*} \frac12\int_0^t&\big\|P^{1/2}\sigma(u^{(N)}(s))\big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds \le C\int_0^t\big\|\sigma(u^{(N)}(s))\big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds \\ &\le C\int_0^t\big(1 + \|u^{(N)}\|_{L^2({\mathcal O})}^2\big)ds = Ct + C\int_0^t\|u^{(N)}\|_{L^2({\mathcal O})}^2 ds. \end{align*} To estimate the last integral in \eqref{3.aux2}, we observe that, since the process $u^{(N)}$ is $H_N$-valued and a solution to \eqref{2.approx1}, the process $$ \mu^{(N)}(t) = \sum_{i,j=1}^n\int_0^t\pi_i\big(u_i^{(N)}, \sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}, \quad t\in[0,T], $$ is an ${\mathcal F}_t$-martingale. 
Then, by the Burkholder-Davis-Gundy inequality (see Proposition \ref{prop.bdg}), we have \begin{align*} \mathbb{E}\bigg(\sup_{t\in(0,T)}&\bigg|\sum_{i,j=1}^n\int_0^t\pi_i\big(u_i^{(N)}, \sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}\bigg|\bigg) \\ &\le C\mathbb{E}\bigg(\int_0^T\|u^{(N)}(s)\|_{L^2({\mathcal O})}^2 \big\|\sigma(u^{(N)}(s))\big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds\bigg)^{1/2}, \end{align*} and by the H\"older inequality, assumption \eqref{1.sigma} on $\sigma$, and the Young inequality, we obtain \begin{align} \mathbb{E}&\sup_{t\in(0,T)}\bigg|\sum_{i,j=1}^n\int_0^t\pi_i\big(u_i^{(N)}, \sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}\bigg| \nonumber \\ &\le C\mathbb{E}\bigg\{\bigg(\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^2\bigg)^{1/2} C_\sigma^{1/2} \bigg(\int_0^T\big(1+\|u^{(N)}\|_{L^2({\mathcal O})}^2\big)ds\bigg)^{1/2}\bigg\} \nonumber \\ &\le \frac14\mathbb{E}\bigg(\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^2\bigg) + C\bigg(T + \mathbb{E}\int_0^T\|u^{(N)}\|_{L^2({\mathcal O})}^2 ds\bigg). \label{3.dW} \end{align} We take in \eqref{3.aux2} the supremum over $t\in(0,T)$ and the mathematical expectation, use the inequality $\|P^{1/2}u^{(N)}\|_{L^2({\mathcal O})}\ge C\|u^{(N)}\|_{L^2({\mathcal O})}$ for some constant $C>0$ only depending on $\pi_1,\ldots,\pi_n$, and apply the previous estimates to conclude that \begin{align} \frac14\mathbb{E}\bigg(&\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^2\bigg) + C\mathbb{E}\int_0^T\|\na u^{(N)}(s)\|_{L^2({\mathcal O})}^2 ds + C\alpha\mathbb{E}\int_0^T\|\na(u^{(N)}(s)^2)\|_{L^2({\mathcal O})}^2ds \nonumber \\ &\le CT + C\mathbb{E}\big(\|u^0\|_{L^2({\mathcal O})}^2\big) + C\int_0^T\mathbb{E}\bigg(\sup_{\tau\in[0,s]}\|u^{(N)}(\tau)\|_{L^2({\mathcal O})}^2\bigg)ds.
\label{3.aux3} \end{align} We infer from the Gronwall lemma that $$ \sup_{N\in\N}\mathbb{E}\bigg(\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^2\bigg) \le C, $$ where $C>0$ depends on $\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^2$, $C_\sigma$, and $T$. This proves \eqref{3.est1}. Inserting the previous estimate into \eqref{3.aux3}, we immediately deduce estimates \eqref{3.est2} and \eqref{3.est3}. \end{proof} We need a higher-order moment estimate, which is proved in the following lemma. \begin{lemma}\label{lem.estp} Let $T>0$ and let $u^{(N)}$ be the pathwise unique strong solution to \eqref{2.approx1}-\eqref{2.approx2} on $[0,T]$. Furthermore, let $p>2$ and $\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^p<\infty$. Then there exists a constant $C_2>0$ which depends on $p$, $\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^p$, $C_\sigma$, and $T$ but not on $N$ such that \begin{equation}\label{3.estp} \sup_{N\in\N}\mathbb{E}\bigg(\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^p\bigg) \le C_2. \end{equation} \end{lemma} \begin{proof} We take the supremum over $t\in(0,T)$ in \eqref{3.aux2} and neglect the second and third terms on the left-hand side. Then, raising both sides to the power $p/2$ and applying the H\"older inequality, we find that \begin{align*} \sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^p &\le C\|u^0\|_{L^2({\mathcal O})}^p + CT^{p/2-1}\int_0^T\big\|\sigma(u^{(N)}(s))\big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^p ds \\ &\phantom{xx}{}+ C\bigg(\sup_{t\in(0,T)}\sum_{i,j=1}^n\int_0^t\big(u_i^{(N)}(s), \pi_i\sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}\bigg)^{p/2}.
\end{align*} Taking the mathematical expectation and using assumption \eqref{1.sigma}, it follows that \begin{align} \mathbb{E}\bigg(&\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^p\bigg) \le C + C\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^p + C\mathbb{E}\int_0^T\|u^{(N)}(s)\|_{L^2({\mathcal O})}^{p} ds \nonumber \\ &\phantom{xx}{}+ C\mathbb{E}\bigg(\sup_{t\in(0,T)}\sum_{i,j=1}^n\int_0^t\big(u_i^{(N)}(s), \pi_i\sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}\bigg)^{p/2}. \label{3.aux4} \end{align} For the last term, we use the Burkholder-Davis-Gundy and Young inequalities, \begin{align*} \mathbb{E}\bigg(&\sup_{t\in(0,T)}\sum_{i,j=1}^n\int_0^t\big(u_i^{(N)}(s), \pi_i\sigma_{ij}(u^{(N)}(s))dW_j(s)\big)_{L^2({\mathcal O})}\bigg)^{p/2} \\ &\le C\mathbb{E}\bigg(\int_0^T\|u^{(N)}(s)\|_{L^2({\mathcal O})}^2 \big\|\sigma(u^{(N)}(s))\big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds\bigg)^{p/4} \\ &\le C\mathbb{E}\bigg\{\bigg(\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^2\bigg)^{p/4} C_\sigma^{p/4}\bigg(\int_0^T\big(1+\|u^{(N)}\|_{L^2({\mathcal O})}^2\big)ds \bigg)^{p/4}\bigg\} \\ &\le C\mathbb{E}\bigg\{\bigg(\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^p\bigg)^{1/2} \bigg(\int_0^T\big(1+\|u^{(N)}\|_{L^2({\mathcal O})}^p\big)ds\bigg)^{1/2}\bigg\} \\ &\le \frac12\mathbb{E}\bigg(\sup_{t\in[0,T]}\|u^{(N)}(t)\|_{L^2({\mathcal O})}^p\bigg) + C\mathbb{E}\int_0^T\big(1+\|u^{(N)}\|_{L^2({\mathcal O})}^p\big)ds. \end{align*} Inserting this estimate into \eqref{3.aux4} and observing that the first term on the right-hand side of the previous inequality can be absorbed by the first term on the left-hand side of \eqref{3.aux4}, we infer that \begin{align*} \mathbb{E}\bigg(\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^p\bigg) &\le C + C\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^p + C\mathbb{E}\int_0^T\sup_{\tau\in(0,s)}\|u^{(N)}(\tau)\|_{L^2({\mathcal O})}^{p} ds \\ &\phantom{xx}{}+ C\mathbb{E}\int_0^T\big(1+\|u^{(N)}\|_{L^2({\mathcal O})}^p\big)ds.
\end{align*} Then the Gronwall inequality implies that $$ \mathbb{E}\bigg(\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^p\bigg) \le C, $$ which concludes the proof. \end{proof} The previous lemma allows us to slightly improve the regularity of $u^{(N)}$. \begin{lemma}\label{lem.3} Let $T>0$ and let $u^{(N)}$ be the pathwise unique strong solution to \eqref{2.approx1}-\eqref{2.approx2} on $[0,T]$. Then $(u_i^{(N)})^2\in L^3((0,T)\times\Omega;L^2({\mathcal O}))$ for $i=1,\ldots,n$ and, for some constant $C_3>0$, $$ \mathbb{E}\int_0^T\|(u^{(N)})^2\|_{L^2({\mathcal O})}^3 dt \le C_3, $$ where $(u^{(N)})^2$ is the vector with the components $(u_i^{(N)})^2$ for $i=1,\ldots,n$. \end{lemma} \begin{proof} By the Gagliardo-Nirenberg inequality with $\theta=d/(2+d)$ and the H\"older inequality with $q=2(2+d)/(3d)$ and $q'=2(2+d)/(4-d)$ (here, we need that $d\le 3$), we find that \begin{align*} \mathbb{E}\int_0^T\|(u^{(N)})^2\|_{L^2({\mathcal O})}^3 dt &\le C\mathbb{E}\int_0^T\|(u^{(N)})^2\|_{H^1({\mathcal O})}^{3d/(2+d)} \|(u^{(N)})^2\|_{L^1({\mathcal O})}^{6/(2+d)}dt \\ &\le C\mathbb{E}\bigg(\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^{12/(2+d)} \int_0^T\|(u^{(N)})^2\|_{H^1({\mathcal O})}^{3d/(2+d)}dt\bigg) \\ &\le C\bigg\{\mathbb{E}\bigg(\sup_{t\in(0,T)}\|u^{(N)}\|_{L^2({\mathcal O})}^{24/(4-d)} \bigg)\bigg\}^{1/q'} \bigg\{\mathbb{E}\int_0^T\|(u^{(N)})^2\|_{H^1({\mathcal O})}^2 dt\bigg\}^{1/q}. \end{align*} The first factor is uniformly bounded by \eqref{3.estp} with $p=24/(4-d)$ and the second factor is uniformly bounded as a consequence of \eqref{3.est1} and \eqref{3.est2}. \end{proof} \subsection{Tightness}\label{sec.tight} The aim of this subsection is to prove that the sequence of laws of $u^{(N)}$ is tight on a certain topological space.
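The exponent bookkeeping in the proof of Lemma \ref{lem.3} above can be checked mechanically. The following Python sketch is only a side check of ours, not part of the argument: it verifies, for the admissible dimensions $d\le 3$, the H\"older conjugacy of $q$ and $q'$, the interpolation relation $3\theta q=2$, and the moment exponent $p=24/(4-d)$ supplied by Lemma \ref{lem.estp}.

```python
from fractions import Fraction as F

# Side check of the exponents in the proof of Lemma lem.3 (not part of the proof).
for d in (1, 2, 3):                      # admissible space dimensions
    theta = F(d, 2 + d)                  # Gagliardo-Nirenberg interpolation exponent
    q = F(2 * (2 + d), 3 * d)            # Hoelder exponent in the expectation
    qp = F(2 * (2 + d), 4 - d)           # conjugate Hoelder exponent
    assert 1 / q + 1 / qp == 1           # q and q' are indeed conjugate
    assert 3 * theta * q == 2            # the H^1 factor appears with total power 2
    assert F(6, 2 + d) * 2 * qp == F(24, 4 - d)  # sup-moment p = 24/(4-d)
print("exponent bookkeeping consistent for d = 1, 2, 3")
```

For $d=3$ this gives $p=24$, the highest moment of the initial datum used via \eqref{3.estp}.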
For this, we introduce the following spaces: \begin{itemize} \item $C^0([0,T];H^3({\mathcal O})')$ is the space of continuous functions $u:[0,T]\to H^3({\mathcal O})'$ with the topology $\mathcal{T}_1$ induced by the norm $\|u\|_{C^0([0,T];H^3({\mathcal O})')}=\sup_{t\in(0,T)}\|u(t)\|_{H^3({\mathcal O})'}$; \item $L^2_w(0,T;H^1({\mathcal O}))$ is the space $L^2(0,T;H^1({\mathcal O}))$ with the weak topology $\mathcal{T}_2$; \item $L^2(0,T;L^2({\mathcal O}))$ is the space of square integrable functions $u:(0,T)\to L^2({\mathcal O})$ with the topology $\mathcal{T}_3$ induced by the norm $\|\cdot\|_{L^2(0,T;L^2({\mathcal O}))}$; \item $C^0([0,T];L^2_w({\mathcal O}))$ is the space of weakly continuous functions $u:[0,T]\to L^2({\mathcal O})$ endowed with the weakest topology $\mathcal{T}_4$ such that for all $h\in L^2({\mathcal O})$, the mappings $$ C^0([0,T];L_w^2({\mathcal O}))\to C^0([0,T];\R), \quad u\mapsto (u(\cdot),h)_{L^2({\mathcal O})}, $$ are continuous. \end{itemize} In particular, convergence in $C^0([0,T];L^2_w({\mathcal O}))$ means the following: $u_n\to u$ in $C^0([0,T];$ $L^2_w({\mathcal O}))$ as $n\to\infty$ holds if and only if $$ \lim_{n\to\infty}\sup_{t\in(0,T)}|(u_n(t)-u(t),h)_{L^2({\mathcal O})}| = 0 \quad\mbox{for all }h\in L^2({\mathcal O}). $$ We need another space: Let $r>0$ and $B:=\{u\in L^2({\mathcal O}):\|u\|_{L^2({\mathcal O})}\le r\}$. Let $q$ be the metric compatible with the weak topology on $B$. We define the following subspace of $C^0([0,T];L^2_w({\mathcal O}))$: \begin{equation}\label{3.C0Bw} \begin{aligned} C^0([0,T];B_w) &= \mbox{set of all weakly continuous functions } u:[0,T]\to L^2({\mathcal O})\\[-1mm] &\phantom{xx}\mbox{ such that } \textstyle\sup_{t\in(0,T)}\|u(t)\|_{L^2({\mathcal O})}\le r. \end{aligned} \end{equation} This space is metrizable with the metric $q^*(u,v)=\sup_{t\in(0,T)}q(u(t),v(t))$ \cite[Theorem 3.29]{Bre11}. 
By the Banach-Alaoglu theorem, $B_w$ is compact \cite[Theorem 3.16]{Bre11}, so $(C^0([0,T];B_w),q^*)$ is a complete metric space. The following lemma ensures that any sequence in $C^0([0,T];B)$ which converges in some space $C^0([0,T];U')$ with $U\subset H^1({\mathcal O})$ is also convergent in $C^0([0,T];B_w)$. We apply this lemma with $U=H^3({\mathcal O})$. \begin{lemma}[Lemma 2.1 in \cite{BrMo14}]\label{lem.aux} Let $u_n:[0,T]\to L^2({\mathcal O})$ ($n\in\N$) be functions satisfying \begin{align*} & \sup_{n\in\N}\sup_{t\in(0,T)}\|u_n(t)\|_{L^2({\mathcal O})}\le r, \\ & u_n\to u \quad\mbox{in }C^0([0,T];U') \quad\mbox{as }n\to\infty, \end{align*} where $U\subset H^1({\mathcal O})$ and $U'$ is the dual space of $U$. Then $u_n$, $u\in C^0([0,T];B_w)$ and $u_n\to u$ in $C^0([0,T];B_w)$ as $n\to\infty$. \end{lemma} We define the space \begin{equation}\label{3.Z} Z_T:=C^0([0,T];H^3({\mathcal O})')\cap L_w^2(0,T;H^1({\mathcal O}))\cap L^2(0,T;L^2({\mathcal O})) \cap C^0([0,T];L_w^2({\mathcal O})), \end{equation} endowed with the topology $\mathcal{T}$ which is the maximum of the topologies $\mathcal{T}_i$, $i=1,2,3,4$, of the corresponding spaces. On this space, we can formulate a compactness criterion which is analogous to the result due to Mikulevi\v{c}ius and Rozovskii \cite{MiRo05}. \begin{lemma}[Compactness criterion]\label{lem.comp} Let $(Z_T,\mathcal{T})$ be as defined in \eqref{3.Z}. A set $K\subset Z_T$ is $\mathcal{T}$-relatively compact if the following three conditions hold: \begin{enumerate} \item $\sup_{u\in K}\sup_{t\in(0,T)}\|u(t)\|_{L^2({\mathcal O})}<\infty$, \item $K$ is bounded in $L^2(0,T;H^1({\mathcal O}))$, and \item $\lim_{\delta\to 0}\sup_{u\in K}\sup_{s,t\in(0,T),\,|s-t|\le\delta} \|u(s)-u(t)\|_{H^3({\mathcal O})'}=0$. \end{enumerate} \end{lemma} We refer to \cite[Lemma 2.3]{BrMo14} for a proof.
The result follows since the embeddings $H^1({\mathcal O})\hookrightarrow L^2({\mathcal O})\hookrightarrow H^3({\mathcal O})'$ are continuous and the embedding $H^1({\mathcal O})\hookrightarrow L^2({\mathcal O})$ is compact, so that we can apply Dubinskii's theorem \cite{Dub65} (see also \cite{Sim87}) to a sequence $(u_n)_{n\in\N}\subset K$ to conclude that there exists a subsequence of $(u_n)_{n\in\N}$ that is convergent in $C^0([0,T];H^3({\mathcal O})')$. By Lemma \ref{lem.aux}, this subsequence is also convergent in $C^0([0,T];B_w)$. The compactness criterion in Lemma \ref{lem.comp} allows for a proof of the following tightness criterion taken from \cite[Corollary 2.6]{BrMo14}. \begin{theorem}[Tightness criterion]\label{thm.tight} Let $H$, $V$, and $U$ be separable Hilbert spaces such that the embeddings $U\hookrightarrow V\hookrightarrow H$ are dense and continuous and the embedding $V\hookrightarrow H$ is compact. Furthermore, let $(X_n)_{n\in\N}$ be a sequence of continuous $\mathbb{F}$-adapted $U'$-valued stochastic processes such that \begin{enumerate} \item there exists $C>0$ such that $$ \sup_{n\in\N}\mathbb{E}\bigg(\sup_{t\in(0,T)}\|X_n(t)\|_H^2\bigg) \le C, $$ \item there exists $C>0$ such that $$ \sup_{n\in\N}\mathbb{E}\bigg(\int_0^T\|X_n(t)\|_V^2 dt\bigg) \le C, $$ \item $(X_n)_{n\in\N}$ satisfies the Aldous condition in $U'$ (see Definition \ref{def.ald} in the appendix). \end{enumerate} Furthermore, let $\mathbb{P}_n$ be the law of $X_n$ on $Z_T$. Then $(\mathbb{P}_n)_{n\in\N}$ is tight on $Z_T$. \end{theorem} The main result of this subsection is the tightness of the laws ${\mathcal L}(u^{(N)})$ of the solutions $u^{(N)}$ to \eqref{2.approx1}-\eqref{2.approx2}. \begin{lemma}\label{lem.ut} The set of measures $\{{\mathcal L}(u^{(N)}):N\in\N\}$ is tight on $(Z_T,\mathcal{T})$. \end{lemma} \begin{proof} The idea of the proof is to apply Theorem \ref{thm.tight} with $U=H^3({\mathcal O})$, $V=H^1({\mathcal O})$, and $H=L^2({\mathcal O})$.
In view of estimates \eqref{3.est1} and \eqref{3.est2}, conditions (1) and (2) of Theorem \ref{thm.tight} are fulfilled. It remains to show that $(u^{(N)})_{N\in\N}$ satisfies the Aldous condition in $H^3({\mathcal O})'$. To this end, let $(\tau_N)_{N\in\N}$ be a sequence of $\mathbb{F}$-stopping times such that $0\le\tau_N\le T$. Let $t\in[0,T]$ and $\phi\in H^3({\mathcal O})$. Then \eqref{2.approx1} can be written as \begin{align} \langle u_i^{(N)}(t),\phi\rangle &= \langle \Pi_N(u_i^0),\phi\rangle - \sum_{j=1}^n\int_0^t\big\langle A_{ij}(u^{(N)})\na u_j^{(N)},\na\Pi_N\phi \big\rangle ds \nonumber \\ &\phantom{xx}{} + \sum_{j=1}^n\bigg\langle\int_0^t\Pi_N\big(\sigma_{ij}(u^{(N)}(s))\big)dW_j(s),\phi \bigg\rangle \nonumber \\ &=: J_1^{(N)} + J_2^{(N)}(t) + J_3^{(N)}(t), \label{3.J} \end{align} where $\langle\cdot,\cdot\rangle$ is the dual pairing between $H^3({\mathcal O})'$ and $H^3({\mathcal O})$. We estimate each term on the right-hand side individually. First, consider the term involving the diffusion coefficients. Let $\theta>0$. 
Then, using the (at most) quadratic dependence of $A_{ij}$ on $u_k$ and the continuous embedding $H^3({\mathcal O})\hookrightarrow W^{1,\infty}({\mathcal O})$ (this is another instance where we use $d\le 3$), we find that \begin{align*} \mathbb{E}\bigg|&\int_{\tau_N}^{\tau_N+\theta} \big\langle A_{ij}(u^{(N)})\na u_j^{(N)},\na\Pi_N\phi\big\rangle ds\bigg| \\ &\le \mathbb{E}\int_{\tau_N}^{\tau_N+\theta}\|A_{ij}(u^{(N)})\|_{L^2({\mathcal O})} \|\na u_j^{(N)}\|_{L^2({\mathcal O})}\|\na\phi\|_{L^\infty({\mathcal O})} ds \\ &\le \mathbb{E}\bigg(\int_{\tau_N}^{\tau_N+\theta}\big(1+\|(u^{(N)})^2\|_{L^2({\mathcal O})}\big) \|\na u^{(N)}\|_{L^2({\mathcal O})}ds\bigg)\|\phi\|_{H^3({\mathcal O})} \\ &\le \mathbb{E}\bigg(\big(\theta^{1/2} + \theta^{1/6}\|(u^{(N)})^2\|_{L^3(0,T;L^2({\mathcal O}))}\big) \|\na u^{(N)}\|_{L^2(0,T;L^2({\mathcal O}))}\bigg)\|\phi\|_{H^3({\mathcal O})} \\ &\le \bigg\{\theta^{1/2} + \theta^{1/6}\bigg(\mathbb{E}\bigg(\int_0^T\|(u^{(N)})^2\|_{L^2({\mathcal O})}^3 dt\bigg)^{2/3} \bigg)^{1/2} \\ &\phantom{xx}{}\times\bigg(\mathbb{E}\int_0^T\|\na u^{(N)}\|_{L^2({\mathcal O})}^2 dt\bigg)^{1/2}\bigg\} \|\phi\|_{H^3({\mathcal O})}, \end{align*} where in the last two inequalities we applied the H\"older inequality with respect to time and then with respect to the random variable. The vector $(u^{(N)})^2$ consists of the components $(u_i^{(N)})^2$ for $i=1,\ldots,n$. Taking into account the estimates from Lemmas \ref{lem.est1} and \ref{lem.3}, we deduce that \begin{equation}\label{3.theta1} \mathbb{E}\bigg|\int_{\tau_N}^{\tau_N+\theta} \big\langle A_{ij}(u^{(N)})\na u_j^{(N)},\na\Pi_N\phi\big\rangle ds\bigg| \le C\theta^{1/6}\|\phi\|_{H^3({\mathcal O})}.
\end{equation} For the stochastic term, we use assumption \eqref{1.sigma} on $\sigma$, the It\^{o} isometry (see Proposition \ref{prop.iso}), and the H\"older inequality to obtain \begin{align} \mathbb{E}\bigg|\bigg\langle &\int_{\tau_N}^{\tau_N+\theta} \Pi_N(\sigma_{ij}(u^{(N)}(s)))dW_j(s),\phi\bigg\rangle\bigg|^2 \nonumber \\ &\le \mathbb{E}\bigg(\int_{\tau_N}^{\tau_N+\theta}\|\sigma(u^{(N)})\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 dt\bigg)\|\phi\|_{L^2({\mathcal O})}^2 \nonumber \\ &\le C_\sigma\mathbb{E}\bigg(\int_{\tau_N}^{\tau_N+\theta}\big(1+\|u^{(N)}\|_{L^2({\mathcal O})}^2 \big)dt\bigg)\|\phi\|_{L^2({\mathcal O})}^2 \nonumber \\ &\le C\bigg(\theta + \theta^{1/3}\bigg(\mathbb{E}\int_0^T\|u^{(N)}\|_{L^2({\mathcal O})}^3 dt \bigg)^{2/3}\bigg)\|\phi\|_{L^2({\mathcal O})}^2 \le C\theta^{1/3}\|\phi\|_{L^2({\mathcal O})}^2. \label{3.theta2} \end{align} Next, let $\kappa>0$ and $\eps>0$. By the definition of the $H^3({\mathcal O})'$ norm, the Chebyshev inequality, and estimate \eqref{3.theta1}, we have \begin{align*} \mathbb{P}\Big\{ & \|J_2^{(N)}(\tau_N+\theta)-J_2^{(N)}(\tau_N)\|_{H^3({\mathcal O})'}\ge\kappa\Big\} \le \frac{1}{\kappa}\mathbb{E}\|J_2^{(N)}(\tau_N+\theta)-J_2^{(N)}(\tau_N)\|_{H^3({\mathcal O})'} \\ &= \frac{1}{\kappa}\sup_{\|\phi\|_{H^3({\mathcal O})}=1}\mathbb{E} \big|\big\langle J_2^{(N)}(\tau_N+\theta)-J_2^{(N)}(\tau_N),\phi\big\rangle\big| \le \frac{C\theta^{1/6}}{\kappa}. \end{align*} Thus, choosing $\delta_1=(\kappa\eps/C)^6$, we infer that $$ \sup_{N\in\N}\sup_{0<\theta<\delta_1} \mathbb{P}\Big\{\|J_2^{(N)}(\tau_N+\theta)-J_2^{(N)}(\tau_N) \|_{H^3({\mathcal O})'}\ge\kappa\Big\} \le \eps. 
$$ In a similar way, it follows that \begin{align*} \mathbb{P}\Big\{\|J_3^{(N)}(\tau_N+\theta)-J_3^{(N)}(\tau_N)\|_{H^3({\mathcal O})'}\ge\kappa\Big\} &\le \frac{1}{\kappa^2}\mathbb{E}\|J_3^{(N)}(\tau_N+\theta) -J_3^{(N)}(\tau_N)\|_{H^3({\mathcal O})'}^2 \\ &\le \frac{C\theta^{1/3}}{\kappa^2}, \end{align*} and choosing $\delta_2=(\kappa^2\eps/C)^3$ gives $$ \sup_{N\in\N}\sup_{0<\theta<\delta_2} \mathbb{P}\Big\{\|J_3^{(N)}(\tau_N+\theta)-J_3^{(N)}(\tau_N) \|_{H^3({\mathcal O})'}\ge\kappa\Big\} \le \eps. $$ This shows that the Aldous condition holds for all three terms $J_i^{(N)}$, $i=1,2,3$. Consequently, in view of \eqref{3.J}, it also holds for $(u^{(N)})_{N\in\N}$. We conclude the proof by invoking Theorem \ref{thm.tight}. \end{proof} \subsection{Convergence of the approximate solutions}\label{sec.conv} First, we show that the space $Z_T$, defined in \eqref{3.Z}, verifies the assumption of the Skorokhod-Jakubowski theorem (see Theorem \ref{thm.skoro} in the appendix). More precisely, we prove that on each space in definition \eqref{3.Z} of $Z_T$, there exists a countable set of continuous real-valued functions separating points. \begin{lemma}\label{lem.Z} The topological space $Z_T$, defined in \eqref{3.Z}, satisfies the assumption of Theorem \ref{thm.skoro}. \end{lemma} \begin{proof} Since the spaces $C^0([0,T];H^3({\mathcal O})')$ and $L^2(0,T;L^2({\mathcal O}))$ are separable, metrizable, and complete, the assumption of Theorem \ref{thm.skoro} is satisfied; see \cite[Expos\'e 8]{Bad70}. For the space $L_w^2(0,T;H^1({\mathcal O}))$, it is sufficient to define $$ f_m(u) = \int_0^T (u(t),v_m(t))_{H^1({\mathcal O})}dt\in\R, \quad \mbox{where }u\in L_w^2(0,T;H^1({\mathcal O})),\ m\in\N, $$ and $(v_m)_{m\in\N}$ is a dense subset of $L^2(0,T;H^1({\mathcal O}))$. It remains to consider the space $C^0([0,T];L_w^2({\mathcal O}))$.
Let $(w_m)_{m\in\N}$ be a dense subset of $L^2({\mathcal O})$ and let $\mathbb{Q}_T$ be the set of rational numbers from the interval $[0,T]$. Then the family $\{f_{m,t}:m\in\N,$ $t\in\mathbb{Q}_T\}$, defined by $$ f_{m,t}(u) = (u(t),w_m)_{L^2({\mathcal O})}\in\R, \quad\mbox{where } u\in C^0([0,T];L_w^2({\mathcal O})),\ m\in\N,\ t\in\mathbb{Q}_T, $$ consists of continuous functions separating points in $C^0([0,T];L_w^2({\mathcal O}))$. \end{proof} In view of Lemma \ref{lem.Z} and Theorem \ref{thm.skoro}, we infer the following result. \begin{corollary}\label{coro.Z} Let $(\eta_n)_{n\in\N}$ be a sequence of $Z_T$-valued random variables such that their laws ${\mathcal L}(\eta_n)$ on $(Z_T,\mathcal{T})$ form a tight sequence of probability measures. Then there exists a subsequence $(\eta_k)_{k\in\N}$, which is not relabeled, a probability space $(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P})$, and $Z_T$-valued random variables $\widetilde\eta$, $\widetilde\eta_k$ with $k\in\N$ such that the variables $\eta_k$ and $\widetilde\eta_k$ have the same laws on $Z_T$ and $(\widetilde\eta_k)_{k\in\N}$ converges to $\widetilde\eta$ a.s.\ on $\widetilde\Omega$. \end{corollary} By Lemma \ref{lem.ut}, the set of measures $\{{\mathcal L}(u^{(N)}):N\in\N\}$ is tight on $(Z_T,\mathcal{T})$ and, by Lemma \ref{lem.Z} together with the separability and complete metrizability of $C^0([0,T];Y_0)$, the product space $Z_T\times C^0([0,T];Y_0)$ satisfies the assumption of Theorem \ref{thm.skoro}.
Therefore, we can apply Corollary \ref{coro.Z} to deduce the existence of a subsequence of $(u^{(N)})_{N\in\N}$, which is not relabeled, a probability space $(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P})$, and, on this space, $Z_T\times C^0([0,T];Y_0)$-valued random variables $(\widetilde u,\widetilde W)$, $(\widetilde u^{(N)},\widetilde W^{(N)})$ with $N\in\N$ such that $(\widetilde u^{(N)},\widetilde W^{(N)})$ has the same law as $(u^{(N)},W)$ on ${\mathcal B}(Z_T\times C^0([0,T];Y_0))$ and $$ (\widetilde u^{(N)},\widetilde W^{(N)}) \to (\widetilde u,\widetilde W) \quad\mbox{in }Z_T\times C^0([0,T];Y_0),\ \widetilde\mathbb{P}\mbox{-a.s., as }N\to\infty. $$ Because of the definition of the space $Z_T$, this convergence means that $\widetilde\mathbb{P}$-a.s., \begin{align} \widetilde u^{(N)}\to \widetilde u &\quad\mbox{in }C^0([0,T];H^3({\mathcal O})'), \nonumber \\ \widetilde u^{(N)}\rightharpoonup \widetilde u &\quad\mbox{weakly in } L^2(0,T;H^1({\mathcal O})), \nonumber \\ \widetilde u^{(N)}\to \widetilde u &\quad\mbox{in }L^2(0,T;L^2({\mathcal O})), \label{3.conv} \\ \widetilde u^{(N)}\to \widetilde u &\quad\mbox{in }C^0([0,T];L_w^2({\mathcal O})), \nonumber \\ \widetilde W^{(N)}\to \widetilde W &\quad\mbox{in }C^0([0,T];Y_0). \nonumber \end{align} Since $u^{(N)}$ is an element of $C^0([0,T];H_N)$ $\mathbb{P}$-a.s., $C^0([0,T];H_N)$ is a Borel set of $C^0([0,T];$ $H^3({\mathcal O})')\cap L^2(0,T;L^2({\mathcal O}))$, and since $u^{(N)}$ and $\widetilde u^{(N)}$ have the same laws, we infer that $$ {\mathcal L}(\widetilde u^{(N)})\big(C^0([0,T];H_N)\big) = 1 \quad\mbox{for all }N\ge 1. $$ Note that, as ${\mathcal B}(Z_T\times C^0([0,T];Y_0))$ is a subset of ${\mathcal B}(Z_T)\times{\mathcal B}(C^0([0,T];Y_0))$, the function $\widetilde u$ is a $Z_T$-Borel random variable. 
Furthermore, in view of estimates \eqref{3.est1}-\eqref{3.est3} and \eqref{3.estp} and the equality of the laws of $u^{(N)}$ and $\widetilde u^{(N)}$ on ${\mathcal B}(Z_T)$, we have the uniform bounds \begin{align} \sup_{N\in\N}\widetilde\mathbb{E}\Big(\sup_{t\in(0,T)}\|\widetilde u^{(N)}\|_{L^2({\mathcal O})}^2\Big) &\le C_1, \label{3.tilde1} \\ \sup_{N\in\N}\widetilde\mathbb{E}\bigg(\int_0^T\|\widetilde u^{(N)}\|_{H^1({\mathcal O})}^2 dt\bigg) + \alpha\sup_{N\in\N}\widetilde\mathbb{E}\bigg(\int_0^T\big\|(\widetilde u^{(N)})^2 \big\|_{H^1({\mathcal O})}^2 dt\bigg) &\le C_1, \label{3.tilde2} \\ \sup_{N\in\N}\widetilde\mathbb{E}\bigg(\sup_{t\in(0,T)} \|\widetilde u^{(N)}\|_{L^2({\mathcal O})}^p\bigg) &\le C_2, \label{3.tilde3} \end{align} where $p>2$ is as in Lemma \ref{lem.estp}. We deduce from \eqref{3.tilde2} that there exists a subsequence of $(\widetilde u^{(N)})$ (not relabeled) which is weakly converging in $L^2((0,T)\times\widetilde\Omega;H^1({\mathcal O}))$ as $N\to\infty$. Since $\widetilde u^{(N)}\to\widetilde u$ $\widetilde\mathbb{P}$-a.s.\ in $Z_T$, we conclude that $\widetilde u\in L^2((0,T)\times\widetilde\Omega;H^1({\mathcal O}))$, i.e. \begin{equation}\label{3.uH1} \widetilde\mathbb{E}\int_0^T\|\widetilde u(t)\|_{H^1({\mathcal O})}^2 dt < \infty. \end{equation} Similarly, the bound \eqref{3.tilde1} allows us to extract a subsequence which is weakly* convergent in $L^2(\widetilde\Omega;L^\infty(0,T;L^2({\mathcal O})))$ and \begin{equation}\label{3.uL2} \widetilde\mathbb{E}\bigg(\sup_{t\in(0,T)}\|\widetilde u(t)\|_{L^2({\mathcal O})}^2\bigg) < \infty. \end{equation} The convergence $\widetilde u^{(N)}\to \widetilde u$ in $L^2(0,T;L^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s.\ implies, up to a subsequence, that $$ \widetilde u^{(N)}\to\widetilde u \quad\mbox{a.e.\ in }{\mathcal O}\times(0,T),\ \widetilde\mathbb{P}\mbox{-a.s.} $$ In particular, we have (componentwise) $(\widetilde u^{(N)})^2\to(\widetilde u)^2$ a.e.\ in ${\mathcal O}\times(0,T)$, $\widetilde\mathbb{P}$-a.s.
On the other hand, by estimate \eqref{3.tilde2}, there exists a subsequence of $((\widetilde u^{(N)})^2)_{N\in\N}$ weakly converging to some function $v$ in $L^2(\widetilde\Omega; L^2(0,T;H^1({\mathcal O})))$. The uniqueness of the limit function then implies that $v=(\widetilde u)^2$ and consequently, $$ (\widetilde u^{(N)})^2\rightharpoonup (\widetilde u)^2 \quad\mbox{weakly in }L^2(\widetilde\Omega;L^2(0,T;H^1({\mathcal O}))). $$ It remains to show that the stochastic process $\widetilde u$ is a martingale solution to \eqref{1.eq}. The following lemmas are taken from \cite[Lemma 5.2 and proof]{BGJ13}. \begin{lemma}\label{lem.W1} Suppose that the process $(\widetilde W^{(N)}(t))_{t\in[0,T]}$, defined on $(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P})$, has the same law as the $Y$-valued cylindrical Wiener process $W$, defined on $(\Omega,{\mathcal F},\mathbb{P})$. Then $\widetilde W^{(N)}$ is also a $Y$-valued cylindrical Wiener process on $(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P})$. \end{lemma} \begin{lemma}\label{lem.W2} The process $(\widetilde W(t))_{t\in[0,T]}$ is a $Y$-valued cylindrical Wiener process on $(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P})$. If $0\le s<t\le T$, the increments $\widetilde W(t)-\widetilde W(s)$ are independent of the $\sigma$-algebra generated by $\widetilde u(r)$ and $\widetilde W(r)$ for $r\in[0,s]$. \end{lemma} We denote by $\widetilde\mathbb{F}$ the filtration generated by $(\widetilde u,\widetilde W)$ and by $\widetilde\mathbb{F}^{(N)}$ the filtration generated by $(\widetilde u^{(N)},\widetilde W^{(N)})$. Lemma \ref{lem.W1} implies that $\widetilde u^{(N)}$ is progressively measurable with respect to $\widetilde\mathbb{F}^{(N)}$, and Lemma \ref{lem.W2} shows that $\widetilde u$ is progressively measurable with respect to $\widetilde\mathbb{F}$. The following lemma plays a significant role in establishing the existence of a martingale solution to \eqref{1.eq}.
\begin{lemma}\label{lem.E} It holds for all $s$, $t\in[0,T]$ with $s\le t$ and all $\phi_1\in L^2({\mathcal O})$ and $\phi_2\in H^3({\mathcal O})$ satisfying $\na\phi_2\cdot\nu=0$ on $\pa{\mathcal O}$ that \begin{align} \lim_{N\to\infty}\widetilde\mathbb{E}\int_0^T\big(\widetilde u^{(N)}(t)-\widetilde u(t), \phi_1\big)_{L^2({\mathcal O})}^2dt &= 0, \label{3.E1} \\ \lim_{N\to\infty}\widetilde\mathbb{E}\big(\widetilde u^{(N)}(0)-\widetilde u(0), \phi_1\big)_{L^2({\mathcal O})}^2 &= 0, \label{3.E2} \\ \lim_{N\to\infty}\widetilde\mathbb{E}\int_0^T\bigg|\sum_{j=1}^n\int_0^t\Big\langle A_{ij}(\widetilde u^{(N)}(s))\na\widetilde u_j^{(N)}(s) - A_{ij}(\widetilde u(s))\na\widetilde u_j(s),\na\phi_2\Big\rangle ds\bigg|dt &= 0, \label{3.E3} \\ \lim_{N\to\infty}\widetilde\mathbb{E}\int_0^T\bigg|\sum_{j=1}^n\int_0^t\Big( \sigma_{ij}(\widetilde u^{(N)}(s))d\widetilde W_j^{(N)}(s)-\sigma_{ij}(\widetilde u(s)) d\widetilde W_j(s),\phi_1\Big)_{L^2({\mathcal O})}\bigg|^2 dt &= 0. \label{3.E4} \end{align} \end{lemma} \begin{proof} Let $\phi_1\in L^2({\mathcal O})$. We know that $\widetilde u^{(N)}\to \widetilde u$ in $Z_T$ $\widetilde\mathbb{P}$-a.s. 
In particular, $\widetilde u^{(N)}\to \widetilde u$ in $C^0([0,T];L_w^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s., which means that for any $t\in[0,T]$, $$ \lim_{N\to\infty}(\widetilde u^{(N)}(t),\phi_1)_{L^2({\mathcal O})} = (\widetilde u(t),\phi_1)_{L^2({\mathcal O})} \quad\widetilde\mathbb{P}\mbox{-a.s.} $$ Estimate \eqref{3.tilde1} provides a uniform bound for $(\widetilde u^{(N)}(t),\phi_1)_{L^2({\mathcal O})}^2$ so that we can apply the dominated convergence theorem to conclude that \begin{equation}\label{3.aux} \lim_{N\to\infty}\int_0^T\big(\widetilde u^{(N)}(t)-\widetilde u(t), \phi_1\big)_{L^2({\mathcal O})}^2 dt = 0 \quad\widetilde\mathbb{P}\mbox{-a.s.} \end{equation} By \eqref{3.tilde3}, we have for any $r>1$ that $$ \widetilde\mathbb{E}\bigg(\bigg|\int_0^T\|\widetilde u^{(N)}(t)-\widetilde u(t)\|_{L^2({\mathcal O})}^2 dt\bigg|^r\bigg) \le C\widetilde\mathbb{E}\int_0^T\big(\|\widetilde u^{(N)}(t)\|_{L^2({\mathcal O})}^{2r} + \|\widetilde u(t)\|_{L^2({\mathcal O})}^{2r}\big)dt \le C. $$ This bound provides the equi-integrability of $\int_0^T\big(\widetilde u^{(N)}(t)-\widetilde u(t),\phi_1\big)_{L^2({\mathcal O})}^2 dt$. Taking into account the convergence \eqref{3.aux}, Vitali's convergence theorem (see the appendix) then shows that \eqref{3.E1} holds. Convergence \eqref{3.E2} follows in a similar way. Indeed, since $\widetilde u^{(N)}\to\widetilde u$ in $C^0([0,T];L_w^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s.\ and $\widetilde u$ is weakly continuous at $t=0$, we infer that for any $\phi_1\in L^2({\mathcal O})$, $$ \lim_{N\to\infty}(\widetilde u^{(N)}(0),\phi_1)_{L^2({\mathcal O})} = (\widetilde u(0),\phi_1)_{L^2({\mathcal O})} \quad\widetilde\mathbb{P}\mbox{-a.s.} $$ Then convergence \eqref{3.E2} follows from \eqref{3.tilde1} and Vitali's convergence theorem. Next, we establish convergence \eqref{3.E3} through several steps.
Due to the structure of $A_{ij}(\widetilde u^{(N)})$, we need to show the following three convergences: \begin{align} \lim_{N\to\infty}\int_0^t\big\langle\na\widetilde u_j^{(N)}(s)-\na\widetilde u_j(s), \na\phi\big\rangle ds &= 0, \label{3.N1} \\ \lim_{N\to\infty}\int_0^t\Big\langle \widetilde u_j^{(N)}(s)\widetilde u_k^{(N)}(s) \na\widetilde u_k^{(N)}(s) - \widetilde u_j(s)\widetilde u_k(s)\na\widetilde u_k(s), \na\phi\Big\rangle ds &= 0, \label{3.N2} \\ \lim_{N\to\infty}\int_0^t\Big\langle(\widetilde u_k^{(N)}(s))^2 \na\widetilde u_j^{(N)}(s) - (\widetilde u_k(s))^2\na\widetilde u_j(s),\na\phi \Big\rangle ds &= 0, \label{3.N3} \end{align} for $j\neq k$ and suitable test functions $\phi$. We deduce from convergence \eqref{3.conv} that \eqref{3.N1} follows for all $\phi\in H^1({\mathcal O})$. The second convergence \eqref{3.N2} is proved as follows: \begin{align*} \bigg|&\int_0^t\Big\langle\widetilde u_j^{(N)}(s)\widetilde u_k^{(N)}(s) \na\widetilde u_k^{(N)}(s) - \widetilde u_j(s)\widetilde u_k(s)\na\widetilde u_k(s), \na\phi\Big\rangle ds\bigg| \\ &= \frac12\bigg|\int_0^t\Big\langle \widetilde u_j^{(N)}(s) \na\big(\widetilde u_k^{(N)}(s)\big)^2 - \widetilde u_j(s)\na\big(\widetilde u_k(s)\big)^2,\na\phi\Big\rangle ds\bigg| \\ &= \frac12\bigg|\int_0^t\Big\langle\big(\widetilde u_j^{(N)}(s)-\widetilde u_j(s)\big) \na\big(\widetilde u_k^{(N)}(s)\big)^2 + \widetilde u_j(s)\na\big\{\big(\widetilde u_k^{(N)}(s)\big)^2 - \big(\widetilde u_k(s)\big)^2\big\},\na\phi\Big\rangle ds\bigg| \\ &\le \frac12\int_0^t\|\widetilde u_j^{(N)}(s)-\widetilde u_j(s)\|_{L^2({\mathcal O})} \|\na(\widetilde u_k^{(N)})^2\|_{L^2({\mathcal O})}\|\na\phi\|_{L^\infty({\mathcal O})} ds \\ &\phantom{xx}{}+ \frac12\bigg|\int_0^t\Big(\widetilde u_j(s) \na\big\{\big(\widetilde u_k^{(N)}(s)\big)^2 - \big(\widetilde u_k(s)\big)^2\big\},\na\phi\Big)_{L^2({\mathcal O})} ds\bigg| \\ &=: I^{(N)}_1 + I^{(N)}_2. \end{align*} Let $\phi\in H^3({\mathcal O})$.
Then the embedding $H^3({\mathcal O})\hookrightarrow W^{1,\infty}({\mathcal O})$ is continuous for $d\le 3$ and, using the Cauchy-Schwarz inequality, $$ I^{(N)}_1 \le \frac12\|\phi\|_{H^3({\mathcal O})}\|\widetilde u_j^{(N)} -\widetilde u_j\|_{L^2(0,T;L^2({\mathcal O}))} \|\na(\widetilde u_k^{(N)})^2\|_{L^2(0,T;L^2({\mathcal O}))}. $$ Since $\widetilde u^{(N)}\to \widetilde u$ in $L^2(0,T;L^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s. and $\na(\widetilde u^{(N)})^2$ is uniformly bounded in $L^2(0,T;$ $L^2({\mathcal O}))$, it follows that $I^{(N)}_1\to 0$ as $N\to\infty$. For the second integral, we observe that $\widetilde u_j\na\phi\in L^2(0,T;L^2({\mathcal O}))$ (using \eqref{3.tilde2}) and $(\widetilde u^{(N)})^2\rightharpoonup(\widetilde u)^2$ weakly in $L^2(0,T;H^1({\mathcal O}))$ (by \eqref{3.conv}). This implies that $I^{(N)}_2\to 0$ as $N\to\infty$, and we have proved \eqref{3.N2}. We turn to the proof of \eqref{3.N3}. Let $\phi\in H^3({\mathcal O})$ be such that $\na\phi\cdot\nu=0$ on $\pa{\mathcal O}$. An integration by parts leads to \begin{align*} \int_0^t&\Big\langle(\widetilde u_k^{(N)}(s))^2 \na\widetilde u_j^{(N)}(s) - (\widetilde u_k(s))^2\na\widetilde u_j(s),\na\phi \Big\rangle ds \\ &= \int_0^t\int_{\mathcal O}\Big((\widetilde u_k^{(N)}(s))^2\na\widetilde u_j^{(N)}(s) - (\widetilde u_k(s))^2\na\widetilde u_j(s)\Big)\cdot\na\phi dxds \\ &= -\int_0^t\int_{\mathcal O}\Big((\widetilde u_k^{(N)}(s))^2\widetilde u_j^{(N)}(s) - (\widetilde u_k(s))^2\widetilde u_j(s)\Big)\Delta\phi dxds \\ &\phantom{xx}{}- \int_0^t\int_{\mathcal O}\Big(\widetilde u_j^{(N)}(s)\na(\widetilde u_k^{(N)}(s))^2 - \widetilde u_j(s)\na(\widetilde u_k(s))^2\Big)\cdot\na\phi dxds \\ &=: I^{(N)}_3 + I_4^{(N)}. \end{align*} The estimates for $I^{(N)}_1+I_2^{(N)}$ show that $I^{(N)}_4\to 0$ as $N\to\infty$.
We estimate $I_3^{(N)}$ as follows, using the continuous embeddings $H^3({\mathcal O})\hookrightarrow W^{2,4}({\mathcal O})$ and $H^1({\mathcal O})\hookrightarrow L^4({\mathcal O})$ (for $d\le 3$): \begin{align*} I_3^{(N)} &= -\int_0^t\int_{\mathcal O}\big(\widetilde u_j^{(N)}(s)-\widetilde u_j(s)\big) \big(\widetilde u_k^{(N)}(s)\big)^2\Delta\phi dxds \\ &\phantom{xx}{}+ \int_0^t\int_{\mathcal O} \big((\widetilde u_k^{(N)}(s))^2-(\widetilde u_k(s))^2\big)\widetilde u_j(s) \Delta\phi dxds \\ &\le \int_0^t\big\|\widetilde u_j^{(N)}(s)-\widetilde u_j(s)\big\|_{L^2({\mathcal O})} \big\|(\widetilde u_k^{(N)}(s))^2\big\|_{L^4({\mathcal O})}\|\Delta\phi\|_{L^4({\mathcal O})}ds \\ &\phantom{xx}+ \int_0^t\int_{\mathcal O} \big((\widetilde u_k^{(N)}(s))^2-(\widetilde u_k(s))^2\big)\widetilde u_j(s) \Delta\phi dxds \\ &\le \big\|\widetilde u_j^{(N)}-\widetilde u_j\big\|_{L^2(0,T;L^2({\mathcal O}))} \big\|(\widetilde u_k^{(N)})^2\big\|_{L^2(0,T;H^1({\mathcal O}))}\|\phi\|_{H^3({\mathcal O})} \\ &\phantom{xx}+ \int_0^t\int_{\mathcal O} \big((\widetilde u_k^{(N)}(s))^2-(\widetilde u_k(s))^2\big)\widetilde u_j(s) \Delta\phi dxds. \end{align*} The convergences \eqref{3.conv} and $\widetilde u_j\Delta\phi\in L^2(0,T;L^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s.\ imply that $I_3^{(N)}\to 0$ as $N\to\infty$. Convergences \eqref{3.N1}-\eqref{3.N3} imply that $\widetilde\mathbb{P}$-a.s. \begin{equation}\label{3.A1} \lim_{N\to\infty}\int_0^t\big(A_{ij}(\widetilde u^{(N)}(s))\na\widetilde u_j^{(N)}(s), \na \phi_2\big)_{L^2({\mathcal O})} ds = \int_0^t\big(A_{ij}(\widetilde u(s))\na \widetilde u_j(s), \na\phi_2\big)_{L^2({\mathcal O})} ds \end{equation} for all $\phi_2\in H^3({\mathcal O})$ with $\na\phi_2\cdot\nu=0$ on $\pa{\mathcal O}$. 
Furthermore, employing the structure of $A_{ij}(\widetilde u^{(N)})$, the continuous embedding $H^3({\mathcal O})\hookrightarrow W^{1,\infty}({\mathcal O})$ (again for $d\le 3$ only), and estimates \eqref{3.tilde2}-\eqref{3.tilde3}, we find that
\begin{align*}
\widetilde\mathbb{E}&\bigg(\bigg|\int_0^t\big(A_{ij}(\widetilde u^{(N)}(s)) \na\widetilde u_j^{(N)}(s),\na\phi_2\big)_{L^2({\mathcal O})}ds\bigg|^2\bigg) \\
&\le \|\na\phi_2\|^2_{L^\infty({\mathcal O})}\widetilde\mathbb{E}\bigg(\bigg|\int_0^t \big\|A_{ij}(\widetilde u^{(N)}(s))\na\widetilde u_j^{(N)}(s)\big\|_{L^1({\mathcal O})} ds\bigg|^2\bigg) \\
&\le C\|\phi_2\|^2_{H^3({\mathcal O})}\widetilde\mathbb{E}\bigg(\bigg|\int_0^t \big(1+\|(\widetilde u^{(N)}(s))^2\|_{L^2({\mathcal O})}\big) \|\na\widetilde u^{(N)}(s)\|_{L^2({\mathcal O})}ds\bigg|^2\bigg) \\
&\le C\|\phi_2\|^2_{H^3({\mathcal O})}\Big(T^{1/2}\big( \widetilde\mathbb{E}\|\widetilde u^{(N)}\|_{L^2(0,T;H^1({\mathcal O}))}^2\big)^{1/2} \\
&\phantom{xx}{} + T^{1/6}\big(\widetilde\mathbb{E}\|(\widetilde u^{(N)})^2\|_{L^3(0,T;L^2({\mathcal O}))}^3\big)^{1/3} \big(\widetilde\mathbb{E}\|\widetilde u^{(N)}\|_{L^2(0,T;H^1({\mathcal O}))}^2\big)^{1/2}\Big) \le C.
\end{align*}
This bound and the $\widetilde\mathbb{P}$-a.s.\ convergence \eqref{3.A1} allow us to apply the Vitali convergence theorem to infer that \eqref{3.E3} holds.
It remains to prove convergence \eqref{3.E4}. Since $\widetilde{W}^{(N)} \to \widetilde{W}$ in $C^0([0,T]; Y_0)$, it is sufficient to show that $\sigma_{ij}(\widetilde{u}^{(N)}) \to \sigma_{ij}(\widetilde{u})$ in $L^2(0,T; {\mathcal L}_2(Y; L^2({\mathcal O})))$ $\widetilde\mathbb{P}$-a.s.
We estimate for $\phi\in L^2({\mathcal O})$: \begin{align*} \int_0^t&\big\|\big(\sigma_{ij}(\widetilde u^{(N)}(s))-\sigma_{ij}(\widetilde u(s)), \phi\big)_{L^2({\mathcal O})}\big\|_{{\mathcal L}_2(Y;\R)}^2 ds \\ &\le \int_0^t\big\|\sigma_{ij}(\widetilde u^{(N)}(s))-\sigma_{ij}(\widetilde u(s)) \big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2\|\phi\|_{L^2({\mathcal O})}^2 ds \\ &\le C_\sigma\|\widetilde u^{(N)}-\widetilde u\|_{L^2(0,T;L^2({\mathcal O}))}^2 \|\phi\|_{L^2({\mathcal O})}^2. \end{align*} Since $\widetilde u^{(N)}\to\widetilde u$ in $L^2(0,T;L^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s., by \eqref{3.conv}, we infer that for $t\in[0,T]$, $\omega\in\widetilde\Omega$, and $\phi\in L^2({\mathcal O})$, \begin{equation}\label{3.aux5} \lim_{N\to\infty} \int_0^t\big\|\big(\sigma_{ij}(\widetilde u^{(N)}(s))-\sigma_{ij}(\widetilde u(s)), \phi\big)_{L^2({\mathcal O})}\big\|_{{\mathcal L}_2(Y;\R)}^2 ds = 0. \end{equation} We conclude from \eqref{3.tilde3} and \eqref{3.uH1} that \begin{align*} \widetilde\mathbb{E}&\bigg|\int_0^t\big\|(\sigma_{ij}(\widetilde u^{(N)}(s)) -\sigma_{ij}(\widetilde u(s)),\phi\big)_{L^2({\mathcal O})}\big\|_{{\mathcal L}_2(Y;\R)}^2 ds \bigg|^2 \\ &\le C\widetilde\mathbb{E}\bigg(\|\phi\|_{L^2({\mathcal O})}^4\int_0^t\big( \|\sigma_{ij}(\widetilde u^{(N)}(s))\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^4 + \|\sigma_{ij}(\widetilde u(s))\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^4\big)ds\bigg) \\ &\le C\bigg(1+\widetilde\mathbb{E}\Big(\sup_{t\in(0,T)}\|\widetilde u^{(N)}(t)\|_{L^2({\mathcal O})}^4 + \sup_{t\in(0,T)}\|\widetilde u(t)\|_{L^2({\mathcal O})}^4\Big)\bigg) \le C. \end{align*} With this bound, convergence \eqref{3.aux5}, and the Vitali convergence theorem we obtain for all $\phi\in L^2({\mathcal O})$, $$ \lim_{N\to\infty}\widetilde\mathbb{E}\int_0^t\big\|\big(\sigma_{ij}(\widetilde u^{(N)}(s)) - \sigma_{ij}(\widetilde u(s)),\phi\big)_{L^2({\mathcal O})}\big\|_{{\mathcal L}_2(Y;\R)}^2 ds = 0. 
$$
Hence, by the It\^{o} isometry (Proposition \ref{prop.iso}), for $t\in[0,T]$ and $\phi\in L^2({\mathcal O})$,
\begin{equation}\label{3.aux6}
\lim_{N\to\infty}\widetilde\mathbb{E}\bigg|\bigg(\int_0^t \big(\sigma_{ij}(\widetilde u^{(N)}(s))-\sigma_{ij}(\widetilde u(s))\big) d\widetilde W_j(s),\phi\bigg)_{L^2({\mathcal O})}\bigg|^2 = 0.
\end{equation}
We use the It\^{o} isometry again and estimates \eqref{3.tilde1} and \eqref{3.uL2} for $N\in\N$, $t\in[0,T]$, and $\phi\in L^2({\mathcal O})$ to infer that
\begin{align*}
\widetilde\mathbb{E}&\bigg|\bigg(\int_0^t\big(\sigma_{ij}(\widetilde u^{(N)}(s)) - \sigma_{ij}(\widetilde u(s))\big)d\widetilde W_j(s),\phi\bigg)_{L^2({\mathcal O})}\bigg|^2 \\
&= \widetilde\mathbb{E}\bigg(\int_0^t\big\|\big(\sigma_{ij}(\widetilde u^{(N)}(s)) - \sigma_{ij}(\widetilde u(s)),\phi\big)_{L^2({\mathcal O})}\big\|_{{\mathcal L}_2(Y;\R)}^2 ds \bigg) \\
&\le \widetilde\mathbb{E}\bigg(\|\phi\|_{L^2({\mathcal O})}^2\int_0^t\big\| \sigma_{ij}(\widetilde u^{(N)}(s))-\sigma_{ij}(\widetilde u(s)) \big\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds\bigg) \\
&\le C\widetilde\mathbb{E}\bigg(t\sup_{t\in(0,T)}\|\widetilde u^{(N)}(t)\|_{L^2({\mathcal O})}^2 + t\sup_{t\in(0,T)}\|\widetilde u(t)\|_{L^2({\mathcal O})}^2\bigg) \le C.
\end{align*}
This bound and convergence \eqref{3.aux6} allow us to apply the dominated convergence theorem to conclude that for all $\phi\in L^2({\mathcal O})$,
$$
\lim_{N\to\infty}\widetilde\mathbb{E}\int_0^T\bigg|\bigg(\int_0^t \big(\sigma_{ij}(\widetilde u^{(N)}(s))-\sigma_{ij}(\widetilde u(s))\big) d\widetilde W_j(s),\phi\bigg)_{L^2({\mathcal O})}\bigg|^2 dt = 0.
$$
This shows \eqref{3.E4} and finishes the proof.
\end{proof}

Let us define
\begin{align*}
\Lambda^{(N)}_i(\widetilde u^{(N)},\widetilde W^{(N)},\phi)(t) &:= (\Pi_N(\widetilde u_i(0)),\phi)_{L^2({\mathcal O})} \\
&\phantom{xx}{} + \sum_{j=1}^n\int_0^t\Big\langle\Pi_N\diver\big(A_{ij}(\widetilde u^{(N)}(s)) \na\widetilde u^{(N)}_j(s)\big),\phi\Big\rangle ds \\
&\phantom{xx}{}+ \bigg(\sum_{j=1}^n\int_0^t\Pi_N\sigma_{ij}(\widetilde u^{(N)}(s)) d\widetilde W_j^{(N)}(s),\phi\bigg)_{L^2({\mathcal O})}, \\
\Lambda_i(\widetilde u,\widetilde W,\phi)(t) &:= (\widetilde u_i(0),\phi)_{L^2({\mathcal O})} + \sum_{j=1}^n\int_0^t\big\langle\diver\big(A_{ij}(\widetilde u(s)) \na\widetilde u_j(s)\big),\phi\big\rangle ds \\
&\phantom{xx}{}+ \bigg(\sum_{j=1}^n\int_0^t\sigma_{ij}(\widetilde u(s)) d\widetilde W_j(s),\phi\bigg)_{L^2({\mathcal O})},
\end{align*}
for $t\in[0,T]$ and $i=1,\ldots,n$. The following corollary is essentially a consequence of Lemma \ref{lem.E}.

\begin{corollary}\label{coro.E}
It holds for any $\phi_1\in L^2({\mathcal O})$ and any $\phi_2\in H^3({\mathcal O})$ satisfying $\na\phi_2\cdot\nu=0$ on $\pa{\mathcal O}$ that
\begin{align*}
\lim_{N\to\infty}\big\|(\widetilde u^{(N)},\phi_1)_{L^2({\mathcal O})} - (\widetilde u,\phi_1)_{L^2({\mathcal O})}\big\|_{L^2(\widetilde\Omega\times(0,T))} &= 0, \\
\lim_{N\to\infty}\big\|\Lambda_i^{(N)}(\widetilde u^{(N)},\widetilde W^{(N)},\phi_2) - \Lambda_i(\widetilde u,\widetilde W,\phi_2)\big\|_{L^1(\widetilde\Omega\times(0,T))} &= 0.
\end{align*}
\end{corollary}

\begin{proof}
The first convergence follows immediately from the identity
$$
\big\|(\widetilde u^{(N)},\phi_1)_{L^2({\mathcal O})} - (\widetilde u,\phi_1)_{L^2({\mathcal O})}\big\|^2_{L^2(\widetilde\Omega\times(0,T))} = \widetilde\mathbb{E}\int_0^T\big|\big(\widetilde u^{(N)}(t)-\widetilde u(t), \phi_1\big)_{L^2({\mathcal O})}\big|^2 dt
$$
and convergence \eqref{3.E1}. For the second convergence, let $\phi_2\in H^3({\mathcal O})$ satisfy $\na\phi_2\cdot\nu=0$ on $\pa{\mathcal O}$.
Fubini's theorem implies that
\begin{align*}
\big\|\Lambda_i^{(N)}&(\widetilde u^{(N)},\widetilde W^{(N)},\phi_2) - \Lambda_i(\widetilde u,\widetilde W,\phi_2) \big\|_{L^1(\widetilde\Omega\times(0,T))} \\
&= \int_0^T\widetilde\mathbb{E}\Big|\Lambda_i^{(N)}(\widetilde u^{(N)}, \widetilde W^{(N)},\phi_2) - \Lambda_i(\widetilde u,\widetilde W,\phi_2)\Big| dt.
\end{align*}
Convergences \eqref{3.E2}-\eqref{3.E4} show that each term in the definition of $\Lambda_i^{(N)}(\widetilde u^{(N)},\widetilde W^{(N)},\phi_2)$ tends to the corresponding term in $\Lambda_i(\widetilde u,\widetilde W,\phi_2)$ at least in the space $L^1(\widetilde\Omega\times(0,T))$.
\end{proof}

Since $u^{(N)}$ is a strong solution to \eqref{2.approx1}-\eqref{2.approx2}, it satisfies the identity
$$
(u_i^{(N)}(t),\phi)_{L^2({\mathcal O})} = \Lambda_i^{(N)}(u^{(N)},W,\phi)(t) \quad\mathbb{P}\mbox{-a.s.}
$$
for all $t\in[0,T]$, $i=1,\ldots,n$, and $\phi\in H^1({\mathcal O})$. In particular, we have
$$
\int_0^T\mathbb{E}\Big|(u_i^{(N)}(t),\phi)_{L^2({\mathcal O})} - \Lambda_i^{(N)}(u^{(N)},W,\phi)(t)\Big|dt = 0.
$$
Since the laws ${\mathcal L}(u^{(N)},W)$ and ${\mathcal L}(\widetilde u^{(N)},\widetilde W^{(N)})$ coincide, we find that
$$
\int_0^T\widetilde\mathbb{E}\Big|(\widetilde u_i^{(N)}(t),\phi)_{L^2({\mathcal O})} - \Lambda_i^{(N)}(\widetilde u^{(N)},\widetilde W^{(N)},\phi)(t)\Big|dt = 0.
$$
By Corollary \ref{coro.E}, the limit $N\to\infty$ in this equation yields
$$
\int_0^T\widetilde\mathbb{E}\Big|(\widetilde u_i(t),\phi)_{L^2({\mathcal O})} - \Lambda_i(\widetilde u,\widetilde W,\phi)(t)\Big|dt = 0, \quad i=1,\ldots,n.
$$
This identity holds for all $\phi\in H^3({\mathcal O})$ satisfying $\na\phi\cdot\nu=0$ on $\pa{\mathcal O}$. By a density argument, it also holds for all $\phi\in H^1({\mathcal O})$.
Hence, for Lebesgue-a.e.\ $t\in(0,T]$ and $\widetilde\mathbb{P}$-a.e.\ $\omega\in\widetilde \Omega$, we deduce that
$$
(\widetilde u_i(t),\phi)_{L^2({\mathcal O})} - \Lambda_i(\widetilde u,\widetilde W,\phi)(t) = 0, \quad i=1,\ldots,n.
$$
By definition of $\Lambda_i$, this means that for Lebesgue-a.e.\ $t\in(0,T]$ and $\widetilde\mathbb{P}$-a.e.\ $\omega\in\widetilde\Omega$,
\begin{align*}
(\widetilde u_i(t),\phi)_{L^2({\mathcal O})} &= (\widetilde u_i(0),\phi)_{L^2({\mathcal O})} + \sum_{j=1}^n\int_0^t\big\langle\diver\big(A_{ij}(\widetilde u(s)) \na\widetilde u_j(s)\big),\phi\big\rangle ds \\
&\phantom{xx}{}+ \bigg(\sum_{j=1}^n\int_0^t\sigma_{ij}(\widetilde u(s)) d\widetilde W_j(s),\phi\bigg)_{L^2({\mathcal O})}.
\end{align*}
Setting $\widetilde U:=(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P},\widetilde\mathbb{F})$, we infer that the system $(\widetilde U,\widetilde W,\widetilde u)$ is a martingale solution to \eqref{1.eq} and the stochastic process $\widetilde u$ satisfies estimates \eqref{3.uH1} and \eqref{3.uL2}.

\begin{remark}[Random initial data]\label{rem.ic}\rm
The initial data may be chosen to be random, i.e., we prescribe an initial probability measure $\mu^0$ on $L^2({\mathcal O})$ instead of fixed initial data. We assume that
\begin{equation}\label{eq:5.18}
\int_{L^2({\mathcal O})} \|x\|_{L^2({\mathcal O})}^p d\mu^0(x) < \infty\quad\mbox{for } p=\frac{24}{4-d}.
\end{equation}
In principle, the whole analysis can also be carried out in this case. Indeed, for the given initial distribution $\mu^0$ and a given stochastic basis $(\Omega, {\mathcal F}, \mathbb{F}, \mathbb{P})$, there exists an $\mathcal{F}_0$-measurable random variable, which we denote by $u^0$ and whose distribution is $\mu^0$. Because of assumption \eqref{eq:5.18}, we have $\mathbb{E}\|u^0\|_{L^2({\mathcal O})}^p < \infty$ and consequently, the a priori estimates obtained in Section \ref{sec.unif} still hold true.
As before, we can show that the set of measures $\{{\mathcal L}(u^{(N)}):N\in\N\}$ is tight on $Z_T$ and therefore, by the Skorokhod-Jakubowski theorem, we obtain a sequence of new random variables $(\widetilde{u}^{(N)})_{N\in\N}$ (and also a sequence of new Wiener processes) which have the same laws as the old random variables $u^{(N)}$ on $Z_T$. In particular, ${\mathcal L}(\widetilde{u}^{(N)}(0)) = {\mathcal L}(u^{(N)}(0))$ in $L^2({\mathcal O})$ as well as $\widetilde{u}^{(N)} \to \widetilde{u}$ in $C^0([0,T];L_w^2({\mathcal O}))$ $\widetilde\mathbb{P}$-a.s.\ and $\widetilde{u}^{(N)}(0) \to \widetilde{u}(0)$ in $L^2({\mathcal O})$ weakly $\widetilde\mathbb{P}$-a.s. We conclude that ${\mathcal L}(\widetilde{u}(0)) = {\mathcal L}(\widetilde{u}^{(N)}(0)) = {\mathcal L}(u^0) = \mu^0$. Thus, we have shown that the process $\widetilde{u}$ has the initial measure $\mu^0$ and therefore is the required martingale solution of \eqref{1.eq}.
\qed
\end{remark}

\subsection{Nonnegativity of the solutions}\label{sec.pos}

We show that if $u_i^0\ge 0$ in ${\mathcal O}$ for $i=1,\ldots,n$ and condition \eqref{1.sigma2} on $\sigma$ holds, then $\widetilde u_i$ is nonnegative $\widetilde\mathbb{P}$-a.s. For this, we employ the technique of \cite{CPT16}. The idea is to approximate the function $f(z)=z^-=\max\{0,-z\}$ for $z\in\R$ and to use It\^o's formula. We define as in \cite[Section 2.4]{CPT16} the following functions:
$$
f_\eps(z) = \left\{\begin{array}{ll}
-z &\quad\mbox{if }z\le -\eps, \\
\displaystyle-3\left(\frac{z}{\eps}\right)^4z - 8\left(\frac{z}{\eps}\right)^3z - 6\left(\frac{z}{\eps}\right)^2z &\quad\mbox{if }-\eps\le z\le 0, \\
0 &\quad\mbox{if }z\ge 0,
\end{array}\right.
$$
for $\eps>0$. Then $f_\eps$ has at most linear growth, i.e.\ $|f_\eps(z)|\le C|z|$ for all $z\in\R$, and the functions $f_\eps'$ and $\psi_\eps:=f_\eps f_\eps''+(f_\eps')^2$ are bounded in $\R$.
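For the reader's convenience, the $C^2$-regularity of $f_\eps$ (needed for It\^o's formula below) can be checked by a direct computation, matching the three branches up to second order at $z=-\eps$ and $z=0$. On $[-\eps,0]$ we have $f_\eps(z)=-3z^5/\eps^4-8z^4/\eps^3-6z^3/\eps^2$, hence

```latex
% Derivatives of the middle branch of f_eps on [-eps,0]:
\begin{align*}
f_\eps'(z) &= -15\frac{z^4}{\eps^4} - 32\frac{z^3}{\eps^3} - 18\frac{z^2}{\eps^2},
&
f_\eps''(z) &= -60\frac{z^3}{\eps^4} - 96\frac{z^2}{\eps^3} - 36\frac{z}{\eps^2},
\\
% Boundary values at z = -eps and z = 0:
f_\eps(-\eps) &= (3-8+6)\eps = \eps,
&
f_\eps(0) &= 0,
\\
f_\eps'(-\eps) &= -15+32-18 = -1,
&
f_\eps'(0) &= 0,
\\
f_\eps''(-\eps) &= (60-96+36)\eps^{-1} = 0,
&
f_\eps''(0) &= 0.
\end{align*}
```

These values agree with the outer branch $-z$ (value $\eps$, slope $-1$, vanishing second derivative at $z=-\eps$) and with the zero branch at $z=0$, so $f_\eps\in C^2(\R)$.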
We set
$$
F_\eps(v) = \int_{\mathcal O} f_\eps(v(x))^2dx, \quad F(v) = \int_{\mathcal O} f(v(x))^2 dx
$$
for square-integrable functions $v:{\mathcal O}\to\R$. We replace the diffusion coefficients $A_{ij}(u^{(N)})$ in \eqref{2.approx1} by the modified coefficients
$$
A_{ij}^+(u^{(N)}) = \delta_{ij}\bigg(a_{i0} + \sum_{k=1}^n a_{ik}u_k^2\bigg) + 2a_{ij}u_i^+u_j, \quad i,j=1,\ldots,n,
$$
where $z^+=\max\{0,z\}$ is the positive part of $z\in\R$. Observe that generally, $A_{ij}^+(u)\neq A_{ij}(u)$, but if $u_i\ge 0$ for all $i=1,\ldots,n$, then we obtain the original coefficients, $A_{ij}^+(u)=A_{ij}(u)$. The proof of Lemma \ref{lem.ex} provides the existence of a pathwise unique strong solution $u^{(N)}$ to this truncated problem. The It\^o formula in finite dimensions gives \cite[Formula (3.3)]{CPT16}
\begin{align}
F_\eps(u_i^{(N)}(t)) &= F_\eps(u_i^{(N)}(0)) \nonumber \\
&\phantom{xx}{}- 2\int_0^t\int_{\mathcal O}\psi_\eps(u_i^{(N)}(s)) \sum_{j=1}^n A_{ij}^+(u^{(N)}(s))\na u_i^{(N)}(s)\cdot\na u_j^{(N)}(s)dxds \nonumber \\
&\phantom{xx}{}+ 2\sum_{j=1}^n\int_0^t\int_{\mathcal O} f_\eps(u_i^{(N)}(s))f_\eps'(u_i^{(N)}(s))\Pi_N \sigma_{ij}(u^{(N)}(s))dxdW_j(s) \label{3.Feps} \\
&\phantom{xx}{}+ \int_0^t\int_{\mathcal O}\sum_{j=1}^n\sum_{k,\ell=1}^N\sum_{m=1}^\infty \psi_\eps(u_i^{(N)}(s))e_ke_\ell\sigma_{ij}^{mk}(u^{(N)}(s)) \sigma_{ij}^{m\ell}(u^{(N)}(s))dxds \nonumber \\
&=: I_{\eps,0}^{(N)} + I_{\eps,1}^{(N)} + I_{\eps,2}^{(N)} + I_{\eps,3}^{(N)}, \nonumber
\end{align}
where $\sigma_{ij}^{mk}$ is defined in \eqref{1.sigmadW}. We claim that the integral $I_{\eps,1}^{(N)}$ is nonpositive. Indeed, we write
\begin{align*}
I_{\eps,1}^{(N)} &= -2\int_0^t\int_{\mathcal O}\psi_\eps(u_i^{(N)})A_{ii}^+(u^{(N)}) |\na u_i^{(N)}|^2 dxds \\
&\phantom{xx}{}- 2\int_0^t\int_{\mathcal O}\psi_\eps(u_i^{(N)})\sum_{j\neq i}A_{ij}^+(u^{(N)}) \na u_i^{(N)}\cdot\na u_j^{(N)} dxds.
\end{align*}
The first term on the right-hand side is clearly nonpositive; the second term vanishes since $\psi_\eps(u_i^{(N)})=0$ in $\{u_i^{(N)}\ge 0\}$ and $A_{ij}^+(u^{(N)})=0$ for $j\neq i$ in $\{u_i^{(N)}\le 0\}$. This shows that $I_{\eps,1}^{(N)}\le 0$.
By \eqref{3.conv}, we know that $u^{(N)}\to u$ in $L^2(0,T;L^2({\mathcal O}))$ as $N\to\infty$. (To be precise, we should work with the new processes $\widetilde u^{(N)}$, but we omit the tilde.) Therefore, up to a subsequence which is not relabeled, $u^{(N)}\to u$ for a.e.\ $(x,t,\omega)\in{\mathcal O}\times(0,T)\times\Omega$. Following the steps of \cite[Section 3.2]{CPT16}, we can show the following $\mathbb{P}$-a.s.\ convergence results as $N\to\infty$:
\begin{align*}
& F_\eps(u_i^{(N)}(t)) \to F_\eps(u_i(t)), \quad I_{\eps,0}^{(N)} \to F_\eps(u_i^0), \\
& I_{\eps,2}^{(N)} \to 2\sum_{j=1}^n\int_0^t\int_{\mathcal O} f_\eps(u_i(s))f_\eps'(u_i(s)) \sigma_{ij}(u(s))dxdW_j(s), \\
& I_{\eps,3}^{(N)} \to \int_0^t\int_{\mathcal O}\sum_{j=1}^n\sum_{k,\ell=1}^\infty \sum_{m=1}^\infty \psi_\eps(u_i(s))e_ke_\ell\sigma_{ij}^{mk}(u(s))\sigma_{ij}^{m\ell}(u(s))dxds.
\end{align*}
Passing to the limit $N\to\infty$ in \eqref{3.Feps} then leads to
\begin{align*}
F_\eps(u_i(t)) &\le F_\eps(u_i^0) + 2\sum_{j=1}^n\int_0^t\int_{\mathcal O} f_\eps(u_i(s))f'_\eps(u_i(s)) \sigma_{ij}(u(s))dxdW_j(s) \\
&\phantom{xx}{}+\int_0^t\int_{\mathcal O} \psi_\eps(u_i(s))\sum_{j=1}^n\sum_{m=1}^\infty\big(\sigma_{ij}(u(s))\eta_m\big)^2 dxds.
\end{align*}
Taking the mathematical expectation, the stochastic integral vanishes:
\begin{equation}\label{3.Feps2}
\mathbb{E} F_\eps(u_i(t)) \le \mathbb{E} F_\eps(u_i^0) + \mathbb{E}\int_0^t\int_{\mathcal O}\psi_\eps(u_i(s)) \sum_{j=1}^n\sum_{m=1}^\infty\big(\sigma_{ij}(u(s))\eta_m\big)^2 dxds.
\end{equation}
It is shown in \cite[Section 3.4]{CPT16} that in the limit $\eps\to 0$,\footnote{Observe that there is a typo in \cite[formulas (3.21)-(3.24)]{CPT16}: The sum from $l=1$ to $\infty$ should be outside the brackets.}
\begin{align*}
& \mathbb{E} F_\eps(u_i(t)) \to \mathbb{E}\|u_i^-(t)\|_{L^2({\mathcal O})}^2, \quad \mathbb{E} F_\eps(u_i^0) \to \mathbb{E}\|(u_i^0)^-\|_{L^2({\mathcal O})}^2, \\
& \mathbb{E}\int_0^t\int_{\mathcal O}\psi_\eps(u_i)\sum_{j=1}^n\sum_{m=1}^\infty\big(\sigma_{ij}(u) \eta_m\big)^2 dxds \to \mathbb{E}\int_0^t\sum_{j=1}^n \|\sigma_{ij}(-u^-)\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds.
\end{align*}
Thus, the limit $\eps\to 0$ in \eqref{3.Feps2} gives
$$
\mathbb{E}\|u_i^-(t)\|_{L^2({\mathcal O})}^2 \le \mathbb{E}\|(u_i^0)^-\|_{L^2({\mathcal O})}^2 + \mathbb{E}\int_0^t\sum_{j=1}^n\|\sigma_{ij}(-u^-(s))\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds.
$$
The first term on the right-hand side vanishes since $u_i^0\ge 0$. For the second term, we employ the linear growth \eqref{1.sigma2} of $\sigma_{ij}$, showing that
$$
\mathbb{E}\|u_i^-(t)\|_{L^2({\mathcal O})}^2 \le C\mathbb{E}\int_0^t\|u_i^-(s)\|_{L^2({\mathcal O})}^2 ds.
$$
Gronwall's lemma implies that $\mathbb{E}\|u_i^-(t)\|_{L^2({\mathcal O})}^2=0$ for $t\in(0,T)$ and consequently, $u_i(t)\ge 0$ in ${\mathcal O}$, $\mathbb{P}$-a.s. for a.e.\ $t\in[0,T]$ and all $i=1,\ldots,n$. This finishes the proof.

\begin{appendix}
\section{Some results from stochastic analysis}\label{app}
\subsection{Results for stochastic processes}

The following particular It\^o formula is proved in \cite[Theorem 4.2.5]{PrRo07}.

\begin{theorem}[It\^o formula]\label{thm.ito}
Let $V\subset H\subset V'$ be a Gelfand triple and $U$ be a separable Hilbert space, let $X_0\in L^2(\Omega;H)$, and let $a\in L^2(\Omega\times(0,T);V')$, $b\in L^2(\Omega\times(0,T);{\mathcal L}_2(U,H))$ be progressively measurable.
Define the stochastic process
$$
X(t) = X_0 + \int_0^t a(s)ds + \int_0^t b(s)dW(s), \quad t\in(0,T).
$$
Then
\begin{align*}
\frac12\|X(t)\|_H^2 &= \frac12\|X_0\|_H^2 + \int_0^t \langle a(s),X(s)\rangle_{V',V} ds + \frac12\int_0^t\|b(s)\|_{{\mathcal L}_2(U,H)}^2 ds \\
&\phantom{xx}{}+ \int_0^t(X(s),b(s)dW(s))_H \quad\mbox{for }t\in(0,T),
\end{align*}
where $\langle\cdot,\cdot\rangle_{V',V}$ is the duality pairing between $V'$ and $V$, $(\cdot,\cdot)_H$ is the inner product in $H$, and $X(s)\in L^2(\Omega\times(0,T);V)$ in $\langle a(s),X(s)\rangle_{V',V}$ is any $V$-valued progressively measurable $dt\otimes\mathbb{P}$ version of the equivalence class represented by $X(s)$.
\end{theorem}

The next proposition can be found in \cite[Prop. 2.10]{Kru14}.

\begin{proposition}[It\^o isometry]\label{prop.iso}
Let $\sigma(u)\in L^2((0,T)\times\Omega;{\mathcal L}_2(Y;L^2({\mathcal O})))$ be a predictable stochastic process. Then
$$
\mathbb{E}\bigg\|\int_0^T\sigma(u(s))dW(s)\bigg\|_{L^2({\mathcal O})}^2 = \mathbb{E}\int_0^T\|\sigma(u(s))\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2ds.
$$
\end{proposition}

This result can be generalized in the following sense; see \cite[Prop. 2.12]{Kru14}.

\begin{proposition}[Burkholder-Davis-Gundy inequality]\label{prop.bdg}
Let $p\ge 2$ and let $\sigma:L^2({\mathcal O})\times[0,T]\times\Omega\to {\mathcal L}_2(Y;L^2({\mathcal O}))$ be a predictable stochastic process such that
$$
\mathbb{E}\bigg(\int_0^T\|\sigma(u(s))\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds\bigg)^{p/2} < \infty.
$$
Then, for some $C>0$ depending on $p$,
$$
\mathbb{E}\bigg\|\int_0^T\sigma(u(s))dW(s)\bigg\|_{L^2({\mathcal O})}^p \le C\mathbb{E}\bigg( \int_0^T\|\sigma(u(s))\|_{{\mathcal L}_2(Y;L^2({\mathcal O}))}^2 ds\bigg)^{p/2}.
$$
\end{proposition}

\subsection{Finite-dimensional stochastic differential equations}

We state a result on the existence of the pathwise unique strong solution to the stochastic differential equation on $\R^n$ (essentially taken from \cite[Theorem 3.1.1]{PrRo07}; originally from \cite{Kry99}),
\begin{equation}\label{2.sde}
\pi\cdot dX(t) = a(X,t)dt + b(X,t)dW(t), \quad t>0, \quad X(0)=X_0.
\end{equation}
Here, $\pi=(\pi_1,\ldots,\pi_n)\in(0,\infty)^n$, $a:\R^n\times[0,\infty)\times\Omega\to\R^n$ and $b:\R^n\times[0,\infty)\times\Omega\to\R^{n\times m}$ are both continuous in $x\in\R^n$ for fixed $t\in[0,\infty)$, $\omega\in\Omega$, progressively measurable, and satisfy for all $R$, $T>0$,
\begin{equation}\label{2.ab1}
\int_0^T\sup_{|x|\le R}\big(|a(x,t)|^2 + |b(x,t)|^2\big)dt < \infty \quad\mbox{in }\Omega,
\end{equation}
where $|a(x,t)|$ is the Euclidean norm on $\R^n$ and $|b(x,t)|$ is the Frobenius norm on $\R^{n\times m}$. Furthermore, we assume that for all $R$, $t>0$, and $x$, $y\in\R^n$ with $|x|$, $|y|\le R$,
\begin{equation}\label{2.ab2}
\begin{aligned}
2\big(a(x,t)-a(y,t),x-y\big) + \big|b(x,t)-b(y,t)\big|^2 &\le K_R(t)|x-y|^2, \\
2(a(x,t),x) + |b(x,t)|^2 &\le K_1(t)(1+|x|^2),
\end{aligned}
\end{equation}
where for every $R>0$, $K_R(t)$ is an $\R_+$-valued ${\mathcal F}_t$-adapted process satisfying $\int_0^T K_R(t)dt<\infty$ in $\Omega$ for all $R$, $T>0$. We call $X$ a pathwise strong solution to \eqref{2.sde} if $X(t)=(X_1(t),\ldots,X_n(t))$ for $t\ge 0$ is a $\mathbb{P}$-a.s.\ continuous $\R^n$-valued ${\mathcal F}_t$-adapted process such that $\mathbb{P}$-a.s. for all $t\ge 0$,
\begin{equation}\label{2sde2}
\pi_i X_i(t) = \pi_i X_{0i} + \int_0^t a_i(X(s),s)ds + \int_0^t \sum_{j=1}^m b_{ij}(X(s),s)dW_j(s), \quad i=1,\ldots,n.
\end{equation}

\begin{theorem}[Existence of solutions]\label{thm.sde}
Let Assumptions \eqref{2.ab1}-\eqref{2.ab2} hold and let $X_0:\Omega\to\R^n$ be ${\mathcal F}_0$-measurable.
Then there exists a pathwise unique (up to $\mathbb{P}$-indistinguishability) strong solution to \eqref{2.sde}.
\end{theorem}

The proof is the same as in \cite[Theorem 3.1.1]{PrRo07}. The difference from that theorem is the appearance of the constant vector $\pi$ on the left-hand side of \eqref{2.sde}. As the proof in \cite{PrRo07} is based on the Euler method and the vector is constant, this appearance does not change the arguments. We just have to take into account that $\min_{i=1,\ldots,n}\pi_i$ is positive.

\subsection{Tightness}

We recall some definitions and results on the tightness of families of probability measures. Let $E$ be a separable Banach space with norm $\|\cdot\|_E$ and associated Borel $\sigma$-field ${\mathcal B}(E)$.

\begin{definition}[Tightness]
The family $\Lambda$ of probability measures on $(E,{\mathcal B}(E))$ is said to be {\em tight} if and only if for any $\eps>0$, there exists a compact set $K_\eps\subset E$ such that
$$
\mu(K_\eps)\ge 1-\eps\quad\mbox{for all }\mu\in\Lambda.
$$
\end{definition}

The theorem of Skorokhod allows for the representation of the limit measure of a weakly convergent sequence of probability measures on a metric space as the law of a pointwise convergent sequence of random variables defined on a common probability space. Since our space $Z_T$, defined in \eqref{3.Z}, is not a metric space, we use Jakubowski's generalization of the Skorokhod Theorem, in the form given in \cite[Theorem C.1]{BrOn10} (see the original theorem in \cite{Jak97}). This version is valid for topological spaces.

\begin{theorem}[Skorokhod-Jakubowski]\label{thm.skoro}
Let $Z$ be a topological space such that there exists a sequence $(f_m)_{m\in\N}$ of continuous functions $f_m:Z\to\R$ that separate points of $Z$. Let $S$ be the $\sigma$-algebra generated by $(f_m)_{m\in\N}$. Then
\begin{enumerate}
\item Every compact subset of $Z$ is metrizable.
\item If $(\mu_m)_{m\in\N}$ is a tight sequence of probability measures on $(Z,S)$, then there exists a subsequence $(\mu_{m_k})_{k\in\N}$, a probability space $(\widetilde\Omega,\widetilde{\mathcal F},\widetilde\mathbb{P})$, and $Z$-valued Borel measurable random variables $\xi_k$ and $\xi$ such that {\rm (i)} $\mu_{m_k}$ is the law of $\xi_k$ and {\rm (ii)} $\xi_k\to\xi$ almost surely on $\widetilde\Omega$.
\end{enumerate}
\end{theorem}

The Aldous condition is mentioned in the tightness criterion of Theorem \ref{thm.tight}, and therefore we recall its definition.

\begin{definition}[Aldous condition]\label{def.ald}
Let $(X_n)_{n\in\N}$ be a sequence of stochastic processes on a complete separable metric space $S$, defined on the probability space $(\Omega,{\mathcal F},\mathbb{P})$ with filtration $\mathbb{F}=({\mathcal F}_{t})_{t\in[0,T]}$. We say that $(X_n)_{n\in\N}$ satisfies the {\em Aldous condition} if and only if for any $\eps>0$, there exists $\eta>0$ such that for any $\delta>0$ and any sequence $(\tau_n)_{n\in\N}$ of $\mathbb{F}$-stopping times with $\tau_n\le T$, it holds that
$$
\sup_{n\in\N}\sup_{0<\theta<\delta}\mathbb{P}\big\{d\big(X_n(\tau_n+\theta), X_n(\tau_n)\big)\ge\eta\big\} \le \eps.
$$
\end{definition}

\subsection{Vitali's convergence theorem}

We use the following version of Vitali's convergence theorem (which can be seen as a special version of the theorem of De la Vall\'ee-Poussin).

\begin{theorem}[Vitali]
Let $(a_N)$ be a sequence of integrable functions on some probability space $(\Omega,{\mathcal B}(\Omega),\mathbb{P})$ such that $a_N\to a$ a.e.\ as $N\to\infty$ (or $a_N\to a$ in measure) for some integrable function $a$ and there exist $r>1$ and a constant $C>0$ such that $\mathbb{E}|a_N|^r\le C$ for all $N\in\N$. Then $\mathbb{E}|a_N-a|\to 0$ as $N\to\infty$; in particular, $\mathbb{E}|a_N|\to\mathbb{E}|a|$.
\end{theorem}
\end{appendix}
\section{Introduction}

Throughout this paper all algebras and vector spaces will be over the complex field $\mathbb{C}$. Let $A$ be an algebra and $M$ be an $A$-bimodule. Recall that a linear map $D:A\rightarrow M$ is called a \textit{derivation} if $D(ab) = aD(b)+D(a)b $ for all $a, b \in A$. Each map of the form $a \mapsto am-ma$, where $m \in M$, is a derivation, which will be called an \textit{inner derivation}. Also, $D$ is called an \textit{anti-derivation} if $D(ab) =bD(a)+D(b)a$ for all $a,b \in A$. There have been a number of papers concerning the study of conditions under which mappings of (Banach) algebras can be completely determined by their action on some sets of points. We refer the reader to \cite{al1, al10, bre, chb, gh1, gh2} for a full account of the topic and a list of references. In the case of derivations, the following condition has attracted much attention:
\[a, b \in A, \,\, ab=z\Rightarrow \Delta(ab) = a\Delta(b)+\Delta(a)b \quad (\blacklozenge),\]
where $z\in A$ is a fixed point and $\Delta:A\rightarrow M$ is a linear (additive) map. Bre$\check{\textrm{s}}$ar \cite{bre} studied derivations of rings with idempotents in this direction with $z=0$. It was shown in \cite{bre} that if $A$ is a prime ring containing a nontrivial idempotent and $\Delta:A\rightarrow A$ is an additive map satisfying $(\blacklozenge)$ with $z=0$, then $\Delta(a)=D(a)+ca$ ($a\in A$), where $D$ is an additive derivation and $c$ is a central element of $A$. Note that the nest algebras are important operator algebras that are not prime. Jing et al. \cite{jin} showed that, for the cases of nest algebras on a Hilbert space and standard operator algebras on a Banach space, the set of linear maps satisfying $(\blacklozenge)$ with $z=0$ and $\Delta(I)=0$ coincides with the set of inner derivations.
Since then, many studies have been done in this direction and various results have been obtained; see, for instance, \cite{al1, al10, bre, chb, che, gh1, gh12} and the references therein.
\par
The other direction is to study linear (additive) maps that behave like homomorphisms of (Banach) algebras when acting on special products. In particular, an interesting question is to characterize linear maps of group algebras and other Banach algebras associated with a locally compact group that behave like homomorphisms at elements with zero products or at orthogonal elements. This question has been extensively studied \cite{al1, al10, al11, al111, al2, fo, fo2, la, lin, mo}.
\par
Motivated by this, in this paper we consider the problem of characterizing continuous linear maps on group algebras behaving like derivations or anti-derivations at orthogonal elements for several types of orthogonality conditions. In particular, we consider the following conditions on a continuous linear map $\Delta$ from a group algebra $L^{1}(G)$ into the measure convolution algebra $M(G)$, where $G$ is a locally compact group:
\begin{enumerate}
\item[(i)] \textit{Derivations through one-sided orthogonality conditions}
\begin{enumerate}
\item[(D1)] \[ f*g=0 \Longrightarrow f*\Delta(g)+\Delta(f)*g=0\quad (f,g \in L^{1}(G));\]
\item[(D2)] \[f*g^{\star}=0 \Longrightarrow f*\Delta(g)^{\star}+\Delta(f)*g^{\star}=0;\]
\item[(D3)] \[f^{\star}*g=0 \Longrightarrow f^{\star}*\Delta(g)+\Delta(f)^{\star}*g=0;\]
\end{enumerate}
\item[(ii)] \textit{Anti-derivations through one-sided orthogonality conditions}
\begin{enumerate}
\item[(D4)] \[ f*g=0 \Longrightarrow g*\Delta(f)+\Delta(g)*f=0;\]
\item[(D5)] \[ f*g^{\star}=0 \Longrightarrow \Delta(g)^{\star}*f+g^{\star}*\Delta(f)=0;\]
\item[(D6)] \[f^{\star}*g=0 \Longrightarrow \Delta(g)*f^{\star}+g*\Delta(f)^{\star}=0;\]
\end{enumerate}
\item[(iii)] \textit{Derivations through two-sided orthogonality conditions}
\begin{enumerate}
\item[(D7)] \[ f*g=g*f=0
\Longrightarrow f*\Delta(g)+\Delta(f)*g=g*\Delta(f)+\Delta(g)*f=0;\]
\item[(D8)] \[ f*g^{\star}=g^{\star}*f=0 \Longrightarrow f*\Delta(g)^{\star}+\Delta(f)*g^{\star}=\Delta(g)^{\star}*f+g^{\star}*\Delta(f)=0;\]
\end{enumerate}
\end{enumerate}
where $f,g \in L^{1}(G)$, the convolution product is denoted by $*$ and the involution is denoted by $\star$. It is worth noting that the conditions $D1$ and $D4$, $D2$ and $D3$, $D5$ and $D6$, $D7$ and $D8$ agree in the case where the group $G$ is abelian.
\par
Our purpose is to investigate whether the above conditions characterize derivations ($\star$-derivations) or anti-derivations ($\star$-anti-derivations). This article is organized as follows. In Section 2 some preliminaries are given. Section 3 is concerned with characterizing derivations and anti-derivations through one-sided orthogonality conditions (conditions $D1$-$D6$). In the last section, continuous linear maps of group algebras of a $SIN$ group satisfying conditions $D7$ and $D8$ (derivations through two-sided orthogonality conditions) are considered.
\par
We note that the centre of an algebra $A$ is denoted by $\mathcal{Z}(A)$.

\section{Preliminaries}

Let $G$ be a locally compact group. The \textit{group algebra} and the \textit{measure convolution algebra} of $G$ are denoted by $L^{1}(G)$ and $M(G)$, respectively. The convolution product is denoted by $*$ and the involution is denoted by $\star$. The identity of $M(G)$ is $\delta_{e}$, the point mass at the identity $e$ of $G$. The measure algebra $M(G)$ is a unital Banach $\star$-algebra, and $L^{1}(G)$ is a closed ideal in $M(G)$, identified with the subspace of $M(G)$ consisting of measures which are absolutely continuous with respect to the Haar measure. If a net $(\lambda_{i})_{i\in I}$ in $M(G)$ converges to $\lambda \in M(G)$ with respect to the weak$^{*}$ topology, we write $\lambda_{i}\xlongrightarrow{w^{*}} \lambda$.
Every group algebra $L^{1}(G)$ has a bounded approximate identity. The group $G$ is a $SIN$ \textit{group} if it has a base of compact neighborhoods of the identity that is invariant under all inner automorphisms. If $G$ is a $SIN$ group, we write $G\in [SIN]$. It is known that the group algebra $L^{1}(G)$ has a bounded approximate identity consisting of functions in $\mathcal{Z}(L^{1}(G))$ if and only if $G\in [SIN]$. We refer the reader to \cite [Section 3.3]{da} for the essential information about group algebras and measure algebras, and to \cite [Sections 12.5 and 12.6]{pa} for a discussion of the class of $SIN$ groups. \par In order to prove our results we need the following auxiliary facts. \begin{lem}(\cite [Lemma 1.1] {al2}).\label{bi} Let $G$ be a locally compact group, and let $\phi: L^{1}(G) \times L^{1}(G)\rightarrow X$ be a continuous bilinear map, where $X$ is a Banach space. \begin{enumerate} \item[(i)] Suppose that \[ f,g \in L^{1}(G), \, \, f*g=0\Longrightarrow \phi(f,g)=0.\] Then \[ \phi(f*g,h)=\phi(f,g*h),\] for all $f,g,h\in L^{1}(G)$. \item[(ii)] Suppose that \[ f,g \in L^{1}(G), \, \, f*g=g*f=0\Longrightarrow \phi(f,g)=0.\] Then \[ \phi(f*g,h)=\phi(f,g*h),\] for all $f,h\in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$, and \[ \phi(f*g,h*k)-\phi(f,g*h*k)+\phi(k*f,g*h)-\phi(k*f*g,h)=0,\] for all $f,g,h,k\in L^{1}(G)$. \end{enumerate} \end{lem} \begin{lem}(\cite [Lemma 1.3] {al2}).\label{zz} Let $G$ be a locally compact group, and let $\mu\in M(G)$. \begin{enumerate} \item[(i)] Suppose that $\mu*L^{1}(G)=\lbrace0\rbrace$. Then $\mu=0$. \item[(ii)] Suppose that $\mu*f=f*\mu$ for each $f\in L^{1}(G)$. Then $\mu \in \mathcal{Z}(M(G))$. \end{enumerate} \end{lem} Note that by \cite [Theorem 6.3]{ke} the convolution product in $M(G)$ is separately continuous with respect to the weak$^{*}$ topology, i.e., $\nu\mapsto \mu*\nu$ is $w^{*}$-continuous for each $\mu\in M(G)$ and $\mu\mapsto \mu*\nu$ is $w^{*}$-continuous for each $\nu\in M(G)$.
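As a simple consistency check on Lemma \ref{bi}-$(i)$ (this illustration is ours and is not needed in the sequel), suppose that $D:L^{1}(G)\rightarrow M(G)$ is a continuous derivation and put $\phi(f,g)=f*D(g)+D(f)*g=D(f*g)$. Then $f*g=0$ clearly implies $\phi(f,g)=0$, and the conclusion of the lemma holds trivially, since
\[ \phi(f*g,h)=D(f*g*h)=\phi(f,g*h), \]
for all $f,g,h\in L^{1}(G)$. The bilinear maps $\phi$ appearing in the proofs below are variants of this model case, with $\Delta$ in place of $D$.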
\begin{rem}\label{we} Let $(u_{i})_{i\in I}$ be a bounded approximate identity of $L^{1}(G)$. Since $(u_{i})_{i\in I}$ is bounded, we can assume that it converges to some $\mu \in M(G)$ with respect to the weak$^{*}$ topology. So by the separate $w^{*}$-continuity of the convolution product in $M(G)$ we have $u_{i}*f\xlongrightarrow{w^{*}} \mu* f$ for all $f\in L^{1}(G)$. On the other hand, since $(u_{i})_{i\in I}$ is an approximate identity, for each $f\in L^{1}(G)$ we get $u_{i}*f\xlongrightarrow{w^{*}} f$ in $M(G)$. So $(\mu-\delta_{e})*L^{1}(G)=\lbrace 0 \rbrace $ and by Lemma \ref{zz}-$(i)$, it follows that $\mu=\delta_{e}$. Therefore we can assume that the group algebra $L^{1}(G)$ has a bounded approximate identity such that $u_{i}\xlongrightarrow{w^{*}} \delta_{e}$ in $M(G)$. \end{rem} Let $D:L^{1}(G)\rightarrow M(G)$ be a map. We say that $D$ is a $\star$-map whenever $D(f^{\star})=D(f)^{\star}$ for all $f\in L^{1}(G)$. \begin{rem}\label{der} Let $D:L^{1}(G)\rightarrow M(G)$ be a continuous derivation. According to \cite{lo} (the derivation problem), there exists $\mu\in M(G)$ such that $D(f)=f*\mu-\mu*f$ for any $f\in L^{1}(G)$. If $D$ is a continuous $\star$-derivation, then $D(f^{\star})=D(f)^{\star}$ and hence $f^{\star}*\mu-\mu*f^{\star}=\mu^{\star}*f^{\star}-f^{\star}*\mu^{\star}$ for all $f\in L^{1}(G)$. So by Lemma \ref{zz}-$(ii)$ we have $Re\mu=\dfrac{1}{2}(\mu+\mu^{\star})\in \mathcal{Z}(M(G))$. Conversely, for $\mu\in M(G)$ with $Re\mu\in \mathcal{Z}(M(G))$, the map $D:L^{1}(G)\rightarrow M(G)$ defined by $D(f)=f*\mu-\mu*f$ is a continuous $\star$-derivation. \end{rem} Let $A$ be an algebra. Recall that a linear map $D:A\rightarrow A$ is said to be a \textit{Jordan derivation} if $D(a^{2})=aD(a)+D(a)a$ for all $a\in A$. Clearly, each derivation is a Jordan derivation; the converse is, in general, not true. Sinclair \cite{sin} showed that a continuous Jordan derivation on a semisimple Banach algebra is a derivation.
Since $L^{1}(G)$ is a semisimple Banach algebra, it follows that any continuous Jordan derivation $D:L^{1}(G)\rightarrow L^{1}(G)$ is a derivation. \section{Derivations and anti-derivations through one-sided orthogonality conditions} In this section we consider a continuous linear map $\Delta:L^{1}(G) \rightarrow M(G)$ behaving like a derivation or an anti-derivation under one-sided orthogonality conditions. First, we characterize derivations through one-sided orthogonality conditions. \begin{thm}\label{o1} Let $G$ be a locally compact group, and let $\Delta:L^{1}(G) \rightarrow M(G)$ be a continuous linear map. \begin{enumerate} \item[(i)] Assume that \begin{equation*} f*g=0 \Longrightarrow f*\Delta(g)+\Delta(f)*g=0\quad (f,g \in L^{1}(G)). \end{equation*} Then there are $\mu, \nu\in M(G)$ such that $\Delta(f)=f*\mu-\nu*f$ for all $f \in L^{1}(G)$ and $\mu-\nu\in \mathcal{Z}(M(G))$. \item[(ii)] Assume that \begin{equation*} f*g^{\star}=0 \Longrightarrow f*\Delta(g)^{\star}+\Delta(f)*g^{\star}=0\quad (f,g \in L^{1}(G)). \end{equation*} Then there are $\mu, \nu\in M(G)$ such that $\Delta(f)=f*\mu-\nu*f$ for all $f \in L^{1}(G)$ and $Re\mu\in \mathcal{Z}(M(G))$. \item[(iii)] Assume that \begin{equation*} f^{\star}*g=0 \Longrightarrow f^{\star}*\Delta(g)+\Delta(f)^{\star}*g=0\quad (f,g \in L^{1}(G)). \end{equation*} Then there are $\mu, \nu\in M(G)$ such that $\Delta(f)=f*\nu-\mu*f$ for all $f \in L^{1}(G)$ and $Re\mu\in \mathcal{Z}(M(G))$. \end{enumerate} \end{thm} \begin{proof} $(i)$ By \cite [Theorem 4.6]{al1} and Lemma \ref{zz}-$(i)$, there are a continuous derivation $D:L^{1}(G) \rightarrow M(G)$ and a measure $\xi \in \mathcal{Z}(M(G))$ such that $\Delta(f)=D(f)+\xi*f$ for all $f \in L^{1}(G)$. By the derivation problem, $D(f)=f*\mu-\mu*f$ for all $f\in L^{1}(G)$, for some $\mu\in M(G)$. Setting $\nu=\mu-\xi$, we get $\Delta(f)=f*\mu-\nu*f$ for all $f \in L^{1}(G)$ and $\mu-\nu\in \mathcal{Z}(M(G))$.
\par $(ii)$ Suppose that $(u_i)_{i\in I}$ is a bounded approximate identity of $L^{1}(G)$ such that $u_{i}\xlongrightarrow{w^{*}} \delta_{e}$, where $\delta_{e}$ is the identity of $M(G)$. Since the net $(\Delta(u_{i}))_{i\in I}$ is bounded, we can assume that it converges to some $\xi \in M(G)$ with respect to the weak$^{*}$ topology. Define $D:L^{1}(G)\rightarrow M(G)$ by $D(f)=\Delta(f)-\xi*f$. Then $D$ is a continuous linear map which satisfies \begin{equation}\label{1o4} f*g^{\star}=0 \Longrightarrow f*D(g)^{\star}+D(f)*g^{\star}=0\quad (f,g \in L^{1}(G)), \end{equation} and $D(u_{i})\xlongrightarrow{w^{*}} 0$. We will show that $D$ is a $\star$-derivation. To this end, consider the continuous bilinear map $\phi:L^{1}(G) \times L^{1}(G) \rightarrow M(G)$ given by $\phi(f,g)=f*D(g^{\star})^{\star}+D(f)*g$. If $f,g\in L^{1}(G)$ are such that $f*g=0$, then $f*(g^{\star})^{\star}=0$ and \eqref{1o4} gives $\phi(f,g)=0$. So by Lemma \ref{bi}-$(i)$, we get $ \phi (f*g,h)=\phi (f,g*h)$ for all $f,g,h \in L^{1}(G)$. Therefore \begin{equation}\label{1o5} f*g*D(h^{\star})^{\star}+D(f*g)*h=f*D(h^{\star}*g^{\star})^{\star}+D(f)*g*h, \end{equation} for all $f,g,h \in L^{1}(G)$. On account of \eqref{1o5}, for all $g,h \in L^{1}(G)$ we have \[u_{i}*g*D(h^{\star})^{\star}+D(u_{i}*g)*h=u_{i}*D(h^{\star}*g^{\star})^{\star}+D(u_{i})*g*h.\] From the continuity of $D$, the left-hand side $u_{i}*g*D(h^{\star})^{\star}+D(u_{i}*g)*h$ converges to $g*D(h^{\star})^{\star}+D(g)*h$ with respect to the norm topology. On the other hand, from the separate $w^{*}$-continuity of the convolution product in $M(G)$ and $D(u_{i})\xlongrightarrow{w^{*}} 0$ it follows that $u_{i}*D(h^{\star}*g^{\star})^{\star}+D(u_{i})*g*h \xlongrightarrow{w^{*}}D(h^{\star}*g^{\star})^{\star}$. Hence $g*D(h^{\star})^{\star}+D(g)*h=D(h^{\star}*g^{\star})^{\star}$ for all $g,h\in L^{1}(G)$; applying the involution and replacing $h^{\star}$ by $f$, we obtain \begin{equation}\label{1o6} D(f*g^{\star})=D(f)*g^{\star}+f*D(g)^{\star}, \end{equation} for all $f,g \in L^{1}(G)$.
Now letting $f=u_{i}$ in \eqref{1o6}, we obtain \[D(u_{i}*g^{\star})=D(u_{i})*g^{\star}+u_{i}*D(g)^{\star},\] for all $g\in L^{1}(G)$. By the continuity of $D$, the fact that $D(u_{i})\xlongrightarrow{w^{*}} 0$, and similar arguments as above, it follows that $D(g^{\star})=D(g)^{\star}$ for all $g\in L^{1}(G)$. Thus $D$ is a $\star$-derivation and by Remark \ref{der}, there is a $\mu\in M(G)$ with $Re\mu\in \mathcal{Z}(M(G))$ such that $D(f)=f*\mu-\mu*f$ for all $f\in L^{1}(G)$. Setting $\nu=\mu-\xi$ and using the definition of $D$, we arrive at $\Delta(f)=f*\mu-\nu*f$ for all $f \in L^{1}(G)$, where $Re\mu\in \mathcal{Z}(M(G))$. \par $(iii)$ Consider the map $D:L^{1}(G)\rightarrow M(G)$ defined by $D(f)=\Delta(f^{\star})^{\star}$. It is easily seen that the map $D$ satisfies \[ f*g^{\star}=0 \Longrightarrow f*D(g)^{\star}+D(f)*g^{\star}=0\quad (f,g \in L^{1}(G)).\] By $(ii)$, there exist $\mu_{1}, \nu_{1}\in M(G)$ such that $D(f)=f*\mu_{1}-\nu_{1}*f$ for all $f \in L^{1}(G)$ and $Re\mu_{1}\in \mathcal{Z}(M(G))$. Then $\Delta(f)=f*\nu-\mu*f$ for all $f\in L^{1}(G)$, where $\nu=-\nu_{1}^{\star}$, $\mu=-\mu_{1}^{\star}$ and $Re\mu\in \mathcal{Z}(M(G))$. \end{proof} In the next theorem we characterize anti-derivations through one-sided orthogonality conditions. \begin{thm}\label{anti} Let $G$ be a locally compact group, and let $\Delta:L^{1}(G) \rightarrow M(G)$ be a continuous linear map. \begin{enumerate} \item[(i)] Assume that \[ f*g=0 \Longrightarrow g*\Delta(f)+\Delta(g)*f=0\quad (f,g \in L^{1}(G)).\] Then there are measures $\mu,\nu \in M(G)$ such that $\Delta(f)=f*\mu-\nu*f$, where $\mu-\nu\in \mathcal{Z}(M(G))$ and \[ [[f,g],\mu]+2 [f,g]*(\mu-\nu)=0, \] for all $f,g\in L^{1}(G)$. \item[(ii)] Assume that \begin{equation*} f*g^{\star}=0 \Longrightarrow \Delta(g)^{\star}*f+g^{\star}*\Delta(f)=0\quad (f,g \in L^{1}(G)).
\end{equation*} Then there are $\mu, \nu\in M(G)$ such that $\Delta(f)=f*\nu-\mu*f$, where $Re\mu\in \mathcal{Z}(M(G))$ and \[ [[f,g],\mu]+(\nu-\mu)^{\star}*[f,g]+[f,g]*(\nu-\mu)=0,\] for all $f,g \in L^{1}(G)$. \item[(iii)] Assume that \begin{equation*} f^{\star}*g=0 \Longrightarrow \Delta(g)*f^{\star}+g*\Delta(f)^{\star}=0\quad (f,g \in L^{1}(G)). \end{equation*} Then there are $\mu, \nu\in M(G)$ such that $\Delta(f)=f*\mu-\nu*f$, where $Re\mu\in \mathcal{Z}(M(G))$ and \[ [[f,g],\mu]+[f,g]*(\mu-\nu)^{\star}+(\mu-\nu)*[f,g]=0,\] for all $f,g \in L^{1}(G)$. \end{enumerate} \end{thm} \begin{proof} Suppose that $(u_i)_{i\in I}$ is a bounded approximate identity of $L^{1}(G)$ such that $u_{i}\xlongrightarrow{w^{*}}\delta_{e}$, where $\delta_{e}$ is the identity of $M(G)$. \par $(i)$ Define a continuous bilinear map $\phi:L^{1}(G) \times L^{1}(G) \rightarrow M(G)$ by $\phi(f,g)=g*\Delta(f)+\Delta(g)*f$. Then $\phi (f,g) = 0$ for all $f,g \in L^{1}(G)$ with $f*g=0$. By applying Lemma \ref{bi}-$(i)$, we obtain $ \phi (f*g,h)=\phi (f,g*h)$ for all $f,g,h \in L^{1}(G)$. So \begin{equation}\label{anti1} h*\Delta(f*g)+\Delta(h)*f*g= g*h*\Delta(f)+\Delta(g*h)*f, \end{equation} for all $f,g,h \in L^{1}(G)$. Since the net $(\Delta(u_{i}))_{i\in I}$ is bounded, we can assume that it converges to some $\xi \in M(G)$ with respect to the weak$^{*}$ topology. On account of \eqref{anti1}, for all $f,g \in L^{1}(G)$ we have \[ u_{i}*\Delta(f*g)+\Delta(u_{i})*f*g= g*u_{i}*\Delta(f)+\Delta(g*u_{i})*f.\] From the continuity of $\Delta$, the right-hand side $g*u_{i}*\Delta(f)+\Delta(g*u_{i})*f$ converges to $g*\Delta(f)+\Delta(g)*f$ with respect to the norm topology. On the other hand, by the separate $w^{*}$-continuity of the convolution product in $M(G)$, it follows that $ u_{i}*\Delta(f*g)+\Delta(u_{i})*f*g$ converges to $\Delta(f*g)+\xi*f*g$ with respect to the weak$^{*}$ topology. Hence \begin{equation}\label{anti2} \Delta(f*g)=g*\Delta(f)+\Delta(g)*f-\xi*f*g \end{equation} for all $f,g \in L^{1}(G)$.
Now letting $f=u_{i}$ in \eqref{anti1}, we obtain \[ g*h*\Delta(u_{i})+\Delta(g*h)*u_{i}=h*\Delta(u_{i}*g)+\Delta(h)*u_{i}*g.\] By this identity and similar arguments as above it follows that \begin{equation}\label{anti3} \Delta(f*g)=g*\Delta(f)+\Delta(g)*f-f*g*\xi \end{equation} for all $f,g \in L^{1}(G)$. Hence, comparing \eqref{anti2} and \eqref{anti3}, for each $f,g \in L^{1}(G)$ we find that $\xi*f*g=f*g*\xi$. So by Cohen's factorization theorem and Lemma \ref{zz}-$(ii)$, it follows that $\xi \in \mathcal{Z}(M(G))$. Define $D:L^{1}(G)\rightarrow M(G)$ by $D(f)=\Delta(f)-\xi*f$. By \eqref{anti2} and the fact that $\xi \in \mathcal{Z}(M(G))$, it follows that $D$ is a Jordan derivation. From Cohen's factorization theorem and \eqref{anti2}, we obtain $\Delta(L^{1}(G))\subseteq L^{1}(G)$ and hence $D(L^{1}(G))\subseteq L^{1}(G)$. Since $L^{1}(G)$ is semisimple, it follows that $D$ is a derivation \cite{sin}. So by the derivation problem \cite{lo}, there is a $\mu \in M(G)$ such that $D(f)=f*\mu-\mu*f$ for all $f\in L^{1}(G)$. Letting $\nu=\mu-\xi$, we get $\xi=\mu-\nu \in \mathcal{Z}(M(G))$ and $\Delta(f)=f*\mu-\nu*f$ for all $f\in L^{1}(G)$. \par Now by \eqref{anti3} and the fact that $D$ is a derivation we see that \begin{equation*} \begin{split} \Delta(f*g)+f*g*\xi&=g*\Delta(f)+\Delta(g)*f \\ &= g*D(f)+g*\xi*f+D(g)*f+\xi*g*f\\ &= D(g*f)+2g*f*\xi\\ &= \Delta(g*f)+g*f*\xi, \end{split} \end{equation*} for all $f,g\in L^{1}(G)$. So \[ f*g*\mu-\nu*f*g+f*g*\xi=g*f*\mu-\nu*g*f+g*f*\xi,\] and hence \[ f*g*\mu- \mu *f*g+2f*g*\xi=g*f*\mu-\mu*g*f+2g*f*\xi,\] for all $f,g\in L^{1}(G)$. Therefore \[ [[f,g],\mu]+2 [f,g]*(\mu-\nu)=0, \] for all $f,g\in L^{1}(G)$. \par $(ii)$ To prove this, consider the continuous bilinear map $\phi:L^{1}(G) \times L^{1}(G) \rightarrow M(G)$ given by $\phi(f,g)=\Delta(g^{\star})^{\star}*f+g*\Delta(f)$. If $f,g\in L^{1}(G)$ are such that $f*g=0$, then $f*(g^{\star})^{\star}=0$ and the hypothesis gives $\phi(f,g)=0$.
So by Lemma \ref{bi}-$(i)$, we get $ \phi (f*g,h)=\phi (f,g*h)$ for all $f,g,h \in L^{1}(G)$. Therefore \begin{equation}\label{anti4} \Delta(h^{\star})^{\star}*f*g+h*\Delta(f*g)=\Delta(h^{\star}*g^{\star})^{\star}*f+g*h*\Delta(f), \end{equation} for all $f,g,h \in L^{1}(G)$. Setting $f=u_{i}$ in \eqref{anti4} and using similar methods as in part $(i)$, we get \begin{equation}\label{anti5} \Delta(h^{\star})^{\star}*g+h*\Delta(g)=\Delta(h^{\star}*g^{\star})^{\star}+g*h*\xi, \end{equation} for all $g,h \in L^{1}(G)$, where $\xi\in M(G)$ and $\Delta(u_{i})\xlongrightarrow{w^{*}} \xi$. Applying the involution to \eqref{anti5}, we have \begin{equation}\label{anti55} g^{\star}*\Delta(h^{\star})+\Delta(g)^{\star}*h^{\star}=\Delta(h^{\star}*g^{\star})+\xi^{\star}*h^{\star}*g^{\star},\end{equation} for all $g,h \in L^{1}(G)$. Letting $h^{\star}=u_{i}$, we arrive at \[ g^{\star}*\xi+\Delta(g)^{\star}=\Delta(g^{\star})+\xi^{\star}*g^{\star},\] for all $g \in L^{1}(G)$. Hence \begin{equation}\label{anti6} \Delta(g^{\star})-g^{\star}*\xi=(\Delta(g)-g*\xi)^{\star}, \end{equation} for all $g \in L^{1}(G)$. From \eqref{anti55} we have \begin{equation}\label{anti555} \Delta(f*g)=g*\Delta(f)+\Delta(g^{\star})^{\star}*f-\xi^{\star}*f*g, \end{equation} for all $f,g \in L^{1}(G)$. Define $D:L^{1}(G)\rightarrow M(G)$ by $D(f)=\Delta(f)-f*\xi$. By \eqref{anti6} and \eqref{anti555}, it follows that $D$ is a $\star$-Jordan derivation and $D(L^{1}(G))\subseteq L^{1}(G)$. Hence it is a $\star$-derivation and so there is a $\mu\in M(G)$ with $Re\mu\in \mathcal{Z}(M(G))$ such that $D(f)=f*\mu-\mu*f$ for all $f\in L^{1}(G)$. \par Now by \eqref{anti555} and the fact that $D$ is a $\star$-derivation, we have \[ \Delta(f*g)+\xi^{\star}*f*g=\Delta(g*f)+\xi^{\star}*g*f,\] for all $f,g \in L^{1}(G)$. So \[ f*g*\mu-\mu*f*g+f*g*\xi+\xi^{\star}*f*g=g*f*\mu-\mu*g*f+g*f*\xi+\xi^{\star}*g*f,\] and hence \[ [[f,g],\mu]+\xi^{\star}*[f,g]+[f,g]*\xi=0,\] for all $f,g \in L^{1}(G)$.
By setting $\nu=\mu+\xi$, we have $\Delta(f)=f*\nu-\mu*f$ and \[ [[f,g],\mu]+(\nu-\mu)^{\star}*[f,g]+[f,g]*(\nu-\mu)=0,\] for all $f,g \in L^{1}(G)$, where $Re\mu\in \mathcal{Z}(M(G))$. \par $(iii)$ Consider the map $D:L^{1}(G)\rightarrow M(G)$ defined by $D(f)=\Delta(f^{\star})^{\star}$. It is easily seen that the map $D$ satisfies the condition in part $(ii)$. So, there exist $\mu_{1}, \nu_{1}\in M(G)$ such that $D(f)=f*\nu_{1}-\mu_{1}*f$ for all $f \in L^{1}(G)$ with $Re\mu_{1}\in \mathcal{Z}(M(G))$ and \[ [[f,g],\mu_{1}]+(\nu_{1}-\mu_{1})^{\star}*[f,g]+[f,g]*(\nu_{1}-\mu_{1})=0,\] for all $f,g \in L^{1}(G)$. Then $\Delta(f)=f*\mu-\nu*f$ for all $f\in L^{1}(G)$, where $\nu=-\nu_{1}^{\star}$, $\mu=-\mu_{1}^{\star}$ with $Re\mu\in \mathcal{Z}(M(G))$ and \[ [[f,g],\mu]+[f,g]*(\mu-\nu)^{\star}+(\mu-\nu)*[f,g]=0,\] for all $f,g \in L^{1}(G)$. \end{proof} We note that the converses of Theorems \ref{o1}$(i)$--$(iii)$ and \ref{anti}$(i)$--$(iii)$ hold; this is easily checked. \section{Derivations through two-sided orthogonality conditions} In this section we consider a continuous linear map $\Delta:L^{1}(G) \rightarrow M(G)$ behaving like a derivation under two-sided orthogonality conditions, where $G\in [SIN]$. \begin{thm}\label{two} Let $G$ be a locally compact group with $G\in [SIN]$, and let $\Delta:L^{1}(G) \rightarrow M(G) $ be a continuous linear map. \begin{enumerate} \item[(i)] Assume that \[ f*g=g*f=0 \Longrightarrow f*\Delta(g)+\Delta(f)*g=g*\Delta(f)+\Delta(g)*f=0\quad (f,g \in L^{1}(G)).\] Then there are measures $\mu,\nu \in M(G)$ such that $\Delta(f)=f*\mu-\nu*f$ for all $f\in L^{1}(G)$, where $\mu-\nu\in \mathcal{Z}(M(G))$. \item[(ii)] Assume that \begin{equation*}\label{1o2} f*g^{\star}=g^{\star}*f=0 \Longrightarrow f*\Delta(g)^{\star}+\Delta(f)*g^{\star}=\Delta(g)^{\star}*f+g^{\star}*\Delta(f)=0\quad (f,g \in L^{1}(G)).
\end{equation*} Then there are $\mu, \nu\in M(G)$ such that $\Delta(f)=f*\mu-\nu*f$ for all $f \in L^{1}(G)$, where $Re\mu\in \mathcal{Z}(M(G))$ and $Re(\mu-\nu)\in \mathcal{Z}(M(G))$. \end{enumerate} \end{thm} \begin{proof} Since $G\in [SIN]$, we can suppose that $(u_i)_{i\in I}$ is a central bounded approximate identity of $L^{1}(G)$ such that $u_{i}\xlongrightarrow{w^{*}} \delta_{e}$, where $\delta_{e}$ is the identity of $M(G)$. Also, since the net $(\Delta(u_{i}))_{i\in I}$ is bounded, we can assume that $\Delta(u_{i})\xlongrightarrow{w^{*}} \xi$ for some $\xi \in M(G)$. \par $(i)$ Define a continuous bilinear map $\phi:L^{1}(G) \times L^{1}(G) \rightarrow M(G)$ by $\phi(f,g)=f*\Delta(g)+\Delta(f)*g$. So $\phi(f,g)=0$ whenever $f*g=g*f=0$. Hence by Lemma \ref{bi}-$(ii)$, we get $ \phi (f*g,h)=\phi (f,g*h)$ for all $f,h \in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$. Therefore \begin{equation}\label{tw1} f*g*\Delta(h)+\Delta(f*g)*h=f*\Delta(g*h)+\Delta(f)*g*h, \end{equation} for all $f,h \in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$. Taking $f=u_{i}$ in \eqref{tw1}, we get \[u_{i}*g*\Delta(h)+\Delta(u_{i}*g)*h=u_{i}*\Delta(g*h)+\Delta(u_{i})*g*h,\] for all $h \in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$. From the continuity of $\Delta$, the left-hand side $u_{i}*g*\Delta(h)+\Delta(u_{i}*g)*h$ converges to $g*\Delta(h)+\Delta(g)*h$ with respect to the norm topology. On the other hand, from the separate $w^{*}$-continuity of the convolution product in $M(G)$ and $\Delta(u_{i})\xlongrightarrow{w^{*}} \xi$, it follows that $u_{i}*\Delta(g*h)+\Delta(u_{i})*g*h \xlongrightarrow{w^{*}} \Delta(g*h)+\xi*g*h$. Hence \begin{equation}\label{tw2} \Delta(g*h)+\xi*g*h=g*\Delta(h)+\Delta(g)*h, \end{equation} for all $h \in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$. Now letting $h=u_{i}$ in \eqref{tw2} and arguing as above, we get $\xi*g=g*\xi$ for all $g\in L^{1}(G)$. From Lemma \ref{zz}-$(ii)$, it follows that $\xi\in \mathcal{Z}(M(G))$. Define $D:L^{1}(G)\rightarrow M(G)$ by $D(f)=\Delta(f)-\xi*f$.
Then $D$ is a continuous linear map which satisfies \begin{equation}\label{tw3} f*g=g*f=0 \Longrightarrow f*D(g)+D(f)*g=g*D(f)+D(g)*f=0\quad (f,g \in L^{1}(G)), \end{equation} and $D(u_{i})\xlongrightarrow{w^{*}} 0$. We will show that $D$ is a derivation. To this end, consider the continuous bilinear map $\phi:L^{1}(G) \times L^{1}(G) \rightarrow M(G)$ given by $\phi(f,g)=f*D(g)+D(f)*g$. If $f,g\in L^{1}(G)$ are such that $f*g=g*f=0$, then \eqref{tw3} gives $\phi(f,g)=0$. So by Lemma \ref{bi}-$(ii)$, we get \[\phi (f*g,h*k)-\phi(f,g*h*k)+\phi (k*f,g*h)-\phi(k*f*g,h)=0,\] for all $f,g,h,k \in L^{1}(G)$. Therefore \begin{equation}\label{tw4} \begin{split} &f*g*D(h*k)+D(f*g)*h*k-f*D(g*h*k)-D(f)*g*h*k+\\ & k*f*D(g*h)+D(k*f)*g*h-k*f*g*D(h)-D(k*f*g)*h=0, \end{split} \end{equation} for all $f,g,h,k \in L^{1}(G)$. Taking $f=u_{i}$ in \eqref{tw4}, since $D(u_{i})\xlongrightarrow{w^{*}} 0$, it follows that \begin{equation}\label{tw5} g*D(h*k)+D(g)*h*k-D(g*h*k)+k*D(g*h)+D(k)*g*h-k*g*D(h)-D(k*g)*h=0, \end{equation} for all $g,h,k \in L^{1}(G)$. Now letting $h=u_{i}$ in \eqref{tw5}, we get \begin{equation}\label{tw6} g*D(k)+D(g)*k-D(g*k)+k*D(g)+D(k)*g-D(k*g)=0, \end{equation} for all $g,k \in L^{1}(G)$. So $D$ is a Jordan derivation. An analogue of the Cohen factorization theorem on Banach algebras holds for Jordan-Banach algebras (see \cite{ak1, ak2}). This result implies that for any $h\in L^{1}(G)$ there exist $f,g \in L^{1}(G)$ such that $h=f*g*f$. Since $D$ is a Jordan derivation, it follows that \[ D(h)=D(f*g*f)=D(f)*g*f+f*D(g)*f+f*g*D(f),\] for all $h\in L^{1}(G)$, where $h=f*g*f$. Thus $D(L^{1}(G))\subseteq L^{1}(G)$ and hence $D$ is a derivation. So by the derivation problem there is a $\mu\in M(G)$ such that $D(f)=f*\mu-\mu*f$ for all $f\in L^{1}(G)$. Setting $\nu=\mu-\xi$, we obtain $\Delta(f)=f*\mu-\nu*f$ for all $f \in L^{1}(G)$ and $\mu-\nu\in \mathcal{Z}(M(G))$.
\par $(ii)$ Define continuous bilinear maps $\phi, \psi:L^{1}(G) \times L^{1}(G) \rightarrow M(G)$ by \[\phi(f,g)=f*\Delta(g^{\star})^{\star}+\Delta(f)*g \,\, \text{and} \,\, \psi(f,g)=\Delta(g^{\star})^{\star}*f+g*\Delta(f),\] for all $f,g \in L^{1}(G)$. It is straightforward to check that $\phi(f,g)=0$ and $\psi(f,g)=0$ whenever $f*g=g*f=0$. Thus by Lemma \ref{bi}-$(ii)$, we get \begin{equation}\label{tw7} f*g*\Delta(h^{\star})^{\star}+\Delta(f*g)*h=f*\Delta(h^{\star}*g^{\star})^{\star}+\Delta(f)*g*h, \end{equation} and \begin{equation}\label{tw8} \Delta(h^{\star})^{\star}*f*g+h*\Delta(f*g)=\Delta(h^{\star}*g^{\star})^{\star}*f+g*h*\Delta(f), \end{equation} for all $f,h \in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$. Letting $f=u_{i}$ in \eqref{tw7} and \eqref{tw8}, and applying $u_{i}\xlongrightarrow{w^{*}} \delta_{e}$ and $\Delta(u_{i})\xlongrightarrow{w^{*}} \xi$, we obtain \begin{equation*} g*\Delta(h^{\star})^{\star}+\Delta(g)*h=\Delta(h^{\star}*g^{\star})^{\star}+\xi*g*h, \end{equation*} and \begin{equation*} \Delta(h^{\star})^{\star}*g+h*\Delta(g)=\Delta(h^{\star}*g^{\star})^{\star}+g*h*\xi, \end{equation*} for all $h \in \mathcal{Z}(L^{1}(G))$ and $g\in L^{1}(G)$. We now apply the involution and set $h^{\star}=u_{i}$ to see that \begin{equation}\label{tw9} \xi*g^{\star}+\Delta(g)^{\star}=\Delta(g^{\star})+g^{\star}*\xi^{\star}, \end{equation} and \begin{equation}\label{tw10} g^{\star}*\xi+\Delta(g)^{\star}=\Delta(g^{\star})+\xi^{\star}*g^{\star}, \end{equation} for all $g\in L^{1}(G)$. From equations \eqref{tw9} and \eqref{tw10}, we get $(\xi+\xi^{\star})*g^{\star}=g^{\star}*(\xi^{\star}+\xi)$ for all $g\in L^{1}(G)$. Hence $Re\xi \in \mathcal{Z}(M(G))$. \par Define $D:L^{1}(G)\rightarrow M(G)$ by $D(f)=\Delta(f)-\xi*f$. Then $D$ is a continuous linear map and by \eqref{tw9}, we have $D(f^{\star})=D(f)^{\star}$ for all $f\in L^{1}(G)$. Hence $D$ is a $\star$-map.
If $f*g=g*f=0$, then by the hypothesis, the definition of $D$, and the facts that $D$ is a $\star$-map and $Re\xi \in \mathcal{Z}(M(G))$, we have \begin{equation*} \begin{split} f*D(g)+D(f)*g&=f*D(g^{\star})^{\star}+D(f)*g\\ &= f*(\Delta(g^{\star})-\xi*g^{\star})^{\star}+(\Delta(f)-\xi*f)*g=0, \end{split} \end{equation*} and \begin{equation*} \begin{split} g*D(f)+D(g)*f&=g*D(f)+D(g^{\star})^{\star}*f\\ &= g*(\Delta(f)-\xi*f)+(\Delta(g^{\star})-\xi*g^{\star})^{\star}*f\\&= -g*\xi*f-g*\xi^{\star}*f=-g*f*(\xi+\xi^{\star})=0. \end{split} \end{equation*} So $D$ satisfies the condition in $(i)$ and hence there are $\mu,\nu_{1} \in M(G)$ such that $D(f)=f*\mu-\nu_{1}*f$ for all $f\in L^{1}(G)$. Since $D(u_{i})\xlongrightarrow{w^{*}} 0$, it follows that $\mu=\nu_{1}$. On the other hand, $D$ is a $\star$-map, so by Remark \ref{der}, $D(f)=f*\mu-\mu*f$ for all $f\in L^{1}(G)$, where $Re\mu\in \mathcal{Z}(M(G))$. Therefore, by letting $\nu=\mu-\xi$ we have $\Delta(f)=f*\mu-\nu*f$ for all $f\in L^{1}(G)$, where $Re\mu\in \mathcal{Z}(M(G))$ and $Re(\mu-\nu)\in \mathcal{Z}(M(G))$. \end{proof} We note that the converse of Theorem \ref{two}-$(i)$ and $(ii)$ holds; this is easily checked. \bibliographystyle{amsplain}
\section*{Acknowledgments}\label{sec:Acknowledgments} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n$\degree$ 740435).}\fi} \section{Introduction} \label{sec:intro} Recommendation systems (RSs hereinafter) have rapidly developed over the past decade. By predicting a user preference for an item, RSs have been successfully applied in a variety of applications. Moreover, the powerful RSs offered by giant e-tailers and e-marketing platforms, such as Amazon and Google, lie at the heart of online commerce and marketing on the web. However, current significant challenges faced by personal assistants (e.g. Cortana, Google Now and Alexa) and mobile applications go way beyond the practice of predicting the satisfaction levels of a user from a set of offered items. Such systems have to generate recommendations that satisfy the needs of both the end users and other parties or stakeholders \cite{burke2017multisided,zheng2017multi}. Consider the following cases: $\bullet$ When Alice drives her car, her personal assistant runs the default navigation application. When she makes a stop at a junction, the personal assistant may show Alice advertisements provided by neighborhood stores, or an update on the stock market status as provided by financial brokers. Each of these pieces of information -- the plain navigation content, the local advertisements and the financial information -- is served by a different content provider. These content providers are all competing over Alice's attention at a given point. The personal assistant is aware of Alice's satisfaction with each content, and needs to select the right content to show at a particular time. $\bullet$ Bob is reading news of the day on his mobile application. The application, aware of Bob's interests, is presenting news deemed most relevant to him.
The news is augmented by advertisements, provided by competing content providers, as well as articles by independent reporters. The mobile application, balancing Bob's taste and the interests of the content providers, determines the mix of content shown to Bob. In these contexts, the RS integrates information from various providers, often sponsored content, which is likely to be relevant to the user. The content providers are \textit{strategic} -- namely, they make decisions based on the way the RS operates, aiming at maximizing their exposure. For instance, to draw Bob's attention, a content provider strategically selects the topic of her news item, aiming at maximizing the exposure to her item. On the one hand, fair treatment of content providers is critical for smooth, efficient use of the system and also for maintaining content provider engagement over time. On the other hand, the strategic behavior of the content providers may lead to instability of the system: a content provider might choose to adjust the content she offers in order to increase the expected number of displays to the users, assuming the others stick to their offered contents. In this paper, we study ways of overcoming this dilemma using canonical concepts in game theory to impose two requirements on the RS: fairness and stability. Fairness is formalized as the requirement of satisfying fairness-related properties, and stability is defined as the existence of a pure Nash equilibrium. Analyzing RSs that satisfy these two requirements is the main goal of this paper. Our first result is that traditional RSs fail to satisfy both of the above requirements. Traditional RSs are complete, in the sense that they always show some content to the user, but it turns out that this completeness property cannot be satisfied simultaneously with the fairness and equilibrium existence requirements. This impossibility result is striking and calls for a search for a fair and stable RS.
To do so, we model the setting as a cooperative game, binding content provider payoffs with user satisfaction. We resort to a core solution concept in cooperative game theory, the Shapley value \cite{shapley1952value}, which is a celebrated mechanism for value distribution in game-theoretic contexts (see, e.g., \cite{myerson1977graphs}). In our work, it is proposed as a tool for recommendations, namely for setting display probabilities. Since the Shapley value is employed in countless settings for fair allocation, it is not surprising that it satisfies our fairness properties. In addition, we prove that the related RS, termed the {\em Shapley mediator}, does satisfy the stability requirement. In particular, we show that the Shapley mediator possesses a potential function \cite{monderer1996potential}, and therefore any better-response learning dynamics converge to an equilibrium (see, e.g., \cite{cohen2017learning,garg2016learning}). Note that this far exceeds our minimal stability requirement from the RS. Implementation in commercial products would require the mediator to be computationally tractable. The mediator interacts with users; hence a fast response is of great importance. In another major result, we show that the Shapley mediator has a computationally efficient implementation. The latter is in contrast to the intractability of the Shapley value in classical game-theoretic contexts \cite{deng1994complexity}. Another essential property of the Shapley mediator is economic efficiency \cite{young1985monotonic}. Unlike cooperative games, where the Shapley value can be characterized as the only solution concept to satisfy properties equivalent to fairness and economic efficiency, in our setting the Shapley mediator is not characterized solely by fairness and economic efficiency. Namely, one can find other simple mediators that satisfy these two properties. 
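As background for the Shapley value and the tractability discussion above, the classical Shapley value of a cooperative game averages each player's marginal contribution over all orders in which the players can join the grand coalition. The following sketch is our illustration of that textbook definition, not the paper's Shapley mediator algorithm; the characteristic function `v` is a hypothetical toy game, and the brute-force enumeration makes the exponential cost of naive computation concrete.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    # Average each player's marginal contribution v(S | {p}) - v(S)
    # over all n! orders in which the players can join the coalition.
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = factorial(len(players))
    return {p: totals[p] / n_orders for p in players}

# Hypothetical symmetric game: a coalition's worth is the square of its size.
v = lambda coalition: len(coalition) ** 2
phi = shapley_values([1, 2, 3], v)
# By symmetry and efficiency, the three players split v({1,2,3}) = 9 equally.
```

The loop visits all $n!$ permutations, which is why closed-form or polynomial-time evaluation, as established for the Shapley mediator in this paper, is a substantive property rather than a given.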
However, we show that the Shapley mediator is the unique mediator to satisfy the fairness, economic efficiency and stability requirements. Importantly, our study stems from a rigorous definition of the minimal requirements from an RS, and so characterizes a unique RS. Interested in understanding the ramifications for user utility, we introduce a rigorous analysis of user utility in (strategic) recommendation systems, and show that the Shapley mediator is not inferior to traditional approaches. {\ifnum\Includeappendix=0{ Due to space constraints, the proofs are deferred to the full version \cite{ben2018game}. }\fi} \subsection{Related work} This work contributes to three interacting topics: fairness in general machine learning, multi-stakeholder RSs and game theory. The topic of fairness is receiving increasing attention in machine learning \cite{bozdag2013bias,chierichetti2017fair,pedreshi2008discrimination,pleiss2017fairness} and data mining \cite{Kamishima2015}. A major line of research is discrimination aware classification \cite{dwork2012fairness,kamiran2010discrimination,kamishima2012fairness,zemel2013learning}, where classification algorithms must maintain high predictive accuracy without discriminating on the basis of a variable representing membership in a protected class, e.g. ethnicity. In the context of RSs, the work of \citet{kamishima2012enhancement,kamishima2014correcting} addresses a different aspect of fairness (or lack thereof): bias towards popular items. The authors propose a collaborative filtering model which takes into account viewpoints given by users, thereby tackling the tendency for popular items to be recommended more frequently, a problem posed in \cite{pariser2011filter}. A related problem is over-specialization, i.e., the tendency to recommend items similar to those already purchased or liked in the past, which is addressed in \cite{adamopoulos2014over}.
\citet{zheng2017multi} surveys multi-stakeholder RSs, and highlights practical applications. Examples include RSs for sharing economies (e.g., AirBnB and Uber), online dating \cite{pizzato2010recon}, and recruiting \cite{yu2011reciprocal}. \citet{burke2017multisided} discusses fairness in multi-stakeholder RSs, and presents a taxonomy of classes of fairness-aware RSs. The author distinguishes between user fairness, content provider fairness and pairwise fairness, and reviews applications for these fairness types. A practical problem concerning fairness in multi-stakeholder RSs is discussed in \cite{modani2017fairness}. In their work, an online platform is used by users who play two roles: customers seeking recommendations and content providers aiming for exposure. They report, based on empirical evidence, that collaborative filtering techniques tend to create rich-gets-richer scenarios, and propose a method for re-ranking scores, in order to improve exposure distribution across the content providers. Note that all the work above considers traditional machine learning tasks that enforce upon the solution some form of fairness, as defined specifically for each task. These works impose additional fairness considerations, but do not account for the possibility that the parties (i.e., users and content providers) will change their behavior as a result of the new mechanism, nor do they examine the game-theoretic aspects imposed by the selection of the RS in a formal manner. To the best of our knowledge, our work is the first to suggest a fully grounded approach to content provider fairness in RSs. Finally, strategic aspects of classical machine learning tasks were also introduced recently \cite{porat2017best,Ben-Porat18}.
The idea that a recommendation algorithm affects content-provider policy, and as a result must be accompanied by a game-theoretic study is key to recent works in search/information retrieval \cite{ben2015probability,raifer2017information}; so far, however, such work has not dealt with the issue of fairness. \section{Problem formulation} \newcommand{v}{v} \newcommand{\text{Shapley mediator}}{\text{Shapley mediator}} \newcommand{\textsc{SM}}{\textsc{SM}} \newcommand{\textsc{TOP}}{\textsc{TOP}} \newcommand{\textsc{BTL}}{\textsc{BTL}} \newcommand{\textsc{DM}}{\textsc{DM}} \newcommand{\textsc{NONE}}{\textsc{NONE}} \newcommand{\textsc{RAND}}{\textsc{RAND}} \newcommand{\algname}{\textsc{SM}} \label{sec:model} \begin{table*}[!t] \caption{Consider an arbitrary game, a fixed strategy profile $\bl X$ and an arbitrary user $u_i$. $\textsc{TOP}$ selects uniformly among the players that satisfy $u_i$ the most. The Bradley-Terry-Luce mediator \cite{bradley1952rank,luce2005individual}, or simply $\textsc{BTL}$, selects player $j$ w.p. proportional to her satisfaction level over the sum of satisfaction levels. $\textsc{NONE}$ displays no item, and $\textsc{RAND}$ selects uniformly among players with a positive satisfaction level. Both $\textsc{TOP}$ and $\textsc{BTL}$ satisfy $\textbf{F}$, but do not satisfy $\textbf{S}$. $\textsc{NONE}$ and $\textsc{RAND}$ satisfy $\textbf{S}$, but do not satisfy $\textbf{F}$. The bottom line refers to the Shapley mediator, $\textsc{SM}$, which is defined and analyzed in Section \ref{sec:ourapproach}. 
In contrast to the other mediators, $\textsc{SM}$ satisfies both $\textbf{F}$ and $\textbf{S}$.} \label{table:mediators} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcll} \toprule Mediator & \thead{Probability Computation \\ $\mathbb{P}\left(\mathcal{M}(\bl X,u_i)= j\right)$ } &Fairness (\textbf{F}) & Stability (\textbf{S}) \\ \midrule \textsc{TOP} &$\frac{\mathbbm{1}_{j\in\argmax_{j'} \sigma_i(X_{j'})}}{\abs{\argmax_{j'} \sigma_i(X_{j'})}}$ & $\surd$ & $\times$ (Theorem \ref{thm:imptwoplayers})\\ \textsc{BTL} & $\frac{\sigma_i(X_j)}{\sum_{j'=1}^N \sigma_i(X_{j'})} $& $\surd$ & $\times$ (Theorem \ref{thm:imptwoplayers})\\ \textsc{NONE} & 0 & $\times$ & $\surd$ \\ \textsc{RAND} & $\frac{\mathbbm{1}_{\sigma_i(X_{j})>0}}{\sum_{j'=1}^N \mathbbm{1}_{\sigma_i(X_{j'})>0}}$ & $\times$ & $\surd$ \\ \midrule \thead[l]{\algname \\ \text{(Section \ref{sec:ourapproach})}} & Equation (\ref{eq:shpleyoneuser})& $\surd$ & $\surd$ \text{(Theorem \ref{existance-pot-thm})}\\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} From here on, our ideas are exemplified in the following motivational scenario: a mobile application (or simply app) provides users with valuable content. A set of players (advertisers) publish their items (advertisements) on the app. When a user enters the app, a mediator (RS/advertising engine) decides whether to display an item to that user or not, and which player's item to display. Note that while we use this motivation for the purpose of exposition, our formal model and results apply to a whole range of RSs with strategic content providers. Formally, the recommendation game is defined as follows: \begin{itemize}[leftmargin=0cm,itemindent=.5cm,labelwidth=\itemindent,labelsep=0cm,align=left] \item A set of users $\mathcal{U}=\{u_1,\dots,u_n \}$, a set of players $[N]=\{1,\dots,N\}$, and a mediator $\mathcal{M}$. \item The set of items (e.g.
possible ad formats/messages to select from) available to player $j$ is denoted by $\mathcal{L}_j$, which we assume to be finite. A {\em strategy} of player $j$ is an item from $\mathcal{L}_j$. \item Each user $u_i$ has a satisfaction function $\sigma_i: \mathcal{L} \rightarrow [0,1]$, where $\mathcal{L} =\bigcup_{j=1}^N \mathcal{L}_j$ is the set of all available items. In general, $\sigma_i(l)$ measures the \textit{satisfaction level} of $u_i$ w.r.t. $l$. \item When triggered by the app, $\mathcal{M}$ decides which item to display, if any. Formally, given the strategy profile $\bl X=(X_1,\dots,X_N)$ and a user $u_i$, $\mathcal{M}(\bl X,u_i)$ is a distribution over $[N]\cup \{\emptyset \}$, where $\emptyset$ symbolizes maintaining the plain content of the app. That is, displaying no item at all. We refer to $\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)$ as the probability that player $j$'s item will be displayed to $u_i$ under the strategy profile $\bl X$. \item Each player gets one monetary unit when her item is displayed to a user. Therefore, the expected payoff of player $j$ under the strategy profile $\bl X$ is $\pi_j(\bl X)=\sum_{i=1}^{n}{\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)}$. \item The {\em social welfare} of the players under the strategy profile $\bl X$ is the expected number of displays, $V(\bl X)= \sum_{j=1}^N \pi_j(\bl X)$. \end{itemize} For ease of notation, we shall sometimes refer to $\sigma_i (\bl X)$ as the maximum satisfaction level of user $u_i$ from the items in $\bl X$, i.e., $\sigma_i(\bl X)=\max_j \sigma_i(X_j)$. We demonstrate our setting with the following example. \begin{example} \label{example:motive} Consider a game with two players and three users. Let $\mathcal{L}_1=\{l_1,l_2\},\mathcal{L}_2=\{l_3\}$ such that the satisfaction levels of the users with respect to the items are \[ \kbordermatrix{ & u_1 & u_2 & u_3 \\ l_1 & 0.1 & 0.9 & 0.2 \\ l_2 & 0.8 & 0.7 & 0.9 \\ l_3 & 0.9 & 0.8 & 0.1 }. 
\] Consider a mediator, denoted $\textsc{TOP}$, that displays to each user the item most satisfying to her taste. For example, $\mathbb{P}(\textsc{TOP}\left((l_1,l_3),u_1\right)=2)=1$, since $\sigma_1(l_3)=0.9 > \sigma_1(l_1)=0.1$. The profile $(l_1,l_3)$ is likely to materialize in realistic scenarios, since the payoff of player 1 under the strategy profile $(l_2,l_3)$ is $\pi_1(l_2,l_3)=1$, while $\pi_1(l_1,l_3)=2$. Notice that from the users' perspective,\footnote{For a formal definition of the user utility, see Subsection \ref{subsec:user utility}.} this profile is not optimal, since $\sum_{i=1}^3\sigma_i(l_1,l_3)=0.9+0.9+0.2=2$, while $\sum_{i=1}^3\sigma_i(l_2,l_3)=2.6$; hence, the users suffer from strategic behavior of the players. \end{example} After defining general recommendation games, we now present a few properties that one may desire from a mediator. First and foremost, a mediator has to be {\em fair}. The following is a minimal set of fairness properties: \begin{enumerate}[leftmargin=0cm,itemindent=.0cm,labelwidth=\itemindent,labelsep=0cm,align=left] \item[]\textbf{Null Player}. If $\sigma_i(X_j)=0$, then it holds that $\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)=0$. Informally, an item will not be displayed to $u_i$ if it has zero satisfaction level w.r.t. him. \item[] \textbf{Symmetry}. If $u_i$ has the same satisfaction level from two items, they will be displayed with the same probability. Put differently, if $\sigma_i(X_j)=\sigma_i(X_m)$, then $\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)=\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= m \right)$. \item[] \textbf{User-Independence}. Given the selected items, the display probabilities depend only on the user: if user $u_{i'}$ is removed from/added to $\mathcal{U}$, $\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)$ will not change, i.e., \[ \mathbb{P} \left( \mathcal{M}(\bl X,u_i)=j \right) = \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \mid u_{i'} \in \mathcal{U}\right).
\] \item[] \textbf{Leader Monotonicity}. $\mathcal{M}$ displays the most satisfying items (w.r.t. a specific user) with higher probability than it displays other items. Formally, if $ j\in \argmax_{j'\in[N]} \sigma_i(X_{j'})$ and $m \notin \argmax_{j'\in[N]} \sigma_i(X_{j'})$, then $\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)>\mathbb{P} \left( \mathcal{M}(\bl X,u_i)= m \right)$. \end{enumerate} For brevity, we denote the above set of fairness properties by $\textbf{F}$. In addition, an essential property in a system with self-motivated participants is that it will be stable. Instability in such systems is a result of a player aiming to improve her payoff given the items selected by others. A minimal requirement in this regard is stability against unilateral deviations as captured by the celebrated pure Nash equilibrium concept, herein denoted PNE. A strategy profile $\bl X=(X_1,\dots,X_N)$ is called a \textit{pure Nash equilibrium} if for every player $j\in[N]$ and any strategy $X'_j\in \mathcal{L}_j$ it holds that $\pi_j(X_j,\bl X_{-j}) \geq \pi_j(X'_j,\bl X_{-j})$, where $\bl X_{-j}$ denotes the vector $\bl X$ of all strategies, but with the $j$-th component deleted. We use the notion of PNE to formalize the stability requirement: \begin{enumerate}[leftmargin=0cm,itemindent=.0cm,labelwidth=\itemindent,labelsep=0cm,align=left] \item[] \textbf{Stability}. Under any set of players, available items, users and user satisfaction functions, the game induced by $\mathcal{M}$ possesses a PNE. \end{enumerate} For brevity, we denote this property by \textbf{S}. The goal of this paper is to devise a computationally tractable mediator that satisfies both $\textbf{F}$ and $\textbf{S}$.\footnote{One may require the convergence of any better-response dynamics, thereby allowing the players to learn the environment. In Section \ref{sec:ourapproach} we show that our solution satisfies this stronger notion of stability as well. 
} \subsection{Impossibility of classical approaches} \label{subsec:impos} We highlight a few benchmark mediators in Table \ref{table:mediators}, including $\textsc{TOP}$, which was introduced informally in Example \ref{example:motive}. Another interesting mediator is $\textsc{BTL}$, which follows the lines of the Bradley-Terry-Luce model \cite{bradley1952rank,luce2005individual}. $\textsc{BTL}$ is addressed here as a representative of a wide family of weight-based mediators: mediators that distribute display probability according to weights, determined by a monotonically increasing function of the user satisfaction (e.g., softmax). Common to $\textsc{TOP}$, $\textsc{BTL}$ and any other weight-based mediator is that some item is always displayed to the user, i.e., the display probabilities sum to 1.\footnote{\label{foot:excluding}Perhaps excluding profiles $\bl X$ where $\sigma_i(\bl X)=0$. We allow $\mathcal{M}$ to behave arbitrarily in this case.} We model this property as follows. \begin{enumerate}[leftmargin=0cm,itemindent=.0cm,labelwidth=\itemindent,labelsep=0cm,align=left] \item[] \textbf{Complete}. For any recommendation game and any strategy profile $\bl X$, $ \sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)=1$. \end{enumerate} Since the goal of an RS is to provide useful content to users, satisfying $\textbf{Complete}$ seems justified. Although it seems unreasonable to avoid showing any content to a certain user at a certain time, it turns out that this avoidance is crucial in order to satisfy our requirements. \begin{theorem} \label{thm:imptwoplayers} No mediator can satisfy $\textbf{F},\textbf{S}$ and $\textbf{Complete}$. \end{theorem} \begin{proof}[Proof sketch] We construct a game with two players, three users and three strategies, and show that no mediator can satisfy $\textbf{F},\textbf{S}$ and $\textbf{Complete}$.
Importantly, our technique is robust: it can be used to show that slight modifications of this game do not possess a PNE either. Consider the following satisfaction matrix: \[ \kbordermatrix{ & u_1 & u_2 & u_3 \\ l_1 & 0 & y & x \\ l_2 & x & 0 & y \\ l_3 & y & x & 0 }, \] where $(x,y)\in (0,1]^2$. Let $\mathcal{L}_1=\mathcal{L}_2=\{l_1,l_2,l_3\}$ (i.e., a symmetric two-player game). By using the properties of $\textbf{F}$ we characterize the structure of the induced normal form game. We show that in this normal form game, a PNE only exists if $\mathbb{P} \left( \mathcal{M}((l_2,l_3),u_1)= 1 \right)=0.5$ (and similarly for the other users and strategy profiles, due to \textbf{User-Independence}). Since this holds for every $x$ and $y$, the mediator displays a random item for each user under any strategy profile. Recall that a random selection does not satisfy \textbf{Leader Monotonicity}; hence, no mediator can satisfy $\textbf{F},\textbf{S}$ and $\textbf{Complete}$. \end{proof} Moreover, Theorem \ref{thm:imptwoplayers} is not sensitive to the sum of the display probabilities being equal to 1. A similar argument holds for any mediator that displays items with constant probabilities, i.e., $\sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)=c$ for some $0< c \leq 1$. Theorem \ref{thm:imptwoplayers} suggests that $\sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)$ should be tied to the user satisfaction levels. In the next section, we show a novel way of doing so. \section{Our approach: the Shapley mediator} \label{sec:ourapproach} In order to provide a fair and stable mediator, we resort to cooperative game theory. Informally, a cooperative game consists of two elements: a set of players $[N]$ and a characteristic function $v:2^{[N]} \rightarrow\mathbb{R}$, where $v$ determines the value given to every coalition, i.e., every subset of players.
The analysis of cooperative games focuses on how the collective payoff of a coalition should be distributed among its members. One core solution concept in cooperative game theory is the Shapley value \cite{shapley1952value}. \begin{definition}[Shapley value] Let $(v,[N])$ be a cooperative game such that $v(\emptyset)=0$. According to the Shapley value, the amount that player $j$ gets is \begin{equation} \label{def:shapley-value} \frac{1}{N!} \sum_{R \in \Pi([N])}{\left(v(P_j^R\cup \{j \})-v(P_j^R) \right)}, \end{equation} where $\Pi([N])$ is the set of all permutations of $[N]$ and $P_j^R$ is the set of players in $[N]$ which precede player $j$ in the permutation $R$. \end{definition} One way to describe the Shapley value is by imagining the process in which coalitions are formed: when player $j$ joins coalition $\mathcal{C}$, she demands her contribution to the collective payoff of the coalition, namely $v(\mathcal{C}\cup\{j\})-v(\mathcal{C})$. Equation (\ref{def:shapley-value}) simply averages all such demands, assuming that all orders of arrival are equally likely to occur. For our purposes, we fix a strategy profile $\bl X$, and focus on an arbitrary user $u_i$. How should a mediator assign the probabilities of being displayed in a fair fashion? The \textit{induced cooperative game} contains the same set of players. For every $\mathcal{C} \subseteq [N]$, let $ \bl X_\mathcal{C}$ denote the strategy profile where all players missing from $\mathcal{C}$ are removed. We define the characteristic function of the induced cooperative game as \begin{equation*} \label{eq:coopgame} v_i(\mathcal{C} ; \bl X)= \sigma_i(\bl X_\mathcal{C}), \end{equation*} where $\sigma_i(\bl X_\mathcal{C})$ is the maximal satisfaction level a user $u_i$ may obtain from the items chosen by the members of $\mathcal{C}$. Indeed, this formulation represents a collaborative behavior of the players, when they aim to maximize the satisfaction of $u_i$.
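To make the induced cooperative game concrete, the following sketch computes the display probabilities directly from the permutation formula in Equation (\ref{def:shapley-value}), with $v_i(\mathcal{C};\bl X)=\max_{j\in\mathcal{C}}\sigma_i(X_j)$. The function name is ours, for illustration only; brute-force enumeration over all $N!$ permutations serves purely as a reference implementation.

```python
import math
from itertools import permutations

def shapley_display_probs(sat):
    """Shapley value of each player in the induced cooperative game
    v_i(C) = max_{j in C} sigma_i(X_j), computed by brute force over
    all permutations of the players (Equation (1)).
    `sat` holds the satisfaction levels sigma_i(X_1), ..., sigma_i(X_N)
    of a single user."""
    n = len(sat)
    probs = [0.0] * n
    for perm in permutations(range(n)):
        prefix_max = 0.0  # value of the coalition preceding player j in perm
        for j in perm:
            # marginal contribution of player j to the preceding coalition
            probs[j] += max(prefix_max, sat[j]) - prefix_max
            prefix_max = max(prefix_max, sat[j])
    return [p / math.factorial(n) for p in probs]

# User u_1 facing the profile (l_2, l_3) from Example 1:
# sigma_1(l_2) = 0.8, sigma_1(l_3) = 0.9.
print(shapley_display_probs([0.8, 0.9]))  # approx. [0.4, 0.5]; no item w.p. 0.1
```

Note that the probabilities sum to $\sigma_i(\bl X)=0.9$, not to 1; the remaining mass is the probability of displaying no item.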
Observe that $v_i(\cdot ; \bl X):2^{[N]}\rightarrow \mathbb{R}$ is a valid characteristic function, hence $(v_i(\cdot ; \bl X),[N])$ is a well-defined cooperative game. Note that the selection of a mediator fully determines the probability of the events $\mathcal{M}(\bl X,u_i)=j$, and vice versa. The mediator that sets the probability of the event $\mathcal{M}(\bl X,u_i)=j$ according to the Shapley value of the induced cooperative game $(v_i(\cdot ;\bl X),[N])$ is hereinafter referred to as \textit{the Shapley mediator}, or $\textsc{SM}$ for short. \subsection{Properties of the Shapley mediator} Since the Shapley value is employed in countless settings for fair allocation, it is not surprising that it satisfies our fairness properties. \begin{proposition} \label{prop:shapleyisfair} $\textsc{SM}$ satisfies $\textbf{F}$. \end{proposition} We now show that recommendation games with $\textsc{SM}$ possess a PNE. This is done using the notion of potential games \cite{monderer1996potential}. A non-cooperative game is called \textit{an exact potential game} if there exists a function $\Phi:\prod_j \mathcal{L}_j\rightarrow \mathbb{R}$ such that for any strategy profile $\bl X=(X_1,\dots,X_N) \in \prod_j \mathcal{L}_j$, any player $j$ and any strategy $X_j' \in \mathcal{L}_j$, whenever player $j$ switches from $X_j$ to $X'_j$, the change in her payoff function equals the change in $\Phi$, i.e., \[ \Phi (X_{{j}},\bl X_{{-j}})-\Phi (X'_{{j}},\bl X_{{-j}})=\pi_{{j}}(X_{{j}},\bl X_{{-j}})-\pi_{{j}}(X'_{{j}},\bl X_{{-j}}). \] This brings us to the main result of this section: \begin{theorem}\label{existance-pot-thm} Recommendation games with the Shapley mediator are exact potential games. \end{theorem} Thus, due to \citet{monderer1996potential}, any recommendation game with the Shapley mediator possesses at least one PNE, and the set of pure Nash equilibria corresponds to the set of argmax points of the potential function; therefore, $\textsc{SM}$ satisfies $\textbf{S}$.
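As a numerical sanity check on Theorem \ref{existance-pot-thm}, the following sketch runs better-response dynamics on the game of Example \ref{example:motive} under the Shapley mediator, computing payoffs by brute force over permutations; the helper names are ours. The dynamics terminate at the profile $(l_2,l_3)$.

```python
import math
from itertools import permutations

def shapley_probs(sat):
    """Brute-force Shapley value of the induced cooperative game
    v_i(C) = max_{j in C} sat[j] (Equation (1))."""
    n = len(sat)
    probs = [0.0] * n
    for perm in permutations(range(n)):
        best = 0.0
        for j in perm:
            probs[j] += max(best, sat[j]) - best
            best = max(best, sat[j])
    return [p / math.factorial(n) for p in probs]

def payoff(j, profile, sigma):
    """Expected payoff pi_j(X): sum of display probabilities over users."""
    return sum(shapley_probs([user[x] for x in profile])[j] for user in sigma)

# Example 1: users u_1, u_2, u_3; L_1 = {l1, l2}, L_2 = {l3}.
sigma = [{'l1': 0.1, 'l2': 0.8, 'l3': 0.9},
         {'l1': 0.9, 'l2': 0.7, 'l3': 0.8},
         {'l1': 0.2, 'l2': 0.9, 'l3': 0.1}]
strategy_sets = [['l1', 'l2'], ['l3']]

profile = ['l1', 'l3']          # the PNE under TOP, used as a starting point
for _ in range(100):            # better-response dynamics
    deviation = None
    for j, options in enumerate(strategy_sets):
        base = payoff(j, profile, sigma)
        for s in options:
            trial = list(profile)
            trial[j] = s
            if payoff(j, trial, sigma) > base + 1e-12:
                deviation = trial
                break
        if deviation:
            break
    if deviation is None:       # no player can improve: a PNE was reached
        break
    profile = deviation

print(profile)                  # ['l2', 'l3']
```

The potential argument guarantees that such dynamics cannot cycle, which is what the bounded loop above relies on.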
\begin{corollary} $\textsc{SM}$ satisfies $\textbf{S}$. \end{corollary} In fact, Theorem \ref{existance-pot-thm} proves a much stronger claim than merely the existence of PNE. A better-response dynamics is a sequential process, where in each iteration an arbitrary player unilaterally deviates to a strategy which increases her payoff. \begin{corollary} In recommendation games with the Shapley mediator, any better-response dynamics converges. \end{corollary} This convergence guarantee allows the players to learn which items to pick in order to maximize their payoffs. Indeed, as has been observed by work on the topic of online recommendation and advertising systems (e.g. sponsored search \cite{cary2014convergence}), convergence to PNE is essential for system stability, as otherwise inefficient fluctuations may occur. \iffalse To facilitate the understanding of the Shapley mediator, we provide an intuitive example. \begin{example} \label{shapley-selection-example} Consider a recommendation game as described above, with three players, one user and an arbitrary set of items. Assume that the strategy profile $\bl X=(X_1,X_2,X_3)$ induces satisfaction levels $(\sigma_1(X_1),\sigma_1(X_2),\sigma_1(X_3))=(0.3,0.5,0.7)$. It follows that: \begin{align*} &\mathbb{P} (\textsc{SM}(\bl X,u_i)= 1)= \frac{\sigma_i(X_1)-\sigma_i(X_0)}{3}=0.1 \\ & \mathbb{P} (\textsc{SM}(\bl X,u_i)= 2) = \frac{\sigma_i(X_1)-\sigma_i(X_0)}{3}+\frac{\sigma_i(X_2)-\sigma_i(X_1)}{2}=0.2 \\ & \mathbb{P} (\textsc{SM}(\bl X,u_i)= 3)= \frac{\sigma_i(X_1)-\sigma_i(X_0)}{3}+\frac{\sigma_i(X_2)-\sigma_i(X_1)}{2}+\frac{\sigma_i(X_3)-\sigma_i(X_1)}{2}=0.4\\ & \mathbb{P} (\textsc{SM}(\bl X,u_i)=\emptyset ) = 1-\sum_{j=1}^3 \mathbb{P} (\mathcal{M}\mathcal{S}(\bl X,u_i)= j)=0.3 \end{align*} \end{example} \fi \section{Linear time implementation} In Section \ref{sec:ourapproach} we showed that the Shapley mediator, $\textsc{SM}$, satisfies $\textbf{F}$ and $\textbf{S}$. 
Therefore, it fulfills our requirements stated in Section \ref{sec:model}. However, implementation in commercial products would require the mediator to be computationally tractable. The mediator interacts with users; hence a fast response is of great importance. In general, since Equation (\ref{def:shapley-value}) includes $N!$ summands, the computation of the Shapley value in a cooperative game need not be tractable. Indeed, tractable computation typically requires structured representations such as marginal contribution nets \cite{chalkiadakis2011computational,ieong2005marginal}. In the following theorem we derive a closed-form formula for calculating the display probabilities under the Shapley mediator, which allows the mediator to compute them in linear time. \begin{theorem} \label{theorem:shapleyoneuseronly} Let $\bl X$ be a strategy profile, and let $\sigma_i^m(\bl X)$ denote the $m$'th entry in the result of sorting $\left(\sigma_i(X_1),\dots,\sigma_i(X_N)\right)$ in ascending order, preserving duplicate elements. The Shapley mediator displays player $j$'s item to a user $u_i$ with probability \begin{equation} \label{eq:shpleyoneuser} \mathbb{P} \left( \textsc{SM}(\bl X,u_i)=j \right) = \sum_{m=1}^{\rho_i^j(\bl X)} \frac{\sigma_i^{m}(\bl X)-\sigma_i^{m-1}(\bl X)}{N-m+1}, \end{equation} where $\sigma_i^0(\bl X)=0$, and $\rho_i^j(\bl X)$ is an index such that $\sigma_i(X_j)=\sigma_i^{\rho_i^j(\bl X)}(\bl X)$. \end{theorem} The Shapley mediator is implemented in Algorithm \ref{alg:sm}. As input, it receives a strategy profile and a user, or equivalently user satisfaction levels from that strategy profile. It outputs a player's item with a probability equal to her Shapley value in the cooperative game defined above. Note that the run-time of Algorithm \ref{alg:sm} is linear in the number of players, i.e., $\mathcal{O}(N)$.
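The closed form of Equation (\ref{eq:shpleyoneuser}) translates directly into code. The sketch below (function name ours) sorts the satisfaction levels once and accumulates the telescoping sums; with the sort it runs in $\mathcal{O}(N \log N)$, and in $\mathcal{O}(N)$ when the levels arrive pre-sorted.

```python
def sm_probs(sat):
    """Display probabilities of Equation (2) for a single user.
    `sat` holds sigma_i(X_1), ..., sigma_i(X_N)."""
    n = len(sat)
    order = sorted(range(n), key=lambda j: sat[j])   # ascending sigma
    probs = [0.0] * n
    acc, prev = 0.0, 0.0
    for m, j in enumerate(order, start=1):
        acc += (sat[j] - prev) / (n - m + 1)         # telescoping term
        prev = sat[j]
        probs[j] = acc    # tied values add 0, so tied players get equal probs
    return probs

# User u_3 facing the profile (l_2, l_3) of Example 1:
print(sm_probs([0.9, 0.1]))   # approx. [0.85, 0.05]; no item is shown w.p. 0.1
```

Observe that the probabilities always sum to the largest satisfaction level $\sigma_i(\bl X)$, and that equal satisfaction levels receive equal probabilities, consistent with \textbf{Symmetry}.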
A direct result from Theorem \ref{theorem:shapleyoneuseronly} and \textbf{User-Independence} (see Section \ref{sec:model}) is that player payoffs can be calculated efficiently. \begin{corollary} In recommendation games with the Shapley mediator, the payoff of player $j$ under the strategy profile $\bl X$ is given by $\pi_j(\bl X) = \sum_{i=1}^n \sum_{m=1}^{\rho_i^j(\bl X)} \frac{\sigma_i^{m}(\bl X)-\sigma_i^{m-1}(\bl X)}{N-m+1}$. \iffalse \begin{equation*} \label{eq:payoffsshapley} \pi_j(\bl X) = \sum_{i=1}^n \sum_{m=1}^{\rho_i^j(\bl X)} \frac{\sigma_i^{m}(\bl X)-\sigma_i^{m-1}(\bl X)}{N-m+1}. \end{equation*} \fi \end{corollary} To facilitate understanding of the Shapley mediator and its fast computation, we reconsider Example \ref{example:motive} above. \begin{example} \label{example:shapley} Consider the game given in Example \ref{example:motive}. According to the Shapley mediator, the display probabilities of player 1 under the strategy profile $\bl X= (l_2,l_3)$ are \begin{align*} &\mathbb{P} (\textsc{SM} \left( \bl X,u_1 \right)= 1)=\frac{\sigma^1_1\left(\bl X\right)-\sigma^0_1\left(\bl X\right)}{2} =\frac{0.8-0}{2}=0.4, \\ &\mathbb{P} (\textsc{SM} \left( \bl X,u_2 \right)= 1)= \frac{\sigma^1_2\left(\bl X\right)-\sigma^0_2\left(\bl X\right)}{2}=\frac{0.7-0}{2}=0.35,\\ &\mathbb{P} (\textsc{SM} \left( \bl X ,u_3 \right)= 1)=\frac{\sigma^1_3\left(\bl X\right)-\sigma^0_3\left(\bl X\right)}{2}+ \frac{\sigma^2_3\left(\bl X\right)-\sigma^1_3\left(\bl X\right)}{1} =\frac{0.1-0}{2}+\frac{0.9-0.1}{1}=0.85. \end{align*} It follows that $\pi_1(l_2,l_3)=\frac{8}{5}$ while $\pi_1(l_1,l_3)=\frac{7}{10}$, and the profile to be materialized is $(l_2,l_3)$. Indeed, it can be verified that this is the unique PNE of the corresponding game. 
Moreover, while the unique PNE under $\textsc{TOP}$ (see Example \ref{example:motive} in Section \ref{sec:model}) results in a user utility of $2$, the unique PNE under the Shapley mediator results in a user utility of {\small \begin{align*} &\sum_{i=1}^3 \left( \sigma_i(l_2) \mathbb{P} (\textsc{SM} \left( (l_2,l_3),u_i \right)= 1)\right) +\left( \sigma_i(l_3) \mathbb{P} (\textsc{SM} \left( (l_2,l_3),u_i \right)= 2) \right) =2.145>2. \end{align*}}Hence, the users' benefit from the Shapley mediator is greater than from the $\textsc{TOP}$ mediator. This is in addition to the main property of the Shapley mediator: probabilistic selection according to the central measure of fair allocation. \end{example} \begin{algorithm}[t] \caption{Shapley Mediator \label{alg:sm}} \DontPrintSemicolon \KwIn{ A strategy profile $\bl X=(X_1,\dots,X_N)$ and a user $u_i$} \KwOut{ An element from $\{\emptyset,X_1,\dots,X_N\}$} Pick $Y$ uniformly at random from $(0,1)$ \; \eIf {$Y>\max_{j\in[N]} \sigma_i(X_j)$}{ Return $\emptyset$} {Return an element uniformly at random from ${\{X_j\mid j\in[N],\sigma_i(X_j) \geq Y \}}$} \end{algorithm} \section{Uniqueness of the Shapley mediator} \label{sec:uniqueness} As analyzed in Subsection \ref{subsec:impos}, Theorem \ref{thm:imptwoplayers} suggests that a mediator cannot satisfy both $\textbf{F}$ and $\textbf{S}$ if it sets the probabilities such that $\sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)$ is constant. One natural way of determining $\sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right)$ is the following. \begin{enumerate}[leftmargin=0cm,itemindent=.0cm,labelwidth=\itemindent,labelsep=0cm,align=left] \item[] \textbf{Efficiency}. The probability of displaying an item to $u_i$ is the maximal satisfaction level $u_i$ may obtain from the items chosen in $\bl X$. Formally, $\sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)=j \right)=\sigma_i(\bl X)$.
\end{enumerate} Efficiency (for brevity, $\textbf{EF}$) binds player payoffs with the maximum satisfaction level of $u_i$ from the items chosen by the players under $\bl X$. It is well known \cite{dubey1975uniqueness,shapley1952value} that the Shapley value is uniquely characterized by properties equivalent to $\textbf{F}$ and $\textbf{EF}$, when stated in terms of cooperative games. It is therefore obvious that the Shapley mediator satisfies $\textbf{EF}$.\footnote{ See the proof of Proposition \ref{prop:shapleyisfair} in the appendix. \textbf{Leader Monotonicity}, as opposed to the other fairness properties, is not one of Shapley's axioms but rather a byproduct of Shapley's characterization. } Thus, one would expect that the Shapley mediator would be the only mediator that satisfies $\textbf{F}$ and $\textbf{EF}$. This is, however, not the case: consider a mediator that runs $\textsc{TOP}$ w.p. $\sigma_i(\bl X)$ and $\textsc{NONE}$ otherwise. Clearly, it satisfies $\textbf{F}$ and $\textbf{EF}$. In fact, given a mediator $\mathcal{M}$ satisfying $\textbf{F}$ and $\textbf{Complete}$, we can define $\mathcal{M}'$ such that \begin{equation} \mathbb{P} \left( \mathcal{M}'(\bl X,u_i)=j \right) = \mathbb{P} \left( \mathcal{M}(\bl X,u_i)=j \right) \cdot \sigma_i(\bl X), \end{equation} thereby obtaining a mediator satisfying $\textbf{F}$ and $\textbf{EF}$. The question of uniqueness then arises: does satisfying $\textbf{F}$ and $\textbf{EF}$ imply $\textbf{S}$? Or even more broadly, are there mediators that satisfy $\textbf{F}$, $\textbf{S}$ and $\textbf{EF}$ besides the Shapley mediator? Had the answer been yes, this recipe for generating new mediators would have allowed us to seek potentially better mediators, e.g., one satisfying $\textbf{F}, \textbf{S}$ and $\textbf{EF}$ while maximizing user utility. However, as we show next, the Shapley mediator is unique in satisfying $\textbf{F}$, $\textbf{S}$ and $\textbf{EF}$.
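The gating construction above is easy to check numerically. The sketch below (function names ours) gates $\textsc{TOP}$ by $\sigma_i(\bl X)$ and compares it with the closed form of Equation (\ref{eq:shpleyoneuser}): both sets of probabilities sum to $\sigma_i(\bl X)$, yet they differ, illustrating that $\textbf{F}$ and $\textbf{EF}$ alone do not single out the Shapley mediator.

```python
def scaled_top_probs(sat):
    """TOP gated by sigma_i(X): display a most-satisfying item
    w.p. max(sat), split uniformly among ties, and nothing otherwise."""
    best = max(sat)
    winners = [j for j, s in enumerate(sat) if s == best]
    return [best / len(winners) if j in winners else 0.0
            for j in range(len(sat))]

def sm_probs(sat):
    # closed form of Equation (2)
    n = len(sat)
    order = sorted(range(n), key=lambda j: sat[j])
    probs, acc, prev = [0.0] * n, 0.0, 0.0
    for m, j in enumerate(order, start=1):
        acc += (sat[j] - prev) / (n - m + 1)
        prev = sat[j]
        probs[j] = acc
    return probs

sat = [0.8, 0.9]                 # user u_1 facing (l_2, l_3) in Example 1
print(scaled_top_probs(sat))     # [0.0, 0.9] -- sums to sigma_1(X)
print(sm_probs(sat))             # approx. [0.4, 0.5] -- also sums to sigma_1(X)
```

Both mediators satisfy $\textbf{EF}$ on this input, but only the second is stable, per the uniqueness theorem that follows.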
\begin{theorem} \label{thm:uniquenesseff} The only mediator satisfying $\textbf{F},\textbf{S}$ and $\textbf{EF}$ is the Shapley mediator. \end{theorem} \iffalse \begin{proof}[Proof sketch] Consider an arbitrary mediator $\mathcal{M}$, which satisfies $\textbf{F},\textbf{S}$ and $\textbf{EF}$. Due to \textbf{User-Independence}, $\mathcal{M}$ sets the display probabilities for a user $u_i$ according to $\bl \sigma =\left(\sigma_i(X_1),\dots, \sigma_i(X_N)\right)$, which we assume w.l.o.g. to be monotonically non-decreasing. Hence, the display probabilities $\mathbb{P} \left( \mathcal{M}(\bl X,u_i)=j \right)$ are fully determined by $\bl \sigma$, and the problem reduces to showing that $\mathcal{M}$ must set the display probabilities as $\algname$ for every $\bl \sigma$. The heart of the theorem relies on showing that unless player $j$'s item is displayed with probability $\mathbb{P} \left( \algname(\bl X,u_i)=j \right)$, she has a beneficial deviation. This implies that $\mathcal{M}$ is essentially $\algname$. \end{proof} \fi \section{Implications of strategic behavior} \label{sec:userutil} In this section we examine the implications of strategic behavior of the players on their payoffs and user utility. Comprehensive treatment of the integration of multiple stakeholders into recommendation calculations was discussed only recently \cite{burke2016towards}, and appears to be challenging. As our work is concerned with strategic content providers, it is natural to consider the Price of Anarchy \cite{koutsoupias1999worst,roughgarden2009intrinsic}, a common inefficiency measure in non-cooperative games. \subsection{Player payoffs} The Price of Anarchy, herein denoted $PoA$, measures the loss in social welfare that results from selfish behavior of the players. Specifically, it is the ratio between the optimal social welfare, attainable by a dictatorial mediator, and the social welfare of the worst PNE.
Formally, if $E_{\mathcal{M}}\subseteq \prod_j \mathcal{L}_j$ is the set of PNE profiles induced by a mediator $\mathcal{M}$, then $ PoA_{\mathcal{M}} = \frac{\max_{\bl X \in \prod_j \mathcal{L}_j}{{V(\bl X)}}}{\min_{\bl X \in E_{\mathcal{M}}}{{V(\bl X)}}} \geq 1 $. We use the subscript $\mathcal{M}$ to stress that the $PoA_{\mathcal{M}}$ depends on the mediator, through the definition of the social welfare function $V$ and the player payoffs. Notice that the $PoA$ of a mediator that does not satisfy $\textbf{S}$ can be unbounded, as a PNE may not exist. Quantifying the $PoA$ can be technically challenging; thus we restrict our analysis to $PoA_{\algname}$, the $PoA$ of the Shapley mediator. \begin{theorem} \label{thm:poa} $PoA_{\algname} \leq \frac{2N-1}{N}$, and this bound is tight. \end{theorem} Hence, under the Shapley mediator the social welfare of the players can decrease by at most a factor of 2 when compared to an optimal solution. \subsection{User utility} \label{subsec:user utility} We now examine the implications of using the $\text{Shapley mediator}$ for the users. For that, we shall assume that the utility of a user from an item is his satisfaction level from that item. Namely, when item $l$ is displayed to $u_i$, his utility is $\sigma_i(l)$. As a result, the expected utility of the users under the strategy profile $\bl X$ and a mediator $\mathcal{M}$ is defined by \begin{align*} U_{\mathcal{M}}(\bl X)=& \sum_{i=1}^{n}\sum_{j=1}^N \mathbb{P} \left( \mathcal{M}(\bl X,u_i)= j \right) \sigma_i(X_j) + \sum_{i=1}^{n} \mathbb{P} \left( \mathcal{M}(\bl X,u_i)=\emptyset \right) \sigma_i(\emptyset). \end{align*} Note that the first term results from the displayed items, and the second term from the plain content of the app (displaying no item at all).
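To illustrate the definition, the following sketch (helper names ours) evaluates $U_{\textsc{SM}}$ on the profile $(l_2,l_3)$ of Example \ref{example:motive} with $\sigma_i(\emptyset)=0$, recovering the value $2.145$ computed in Example \ref{example:shapley}.

```python
def sm_probs(sat):
    # closed form of Equation (2)
    n = len(sat)
    order = sorted(range(n), key=lambda j: sat[j])
    probs, acc, prev = [0.0] * n, 0.0, 0.0
    for m, j in enumerate(order, start=1):
        acc += (sat[j] - prev) / (n - m + 1)
        prev = sat[j]
        probs[j] = acc
    return probs

def user_utility(profile_sat, empty_sat=0.0):
    """U_SM(X): per user, the expected satisfaction of the displayed
    item plus the plain-content term weighted by P(no display)."""
    total = 0.0
    for sat in profile_sat:                  # one satisfaction vector per user
        probs = sm_probs(sat)
        total += sum(p * s for p, s in zip(probs, sat))
        total += (1.0 - sum(probs)) * empty_sat
    return total

# Profile (l_2, l_3) of Example 1; rows are users u_1, u_2, u_3.
rows = [[0.8, 0.9], [0.7, 0.8], [0.9, 0.1]]
print(round(user_utility(rows), 3))          # 2.145, as in the example above
```

Setting `empty_sat=1.0` instead yields the optimal plain content case discussed below.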
To quantify the inefficiency of user utility due to selfish behavior of the players under $\mathcal{M}$, we define the \textit{User Price of Anarchy}, \[ UPoA_{\mathcal{M}}= \frac{ \max_{\mathcal{M}', \bl X \in \Pi_{j=1}^N \mathcal{L}_j} U_{\mathcal{M}'}(\bl X) } { \min_{\bl X \in E_{\mathcal{M}}} U_{\mathcal{M}}(\bl X) } . \] The $UPoA$ serves as our benchmark for the inefficiency of user utility. The numerator is the best possible case: the user utility under any mediator $\mathcal{M}'$ and any strategy profile $\bl X$. The denominator is the worst user utility under $\mathcal{M}$, where $E_{\mathcal{M}}$ is again the set of PNE profiles induced by $\mathcal{M}$. Note that the numerator is independent of $\mathcal{M}$. We first treat users as having zero satisfaction when only the plain content is displayed, i.e., $\sigma_i(\emptyset)=0$, and consider the complementary case afterwards. The following is a negative result for the Shapley mediator. \begin{proposition} \label{prop:shapleyuserutility} The User PoA of the Shapley mediator, $UPoA_{\algname}$, is unbounded. \end{proposition} Proposition \ref{prop:shapleyuserutility} questions the applicability of the Shapley mediator. An unavoidable consequence of its use is a potentially destructive effect on user utility. While content-provider fairness is essential, users are the driving force of the RS. Therefore, one may advocate for other mediators that perform better with respect to user utility, albeit not necessarily satisfying $\textbf{S}$. If $\textbf{S}$ is discarded and a mediator satisfying $\textbf{Complete}$ is adopted, would this result in better user utility? Unfortunately, other mediators may lead to a similar decrease in user utility due to strategic behavior of the players, so there appears to be no better solution in this regard. \begin{proposition} \label{prop:topupoa} The User PoA of $\textsc{TOP}$, $UPoA_{\textsc{TOP}}$, is unbounded.
\end{proposition} Using similar arguments, one can show that $UPoA_{\textsc{BTL}}$ is unbounded as well. In many situations, it is reasonable to assume that when no item is displayed to a user, his utility is 1. Namely, $\sigma_i(\emptyset)=1$ for every user $u_i$. Indeed, this seems aligned with the ads-in-apps model: the user is interrupted when an advertisement is displayed. We refer to this scenario as the \textit{optimal plain content} case. From here on, we adopt this perspective for upper-bounding the $UPoA$. Observe that user utility is therefore maximized when no item is displayed whatsoever. Nevertheless, displaying no item will also result in zero payoff for the players. Here too, $UPoA_{\textsc{TOP}}$ is unbounded, while $UPoA_{\textsc{NONE}}=1$. The following lemma bounds the User $PoA$ of the Shapley mediator. \begin{lemma} \label{lemma:usrutil} In the optimal plain content case, it holds that $UPoA_{\algname} \leq 4$. \end{lemma} In fact, numerical calculations show that $UPoA_{\algname}$ is bounded by $1.76$; see the appendix for further discussion. \section{Discussion} \label{sec:disc} Our results are readily extendable in the following important direction (elaborated further in the appendix). In many online scenarios, content providers typically customize the items they offer to accommodate specific individuals. Indeed, personalization is applied in a variety of fields in order to improve user satisfaction. Specifically, consider the case where each player may promote a set of items, where different items may be targeted towards different users, and the size of this set is determined exogenously (e.g., by her budget). In this case, a player selects a set of items which she then provides to the mediator. Here the Shapley mediator satisfies $\textbf{F}$ and $\textbf{S}$; the game induced by the Shapley mediator is still a potential game, and the computation of the Shapley mediator still takes linear time.
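For intuition about the quantity underlying the Shapley mediator, recall that the Shapley value of a coalitional game averages each player's marginal contribution over all orderings of the players. The sketch below is a generic, brute-force computation with a hypothetical characteristic function; it is exponential in the number of players, in contrast to the linear-time computation of the Shapley mediator noted above, and it is not the specific coalitional game used by the mediator.

```python
# Generic Shapley-value computation by permutation enumeration.
# The characteristic function v below is a hypothetical 2-player example.
from itertools import permutations

def shapley_values(players, v):
    """Average marginal contribution of each player over all orderings."""
    values = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            values[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n = len(perms)
    return {p: val / n for p, val in values.items()}

# Hypothetical game: v({1}) = 1, v({2}) = 3, v({1,2}) = 6.
v = {frozenset(): 0, frozenset({1}): 1,
     frozenset({2}): 3, frozenset({1, 2}): 6}.__getitem__
print(shapley_values([1, 2], v))   # -> {1: 2.0, 2: 4.0}
```

The values sum to $v(\{1,2\})=6$, illustrating the efficiency property of the Shapley value.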
\input{ackerc.tex} \input{fairrec.bbl} {\ifnum\Includeappendix=1{
\section{Coherence Time}\label{sec:coh} In the same style as the channel coherence time defined in~\cite{620535}, we define the \textit{interference coherence time} to be the time lag until the auto-correlation function of the interference becomes small and hence the interference becomes stochastically independent of its original value. \begin{definition}[Interference coherence time] The interference coherence time $\delay_c$ is the minimum time lag $\tau$ such that the auto-correlation is smaller than or equal to a threshold $\theta$, i.e., \begin{equation} \delay_c=\min\big\{\tau\in\N\,|\,\cor{\interf_{t}}{\interf_{t+\tau}}\leq\theta\big\}\:. \end{equation} \end{definition} \textbf{Remark:} Note that this is a subjective definition, since $\delay_c$ is a function of $\theta$, which has to be chosen in accordance with the considered case. The threshold below which the interference can be assumed to be uncorrelated is in general greater than zero. If, for example, a scenario with no mobility and correlation from the node positions is considered (case $(2,j,k)$), the correlation will not drop to zero, no matter how large the time lag $\tau$. In general the coherence time depends on the sources of correlation and the time they need to decorrelate. In the following we consider them separately to gain insight into their individual roles. \subsection{Impact of traffic on coherence time} In this section we consider the coherence time $\delay_c$ that results from a threshold $\theta=0$. If all transmissions span $\msglen$ slots, the correlation of the interference they cause monotonically decreases for $\msglen$ slots (see Fig.~\ref{fig:case002cont}). However, the coherence time is typically shorter: after $\msglen$ slots the auto-correlation is negative and has hence crossed zero earlier, i.e., $\delay_c\leq\msglen$. Hence, for the analysis of the coherence time we can adopt the simplified expression from Corollary~\ref{cor:trafficsimp}.
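Operationally, the definition above amounts to scanning time lags until the auto-correlation drops to the threshold. A minimal sketch, with an illustrative geometric auto-correlation standing in for $\cor{\interf_t}{\interf_{t+\tau}}$ (the true auto-correlation functions are derived later in the paper):

```python
# Sketch of the coherence-time definition: the smallest lag tau at which
# the auto-correlation rho(tau) drops to the threshold theta or below.

def coherence_time(rho, theta, max_lag=10**6):
    """Return min{tau in N : rho(tau) <= theta}."""
    for tau in range(1, max_lag + 1):
        if rho(tau) <= theta:
            return tau
    raise ValueError("correlation never reached the threshold")

# Illustrative example: rho(tau) = 0.8**tau with theta = 0.1 gives tau_c = 11,
# since 0.8**10 ~ 0.107 > 0.1 while 0.8**11 ~ 0.086 <= 0.1.
print(coherence_time(lambda t: 0.8 ** t, 0.1))   # -> 11
```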
In order to calculate the coherence time, we have to find the value of $\tau$ such that $\cor{\interf_{t_1}}{\interf_{t_2}}= 0$. For general parameters, there is usually no integer time lag at which the correlation is exactly zero; we therefore calculate the real-valued lag $\tau$ at which the correlation reaches zero and round it up to the next integer. \begin{theorem}\label{th:coh002} The coherence time when traffic is the only source of correlation (case $(0,0,2)$) is \begin{equation}\label{eq:thcoh002} \delay_c=\left\lceil\frac{\log(1-\mu)}{\log{q}}\right\rceil\:, \end{equation} where ${q}=1-\frac{p}{1-p(\msglen-1)}$ and $\lceil x\rceil=\min\{y\in\N\,|\,y\geq x\}$ is the smallest integer larger than or equal to $x$. \end{theorem} \begin{IEEEproof} The correlation of interference $\cor{\interf_{t_1}}{\interf_{t_2}}$ is defined as in Theorem~\ref{th:case0xx}. Since traffic is the only source of correlation, we substitute $\expect{\fad{x}{t_1}\fad{x}{t_2}}=\expect{\fadsq{x}{t}}=1$. Further, we substitute the result of Corollary~\ref{cor:trafficsimp} for $\expect{\tx{x}{t_1}\,\tx{x}{t_2}}$ into Theorem~\ref{th:case0xx}, yielding \begin{equation} \cor{\interf_{t_1}}{\interf_{t_2}}=\frac{p\,(\msglen-\tau) +p\,\frac{\tau(1-{q})+{q}^{\tau+1}-{q}}{1-{q}}-\mu^2}{\mu-\mu^2}\:. \end{equation} Solving the equation $\cor{\interf_{t_1}}{\interf_{t_2}}=0$ for the time lag $\tau$ gives \begin{equation}\label{eq:pr:coh002c} \tau=\frac{\log(1-\mu)}{\log{q}}\:. \end{equation} In general this solution for $\tau$ is a non-integer; since the correlation is monotonically decreasing in $\tau$ and we require a correlation smaller than or equal to zero, we round it up to the next integer.
\end{IEEEproof} \textbf{Remark:} Although Theorem~\ref{th:coh002} is derived for no fading, the same expression holds for fading with $c=1$, i.e., for case $(0,1,2)$, as the only consequence of considering fading is to divide the auto-correlation function by the constant factor $\frac{m+1}{m}$. \begin{figure}[tb] \centering \begin{tikzpicture} \begin{axis}[xlabel={Sending probability $p$},ylabel={Coherence time $\delay_c$},ymin=1,ymax=10,xmin=0,xmax=0.5,grid=both, legend style={at={(0.98,0.98)}, anchor=north east, font=\footnotesize}, legend cell align=left] \addplot plot[color=black,solid,no marks,style=thick] table[x index=0,y index=9] {coh002.txt};\addlegendentry{ ~$\msglen=10$ } \addplot plot[color=black,dashed,no marks,style=thick] table[x index=0,y index=7] {coh002.txt};\addlegendentry{ ~$\msglen=8$ } \addplot plot[color=black,dotted,no marks,style=thick] table[x index=0,y index=5] {coh002.txt};\addlegendentry{ ~$\msglen=6$ } \addplot plot[color=black,dashdotted,no marks,style=thick] table[x index=0,y index=3] {coh002.txt};\addlegendentry{ ~$\msglen=4$ } \addplot plot[color=black,densely dotted,no marks,style=thick] table[x index=0,y index=1] {coh002.txt};\addlegendentry{ ~$\msglen=2$ } \end{axis} \end{tikzpicture} \caption{Interference coherence time if considering solely the traffic as source of temporal correlation (case $(0,0,2)$) over the sending probability $p$ for varying message lengths $\msglen$. In this plot the rounding to the next higher integer is omitted to avoid high overlap of the curves, i.e., we plot~\eqref{eq:pr:coh002c} instead of~\eqref{eq:thcoh002}.} \label{fig:coh002} \end{figure} Fig.~\ref{fig:coh002} shows the corresponding plot of the interference coherence time. It shows that for very small sending probabilities $p$ (close to zero), the coherence time is roughly equal to the message length ($\delay_c\approx\msglen$).
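The closed-form result of Theorem~\ref{th:coh002} is easy to evaluate and to cross-check against the auto-correlation expression from its proof; in the sketch below, the parameter values are illustrative and $\mu=\msglen\,p$ is the expected fraction of busy slots.

```python
# Cross-check of Theorem th:coh002 for the traffic-only case (0,0,2):
# tau_c = ceil(log(1 - mu) / log(q)),  q = 1 - p/(1 - p(d-1)),  mu = d*p.
import math

def coherence_time_traffic(p, d):
    mu = d * p
    q = 1.0 - p / (1.0 - p * (d - 1))
    return math.ceil(math.log(1.0 - mu) / math.log(q))

def rho_traffic(tau, p, d):
    """Auto-correlation from the proof of Theorem th:coh002 (for tau <= d)."""
    mu = d * p
    q = 1.0 - p / (1.0 - p * (d - 1))
    cov = p * (d - tau) + p * (tau * (1 - q) + q ** (tau + 1) - q) / (1 - q)
    return (cov - mu ** 2) / (mu - mu ** 2)

# Illustrative parameters: p = 0.2, d = 4 gives tau_c = 3, and indeed the
# correlation is still positive at lag 2 and non-positive at lag 3.
p, d = 0.2, 4
tc = coherence_time_traffic(p, d)
print(tc, rho_traffic(tc - 1, p, d) > 0, rho_traffic(tc, p, d) <= 0)
```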
For increasing values of $p$ the coherence time is monotonically decreasing and approaches its minimum for $\mu\to 1$, which is $\lim_{p\to\frac{1}{\msglen}}\delay_c=1$. Hence, in this case the interference is already uncorrelated in consecutive slots. It is interesting to note that the coherence time depends on the sending probability $p$. In potential applications that require two uncorrelated slots, e.g., a retransmission protocol, the back-off interval (i.e., the time lag) has to be adjusted to the traffic load of the network. For higher traffic loads the back-off interval could be shortened based on this coherence time result, leading to a lower transmission delay at the nodes. There might, of course, be other reasons that prevent the back-off interval from being too short, but this example still illustrates the potential of a better understanding of interference dynamics. \subsection{Impact of channel on coherence time} If the channel is the only source of correlation, i.e., case $(0,2,1)$, and assuming again $\theta=0$, we have the following result. \begin{theorem} The interference coherence time $\delay_c$ when the channel is the only source of correlation (case $(0,2,1)$) equals the channel coherence time $\chlen$. \end{theorem} \begin{IEEEproof} From Theorem~\ref{th:case0xx} we have that the correlation coefficient for the case $(0,2,1)$ is \begin{equation} \cor{\interf_{t_1}}{\interf_{t_2}}=\frac{p\big(\expect{\fad{x}{t_1}\fad{x}{t_2}}-1\big)}{\frac{(m+1)}{m}-p}\:, \end{equation} where $\expect{\fad{x}{t_1}\fad{x}{t_2}}$ is given in~\eqref{eq:channel}. For the case $\tau=\chlen-1$ we have $\cor{\interf_{t_1}}{\interf_{t_2}}=\frac{p}{\chlen(1+m-mp)}$, which is always positive for $p>0$. As the correlation vanishes for $\tau\geq\chlen$, the coherence time is always equal to $\chlen$.
\end{IEEEproof} \subsection{Impact of node locations on coherence time} If the node locations are considered as a source of correlation with a threshold $\theta=0$, it is important to assume mobility. Otherwise the temporal correlation caused by the node locations alone is constant for all $\tau\geq 1$ and never reaches $\theta$. In case other sources of correlation exist, the correlation converges to this constant for $\tau\to\infty$ (actually, the convergence is rather fast for reasonable values of $\chlen$ and $\msglen$). Specifically, if the static node locations are the only source of correlation, we have $\cor{\interf_{t_1}}{\interf_{t_2}}=p$~\cite{ganti09:interf-correl,schilcher12:tmc}, independent of $\tau$. In such a case it makes no sense to talk about a coherence time for $\theta=0$ (or, more generally, for sufficiently small $\theta$). Let us assume that nodes move at an average speed $\bar{v}>0$. The temporal correlation of the interference then decreases monotonically to its limit $\lim_{\tau\to\infty}\cor{\interf_{t_1}}{\interf_{t_2}}=0$. For finite time lags, however, it gets arbitrarily small but remains positive. Hence we choose a threshold $\theta>0$ for our analysis. There exists no closed-form expression for the coherence time of interference $\delay_c$ when correlation is induced by the node locations. This is due to the rightmost integration in~\eqref{eq:cor2xxmob}, which, to the best of our knowledge, has no closed-form solution and therefore cannot be rearranged to yield an expression for $\tau$. Accordingly, we evaluate the coherence time $\delay_c$ numerically. The numerical results are presented in Fig.~\ref{fig:coh201}, where the coherence time $\delay_c$ for a threshold $\theta=0.01$ is investigated. The plot shows $\delay_c$ for varying sending probability $p$ and different average node speeds $\bar{v}$.
The general trend corresponds to intuition: Firstly, $\delay_c$ increases with increasing $p$, as the temporal correlation of interference increases with $p$, causing the threshold $\theta$ to be crossed later. For very small sending probabilities the coherence time is very small (in the limit we have $\lim_{p\to 0}\delay_c=1$), and the interference in consecutive slots is uncorrelated. Secondly, a higher average speed $\bar{v}$ of the nodes leads to a smaller coherence time. The reason is that, at higher speeds, the nodes reach the distance at which the interference decorrelates earlier. The general trends are the same with Brownian motion, but the coherence times are higher, as for this mobility model the distance between the initial and final position of the nodes increases more slowly on average. This happens because of the back-and-forth movement of the nodes. We omit a plot of these results as they do not provide further insights. \begin{figure}[tb] \centering \begin{tikzpicture} \begin{axis}[xlabel={Sending probability $p$},ylabel={Coherence time $\delay_c$},ymin=0,ymax=80,xmin=0,xmax=1,grid=both, legend style={at={(0.02,0.98)}, anchor=north west, font=\footnotesize}, legend cell align=left] \addplot plot[color=black,solid,no marks,style=thick] table[x index=0,y index=1] {coh201_hr.txt};\addlegendentry{ ~$\bar{v}=0.1$ } \addplot plot[color=black,dashed,no marks,style=thick] table[x index=0,y index=2] {coh201_hr.txt};\addlegendentry{ ~$\bar{v}=0.2$ } \addplot plot[color=black,dotted,no marks,style=thick] table[x index=0,y index=3] {coh201_hr.txt};\addlegendentry{ ~$\bar{v}=0.3$ } \addplot plot[color=black,dashdotted,no marks,style=thick] table[x index=0,y index=4] {coh201_hr.txt};\addlegendentry{ ~$\bar{v}=0.4$ } \addplot plot[color=black,densely dotted,no marks,style=thick] table[x index=0,y index=5] {coh201_hr.txt};\addlegendentry{ ~$\bar{v}=0.5$ } \end{axis} \end{tikzpicture} \caption{Interference coherence time considering the node locations as
sole source of temporal correlation (case $(2,0,1)$) with linear mobility. The results are plotted over the sending probability $p$ for varying average node speeds $\bar{v}$ with $\theta=0.01$. The steps in the plot occur since $\delay_c$ is measured in terms of slots and hence is an integer.} \label{fig:coh201} \end{figure} \section{Conclusions and future work}\label{sec:con} We investigated the temporal dynamics of interference in Poisson networks with a focus on its auto-correlation function and coherence time. We see from the auto-correlation function that the correlation of interference typically decreases for small time lags. This decrease strongly depends on the sources of correlation. When evaluated over a large span of time lags, the correlation approaches zero, follows a damped oscillation around zero, or shows a completely different behavior, all depending on the chosen scenario. It is evident that networking protocols and techniques have to be designed and parametrized differently for such different scenarios in order to operate best. Hence, a good knowledge of the auto-correlation function of interference contributes to improved network performance and robustness. For example, if interference traces are interpreted as time series, their underlying model order is determined by the auto-correlation function. Hence, our work contributes to model design by providing essential input parameters. It is thus a key enabling result for interference prediction based on time series. The coherence time of interference can be calculated by means of the expressions provided in this article. This is a valuable tool for designing network protocols properly. Moreover, the expressions are simple enough to be implemented on networked nodes, allowing them to adapt to a changing network environment. This is a requirement in scenarios with, e.g., high mobility or high fluctuation in the set of network nodes.
Ongoing work analyzes how a node can obtain a good estimate of the interference coherence time by estimating a small number of network parameters. Based on this information, we plan to devise a distributed algorithm to optimize the nodes' communication attempts and thus the overall network performance. \section{Derivation of the auto-correlation of interference}\label{sec:cor} \begin{table*}[t] \begin{center} \caption{Summary of results without mobility: Auto-correlation and coherence time of interference. For the coherence time expressions, all resulting non-integer values have to be rounded to the next higher integer. The symbol '-' denotes that we have no expression for this case.\label{tab:results}} \begin{tabular}{ c c c | c | c } Locations & Channel & Traffic & Auto-correlation function & Coherence time\\ $i$ & $j$ & $k$ & $\rho(\tau)$ & $\delay_c$ \\ \hline $0$ & $0$ & $0$ & undefined & undefined\\ $0$ & $0$ & $1$ & $0$ & $1$\\ $0$ & $0$ & $2$ & $\frac{{q}^\tau+\mu-1}{\mu}$ for $\tau<d$ & $\frac{\log(1-\mu)}{\log{q}}$\\ $0$ & $1$ & $0$ & $0$ & $1$\\ $0$ & $1$ & $1$ & $0$ & $1$\\ $0$ & $1$ & $2$ & $\frac{m(\mu-1)({q}^\tau+\mu-1)}{\mu(m(\mu-1)-1)}$ for $\tau<d$ & $\frac{\log(1-\mu)}{\log{q}}$\\ $0$ & $2$ & $0$ & $1-\frac{\tau}{c}$ for $\tau\leq c$ & $c$\\ $0$ & $2$ & $1$ & $\frac{p(c-\tau)}{c(1+m-mp)}$ for $\tau<c$ & $c$\\ $0$ & $2$ & $2$ & $\frac{\left({q}^\tau(\mu-1)-2\mu+1\right)(\tau-c(m+1))-cm\mu^2}{c\mu(1+m-m\mu)}$ for $\tau<\min(c,d)$ & -\\ $1$ & $0,1$, or $2$ & $0,1$, or $2$ & $0$ & $1$\\ $2$ & $0$ & $0$ & $1$ & $\infty$ \\ $2$ & $0$ & $1$ & $p$ & $\infty$ \\ $2$ & $1$ & $0$ & $1/2$ & $\infty$ \\ $2$ & $1$ & $1$ & $p/2$ & $\infty$ \\ $2$ & $0,1$ & $2$ & $\frac{{q}^\tau (1-\mu)+2\mu-1}{\mathbb{E}[h^4]\mu}$ for $\tau<d$ & -\\ $2$ & $2$ & $0$ & $1-\frac{\tau}{c(m+1)}$ for $\tau<c$ & -\\ $2$ & $2$ & $1$ & $p\left(1-\frac{\tau}{c(m+1)}\right)$ for $\tau<c$ & -\\ $2$ & $2$ & $2$ &
$\frac{\left({q}^\tau(\mu-1)-2\mu+1\right)(\tau-c(m+1))}{c\mu(m+1)}$ for $\tau<\min(c,d)$ & - \end{tabular} \end{center} \end{table*} We derive in the following paragraphs general expressions for the correlation of interference. These expressions are further specialized for selected interference scenarios in the next section. Interference correlation is measured in terms of Pearson's correlation coefficient \begin{eqnarray} \cor{\interf_{t_1}}{\interf_{t_2}}&=&\frac{\cov{\interf_{t_1}}{\interf_{t_2}}}{\sqrt{\var{\interf_{t_1}}\var{\interf_{t_2}}}}\\\nonumber &=&\frac{\cov{\interf_{t_1}}{\interf_{t_2}}}{\var{\interf}}\:, \end{eqnarray} where $\cov{\interf_{t_1}}{\interf_{t_2}}=\expect{\interf_{t_1}\interf_{t_2}}-\expect{\interf_{t_1}}\,\expect{\interf_{t_2}}$ denotes the covariance of interference at different time instants $t_1$ and $t_2$, and $\var{\interf}=\var{\interf_{t_1}}=\var{\interf_{t_2}}$ denotes the variance of interference, which is constant over time due to the stationarity of the processes involved. The time lag from $t_1$ to $t_2$ is denoted by $\tau=t_2-t_1$. In the derivation of the general expressions, we distinguish between random and known node locations. In the former case we consider the node locations as sources of interference correlation (cases $(2,j,k)$), while in the latter we do not consider them as a source (cases $(0,j,k)$). A summary of the auto-correlation and the coherence time of interference without mobility is presented in Table~\ref{tab:results}. The expressions in this table only hold for small time lags $\tau<c$ and $\tau<d$. The more general expressions for arbitrary $\tau$, and for mobility, are too long for the table but are available in the following sections. Coherence time results are computed by solving $\rho(\tau)=0$ for $\tau$.
For the cases with '-' this can also be solved, but it leads to expressions that have $\tau>\chlen$ or $\tau>\msglen$ for all parameters, which violates the assumptions underlying the simplified expressions of the auto-correlation functions presented in this table. Adopting the full expression for the auto-correlation does not lead to closed-form expressions for the coherence time. Note that when substituting $\tau=1$ and $m=1$ into the expressions in the table, the correlation results correspond to those in Table~1 of~\cite{schilcher12:tmc} with the following exception: In the cases $(i,2,1)$ and $(i,2,2)$ there is a discrepancy due to differing modeling assumptions. Here we assume that the channel of any pair of nodes changes every $c$ slots, while in~\cite{schilcher12:tmc} the channel changes $c$ slots after a transmission occurred. \subsection{Random node locations} \begin{theorem}[Correlation for cases $(2,j,k)$]\label{th:case2xx} The temporal correlation of interference between time instants $t_1$ and $t_2$ considering the node locations as a source of correlation is \begin{eqnarray}\label{eq:cor2xxmob} \lefteqn{\cor{\interf_{t_1}}{\interf_{t_2}}=}\\\nonumber &=&\frac{\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\expect{\tx{x}{t_1}\tx{x}{t_2}}}{\mu\,\expect{\fadsq{x}{t}}}\cdot\frac{\int_{\mathbb{R}^2}\ell_{x}\,\expectmob{\ploss{x+\bar{v}{\omega_\delay}}}\,\dd x}{\int_{\mathbb{R}^2}\ell_{x}^2\,\dd x}\:.
\end{eqnarray} \end{theorem} \begin{IEEEproof} The expected value of interference is \begin{eqnarray} \expect{\interf}&=&\mathbb{E}_{\ppp,h,\gamma}{\sum_{x\in\ppp}\fad{x}{t}\,\plossx\tx{x}{t}}\\\nonumber &=&\mathbb{E}_\ppp\sum_{x\in\ppp}\mathbb{E}_h[\fad{x}{t}]\,\ell_{x}\,\,\mathbb{E}_\gamma[\tx{x}{t}]\\\nonumber &\stackrel{(a)}{=}&\msglenp\,\dens\int_{\mathbb{R}^2}\ell_{x}\,\dd x\\\nonumber &=&\mu\,\dens\,\frac{\alpha\pi}{\alpha-2}\:, \end{eqnarray} where $(a)$ holds due to Campbell's theorem~\cite{haenggi13:book}, $\expect{\fad{x}{t}}=1$ for all $x\in\ppp$ and $t\in\N$, and $\expect{\tx{x}{t}}=\msglenp$. The indices for the expectation operator $\mathbb{E}$ indicate the random variables involved and will hereafter be omitted for brevity. Aiming for the covariance, we calculate \begin{eqnarray}\label{eq:pr:2xxcoexp} \lefteqn{\expect{\interf_{t_1}\interf_{t_2}}=}\\\nonumber &=&\expect{\sum_{x\in\ppp}\channel{x}{t_1}\tx{x}{t_1}\sum_{y\in\ppp}\tilde{h}_{t_2}^2\ploss{y+\bar{v}{\omega_\delay}}\tilde{\gamma}_{t_2}}\\\nonumber &=&\expect{\sum_{x\in\ppp}\fad{x}{t_1}\fad{x}{t_2}\ell_{x}\,\ploss{x+\bar{v}{\omega_\delay}}\tx{x}{t_1}\tx{x}{t_2}}\\\nonumber &&+\,\expect{\sum_{\stackrel{x,y\in\ppp}{x\neq y}}\fad{x}{t_1}\tilde{h}_{t_2}^2\ell_{x}\,\ploss{y+\bar{v}{\omega_\delay}}\tx{x}{t_1}\tilde{\gamma}_{t_2}}\:, \end{eqnarray} where we introduce $\tilde{h}_{t}$ and $\tilde{\gamma}_{t}$ to denote the fading coefficient and sending indicator of node $y$ at time $t$, respectively.
The first of the two expected values of~\eqref{eq:pr:2xxcoexp} yields \begin{eqnarray} \lefteqn{\expect{\sum_{x\in\ppp}\fad{x}{t_1}\fad{x}{t_2}\ell_{x}\,\ploss{x+\bar{v}{\omega_\delay}}\tx{x}{t_1}\tx{x}{t_2}}=}\\\nonumber &=&\expectppp{\sum_{x\in\ppp}\expect{\fad{x}{t_1}\fad{x}{t_2}}\ell_{x}\,\expectmob{\ploss{x+\bar{v}{\omega_\delay}}}\expect{\tx{x}{t_1}\tx{x}{t_2}}}\\\nonumber &=&\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\expect{\tx{x}{t_1}\tx{x}{t_2}}\,\dens\int_{\mathbb{R}^2}\ell_{x}\,\expectmob{\ploss{x+\bar{v}{\omega_\delay}}}\dd x\:, \end{eqnarray} and the second gives \begin{eqnarray} \lefteqn{\expect{\sum_{\stackrel{x,y\in\ppp}{x\neq y}}\fad{x}{t_1}\tilde{h}_{t_2}^2\ell_{x}\,\expectmob{\ploss{y+\bar{v}{\omega_\delay}}}\tx{x}{t_1}\tilde{\gamma}_{t_2}}=}\\\nonumber &\stackrel{(a)}{=}&\dens^2\int_{\mathbb{R}^2}\int_{\mathbb{R}^2}\expect{\fad{x}{t_1}\tilde{h}_{t_2}^2}\ell_{x}\ploss{y+\bar{v}{\omega_\delay}}\expect{\tx{x}{t_1}\tilde{\gamma}_{t_2}}\,\dd x\,\dd y\\\nonumber &\stackrel{(b)}{=}&\left(\mu\,\dens\int_{\mathbb{R}^2}\ell_{x}\,\dd x\right)^2\\\nonumber &=&\expect{\interf}^2\:. \end{eqnarray} In $(a)$ we split the expected value for the different independent random variables and apply Campbell's theorem. In $(b)$ we use $\expect{\fad{x}{t_1}\tilde{h}_{t_2}^2}=1$, $\expect{\tx{x}{t_1}\tilde{\gamma}_{t_2}}=\mu^2$ and the stationarity of the PPP. Hence, the covariance is \begin{eqnarray}\label{eq:pr:2xxcov} \lefteqn{\cov{\interf_{t_1}}{\interf_{t_2}}=}\\\nonumber &=&\expect{\interf_{t_1}\interf_{t_2}}-\expect{\interf_{t_1}}\expect{\interf_{t_2}}\\\nonumber &=&\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\expect{\tx{x}{t_1}\tx{x}{t_2}} \,\dens\,\int_{\mathbb{R}^2}\ell_{x}\,\expectmob{\ploss{x+\bar{v}{\omega_\delay}}}\,\dd x\:. \end{eqnarray} The values of $\expect{\fad{x}{t_1}\fad{x}{t_2}}$ and $\expect{\tx{x}{t_1}\tx{x}{t_2}}$ are characterizing the contribution of the wireless channel and the traffic to the correlation of interference, respectively. 
They depend on the values of $j,k$ of the case $(2,j,k)$ under consideration. The variance is obtained by setting $t_1=t_2$ in the above derivations yielding \begin{equation}\label{eq:pr:2xxvar} \var{\interf}=\cov{\interf}{\interf}=\mu\,\expect{\fadsq{x}{t}}\,\dens\,\int_{\mathbb{R}^2}\ell_{x}^2\,\,\dd x\:. \end{equation} Dividing~\eqref{eq:pr:2xxcov} by~\eqref{eq:pr:2xxvar} yields the result. \end{IEEEproof} \begin{corollary}[Correlation for cases $(2,j,k)$ without mobility]\label{cor:case2xx} The temporal correlation of interference between time instants $t_1$ and $t_2$ considering the node locations as sources of interference correlation and having no mobility ($\bar{v}=0$) is \begin{equation}\label{eq:cor2xx} \cor{\interf_{t_1}}{\interf_{t_2}}=\frac{\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\expect{\tx{x}{t_1}\tx{x}{t_2}}}{\mu\,\expect{\fadsq{x}{t}}}\:. \end{equation} \end{corollary} \begin{IEEEproof} Substituting $\bar{v}=0$ into~\eqref{eq:cor2xxmob} yields the result. \end{IEEEproof} \subsection{Known node locations} \begin{theorem}[Correlation for cases $(0,j,k)$]\label{th:case0xx} The temporal correlation of interference between time instants $t_1$ and $t_2$ if neglecting the node locations as sources of interference correlation is \begin{equation} \cor{\interf_{t_1}}{\interf_{t_2}}=\frac{\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\expect{\tx{x}{t_1}\tx{x}{t_2}}-\mu^2}{\mu\,\expect{\fadsq{x}{t}}-\mu^2}\:. 
\end{equation} \end{theorem} \begin{IEEEproof} The covariance of interference is \begin{eqnarray} \cov{\interf_{t_1}}{\interf_{t_2}} \hspace{-1mm}&=&\hspace{-1mm}\expectppp{\sum_{x\in\ppp}\sum_{y\in\ppp}\cov{\channel{x}{t_1}\tx{x}{t_1}}{\tilde{h}_{t_2}^2\,\ell_{y}\,\tilde{\gamma}_{t_2}}}\hspace{4mm}\\\nonumber &=&\hspace{-1mm}\expectppp{\sum_{x\in\ppp}\cov{\channel{x}{t_1}\tx{x}{t_1}}{\channel{x}{t_2}\tx{x}{t_2}}}\\\nonumber &&\hspace{-1mm}+\,\expectppp{\sum_{\stackrel{x,y\in\ppp}{x\neq y}}\cov{\channel{x}{t_1}\tx{x}{t_1}}{\tilde{h}_{t_2}^2\,\ell_{y}\,\tilde{\gamma}_{t_2}}}\:. \end{eqnarray} The covariance in the second sum is always zero as the arguments are stochastically independent. The covariance in the first sum yields \begin{eqnarray}\label{eq:pr:0xxcov} \lefteqn{\expectppp{\sum_{x\in\ppp}\cov{\channel{x}{t_1}\tx{x}{t_1}}{\channel{x}{t_2}\tx{x}{t_2}}}=}\\\nonumber &\stackrel{(a)}{=}&\mathbb{E}\Bigg[\sum_{x\in\ppp}\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\ell_{x}^2\,\,\expect{\tx{x}{t_1}\tx{x}{t_2}} -\big(\expect{\fad{x}{t}}\,\ell_{x}\,\,\expect{\tx{x}{t}}\big)^2\Bigg]\\\nonumber &\stackrel{(b)}{=}&\left(\expect{\fad{x}{t_1}\fad{x}{t_2}}\,\expect{\tx{x}{t_1}\tx{x}{t_2}}-\mu^2\right) \,\dens\,\int_{\mathbb{R}^2}\ell_{x}^2\,\,\dd x\:, \end{eqnarray} where in $(a)$ we calculated the covariance with $\cov{X}{Y}=\expect{XY}-\expect{X}\expect{Y}$ and in $(b)$ we substituted $\expect{\fad{x}{t}}=1$ and $\expect{\tx{x}{t}}=\mu$. Similar to the proof of Theorem~\ref{th:case2xx}, we calculate the variance by substituting $t_2=t_1$ yielding \begin{equation}\label{eq:pr:0xxvar} \var{\interf}=\cov{\interf}{\interf}=\big(\mu\,\expect{\fadsq{x}{t}}-\mu^2\big)\,\dens\,\int_{\mathbb{R}^2}\ell_{x}^2\,\,\dd x\:. \end{equation} Dividing~\eqref{eq:pr:0xxcov} by~\eqref{eq:pr:0xxvar} yields the result. \end{IEEEproof} \textbf{Remark:} \begin{itemize} \item Since the node locations are not considered in Theorem~\ref{th:case0xx}, mobility is not taken into account in the result. 
\item The correlation $\cor{\interf_{t_1}}{\interf_{t_2}}=0$ for all cases $(1,j,k)$. \end{itemize} \section{Introduction}\label{sec:intro} \subsection{Motivation} The performance analysis of mobile communication and computing networks must model the interference that nodes cause to one another. Common models describe the average value or distribution of interference at a given receiver. More accurate modeling also considers the {\it interference dynamics\/}, i.e., the way interference changes over time and space~\cite{andrews17:mmWave, 8115171, haenggi13:book, schilcher15:tit, haenggi13:div-poly}. Questions of interest are: How rapidly does interference change? What is the correlation of interference? Which parameters influence this correlation? Answers to these questions are particularly useful for the design of retransmission protocols and diversity schemes. Various researchers have addressed these questions, taking different views. A first view adopts measures like outage probability (or its counterpart, the coverage probability) for the purpose of protocol performance evaluation. This requires knowledge of the probability distribution of the signal-to-interference ratio (SIR), which in many cases is easier to calculate than some interference statistics~\cite{haenggi13:book}. It is, for example, possible to analyze the performance degradation or improvement of cooperative relaying~\cite{tanbourgi14:nakagami,tanbourgi14:mrc,crismani14:tvt,schilcher13:mswim} or MIMO~\cite{7880697,haenggi13:div-poly} impacted by interference. A second view is to use interference dynamics to determine interference statistics, i.e., to establish the probability distribution of interference power at two or more points in time and/or space. Such a multidimensional probability distribution is best represented by its probability density function. Unfortunately, for interference this function does not exist in closed form unless we consider scenarios with restricted network parameter values.
Hence, we have to resort to characteristic functions, which are less flexible to handle but still valuable, as they allow one to calculate the moments of the distribution~\cite{schilcher15:tit,haenggi13:book} and other quantities. Finally, a third view is to quantify the dependence of interference powers at different points in time and/or space. In other words, it addresses the question of how much information is gained about the interference power (at a certain point in time and space) if we know interference power values nearby or from the past. Mathematically, this dependence is expressed in terms of correlation, with one application being interference prediction~\cite{atiq17:mswim}: The auto-correlation function of the interference determines the order of the time series associated with the interference evolution over time, and can be the basis for designing a predictor. The auto-correlation function of interference is still unknown. The reason is that most work on interference dynamics only considers the node locations as a source of interference correlation. This assumption has two important consequences: First, the auto-correlation of interference at a given point in space is independent of the time lag $\tau$, i.e., it is a constant function~\cite{ganti09:interf-correl}, which is not especially interesting to adopt in network performance analysis. Second, the resulting interference correlation may be inaccurate, as other important sources of correlation are neglected. Two such additional sources of correlation are the data traffic and the wireless channel~\cite{schilcher12:tmc}. When considering these sources, the auto-correlation becomes non-constant and thus turns into an important tool for wireless network modeling and analysis. In the article at hand, we generalize the known expressions for the temporal correlation of interference in consecutive time slots (our previous work~\cite{schilcher12:tmc}) to arbitrary time lags $\tau$.
In other words, we calculate the \textit{auto-correlation function} (ACF) of interference as a function of $\tau$. Our work is based on the commonly used Poisson model, which means that the nodes of the network are distributed in space according to a Poisson point process. Random access to the wireless channel without channel sensing is used. As part of this work, we present modular expressions that allow different fading and traffic models to be incorporated by simply substituting a corresponding expression. In particular, we derive ACF expressions both individually for the three sources of correlation and for particular combinations of these sources. Furthermore, we analyze the \textit{coherence time} of interference, which is the time until the interference correlation falls below a given threshold. We are able to derive closed-form expressions in some cases and rely on numerical analysis in others. \subsection{Related work} There are several results on the temporal correlation of interference in Poisson networks (e.g.,~\cite{ganti09:interf-correl,6697936,haenggi13:div-poly,6331038,schilcher12:tmc,schilcher13:scc}). All these results, however, consider only the node locations as source of correlation and assume a static network without mobility. Because of these assumptions, the temporal correlation does not change over time, i.e., it does not depend on the time lag $\tau$ between the two instants under consideration. As a natural consequence, neither auto-correlation nor coherence time is considered. Furthermore, results are available on the stochastic dependence of interference, which is typically expressed as the joint outage probability of several transmissions (e.g.,~\cite{schilcher15:tit,tanbourgi14:mrc,tanbourgi14:nakagami,crismani14:tvt,net:Haenggi09jsac,haenggi13:book}).
In all these publications, again the joint outage probability is independent of the time instant of the transmissions, which is a consequence of assuming that the node locations are the only source of correlation and that nodes are not mobile. Important exceptions to these limitations are the results by Gong and Haenggi~\cite{gong14:tmc, gong11:icc}. They also consider the node locations as the sole source of interference correlation but use a mobile network. It is the mobility that causes the temporal correlation to decrease for longer time lags $\tau$. The analysis is performed for four different stochastic mobility models including Brownian motion (also adopted in the article at hand). The dependence of the correlation on the system parameters is analyzed with special focus on the average speed of the nodes. The evolution of the correlation in terms of the time lag is not extensively analyzed in those two publications, and thus no notion of coherence time is considered. The three sources of interference correlation investigated in the article at hand are used in~\cite{schilcher12:tmc}, where all 27 possible combinations of them are systematically addressed. However, the temporal correlation is only calculated for consecutive time slots, which provides no insights into the time it takes to reach low or negative correlation values. \subsection{Main contributions} We investigate the temporal dynamics of the interference in terms of its auto-correlation function and coherence time. We do so by accounting for three key sources of interference correlation: the node locations, the wireless channel, and the network traffic. Prior to this work, temporal correlation was only known for consecutive time slots~\cite{schilcher12:tmc} and, for longer lags, only when considering the node locations as the sole source of correlation.
Only two possible outcomes apply: If there is no mobility among the nodes, the temporal correlation does not change over time~\cite{ganti09:interf-correl}, while for mobility it decreases~\cite{gong14:tmc} with increasing lags. Our contributions can be summarized as follows: \begin{itemize} \item We are the first to derive expressions for the auto-correlation function of interference in Poisson networks for the three key sources of correlation. These provide insights into the temporal dynamics of interference far beyond existing results, and are relevant for facilitating the exploitation of the temporal features of interference in emerging wireless systems. \item The mathematical framework provided enables us to combine the three sources of correlation into a general expression for the temporal correlation of interference. Hence, unlike in previous work~\cite{schilcher12:tmc}, it is no longer necessary to address all possible combinations of the sources individually. The results are much more flexible and easier to apply in further work. \item The analysis of the interference coherence time further contributes to the practical applicability of our theoretical work. It provides an answer to the question as to how long a retransmission protocol has to wait for an uncorrelated channel. \end{itemize} Our expressions also cover the fundamental and well-known channel coherence time without interference, which corresponds to taking only the channel into account as a source of correlation. Under this assumption, the interference coherence time and the channel coherence time are equal. The rest of the article is organized as follows: Section~\ref{sec:model} introduces the modeling assumptions. Section~\ref{sec:cor} provides general expressions for the temporal correlation of interference, which require specific sub-expressions, modeling the different sources of interference correlation, to be substituted.
In Section~\ref{sec:src}, the expressions that have to be substituted into the equations for temporal interference correlation are derived. Based on these results, the coherence time of interference is defined, and expressions for it are derived and used to analyze its features for different parameters in Section~\ref{sec:coh}. Finally, Section~\ref{sec:con} concludes the paper. \section{System model}\label{sec:model} \subsection{Node location and traffic} We consider a Poisson network: a wireless network consisting of nodes randomly located in a plane according to a Poisson point process (PPP) $\ppp$ on $\mathbb{R}^2$. The medium access opportunities are arranged into time slots. In each slot, every idle node decides independently from other nodes whether or not to start a new transmission. The duration of this transmission is $\msglen\in\N$ time slots (message length), which is constant for all nodes and over time. We intend to have on average a fraction $p$ of all nodes in $\ppp$ start a new transmission in each slot. Since only idle nodes start new transmissions, they adopt a sending probability of $\frac{p}{1-p(\msglen-1)}$. Let ${q}$ denote the probability that a node stays idle, i.e., ${q}=1-\frac{p}{1-p(\msglen-1)}$, and let $S_t$ denote the set of all sending nodes at time $t$. The expected fraction of nodes sending in a given time slot $t$ is thus $\mu=p\,\msglen$, which we refer to as the traffic intensity. \subsection{Node mobility} Both static and mobile nodes are investigated. For mobile nodes, we let $\bar{v}$ denote the average speed of all nodes and consider two mobility models: linear mobility and time-discrete Brownian motion. The linear mobility model assumes the location of each node $x$ at times $t$ and $t+\tau$ to change in a random direction with $\|x_{t}-x_{t+\tau}\|=\bar{v}\tau$, i.e., the distance increases linearly with time and we set ${\omega_\delay}=\tau$.
This is a reasonable model for time spans covering the duration of a few time slots, as the direction of movement and the speed do not change significantly within the timescale of a slot in a practical system. The location of a node $x$ with Brownian motion at time $t+1$ is~\cite{gong14:tmc,gong11:icc} \begin{equation} x_{t+1}=x_{t}+\bar{v}\omega_{t}\:, \end{equation} where $\omega_t$ is a two-dimensional Gaussian random variable $\omega_t\sim N(\vec{0},\Sigma)$ with covariance matrix \begin{equation} \Sigma= \begin{pmatrix} \frac{2}{\pi} & 0\\ 0 & \frac{2}{\pi} \end{pmatrix}\:, \end{equation} chosen such that $\expect{\|\omega_t\|}=1$, i.e., the average speed is indeed $\bar{v}$. The random variables $\omega_t$ are i.i.d. for each time $t$. Hence, the displacement after $\tau$ slots is characterized by \begin{equation} {\omega_\delay}=\sum_{t=1}^\tau \omega_t\stackrel{d}{=}\sqrt{\tau}\,\omega_0\:, \end{equation} where $\stackrel{d}{=}$ denotes equality in distribution. \textbf{Remark:} The homogeneity of the PPP describing the node locations is not altered by these two mobility models. At any time $t$, the locations of the nodes form a PPP with intensity $\dens$. Note that this preservation of homogeneity is not provided by many other mobility models, for example, the random waypoint model, where nodes asymptotically concentrate in the center of the deployment area~\cite{bettstetter03:tmc}. \subsection{Wireless channel} The wireless channel is modeled by a distance-dependent path loss and multi-path fading accounting for reflections, diffraction, and other small-scale propagation effects. The signal power at a receiver $x$ from an active sender $y$, with $x,y\in\mathbb{R}^2$, is \begin{equation} p_\mathrm{RX}=\kappa\,h_t^2\,\ell_{xy}\:. \end{equation} In this equation, $\kappa$ is the sending power of $y$, which we consider to be the same for all nodes in the network. $h_t$ models Nakagami-$m$ block fading at time $t$ and node $x$, i.e., $h_t^2$ is gamma distributed according to $h_t^2\sim\Gamma(m,m)$.
This implies $\expect{h_t^2}=1$ and $\expect{h_t^4}=\frac{m+1}{m}$. The temporal aspect of fading is modeled in the following way~\cite{schilcher12:tmc}: We consider a block fading model, where the channel is assumed to remain constant over a duration of $\chlen\in\N$ time slots, after which it changes to an independent state, i.e., a random experiment is carried out independently of the previous channel state to establish the new channel state. This model for the temporal behavior is in widespread use. It matches practical systems well, since the signal timing is usually designed to approximately meet this condition for easing the channel state acquisition and equalization tasks. Finally, $\ell_{xy}=\ploss{y-x}$ denotes a non-singular distance-dependent path loss, for which we adopt \begin{equation} \ploss{y-x}=\min(1, \|y-x\|^{-\alpha}) \end{equation} with a path loss exponent $\alpha>2$. Let $\ell_{x}=\ploss{x-o}=\ploss{x}$ be the path loss from a node $x$ to the origin $o$. \subsection{Interference} Interference at time $t$ is measured at the origin $o$ of the plane $\mathbb{R}^2$, which, by Slivnyak's theorem, equals the interference experienced by a typical node of the network. Its power is the sum of the signal powers of all sending nodes in the network (besides the intended signal from a specific sender, which is not considered in this work), yielding \begin{equation} \interf_t=\sum_{x\in\ppp}\kappa\,h_t^2\,\ell_{x}\,\gamma_t\:, \end{equation} where $\gamma_t$ is a Bernoulli random variable indicating whether node $x$ is sending ($\gamma_t=1$) at time $t$ or not ($\gamma_t=0$). \subsection{Classification of correlation sources}\label{ssec:cases} We consider three sources of correlation of interference: node locations, wireless channel (i.e., correlated fading), and traffic.
For each of them, there are three possible options, denoted by a triplet $(i,j,k)\in\{0,1,2\}^3$: \begin{itemize} \item They are constant or the correlation is not considered (denoted by $0$). \item They are random but uncorrelated (denoted by $1$). \item They are random and correlated (denoted by $2$). \end{itemize} This leads to $27$ different cases that have been introduced in more detail and analyzed with respect to temporal correlation of interference in~\cite{schilcher12:tmc}. \section*{Acknowledgments} This work has been supported by the Austrian Science Fund (FWF) under grant P24480-N15 (Dynamics of Interference in Wireless Networks) and by the K-project DeSSnet. The K-Project DeSSnet is funded within the context of COMET -- Competence Centers for Excellent Technologies by the Austrian Ministry for Transport, Innovation and Technology (BMVIT), the Federal Ministry for Digital and Economic Affairs (BMDW), and the federal states of Styria and Carinthia. The program is conducted by the Austrian Research Promotion Agency (FFG). \section{Analysis of the auto-correlation of interference}\label{sec:src} Using the general expressions derived in the previous section, we can now analyze the temporal correlation of interference for the three sources of correlation. We start by treating these correlation sources individually and afterwards look at some combinations that provide interesting insights. All plots have been compared to simulation results, which showed a good match. We refrain from plotting the simulation results as they would crowd the figures without providing any additional insights. \subsection{Correlation by node locations} The locations of the interfering nodes introduce a correlation that can be intuitively interpreted in the following way: If a receiver has close-by interferers, it is more likely to be disturbed in receiving a message than if no interferers are close-by. 
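This intuition can be probed with a short Monte-Carlo sketch (not taken from the paper; all parameter values are illustrative): static nodes are drawn once per realization from a PPP, while fading and ALOHA-type traffic are redrawn independently in every slot, so that only the locations induce correlation (case $(2,1,1)$ with $\msglen=1$).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, radius, alpha, p = 1.0, 15.0, 4.0, 0.5   # density, region radius, path loss exponent, sending prob.
n_runs, n_slots = 3000, 6

I = np.zeros((n_runs, n_slots))
for run in range(n_runs):
    # One static PPP realization: Poisson node count, uniform positions in a disk around o.
    n = rng.poisson(lam * np.pi * radius**2)
    r = radius * np.sqrt(rng.random(n))           # radial distances to the origin
    ell = 1.0 / np.maximum(1.0, r**alpha)         # non-singular path loss min(1, r^-alpha)
    for t in range(n_slots):
        h2 = rng.exponential(1.0, n)              # Rayleigh fading, redrawn every slot
        gamma = rng.random(n) < p                 # ALOHA traffic, redrawn every slot
        I[run, t] = np.sum(h2 * ell * gamma)

# With only the static locations as source of correlation the ACF is flat in tau;
# for Rayleigh fading its theoretical value is p*m/(m+1) = p/2.
rho = {tau: np.corrcoef(I[:, 0], I[:, tau])[0, 1] for tau in (1, 5)}
print(rho)
```

The estimated correlation is essentially the same for both lags and close to $p/2$, which matches the constant ACF of static networks discussed next.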
If there is no mobility at all, this correlation is independent of the time lag $\tau$, i.e., $\cor{\interf_{t_1}}{\interf_{t_1+\tau_1}}=\cor{\interf_{t_1}}{\interf_{t_1+\tau_2}}$ for all positive $\tau_1, \tau_2\in\N$. In the case of mobility, the interference correlation decreases with time~\cite{gong11:icc}, depending on the average speed $\bar{v}$ and the type of mobility. Fig.~\ref{fig:case201} shows the temporal correlation over the time lag $\tau$ for both linear mobility and Brownian motion. For the same average speed $\bar{v}$, the distance traveled after time $\tau$ is on average smaller in the case of Brownian motion, and hence the correlation decreases more slowly with $\tau$. In general, the decrease of correlation depends only on the distance traveled during the time lag $\tau$ or its distribution in case it is random (e.g., for Brownian motion). \begin{figure}[tb] \centering \begin{tikzpicture} \begin{axis}[xlabel={Time lag $\tau$},ylabel={Auto-correlation of interference},ymin=0,ymax=1,xmin=1,xmax=30,grid=both, legend style={at={(0.98,0.98)}, anchor=north east, font=\footnotesize}, legend cell align=left ] \addlegendimage{mark=square,only marks} \addlegendimage{mark=x,only marks} \addlegendimage{mark=o,only marks} \addlegendimage{mark=diamond,only marks} \addlegendimage{mark=star,only marks} \addplot plot[color=black,only marks,mark=square,mark options=solid,style=thick] table[x index=0,y index=1] {case201LMM.txt};\addlegendentry{ ~$\bar{v}=0.1$ } \addplot plot[color=black,only marks,mark=triangle,mark options=solid,style=thick] table[x index=0,y index=2] {case201LMM.txt};\addlegendentry{ ~$\bar{v}=0.2$ } \addplot plot[color=black,only marks ,mark=o,mark options=solid,style=thick] table[x index=0,y index=3] {case201LMM.txt};\addlegendentry{ ~$\bar{v}=0.3$ } \addplot plot[color=black,only marks ,mark=diamond,mark options=solid,style=thick] table[x index=0,y index=4] {case201LMM.txt};\addlegendentry{ ~$\bar{v}=0.4$ } \addplot plot[color=black,only marks
,mark=pentagon,mark options=solid,style=thick] table[x index=0,y index=5] {case201LMM.txt};\addlegendentry{ ~$\bar{v}=0.5$ } \addplot plot[color=black,only marks ,mark=square*,mark options=solid,style=thick] table[x index=0,y index=1] {case201BM.txt}; \addplot plot[color=black,only marks ,mark=triangle*,mark options=solid,style=thick] table[x index=0,y index=2] {case201BM.txt}; \addplot plot[color=black,only marks ,mark=*,mark options=solid,style=thick] table[x index=0,y index=3] {case201BM.txt}; \addplot plot[color=black,only marks ,mark=diamond*,mark options=solid,style=thick] table[x index=0,y index=4] {case201BM.txt}; \addplot plot[color=black,only marks ,mark=pentagon*,mark options=solid,style=thick] table[x index=0,y index=5] {case201BM.txt}; \end{axis} \end{tikzpicture} \caption{Auto-correlation function for the case $(2,0,1)$ for the linear mobility model (open marks) and for Brownian motion (filled marks) with different average speeds $\bar{v}$. The speed is measured in meters per time slot and the sending probability is $p=0.9$.} \label{fig:case201} \end{figure} \subsection{Correlation by wireless channel}\label{sec:case021} The wireless channel is modeled as a block fading channel with length $\chlen$ slots. This means that the channel gain due to multi-path propagation from a potential interferer stays unchanged for $\chlen$ slots and then changes to a stochastically independent value. This change is independent for each of the potential interferers. Hence, in each slot, on average the channels of a fraction $\frac{1}{\chlen}$ of the interferers change to a new state. This assumption introduces a correlation to the interference values of different slots for $\chlen>1$. In the expressions for interference correlation in Theorems~\ref{th:case2xx} and~\ref{th:case0xx}, the effect of the channel of node $x$ is covered by the term $\expect{\fad{x}{t_1}\fad{x}{t_2}}$.
For Nakagami fading, this term depends on the fading parameter $m$, the channel block length $\chlen$, and the time lag $\tau$. It is given by \begin{equation}\label{eq:channel} \expect{\fad{x}{t_1}\fad{x}{t_2}}= \begin{cases} \frac{m+1}{m}-\frac{\tau}{m\chlen} & \mbox{for }\tau<\chlen\\ 1 & \mbox{for }\tau\geq\chlen. \end{cases} \end{equation} In the special case of Rayleigh fading ($m=1$), this term simplifies to $\expect{\fad{x}{t_1}\fad{x}{t_2}}=2-\frac{\tau}{\chlen}$ for $\tau<\chlen$. \textbf{Remark:} In case a node travels at least half the wavelength during a single time slot, the channel can be assumed stochastically independent for consecutive slots, i.e., $\chlen=1$. Under this assumption, fading does not contribute to interference correlation and the equations for the cases $(i,1,k)$ with $i,k\in\{0,1,2\}$ apply. \begin{figure}[tb] \centering \begin{tikzpicture} \begin{axis}[xlabel={Time lag $\tau$},ylabel={Auto-correlation of interference},ymin=0,ymax=1,xmin=1,xmax=20,grid=both,xtick={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}, xticklabels={,2,,4,,6,,8,,10,,12,,14,,16,,18,,20}, legend style={at={(0.98,0.98)}, anchor=north east, font=\footnotesize}, legend cell align=left ] \addplot plot[color=black,mark=star,only marks,style=thick] table[x index=0,y index=5] {case021.txt};\addlegendentry{ ~$\chlen=22$ } \addplot plot[color=black,mark=square,only marks,style=thick] table[x index=0,y index=4] {case021.txt};\addlegendentry{ ~$\chlen=17$ } \addplot plot[color=black,mark=x,only marks,style=thick] table[x index=0,y index=3] {case021.txt};\addlegendentry{ ~$\chlen=12$ } \addplot plot[color=black,mark=diamond,only marks,style=thick] table[x index=0,y index=2] {case021.txt};\addlegendentry{ ~$\chlen=7$ } \addplot plot[color=black,mark=o,only marks,style=thick] table[x index=0,y index=1] {case021.txt};\addlegendentry{ ~$\chlen=2$ } \end{axis} \end{tikzpicture} \caption{Auto-correlation function for the case $(0,2,1)$ for different values of the channel
block length $\chlen$. The Nakagami fading parameter is assumed to be $m=\frac{1}{2}$ and the sending probability is $p=0.9$.} \label{fig:case021} \end{figure} \begin{figure*} \centering \begin{tikzpicture} \draw [color=black!20,fill=black!5,rounded corners=3pt] (-0.7,2.65) rectangle ++(1.4,0.3); \draw [color=black!20,fill=black!5,rounded corners=3pt] (0.9,2.65) rectangle ++(1.4,0.3); \draw [fill=black!10,rounded corners=3pt] (2.5,2.65) rectangle ++(1.4,0.3); \draw [color=black!20,fill=black!5,,rounded corners=3pt] (4.1,2.65) rectangle ++(1.4,0.3); \draw [color=black!20,fill=black!5,rounded corners=3pt] (5.7,2.65) rectangle ++(1.4,0.3); \draw [fill=black!10,rounded corners=3pt] (7.3,2.65) rectangle ++(1.4,0.3); \draw [color=black!20,fill=black!5,rounded corners=3pt] (8.9,2.65) rectangle ++(1.4,0.3); \draw [color=black!20,fill=black!5,rounded corners=3pt] (10.5,2.65) rectangle ++(1.4,0.3); \node [align=center, text width=2cm] at (3.2,2.8) {\scriptsize $t_1$}; \node [align=center, text width=2cm] at (8,2.8) {\scriptsize $t_2$}; \draw (-0.8,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (0,0.05) {\scriptsize $1$}; \draw (0.8,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,0.05) {\scriptsize $2$}; \draw (2.4,-0.1) rectangle ++(1.57,0.3); \node [align=center, text width=2cm] at (3.2,0.05) {\scriptsize $3$}; \draw (-0.80,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (0,0.5) {\scriptsize $1$}; \draw (0.8,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,0.5) {\scriptsize $2$}; \draw (2.4,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (3.2,0.5) {\scriptsize $3$}; \draw (-0.8,0.8) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (0,0.95) {\scriptsize $1$}; \draw (0.8,0.8) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,0.95) {\scriptsize $2$}; \draw (2.4,0.8) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at 
(3.2,0.95) {\scriptsize $3$}; \draw (7.2,0.8) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (8,0.95) {\scriptsize $1$}; \draw (8.8,0.8) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (9.6,0.95) {\scriptsize $2$}; \draw (10.4,0.8) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (11.2,0.95) {\scriptsize $3$}; \draw (5.6,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (6.4,0.5) {\scriptsize $1$}; \draw (7.2,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (8,0.5) {\scriptsize $2$}; \draw (8.8,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (9.6,0.5) {\scriptsize $3$}; \draw (4.03,-0.1) rectangle ++(1.57,0.3); \node [align=center, text width=2cm] at (4.8,0.05) {\scriptsize $1$}; \draw (5.6,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (6.4,0.05) {\scriptsize $2$}; \draw (7.2,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (8,0.05) {\scriptsize $3$}; \draw (0.8,1.25) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,1.4) {\scriptsize $1$}; \draw (2.4,1.25) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (3.2,1.4) {\scriptsize $2$}; \draw (4,1.25) rectangle ++(1.57,0.3); \node [align=center, text width=2cm] at (4.8,1.4) {\scriptsize $3$}; \draw (0.8,1.7) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,1.85) {\scriptsize $1$}; \draw (2.4,1.7) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (3.2,1.85) {\scriptsize $2$}; \draw (4,1.7) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (4.8,1.85) {\scriptsize $3$}; \draw (7.2,1.7) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (8,1.85) {\scriptsize $1$}; \draw (8.8,1.7) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (9.6,1.85) {\scriptsize $2$}; \draw (10.4,1.7) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (11.2,1.85) {\scriptsize $3$}; \draw 
(5.63,1.25) rectangle ++(1.57,0.3); \node [align=center, text width=2cm] at (6.4,1.4) {\scriptsize $1$}; \draw (7.2,1.25) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (8,1.4) {\scriptsize $2$}; \draw (8.8,1.25) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (9.6,1.4) {\scriptsize $3$}; \draw (2.4,2.15) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (3.2,2.3) {\scriptsize $1$}; \draw (4,2.15) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (4.8,2.3) {\scriptsize $2$}; \draw (5.6,2.15) rectangle ++(1.57,0.3); \node [align=center, text width=2cm] at (6.4,2.3) {\scriptsize $3$}; \draw (7.23,2.15) rectangle ++(1.57,0.3); \node [align=center, text width=2cm] at (8,2.3) {\scriptsize $1$}; \draw (8.8,2.15) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (9.6,2.3) {\scriptsize $2$}; \draw (10.4,2.15) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (11.2,2.3) {\scriptsize $3$}; \node [align=center] at (12.9,2.75) {\scriptsize Indices}; \node [align=center] at (12.9,2.3) {\scriptsize $(1,1)$}; \node [align=center] at (12.9,1.85) {\scriptsize $(2,1)$}; \node [align=center] at (12.9,1.4) {\scriptsize $(2,2)$}; \node [align=center] at (12.9,0.95) {\scriptsize $(3,1)$}; \node [align=center] at (12.9,0.5) {\scriptsize $(3,2)$}; \node [align=center] at (12.9,0.05) {\scriptsize $(3,3)$}; \draw [->] (-1,-0.3) -- (12.2,-0.3); \node [align=center, text width=2cm] at (11.7,-0.5) {\scriptsize time $t$}; \end{tikzpicture} \caption{An illustration of the potential starting slots of two messages covering $t_1$ and $t_2$. In the shown setup ($\msglen=3$ and $\tau=3$), there are six possibilities, of which the indices are shown on the right hand side.} \label{fig:txtwo} \end{figure*} When the wireless channel (i.e., fading) is the only source of interference correlation, we are in a static case $(0,2,1)$. 
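The case distinction in~\eqref{eq:channel} reflects the probability that $t_1$ and $t_2$ fall into the same fading block, which weights $\expect{h^4}=\frac{m+1}{m}$ against $\expect{h^2}^2=1$. Treating the block boundary offset as uniformly random, this can be verified exactly by enumeration; a minimal sketch (the function name is ours, not from the paper):

```python
from fractions import Fraction

def fading_cross_moment(m, block_len, tau):
    """E[h^2_t h^2_{t+tau}] for Nakagami-m block fading with block length block_len.

    Same block  -> E[h^4]         = (m+1)/m  (since h^2 ~ Gamma(m, m))
    Other block -> E[h^2] E[h^2]  = 1
    The block boundary offset is uniform over {0, ..., block_len-1}.
    """
    # t and t+tau share a block iff the offset o satisfies o + tau < block_len.
    same = sum(1 for o in range(block_len) if o + tau < block_len)
    p_same = Fraction(same, block_len)
    return p_same * Fraction(m + 1, m) + (1 - p_same) * 1

# Reproduces (m+1)/m - tau/(m*block_len) for tau < block_len, and 1 otherwise.
m, L = 2, 7
for tau in (1, 3, 7, 10):
    print(tau, fading_cross_moment(m, L, tau))
```

The same-block probability $\max(0,\chlen-\tau)/\chlen$ yields exactly the linear decay and the constant tail of~\eqref{eq:channel}.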
A plot of interference correlation in this case is shown in Fig.~\ref{fig:case021} over the time lag $\tau$ for different values of the channel block length $\chlen$. The correlation decreases linearly with the time lag $\tau$ and vanishes for all $\tau\geq\chlen$. For a given time lag, slower fading (higher values of $\chlen$) implies a higher correlation. In the limit for a constant channel, i.e., $\chlen\to\infty$, we get \begin{equation} \lim_{\chlen\to\infty}\cor{\interf_{t}}{\interf_{t+\tau}}=\frac{p}{1+m-mp} \end{equation} independent of $\tau$. \subsection{Correlation by data traffic} \begin{figure}[b!] \centering \begin{tikzpicture} \draw [color=black!20,fill=black!5,rounded corners=3pt] (-0.7,0.85) rectangle ++(1.4,0.3); \draw [fill=black!10,rounded corners=3pt] (0.9,0.85) rectangle ++(1.4,0.3); \node [align=center, text width=2cm] at (1.6,1) {\scriptsize $t_1$}; \draw [color=black!20,fill=black!5,rounded corners=3pt] (2.5,0.85) rectangle ++(1.4,0.3); \draw [fill=black!10,rounded corners=3pt] (4.1,0.85) rectangle ++(1.4,0.3); \node [align=center, text width=2cm] at (4.8,1) {\scriptsize $t_2$}; \draw [color=black!20,fill=black!5,rounded corners=3pt] (5.7,0.85) rectangle ++(1.4,0.3); \draw (-0.8,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (0,0.05) {\scriptsize $1$}; \draw (0.8,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,0.05) {\scriptsize $2$}; \draw (2.4,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (3.2,0.05) {\scriptsize $3$}; \draw (4,-0.1) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (4.8,0.05) {\scriptsize $4$}; \draw (0.8,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (1.6,0.5) {\scriptsize $1$}; \draw (2.4,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (3.2,0.5) {\scriptsize $2$}; \draw (4,0.35) rectangle ++(1.6,0.3); \node [align=center, text width=2cm] at (4.8,0.5) {\scriptsize $3$}; \draw (5.6,0.35) rectangle 
++(1.6,0.3); \node [align=center, text width=2cm] at (6.4,0.5) {\scriptsize $4$}; \draw [->] (-1,-0.3) -- (7.5,-0.3); \node [align=center, text width=2cm] at (7,-0.5) {\scriptsize time $t$}; \end{tikzpicture} \caption{An illustration of the slots in which a transmission can start such that it spans both slots $t_1$ and $t_2$. In the shown setup ($\msglen=4$ and $\tau=2$), there are two possibilities: messages starting at $t_1$ having indices $(1,3)$ and messages starting at $t_1-1$ with indices $(2,4)$.} \label{fig:txone} \end{figure} Correlated traffic is caused by having $\msglen>1$, which impacts interference correlation via the expected value $\expect{\tx{x}{t_1}\,\tx{x}{t_2}}$. \begin{lemma}[Probability of a node sending in two given time slots]\label{lem:traffic} The probability that a node $x$ sends in both slots $t_1$ and $t_2$ is \begin{eqnarray} \lefteqn{\expect{\tx{x}{t_1}\,\tx{x}{t_2}}=\max\big(0,p\,(\msglen-\tau)\big)+\frac{p^2}{1-p(\msglen-1)}}\\\nonumber &&\sum_{i=0}^{\min(\tau-1,\msglen-1)} \sum_{j=1}^{\min(\tau-i,\msglen)} \sum_{k=0}^{\left\lfloor\frac{g}{\msglen}\right\rfloor} \binom{g-k\msglen+k}{k}\\\nonumber &&q^{g-k\msglen}(1-q)^k\:, \end{eqnarray} where $g=\tau-i-j$ and $q=1-\frac{p}{1-p(\msglen-1)}$. \end{lemma} \begin{IEEEproof} Let us assume that a certain node $x$ is sending in both slots $t_1$ and $t_2$. Then, there are two possibilities: (${\rm I}$) a single message could span both slots or (${\rm II}$) two different messages are transmitted in these two slots. A message consists of $\msglen$ time slots; we reference each of them by an index ranging over $1,2,\dots,\msglen$. Let $i$ denote the index of the message at time $t_1$ and $j$ the index at time $t_2$, and write the indices as tuples $(i,j)$.
The probability $\prob{\txone{t_1,t_2}}$ that a single message spans over both $t_1$ and $t_2$ is given by \begin{equation}\label{eq:pr:ptxone} \prob{\txone{t_1,t_2}}= \begin{cases} p\, (\msglen-\tau) & \mbox{for }\msglen>\tau\\ 0 & \mbox{else.} \end{cases} \end{equation} This happens, as shown in Fig.~\ref{fig:txone}, because in each time slot a fraction $p$ of the nodes starts a transmission. For $\msglen>\tau$, there are $\msglen-\tau$ slots for which a message starting there would span both time slots of interest. In this case, the indices $(i,j)$ always differ by the time lag, $j-i=\tau$. For $\msglen\leq\tau$, the time difference between $t_1$ and $t_2$ is at least as large as the message length, and hence it is impossible that a single message spans both slots. The probability $\prob{\txtwo{t_1,t_2}}$ of two different messages being transmitted at slots $t_1$ and $t_2$ is calculated by summing the probabilities of all possible indices $(i,j)$ (see Fig.~\ref{fig:txtwo}). The range of the index $i$ is from $\max\big(1,\msglen-(\tau-1)\big)$ to $\msglen$, i.e., if $\tau\geq\msglen$, we have $i=1,\dots,\msglen$; in case $\tau<\msglen$, the index $i$ has to be large enough to avoid a single message spanning both slots $t_1$ and $t_2$. The range of the index $j$ depends on the value of $i$, as the message covering slot $t_2$ must not overlap with the message covering $t_1$. Hence, $j$ ranges from $1$ to $\min\big(\tau-(\msglen-i),\msglen\big)$, i.e., if there is enough space between $t_1$ and $t_2$, $j$ can go up to $\msglen$; otherwise its maximum value is determined by the case where the two messages are transmitted directly one after another, as for the indices $(1,1)$, $(2,2)$, and $(3,3)$ in Fig.~\ref{fig:txtwo}. In order to calculate the probability of the situation described by an index pair $(i,j)$, we calculate the number of slots between the end of the message covering $t_1$ and the beginning of the message covering $t_2$ by \begin{equation} g=\tau-(\msglen-i)-j\:.
\end{equation} These intermediate slots can be covered by additional messages in case there is enough space, i.e., if $g\geq\msglen$. The number $k$ of messages fitting these intermediate slots is at most $\left\lfloor \frac{g}{\msglen}\right\rfloor$, where $\left\lfloor x \right\rfloor$ denotes the largest integer that is smaller than or equal to $x$. If $k$ messages are present in the intermediate slots, then there are $e=g-k\msglen$ slots unoccupied. The probability of a message starting is, as mentioned in Sec.~\ref{sec:model}, $1-q=\frac{p}{1-p(\msglen-1)}$, while the probability of an empty slot is $q=1-\frac{p}{1-p(\msglen-1)}$. Therefore, the probability that $t_1$ and $t_2$ are occupied by different messages is \begin{eqnarray}\label{eq:pr002:s2ss} \lefteqn{\prob{\txtwo{t_1,t_2}}=\frac{p^2}{1-p(\msglen-1)}\sum_{i=\max(1,\msglen-(\tau-1))}^\msglen \hspace{-5mm} \sum_{j=1}^{\min(\tau-(\msglen-i),\msglen)}} \\\nonumber &&\sum_{k=0}^{\left\lfloor\frac{g}{\msglen}\right\rfloor}\binom{e+k}{k}\left(\frac{p}{1-p(\msglen-1)}\right)^k\, \left(1-\frac{p}{1-p(\msglen-1)}\right)^e\:. \end{eqnarray} Overall, we can sum the two probabilities calculated above to get the expected value $\expect{\tx{x}{t_1}\,\tx{x}{t_2}}=\prob{\txone{t_1,t_2}}+\prob{\txtwo{t_1,t_2}}$. In~\eqref{eq:pr002:s2ss} we substitute $i$ by $\msglen-i$ to get the result. \end{IEEEproof} \begin{corollary}[Simplification for $\tau\leq\msglen$]\label{cor:trafficsimp} In the case that the lag $\tau$ is smaller than or equal to the message length, the result of Lemma~\ref{lem:traffic} simplifies to \begin{equation} \expect{\tx{x}{t_1}\,\tx{x}{t_2}}=p\,(\msglen-\tau) +p\,\frac{\tau(1-{q})+{q}^{\tau+1}-{q}}{1-{q}} \end{equation} where ${q}=1-\frac{p}{1-p(\msglen-1)}$. \end{corollary} \begin{IEEEproof} If we consider the assumption $\tau\leq\msglen$ in~\eqref{eq:pr:ptxone}, only the first case can occur.
In~\eqref{eq:pr002:s2ss} the upper bounds of the first two sums simplify to $\tau-1$ and $\tau-i$, respectively. In the third sum, the upper bound is ${\left\lfloor\frac{g}{\msglen}\right\rfloor}=0$, and hence there is only the single summand with $k=0$. Thus, we have $\binom{e+k}{k}\,\left(\frac{p}{1-p(\msglen-1)}\right)^k=1$ and overall we have \begin{eqnarray} \expect{\tx{x}{t_1}\,\tx{x}{t_2}}&=&p\,(\msglen-\tau) +\frac{p^2}{1-p(\msglen-1)}\\\nonumber &&\sum_{i=0}^{\tau-1} \sum_{j=1}^{\tau-i} \left(1-\frac{p}{1-p(\msglen-1)}\right)^{\tau-i-j}\:. \end{eqnarray} The inner sum of this expression is a geometric series (with the powers being $0,\dots,\tau-i-1$). After inserting the closed-form result of the inner sum, the outer sum also results in a geometric series, but with the first term missing. Applying the sum formula of the geometric series twice yields \begin{eqnarray} \sum_{i=0}^{\tau-1} \sum_{j=1}^{\tau-i} {q}^{\tau-i-j}&=&\sum_{i=0}^{\tau-1} \frac{1-{q}^{\tau-i}}{1-{q}}\\\nonumber &=&\frac{\tau-\sum_{i=0}^{\tau-1}{q}^{\tau-i}}{1-{q}}\\\nonumber &=& \frac{\tau-\frac{1-{q}^{\tau+1}}{1-{q}}+1}{1-{q}}\:, \end{eqnarray} where the additional $1$ arises because the series $\sum_{i=0}^{\tau-1}{q}^{\tau-i}$ starts at the first power of $q$ instead of the zeroth. Applying some basic algebra leads to the result. \end{IEEEproof} We investigate the temporal correlation of interference when the traffic is the only source of correlation (case $(0,0,2)$) with the aid of Fig.~\ref{fig:case002cont}. It shows a heat map of the interference auto-correlation for different message lengths $\msglen$. Correlation is highest for $\tau=1$ and decreases until the lag matches the message length ($\tau=\msglen$), where it is negative. For lags above $\msglen$ it increases to reach a small positive value, from where an oscillating behavior with decreasing amplitude is observed.
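As a sanity check, the simplified expression of Corollary~\ref{cor:trafficsimp} can be verified against a Monte Carlo simulation of the traffic model. The following Python sketch is illustrative only; the single-node renewal model (an idle node starts a $d$-slot message with probability $p/(1-p(d-1))$ in each slot, so that on average a fraction $p$ of all nodes starts a message per slot) is our reading of the model described in the text, and all function names are our own.

```python
# Monte Carlo check of the corollary for tau <= d.
# Assumption (our reading of the traffic model): an idle node starts a
# d-slot message with probability p / (1 - p*(d-1)) in each slot.
import random

def theory(p, d, tau):
    """E[tx(t) * tx(t+tau)] from the corollary, valid for tau <= d."""
    q = 1.0 - p / (1.0 - p * (d - 1))
    return p * (d - tau) + p * (tau * (1 - q) + q ** (tau + 1) - q) / (1 - q)

def simulate(p, d, tau, n_slots=1_000_000, seed=1):
    """Empirical E[tx(t) * tx(t+tau)] for a single node."""
    rng = random.Random(seed)
    start_prob = p / (1.0 - p * (d - 1))   # start probability when idle
    remaining, tx = 0, []
    for _ in range(n_slots):
        if remaining == 0 and rng.random() < start_prob:
            remaining = d                  # begin a new d-slot message
        tx.append(1 if remaining > 0 else 0)
        if remaining:
            remaining -= 1
    joint = sum(tx[t] * tx[t + tau] for t in range(n_slots - tau))
    return joint / (n_slots - tau)

print(theory(0.05, 4, 2))    # ~0.1087
print(simulate(0.05, 4, 2))  # should agree to roughly three decimals
```

Under this model the stationary transmit probability is $p\,\msglen$, and the simulated joint probability matches the closed-form expression.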
The lags at which zero crossings occur depend not only on the message length but also on the sending probability $p$: for higher $p$ the correlation is in general smaller, which implies that it reaches zero for smaller $\tau$ and becomes more negative at $\tau=\msglen$. A detailed study of the impact of $p$ on correlation is presented in the two plots of Fig.~\ref{fig:case002p}. The first important observation is that the traces can be separated into two groups: the influence of $p$ is different for $\msglen\leq\tau$ than for $\msglen>\tau$. For $\msglen\leq\tau$, the correlation is always negative if $\msglen\bmod 2=\tau\bmod 2$; otherwise it is mostly positive, and only for small $p$ can it take small negative values. Furthermore, it converges to zero for $p\to 0$. This behavior can be explained in the following way: Let us assume we have a message length $\msglen=2$. Since on average a fraction $p$ of the nodes start a new transmission in each slot and they are chosen from the nodes that are idle, the nodes start to form two groups. One group starts its transmissions in even slots while the other starts its transmissions in odd slots. This group formation is stronger for higher $p$. Hence, we have a negative correlation for even values of $\tau$, as mostly the same nodes transmit in $t$ and $t+\tau$, while we have a positive correlation for odd values of $\tau$, as mostly nodes of different groups are transmitting in these two slots. For $\msglen>\tau$ there are significantly higher correlation values for small $p$, and in the limit $p\to 0$ the correlation approaches $\lim_{p\to 0}\cor{\interf_{t}}{\interf_{t+\tau}}\stackrel{\msglen>\tau}{=}\frac{\msglen-\tau}{\msglen}$. For higher $p$ the correlation decreases and becomes negative. In all cases, we have $\lim_{p\to\frac{1}{\msglen}}\cor{\interf_{t}}{\interf_{t+\tau}}=0$; in this limit all nodes are always transmitting, and hence there is zero variance and covariance.
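The even/odd group formation for $\msglen=2$ can also be observed directly in simulation: the covariance of a node's transmit indicator, which drives the sign of the interference correlation in case $(0,0,2)$, is negative at even lags and positive at odd lags. The sketch below assumes the same single-node renewal reading of the traffic model as in the text (an idle node starts a $d$-slot message with probability $p/(1-p(d-1))$ per slot); the function name is our own.

```python
# Sign of the transmit-indicator covariance for d = 2: negative at even
# lags, positive at odd lags (the even/odd "group formation" effect).
# Assumption: idle nodes start a d-slot message with probability
# p / (1 - p*(d-1)) per slot; this is our reading of the traffic model.
import random

def tx_covariance(p, d, tau, n_slots=1_000_000, seed=7):
    rng = random.Random(seed)
    start_prob = p / (1.0 - p * (d - 1))
    remaining, tx = 0, []
    for _ in range(n_slots):
        if remaining == 0 and rng.random() < start_prob:
            remaining = d
        tx.append(1 if remaining > 0 else 0)
        if remaining:
            remaining -= 1
    mean = sum(tx) / n_slots               # close to the intensity p*d
    joint = sum(tx[t] * tx[t + tau]
                for t in range(n_slots - tau)) / (n_slots - tau)
    return joint - mean * mean

print(tx_covariance(0.3, 2, 2))  # even lag: negative
print(tx_covariance(0.3, 2, 3))  # odd lag: positive
```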
\begin{figure}[tb] \centering \begin{tikzpicture}[scale=0.9] \begin{axis}[ xlabel=Time lag $\tau$, xmin=1, xmax=21, ymin=1, ymax=11, xtick={1,2,...,20}, xticklabels={1,,3,,5,,7,,9,,11,,13,,15,,17,,19}, mesh/cols=20, mesh/rows=40, ylabel=Message length $d$, colormap={bw}{gray(0cm)=(1); gray(1cm)=(0)}, colorbar, xticklabel style = {xshift=0.18cm}, yticklabel style = {yshift=0.3cm}, view={0}{90}] \addplot3[surf,shader=flat] file {case002heat2.txt}; \end{axis} \end{tikzpicture} \caption{Auto-correlation function for the case $(0,0,2)$ for varying the value of the message length $\msglen$. The sending probability is $p=0.05$.} \label{fig:case002cont} \end{figure} \begin{figure}[!ht] \centering \subfigure[The time lag is $\tau=3$.]{ \begin{tikzpicture} \begin{axis}[xlabel={Sending probability $p$},ylabel={Auto-correlation of interference},ymin=-0.3,ymax=0.7,xmin=0,xmax=0.5,grid=both, legend style={at={(0.98,0.98)}, anchor=north east, font=\footnotesize}, legend cell align=left] \addplot plot[color=black,solid,no marks,style=thick] table[x index=0,y index=9] {case002p.txt};\addlegendentry{ ~$\msglen=10$ } \addplot plot[color=black,dashed,no marks,style=thick] table[x index=0,y index=7] {case002p.txt};\addlegendentry{ ~$\msglen=8$ } \addplot plot[color=black,dotted,no marks,style=thick] table[x index=0,y index=5] {case002p.txt};\addlegendentry{ ~$\msglen=6$ } \addplot plot[color=black,dashdotted,no marks,style=thick] table[x index=0,y index=3] {case002p.txt};\addlegendentry{ ~$\msglen=4$ } \addplot plot[color=black,densely dashed,no marks,style=thick] table[x index=0,y index=2] {case002p.txt};\addlegendentry{ ~$\msglen=3$ } \addplot plot[color=black,densely dotted,no marks,style=thick] table[x index=0,y index=1] {case002p.txt};\addlegendentry{ ~$\msglen=2$ } \end{axis} \end{tikzpicture} \label{fig:case002p1} } \subfigure[The time lag is $\tau=4$.]{ \begin{tikzpicture} \begin{axis}[xlabel={Sending probability $p$},ylabel={Auto-correlation of 
interference},ymin=-0.3,ymax=0.7,xmin=0,xmax=0.5,grid=both, legend style={at={(0.98,0.98)}, anchor=north east, font=\footnotesize}, legend cell align=left] \addplot plot[color=black,solid,no marks,style=thick] table[x index=0,y index=9] {case002p2.txt};\addlegendentry{ ~$\msglen=10$ } \addplot plot[color=black,dashed,no marks,style=thick] table[x index=0,y index=7] {case002p2.txt};\addlegendentry{ ~$\msglen=8$ } \addplot plot[color=black,dotted,no marks,style=thick] table[x index=0,y index=5] {case002p2.txt};\addlegendentry{ ~$\msglen=6$ } \addplot plot[color=black,dashdotted,no marks,style=thick] table[x index=0,y index=3] {case002p2.txt};\addlegendentry{ ~$\msglen=4$ } \addplot plot[color=black,densely dashed,no marks,style=thick] table[x index=0,y index=2] {case002p2.txt};\addlegendentry{ ~$\msglen=3$ } \addplot plot[color=black,densely dotted,no marks,style=thick] table[x index=0,y index=1] {case002p2.txt};\addlegendentry{ ~$\msglen=2$ } \end{axis} \end{tikzpicture} \label{fig:case002p2} } \caption{Auto-correlation function for the case $(0,0,2)$ for varying the sending probability $p$ and the value of the message length $\msglen$.} \label{fig:case002p} \end{figure} \subsection{Correlation by multiple sources} \subsubsection{Channel and traffic} \begin{figure}[tb] \centering \begin{tikzpicture}[scale=0.9] \begin{axis}[ xlabel=Time lag $\tau$, xmin=1, xmax=21, ymin=1, ymax=11, xtick={1,2,...,20}, xticklabels={1,,3,,5,,7,,9,,11,,13,,15,,17,,19}, mesh/cols=20, mesh/rows=40, ylabel=Message length $d$, colormap={bw}{gray(0cm)=(1); gray(1cm)=(0)}, colorbar, xticklabel style = {xshift=0.18cm}, yticklabel style = {yshift=0.3cm}, view={0}{90}] \addplot3[surf,shader=flat] file {case022heat.txt}; \end{axis} \end{tikzpicture} \caption{Auto-correlation function for the case $(0,2,2)$ for varying the value of the message length $\msglen$. 
The sending probability is $p=0.1$, the channel coherence time is $\chlen=22$ and the Nakagami parameter is $m=2$.} \label{fig:case022cont} \end{figure} Fig.~\ref{fig:case022cont} shows a heat map of the auto-correlation when both channel and traffic introduce correlation, which corresponds to case $(0,2,2)$. Results are shown for $\chlen=22$. For given values of $\chlen$ and $\msglen$, the correlation is highest for $\tau=1$ and vanishes for large lags (at least $\tau>\chlen,\msglen$), i.e., in the limit $\tau\to\infty$. For each value of $\msglen$ there is a sharp change of trend at two points: the first is at $\tau=\chlen$ and the second at $\tau=\msglen$. These are the points where the correlation contributions of the channel and of the traffic, respectively, are at their minimum. The contribution of the channel is zero at $\tau=\chlen$ and does not change for higher lags, while the contribution of the traffic is negative at $\tau=\msglen$ and increases when further increasing $\tau$. The case $\msglen=10$ is special, since all nodes are transmitting all the time (i.e., we have a traffic intensity $\mu=1$). In such a setting the traffic does not cause any correlation, and hence the correlation of interference is fully determined by the channel correlation. This corresponds to the case $(0,2,0)$, for which the correlation shows a linear dependence on $\tau$ (topmost row in the heat map). \subsubsection{Node locations, channel, and traffic} When accounting for all three sources of interference correlation and no mobility (case $(2,2,2)$), the auto-correlation evolves as shown in Fig.~\ref{fig:case222cont}. Correlation values start at a rather high level for small $\tau$ and decrease for higher $\tau$, although not monotonically, as there are also ranges of $\tau$ where the correlation slightly increases.
For $\tau$ beyond the message length $\msglen$ and the channel coherence time $\chlen$, the correlation approaches the static value determined by the node locations, i.e., it converges in the limit $\tau\to\infty$ to the values of case $(2,0,1)$. This is explained by noting that the contribution of the node locations to the correlation of interference does not change with $\tau$. Hence, for values of $\tau$ for which the correlation contributions from the other sources vanish, the node locations are the only source of correlation and fully determine its value. The specific trends of the auto-correlation in the plot are determined by the specific network parameters. The first qualitative change of the trend is the first local minimum, i.e., the point where the correlation first starts to increase, which is located at $\tau=\msglen$. This corresponds to case $(0,0,2)$, where a similar local minimum is present in the plot. The second qualitative change of the trend can be found at $\tau=\chlen=14$, which is the time lag for which the wireless channel's correlation vanishes. Naturally, the order of these two events depends on which of the parameters $\msglen$ and $\chlen$ has the higher value. Overall, the correlation behaves similarly to case $(0,2,2)$, but for high $\tau$ the correlation in case $(2,2,2)$ approaches a constant value, while in case $(0,2,2)$ it approaches zero. \begin{figure}[tb] \centering \begin{tikzpicture}[scale=0.9] \begin{axis}[ xlabel=Time lag $\tau$, xmin=1, xmax=21, ymin=1, ymax=11, xtick={1,2,...,20}, xticklabels={1,,3,,5,,7,,9,,11,,13,,15,,17,,19}, mesh/cols=20, mesh/rows=40, ylabel=Message length $d$, colormap={bw}{gray(0cm)=(1); gray(1cm)=(0)}, colorbar, xticklabel style = {xshift=0.18cm}, yticklabel style = {yshift=0.3cm}, view={0}{90}] \addplot3[surf,shader=flat] file {case222heat.txt}; \end{axis} \end{tikzpicture} \caption{Auto-correlation function for the case $(2,2,2)$ when varying the message length $\msglen$.
The sending probability is $p=0.1$, the block fading length is $\chlen=14$, and $m=1$, i.e., Rayleigh fading is adopted. A static network is considered, i.e., no node mobility ($\bar{v}=0$).} \label{fig:case222cont} \end{figure}
\section{Introduction} Advancements in developing high-performance hardware platforms like GPUs have been a significant enabler for shifting machine learning (ML) models, such as neural networks, from rather theoretical concepts to practical solutions to a wide variety of problems. % % However, the computational and storage complexity of these models has forced the majority of computations to be performed on high-end servers or in the cloud. % Meanwhile, the inherent tolerance of many machine learning models to error and approximation has allowed researchers to design systems that benefit from low-precision and approximate computing. % This not only reduces computation and storage cost on existing systems, but also enables efficient deployment of such models on resource-constrained platforms such as smartphones and embedded systems. % While the majority of machine learning frameworks are used through Python, low-precision and approximate computing techniques are typically implemented in Verilog, VHDL, or C. % This has prevented these techniques from being deployed extensively, especially in large-scale models such as deep neural networks (DNNs). % Furthermore, training and deployment of complicated machine learning models using hardware simulation tools is extremely burdensome, if not impossible. % This necessitates redefining approximate computing techniques at a higher level of abstraction for integration with machine learning frameworks. % Moreover, some of the high-level ideas for reducing the storage and computational complexity of deep learning models may not be as effective when implemented in hardware. % This motivates introducing new flows for mapping high-level ideas into synthesizable hardware in order to understand their actual impact on power, throughput, and area. % Lop is a library that bridges the gap between machine learning and efficient hardware realization.
It allows users to choose from a variety of data representations and customize the number of bits used for representing model parameters, intermediate values, etc. % Furthermore, it allows them to choose how arithmetic operations should be performed on each variable, i.e., the standard implementation for a specific data representation or an approximate implementation available in the library. % In other words, Lop introduces a new set of tunable hyperparameters in addition to those that machine learning libraries provide, e.g., the number of layers, types of layers, number of units per layer, and so on. % % Lop can be used to answer questions such as the following: % \begin{itemize}[nolistsep, leftmargin=*, topsep=0pt] % \item{Using fixed-point representation, how many bits are required to represent weights, biases, and activations in a deep neural network to reach a target prediction or classification accuracy \cite{courbariaux2016binarized, li2016ternary, venkatesh2017accelerating}? How many bits should be used to represent the integral and fractional parts? Can a floating-point representation with a smaller number of bits reach the same level of accuracy? How many bits should be used to represent the exponent and mantissa? How does fixed-point representation compare with floating-point representation in terms of power/energy consumption, throughput, and area?} % \item{In a deep neural network, can a layer with a lower range of activation values use fewer bits compared to a layer with a higher range of activation values \cite{judd2015reduced}?
If so, how many bits are required to represent activations at each layer?} % \item{During training of a deep neural network, will representing weights and biases with a low bit-width during the forward pass and a high bit-width during the backward pass affect the quality of the model \cite{zhou2016dorefa, chen2017fxpnet}?} % \item{How would converting some pre-trained floating-point weights to fixed-point numbers with a predefined bit-width affect prediction accuracy in a deep neural network? Would retraining using the new representation mitigate the accuracy loss due to conversion?} % \item{How would replacing some/all multipliers with an approximate multiplier affect prediction or classification accuracy? How much power/energy will be saved by using the approximate multiplier?} % \end{itemize} The remainder of this paper is organized as follows. % Section~\ref{sec:related-work} reviews some of the related work in low-precision computing in deep neural networks and approximate arithmetic. % Next, Section~\ref{sec:preliminaries} explains some preliminaries and Section~\ref{sec:framework} details the framework. % After that, Section~\ref{sec:results} presents experimental results and finally, Section~\ref{sec:conclusion} concludes the paper. % \section{Related Work} \label{sec:related-work} Designing machine learning models that are efficient in terms of power/energy consumption, area, and/or memory has been widely studied in the past few years \cite{nazemi2017high,lin2018fft,nazemi2018hardware}. % Having such efficient models is particularly important for energy- and/or thermal-constrained devices such as smartphones \cite{dousti2015thermtap}. % Among different methods for efficient implementation of machine learning models, there has been a considerable amount of work on using various representations such as fixed-point or low bit-width numbers for deep learning applications.
% This includes using low bit-width floating-point representation for weights and fixed-point representation for activations \cite{lai2017deep}, dynamic fixed-point representation \cite{na2016speeding,gupta2015deep}, fixed-point quantization of clustered weights \cite{mellempudi2017mixed}, binary or ternary weights and activations \cite{courbariaux2016binarized,chen2017fxpnet,li2016ternary,venkatesh2017accelerating,zhou2016dorefa}, and logarithmic representation \cite{miyashita2016convolutional}, to name but a few. % % % % % % % % % Looking at this problem from a hardware design point of view, many researchers have proposed approximate computing techniques for machine learning applications or general-purpose approximate arithmetic units like multipliers and adders that can potentially be used in machine learning applications. % Gysel \textit{et~al.} \cite{gysel2016hardware} present a framework that can condense a convolutional neural network by using fixed-point representation for weights and activations. % Zhang \textit{et~al.} \cite{zhang2015approxann} introduce a framework that can approximate computation and memory accesses in an artificial neural network by characterizing the impact of different neurons on output quality. % Shafique \textit{et~al.} \cite{shafique2016cross} introduce an open-source library of accurate and approximate arithmetic modules and accelerators; however, their modules are implemented in C and VHDL, which prevents their seamless integration into existing machine learning frameworks. % Despite the large amount of work in this area, no existing library allows design space exploration using various customizable data representations and approximate arithmetic operations. % Additionally, most prior work only compares different representations in terms of the storage requirement for saving weights, not the power/energy efficiency, throughput, and area of their hardware realizations.
% Lop addresses these issues by integrating various customizable data representations and arithmetic operations into some of the machine learning frameworks for high-level simulations, and by providing parameterized hardware implementations of the same data representations and arithmetic operations to compare designs based on power/energy dissipation, throughput, and area figures. % \section{Preliminaries} \label{sec:preliminaries} In a typical DNN, there are neurons and synaptic connections among them, which carry activation values (activations for short). % A neuron is simply a node in the underlying graph of the DNN, whereas a synaptic connection is an edge in that graph. % Activations carry the output value of a neuron after the dot product calculation and the application of the nonlinear activation function. % A neuron also has hidden-variable inputs, namely weights and biases. % These weights and biases are assigned fixed values after the DNN training is completed and remain fixed during inference. % On the other hand, activations carry a range of values [min, max] over time as a function of the data that is presented to the inputs of the DNN. % Therefore, each weight, bias, or activation in a trained DNN has either a fixed scalar value assignment or a value range assignment. % Using too many different data representations can result in a very high implementation cost due to the need to convert back and forth among these representations as input values are forward-propagated through the network to obtain the classification or recognition result at the output. To avoid this, one should preferably partition the set of nodes, connections, weights, and biases into a small number of different domains where within each domain the choice of data representation and exact vs. approximate arithmetic operation is fixed.
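As a toy illustration of such a partition, the sketch below groups hypothetical layers into representation domains and counts the conversions needed along the forward pass (layer names and representation labels are our own, not part of Lop):

```python
# Hypothetical sketch: partitioning a network's layers into
# representation domains. Conversions are needed only at boundaries
# between domains, so keeping the number of domains small keeps the
# conversion overhead manageable.

domains = [                      # (layer, representation domain)
    ("conv1", "fixed<12,4>"),
    ("conv2", "fixed<12,4>"),    # same domain as conv1: no conversion
    ("fc1",   "fixed<16,6>"),
    ("fc2",   "fixed<16,6>"),
]

def boundary_conversions(domains):
    """Count representation conversions along the forward pass."""
    return sum(1 for a, b in zip(domains, domains[1:]) if a[1] != b[1])

print(boundary_conversions(domains))  # 1: only between conv2 and fc1
```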
% In this case, since data representations need to be converted only when data moves across different parts, and since the number of parts is small, the aforementioned implementation cost can be kept under control. % During training of a DNN, weights and biases also carry a value range, and there are additionally gradients that contain updates to the model parameters during backward propagation. % We will refer to the set of weights, biases, activations, and gradients as the {\em WBAG set} from here on. \section{Proposed Framework} \label{sec:framework} Fig.~\ref{fig:framework} shows a high-level diagram of Lop and its interaction with other libraries and tools. % The Python module of Lop, called LopPy, implements various data representations, including fixed-point and floating-point numbers, as well as different low-precision and approximate computing methods, including some of the state-of-the-art approximate adders, multipliers, and dividers. % LopPy allows these data representations and approximate computing methods to be integrated into some of the existing machine learning frameworks in order to study the quality of various ML models under these customized computations. % ScaLop is LopPy's counterpart, which is implemented in Scala and interacts closely with Chisel \cite{bachrach2012chisel}. % It includes implementations of all data representations and approximate computing methods available in LopPy, which can be used to synthesize customized computations on target platforms such as FPGAs or ASICs. % The use of Python and Scala interfaces in Lop enables a high degree of reconfigurability, platform-independence, and programmability. % \begin{figure}[b] \centering \includegraphics[width=1\columnwidth]{figs/framework.pdf} \caption{High-level diagram demonstrating Lop in practice. Various models (e.g.
networks with different architectures) are provided to the machine learning library, candidate data representations and arithmetic operations are fed to LopPy, viable configurations for each machine learning model are found and fed to ScaLop, synthesizable Verilog files are generated by Chisel, and different metrics are produced by the synthesis tool for comparing hardware cost.} \label{fig:framework} \end{figure} To illustrate different features of Lop throughout the paper, we train a deep convolutional neural network (DCNN) using single-precision floating-point values and use different data representations and arithmetic operations to find an efficient inference engine. % Fig.~\ref{fig:dcnn-arch} details the architecture of this DCNN, along with the shape of different layers. % The objective of this network is to classify handwritten digits of the MNIST dataset \cite{lecun2010mnist} into one of ten classes. % % \begin{figure}[tb] \centering \includegraphics[width=0.9\columnwidth]{figs/network.pdf} \caption{DCNN architecture and shape of activations.} \label{fig:dcnn-arch} \end{figure} \subsection{Data Representation \& Arithmetic Operations} This section describes details of different data representations and approximate arithmetic operations implemented in Lop. % Various choices of data representations and arithmetic operations may be used at different granularity levels in a machine learning model. % Indeed, one may use a custom data representation and a corresponding approximate multiplier for the whole DNN or, alternatively, use different data representations and mixtures of exact and approximate computing blocks for different parts of the DNN. As an example of the latter scheme, an 8-bit floating-point representation and exact multipliers may be employed in the first layer of a DNN, a 12-bit floating-point representation and truncation-based approximate multipliers in the second layer, and so on.
% Similarly, one may use an 8-bit fixed-point representation during the forward pass and a 16-bit fixed-point representation during the backward pass of training. % \subsubsection{Fixed-Point and Integer Data Representations} Fixed-point representation breaks down the bits into an integral part and a fractional part. % % % A fixed-point number can be thought of as an integer that is multiplied by a scaling factor. % Therefore, basic operations on fixed-point numbers such as addition and multiplication are similar to integer operations, but take the scaling factor into account. % This is the main reason that fixed-point arithmetic operations are very efficient in terms of hardware implementation. % Integer representation is a special case of fixed-point where the number of fractional bits is set to zero. \subsubsection{Floating-Point Data Representation} Floating-point representation uses an exponent and a mantissa to define a number according to % \begin{equation*} n = \mathit{mantissa} \times \mathit{base}^{\mathit{exponent}}. \end{equation*} % This allows floating-point numbers to have a large dynamic range, but introduces complications for hardware implementation. % \subsubsection{Arithmetic Operations} On top of the standard operations for each data representation, Lop implements approximate arithmetic operations inspired by some of the existing methods, such as the ones introduced in \cite{hashemi2015drum, imani2017cfpu, narayanamoorthy2015energy, chang2010low}. % These operations can be combined with one of the representations described earlier, assuming that the approximate computing method is compatible with the representation. % In cases where the work in the literature is limited to a specific bit-width, we have generalized the reported work to account for arbitrary bit-widths. % \subsection{Exploration Strategy} % % % % The tunable hyperparameters introduced in Lop include the choice of data representation, the number of bits allocated to each field of the data representation, and the choice of arithmetic operations.
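To make the fixed-point hyperparameters concrete, the integer-times-scaling-factor view described above can be sketched as follows (a minimal illustration with our own function names and a round-to-nearest, saturate-on-overflow policy; it is not the actual LopPy implementation):

```python
# Minimal fixed-point sketch: a value is an integer scaled by
# 2**(-frac_bits). Function names and the rounding/saturation policy
# are our assumptions, not LopPy's actual implementation.

def to_fixed(x, int_bits, frac_bits):
    """Quantize x to a sign-magnitude fixed-point value: round the
    underlying integer x * 2**frac_bits to the nearest integer and
    saturate it to int_bits + frac_bits magnitude bits."""
    scale = 2 ** frac_bits
    limit = (2 ** (int_bits + frac_bits) - 1) / scale
    q = round(x * scale) / scale
    return max(-limit, min(limit, q))

def fixed_mul(a, b, int_bits, frac_bits):
    """Multiply two already-quantized values exactly, then re-quantize
    the product back to the same format (rescaling step)."""
    return to_fixed(a * b, int_bits, frac_bits)

print(to_fixed(1.337, 4, 4))        # 1.3125: nearest multiple of 1/16
print(fixed_mul(1.5, 2.25, 4, 4))   # 3.375: exactly representable
print(to_fixed(100.0, 4, 4))        # 15.9375: saturated
```

Integer representation falls out as the special case `frac_bits = 0`, matching the text.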
% Among these hyperparameters, the number of bits allocated to the field that determines the value range of the WBAG set in the DNN under study (i.e., the integral part in fixed-point and the exponent in floating-point representation) can easily be determined based on network simulations. % In practice, the range is usually small, which means only a few bits of the data representation are needed to precisely capture the range of said values. % Because customized data representations and approximate arithmetic methods may be used at different granularity levels, the values in a network are partitioned into parts where the data representation and arithmetic operations within each part are the same. % For example, if a network is being optimized layer-wise, each part will include the WBAG set of exactly one layer. Evidently, when there is no need to use different representations and arithmetic operations within two adjacent layers, they can be combined into the same part. % Note that during DNN training, each element of the WBAG set assumes a range of values that must be determined by dumping the values of this element as the network is being trained. % On the other hand, the weight and bias elements in the WBAG set assume predetermined and fixed values during inference, and only the activations exhibit a non-scalar value range, which is itself determined by dumping activation values for the complete set of training data (note that gradients do not matter and are ignored during inference; so in fact we are only interested in the WBA set). Table~\ref{table:range} summarizes value ranges for the network of Fig.~\ref{fig:dcnn-arch} assuming layer-wise optimization.
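Given such ranges, the integral-bit requirement of a sign-magnitude fixed-point format can be computed with a small helper (hypothetical, not part of Lop; the sign bit is counted separately, as in the text):

```python
# Hypothetical helper reproducing the integral-bit calculation for the
# per-layer value ranges: a sign-magnitude fixed-point format needs
# ceil(log2(max |value|)) integral bits, with one extra sign bit
# accounted for separately.
import math

def integral_bits(lo, hi):
    return max(1, math.ceil(math.log2(max(abs(lo), abs(hi)))))

ranges = {"CONV1": (-1.45, 1.15), "CONV2": (-3.33, 2.45),
          "FC1": (-9.85, 6.80), "FC2": (-28.78, 35.76)}
for layer, (lo, hi) in ranges.items():
    print(layer, integral_bits(lo, hi))   # FC1 -> 4
```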
% \begin{table}[h] \centering \captionsetup{justification=centering} \caption{Value range of weights, biases, and activations in each layer of the network of Fig.~\ref{fig:dcnn-arch}} \label{table:range} \resizebox{\columnwidth}{!}{% \begin{tabular}{l cccc} Layer & CONV1 & CONV2 & FC1 & FC2 \\ \midrule Range & [-1.45, 1.15] & [-3.33, 2.45] & [-9.85, 6.80] & [-28.78, 35.76] \end{tabular} } \end{table} Given the value range of each part, one can calculate the number of bits that are required for representing that range. % For example, to support the value range of the first fully-connected layer (FC1), a fixed-point representation requires four bits in the integral part (in a sign-magnitude format). % However, because the value range of partial sums may be greater than the value ranges mentioned in Table~\ref{table:range}, we extend the number of bits to a larger interval to ensure correct arithmetic operations. As a result, instead of setting the number of bits for representing the integral part of values in FC1 to four, the number of bits will be chosen from an interval that is lower bounded by four, e.g., [4, 7]. % We must add to this bit count another bit to represent the sign. % A similar analysis may be performed for determining the exponent width in a floating-point representation. % Unfortunately, the value ranges do not help us determine lower or upper bounds on the number of bits needed for the part of the data representation that determines the computational accuracy (i.e., the fractional part in the fixed-point and the mantissa in the floating-point representation). % Therefore, here we resort to enumerating the bit count of this part of the data representation in some predefined interval, e.g., [4, 12]. % We refer to the intervals for different parts of the data representation as bit count intervals, or BCIs for short. % It should be noted that using exact or approximate arithmetic operations affects BCIs.
% For example, an approximate floating-point multiplier may need a larger number of bits in the mantissa to achieve acceptable classification or prediction accuracy. % To find the best data representation and arithmetic operation for each part of the partition, the parts of the DNN are sorted topologically, starting from the input layer and moving towards the output layer. % After that, the data representation and arithmetic operation for each part is found according to its BCI such that it minimizes hardware cost subject to a bounded loss in classification or prediction accuracy. % Throughout this process, the parts that come before the part under study are implemented with their optimized data representation and arithmetic operation, while the parts that come after the part under study are implemented with full precision and exact operations to ensure they do not introduce additional loss in classification or prediction accuracy. % This process continues until all parts are optimized. Optionally, a second pass of optimization can be performed for quality recovery. % During this pass, the objective is to maximize classification or prediction accuracy subject to a bounded increase in hardware cost. % Throughout this process, the parts are optimized in the same order in which they were processed during the first pass of optimization. % The difference, though, is that the parts that come after the part under study are implemented with the optimized representation that was found during the first pass of optimization. % The bounded increase in hardware cost can be translated into constraints on BCIs. % For example, the representation for each part may only use one additional bit compared to the representation that was found during the first pass of optimization. % \subsection{LopPy} LopPy implements a Numeric class in Python for each of the data representations described earlier.
These representations can be customized by the user in terms of the number of bits allocated to each field, e.g., the number of bits representing the exponent and mantissa of a floating-point number. The implementation also includes arithmetic operations such as multiplication/division, addition/subtraction, exponentiation, and comparison, and is compatible with both Python 2 and Python 3. Additionally, there are behavioral implementations of approximate arithmetic modules that can be combined with any of the data representations. For example, a user may choose an 8-bit floating-point representation that allocates one bit to the sign, four bits to the exponent, and three bits to the mantissa. Furthermore, the user may replace the standard multiplication and division with an approximate method that is compatible with the floating-point representation, while keeping the other arithmetic operations untouched.

All implemented Numeric classes accept strings, floating-point numbers, and integers in their constructors. This enables using different configurations for representing numbers across the layers of a deep neural network, during the forward and backward passes of training, etc. For example, when two related layers of a deep neural network are not compatible in terms of data representation or number of bits per field, values in the input layer are converted to an intermediate floating-point representation and then to a representation that is compatible with the output layer.

The following code snippet demonstrates an example of inference on the network of Fig.~\ref{fig:dcnn-arch} with 12-bit and 16-bit fixed-point values for the convolutional and fully-connected layers, respectively. For each representation, the number of integral and fractional bits is set separately.

\lstinputlisting[language=Python]{code/to_dtype.py}

There are a few optimizations in LopPy that increase performance and reduce memory usage.
The first is a result of LopPy's compatibility with Cython. Cython produces a standard Python module that can be imported in other modules; the Cython-compatible module, however, is translated into C, which is further compiled to machine code, resulting in faster code. Cython programs usually consume fewer computing resources such as processing cycles and memory. On our benchmarks, we achieved a 2x performance improvement by using the Cython-generated modules of our Numeric classes. As a result, a user may use the Python variant of LopPy for quick and easy development and testing, and the Cython variant in production code.

Another performance advantage comes from the use of \texttt{\_\_slots\_\_} for defining instance variables. This restricts the valid set of attributes to the ones listed in \texttt{\_\_slots\_\_} and therefore allows efficient storage of attributes in an array. It has been shown that using \texttt{\_\_slots\_\_} can increase performance by 15-30\% \cite{slots-stackoverflow}. Additionally, using \texttt{\_\_slots\_\_} leads to low, predictable memory usage, in contrast to using a \texttt{\_\_dict\_\_}, which is the default way of storing instance variables in Python. It has been shown that using \texttt{\_\_slots\_\_} can improve memory footprint by around 70\% compared to using \texttt{\_\_dict\_\_} \cite{slots-stackoverflow-2}.

It is worth mentioning that the application of LopPy goes beyond machine learning. For example, a user who has applied SciPy to a problem in signal processing, image processing, or linear algebra may use LopPy to see how the objective is affected when a different data representation or an approximate computing technique is applied.

\subsection{ScaLop}
\label{sec:scalop}
ScaLop has a similar implementation to LopPy, but is used for hardware design and analysis.
It defines the same data representations and approximate computing methods in a way that is compatible with Chisel. While the majority of prior work compares various data representations only in terms of the memory required for storing weights, ScaLop allows a full comparison of various configurations in terms of power consumption, throughput, and area, due to its seamless integration into existing systems implemented in Chisel.

One of the advantages of Chisel that makes it suitable for our framework is its automatic width inference, which allows users to modify the bit-width of a data representation without having to manually modify other dependent modules. Furthermore, FIRRTL, the intermediate representation generated during RTL generation, introduces a great degree of compile-time reconfigurability. Additionally, Chisel can generate both synthesizable Verilog files for synthesis on target platforms and a C++ representation of circuits for fast simulation using Verilator \cite{snyder2004verilator}.

The following code snippet illustrates an example of defining a processing element (PE) that consists of a multiplier and an adder, in which inputs and outputs are fixed-point numbers with six bits in the integral part and eight bits in the fractional part. These arithmetic operations may be replaced with another data representation or approximate arithmetic unit that is available in the library.

\lstinputlisting[language=Scala]{code/PE.scala}

It should be noted that one may integrate ScaLop modules into an existing Verilog design without having the design implemented in Chisel: Verilog files for standard and approximate operations can be generated using Chisel and substituted for the corresponding modules in the Verilog design. As a result, ScaLop may be used directly in existing Chisel projects or indirectly, through the generation of Verilog modules, in existing Verilog designs.
It is worth mentioning that Chisel is a high-level language that may not produce the most efficient Verilog implementation. As a result, the estimated hardware cost is an upper bound, and the user may need to fine-tune the Verilog code to achieve higher power/energy efficiency, increased throughput, and lower area.

\subsection{Extending Lop}
Lop allows users to easily define new data representations and arithmetic operations. For example, suppose that a user wants to implement a neural network where weights and activations are 0/1 binary values and multiply operations are replaced with XNOR, e.g., a network similar to \cite{courbariaux2016binarized}. Because existing libraries implement operations such as two-dimensional convolutions using multiply and add operations, the user would need to define convolutions from scratch to use XNOR instead of multiplication. This includes transforming inputs into a Toeplitz matrix, implementing different strides and paddings, etc. Lop provides a simple solution to this problem: the user can define a new data representation based on the fixed-point representation in which the number of integral bits is one and there are no fractional bits, hence achieving binary values. Furthermore, the multiply operation is overridden to implement XNOR instead of multiplication. As a result, when a machine learning library applies a multiplication within a convolution operation, XNOR is called under the hood. Therefore, the user may use the functionality of the machine learning library without redefining basic operations. The following code snippet illustrates one such implementation.

\lstinputlisting[language=Python]{code/xnor.py}

\section{Experimental Results}
\label{sec:results}
\subsection{Software Simulation}
\label{sec:sw-results}
The trained model of Fig.~\ref{fig:dcnn-arch} is able to classify test data with 99.1\% accuracy using single-precision floating-point values.
This classification accuracy is considered the baseline, and all other accuracies are normalized to this value for easier comparison. This section explains how other data representations and approximate arithmetic operations may be used to design an efficient inference engine for this network. Table~\ref{table:dtypes} summarizes the data representations and approximate computing methods used in this section.
\begin{table}[h]
\centering
\captionsetup{justification=centering}
\caption{Summary of notation}
\label{table:dtypes}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l | l}
Notation & Description \\ \hline
FL(e,~m) & Floating-point representation with $e$ exponent bits and $m$ mantissa bits. \\
I(e,~m) & Similar to FL(e,~m), but with approximate multiplier based on \cite{imani2017cfpu}. \\
FI(i,~f) & Fixed-point representation with $i$ integral bits and $f$ fractional bits. \\
H(i,~f,~t) & Similar to FI(i,~f), but with approximate multiplier of width $t$ based on \cite{hashemi2015drum}.
\end{tabular}
}
\end{table}
Table~\ref{table:sw-float} summarizes the normalized classification accuracy for some of the explored customized computations based on the floating-point representation. It includes representations with different bit-widths in each layer of the network, as well as configurations where some or all multiply operations are replaced with approximate multipliers. The customized computations that achieve the same classification accuracy as the baseline, i.e., 100\% relative accuracy, are selected for hardware realization in the next step.
\begin{table}[h]
\centering
\captionsetup{justification=centering}
\caption{Classification accuracy for different customized computations based on floating-point representation}
\label{table:sw-float}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccc}
\toprule
\multicolumn{4}{c}{Layers} & \multirow{2}{*}{Relative Accuracy} \\
\cmidrule{1-4}
CONV1 & CONV2 & FC1 & FC2 & {} \\
\midrule
FL(4, 8) & FL(4, 9) & FL(4, 8) & FL(4, 9) & 98.98\% \\
\textbf{FL(4, 9)} & \textbf{FL(4, 9)} & \textbf{FL(4, 9)} & \textbf{FL(4, 9)} & \textbf{100\%} \\
I(4, 8) & I(4, 9) & I(4, 8) & I(4, 9) & 94.90\% \\
I(4, 9) & I(4, 9) & I(4, 9) & I(4, 9) & 94.90\% \\
\textbf{I(5, 10)} & \textbf{I(5, 10)} & \textbf{I(5, 10)} & \textbf{I(5, 10)} & \textbf{100\%} \\
\bottomrule
\end{tabular}
}
\end{table}
Similarly, Table~\ref{table:sw-fixed} summarizes the normalized classification accuracy for some of the explored customized computations based on the fixed-point representation. Among the customized computations that meet the baseline classification accuracy, FI(6, 8) has the lowest number of bits and avoids complications such as the leading-one detector and barrel shifter used in \cite{hashemi2015drum}. As a result, this data representation is selected for hardware realization in the next step.
\begin{table}[h]
\centering
\captionsetup{justification=centering}
\caption{Classification accuracy for different customized computations based on fixed-point representation}
\label{table:sw-fixed}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{ccccc}
\toprule
\multicolumn{4}{c}{Layers} & \multirow{2}{*}{Relative Accuracy} \\
\cmidrule{1-4}
CONV1 & CONV2 & FC1 & FC2 & {} \\
\midrule
FI(5, 8) & FI(5, 8) & FI(6, 8) & FI(6, 8) & 98.98\% \\
FI(6, 8) & FI(6, 8) & H(8, 8, 14) & H(8, 8, 14) & 100\% \\
H(6, 8, 12) & H(6, 8, 12) & H(8, 8, 14) & H(8, 8, 14) & 100\% \\
\textbf{FI(6, 8)} & \textbf{FI(6, 8)} & \textbf{FI(6, 8)} & \textbf{FI(6, 8)} & \textbf{100\%} \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Hardware Realization}
To compare the hardware cost of various implementations using different data representations and arithmetic operations, we take an approach similar to \cite{sharma2016dnnweaver}, where the deep neural network is mapped to a set of processing elements (i.e., the datapath) and proper control signals are generated to schedule computations on the PEs. The difference, though, is that our implementation consists of 500 PEs in which the multiplier and adder operate on customized data representations and may be exact or approximate. The target FPGA is part of the Arria 10 family, which includes 427,200 adaptive logic modules (ALMs), 55,562,240 bits of block RAM, and 1518 DSP blocks.

Table~\ref{table:fpga-results} compares the hardware cost of implementing this datapath using the data representations and approximate arithmetic operations that were found viable in the previous section (Tables~\ref{table:sw-float}~and~\ref{table:sw-fixed}). The representation shown in the table is used for all layers of the network. The table also includes two baseline implementations using single-precision and half-precision floating-point representations, respectively (float32 and float16).
\begin{table}[h]
\centering
\caption{Hardware Cost of Various Implementations}
\label{table:fpga-results}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lccccc}
\toprule
\multirow{2}{*}{Representation} & ALMs & DSPs & Clock & Power & Energy Efficiency \\
{} & count (util. factor) & count (util. factor) & (MHz) & (W) & (Gops/J) \\
\midrule
float32 & 209,805 (49\%) & 500 (33\%) & 94.41 & 12.38 & 3.81 \\
float16 & 101,644 (24\%) & 500 (33\%) & 113.86 & 7.30 & 7.80 \\
FL(4, 9) & 93,500 (22\%) & 500 (33\%) & 115.89 & 6.68 & 8.67 \\
I(5, 10) & 92,111 (22\%) & 0 (0\%) & 116.80 & 6.28 & 9.30 \\
FI(6, 8) & 15,452 (4\%) & 500 (33\%) & 201.13 & 4.90 & 20.52 \\
\bottomrule
\end{tabular}
}
\end{table}
There are a few interesting conclusions that can be drawn from Table~\ref{table:fpga-results}. First, it shows the potential of Lop for integrating approximate computing techniques into large-scale systems. The I(5,~10)-based realization of the said datapath achieves the baseline accuracy without using any DSP blocks of the FPGA and consumes 50\% and 14\% less power than the single-precision and half-precision floating-point baselines, respectively. While reference \cite{imani2017cfpu} shows the effectiveness of this approximate computing method on smaller benchmarks, Lop allows using it in a convolutional neural network, which has a much more complicated design. Moreover, the I(5,~10)-based realization has a peak clock speed that is 24\% and 3\% higher than the said baselines, respectively. The reported data shows that the I(5,~10)-based realization achieves an overall energy efficiency (e.g., MIPS/Watt or ops/Joule) increase of 144.09\% and 19.23\% over the baseline implementations, respectively.

Second, it can be observed that the fixed-point representation FI(6,~8) consumes about one half to one third of the power of the baseline implementations, can operate twice as fast, and utilizes considerably fewer ALMs.
Additionally, it improves energy efficiency by 438.58\% and 163.08\% compared to the baselines. However, compared to I(5,~10), it requires 500 DSP blocks, which may be considered a disadvantage. Finally, FL(4,~9) improves power consumption by 46\% and 9\% compared to single-precision and half-precision floating-point, respectively, while achieving the same prediction accuracy. Additionally, for this representation, the number of ALMs and the clock frequency are slightly better than those of the half-precision floating-point representation.

This information can be used to decide on the appropriate data representation and approximate computing methods based on a platform's available resources. For example, if a platform has a limited number of DSP blocks, then I(5,~10) is a good choice because it is a multiplier-free implementation. On the other hand, if power consumption, throughput, and consumed ALMs are of higher importance, a fixed-point representation is preferred. Finally, if a floating-point representation is desired, then FL(4,~9) has the lowest number of bits that can achieve the baseline classification accuracy.

\balance
\section{Conclusion}
\label{sec:conclusion}
In this work, we presented Lop, a library for cross-dimensional comparison of deploying different customized data representations and approximate computing techniques. Lop consists of a Python module, called LopPy, that allows design space exploration using various data representations and approximate computing techniques. The counterpart of LopPy for hardware analysis, called ScaLop, is compatible with Chisel and allows designers to compare the hardware cost of their designs for configurations found viable using LopPy. The use of Python and Scala interfaces in this framework enables a high degree of reconfigurability, platform-independence, and programmability.
While Lop is mainly targeted at machine learning applications, it can be used in a wide variety of applications that involve low-precision and approximate computing, such as near-sensor computing.

\section*{Acknowledgements}
This research was sponsored in part by a contract from the National Science Foundation.

\bibliographystyle{unsrt}
{\footnotesize
\section{Introduction} We analyze the behavior of an anisotropic one-phase Stefan-type problem with periodic coefficients on an exterior domain in dimension $n \geq 3$. Our purpose is to investigate the asymptotic behavior of the solution of the problem \eqref{Stefan} below and of its free boundary as time $t \to \infty$. The results in this paper generalize our previous work in \cite{PV} for the isotropic case. We consider a compact set $K \subset \mathbb{R}^n$ that represents a source. We assume that $0 \in \operatorname{int}K$ and $K$ has a sufficiently regular boundary, $\partial K \in C^{1,1}$ for example. The one-phase Stefan-type problem with anisotropic diffusion is to find a function $v(x,t): \mathbb{R}^n \times (0,\infty) \rightarrow [0,\infty)$ satisfying \begin{equation} \label{Stefan} \left\{ \begin{aligned} v_t- D_i(a_{ij}D_jv)&=0 && \text{ in } \{v>0\} \setminus K ,\\ v &=1 && \text{ on } K,\\ \frac{v_t}{|Dv|} &=g\,a_{ij}\,D_jv\, \nu_i && \text{ on }\partial \{v>0\},\\ v(x,0)&=v_0 &&\text{ on } \mathbb{R}^n, \end{aligned} \right. \end{equation} where $D$ is the space gradient, $D_i$ is the partial derivative with respect to $x_i$, $v_t$ is the partial derivative of $v$ with respect to the time variable $t$, and $\nu=\nu(x,t)$ is the inward spatial unit normal vector of $\partial \{v>0\}$ at a point $(x,t)$. Here we use the Einstein summation convention. The Stefan problem is a free boundary problem of parabolic type for phase transitions, typically describing the melting of ice in contact with a region of water. Here we consider the one-phase problem, where the temperature is assumed to be maintained at $0$ in one of the phases. We prescribe the Dirichlet boundary data $1$ on the fixed source $K$ and an initial temperature distribution $v_0$. Note that the results in this paper apply to more general time-independent positive fixed boundary data; the constant function $1$ is taken only to simplify the notation.
We also specify an inhomogeneous medium with the latent heat of phase transition $L(x)=\frac{1}{g(x)}$ and an anisotropic diffusion with the thermal conductivity coefficients given by $a_{ij}(x)$. The unknowns here are the temperature distribution $v$ and the phase interface $\partial \{v>0\}$, which is the so-called free boundary. Since the free boundary is a level set of $v$, the outward normal velocity of the moving interface is given by $\frac{v_t}{|Dv|}$. The free boundary condition thus says that the interface moves outward with the velocity $ga_{ij}D_jv \nu_i$ in the normal direction. Note that we can also rewrite the free boundary condition as \begin{equation} \label{Stefan bdr cond} v_t=g\, a_{ij}\, D_jvD_i v. \end{equation} Throughout this paper, we will consider the problem under the following assumptions. The matrix $A(x)=(a_{ij}(x))$ is assumed to be symmetric, bounded, and uniformly elliptic, i.e., there exist positive constants $\alpha$ and $\beta$ such that \begin{equation} \label{ellipticity} \alpha |\xi|^2 \leq a_{ij}(x)\,\xi_i \xi_j \leq \beta|\xi|^2 \quad \mbox{ for all } x \in \mathbb{R}^n \mbox{ and } \xi \in \mathbb{R}^n. \end{equation} Moreover, we are interested in problems with highly oscillating coefficients that guarantee an averaging behavior in the scaling limit; in particular, \begin{equation} \label{condition in media} \begin{aligned} &\text{$a_{ij}$ and $g$ are $\mathbb{Z}^n$-periodic Lipschitz functions in $\mathbb{R}^n$},\\ &m\leq g \leq M \mbox{ for some positive constants } m \mbox{ and } M. \end{aligned} \end{equation} From the ellipticity \eqref{ellipticity} and the boundedness of $g$, we also have \begin{equation} \label{ellipticity and Stefan bdr cond} m\alpha|\xi|^2\leq g(x)a_{ij}(x)\xi_i \xi_j \leq M\beta |\xi|^2 \mbox{ for all } x \in \mathbb{R}^n \mbox{ and } \xi \in \mathbb{R}^n.
\end{equation} Furthermore, throughout almost the whole paper, the initial data is assumed to satisfy \begin{equation} \label{initial data} \begin{aligned} &v_0 \in C^2(\overline{\Omega_0 \setminus K}), v_0 > 0 \text{ in } \Omega_0, v_0=0 \mbox{ on } \Omega_0^{\mathsf{c}} := \mathbb{R}^n \setminus \Omega_0, \mbox{ and } v_0=1 \mbox{ on } K,\\& |Dv_0| \neq 0 \text{ on } \partial \Omega_0, \text{ for some bounded domain $\Omega_0 \supset K$.} \end{aligned} \end{equation} Here we use a stronger regularity of the initial data than what is generally required to guarantee the well-posedness of the Stefan problem \eqref{Stefan} (see \cite{FK,K1}) and the coincidence of weak and viscosity solutions used in our work (see \cite{K3}). Indeed, our convergence results rely on a crucial weak monotonicity \eqref{weak monotonicity} which holds provided the initial data satisfies \eqref{initial data}. Nevertheless, the asymptotic limit in Theorem \ref{th:main-convergence} is independent of the initial data, and therefore we are able to apply the results to more general initial data. In particular, it is sufficient that the initial data guarantee the existence of a (weak) solution satisfying the comparison principle, and that we can approximate the initial data from below and from above by regular data satisfying \eqref{initial data}. As in the classical problem \cite{PV}, it is enough that $v_0 \in C(\mathbb{R}^n)$, $v_0 = 1$ on $K$, $v_0 \geq 0$, and $\operatorname{supp} v_0$ is compact. A global classical solution of the Stefan problem \eqref{Stefan} is not expected to exist due to singularities of the free boundary, which may occur in finite time. This motivates the introduction of a generalized solution of this problem. In this paper, we will use the notions of weak solutions and viscosity solutions.
The notion of weak solutions is more classical, defined by taking the integral in time of the classical solution $v$ and looking at the equation that the new function $u(x,t):= \int_{0}^{t}v(x,s)\mathop{}\!ds$ satisfies. It turns out that if $v$ is sufficiently regular, then $u(\cdot,t)$ solves the obstacle problem (see \cite{Baiocchi,Duvaut,FK,EJ,R1,R2,RReview}) \begin{equation} \label{obstacle problem} \left \{ \begin{aligned} &u(\cdot,t) \in \mathcal{K}(t),\\ &\left(u_t - D_i(a_{ij}D_ju)\right)(\varphi - u) \geq f(\varphi -u) \mbox{ a.e.\ } (x,t) \mbox{ for any } \varphi \in \mathcal{K}(t), \end{aligned} \right. \end{equation} where $\mathcal{K}(t)$ is a suitable admissible functional space (see Section~\ref{sec:weak-sol}) and $f$ is \begin{equation} \label{f} f(x)= \left\{\begin{aligned} &v_0(x), && v_0(x) > 0,\\ &-\displaystyle \frac{1}{g(x)}, && v_0(x) = 0. \end{aligned} \right. \end{equation} This formulation interprets the Stefan problem as a fixed-domain problem and allows us to apply well-known results from the general theory of variational inequalities. Indeed, the obstacle problem \eqref{obstacle problem} has a unique global solution $u$ for the initial data \eqref{initial data}. If the corresponding time derivative $v=u_t$ exists, it is called a \emph{weak solution} of the Stefan problem \eqref{Stefan}. Moreover, the homogenization of this problem has also been studied using the homogenization theory of variational inequalities, see \cite{R3,K2,K3}. In a different approach, based on the comparison principle structure, Kim introduced the notion of viscosity solutions of the Stefan problem and proved global existence and uniqueness results in \cite{K1}, which were later generalized to the two-phase Stefan problem in \cite{KP}. The analysis of viscosity solutions relies on the comparison principle and pointwise arguments, which are more suitable for the study of the behavior of the free boundaries.
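To indicate heuristically where the right-hand side \eqref{f} comes from, one can integrate the equation in time, as in the classical Baiocchi--Duvaut transformation; the following is only a formal computation, the rigorous statements being contained in the references above. Writing $\Omega(t)=\{v(\cdot,t)>0\}$ and taking into account the jump of $Dv$ across the free boundary, one formally obtains the distributional identity
\begin{equation*}
D_i\bigl(a_{ij}D_ju\bigr)(\cdot,t)=v(\cdot,t)-v_0+\frac{1}{g}\,\chi_{\Omega(t)\setminus\Omega_0},
\end{equation*}
and since $u_t=v$, on the positivity set $\{u(\cdot,t)>0\}=\Omega(t)$ this yields
\begin{equation*}
u_t-D_i\bigl(a_{ij}D_ju\bigr)=v_0\,\chi_{\Omega_0}-\frac{1}{g}\,\chi_{\Omega(t)\setminus\Omega_0}=f,
\end{equation*}
while $u\geq 0$ everywhere, which is exactly the complementarity structure encoded in the variational inequality \eqref{obstacle problem}.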
The notions of weak and viscosity solutions were first introduced for the classical homogeneous isotropic Stefan problem, where $g(x)=1$ and the parabolic operator is the simple heat operator; however, it is natural to define the same notions for the Stefan problem \eqref{Stefan} and obtain analogous results, as observed in \cite{R3,K3}. Moreover, the notion of viscosity solutions also applies to more general, fully nonlinear parabolic operators and boundary velocity laws, since it does not require a variational structure. In \cite{K3}, Kim and Mellet showed that the weak and the viscosity solutions of \eqref{Stefan} coincide whenever the weak solution exists. We will use the strengths of both weak and viscosity solutions to study \eqref{Stefan}. Among the first results on the asymptotic and large-time behavior of solutions of the one-phase Stefan problem on an exterior domain was the work of Matano \cite{Matano} for the classical homogeneous isotropic Stefan problem in dimensions $n \geq 3$. He showed that in this setting, any weak solution eventually becomes classical after a finite time and that the shape of the free boundary approaches a sphere of radius $Ct^{\frac{1}{n}}$ as $t \to \infty$. Note that in our case of an inhomogeneous free boundary velocity we cannot expect the solution to become classical even for large times. In \cite{QV}, Quir\'{o}s and V\'{a}zquez extended the results of \cite{Matano} to the case $n=2$ and showed the large-time convergence of the rescaled weak solution of the one-phase Stefan problem to the self-similar solution of the Hele-Shaw problem with a point source. The related Hele-Shaw problem is also called the quasi-stationary Stefan problem, where the heat operator is replaced by the Laplace operator. It typically models the flow of a viscous fluid injected between two parallel plates, which form the so-called Hele-Shaw cell, or the flow in porous media.
The global stability of steady solutions of the classical Stefan problem on the full space was established in the recent work of Had\v zi\'c and Shkoller \cite{HS_CPAM,HS_Royal}, and further developed for the two-phase Stefan problem in \cite{HNS}. The homogenization of the one-phase Stefan problem \eqref{Stefan} and of the Hele-Shaw problem in both periodic and random media was obtained by Rodrigues in \cite{R3} and later by Kim and Mellet in \cite{K2,K3}. See also the work on the homogenization of the two-phase Stefan problem by Bossavit and Damlamian \cite{BD}. Dealing directly with the large-time behavior of the solutions on an exterior domain in inhomogeneous media, the work of the first author in \cite{P1}, and then the joint work of both authors in \cite{PV}, showed the convergence of an appropriate rescaling of solutions of both models to the self-similar solution of the Hele-Shaw problem with a point source. The rescaled free boundary was shown to uniformly approach a sphere. In this paper, we extend the previous results in \cite{PV} to the anisotropic case, where the heat operator is replaced by a more general linear parabolic operator in divergence form. This was indeed the setting considered in \cite{R3, K3} for the homogenization problems. In this setting, the variational structure is preserved, and thus we are still able to use the notions of weak solutions as well as viscosity solutions and their coincidence. However, the main difficulties come from the loss of the radially symmetric solutions that were used as barriers in the isotropic case, and from the fact that homogenization now occurs not only in the velocity law but also in the elliptic operator. To overcome the first difficulty, we will construct barriers for our problem from the fundamental solution of the corresponding elliptic equation in divergence form.
Unfortunately, even though the unique fundamental solution of this elliptic equation exists for $n\geq 2$, its asymptotic behavior differs significantly between dimension $n=2$ and dimensions $n\geq 3$. Moreover, we need to use the gradient estimate \eqref{gradient estimate fundamental sol} for the fundamental solution, which only holds for the periodic structure. Therefore, we will limit our consideration to periodic media and dimensions $n\geq 3$. Following \cite{QV,P1,PV}, we rescale the solutions as \begin{equation*} \begin{aligned} & v^\lambda(x,t):=\lambda^{\frac{n-2}{n}}v(\lambda^{\frac{1}{n}}x,\lambda t), && u^\lambda(x,t):=\lambda^{-\frac{2}{n}}u(\lambda^{\frac{1}{n}}x,\lambda t), \quad \lambda > 0. \end{aligned} \end{equation*} Using this rescaling we can deduce the uniform convergence of the rescaled solution to a limit function away from the origin. In the limit, the fixed domain $K$ shrinks to the origin due to the rescaling, and the rescaled solutions develop a singularity at the origin as $\lambda \to \infty$. Moreover, in a periodic setting, the elliptic operator and the velocity law should homogenize as $\lambda \to \infty$, and therefore heuristically the limit function should be the self-similar solution of the Hele-Shaw-type problem with a point source \begin{equation} \label{hs-point-source} \left\{ \begin{aligned} -q_{ij}D_{ij}v&=C\delta && \mbox{ in }\{v>0\},\\ v_t&=\frac{1}{\left<\frac{1}{g}\right>}q_{ij}D_ivD_jv && \mbox{ on } \partial\{v>0\},\\ v(\cdot,0)&=0, \end{aligned} \right. \end{equation} where $\delta$ is the Dirac $\delta$-function, $(q_{ij})$ is a constant symmetric positive definite matrix depending only on $a_{ij}$, $C$ is a constant depending on $K, n, q_{ij}$ and the boundary data $1$, and the constant $\left<\frac{1}{g}\right>$ is the average value of the latent heat $L(x)=\frac{1}{g(x)}$. Similarly, the limit variational solution should satisfy the corresponding limit obstacle problem.
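The scaling exponents above can be checked by a direct computation. If $v$ solves $v_t=D_i(a_{ij}D_jv)$, then, writing $a^\lambda_{ij}(x):=a_{ij}(\lambda^{\frac{1}{n}}x)$ and $g^\lambda(x):=g(\lambda^{\frac{1}{n}}x)$, the rescaled function satisfies
\begin{equation*}
\lambda^{\frac{2-n}{n}}\,v^\lambda_t=D_i\bigl(a^\lambda_{ij}D_jv^\lambda\bigr)\quad\text{in }\{v^\lambda>0\},\qquad
v^\lambda_t=g^\lambda\,a^\lambda_{ij}\,D_iv^\lambda D_jv^\lambda\quad\text{on }\partial\{v^\lambda>0\}.
\end{equation*}
Since $n\geq 3$, the factor $\lambda^{\frac{2-n}{n}}$ vanishes as $\lambda\to\infty$, so the bulk equation formally becomes elliptic in the limit, while the free boundary condition \eqref{Stefan bdr cond} is invariant under the rescaling; only the coefficients now oscillate on the scale $\lambda^{-\frac{1}{n}}$, which is what produces the homogenized problem.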
The first main result of this paper, Theorem \ref{convergence of variational solutions}, is the locally uniform convergence of the rescaled variational solution to the solution of the limit obstacle problem. Using the constructed barriers, we are able to prove that the limit function has the correct singularity as $|x| \to 0$. Moreover, the barriers also give the same growth rate for the free boundary as in the isotropic case; that is, the free boundary expands at the rate $t^{\frac{1}{n}}$ when $t$ is large enough. The aim is then to prove the homogenization effects of the rescaling on our problem. The shrinking of the fixed domain $K$ under the rescaling also makes our situation slightly different from the standard classical homogenization problem for variational inequalities, where the domain and the boundary condition are usually fixed. In addition, we also need to show that the rescaled parabolic operator becomes elliptic as $\lambda \to \infty$. We will use the notion of $\Gamma$-convergence introduced by De Giorgi and the homogenization techniques developed by Dal Maso and Modica in \cite{DM1,DM2,D}. The issue here is that we need to modify the $\Gamma$-convergence sequence in order to use the integration by parts formula for the variational inequality. This will be done with the help of a cut-off function and the \emph{fundamental estimate}, Definition~\ref{def:fundamental-estimate}, for the $\Gamma$-convergence. Note that these techniques are applicable not only in the periodic case but also in the random case, and thus we expect to extend our results to problems in random media in the future. As the last step, we will use the coincidence of the weak and viscosity solutions of the problem \eqref{Stefan} and viscosity arguments to obtain the uniform convergence of the rescaled viscosity solution and its free boundary to the asymptotic profile in the second main result, Theorem \ref{convergence of rescaled viscosity solution}.
Fortunately, all the viscosity arguments of the isotropic case can be adapted to the anisotropic case. Therefore the proof is similar to the proof of \cite[Theorem 4.2]{PV}, where we make use of a weak monotonicity \eqref{weak monotonicity} together with the comparison principle. An important point in the proof of \cite[Theorem 4.2]{PV} is that we need to apply Harnack's inequality for a parabolic equation which becomes elliptic in the limit. Here we can proceed as in the isotropic case since the rescaled elliptic operator does not change the constant in Harnack's inequality. As the arguments require only simple modifications, we will skip the proofs of some lemmas and refer to \cite{PV} for more details. In summary, we will show the following theorem. \begin{theorem} \label{th:main-convergence} The rescaled viscosity solution $v^\lambda$ of the Stefan-type problem \eqref{Stefan} converges locally uniformly to the unique self-similar solution $V$ of the Hele-Shaw type problem \eqref{hs-point-source} in $(\mathbb{R}^n \setminus \{0\}) \times [0,\infty)$ as $\lambda \rightarrow \infty$, where $(q_{ij})$ is a constant symmetric positive definite matrix depending only on $a_{ij}$, and $C$ depends only on $q_{ij}, n$, the set $K$ and the boundary data $1$. Moreover, the rescaled free boundary $\partial \{(x,t): v^\lambda(x,t) > 0\}$ converges to $\partial \{(x,t): V(x,t) > 0\}$ locally uniformly with respect to the Hausdorff distance. \end{theorem} As mentioned above, almost all of the arguments in our recent work hold in the stationary ergodic random case. However, in this situation, we lose a very important pointwise gradient estimate \eqref{gradient estimate fundamental sol} for the fundamental solution of the corresponding elliptic equation, which we use to construct the barriers. In fact, for non-periodic coefficients, even though the optimal bounds for the gradient continue to hold on a bounded domain, they cannot hold at large scales when $|x-y| \to \infty$.
The results in \cite{MO,GM} tell us that for random stationary coefficients satisfying a logarithmic Sobolev inequality we have similar bounds for the gradient in local square average form. This result cannot be upgraded to pointwise bounds since there is no regularity to control the square average integral, as in \cite[Remark 3.7]{GM}. However, it suggests the possibility of modifying our approach for the random case. Another question is the extension of the present results to dimension $n=2$. Since the unique (up to an addition of a constant) fundamental solution of the corresponding elliptic equation exists and the gradient estimates also hold in the two-dimensional case, we expect to obtain results analogous to those in this paper. The essential reason that this remains open is the lack of a homogenization result for the fundamental solution (Green's function) in two dimensions, which is of independent interest. This issue is under investigation by the authors. \subsection*{Structure of the paper} The definitions and the well-known results for weak and viscosity solutions are recalled in Section 2. We also review some basic known facts about the fundamental solution of the corresponding elliptic equation. The rescaling is introduced and we discuss the convergence of the fundamental solution in the rescaling limit. The core of Section 2 is Section \ref{sec: sub and super sol}, where we construct a subsolution and a supersolution of the Stefan problem \eqref{Stefan}. Moreover, we formulate the limit problems before giving the proofs of the main results in the later sections. Section 3 is our main contribution, where we prove the locally uniform convergence of the rescaled variational solutions. In Section 4, we deal with the locally uniform convergence of the rescaled viscosity solutions and their free boundaries. \subsection*{Notation} \label{sec:notation} We will use the following notation throughout this paper.
For a set $A$, $A^{\mathsf{c}}$ is its complement. Given a nonnegative function $v$, we will denote its positive set and free boundary as $$\Omega(v):=\{(x,t): v(x,t)>0\}, \qquad \Gamma(v):=\partial\Omega(v),$$ and for a fixed time $t$, $$ \Omega_t(v):=\{x: v(x,t)>0\}, \qquad \Gamma_t(v):=\partial\Omega_t(v).$$ We will denote the elliptic operator of divergence form and its rescaling as \begin{equation} \label{operator-L} \mathcal{L}u=D_j(a_{ij}D_i u), \qquad \mathcal{L}^\lambda u=D_j(a_{ij}(\lambda^{\frac{1}{n}}x)D_i u), \quad \lambda > 0. \end{equation} We will also make use of the bilinear forms in $H^1(\Omega)$ and the inner product in $L^2(\Omega)$ as \begin{equation*} \begin{aligned} a_{\Omega} (u,v)&=\int_{\Omega}{a_{ij}D_iuD_jv}\mathop{}\!dx, &&a^\lambda_{\Omega}(u,v)=\int_{\Omega}{a_{ij}(\lambda^{\frac{1}{n}}x)D_iuD_jv}\mathop{}\!dx, \quad \lambda >0,\\ q_{\Omega} (u,v)&=\int_{\Omega}{q_{ij}D_iuD_jv}\mathop{}\!dx, &&\langle u,v \rangle _\Omega = \int_{\Omega}uv\mathop{}\!dx, \end{aligned} \end{equation*} where $q_{ij}$ are the constant coefficients of the homogenized operator. We omit the set $\Omega$ in the notation if $\Omega = \mathbb{R}^n$. \section{Preliminaries} \subsection{Notion of solutions} \subsubsection{Weak solutions} \label{sec:weak-sol} As for the classical one-phase Stefan problem, we will define the weak solutions of \eqref{Stefan} using the corresponding variational problem given in \cite{FK,K3}. Let $B=B_R(0)$, $D=B \setminus K$ for some fixed $R \gg 1$. Find $u \in L^2(0,T; H^2(D))$ such that $u_t \in L^2(0,T;L^2(D))$ and \begin{equation} \label{Variational problem} \left\{\begin{aligned} u(\cdot,t) &\in \mathcal{K}(t), && 0 < t < T,\\ (u_t-\mathcal{L}u)(\varphi -u) &\geq f(\varphi -u), &&\text{ a.e. } (x,t) \in D \times (0,T) \text{ for any } \varphi \in \mathcal{K}(t),\\ u(x,0)&=0 \text{ in } D,\\ \end{aligned}\right.
\end{equation} where the admissible set $\mathcal{K}(t)$ is $$\mathcal{K}(t)=\{\varphi \in H^1(D),\varphi \geq 0, \varphi =0 \text{ on } \partial B, \varphi =t \text{ on } K \}$$ and \begin{equation*} f(x):= \left \{ \begin{aligned} & v_0(x) && \text{for } x\in \Omega_0,\\ &-\frac{1}{g(x)} && \text{for } x \in \Omega_0^{\mathsf{c}}. \end{aligned} \right. \end{equation*} We use the standard notation $H^k$ and $W^{k,p}$ for Sobolev spaces. Following \cite{FK}, if $v(x,t)$ is a classical solution of the Stefan problem (\ref{Stefan}) in $D \times (0,T)$ and $R$ is sufficiently large depending on $T$, then the function $u(x,t):=\int_{0}^{t}v(x,s)\mathop{}\!ds$ solves \eqref{Variational problem}. On the other hand, it was shown in \cite{FK,R1} that the variational problem \eqref{Variational problem} is well-posed for initial data $v_0$ satisfying \eqref{initial data}. \begin{theorem}[Existence and uniqueness for the variational problem] If $v_0$ satisfies (\ref{initial data}), then the problem (\ref{Variational problem}) has a unique solution satisfying $$ \begin{aligned} u &\in L^\infty(0,T; W^{2,p}(D)), \qquad 1\leq p \leq \infty,\\ u_t &\in L^\infty(D \times (0,T)),\\ \end{aligned} $$ and $$ \left\{\begin{aligned} u_t-\mathcal{L} u &\geq f, &&u \geq 0,\\ u(u_t- \mathcal{L} u-f)&=0 \end{aligned}\right. \mbox{ a.e. in } D\times (0, \infty). $$ \end{theorem} Thus we define the \emph{weak solution} of the Stefan problem (\ref{Stefan}) as the time derivative $u_t$ of the solution $u$ of (\ref{Variational problem}). Note that as in \cite[Lemma~3.6]{K3}, the solution does not depend on the choice of $R$ if $R$ is sufficiently large. We list here some useful properties of the weak solutions for later use; see \cite{FK,R1,K3}. \begin{prop} \label{boundedness of weak solution} The unique solution $u$ of (\ref{Variational problem}) satisfies $u_t \in C(\overline D \times [0, T])$ and $$0 \leq u_t \leq C \qquad \mbox{ a.e. in
} D \times (0,T),$$ where $C$ is a constant depending on $f$. In particular, $u$ is Lipschitz with respect to $t$ and $u$ is $C^{\alpha}(D)$ with respect to $x$ for all $\alpha \in (0,1)$. Furthermore, if $0 \leq t <s \leq T$, then $u(\cdot,t) < u(\cdot,s)$ in $\Omega_s(u)$ and also $\Omega_0 \subset \Omega_t(u) \subset \Omega_s(u)$. \end{prop} \begin{proof} Let us only give a remark on $u_t \in C(\overline D \times [0, T])$, the rest is standard following the arguments in the cited papers and the elliptic regularity theory. The regularity of weak solutions and their free boundaries was studied by many authors, see \cite{C,CF,FN, Di-B, Ziemer, Sacks}. If $a_{ij}=\delta_{ij}$, the temperature $u_t$ is continuous in $\mathbb{R}^n \times [0,\infty)$ due to the result of Caffarelli and Friedman \cite{CF}. By a change of coordinates, the continuity of $u_t$ can also be obtained when the coefficients are constants. Using a different approach for more general singular parabolic equations, Di Benedetto \cite{Di-B}, Ziemer \cite{Ziemer} and Sacks \cite{Sacks} showed that the continuity also holds in the case $a_{ij}=a_{ij}(x)$ satisfying \eqref{ellipticity}. Note that the assumptions on the $L^\infty$-bound of the weak solution $v=u_t$ and the $L^2$-bounds of its derivatives as in \cite{Ziemer, Di-B, Sacks} are guaranteed by Proposition~\ref{boundedness of weak solution} above and \cite[Theorem 3]{FK} or \cite[Corollary 2, Theorem 4]{F}. \end{proof} \begin{lemma}[Comparison principle for weak solutions] Suppose that $f \leq \hat{f}$. Let $u, \hat{u}$ be solutions of (\ref{Variational problem}) for respective $f, \hat{f}$. Then $u \leq \hat{u}$ and $$v \equiv u_t \leq \hat{u}_t \equiv \hat{v}.$$ \end{lemma} \subsubsection{Viscosity solutions} \label{sec: viscosity sol} Generalized solutions of the Stefan problem \eqref{Stefan} can also be defined via the comparison principle, leading to the viscosity solutions introduced in \cite{K1}. 
In the following, $Q$ is the space-time cylinder $Q:=(\mathbb{R}^n \setminus K) \times [0,\infty)$. \begin{definition} \label{def of viscos.subsol} A nonnegative upper semicontinuous function $v= v(x,t)$ defined in $Q$ is a viscosity subsolution of (\ref{Stefan}) if: \begin{enumerate}[label=\alph*)] \item \label{continuous expanding} For all $T \in (0,\infty)$, $\overline{\Omega (v)} \cap \{t \leq T\} \cap Q \subset \overline{\Omega (v) \cap \{t < T\}}.$ \item For every $\phi \in C^{2,1}_{x,t}(Q)$ such that $v-\phi$ has a local maximum in $\overline{\Omega(v)} \cap \{t\leq t_0\} \cap Q$ at $(x_0,t_0)$, the following holds: \begin{enumerate}[label=\roman*)] \item If $v(x_0,t_0)>0$, then $(\phi_t- \mathcal{L} \phi )(x_0,t_0) \leq 0$. \item If $(x_0,t_0) \in \Gamma(v), |D\phi (x_0,t_0)| \neq 0$ and $(\phi_t-\mathcal{L} \phi )(x_0,t_0)>0$, then \begin{equation} (\phi_t-ga_{ij}D_j\phi \nu_i |D\phi|)(x_0,t_0) \leq 0, \end{equation} where $\nu$ is the inward spatial unit normal vector of $\p{\{v>0\}}$. \end{enumerate} \end{enumerate} Analogously, a nonnegative lower semicontinuous function $v(x,t)$ defined in $Q$ is a viscosity supersolution if (b) holds with maximum replaced by minimum, and with the inequalities reversed in the tests for $\phi$ in (i--ii). We do not need to require (a). \end{definition} \begin{remark} As in \cite[Remark~2.4]{Pozar15}, the condition \ref{continuous expanding} guarantees the continuous expansion of the support of the subsolution $v$, which prevents ``bubbles'' from closing up, that is, it prevents $v$ from becoming instantly positive in the whole space or in an open set surrounded by a positive phase.
\end{remark} \begin{definition} \label{def vis.subsol with data} A viscosity subsolution of (\ref{Stefan}) in $Q$ is a viscosity subsolution of (\ref{Stefan}) in $Q$ with initial data $v_0$ and boundary data $1$ if: \begin{enumerate}[label=\alph*)] \item $v$ is upper semicontinuous in $\bar Q$, $v=v_0$ at $t=0$ and $v \leq 1$ on $\Gamma$, \item $\overline{\Omega(v)}\cap \{t=0\}=\overline{\{x: v_0(x)>0\}} \times \{0\}$. \end{enumerate} A viscosity supersolution is defined analogously by requiring (a) with $v$ lower semicontinuous and $v \geq 1$ on $\Gamma$. We do not need to require (b). \end{definition} A viscosity solution is both a subsolution and a supersolution: \begin{definition} \label{def vis.sol with data} The function $v(x,t)$ is a viscosity solution of (\ref{Stefan}) in $Q$ (with initial data $v_0$ and boundary data $1$) if $v$ is a viscosity supersolution and $v^\star$ is a viscosity subsolution of (\ref{Stefan}) in $Q$ (with initial data $v_0$ and boundary data $1$). Here $v^\star$ is the upper semicontinuous envelope of $v$ defined by \begin{align*} w^\star(x,t) := \limsup_{(y,s) \rightarrow (x,t)}w(y,s). \end{align*} \end{definition} The notion of viscosity solutions of the classical Stefan problem was first introduced in \cite{K1}. It was generalized to the problem \eqref{Stefan} in \cite{K3}, including a comparison principle for ``strictly separated'' initial data. More importantly, in \cite{K3} the authors proved the coincidence of weak and viscosity solutions, which will be used as a crucial tool in our work. \begin{theorem}[cf. {\cite[Theorem 3.1]{K3}}] Assume that $v_0$ satisfies (\ref{initial data}). Let $u(x,t)$ be the unique solution of (\ref{Variational problem}) in $B \times [0,T]$ and let $v(x,t)$ be the solution of \begin{equation} \label{coincidence eq} \left\{ \begin{aligned} v_t-\mathcal{L} v &=0 && \mbox{in } \Omega(u) \setminus K,\\ v &=0 && \mbox{on } \Gamma(u),\\ v &=1 &&\mbox{in } K,\\ v(x,0) &=v_0(x). \end{aligned} \right.
\end{equation} Then $v(x,t)$ is a viscosity solution of (\ref{Stefan}) in $B \times [0,T]$ with initial data $v(x,0)=v_0(x)$, and $u(x,t)= \int_{0}^{t}v(x,s)\mathop{}\!ds$. \end{theorem} By the coincidence of weak and viscosity solutions, we have a more general comparison principle as follows. \begin{lemma}[cf. {\cite[Corollary 3.12]{K3}}] Let $v^1$ and $v^2$ be, respectively, a viscosity subsolution and supersolution of the Stefan problem (\ref{Stefan}) with continuous initial data $v_0^1 \leq v_0^2$ and boundary data $1$. In addition, suppose that $v_0^1$ (or $v_0^2$) satisfies condition (\ref{initial data}). Then $v^1_\star \leq v^2 \mbox{ and } v^1 \leq (v^2)^\star \mbox{ in } (\mathbb{R}^n \setminus K) \times [0,\infty).$ \end{lemma} \begin{remark} We first note that a classical subsolution (supersolution) of (\ref{Stefan}) is also a viscosity subsolution (supersolution) of \eqref{Stefan} in $Q$ with initial data $v_0$ and boundary data $1$ by standard arguments. Moreover, if $\Omega(u)$ is not smooth, we need to understand the solution of (\ref{coincidence eq}) as the one given by Perron's method, \begin{equation*} v=\sup \{w\mid w_t- \mathcal{L} w \leq 0 \mbox{ in } \Omega(u), w\leq 0 \mbox{ on } \Gamma(u), w\leq 1 \mbox{ in } K, w(x,0) \leq v_0(x)\}, \end{equation*} which allows $v$ to be discontinuous on $\Gamma(u)$. \end{remark} \subsection{The fundamental solution of a linear elliptic equation} In this section, we will recall some important facts about the fundamental solution of the self-adjoint uniformly elliptic second-order linear equation in divergence form \begin{equation} \label{elliptic eq} \begin{aligned} &-\mathcal{L}u=0, \end{aligned} \end{equation} in dimension $n \geq 3$, where $\mathcal{L}$ was defined in \eqref{operator-L} and $a_{ij}(x)$ satisfy \eqref{ellipticity} and \eqref{condition in media}. This fundamental solution will be used to construct barriers for the Stefan problem \eqref{Stefan}.
We define the fundamental solution of \eqref{elliptic eq} as Green's function in the whole space following \cite{LSW, ABL}. \begin{definition} We say that $G: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is the fundamental solution (Green's function) of \eqref{elliptic eq} if $G(\cdot,y)$ is the weak (distributional) solution of $-\mathcal{L}G(\cdot,y)= \delta_y$, where $\delta_y$ is the Dirac measure at $y$, i.e., \begin{equation*} \begin{aligned} &\int_{\mathbb{R}^n}a_{ij}D_jG(\cdot,y) D_i\varphi \mathop{}\!dx= \varphi(y), && \forall y \in \mathbb{R}^n, &&&\forall \varphi \in C^\infty_0(\mathbb{R}^n), \end{aligned} \end{equation*} and $\lim_{|x-y|\to \infty}G(x,y)=0$. \end{definition} The existence and uniqueness of the fundamental solution were given by the remark following \cite[Corollary 7.1]{LSW} or, more precisely, by \cite[Theorem 1]{ABL}. \begin{theorem}[cf. {\cite[Theorem 1]{ABL}}] Assume that $n \geq 3$, $a_{ij}(x)$ satisfy \eqref{ellipticity} and \eqref{condition in media}. Then, there exists a unique fundamental solution $G$ of \eqref{elliptic eq} such that $G(\cdot, y) \in H^1_{loc}(\mathbb{R}^n \setminus \{y\}) \cap W^{1,p}_{loc}(\mathbb{R}^n), p <\frac{n}{n-1},$ and for some constant $C>0$ we have \begin{equation} \label{bound for fund.sol n>2} C^{-1}|x-y|^{2-n}\leq G(x,y) \leq C|x-y|^{2-n}, \quad \forall x,y \in \mathbb{R}^n,\ x \neq y. \end{equation} \end{theorem} \begin{remark} Note that in any bounded domain $U$ of $\mathbb{R}^n \setminus \{y\}$, $G(\cdot,y)$ satisfies all the properties of a weak solution of a uniformly elliptic equation. The fundamental solution of \eqref{elliptic eq} also has the following properties (for more details, see \cite{LSW,M,GT}): \begin{itemize} \item $G(x,y)=G(y,x)$. \item $G(\cdot,y) \in C^{1,\alpha}(U)$ for some $\alpha >0$.
\item The function $u(x)= \int_{\mathbb{R}^n}G(x,y)f(y)\mathop{}\!dy$ is a weak solution in $H^1_{\text{loc}}(\mathbb{R}^n)$ of the equation $-\mathcal{L}u = f$ for any $f \in C^\infty_0(\mathbb{R}^n)$. \item When the coefficients $a_{ij}$ are constants, the fundamental solution can be given explicitly as \begin{equation} \label{fundamental const coef} G^0(x,y):= \frac{1}{(n-2)\alpha_n\sqrt{\operatorname{det} A}}\left(\sum_{ij}(A^{-1})_{ij}(x_i-y_i)(x_j-y_j)\right)^{\frac{2-n}{2}}, \end{equation} where $(A^{-1})_{ij}$ are the elements of the inverse matrix of $(a_{ij})$, $\operatorname{det} A$ is the determinant of $(a_{ij})$ and $\alpha_n$ is the volume of the unit ball in $\mathbb{R}^n$. \end{itemize} \end{remark} Moreover, in a periodic setting, the results in \cite[Proposition 5]{ABL} give bounds on the gradient of the fundamental solution. \begin{lemma}[cf. {\cite[Proposition 5]{ABL}}] If $n\geq 2$ and $A$ is periodic, then the fundamental solution $G$ of \eqref{elliptic eq} satisfies the following gradient estimates: \begin{align} \label{gradient estimate fundamental sol} &\exists C>0, && \forall x\in \mathbb{R}^n, &&\forall y\in \mathbb{R}^n,&&|D_xG(x,y)| \leq \frac{C}{|x-y|^{n-1}},\\ &\exists C>0, && \forall x\in \mathbb{R}^n, &&\forall y\in \mathbb{R}^n,&&|D_yG(x,y)| \leq \frac{C}{|x-y|^{n-1}}. \end{align} \end{lemma} Using the technique of $G$-convergence, the authors in \cite{ZKOH} established results on the homogenization and the asymptotic behavior of the fundamental solution of \eqref{elliptic eq}. We refer to \cite{ZKOH,D} for the definition of $G$-convergence and more details of the homogenization problems. \begin{lemma}[cf.
{\cite[Chapter III, Theorem 2]{ZKOH}}] \label{Asymptotic expansion fund.sol} Let $n \geq 3$, let $A$ satisfy \eqref{ellipticity}, \eqref{condition in media} and let $G^\varepsilon$ be the fundamental solution of \begin{equation} \label{elliptic epsilon eq} \begin{aligned} &-\mathcal{L}^\varepsilon u :=-D_i \left(a_{ij} \left(\frac{x}{\varepsilon} \right) D_j u\right)=0. \end{aligned} \end{equation} Then $G^\varepsilon$ converges locally uniformly to $G^0$ in $\mathbb{R}^{2n} \setminus \{x=y\}$ as $\varepsilon \to 0$, where $G^0$ is the fundamental solution of \begin{equation} \label{elliptic hom eq} \begin{aligned} &-\mathcal{L}^0 u:= -q_{ij}D_{ij}u=0, \end{aligned} \end{equation} and $(q_{ij})$ is a constant symmetric positive definite matrix depending only on $a_{ij}$. Moreover, if $G$ denotes the fundamental solution of \eqref{elliptic eq}, then we have the asymptotic expansion \begin{equation} \label{asymptotic fund.sol} G(x,y)=G^0(x,y)+|x-y|^{2-n}\theta(x,y), \end{equation} where $\theta(x,y) \to 0$ as $|x-y| \to \infty$ uniformly on the set $\{|x|+|y|<a|x-y|\}$ for any fixed positive constant $a$. \end{lemma} \subsection{Rescaling} \label{sec: rescaling} Recall that we use the notations $\mathcal{L}, \mathcal{L}^\lambda$ for the operators defined in \eqref{operator-L}.
Following \cite{QV,P1, PV}, for $\lambda >0$ and $n \geq 3$ we rescale the solution $v$ of the problem \eqref{Stefan} as $$v^\lambda(x,t)=\lambda^{\frac{n-2}{n}}v(\lambda^{\frac{1}{n}}x,\lambda t).$$ Clearly $v^\lambda$ is a solution of \begin{equation} \label{rescaled equation} \left\{ \begin{aligned} \lambda^{\frac{2-n}{n}}v_t^\lambda- \mathcal{L}^\lambda v^\lambda &=0 && \mbox{ in } \Omega(v^\lambda) \setminus K^\lambda,\\ v^\lambda &= \lambda^{\frac{n-2}{n}} && \mbox{ on } K^\lambda,\\ \frac{v_t^\lambda}{|D v^\lambda|} &=g^\lambda(x) a^\lambda_{ij}(x) D_j v^\lambda \nu_i && \mbox{ on } \Gamma(v^\lambda),\\ v^\lambda(\cdot,0)&=v_0^\lambda && \mbox{ in } \mathbb{R}^n, \end{aligned} \right. \end{equation} where $K^\lambda := K / \lambda^{\frac{1}{n}}, \Omega_0^\lambda:= \Omega_0/\lambda^{\frac{1}{n}}$, $g^\lambda(x)=g(\lambda^{\frac{1}{n}}x), a_{ij}^\lambda(x)=a_{ij}(\lambda^{\frac{1}{n}}x)$ and $v_0^\lambda(x)= \lambda^{\frac{n-2}{n}}v_0(\lambda^{\frac{1}{n}}x)$. The corresponding rescaling of the weak solution $u$ of the variational problem \eqref{Variational problem} can be shown to be (see \cite{QV,P1, PV}) $$u^\lambda(x,t)= \lambda^{-\frac{2}{n}} u(\lambda^{\frac{1}{n}}x, \lambda t),$$ which solves the rescaled obstacle problem \begin{equation} \label{rescaled Variational problem} \left\{ \begin{aligned} u^\lambda(\cdot,t) &\in \mathcal{K}^\lambda(t), && 0<t<\infty,\\ (\lambda^{\frac{2-n}{n}}u^\lambda_t-\mathcal{L}^\lambda u^\lambda)(\varphi -u^\lambda) &\geq f^\lambda(x)(\varphi -u^\lambda) &&\text{a.e. } (x,t) \in \mathbb{R}^n \times (0,\infty)\\ &&& \text{for any } \varphi \in \mathcal{K}^\lambda(t),\\ u^\lambda(x,0)&=0, \end{aligned} \right. \end{equation} where $\mathcal{K^\lambda}(t)=\{\varphi \in H^1(\mathbb{R}^n),\varphi \geq 0, \varphi =\lambda^{\frac{n-2}{n}}t \text{ on } K^\lambda \}$ and $f^\lambda(x)=f(\lambda^{\frac{1}{n}}x)$. 
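The form of the rescaled equation \eqref{rescaled equation} can be verified by a direct computation: since
\begin{equation*}
v^\lambda_t(x,t)=\lambda^{\frac{2n-2}{n}}v_t(\lambda^{\frac{1}{n}}x,\lambda t), \qquad \mathcal{L}^\lambda v^\lambda(x,t)=\lambda^{\frac{n-2}{n}+\frac{2}{n}}(\mathcal{L}v)(\lambda^{\frac{1}{n}}x,\lambda t)=\lambda\,(\mathcal{L}v)(\lambda^{\frac{1}{n}}x,\lambda t),
\end{equation*}
the equation $v_t-\mathcal{L}v=0$ satisfied by $v$ yields
\begin{equation*}
\lambda^{\frac{2-n}{n}}v^\lambda_t-\mathcal{L}^\lambda v^\lambda=\lambda\left(v_t-\mathcal{L}v\right)(\lambda^{\frac{1}{n}}x,\lambda t)=0 \quad \mbox{ in } \Omega(v^\lambda)\setminus K^\lambda.
\end{equation*}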
\begin{remark} \label{ig compact support} The admissible set $\mathcal{K}^{\lambda}(t)$ can be defined in this way due to \cite[Remark~2.13]{PV}. Note that for any fixed time $t$, the admissible set $\mathcal{K}^\lambda(t)$ depends on $\lambda$. \end{remark} \subsubsection{Convergence of the rescaled fundamental solution} By Lemma~\ref{Asymptotic expansion fund.sol}, we have the following convergence result on the rescaled fundamental solution. \begin{lemma} \label{rescaled fund.sol} Let $G$ be the fundamental solution of \eqref{elliptic eq} in dimension $n \geq 3$ and $G^\lambda$ be its rescaling as \begin{equation*} G^\lambda(x,y)=\lambda^{\frac{n-2}{n}}G(\lambda^{\frac{1}{n}}x,\lambda^{\frac{1}{n}}y). \end{equation*} Then $G^\lambda$ is the fundamental solution of \begin{equation} \label{elliptic rescaled eq} -\mathcal{L}^\lambda u = 0, \end{equation} and $|G^\lambda(x,y) - G^0(x,y)| \to 0$ uniformly on every compact subset of $\mathbb{R}^{2n} \setminus \{(x,x)\in \mathbb{R}^{2n}\}$, where $G^0$ is the fundamental solution of \eqref{elliptic hom eq}. \end{lemma} \begin{proof} We will show that $G^\lambda$ is the fundamental solution of \eqref{elliptic rescaled eq}; the result then follows directly from Lemma~\ref{Asymptotic expansion fund.sol} with $\varepsilon = \lambda^{-\frac{1}{n}}$. For simplicity, we will check that $G^\lambda$ satisfies the definition of the fundamental solution of \eqref{elliptic rescaled eq} for fixed $y=0$, writing $F(x):= G(x,0)$ and $F^\lambda(x):=\lambda^{\frac{n-2}{n}}F(\lambda^{\frac{1}{n}}x)$; the general case is analogous. Indeed, we have $D_jF^{\lambda}(x)={\lambda}^{\frac{n-1}{n}}D_jF({\lambda}^{\frac{1}{n}}x)$.
Take a function $\varphi \in C^\infty_0(\mathbb{R}^n)$; then \begin{align*} \int_{\mathbb{R}^n}a_{ij}^\lambda(x)D_jF^{\lambda}(x)D_i\varphi(x)\mathop{}\!dx&=\int_{\mathbb{R}^n}{\lambda}^{\frac{n-1}{n}}a_{ij}({\lambda}^{\frac{1}{n}}x)D_jF({\lambda}^{\frac{1}{n}}x)D_i\varphi(x)\mathop{}\!dx\\ &=\int_{\mathbb{R}^n}{\lambda^{-\frac{1}{n}}} a_{ij}(y)D_jF(y)D_i\varphi(\lambda^{-\frac{1}{n}}y)\mathop{}\!dy\\ &=\int_{\mathbb{R}^n} a_{ij}(y)D_jF(y)D_i\tilde{\varphi}(y)\mathop{}\!dy\\ &=\tilde{\varphi}(0) = \varphi(0), \end{align*} where $\tilde{\varphi}(y)=\varphi(\lambda^{-\frac{1}{n}}y)$. Moreover, $F^\lambda$ satisfies the estimate \eqref{bound for fund.sol n>2} since $F$ has this property. Hence, by definition, $F^\lambda$ is the fundamental solution of \eqref{elliptic rescaled eq}. \end{proof} \begin{remark} The rate of this convergence, as well as the rate of convergence of the derivatives, was derived in \cite{ALin}. \end{remark} \subsection{Construction of barriers from a fundamental solution} \label{sec: sub and super sol} The main goal of this section is to construct a subsolution and a supersolution of \eqref{Stefan} from a fundamental solution of the elliptic equation \eqref{elliptic eq} so that we can use them as barriers to track the behavior of the support of a solution of \eqref{Stefan}. From now on, we will let $\mathcal{L}^0$ be the limit of the operators $\mathcal{L}^\lambda$ as in Lemma~\ref{Asymptotic expansion fund.sol} and consider the fundamental solutions of \eqref{elliptic eq}, \eqref{elliptic rescaled eq} and \eqref{elliptic hom eq} with a pole at the origin as \begin{equation*} \begin{aligned} F(x)&:= G(x,0), && F^\lambda(x):=G^\lambda(x,0)= \lambda^{\frac{n-2}{n}}F(\lambda^{\frac{1}{n}}x), &&& F^0(x)&:= G^0(x,0). \end{aligned} \end{equation*} Note that $F^0$ is preserved under the rescaling by \eqref{fundamental const coef}.
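Indeed, since $F^0$ is positively homogeneous of degree $2-n$, we have
\begin{equation*}
\lambda^{\frac{n-2}{n}}F^0(\lambda^{\frac{1}{n}}x)=\lambda^{\frac{n-2}{n}}\lambda^{\frac{2-n}{n}}F^0(x)=F^0(x) \qquad \mbox{for } x \neq 0,\ \lambda>0.
\end{equation*}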
\subsubsection{Construction of a supersolution} \label{supersol} Define \begin{equation*} \theta(x,t):={[C_1F(x)-C_2 t^{\frac{2-n}{n}}]_+}, \end{equation*} where $[s]_+ := \max(s, 0)$ denotes the positive part of $s$ and $C_1,C_2$ are non-negative constants to be chosen later. It easily follows that in $\{\theta>0\} \setminus \{x = 0\}$, \begin{align*} \theta_t(x,t)&= \frac{C_2(n-2)}{n}t^{\frac{2-2n}{n}} \geq 0,\\ D\theta&= C_1DF,\\ \mathcal{L}\theta&=0,\\ \theta_t -\mathcal{L}\theta &\geq 0. \end{align*} Due to the estimates \eqref{bound for fund.sol n>2} and \eqref{gradient estimate fundamental sol}, there exists a constant $C$ such that \begin{equation} \label{estimate of F(x)} \begin{aligned} C^{-1}|x|^{2-n} &\leq F(x) \leq C|x|^{2-n},\\ |DF(x)|&\leq C|x|^{1-n} . \end{aligned} \end{equation} Then for $(x,t) \in \p{\{\theta>0\}}$ we have \begin{equation*} C_2 t^{\frac{2-n}{n}}=C_1F(x) \geq C_1C^{-1}|x|^{2-n}, \end{equation*} which yields \begin{equation*} t^{\frac{1}{n}} \leq \left(\frac{C_1}{CC_2}\right)^{\frac{1}{2-n}}|x|. \end{equation*} Thus on $\partial \{\theta>0\}$, \begin{equation*} \theta_t(x,t) = \frac{C_2(n-2)}{n}t^{\frac{2-2n}{n}} \geq \frac{n-2}{n}\left(\frac{C_1}{C}\right)^{\frac{2-2n}{2-n}} C_2^{\frac{n}{2-n}}|x|^{2-2n}. \end{equation*} Fix any $t_0 >0$. We can choose $C_1$ large enough and $C_2$ small enough such that \begin{align*} \label{supersol condition} &\theta_t(x,t) \geq M\beta C_1^2 C^2 |x|^{2-2n} \geq M\beta|D\theta(x,t)|^2 \mbox{ on } \partial \{\theta>0 \},\\ &\theta> 1 \mbox{ on } K \mbox{ and }\theta(x,t_0)>v(x,t_0), \end{align*} where $\alpha, \beta$ are the elliptic constants from \eqref{ellipticity}. By \eqref{ellipticity and Stefan bdr cond}, $\theta_t \geq g\,a_{ij}\, D_j\theta D_i\theta$ on $\partial\{\theta>0\}$ and by \eqref{Stefan bdr cond}, $\theta$ is a supersolution of \eqref{Stefan} in $\mathbb{R}^n \times [t_0, \infty)$. 
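In particular, the two-sided estimate \eqref{estimate of F(x)} shows that the free boundary of the supersolution $\theta$ propagates exactly at the rate $t^{\frac{1}{n}}$: since $C_1F(x)=C_2t^{\frac{2-n}{n}}$ on $\partial\{\theta>0\}$, we obtain
\begin{equation*}
\left(\frac{C_1}{CC_2}\right)^{\frac{1}{n-2}}t^{\frac{1}{n}} \leq |x| \leq \left(\frac{CC_1}{C_2}\right)^{\frac{1}{n-2}}t^{\frac{1}{n}} \qquad \mbox{on } \partial\{\theta>0\},
\end{equation*}
in agreement with the growth rate of the free boundary discussed in the introduction.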
\subsubsection{Construction of a subsolution} \label{subsol} Let $h$ be the function constructed in \cite[Appendix A]{K3} with $\mathcal{L}h=n, Dh(x)=(A(x))^{-1}x$ and let $c, \tilde{c} >0$ be constants such that \begin{equation} \label{quadratic growhth} c|x|^2 \leq h(x) \leq \tilde{c}|x|^2. \end{equation} Consider the function \begin{equation} \label{subsol formula} \theta(x,t):=\left[c_1F(x)+\frac{c_2h(x)}{t}-c_3 t^{\frac{2-n}{n}}\right]_+ \chi_{E}(x,t) \end{equation} with non-negative constants $c_1,c_2,c_3$ to be chosen later, where \begin{equation*} \begin{aligned} E&:={\{(x,t): \tfrac{\partial F_b}{\partial r}(|x|,t)<0, t > 0\}}, &&F_b(r,t):=Cc_1r^{2-n}+\frac{c_2\tilde{c}r^2}{t}-c_3t^{\frac{2-n}{n}}, \end{aligned} \end{equation*} $C,\tilde{c}$ are constants as in \eqref{estimate of F(x)}, \eqref{quadratic growhth}. We claim that we can choose constants $c_1,c_2,c_3,t_0$ such that $\theta$ is a subsolution of \eqref{Stefan} for $ t \in [t_0,\infty)$. The differentiation of $\theta$ on the set $\{\theta>0\} \setminus \{x = 0\}$ yields \begin{equation} \label{subsolution computations} \begin{aligned} D\theta(x,t)&= c_1DF(x)+ \frac{c_2A(x)^{-1}x}{t},\\ \mathcal{L}\theta(x,t)&=\frac{c_2n}{t},\\ \theta_t(x,t)&=-\frac{c_2h(x)}{t^2}+\frac{c_3(n-2)}{n}t^{\frac{2-2n}{n}}=t^{\frac{2-2n}{n}}\left[\frac{c_3(n-2)}{n}-\frac{c_2h(x)}{t^\frac{2}{n}}\right], \end{aligned} \end{equation} and thus \begin{equation*} \theta_t(x,t) -\mathcal{L}\theta(x,t) = t^{\frac{2-2n}{n}}\left[\frac{c_3(n-2)}{n}-\frac{c_2h(x)}{t^{\frac{2}{n}}}-\frac{c_2n}{t^{\frac{2-n}{n}}}\right] < 0 \quad \text{for } t \gg 1. \end{equation*} Thus, we can choose $t_0$ large enough such that $\theta_t -\mathcal{L}\theta < 0$ for $t\geq t_0$. Now we will prove the continuity of $\theta$. We have \begin{equation} \label{stronger barrier} 0\leq \theta(x,t) \leq [F_b(|x|,t)]_+ \chi_{E}(x,t)=:F_b^+(x,t), \end{equation} and hence $\Omega_t(\theta) \subset \Omega_t(F_b^+)$ for all $t$. 
We see that \begin{equation} \label{radius of E(t)} \frac{\partial F_b}{\partial r}(r,t)= Cc_1(2-n)r^{1-n}+\frac{2c_2\tilde{c}r}{t}<0 \Leftrightarrow r < \left(\frac{Cc_1(n-2)}{2c_2\tilde{c}}\right)^{\frac{1}{n}}t^{\frac{1}{n}}=:r_0(t) \end{equation} and hence $E=\{(x, t): |x|< r_0(t), t >0\}$. Clearly $\theta$ is continuous in the set $\{\theta>0\} \setminus \{x = 0\}$. Furthermore, $\theta$ is continuous in $E\setminus \{x = 0\}$ and $\theta=0$ on $E^{\mathsf{c}}$. We will show that we can choose the constants such that $\theta$ is continuous across the boundary of $E$. Indeed, for $(x_0,t) \in \partial E$, $t > 0$, substituting $r=r_0(t)$ into $F_b$ yields \begin{equation*} F_b(|x_0|,t)= F_b(r_0(t),t)=C_{F_b} t^{\frac{2-n}{n}}, \end{equation*} where $C_{F_b}=(Cc_1)^{\frac{2}{n}}(c_2\tilde{c})^{\frac{n-2}{n}}\left[\left(\frac{n-2}{2}\right)^{\frac{2}{n}} \frac{n}{n-2}\right]-c_3$. We can choose $c_1,c_2,c_3$ such that $C_{F_b}<0$. Then $F_b(|x_0|,t)<0$ for all $(x_0, t) \in \p{E}$, $t>0$. Since $(x,t) \mapsto F_b(|x|, t)$ is continuous in a neighborhood of $\partial E$, we deduce by \eqref{stronger barrier} that $\theta=0$ in a neighborhood of $\partial E$ and therefore it is continuous across $\partial E$. Note that $C_{F_b} <0$ if and only if \begin{equation} \label{subsolution coef condition 1} c_3 > C_0 (c_1)^{\frac{2}{n}}(c_2)^{\frac{n-2}{n}}, \end{equation} where $C_0$ is a constant depending only on $n,C,\tilde{c}$. We finally need to show that we can choose suitable constants such that $\theta$ satisfies the subsolution condition on the free boundary. We first note that, by \eqref{estimate of F(x)}, $\theta(x,t) \geq \tilde{\theta}(x,t):=\left[C^{-1}c_1|x|^{2-n}-c_3 t^{\frac{2-n}{n}}\right]_+ \chi_E(x,t)$. Then $\Omega(\tilde{\theta}) \subset \Omega(\theta)$, or more precisely, there exists a constant $\tilde{C}$ such that \begin{equation} \label{subsolution bdr condition 1} |x|\geq \tilde{C}t^{\frac{1}{n}} \mbox{ for all } (x,t) \in \partial \{\theta>0\}.
\end{equation} By \eqref{subsolution computations} we have \begin{align*} \theta_t(x,t)&\leq c_3 t^{\frac{2-2n}{n}},\\ |D\theta(x,t)|^2&=c_1^2|DF(x)|^2+ \frac{2c_1c_2}{t}DF(x)\cdot A^{-1}x+\frac{c_2^2}{t^2}|A^{-1}x|^2\\ &\geq \frac{2c_1c_2}{t}DF(x)\cdot A^{-1}x+\frac{c_2^2}{t^2}|A^{-1}x|^2. \end{align*} Since $A$ is a bounded symmetric matrix satisfying the ellipticity condition \eqref{ellipticity}, these properties also hold for $A^{-1}$ and $A^{-2}$ with appropriate constants. Hence, for $(x,t) \in \p{\{\theta>0\}}$, \begin{equation*} \begin{aligned} |D\theta(x,t)|^2 &\geq \frac{c_2^2}{t^2}\tilde{\alpha}|x|^2 - \frac{2c_1c_2}{t}C_A|DF(x)||x| &&\mbox{ for some } \tilde{\alpha}, C_A >0\\ & \geq \frac{c_2^2}{t^2}\tilde{\alpha}|x|^2 - \frac{2c_1c_2}{t}CC_A|x|^{2-n} && (\mbox{by \eqref{estimate of F(x)}})\\ &\geq \left(c_2^2 \tilde{\alpha}\tilde{C}^2 -2c_1c_2CC_A\tilde{C}^{2-n}\right)t^{\frac{2-2n}{n}} &&(\mbox{by \eqref{subsolution bdr condition 1}}). \end{aligned} \end{equation*} We want to choose $c_1,c_2,c_3$ such that $\theta_t \leq m\alpha |D\theta|^2$ on $\partial\{\theta>0\}$, which will hold if \begin{equation} \label{subsolution coef condition 2} c_3 \leq m\alpha \left(c_2^2 \tilde{\alpha}\tilde{C}^2 -2c_1c_2CC_A\tilde{C}^{2-n}\right)=:C_0^1 c_2^2 - C_0^2 c_1c_2, \end{equation} where $C_0^1,C_0^2$ are fixed positive constants. Then by \eqref{ellipticity and Stefan bdr cond}, $\theta_t \leq g\,a_{ij}\,D_j\theta D_i \theta$ on $\partial \{\theta>0\}$. The conditions \eqref{subsolution coef condition 1} and \eqref{subsolution coef condition 2} hold if we choose suitable $c_1,c_2,c_3$. For example, fix any $c_1>0$ and choose $c_2$ large enough such that $$ C_0 (c_1)^{\frac{2}{n}}(c_2)^{\frac{n-2}{n}}<C_0^1 c_2^2 - C_0^2 c_1c_2.$$ Note that the above inequality holds for $c_2$ large enough since, for a fixed $c_1>0$, the right-hand side tends to $\infty$ as $c_2 \to \infty$ faster than the left-hand side.
Then \eqref{subsolution coef condition 1} and \eqref{subsolution coef condition 2} hold for any $c_3$ which lies between these two numbers. Fix $t_0$ such that $\theta_t - \mathcal{L}\theta < 0$ in $\{\theta>0\}$ for the chosen $c_2,c_3$ and all $t \geq t_0$. Choosing a smaller $c_1$ if needed, we can assume that the support of $\theta(\cdot,t_0)$ is contained in $\Omega_{t_0}(v)$, that $\theta(x,t_0) \leq v(x,t_0)$, and that $\theta<1$ on $\partial K$. Thus, with the help of \eqref{Stefan bdr cond}, we see that $\theta$ is a subsolution of the Stefan problem \eqref{Stefan} for that choice of constants. \subsubsection{Some results on the barriers for the Stefan problem \eqref{Stefan}} Due to the construction above, we can use functions of the form \begin{equation} \label{barrier form} \theta(x,t):= [C_1F(x)-C_2t^{\frac{2-n}{n}}]_+ \end{equation} with $C_1,C_2 >0$ as barriers for the Stefan problem \eqref{Stefan}. Since our purpose is to study the asymptotic behavior, we first establish the convergence of the rescaled barriers. \begin{lemma} \label{barrier convergence of rescaled } Let $\theta$ be a function of the form \eqref{barrier form} and $\theta^\lambda:=\lambda^{\frac{n-2}{n}}\theta(\lambda^{\frac{1}{n}}x, \lambda t)$. Then $\theta^\lambda \to \theta^0$ locally uniformly in $(\mathbb{R}^n \setminus \{0\}) \times [0,\infty)$, where \begin{equation} \theta^0(x,t):= [C_1 F^0(x) -C_2t^{\frac{2-n}{n}}]_+. \end{equation} \end{lemma} \begin{proof} We have \begin{equation*} \theta^\lambda(x,t)= [C_1 F^\lambda(x)-C_2 t^{\frac{2-n}{n}}]_+, \end{equation*} where $F^\lambda(x)=\lambda^{\frac{n-2}{n}}F(\lambda^{\frac{1}{n}}x)$. By Lemma \ref{rescaled fund.sol}, $F^\lambda \to F^0$ locally uniformly in $\mathbb{R}^n \setminus \{0\}$ and the lemma follows. \end{proof} Moreover, we will also need the time integral of the barriers in order to analyze the weak solution of the Stefan problem \eqref{Stefan}.
\begin{lemma} \label{barrier integration in time lemma} Let $\Theta(x,t):= \int_0^t \theta(x,s)\mathop{}\!ds$. Then $\Theta(x,t)$ has the form \begin{equation} \label{barrier integration in time} \begin{aligned} \Theta(x,t)=C_1F(x)t-\frac{C_2n}{2}t^{\frac{2}{n}}+o(F(x)), \mbox{ as } |x| \to 0. \end{aligned} \end{equation} \end{lemma} \begin{proof} We derive \eqref{barrier integration in time} simply by integrating the function $\theta$ of the form \eqref{barrier form}. We see that \begin{equation*} \begin{aligned} &\begin{aligned} & \theta > 0 &&\mbox{ if } t> s(x),\\ & \theta = 0 && \mbox{ if } t \leq s(x), \end{aligned} && \mbox{ where } s(x)=\left(\frac{C_1}{C_2}F(x)\right)^{\frac{n}{2-n}}. \end{aligned} \end{equation*} Thus, \begin{equation*} \Theta(x,t)= \left\{ \begin{aligned} & 0, &&t \leq s(x),\\ &\int_{s(x)}^t{(C_1F(x)-C_2s^{\frac{2-n}{n}})\mathop{}\!ds}, && t> s(x). \end{aligned} \right. \end{equation*} When $t> s(x)$, \begin{equation*} \begin{aligned} \Theta(x,t)&=C_1F(x)t-\frac{C_2n}{2}t^{\frac{2}{n}}-C_1F(x)s(x)+\frac{C_2n}{2}(s(x))^{\frac{2}{n}}\\ &=C_1F(x)t-\frac{C_2n}{2}t^{\frac{2}{n}}+\frac{n-2}{2}\frac{(C_1)^{\frac{2}{2-n}}}{(C_2)^{\frac{n}{2-n}}}(F(x))^{\frac{2}{2-n}}\\ &= C_1F(x)t-\frac{C_2n}{2}t^{\frac{2}{n}}+ C(F(x))^{\frac{2}{2-n}}. \end{aligned} \end{equation*} Since $F(x)$ has a singularity at $x=0$ by \eqref{bound for fund.sol n>2}, we have $s(x) \to 0$ and $C(F(x))^{\frac{2}{2-n}}=o(F(x))$ as $|x|\to 0$, which completes the proof. \end{proof} From these barriers, we can obtain the rate of expansion of the support of viscosity solutions. \begin{lemma} \label{viscos free boundary bound} Let $n \geq 3$ and $v$ be a viscosity solution of (\ref{Stefan}).
There exist $t_0>0$ and constants $C, C_1, C_2>0$ such that for $t\geq t_0$, \begin{align*} C_1t^{\frac{1}{n}}\leq \min_{\Gamma_t(v)}|x|\leq \max_{\Gamma_t(v)}|x|\leq C_2t^{\frac{1}{n}} \end{align*} and for $0 \leq t\leq t_0$, $$\max_{\Gamma_t(v)}|x|\leq C_2.$$ Moreover, $$0 \leq v(x,t) \leq C|x|^{2-n}.$$ \end{lemma} \begin{proof} We deduce the bound for $v(x,t)$ first. Let $F(x)$ be the fundamental solution of the elliptic equation \eqref{elliptic eq} as in Section \ref{sec: sub and super sol}. Then $\hat{\theta}=CF(x)$ is a stationary solution of the equation $v_t- \mathcal{L}v =0$. Its integral in time is also a solution of the variational inequality problem with $\hat{f}=CF(x)$. If we take $C$ large enough, then $\hat{f}\geq f$ and $\hat{\theta}\geq 1$ on $K$. Applying the comparison principle for the variational problem, \cite[Proposition 2.2]{QV}, we have $v(x,t) \leq CF(x) \leq \tilde{C}|x|^{2-n}$ by \eqref{bound for fund.sol n>2}. The bound on the support of $v(\cdot,t)$ at all times has been proved in \cite[Lemma~3.6]{K3}. Now consider $\theta_1, \theta_2$, respectively a subsolution and a supersolution of the Stefan problem \eqref{Stefan} for $t\geq t_0$, as constructed in Sections \ref{subsol} and \ref{supersol}. The bounds on the support of $v$ for $t \geq t_0$ follow directly from the behavior of the supports of $\theta_1, \theta_2$. \end{proof} \subsection{Limit problems} The expected limit problem is the corresponding Hele-Shaw type problem with a point source. \subsubsection{Limit problem for $v^\lambda$} We expect $v^\lambda$ to converge to a solution of \begin{equation} \label{Hele_shaw point source problem} \left\{ \begin{aligned} \mathcal{L}^0 v &=0 &&\mbox{ in } \{v>0\},\\ \frac{v_t}{|Dv|}&= \displaystyle \frac1L q_{ij}D_jv \nu_i &&\mbox{ on } \partial \{v>0\},\\ \lim_{|x| \rightarrow 0}\displaystyle \frac{v}{F^0}&=C,\\ v(x,0)&=0 &&\mbox{ in } \mathbb{R}^n \setminus \{0\}, \end{aligned} \right.
\end{equation} where $C,L$ are positive constants, $q_{ij}$ are the constant coefficients of the operator $\mathcal{L}^0$, and $F^0$ is the fundamental solution of \eqref{elliptic hom eq}. Since $Q:=(q_{ij})$ is symmetric and positive definite, we can write $Q=P^2$, where $P$ is also a symmetric positive definite matrix. Let $\tilde{v}(x,t):=v(Px,t)$. A direct computation then shows that the problem \eqref{Hele_shaw point source problem} becomes the classical Hele-Shaw problem with a point source for the function $\tilde{v}$, \begin{equation} \label{Hele_shaw point source problem classical} \left\{ \begin{aligned} \Delta \tilde{v} &=0 &&\mbox{ in } \{\tilde{v}>0\},\\ \tilde{v}_t&= \frac1L |D\tilde{v}|^2 &&\mbox{ on } \partial \{\tilde{v}>0\},\\ \lim_{|x| \rightarrow 0} \frac{\tilde{v}}{|x|^{2-n}}&=C,\\ \tilde{v}(x,0)&=0 &&\mbox{ in } \mathbb{R}^n \setminus \{0\}. \end{aligned} \right. \end{equation} The problem \eqref{Hele_shaw point source problem classical} has a unique classical solution $\tilde{V}$ which is given explicitly (see \cite{P1,PV}, for instance). Thus \eqref{Hele_shaw point source problem} has a unique classical solution $V(x,t):= \tilde{V}(P^{-1}x,t)$, which is continuous in $(\mathbb{R}^n \setminus \{0\}) \times [0,\infty)$. \subsubsection{Limit problem for $u^\lambda$} \label{sec: limit variational problem} Suppose that $V=V_{C,L}$ is the classical solution of \eqref{Hele_shaw point source problem} above and set \begin{equation} \label{limit function U} U(x,t):= \int_0^t V(x,s)\mathop{}\!ds. \end{equation} It is known that the time integral of the solution of the classical Hele-Shaw problem with a point source \eqref{Hele_shaw point source problem classical} satisfies an obstacle problem derived in \cite{P1}.
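For the reader's convenience, the direct computation behind the change of variables $\tilde{v}(x,t)=v(Px,t)$ can be sketched. Since $P$ is symmetric with $P^2=Q$, the chain rule gives
\begin{equation*}
D\tilde{v}(x,t)=P\,Dv(Px,t), \qquad \Delta \tilde{v}(x,t)=\operatorname{tr}\big(P\,D^2v(Px,t)\,P\big)=q_{ij}D_{ij}v(Px,t),
\end{equation*}
so $\mathcal{L}^0 v=0$ transforms into $\Delta\tilde{v}=0$. On the free boundary, $|D\tilde{v}|^2=Dv\cdot Q\,Dv=q_{ij}D_jv\,D_iv$, and with $\nu=\frac{Dv}{|Dv|}$ the flux condition in \eqref{Hele_shaw point source problem} becomes $\tilde{v}_t=\frac1L|D\tilde{v}|^2$; the point-source condition transforms accordingly, since $F^0(Px)$ is a constant multiple of $|x|^{2-n}$ (with the constant $C$ adjusted by the normalization of $F^0$ if necessary).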
Following \cite{P1} and using a change of variables again, we see that $U$ uniquely solves the following problem, which is our limit variational problem: \begin{equation} \label{Limit problem for variational solution} \left\{ \begin{aligned} w &\in \mathcal{K}_t,\\ q(w,\phi) &\geq \left<-L,\phi\right >,&& \forall \phi \in W_1,\\ q(w, \psi w) &= \left<-L,\psi w\right>, &&\forall \psi \in W_2, \end{aligned} \right. \end{equation} where $$\mathcal{K}_t= \left \{ \varphi \in \bigcap_{\varepsilon>0}H^1(\mathbb{R}^n \setminus B_\varepsilon) \cap C(\mathbb{R}^n \setminus B_\varepsilon): \varphi \geq 0, \lim_{|x| \rightarrow 0}\frac{\varphi(x)}{F^0(x)}=Ct\right \},$$ \begin{align} \label{set V} W_1&=\left\{ \phi \in H^1(\mathbb{R}^n \setminus B_\varepsilon):\phi \geq 0, \phi = 0 \mbox{ on } B_\varepsilon \mbox{ for some } \varepsilon >0\right\},\\ \label{set W} W_2&=W_1 \cap C^1(\mathbb{R}^n). \end{align} \subsubsection{Near-field limit} Using the boundedness provided by Lemma~\ref{viscos free boundary bound}, we have the following general near-field limit result similar to \cite{QV}. \begin{theorem}[Near-field limit] \label{Near field limit Theorem} As $t \rightarrow \infty$, the viscosity solution $v$ of the Stefan problem (\ref{Stefan}) converges, uniformly on compact subsets of $\overline{K^{\mathsf{c}}}$, to the unique solution $P = P(x)$ of the exterior Dirichlet problem \begin{equation} \label{near filed limit} \left\{ \begin{aligned} \mathcal{L} P&=0, && x \in \mathbb{R}^n \setminus K,\\ P&=1, && x \in K,\\ \lim_{|x| \rightarrow \infty}P(x)&=0. \end{aligned} \right. \end{equation} \end{theorem} \begin{proof} We follow the arguments in the proof of \cite[Lemma~8.4]{QV}, noting that by Lemma~\ref{viscos free boundary bound} the support of $v$ expands to the whole space as $t \to \infty$. \end{proof} The results on the isolated singularity of solutions of linear elliptic equations in \cite{SW} allow us to deduce the asymptotic behavior of $P$ as $|x| \to \infty$.
\begin{lemma} \label{C*} There exists a constant $C_*=C_*(K, n)$ such that the solution $P$ of the problem (\ref{near filed limit}) satisfies $$\lim_{|x| \rightarrow \infty}\frac{P(x)}{F(x)}=C_*,$$ where $F(x)$ is the fundamental solution of the elliptic equation $-\mathcal{L}v=0$ in $\mathbb{R}^n$. \end{lemma} \begin{proof} Lemma~\ref{C*} is a direct corollary of \cite[Theorem 5]{SW}. The arguments follow the same techniques as in \cite[Lemma~4.3]{QV}, using a general Kelvin transform and Green's function for linear elliptic equations. Following \cite[Lemma~4.3]{QV}, it can also be shown that the constant $C_*$ depends continuously on the data of the fixed boundary $\Gamma=\p{K}$. \end{proof} \section{Uniform convergence of the rescaled variational solutions} The purpose of this section is to prove the first main result, on the uniform convergence of the rescaled variational solutions, which is similar to \cite[Theorem 3.1]{PV}. \begin{theorem} \label{convergence of variational solutions} Let $u$ be the unique solution of the variational problem (\ref{Variational problem}) and $u^\lambda$ be its rescaling. Let $U_{A,L}$ be the unique solution of the limit problem \eqref{Limit problem for variational solution}, where $A=C_*$ as in Lemma~\ref{C*} and $L=\left<\frac{1}{g}\right>$ as in Lemma~\ref{media}. Then the functions $u^\lambda$ converge locally uniformly to $U_{A,L}$ as $\lambda \rightarrow \infty$ on $\left(\mathbb{R}^n \setminus \{0\}\right) \times \left[0,\infty\right)$. \end{theorem} The classical homogenization results for variational inequalities are usually stated for a fixed bounded domain. Since our admissible set $\mathcal{K}^\lambda(t)$ defined in Section~\ref{sec: rescaling} changes with $\lambda$, we will need to refine the proof. We will use the techniques of $\Gamma$-convergence introduced in \cite{D} and \cite{K3}.
Note that these techniques can be applied not only in the periodic case but also to stationary ergodic coefficients over a probability space $(A,\mathcal{F},P)$. \subsection{The averaging property of media and the \texorpdfstring{$\Gamma$}{Gamma}-convergence} We recall the following lemma on the averaging property of periodic media, which also holds for more general stationary ergodic media. \begin{lemma}[cf. {\cite[Section 4, Lemma~7]{K2}, see also \cite{P1}}] \label{media} For a given $g$ satisfying \eqref{condition in media}, there exists a constant, denoted by $\left<\frac{1}{g}\right>$, such that if $\Omega \subset \mathbb{R}^n$ is a bounded measurable set and if $\{u^\varepsilon\}_{\varepsilon>0} \subset L^2(\Omega)$ is a family of functions such that $u^\varepsilon \rightarrow u$ strongly in $L^2(\Omega)$ as $\varepsilon \rightarrow 0$, then \begin{equation*} \lim_{\varepsilon\rightarrow 0} \int_{\Omega}\frac{1}{g\left(\frac{x}{\varepsilon}\right)}u^\varepsilon(x)\mathop{}\!dx =\int_{\Omega}\left<\frac{1}{g}\right>u(x)\mathop{}\!dx. \end{equation*} In the periodic setting, the quantity $\ang{\frac{1}{g}}$ is the average of $\frac{1}{g}$ over one period. \end{lemma} We also need some basic concepts and results of $\Gamma$-convergence, which are taken from \cite{D}. Let $\Omega$ be a bounded open set in $\mathbb{R}^n$. Consider the functional \begin{equation} \label{functionals in Gamma convergence} J^\lambda(u,\Omega):=\left\{ \begin{aligned} &\int_{\Omega}a_{ij}(\lambda^{\frac{1}{n}}x)D_iuD_ju\mathop{}\!dx &&\mbox{ if } u \in H^1(\Omega),\\ &\infty &&\mbox{ otherwise}. \end{aligned} \right. \end{equation} \begin{definition}[cf. {\cite[Proposition 8.1]{D}}] Let $X$ be a metric space.
A sequence of functionals $F_h$ is said to $\Gamma(X)$-converge to $F$ if the following conditions are satisfied:\begin{enumerate}[label=(\roman*)] \item For every $u \in X$ and for every sequence $(u_h)$ converging to $u$ in $X$, we have \begin{equation*} F(u) \leq \liminf_{h \rightarrow \infty}F_h(u_h). \end{equation*} \item For every $u \in X$, there exists a sequence $(u_h)$ converging to $u$ in $X$ such that \begin{equation*} F(u)=\lim_{h\rightarrow \infty}F_h(u_h). \end{equation*} \end{enumerate} \end{definition} It is known that the $\Gamma(L^2)$-convergence of $J^\lambda$ is equivalent to the $G$-convergence of the elliptic operators $\mathcal{L}^\lambda$ (see \cite[Theorem 22.4]{D} and \cite[Theorem 4.3]{K3}), and we have the following crucial result on the $\Gamma$-convergence of $J^\lambda$. \begin{theorem}[cf. {\cite[Theorem 4.3]{K3}}] \label{gamma convergence} The functionals $J^\lambda$ $\Gamma(L^2)$-converge as $\lambda \rightarrow \infty$ to a functional $J^0$, where $J^0$ is a quadratic functional of the form \begin{equation*} J^0(u):=\left\{ \begin{aligned} &\int_{\Omega}q_{ij}D_iuD_ju\mathop{}\!dx &&\mbox{ if } u \in H^1(\Omega),\\ &\infty &&\mbox{ otherwise}. \end{aligned} \right. \end{equation*} Here the constants $q_{ij}$ are the coefficients of the limit operator $\mathcal{L}^0$ as in Lemma~\ref{Asymptotic expansion fund.sol}. \end{theorem} To deal with the Dirichlet boundary condition, we need to use cut-off functions and the \emph{fundamental estimate} below. Here we denote by $\mathcal{A}$ the class of all open subsets of $\Omega$. \begin{definition}\cite[Definition 18.1]{D} Let $A',A''\in \mathcal{A}$ with $A' \Subset A''$. We say that a function $\varphi: \mathbb{R}^n \rightarrow \mathbb{R}$ is a \textit{cut-off function} between $A'$ and $A''$ if $\varphi \in C^\infty_0(A'')$, $0 \leq \varphi \leq 1$ on $\mathbb{R}^n$, and $\varphi =1$ in a neighborhood of $\overline{A'}$.
\end{definition} \begin{definition}\label{def:fundamental-estimate}\cite[Definition 18.2]{D} Let $F: L^p(\Omega) \times \mathcal{A} \rightarrow [0,\infty]$ be a non-negative functional. We say that $F$ satisfies the \textit{fundamental estimate} if for every $\varepsilon>0$ and for every $A',A'',B \in \mathcal{A}$, with $A' \Subset A''$, there exists a constant $M>0$ with the following property: for every $u,v \in L^p(\Omega)$, there exists a cut-off function $\varphi$ between $A'$ and $A''$ such that \begin{equation} \label{cut-off.est} \begin{aligned} F(\varphi u + (1-\varphi)v, A' \cup B) &\leq (1+\varepsilon)(F(u,A'')+F(v,B))\\ &+ \varepsilon(\|u\|^p_{L^p(S)}+\|v\|^p_{L^p(S)}+1)+ M\|u-v\|^p_{L^p(S)}, \end{aligned} \end{equation} where $S=(A''\setminus A') \cap B$. Moreover, if $\mathcal{F}$ is a class of non-negative functionals on $L^p(\Omega) \times \mathcal{A}$, we say that the fundamental estimate holds uniformly in $\mathcal{F}$ if each element $F$ of $\mathcal{F}$ satisfies the fundamental estimate with $M$ depending only on $\varepsilon,A',A'',B$, while $\varphi$ may depend also on $F,u,v$. \end{definition} The result in \cite[Theorem 19.1]{D} provides a wide class of integral functionals uniformly satisfying the fundamental estimate. In particular, the fundamental estimate holds uniformly in the class of all functionals of the form \eqref{functionals in Gamma convergence}. Thus for every $J^\lambda$, there exists a cut-off function $\varphi$ such that \eqref{cut-off.est} holds with $F=J^\lambda$ and a constant $M$ independent of $\lambda$. \subsection{Uniform convergence of rescaled variational solutions} Now we are ready to prove Theorem \ref{convergence of variational solutions}. \begin{proof}[Proof of Theorem \ref{convergence of variational solutions}] For a fixed $T > 0$, by Lemma~\ref{viscos free boundary bound} we can bound $\Omega_t(u^\lambda)$ by $B(0,R)$ for some $R >0$, for all $0 \leq t \leq T$ and $\lambda >0$.
We will show the convergence in $Q_{\varepsilon}:= \left(B(0,R)\setminus \overline{B(0,\varepsilon)}\right) \times [0,T]$ for some $\varepsilon >0$. We argue in the same way as in the proof of \cite[Theorem 3.2]{PV}. Using the uniform bound on $u^\lambda, u^\lambda_t$ from Lemma~\ref{viscos free boundary bound} and the standard regularity estimates for an elliptic obstacle problem, which hold uniformly in $\lambda$, we obtain a uniform H\"older estimate for $u^\lambda$. Then by the Arzel\`{a}-Ascoli theorem and a diagonalization argument, we can find a function $\bar u \in C((\mathbb{R}^n \setminus \{0\}) \times [0, \infty))$ and a subsequence $\{u^{\lambda_k}\} \subset \{u^\lambda\}$ such that \begin{equation*} \begin{aligned} &u^{\lambda_k} \rightarrow \bar{u} \mbox{ locally uniformly on $(\mathbb{R}^n \setminus \{0\}) \times [0, \infty)$} \mbox{ as } k \rightarrow \infty, \\ &u^{\lambda_k}(\cdot,t) \rightarrow \bar{u}(\cdot,t) \mbox{ strongly in } H^1(\Omega_\varepsilon) \mbox{ for all } t\geq 0, \varepsilon > 0. \end{aligned} \end{equation*} In the rest of the proof we show that the function $\bar{u}$ solves the limit problem \eqref{Limit problem for variational solution}, whose uniqueness then implies the convergence of the full sequence. We start with quantifying the singularity at the origin. \begin{lemma} \label{singularity of U} We have \begin{equation*} \lim_{|x|\rightarrow 0}\frac{\overline{u}(x,t)}{U_{C_*,L}(x,t)}=1. \end{equation*} \end{lemma} \begin{proof} Let $C_*$ be as in Lemma~\ref{C*} and let $F$ be the fundamental solution of \eqref{elliptic eq} as in Section~\ref{sec: sub and super sol}. Fix $\varepsilon>0$. By Lemma~\ref{C*}, there exists $a$ large enough such that \begin{equation} \label{estimate in P} \begin{aligned} &\left| \frac{P(x)}{F(x)}-C_*\right| < \frac{\varepsilon}{2}, &&\mbox{ in } \{|x|\geq a\} \end{aligned} \end{equation} and $K \subset \{|x| < a\}$. In particular, \eqref{estimate in P} holds for every $x$ with $|x|=a$.
The set $\{|x|=a\}$ is a compact subset of $\mathbb{R}^n \setminus K$. Then by Theorem \ref{Near field limit Theorem}, there exists $t_0 >0$ such that for all $t \geq t_0$, \begin{equation*} \begin{aligned} &\left|\frac{v(x,t)}{F(x)}-\frac{P(x)}{F(x)}\right|< \frac{\varepsilon}{2}, && \mbox{ for all } x \mbox{ with } |x|=a. \end{aligned} \end{equation*} By the triangle inequality, for all $t\geq t_0$ and all $x$ with $|x| =a$, $$\displaystyle \left|\frac{v(x,t)}{F(x)}-C_*\right| < \varepsilon.$$ Let $\Phi(x,t)$ be the fundamental solution of the parabolic equation \begin{equation} \label{parabolic equation} u_t-\mathcal{L}u=0. \end{equation} As shown in \cite{FriedmanP, Aronson}, such a fundamental solution exists, is unique, and satisfies \begin{equation} \label{parabolic fund.sol estimate} N^{-1}t^{-\frac{n}{2}}e^{-\frac{N|x|^2}{t}} \leq \Phi(x,t) \leq Nt^{-\frac{n}{2}}e^{-\frac{|x|^2}{Nt}} \end{equation} for some $N>0$. We consider $\theta_1,\theta_2$ as follows: \begin{align*} \theta_1(x,t)&:=\left[(C_*-\varepsilon)F(x)+\frac{c_2h(x)}{t}-c_3t^{\frac{2-n}{n}}\right]_+ \chi_{E}(x,t),\\ \theta_2(x,t)&:= (C_*+\varepsilon)F(x) + C_2 \Phi(x,t), \end{align*} where $E$, $h(x)$ were defined in Section \ref{subsol}. We will show that we can choose the coefficients such that $\theta_1$ is a subsolution and $\theta_2$ is a supersolution of \eqref{Stefan} in $\{|x|\geq a\}\times \{t\geq t_0\}$ for some $t_0$. Since we fix the first coefficients of $\theta_1$ and $\theta_2$, we need to check the initial conditions carefully. Note that on the set $\{|x|=a\}$, $\theta_1 \to (C_*-\varepsilon)F(x)$ and $\theta_2 \to (C_*+\varepsilon)F(x)$ uniformly as $t \to \infty$. Thus we can choose a large time $t_0$ such that $\theta_1 \leq v \leq \theta_2$ on $\{|x|=a\}\times \{t \geq t_0\}$.
By \eqref{radius of E(t)}, we can choose $c_2$ large enough such that $\operatorname{supp} \theta_1(\cdot,t_0) \subset \{x: (x, t_0) \in E\} \subset B_a(0)$ and then $\theta_1(\cdot, t_0) \leq v(\cdot,t_0)$ in $\{|x|\geq a\}$. Following Section \ref{subsol}, by choosing larger $c_2, t_0$ if necessary and $c_3$ satisfying \eqref{subsolution coef condition 1}, \eqref{subsolution coef condition 2}, $\theta_1$ is a subsolution of \eqref{Stefan} in $\{|x|\geq a\} \times \{t \geq t_0\}$. Fix the time $t_0$ such that $\theta_1$ is a subsolution of \eqref{Stefan} in $\{|x|\geq a\} \times \{t\geq t_0\}$ as above. By \eqref{bound for fund.sol n>2} and \eqref{parabolic fund.sol estimate}, $\theta_2 >0$ in $\mathbb{R}^n$. Moreover, since $F(x)$ and $\Phi(x,t)$ are the fundamental solutions of \eqref{elliptic eq} and \eqref{parabolic equation} respectively, clearly $(\theta_2)_t - \mathcal{L}\theta_2 =0$ in $\mathbb{R}^n \setminus \{0\}$. If we choose $C_2$ large enough, then $\theta_2(\cdot, t_0)> v(\cdot, t_0)$ and $\theta_2$ is a supersolution of \eqref{Stefan} in $\{|x|\geq a\}\times \{t \geq t_0\}$. By the comparison principle, $\theta_1 \leq v \leq \theta_2$ in $\{|x|\geq a\} \times \{t\geq t_0\}$. Moreover, since $h(x)>0$, we have $$\theta_1(x,t) \geq \tilde{\theta}_1(x,t):= \left[(C_*-\varepsilon)F(x)-c_3t^{\frac{2-n}{n}}\right]_+.$$ Therefore $\tilde{\theta}_1^\lambda \leq v^\lambda \leq \theta_2^\lambda$ for $\lambda$ large enough.
Since $\Phi^\lambda(x,t):=\lambda^{\frac{n-2}{n}}\Phi(\lambda^{\frac{1}{n}}x,\lambda t) \to 0$ uniformly as $\lambda \to \infty$ by \eqref{parabolic fund.sol estimate}, Lemma~\ref{barrier convergence of rescaled } implies that $\tilde{\theta}_1^\lambda, \theta_2^\lambda$ converge locally uniformly to $\theta_1^0, \theta_2^0$ of the form \begin{align*} \theta_1^0(x,t)&:=\left[(C_*-\varepsilon)F^0(x)-c_3t^{\frac{2-n}{n}}\right]_+ ,\\ \theta_2^0(x,t)&:= (C_*+\varepsilon)F^0(x), \end{align*} where $F^0$ is the fundamental solution of $ -\mathcal{L}^0 u =0$ and $\mathcal{L}^0$ is the limit of the operators $\mathcal{L}^\lambda$ as in Lemma~\ref{Asymptotic expansion fund.sol}. Applying the same method as in \cite{P1}, we have \begin{equation} \label{order} \int_{0}^{t}\theta^0_1(x,s)\mathop{}\!ds \leq \overline{u}(x,t) \leq \int_{0}^{t}\theta^0_2(x,s)\mathop{}\!ds. \end{equation} By Lemma~\ref{barrier integration in time lemma} we obtain \begin{equation*} (C_*-\varepsilon)F^0(x)t-\frac{c_3n}{2}t^{\frac{2}{n}}+o(F^0(x)) \leq \overline{u}(x,t) \leq (C_*+\varepsilon)F^0(x)t \end{equation*} as $|x| \to 0$. Dividing both sides by $F^0(x)$ and taking the limit as $|x| \rightarrow 0$ we get \begin{equation*} (C_*-\varepsilon)t \leq \liminf_{|x|\rightarrow 0}\frac{\overline{u}(x,t)}{F^0(x)}\leq \limsup_{|x|\rightarrow 0}\frac{\overline{u}(x,t)}{F^0(x)}\leq (C_*+\varepsilon)t. \end{equation*} Since $\varepsilon>0$ is arbitrary, sending $\varepsilon$ to $0$ yields the correct singularity. \end{proof} Finally, we check that the limit function $\bar{u}$ satisfies the inequality and equality in \eqref{Limit problem for variational solution}.
\begin{lemma} \label{limit function satisfies limit problem} For each $0 \leq t \leq T$, $\overline{w}=\overline{u}(\cdot,t)$ satisfies \begin{align} \label{ineq1} q(\overline{w},\phi)& \geq \left <-L,\phi\right >, &&\forall \phi \in W_1,\\ \label{ineq2} q(\overline{w},\psi \overline{w})&= \left< -L, \psi \overline{w}\right> ,&&\forall \psi \in W_2, \end{align} where $L=\left< \frac{1}{g} \right>$ and $W_1,W_2$ were defined in Section~\ref{sec: limit variational problem}. \end{lemma} \begin{proof} Fix $t \in [0,T]$ and take any $\phi \in W_1$. By continuity, we can choose $\phi$ with compact support contained in $\Omega:= B(0,R) \setminus \overline{B(0,\varepsilon_0)}$ for some $0 < \varepsilon_0 < R$. Let $w^k(x):=u^{\lambda_k}(x,t)$ and $\overline{\varphi}:=\overline{w}+\phi \in H^1(\mathbb{R}^n)$. By Theorem \ref{gamma convergence}, there exists a sequence $\{\varphi^k\}$ that converges strongly in $L^2(\Omega)$ to $\overline{\varphi}$ such that \begin{equation} \label{convergence sequence} J^{\lambda_k}(\varphi^k, \Omega) \rightarrow J^0(\overline{\varphi},\Omega). \end{equation} We will show that we can modify $\varphi^k$ into $\tilde{\varphi}^k$ such that $\tilde{\varphi}^k \in \mathcal{K}^{\lambda_k}(t)$ and all the convergences are preserved. First, we see that $J^0(\bar{\varphi}, \Omega) < \infty$ since $\bar{\varphi} \in H^1(\Omega)$. By \eqref{convergence sequence}, $J^{\lambda_k}(\varphi^k, \Omega) < \infty$ and hence $\varphi^k \in H^1(\Omega)$ for $k$ large enough. Next, we need to modify $\varphi^k$ so that the boundary condition on $K^{\lambda_k}$ is satisfied. Since $\overline{\varphi} \in H^1(\Omega)$, for every $\varepsilon>0$ there exists a compact set $A(\varepsilon) \subset \Omega$ such that $\operatorname{supp} \phi \subset A(\varepsilon)$ and \begin{equation} \label{negliability} \int_{\Omega \setminus A(\varepsilon)}|D\overline{\varphi}|^2\mathop{}\!dx <\varepsilon.
\end{equation} Let $A'(\varepsilon),A''(\varepsilon)$ be such that $A(\varepsilon) \subset A'(\varepsilon) \Subset A''(\varepsilon) \Subset \Omega$ and let $B(\varepsilon)=\Omega \setminus A(\varepsilon)$. By \cite[Theorem 19.1]{D}, the fundamental estimate \eqref{cut-off.est} holds uniformly in the class of all functionals of the form \eqref{functionals in Gamma convergence}. Thus there exist a constant $M\geq0$ independent of $\lambda_k$ and a sequence of cut-off functions $\xi^k_\varepsilon \in C^\infty_0(A''(\varepsilon))$, $0 \leq \xi^k_\varepsilon \leq 1$, $\xi^k_\varepsilon = 1$ in a neighborhood of $\overline{A'(\varepsilon)}$, such that \begin{equation} \label{phi.fund.est} \begin{aligned} J^{\lambda_k}(\xi^k_\varepsilon \varphi^k+(1-\xi^k_\varepsilon)(w^k+\phi),\Omega)\leq &(1+\varepsilon)(J^{\lambda_k}(\varphi^k,A''(\varepsilon))+J^{\lambda_k}(w^k+\phi, B(\varepsilon)))\\ &+\varepsilon(\|\varphi^k\|_{L^2(\Omega)}^2 + \|w^k+\phi\|_{L^2(\Omega)}^2+1)\\ &+M\|\varphi^k -w^k-\phi\|_{L^2(\Omega)}^2. \end{aligned} \end{equation} Define \begin{equation*} \varphi_\varepsilon^k(x):=\left\{ \begin{aligned} &\xi^k_\varepsilon(x)\varphi^k(x)+(1-\xi^k_\varepsilon(x))(w^k(x)+\phi(x)) && \mbox{ if } x \in \Omega,\\ & w^k(x) && \mbox{ if } x \notin \Omega. \end{aligned} \right. \end{equation*} Then $\varphi_\varepsilon^k \in H^1(\mathbb{R}^n)$, $\|\varphi_\varepsilon^k-\bar{\varphi}\|_{L^2(\Omega)} \leq \|\varphi^k -\bar{\varphi}\|_{L^2(\Omega)} +\|w^k+\phi - \bar{\varphi}\|_{L^2(\Omega)} \to 0$ as $k \to \infty$, and $\varphi^k_\varepsilon-w^k$ has compact support in $\Omega$. By the ellipticity condition \eqref{ellipticity} we have \begin{equation} \label{gamma convergence ellipticity} J^{\lambda_k}(w^k +\phi,B(\varepsilon)) \leq \beta\int_{B(\varepsilon)}|D(w^k+\phi)|^2\mathop{}\!dx. \end{equation} In view of \eqref{negliability}, choose the sequence $\varepsilon_n:=\frac{1}{n}$ and denote $\varphi^k_n:=\varphi^k_{\varepsilon_n}$.
By \eqref{phi.fund.est}, \eqref{gamma convergence ellipticity}, and the convergences $\varphi^k_n \to \overline{\varphi}$ in $L^2(\Omega)$ and $w^k \to \overline{w}$ in $H^1(\Omega)$ as $k \to \infty$, for each $n$ there exists $k_0(n)$ such that \begin{equation} \label{gamma convergence limsup} \left\{ \begin{aligned} \|\varphi^k_n -\overline{\varphi}\|_{L^2(\Omega)} &\leq \min \left\{\frac{1}{n}, \frac{1}{Mn}\right\},\\ J^{\lambda_k}(\varphi_n^k, \Omega) &\leq \left(1+\frac{1}{n}\right)\left(J^0(\overline{\varphi}, \Omega)+ \frac{\beta+1}{n}\right)+\frac{1}{n}\left(2\|\overline{\varphi}\|^2_{L^2(\Omega)}+\frac{1}{n}+1\right) + \frac{2}{n}, \end{aligned} \right. \end{equation} for every $k \geq k_0(n)$. We can choose $k_0(n)$ such that $k_0$ is an increasing function of $n$ and $k_0(n) \to \infty$ as $n\to \infty$. We will form a new sequence $\{\hat{\varphi}^k\}$ from the class of sequences $\{\varphi^k_n\}$. The idea is that for each $k$, we will choose an appropriate $n(k)$ and set $\hat{\varphi}^k:=\varphi^k_{n(k)}$. We need to choose a suitable $n(k)$ such that $n(k) \to \infty$ and \eqref{gamma convergence limsup} holds for $\varphi^k_{n(k)}$ when $k$ is large enough. To this end we introduce an ``inverse'' of $k_0$ as \begin{equation*} n(k):= \min \{j\in \mathbb{N}: k < k_0(j+1) \}. \end{equation*} The function $n(k)$ is well-defined, non-decreasing, and tends to $\infty$ as $k\to \infty$. From the definition of $n(k)$ we see that if $k \geq k_0(2)$ then $n(k)\geq 2$ and $k_0(n(k)) \leq k < k_0(n(k)+1)$ (otherwise $n(k)$ would not be the minimum).
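As a concrete illustration of this ``inverse'' (not needed for the proof), if, say, $k_0(j)=2^j$, then
\begin{equation*}
n(k)= \min \{j\in \mathbb{N}: k < 2^{j+1} \} = \lfloor \log_2 k\rfloor \quad \mbox{ for } k \geq 2,
\end{equation*}
so that indeed $k_0(n(k))=2^{\lfloor \log_2 k\rfloor}\leq k < 2^{\lfloor \log_2 k\rfloor +1}=k_0(n(k)+1)$, and $n(k)$ grows to $\infty$, though possibly very slowly.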
Thus by \eqref{gamma convergence limsup} and the definition of $\hat{\varphi}^k$ we have for all $k \geq k_0(2)$, \begin{equation*} \left\{ \begin{aligned} \|\hat{\varphi}^k -\overline{\varphi}\|_{L^2(\Omega)} &\leq \min \left\{\frac{1}{n(k)}, \frac{1}{Mn(k)}\right\},\\ J^{\lambda_k}(\hat{\varphi}^k, \Omega)&=J^{\lambda_k}(\varphi^k_{n(k)}, \Omega) \\ &\leq \left(1+\frac{1}{n(k)}\right)\left(J^0(\overline{\varphi}, \Omega)+ \frac{\beta+1}{n(k)}\right)\\ &\quad+\frac{1}{n(k)}\left(2\|\overline{\varphi}\|^2_{L^2(\Omega)}+\frac{1}{n(k)}+1\right) + \frac{2}{n(k)}. \end{aligned} \right. \end{equation*} Sending $k \to \infty$ we get \begin{equation*} \left\{ \begin{aligned} \lim_{k \to \infty}\|\hat{\varphi}^k - \bar{\varphi}\|_{L^2(\Omega)} &=0,\\ \limsup_{k \to \infty} J^{\lambda_k}(\hat{\varphi}^k, \Omega) &\leq J^0(\bar{\varphi},\Omega). \end{aligned} \right. \end{equation*} On the other hand, by Theorem \ref{gamma convergence}, \begin{equation*} J^0(\bar{\varphi},\Omega) \leq \liminf_{k \to \infty}J^{\lambda_k}(\hat{\varphi}^k,\Omega), \end{equation*} and thus we can conclude that $\hat{\varphi}^k \to \bar{\varphi}$ strongly in $L^2(\Omega)$ and $J^{\lambda_k}(\hat{\varphi}^k,\Omega) \to J^0(\bar{\varphi},\Omega)$. Moreover, by the definitions of $\varphi^k_\varepsilon, \hat{\varphi}^k$, we also have $\hat{\varphi}^k \in H^1(\Omega)$ and $\hat{\varphi}^k -w^k$ has compact support in $\Omega$. Now set $\tilde{\varphi}^k:=|\hat{\varphi}^k|$. Then $\tilde{\varphi}^k \in H^1(\Omega)$, $\tilde{\varphi}^k \geq 0$, and $\tilde{\varphi}^k = w^k$ in $\Omega^{\mathsf{c}} \supset K^{\lambda_k}$ for $k$ large enough; thus $\tilde{\varphi}^k \in \mathcal{K}^{\lambda_k}(t)$ for $k$ large enough. Moreover, following the argument in the proof of \cite[Lemma~4.5]{K3}, $\tilde{\varphi}^k \to \bar{\varphi}$ in $L^2(\Omega)$ and $J^{\lambda_k}(\tilde{\varphi}^k,\Omega) \to J^0(\bar{\varphi},\Omega)$.
Since $w^k, \tilde{\varphi}^k \in \mathcal{K}^{\lambda_k}(t)$ and $\operatorname{supp}(\tilde{\varphi}^k-w^k) \subset \Omega $, by \eqref{rescaled Variational problem} and the integration by parts formula we have \begin{equation*} a^{\lambda_k}_{\Omega}(w^k, \tilde{\varphi}^k -w^k) \geq -\lambda_k^{\frac{2-n}{n}}\left<u^{\lambda_k}_t, \tilde{\varphi}^k -w^k\right>_{\Omega} + \left<-\frac{1}{g^{\lambda_k}},\tilde{\varphi}^k -w^k\right>_{\Omega}. \end{equation*} The inequality $a^{\lambda_k}(u,v-u) \leq \frac{1}{2}J^{\lambda_k}(v)- \frac{1}{2}J^{\lambda_k}(u)$ for any $u,v$ implies \begin{equation*} \frac{1}{2} J^{\lambda_k}(\tilde{\varphi}^k, \Omega) \geq \frac{1}{2} J^{\lambda_k}(w^k, \Omega)- \lambda_k^{\frac{2-n}{n}}\left<u^{\lambda_k}_t, \phi^k\right>_{\Omega} + \left<-\frac{1}{g^{\lambda_k}},\phi^k\right>_{\Omega}, \end{equation*} where $\phi^k:= \tilde{\varphi}^k - w^k \to \phi$ in $L^2(\Omega)$. Taking $\liminf$ as $k \rightarrow \infty$ and using the fact that $u^{\lambda_k}_t$ is bounded gives \begin{equation} \label{gamma convergence inequality} \frac{1}{2} J^0(\overline{\varphi}, \Omega) \geq \frac{1}{2} J^0(\overline{w}, \Omega) + \left<-L,\phi\right>_{\Omega}. \end{equation} This holds for any $\phi \in W_1$ and therefore also for $\delta\phi$, where $0 <\delta <1$. Replacing $\phi$ in \eqref{gamma convergence inequality} by $\delta\phi$ we have \begin{align*} \frac{1}{2}J^0(\bar{w}+\delta\phi,\Omega) &\geq \frac{1}{2}J^0(\bar{w},\Omega)+\left<-L, \delta\phi\right>\\ \Leftrightarrow \frac{1}{2}\left[J^0(\bar{w},\Omega)+2\delta q_\Omega(\bar{w},\phi)+\delta^2J^0(\phi)\right] &\geq \frac{1}{2}J^0(\bar{w},\Omega)+\left<-L, \delta\phi\right>. \end{align*} Dividing both sides by $\delta$ and sending $\delta \to 0$ we obtain \begin{equation*} q_\Omega(\overline{w},\phi) \geq \left<-L, \phi\right>_\Omega. \end{equation*} Since $\operatorname{supp} \phi \subset \Omega$, we conclude that \eqref{ineq1} holds in $\mathbb{R}^n$. Now take $\psi \in W_2$.
As above, we assume that $\psi$ has compact support contained in $\Omega$, and without loss of generality we can also assume that $0 \leq \psi \leq 1$ and $\psi =0$ on $B_\varepsilon(0)$ (otherwise consider $\frac{\psi}{\max_{\mathbb{R}^n}\psi}$ instead). Since $\psi \in W_2$, we have $\psi \overline{w} \in W_1$, so \eqref{ineq1} holds for $\psi\overline{w}$ and $q(\overline{w}, \psi \overline{w}) \geq \left<-L, \psi \overline{w}\right>$. For the reverse inequality, define $\overline{\varphi}:= (1-\psi)\overline{w} \in H^1(\Omega)$. Arguing as before, we can choose $\tilde{\varphi}^k \in \mathcal{K}^{\lambda_k}(t)$ such that $\tilde{\varphi}^k \to \overline{\varphi}$ in $L^2(\Omega)$ and $J^{\lambda_k}(\tilde{\varphi}^k, \Omega) \to J^0(\overline{\varphi}, \Omega)$. Again, since $w^k, \tilde{\varphi}^k \in \mathcal{K}^{\lambda_k}(t)$, by \eqref{rescaled Variational problem} and the inequality $a^{\lambda_k}(u,v-u) \leq \frac{1}{2}J^{\lambda_k}(v)- \frac{1}{2}J^{\lambda_k}(u)$ for any $u,v$ we have \begin{equation*} \frac{1}{2} J^{\lambda_k}(\tilde{\varphi}^k, \Omega) \geq \frac{1}{2} J^{\lambda_k}(w^k, \Omega)- \lambda_k^{\frac{2-n}{n}}\left<u^{\lambda_k}_t, \tilde{\varphi}^k-w^k\right>_{\Omega} + \left<-\frac{1}{g^{\lambda_k}},\tilde{\varphi}^k -w^k\right>_{\Omega}. \end{equation*} Taking $\liminf$ as $k \to \infty$ and arguing as in the proof of \eqref{ineq1} we get \begin{equation*} \begin{aligned} &&q_{\Omega}(\overline{w}, \overline{\varphi}- \overline{w}) &\geq \left<-L, \overline{\varphi}- \overline{w}\right>_\Omega\\ &\Leftrightarrow& -q_\Omega(\overline{w}, \psi \overline{w}) &\geq -\left<-L, \psi \overline{w} \right>_\Omega\\ &\Leftrightarrow& q_\Omega(\overline{w}, \psi \overline{w}) &\leq \left<-L, \psi \overline{w} \right>_\Omega. \end{aligned} \end{equation*} Thus $ q(\overline{w}, \psi \overline{w}) = \left<-L, \psi \overline{w} \right>$ for every $\psi \in W_2$.
\end{proof} This completes the proof of Theorem \ref{convergence of variational solutions}. \end{proof} \section{Uniform convergence of rescaled viscosity solutions and free boundaries} In this final section, we establish the convergence of the rescaled viscosity solutions $v^\lambda$ of the Stefan problem (\ref{Stefan}) and of their free boundaries. The proof is based on viscosity arguments showing that the half-relaxed limits of $v^\lambda$ in $\{|x| \neq 0, t\geq 0\}$, defined as \begin{align*} & v^*(x,t)= \limsup_{(y,s),\lambda \rightarrow (x,t), \infty} v^\lambda(y,s), & v_*(x,t)= \liminf_{(y,s),\lambda \rightarrow (x,t), \infty} v^\lambda(y,s), \end{align*} coincide and are the viscosity solution of the limit problem with a point source. We have the following result, which is similar to \cite[Theorem 4.2]{PV}. \begin{theorem} \label{convergence of rescaled viscosity solution} Let $n\geq 3$ and let $V=V_{C_*,L}$ be the solution of the Hele-Shaw problem with a point source \eqref{Hele_shaw point source problem} with the constant $C_*$ from Lemma~\ref{C*} and $L=\left<\frac{1}{g}\right>$ as in Lemma~\ref{media}. The rescaled viscosity solution $v^\lambda$ of the Stefan problem (\ref{Stefan}) converges locally uniformly to $V = V_{C_*, \ang{\frac{1}{g}}}$ in $(\mathbb{R}^n \setminus \{0\}) \times [0,\infty)$ as $\lambda \rightarrow \infty$ and $$v_*=v^*=V.$$ Moreover, the rescaled free boundaries $\{\Gamma(v^\lambda)\}_{\lambda}$ converge to $\Gamma(V)$ locally uniformly with respect to the Hausdorff distance. \end{theorem} All the viscosity arguments used in \cite[Section 4]{PV} can be applied in our anisotropic case with some minor adaptations. Therefore, we will omit some of the proofs and refer to \cite{K2,K3,P1,PV} for more details. Let us give a brief review of the ideas in the spirit of \cite[Section 4]{PV}. \begin{enumerate} \item We first prove the convergence of the rescaled viscosity solution and its free boundary under the condition \eqref{initial data}.
\begin{itemize} \item By the regularity of the initial data $v_0$ as in \eqref{initial data}, we deduce a weak monotonicity of the solution $v$. \item Using the weak monotonicity and pointwise comparison principle arguments, we then show the convergence for regular initial data. \end{itemize} \item For general initial data, we find regular upper and lower approximations of the initial data satisfying \eqref{initial data} and use the comparison principle together with the uniqueness of the limit solution to reach the conclusion. \end{enumerate} We will state the necessary results here with remarks on the adaptations for the anisotropic case. \subsection{Some necessary technical results} First, we have the correct singularity of $v^*$ and $v_*$ at the origin, which can be established similarly to the proof of Lemma~\ref{singularity of U}. \begin{lemma}[cf. {\cite[Lemma~4.3]{PV}}, $v^*$ and $v_*$ behave as $V$ at the origin] \label{boundary condition for limit problem} The functions $ v^*, v_*$ have a singularity at $0$ with \begin{align} \label{singularity of V} &\lim_{|x| \rightarrow 0+}\frac{v_*(x,t)}{V(x,t)}=1, & \lim_{|x| \rightarrow 0+}\frac{v^*(x,t)}{V(x,t)}=1, \quad \mbox{ for } t>0. \end{align} \end{lemma} \begin{proof} Argue as in the proof of Lemma~\ref{singularity of U}. \end{proof} We will also make use of a uniform estimate on $u^\lambda$ and the convergence of boundary points deduced from the convergence of variational solutions. \begin{lemma}[cf. {\cite[Lemma~3.1]{K2}}] \label{positive sup rescaling} There exists a constant $C > 0$ independent of $\lambda$ such that for every $x_0 \in \overline{\Omega_{t_0}(u^\lambda)}$ and every $r$ with $B_r(x_0) \cap \Omega_0^{\lambda} = \emptyset$, we have \begin{equation*} \sup_{x \in \overline{B_r(x_0)}}u^\lambda(x,t_0)>C r^2 \end{equation*} for every $\lambda$.
\end{lemma} \begin{proof} We first prove the statement for $x_0 \in \Omega_{t_0}(u^\lambda)$; the result then follows by the continuity of $u^\lambda$. Since $B_r(x_0) \cap \Omega_0^{\lambda}= \emptyset$, $u^\lambda$ satisfies \begin{equation*} \lambda^{\frac{2-n}{n}}u^\lambda_t-\mathcal{L}^\lambda u^\lambda= -\frac{1}{g^\lambda} \mbox{ in } \{u^\lambda>0\}\cap (B_r(x_0) \times \{t=t_0\}). \end{equation*} Since $u^\lambda_t \geq 0$ and $-\frac{1}{g}\leq -\frac{1}{M}$, we have $-\mathcal{L}^\lambda u^\lambda \leq -\frac{1}{M}=:-C_0$ in $\{u^\lambda>0\} \cap (B_r(x_0) \times\{t=t_0\})$. Define \begin{equation*} w^\lambda(x)=u^\lambda(x,t_0)-\frac{C_0}{n}h^\lambda(x-x_0), \end{equation*} where $h^\lambda(x)$ is the barrier with quadratic growth corresponding to the elliptic operator $\mathcal{L}^\lambda$ introduced in Section \ref{subsol}. We have $\{w^\lambda>0\} \cap B_r(x_0) \subset \{u^{\lambda}>0\} \cap \{t=t_0\}$ and therefore, for all $\lambda$, $$-\mathcal{L}^\lambda w^\lambda \leq 0 \mbox{ in } \{w^\lambda>0\} \cap B_r(x_0).$$ We see that $w^\lambda(x_0)>0$. Hence the maximum of $w^\lambda$ in $\overline{B_r(x_0)}$ is positive and, by the maximum principle, $w^\lambda$ attains its maximum on the boundary $\{w^\lambda>0\} \cap \partial B_r(x_0)$, and therefore \begin{equation*} \sup_{\overline{B_r(x_0)}} u^\lambda(x,t_0) \geq \sup_{|x-x_0|=r} u^\lambda(x,t_0) > \inf_{|x-x_0|=r}\frac{C_0}{n} \, h^\lambda(x-x_0). \end{equation*} By the quadratic growth of $h^\lambda$, whose growth-rate coefficients depend only on the elliptic constants, we have \begin{equation*} \sup_{\overline{B_r(x_0)}} u^\lambda(x,t_0) \geq Cr^2 \end{equation*} for some constant $C$ which does not depend on $\lambda$. \end{proof} \begin{lemma}[cf. {\cite[Lemma~5.4]{K3}}] \label{limit of sequence in 0-level set of ulambda k} Suppose that $(x_k,t_k) \in \{u^{\lambda_k}=0\}$ and $(x_k,t_k,\lambda_k) \rightarrow (x_0,t_0,\infty)$.
Let $U=U_{C_*,L}$ be the limit function as in Theorem~\ref{convergence of variational solutions}. Then:\begin{enumerate}[label=\alph*)] \item $U(x_0,t_0)=0$; \item if $x_k \in \Gamma_{t_k}(u^{\lambda_k})$ then $x_0 \in \Gamma_{t_0}(U)$. \end{enumerate} \end{lemma} \begin{proof} See the proof of \cite[Lemma~5.4]{K3}. \end{proof} A weak monotonicity in time of the solution of the Stefan problem \eqref{Stefan} is given by the following lemma. \begin{lemma}[cf. {\cite[Lemma~4.7, Lemma~4.8]{PV}}, Weak monotonicity] \label{weak monotonicity} Let $u$ be the solution of the variational problem \eqref{Variational problem}, and let $v$ be the associated viscosity solution of the Stefan problem. Suppose that $v_0$ satisfies \eqref{initial data}. Then there exists $C \geq 1$ independent of $x$ and $t$ such that \begin{equation} \label{monotonicity condition} v_0(x) \leq C v(x, t) \mbox{ and } u(x,t)\leq C t v(x,t) \mbox{ in } (\mathbb{R}^n \setminus K) \times [0,\infty). \end{equation} \end{lemma} \begin{proof} Following the same arguments as in \cite[Lemma~4.7, Lemma~4.8]{PV}, we obtain \eqref{monotonicity condition} simply by using the elliptic operator $\mathcal{L}$ instead of the Laplace operator. \end{proof} Lemma~\ref{positive sup rescaling} and Lemma~\ref{weak monotonicity} automatically give us a crucial uniform lower estimate on $v^\lambda$ and allow us to show the relationship between $v_*,v^*$ and $V$. \begin{cor} \label{monotoniciy for v} There exists a constant $C_1=C_1(n,M)$ such that if $(x_0,t_0) \in \Omega(v^\lambda)$ and $B_r(x_0) \cap \Omega_0^\lambda = \emptyset$, then \begin{equation*} \sup_{B_r(x_0)}v^\lambda(x,t_0) \geq \frac{C_1r^2}{t_0}. \end{equation*} \end{cor} \begin{lemma} \label{relationship v^*,v_*,V} Let $v$ be the viscosity solution of \eqref{Stefan} and $v^\lambda$ be its rescaling. Then the following statements hold.
\begin{enumerate}[label=\roman*)] \item \label{i} $v^*(\cdot,t)$ is a subsolution of \eqref{elliptic hom eq} in $\mathbb{R}^n \setminus \{0\}$ and $v_{*}(\cdot,t)$ is a supersolution of \eqref{elliptic hom eq} in $\Omega_t(v_*) \setminus \{0\}$ in the viscosity sense. \item \label{ii} $\Omega(V) \subset \Omega(v_*)$ and in particular $v_* \geq V$. \item \label{iii} $\Gamma(v^*) \subset \Gamma(V)$. \end{enumerate} \end{lemma} \begin{proof} i) follows from standard viscosity arguments, noting that by classical homogenization results we can take a sequence of test functions for the rescaled elliptic equation that converges to the test function for \eqref{elliptic hom eq}. ii) See \cite[Lemma~5.5]{K3}; the conclusion holds by i), Lemma~\ref{boundary condition for limit problem} and Lemma~\ref{weak monotonicity}. iii) See \cite[Lemma~5.6 ii]{K3}. \end{proof} \subsection*{Proof of Theorem \ref{convergence of rescaled viscosity solution}} \begin{proof} We follow the proof of \cite[Theorem 4.2]{PV}; see \cite{PV} for more details. \textit{\textbf{Step 1}}. We first show the convergence results for the problem with initial data satisfying (\ref{initial data}), using the weak monotonicity in time of the solution, Lemma~\ref{weak monotonicity}, and its consequences. By Lemma~\ref{relationship v^*,v_*,V}, the correct singularity of $v^*$ from Lemma~\ref{boundary condition for limit problem} and the comparison principle for the elliptic equation \eqref{elliptic hom eq} we have $$V(x,t) \leq v_*(x,t) \leq v^*(x,t) \leq V_{C_*+\varepsilon, \left<\frac{1}{g}\right>}(x,t).$$ Letting $\varepsilon \to 0$, we obtain $v_*=v^*= V$ by continuity and, in particular, $\Gamma(v_*)= \Gamma(v^*)= \Gamma(V)$. Now we show the locally uniform convergence of the free boundaries with respect to the Hausdorff distance.
To simplify the notation, we fix $0 < t_1< t_2$ and define \begin{align*} & \Gamma^\lambda:= \Gamma(v^\lambda) \cap \{t_1 \leq t \leq t_2\}, & \Gamma^\infty:= \Gamma(V) \cap \{t_1 \leq t \leq t_2\}. \end{align*} The result will follow if we show that for all $\delta >0$, there exists $\lambda_0>0$ such that for all $\lambda \geq \lambda_0$, \begin{equation} \label{1st Theorem 4.2} \begin{aligned} \operatorname{dist}((x_0, t_0), \Gamma^\infty) &< \delta \text{ for all $(x_0, t_0) \in \Gamma^\lambda$}, \mbox{ and }\\ \operatorname{dist}((x_0, t_0), \Gamma^\lambda) &< \delta \text{ for all $(x_0, t_0) \in \Gamma^\infty$}. \end{aligned} \end{equation} A contradiction argument as in the proofs of \cite[Theorem~7.1]{P1} and \cite[Theorem~4.2]{PV}, using Lemma~\ref{limit of sequence in 0-level set of ulambda k} above yields the existence of $\lambda_0$ for the first inequality. Hence the main task now is to show the existence of $\lambda_0$ for the second inequality in (\ref{1st Theorem 4.2}). Note that we only need to show this pointwise. The result then follows from the compactness of $\Gamma^\infty$. Suppose that there exists $\delta >0, (x_0,t_0) \in \Gamma^\infty$ and $\{\lambda_k\}, \lambda_k \rightarrow \infty$, such that $\displaystyle \mbox{dist}((x_0,t_0), \Gamma^{\lambda_k}) \geq \frac{\delta}{2}$ for all $k$. Then there exists $r >0$ such that after passing to a subsequence if necessary, we can assume that $D_r(x_0,t_0):= B(x_0, r) \times [t_0-r,t_0+r]$ satisfies either \begin{equation} \label{4nd Theorem 4.2} D_r(x_0,t_0) \subset \{v^{\lambda_k}=0\}, \quad \text{ for all } k \end{equation} or, \begin{equation} \label{3rd Theorem 4.2} D_r(x_0,t_0) \subset \{v^{\lambda_{k}}>0\}, \quad \text{ for all } k. \end{equation} But (\ref{4nd Theorem 4.2}) clearly implies $V=v_* =0$ in $D_r(x_0,t_0)$, contradicting $(x_0,t_0) \in \Gamma^\infty$. Thus we assume (\ref{3rd Theorem 4.2}). 
Following \cite{PV}, to handle Harnack's inequality for a parabolic equation that becomes elliptic in the limit, we rescale time as \begin{equation*} w^k(x,t):=v^{\lambda_k}(x,\lambda_k^{\frac{2-n}{n}} t). \end{equation*} Then $w^k>0$ in $D_r^w(x_0,t_0):=B(x_0,r) \times [\lambda_k^{\frac{n-2}{n}}(t_0-r),\lambda_k^{\frac{n-2}{n}}(t_0+r)]$ and $w^k$ satisfies $w^k_t -\mathcal{L}^{\lambda_k} w^k=0$ in $D_r^w(x_0,t_0)$. Since $\lambda_k^{\frac{n-2}{n}} \to \infty$ as $k \to \infty$, for any fixed $\tau>0$ there exists $\lambda_0$ such that $\tau < \lambda_k^{\frac{n-2}{n}} \tfrac r4$ for all $\lambda_k \geq \lambda_0$. Applying Harnack's inequality to the parabolic equation $w^k_t -\mathcal{L}^{\lambda_k} w^k=0$, we find that for a fixed $\tau > 0$ there exists a constant $C_1 > 0$ such that for each $t \in [t_0-\frac r2, t_0+\frac r2]$ and all $\lambda_k$ with $\tau < \lambda_k^{\frac{n-2}{n}} \tfrac r4$ we have \begin{equation*} \sup_{B\left(x_0,\tfrac{r}{2}\right)} w^k(\cdot, \lambda_k^{\frac{n-2}{n}}t- \tau) \leq C_1 \inf_{B\left(x_0,\tfrac{r}{2}\right)}w^k(\cdot, \lambda_k^{\frac{n-2}{n}}t). \end{equation*} As noted in \cite[Theorem 4.2]{PV}, in the isotropic case the constant $C_1$ in Harnack's inequality can be taken independent of $\lambda_k$. In the anisotropic case, this constant also depends on the elliptic constants of the operator $\mathcal{L}^{\lambda_k}$; however, the rescaling of the operator does not change the elliptic constants, so $C_1$ can again be taken independent of $\lambda_k$. By this inequality and Corollary~\ref{monotoniciy for v} we have \begin{equation*} \frac{C_2r^2}{t- \lambda_k^{\frac{2-n}{n}}\tau} \leq \sup_{B\left(x_0,\tfrac{r}{2}\right)} v^{\lambda_k}(\cdot, t- \lambda_k^{\frac{2-n}{n}}\tau) \leq C_1 \inf_{B\left(x_0,\tfrac{r}{2}\right)}v^{\lambda_k}(\cdot, t) \end{equation*} for all $t \in [t_0 - \frac r2, t_0 + \frac r2]$ and all $\lambda_k \geq \lambda_0$ large enough, where $C_2$ depends only on $n$ and $M$.
In the limit $\lambda_k \rightarrow \infty$, the uniform convergence of $\{v^{\lambda_k}\}$ to $V$ implies $V>0$ in $B(x_0,\frac r2) \times [t_0-\frac r2,t_0+\frac r2]$, which contradicts the assumption $(x_0,t_0) \in \Gamma^\infty \subset \Gamma(V)$. This concludes the proof of Theorem~\ref{convergence of rescaled viscosity solution} when (\ref{monotonicity condition}) holds. \textit{\textbf{Step 2}}. For general initial data, arguing as in Step 2 of the proof of \cite[Theorem 4.2]{PV}, we are able to find upper and lower bounds for the initial data for which (\ref{monotonicity condition}) holds. The comparison principle for viscosity solutions of the Stefan problem \eqref{Stefan} then yields the convergence, since the limit function $V$ is unique and does not depend on the initial data. \end{proof} \section*{Acknowledgments} The first author was partially supported by JSPS KAKENHI Grants No. 26800068 (Wakate B) and No. 18K13440 (Wakate). This work is a part of the doctoral research of the second author. The second author would like to thank her Ph.D. supervisor Professor Seiro Omata for his invaluable support and advice. \begin{bibdiv} \begin{biblist} \bib{ABL}{article}{ author={Anantharaman, Arnaud}, author={Blanc, Xavier}, author={Legoll, Frederic}, title={Asymptotic behavior of Green functions of divergence form operators with periodic coefficients}, journal={Appl. Math. Res. Express. AMRX}, date={2013}, number={1}, pages={79--101}, issn={1687-1200}, review={\MR{3040889}}, } \bib{Aronson}{article}{ author={Aronson, D. G.}, title={Bounds for the fundamental solution of a parabolic equation}, journal={Bull. Amer. Math. Soc.}, volume={73}, date={1967}, pages={890--896}, issn={0002-9904}, review={\MR{0217444}}, } \bib{ALin}{article}{ author={Avellaneda, M.}, author={Lin, Fang-Hua}, title={$L^p$ bounds on singular integrals in homogenization}, journal={Comm. Pure Appl.
Math.}, volume={44}, date={1991}, number={8-9}, pages={897--910}, issn={0010-3640}, review={\MR{1127038}}, doi={10.1002/cpa.3160440805}, } \bib{Baiocchi}{article}{ author={Baiocchi, Claudio}, title={Sur un probl\`eme \`a fronti\`ere libre traduisant le filtrage de liquides \`a travers des milieux poreux}, language={French}, journal={C. R. Acad. Sci. Paris S\'er. A-B}, volume={273}, date={1971}, pages={A1215--A1217}, review={\MR{0297207}}, } \bib{BLP}{book}{ author={Bensoussan, A.}, author={Lions, J.-L.}, author={Papanicolaou, G.}, title={Asymptotic analysis for periodic structures}, note={Corrected reprint of the 1978 original [MR0503330]}, publisher={AMS Chelsea Publishing, Providence, RI}, date={2011}, pages={xii+398}, isbn={978-0-8218-5324-5}, review={\MR{2839402}}, } \bib{BD}{article}{ author={Bossavit, A.}, author={Damlamian, A.}, title={Homogenization of the Stefan problem and application to magnetic composite media}, journal={IMA J. Appl. Math.}, volume={27}, date={1981}, number={3}, pages={319--334}, issn={0272-4960}, review={\MR{633807}}, doi={10.1093/imamat/27.3.319}, } \bib{C}{article}{ author={L.A. Caffarelli}, title={The regularity of free boundaries in higher dimensions}, date={1977}, journal={Acta Math.}, pages={155--184}, number={139}, review={MR 56:12601} } \bib{CF}{article}{ author={L.A. Caffarelli}, author={A. Friedman}, title={Continuity of the temperature in the Stefan problem}, date={1979}, journal={Indiana Univ. Math. J.}, pages={53--70}, number={28}, review={MR 80i:35104}, } \bib{DM1}{article}{ author={Dal Maso, Gianni}, author={Modica, Luciano}, title={Nonlinear stochastic homogenization}, language={English, with Italian summary}, journal={Ann. Mat. Pura Appl. (4)}, volume={144}, date={1986}, pages={347--389}, issn={0003-4622}, review={\MR{870884}}, doi={10.1007/BF01760826}, } \bib{DM2}{article}{ author={Dal Maso, Gianni}, author={Modica, Luciano}, title={Nonlinear stochastic homogenization and ergodic theory}, journal={J. Reine Angew. 
Math.}, volume={368}, date={1986}, pages={28--42}, issn={0075-4102}, review={\MR{850613}}, } \bib{D}{book}{ author={Dal Maso, Gianni}, title={An introduction to $\Gamma$-convergence}, series={Progress in Nonlinear Differential Equations and their Applications}, volume={8}, publisher={Birkh\"auser Boston, Inc., Boston, MA}, date={1993}, pages={xiv+340}, isbn={0-8176-3679-X}, review={\MR{1201152}}, doi={10.1007/978-1-4612-0327-8}, } \bib{Di-B}{article}{ title = {Continuity of weak solution to certain singular parabolic equations}, journal = {Ann. Math. Pura Appl. (IV)}, year = {1982}, volume= {130}, pages = {131-176}, isbn = {978-0-12-434170-8}, doi = {10.1016/B978-0-12-434170-8.50042-X}, url = {https://www.sciencedirect.com/science/article/pii/B978012434170850042X}, author = {Di Benedetto, Emmanuele} } \bib{Duvaut}{article}{ author={Duvaut, Georges}, title={R\'esolution d'un probl\`eme de Stefan (fusion d'un bloc de glace \`a z\'ero degr\'e)}, language={French}, journal={C. R. Acad. Sci. Paris S\'er. A-B}, volume={276}, date={1973}, pages={A1461--A1463}, review={\MR{0328346}}, } \bib{EJ}{article}{ author={Elliott, C. M.}, author={Janovsk\'y, V.}, title={A variational inequality approach to Hele-Shaw flow with a moving boundary}, journal={Proc. Roy. Soc. Edinburgh Sect. 
A}, volume={88}, date={1981}, number={1-2}, pages={93--107}, issn={0308-2105}, review={\MR{611303}}, doi={10.1017/S0308210500017315}, } \bib{E}{book}{ author={Evans, Lawrence C.}, title={Partial differential equations}, series={Graduate Studies in Mathematics}, volume={19}, edition={2}, publisher={American Mathematical Society, Providence, RI}, date={2010}, pages={xxii+749}, isbn={978-0-8218-4974-3}, review={\MR{2597943}}, doi={10.1090/gsm/019}, } \bib{FriedmanP}{book}{ author={Friedman, Avner}, title={Partial differential equations of parabolic type}, publisher={Prentice-Hall, Inc., Englewood Cliffs, N.J.}, date={1964}, pages={xiv+347}, review={\MR{0181836}}, } \bib{F}{article}{ author = {Friedman, Avner}, journal = {Transactions of the American Mathematical Society}, number = {1}, pages = {51--87}, publisher = {American Mathematical Society}, title = {The Stefan Problem in Several Space Variables}, volume = {133}, year = {1968} } \bib{F2}{book}{ author={Friedman, Avner}, title={Variational principles and free-boundary problems}, series={Pure and Applied Mathematics}, note={A Wiley-Interscience Publication}, publisher={John Wiley \&\ Sons, Inc., New York}, date={1982}, pages={ix+710}, isbn={0-471-86849-3}, review={\MR{679313}}, } \bib{FK}{article}{ author={Friedman, Avner}, author={Kinderlehrer, David}, title={A one phase Stefan problem}, journal={Indiana Univ. Math. J.}, volume={24}, date={1974/75}, number={11}, pages={1005--1035}, issn={0022-2518}, review={\MR{0385326}}, } \bib{GS}{article}{ author={Gilbarg, D.}, author={Serrin, James}, title={On isolated singularities of solutions of second order elliptic differential equations}, journal={J. 
Analyse Math.}, volume={4}, date={1955/56}, pages={309--340}, issn={0021-7670}, review={\MR{0081416}}, doi={10.1007/BF02787726}, } \bib{GT}{book}{ author={Gilbarg, David}, author={Trudinger, Neil S.}, title={Elliptic partial differential equations of second order}, series={Classics in Mathematics}, note={Reprint of the 1998 edition}, publisher={Springer-Verlag, Berlin}, date={2001}, pages={xiv+517}, isbn={3-540-41160-7}, review={\MR{1814364}}, } \bib{GM}{article}{ author={Gloria, Antoine}, author={Marahrens, Daniel}, title={Annealed estimates on the Green functions and uncertainty quantification}, journal={Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire}, volume={33}, date={2016}, number={5}, pages={1153--1197}, issn={0294-1449}, review={\MR{3542610}}, } \bib{HNS}{article}{ author={Had\v zi\'c, Mahir}, author={Navarro, Gustavo}, author={Shkoller, Steve}, title={Local well-posedness and global stability of the two-phase Stefan problem}, journal={SIAM J. Math. Anal.}, volume={49}, date={2017}, number={6}, pages={4942--5006}, issn={0036-1410}, review={\MR{3735289}}, doi={10.1137/16M1083207}, } \bib{HS_CPAM}{article}{ author={Had\v zi\'c, Mahir}, author={Shkoller, Steve}, title={Global stability and decay for the classical Stefan problem}, journal={Comm. Pure Appl. Math.}, volume={68}, date={2015}, number={5}, pages={689--757}, issn={0010-3640}, review={\MR{3333840}}, doi={10.1002/cpa.21522}, } \bib{HS_Royal}{article}{ author={Had\v zi\'c, Mahir}, author={Shkoller, Steve}, title={Global stability of steady states in the classical Stefan problem for general boundary shapes}, journal={Philos. Trans. Roy. Soc. A}, volume={373}, date={2015}, number={2050}, pages={20140284, 18}, issn={1364-503X}, review={\MR{3393322}}, doi={10.1098/rsta.2014.0284}, } \bib{JKO}{book}{ author={Jikov, V. V.}, author={Kozlov, S. M.}, author={Ole\u\i nik, O. A.}, title={Homogenization of differential operators and integral functionals}, note={Translated from the Russian by G. A. Yosifian [G. A. 
Iosif$\prime$yan]}, publisher={Springer-Verlag, Berlin}, date={1994}, pages={xii+570}, isbn={3-540-54809-2}, review={\MR{1329546}}, doi={10.1007/978-3-642-84659-5}, } \bib{FN}{article}{ author={Kinderlehrer, David}, author={Nirenberg, Louis}, title={The smoothness of the free boundary in the one phase Stefan problem}, journal={Comm. Pure Appl. Math.}, volume={31}, date={1978}, number={3}, pages={257--282}, issn={0010-3640}, review={\MR{480348}}, doi={10.1002/cpa.3160310302}, } \bib{K1}{article}{ author={Kim, I.~C.}, title={Uniqueness and existence results on the Hele-Shaw and the Stefan problems}, date={2003}, journal={Arch. Ration. Mech. Anal.}, volume={168}, number={4}, pages={299--328}, } \bib{K2}{article}{ author={Kim, I.~C.}, author={Mellet, A.}, title={Homogenization of a Hele-Shaw problem in periodic and random media}, date={2009}, journal={Arch. Ration. Mech. Anal.}, volume={194}, number={2}, pages={507--530}, } \bib{K3}{article}{ author={Kim, I.~C.}, author={Mellet, A.}, title={Homogenization of one-phase Stefan-type problems in periodic and random media}, date={2010}, journal={Trans. Amer. Math. Soc.}, volume={362}, number={8}, pages={4161--4190}, } \bib{KP}{article}{ author={Kim, Inwon C.}, author={Po\v z\'ar, Norbert}, title={Viscosity solutions for the two-phase Stefan problem}, journal={Comm. Partial Differential Equations}, volume={36}, date={2011}, number={1}, pages={42--66}, issn={0360-5302}, review={\MR{2763347}}, doi={10.1080/03605302.2010.526980}, } \bib{LSW}{article}{ author={Littman, D.}, author={Stampacchia, G.}, author={Weinberger, H. F}, title={Regular points for elliptic equations with discontinuous coefficients}, date={1963}, journal={Ann. Scuola Norm. Sup. 
Pisa}, volume={17}, pages={43--77}, } \bib{Matano}{article}{ title = {Asymptotic Behavior of the Free Boundaries Arising in One Phase Stefan Problems in Multi-Dimensional Spaces}, series = {North-Holland Mathematics Studies}, publisher = {North-Holland}, volume = {81}, pages = {133--151}, year = {1983}, booktitle = {Nonlinear Partial Differential Equations in Applied Science; Proceedings of The U.S.-Japan Seminar, Tokyo, 1982}, issn = {0304-0208}, doi = {10.1016/S0304-0208(08)72089-9}, url = {http://www.sciencedirect.com/science/article/pii/S0304020808720899}, author = {Matano, Hiroshi} } \bib{MO}{article}{ author={Marahrens, Daniel}, author={Otto, Felix}, title={Annealed estimates on the Green function}, journal={Probab. Theory Related Fields}, volume={163}, date={2015}, number={3-4}, pages={527--573}, issn={0178-8051}, review={\MR{3418749}}, } \bib{M}{book}{ author={Miranda, C.}, title={Partial Differential Equations of Elliptic Type}, publisher={Springer-Verlag}, address={New York-Berlin}, date={1970}, } \bib{P1}{article}{ author={Po\v{z}\'{a}r, N.}, title={Long-time behavior of a Hele-Shaw type problem in random media}, date={2011}, journal={Interfaces Free Bound}, volume={13}, number={3}, pages={373--395}, } \bib{Pozar15}{article}{ author={Po\v z\'ar, Norbert}, title={Homogenization of the Hele-Shaw problem in periodic spatiotemporal media}, journal={Arch. Ration. Mech. Anal.}, volume={217}, date={2015}, number={1}, pages={155--230}, issn={0003-9527}, review={\MR{3338444}}, doi={10.1007/s00205-014-0831-0}, } \bib{PV}{article}{ author={Po\v{z}\'{a}r, N.}, author={Vu, G.T.T.}, title={Long-time behavior of the one phase Stefan problem in random and periodic media}, date={to appear}, journal={Discrete Contin. Dyn. Syst. Ser. S}, note={https://arxiv.org/abs/1702.07119}, } \bib{QV}{article}{ author={Quir\'{o}s, F.}, author={V\'{a}zquez, J.~L.}, title={Asymptotic convergence of the Stefan problem to Hele-Shaw}, date={2001}, journal={Trans. Amer. Math.
Soc.}, volume={353}, number={2}, pages={609--634 (electronic)}, } \bib{R3}{article}{ author={Rodrigues, J.~F.}, title={Free boundary convergence in the homogenization of the one-phase Stefan problem}, journal={Trans. Amer. Math. Soc.}, volume={274}, date={1982}, number={1}, pages={297--305}, issn={0002-9947}, review={\MR{670933}}, doi={10.2307/1999510}, } \bib{R1}{book}{ author={Rodrigues, J.~F.}, title={Obstacle problems in mathematical physics}, publisher={Elsevier Science Publishers B.V.}, address={The Netherlands}, date={1987}, } \bib{RReview}{article}{ author={Rodrigues, J.~F.}, title={The Stefan problem revisited}, conference={ title={Mathematical models for phase change problems}, address={\'Obidos}, date={1988}, }, book={ series={Internat. Ser. Numer. Math.}, volume={88}, publisher={Birkh\"auser, Basel}, }, date={1989}, pages={129--190}, review={\MR{1038069}}, } \bib{R2}{article}{ author={Rodrigues, J.~F.}, title={Variational methods in the Stefan problem}, date={1994}, journal={Phase transitions and hysteresis (Montecatini Terme, 1993)}, pages={147--212}, note={Lecture Notes in Math., 1584, Springer, Berlin, 1994}, } \bib{Sacks}{article}{ title = {Continuity of solutions of a singular parabolic equation}, journal = {Nonlinear Analysis: Theory, Methods and Applications}, volume = {7}, number = {4}, pages = {387--409}, year = {1983}, issn = {0362-546X}, doi = {10.1016/0362-546X(83)90092-5}, url = {http://www.sciencedirect.com/science/article/pii/0362546X83900925}, author = {Sacks, Paul E.}, } \bib{S}{article}{ author={Serrin, James}, title={Local behavior of solutions of quasi-linear equations}, journal={Acta Math.}, volume={111}, date={1964}, pages={247--302}, issn={0001-5962}, review={\MR{0170096}}, doi={10.1007/BF02391014}, } \bib{SW}{article}{ author={Serrin, James}, author={Weinberger, H. F.}, title={Isolated singularities of solutions of linear elliptic equations}, journal={Amer. J.
Math.}, volume={88}, date={1966}, pages={258--272}, issn={0002-9327}, review={\MR{0201815}}, doi={10.2307/2373060}, } \bib{ZKOH}{article}{ author={\v Zikov, V. V.}, author={Kozlov, S. M.}, author={Ole\u\i nik, O. A.}, author={Ha T\cprime en Ngoan}, title={Averaging and $G$-convergence of differential operators}, language={Russian}, journal={Uspekhi Mat. Nauk}, volume={34}, date={1979}, number={5(209)}, pages={65--133, 256}, issn={0042-1316}, review={\MR{562800}}, } \bib{Ziemer}{article}{ author = {Ziemer, William}, title = {Interior and Boundary Continuity of Weak Solutions of Degenerate Parabolic Equations}, journal = {Trans. Amer. Math. Soc.}, volume = {271}, date = {1982}, pages = {733--748}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} In the field of embedded systems, SoC chips have played an important role in the evolution of this technology area. The most recent SoC chips have several peripherals that increase the range of applications they can be used for. Some of these peripherals are used for digital, analog, mixed-signal and often radio-frequency functions, and recently SoC chips include dedicated graphics hardware in order to accelerate graphical applications. In recent years, the dominance of SoCs for embedded system applications has begun to be questioned. FPGA (Field Programmable Gate Array) devices with an on-chip processing system, known in the literature as SoC FPGAs or PSoCs (Programmable System on Chip), have recently emerged as potential solutions for compact processing applications. PSoCs combine the best of both worlds: they offer a familiar processing-system development interface for sequential algorithms or embedded OS applications, and at the same time they provide an empty landscape for custom hardware development that enlarges the set of applications of the system. PSoCs also offer a flexible programmable alternative to purely sequential processing, implementing any hardware function to augment the capabilities of the PS. In fact, due to the inherently parallel nature of the FPGA, multiple hardware blocks can operate simultaneously, either in parallel, when the logic is replicated, or in pipelined stages. These capabilities open up a wide range of possibilities for applications that can be deployed on these systems. PSoCs can be found in different applications where the main, lighter tasks are processed by the PS, while computationally harder tasks are deployed on the PL. Some examples are: \textbf{- Automotive}: Cars nowadays contain Advanced Driver Assistance Systems (ADAS), which refers to the collection of systems provided in a car for safety and comfort.
FPGAs, and now PSoC devices, can be used to realize these automotive systems \cite{fons2012fpga,velez2015reconfigurable}. \textbf{- Image and Video Processing}: Here, PSoC processing capabilities are particularly valuable, because these applications require both deterministic processing of large amounts of pixel data and software algorithms for extracting information from images \cite{dipert2012embedded}. \textbf{- Medical}: An important issue in medical diagnosis is seeing inside the body. This task requires medical imaging equipment that runs sophisticated image processing algorithms to manage large data sets. PSoCs offer capabilities that support both high-speed parallel processing and software-based algorithms \cite{khan2012fpgas}. \textbf{- High Performance Computing}: For fast processing of large datasets, which can typically be accelerated with dedicated hardware \cite{sundararajan2010high}. Recently, the growth of deep learning systems has led to an increase in the use of PSoCs in this field due to their massive parallel processing capacity and their high-speed bus interfaces between PS and PL \cite{aimar2017nullhop,Qiu2016,Zhang2015,Zhang2016}. In this paper, a performance evaluation of memory transfers on a Xilinx PSoC is presented and tested for a CNN accelerator application \cite{Aimar2016}. It compares a user-level driver with several improvements against a kernel-level driver, both running under a Linux OS for embedded systems. This paper is organized as follows. Section \ref{sec:psoc_platform} briefly describes the Xilinx Zynq PSoC architecture and enumerates the interfaces between the PS (Processing System) and the PL (Programmable Logic) in the Zynq. Section \ref{sec:axidma_communication} explains the AXI DMA transfer flow for user-level and kernel-level drivers, while section \ref{sec:results} presents transfer timing results for each scenario. Finally, section \ref{sec:conclusions} draws the conclusions.
\section{Xilinx PSoC Platform}\label{sec:psoc_platform} Zynq chips from Xilinx are PSoC architectures which contain an ARM Cortex-A family processor and re-programmable logic (FPGA) in the same chip. A PSoC platform consists of a printed circuit board (PCB) that hosts a PSoC and several external chips to make the system work properly, typically under a Linux OS. These external components are usually DDR memory, USB and Ethernet transceivers, an SD card, JTAG for debugging and an expansion connector with GPIOs. These platforms represent a new co-design solution where the embedded OS (Linux) on the ARM cores executes software tasks (e.g., data normalization, data collection from sensors) and the reconfigurable logic implements a design in order to accelerate a specific application. Interconnection between the PL and the ARM processor is done through the PS. The PS is an ARM interface IP core, which acts as a logic connection between the ARM and the PL and helps integrate custom and embedded IPs. The PS configures the different interfaces of the ARM core (I2C, SPI, ...) and interfaces to the PL, such as AXI and clock speeds (Fig.~\ref{fig1}). AXI stands for Advanced eXtensible Interface and the current version is AXI4, which is part of the ARM AMBA 4.0 open standard \cite{amba4axi4}. The AMBA standard was originally developed by ARM for microcontrollers but was then extended to SoCs, including PSoCs, and it is a well-suited interconnect technology between PS and PL. There are three different types of AXI4, each of which represents a different bus protocol, as summarized below: \textbf{- AXI4:} Oriented to memory-mapped links. It provides the highest performance. An address is supplied, followed by a data burst transfer of up to 256 words (a data word can be from 32 to 1024 bits) \cite{acasandrei2017open}. \textbf{- AXI4-Lite:} A simplified link supporting only one data transfer per connection (no bursts). AXI4-Lite is also memory-mapped.
In this case, an address and a single data word are transferred. This interface is commonly used to map control signals for devices \cite{acasandrei2017open}. \textbf{- AXI4-Stream:} Oriented to high data flow applications with DMA support. It is not memory-mapped: no address channel is used. It allows unlimited burst transfers of unrestricted size. The protocol allows merging, packing and width conversion, and it supports sparse, continuous, aligned and unaligned streams \cite{amba4axi4}. \begin{figure}[ht!] \centering \includegraphics[width=35mm]{images/zynq-proc-system.jpg} \caption{Programmable logic communication with Processing System \label{fig1}} \end{figure} In this work the Zynq-7100 MMP platform from Avnet has been used. This platform contains a PSoC with a Dual ARM\textsuperscript{\textregistered} Cortex\textsuperscript{\texttrademark}-A9 MPCore\textsuperscript{\texttrademark} operating at 666MHz with an FPU engine, 1GB of DDR3 memory, SD card support, USB and GigaEthernet. The PSoC includes a Kintex-7 FPGA with 444K logic cells in the same chip. Up to 132 GPIOs are available for external connectivity of the logic. A baseboard called DockSoC, designed for this MMP platform and manufactured by COBER, manages all the power supplies the MMP needs (from 1V to 12V) and includes the JTAG port over UART and several parallel interfaces to neuromorphic chips through the CAVIAR and ROME parallel AER connectors \cite{serrano2009caviar}. The DockSoC can act as a daughter board for the AERNode \cite{aernode} platform to expand connectivity to other PSoC platforms and\/or to support connectivity to other neuromorphic systems. Figure \ref{figDockSoC} shows a picture of the setup used, with the PSoC platform, the DockSoC baseboard and a USB neuromorphic retina, called DAVIS.
The DAVIS \cite{davis} is a dynamic vision sensor that measures luminosity changes independently per pixel and sends out events signaling which pixels have detected a change over a configurable threshold. By collecting a fixed number of events from this sensor, a histogram of those events can be used as a frame to be processed by the CNN accelerator running on the platform. \begin{figure}[ht!] \centering \includegraphics[width=80mm]{images/SoCDock_DAVIS.JPG} \caption{DockSoC platform and DAVIS \label{figDockSoC}} \end{figure} \section{AXI DMA communication}\label{sec:axidma_communication} The PL can be connected to the ARM processors through multiple interfaces, as mentioned before. However, the fastest way is using direct memory access (DMA) under the AXI-Stream protocol, called AXI-DMA. AXI-DMA consists of two different buses: Memory Mapped to Stream (MM2S) and Stream to Memory Mapped (S2MM). MM2S reads from DDR memory and transmits data to the PL, while S2MM writes data from the PL to DDR memory. The DMA architecture presented in this paper contains two modules that have been created to adapt the S2MM and MM2S interface data flows to and from the CNN accelerator implemented in the PL, called NullHop \cite{aimar2017nullhop}. NullHop is a hardware accelerator designed for multi-layered CNN execution for deep-learning classification applications. It resides in the PL and needs to receive both the visual input (the feature maps for a particular layer, or a portion of them) and the parameters (convolution kernels) from the PS in order to calculate the results (output feature maps). It has been designed with 128 MAC blocks and works in a streamed way. Once the accelerator has received the parameters, the visual input is streamed in. After a couple of rows have been received, the MACs start to operate and to produce a streamed output, which is sent back to the PS. To extract the maximum performance from our PSoC system, the data flow in the application must be properly coordinated.
When an OS is managing the PSoC, there are two different memory spaces: the virtual one, where the user application works; and the physical one, which is managed by the DMA controller and is therefore visible to the hardware implemented in the PL. \begin{figure}[ht!] \centering \includegraphics[width=70mm]{images/Memory_Transfers_Scheme.png} \caption{Memory hierarchy in a PSoC with an OS. The user application works in the virtual space, while the DMA controller in the PL works with the physical one. The API and/or driver perform the transfers between both spaces. \label{virtual_physical_mem}} \end{figure} Figure \ref{virtual_physical_mem} shows the memory hierarchy from the user application to the CNN accelerator. Working with an embedded Linux OS, there are two ways to communicate with devices: (1) user-level: using the mmap() function to map a view of the device's physical address space into the process's virtual address space. This function is called by the user application directly, and the DMA transfers can be configured in a polling scheme, where the user application is frequently blocked, waiting for the transfer to be completed before processing the data; or (2) kernel-level: a piece of software running at a higher privilege level of the OS, with interrupt support, in order to free the user application from blocking states until data is ready, allowing the execution of other needed tasks. Furthermore, the kernel level ensures the integrity of the software by preventing wrongful use of physical address spaces reserved for other processes running in the OS. In this work a performance comparison between these two communication schemes is presented. Furthermore, two different operating modes for the user-level driver have been included in the study: a completely polling-based solution, which has the lowest latencies between DMA transfers, and a scheduled solution, where the application is not continuously blocked waiting for DMA transfers.
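As an illustration of the user-level mmap() approach just described, the following Python sketch maps a stretch of "device" memory into the process's virtual address space and polls a status register. The 0x00/0x04 control/status offsets are invented for illustration (they are not the actual AXI-DMA register map), and an ordinary temporary file stands in for /dev/mem so the sketch runs anywhere.

```python
import mmap
import os
import struct
import tempfile

PAGE = mmap.PAGESIZE

def map_device(path, phys_base=0, span=PAGE):
    """Map `span` bytes of the device's physical address space (normally a
    node such as /dev/mem, opened at the DMA controller's base address)
    into the process's virtual address space."""
    fd = os.open(path, os.O_RDWR)
    mem = mmap.mmap(fd, span, mmap.MAP_SHARED,
                    mmap.PROT_READ | mmap.PROT_WRITE, offset=phys_base)
    os.close(fd)  # the mapping stays valid after closing the descriptor
    return mem

def write_reg(mem, offset, value):
    mem[offset:offset + 4] = struct.pack("<I", value)  # 32-bit little-endian

def read_reg(mem, offset):
    return struct.unpack("<I", mem[offset:offset + 4])[0]

# Demo: a zero-filled temporary file plays the role of the device node.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * PAGE)
    path = f.name

regs = map_device(path)
write_reg(regs, 0x00, 0x1)         # hypothetical "start transfer" bit
while read_reg(regs, 0x04) & 0x1:  # polling loop on a hypothetical busy flag
    pass
ctrl, status = read_reg(regs, 0x00), read_reg(regs, 0x04)
regs.close()
os.remove(path)
```

In a real driver the polling loop is exactly where the user application blocks, which is the cost the kernel-level interrupt-based scheme avoids.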
\subsection{User-level} We have compared two read/write buffer implementations: single and double buffer. The first establishes only one channel for data transfers between virtual and physical memory. The double-buffer implementation reserves two buffers in memory for virtual-to-physical transfers: while one holds data ready to be sent to the PL, the other is used to prepare the data for the next transmission. This second implementation reduces overhead latencies at the OS level. Apart from the buffer implementation, two user-level driver operating modes have been implemented: \textit{Unique} and \textit{Blocks}. \textit{Unique} mode sends all the data to the buffer at once, without any kind of partitioning. On the other hand, \textit{Blocks} mode divides the data into smaller chunks to take better advantage of double buffering. Furthermore, two user-level versions have been compared: one completely based on polling, and a second one, closer to the kernel-level scenario explained in the following subsection, where a scheduler manages the different DMA requests to avoid dead-lock waits. \subsection{Kernel-level} In order to give the OS greater flexibility to attend to other tasks in a realistic scenario, we have implemented a kernel-level driver that uses interrupts to manage the configuration of new DMA transfers when they are needed, allowing the PS to work on other tasks in the meantime. In this case, at user level, the software specifies to the driver, at kernel level, where all the data are placed; then, the driver moves these data from virtual to physical space and configures the needed DMA transfers. Here we have used the AXI-DMA driver provided by Xilinx, which supports AXI-Stream DMA transfers of the needed length, or dividing them into small pieces and queuing them as consecutive transfers (scatter-gather mode). To use the Xilinx AXI-DMA driver, a kernel-level API has been developed to adapt the driver to our needs.
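The single- and double-buffer schemes described above can be contrasted with a small Python sketch. The transfer functions below are stand-ins for the real DMA calls, and the overlap between preparing one buffer and transferring the other is simulated with a background thread; this illustrates the idea only, not the actual driver code.

```python
import threading

def send_single_buffer(data, chunk, transfer):
    """Single buffer: copy a chunk from (virtual-space) user data into the
    buffer, then block until the transfer of that chunk has completed."""
    for off in range(0, len(data), chunk):
        buf = bytes(data[off:off + chunk])  # prepare
        transfer(buf)                       # blocks until done

def send_double_buffer(data, chunk, transfer_async, wait):
    """Double buffer (the *Blocks* mode): while buffer i is being moved by
    the DMA engine, buffer 1-i is filled with the next chunk."""
    bufs = [bytearray(chunk), bytearray(chunk)]
    pending, i = None, 0
    for off in range(0, len(data), chunk):
        block = data[off:off + chunk]
        bufs[i][:len(block)] = block   # prepare next chunk (overlaps transfer)
        if pending is not None:
            wait(pending)              # previous transfer must finish first
        pending = transfer_async(bytes(bufs[i][:len(block)]))
        i = 1 - i                      # ping-pong between the two buffers
    if pending is not None:
        wait(pending)

# Stand-ins for the DMA: the "device" just collects the transferred chunks.
received = []

def transfer(b):                       # blocking transfer
    received.append(b)

def transfer_async(b):                 # start a transfer in the background
    t = threading.Thread(target=received.append, args=(b,))
    t.start()
    return t

data = bytes(range(64))
send_double_buffer(data, 16, transfer_async, wait=lambda t: t.join())
```

Because the copy into the idle buffer happens while the other buffer is in flight, the OS-level copy overhead is hidden behind the transfer time, which is the gain the paper attributes to the double-buffer scheme.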
\section{Results}\label{sec:results} We have tested the PSoC under two different scenarios: (1) with hardware in a loop-back connection in the PL that takes data from MM2S and streams it back to the S2MM interface of the DMA controller; and (2) a CNN execution using the NullHop accelerator in the PL, executing the RoShamBo CNN \cite{aimar2017nullhop}. Figures \ref{TransmissionTime_ms} and \ref{TransmissionTime_us_byte} show the results for the first scenario. The evolution of TX and RX transfer times is presented for buffer sizes increasing from 8 bytes to 6 Mbytes, considering the user-level driver with polling, the scheduled user-level driver and the interrupt-based kernel-level driver. For the loop-back streaming it can happen that the TX and RX buffers are full at the same time, so requests for reading the RX buffers may occur at the same time that a new TX request is produced. Since the DDR memory cannot attend to read and write operations at the same time, the bandwidth balance between RX and TX transfers is important in order to avoid blocking states of the system; e.g., a long enough TX transfer can fill up the RX hardware buffer and stop the TX transfer, blocking the system if RX and TX transfers are not properly managed. In these figures it can be seen that TX transfers have slightly higher priority than RX transfers, so TX obtains smaller latencies than RX. The kernel-level driver approach, due to the larger software execution overhead of the Xilinx AXI-DMA driver and the API, produces larger latencies than the user-level approach for small data lengths, but its performance improves for larger data lengths. The user-level solution with polling and without scheduling obtains slightly better results, but it risks blocking the system while the transfers are performed. \begin{figure}[ht!]
\centering \includegraphics[width=85mm]{images/TransmissionTime_ms.png} \caption{Transfer times in ms for data blocks from 8B to 6MB comparing three drivers (user\_level, user\_level\_scheduled and kernel\_level).\label{TransmissionTime_ms}} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=85mm]{images/TransmissionTime_us_byte.png} \caption{Transfer times per byte (in us) for data blocks from 8 bytes to 6MB comparing three drivers (user\_level, user\_level\_scheduled and kernel\_level).\label{TransmissionTime_us_byte}} \end{figure} For the second scenario, we have set up the RoShamBo CNN execution on the MMP platform OS in the same way as described in \cite{aimar2017nullhop}, but we have modified the software to use one of the three modes for controlling the memory transfers between virtual and physical memory and to manage the DMA transfers as described above. In this test, we have used the \textit{single-buffer} configuration and the \textit{Unique} mode. Table \ref{tab:table_nullhop} shows the timings obtained for this case. The lowest latencies are obtained for the user-level mode with polling. This is possible with this relatively small CNN because the transfer lengths are not long enough to block the system. In \cite{aimar2017nullhop} bigger CNNs were tested, such as VGG19, for which this mode cannot be used because it blocks the system. In the second mode, without a kernel-level driver but introducing a scheduler in the OS to avoid blocking the system, the latencies increase by less than 2 ns per byte for TX and less than 150 ns per byte for RX. When the kernel driver is used, TX latency increases by around 6 ns/byte, but RX latency decreases with respect to the scheduler solution, being less than 100 ns/byte slower than the user-level one.
Regarding the whole frame computation time, which requires the execution of 5 convolution layers in the NullHop, and therefore sending and receiving DMA transfers for each layer, the latencies are larger for the kernel-level driver, followed by the scheduled user-level driver and then the user-level driver. This behavior is as expected, since the transfer lengths for the RoShamBo CNN are on the order of 100 Kbytes, where the kernel-level driver does not yet obtain its best results, as depicted in figures \ref{TransmissionTime_ms} and \ref{TransmissionTime_us_byte}. \begin{table}[] \centering \caption{CNN execution time for one frame and TX, RX average transfer times per byte} \label{tab:table_nullhop} \begin{tabular}{|l|l|l|l|} \hline \multicolumn{1}{|c|}{} & \multicolumn{3}{c|}{\cellcolor[HTML]{FFCC67}Unique mode, single-buffer} \\ \cline{2-4} \multicolumn{1}{|c|}{\multirow{-2}{*}{\textbf{NullHop RoShamBo}}} & \cellcolor[HTML]{FFCC67}TX (us/byte) & \cellcolor[HTML]{FFCC67}RX (us/byte) & \cellcolor[HTML]{FFCC67}Frame (ms) \\ \hline \rowcolor[HTML]{ECF4FF} user-level polling & 0.0054 & 0.197 & 6.31 \\ \hline \rowcolor[HTML]{9698ED} user-level drv scheduled & 0.0072 & 0.335 & 6.57 \\ \hline \rowcolor[HTML]{00D2CB} kernel-level drv & 0.011 & 0.294 & 7.39 \\ \hline \end{tabular} \end{table} \section{Conclusions}\label{sec:conclusions} This paper presents and evaluates different software-level implementations of data movement between the virtual memory space of an OS at user level and the physical memory space at kernel level, for DMA transactions between the PS and the PL of a Xilinx Zynq PSoC during CNN executions.
From an implementation at the user-level privilege of the OS using a polling solution, with less memory protection; through an intermediate user-level solution using a scheduler; to the highest protection, using a kernel-level driver with interrupts; this paper has evaluated two different scenarios: a real one, the execution of a CNN for playing RoShamBo with the NullHop CNN hardware accelerator, and a synthetic one for extracting the performance characteristics of the different implementations. User-level solutions give better latencies for data transfers below 1 Mbyte, but they lack flexibility for multi-threaded programs due to the intensive use of polling. Their maximum supported transfer length is 8 Mbytes (the AXI4-Stream limit), but for big transfers the performance decreases due to long polling stages. The kernel-level solution, tested in the worst possible case (single-buffer scheme and unique data transfers), obtains similar latencies for larger data transfer lengths. For the RoShamBo test, since the transfer lengths are on the order of 100 Kbytes, the user-level polling solution performs better due to its smaller software overhead. \section{Acknowledgment} This work was partially supported by the NPP project funded by SAIT (2015-2018) and by the Spanish government grant (with support from the European Regional Development Fund) COFNET (TEC2016-77785-P). The work of R. Tapiador has been supported by a Formaci\'{o}n de Personal Investigador Scholarship from the University of Seville. \bibliographystyle{IEEEtran}
{ "timestamp": "2018-06-05T02:18:10", "yymm": "1806", "arxiv_id": "1806.01106", "language": "en", "url": "https://arxiv.org/abs/1806.01106" }
\section{Introduction} Categorisation, as understood in cognitive psychology, can be described as producing a category label for a given stimulus based on its perceived properties. The same task is known as classification in machine learning, and the method which carries out the task is called a classifier. Computational models of categorisation are developed in both disciplines. In this article we explore the similarities of these computational models in the context of category representation. In cognitive psychology, the issue of representation has fuelled a long-lasting debate of `prototype models' versus `exemplar models'. In machine learning, any classifier can be thought of as having an intrinsic or explicit representation of the categories of interest. As an example of intrinsic representation, consider the neural network type of classifiers. While possessing a remarkable ability to distinguish between categories (subject to proper training), their category representation is encoded within the network structure and weights. As an example of explicit category representation, consider the `nearest neighbour classifier (1-NN)'~\citep{Duda2001}, or `lazy learner', in machine learning. This classifier uses a labelled reference set (interpreted interchangeably as prototypes or exemplars). To categorise a new stimulus ${\bf x}$, 1-NN identifies the stimulus from the reference set most similar to ${\bf x}$, and assigns ${\bf x}$ to the category of that stimulus. We examine the relationship between 1-NN and the {\em prototype} and {\em exemplar} models. From a geometric perspective, stimuli are represented as points in a multidimensional space, categories as regions in that space~\citep{Gardenfors2004}, and similarity between stimuli as distance in the space~\citep{Shepard1957}. Such models have desirable properties such as clear interpretation and the ability to account for the vagueness of natural categories.
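The 1-NN rule just described can be sketched in a few lines of Python; the toy two-category reference set and the choice of Euclidean distance are ours, for illustration only.

```python
import math

def nearest_neighbour(x, reference_set):
    """1-NN: assign x to the category of the most similar reference
    stimulus, with similarity expressed as (negative) Euclidean
    distance in the stimulus space."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    nearest, label = min(reference_set, key=lambda sy: dist(x, sy[0]))
    return label

# Toy reference set: two categories in a 2-D perceptual space.
S = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((1.0, 1.0), "B")]
assert nearest_neighbour((0.1, 0.0), S) == "A"
assert nearest_neighbour((0.9, 1.1), S) == "B"
```

Note that the classifier itself is agnostic about whether the elements of `S` are interpreted as prototypes or exemplars, which is exactly the ambiguity the article exploits.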
Pattern recognition and machine learning abound with approaches and algorithms for selecting a representative subset of examples from a given data set~\citep{Garcia2012}. With this study we aim to alert the psychology reader to the existence of such approaches and algorithms, and also to bring a psychological perspective to the edited nearest neighbour classifier for the benefit of the machine learning reader. To this end, we have organised the paper as follows. We set the scene by matching the terminologies of the fields of interest and formalising the generic categorisation models from the two viewpoints (Section~\ref{sec:termi}). Then we summarise the research done on prototype and exemplar models, and argue why more flexible category representations are needed (Section~\ref{sec:representation}). Machine learning algorithms for constructing such representations are presented in Section~\ref{sec:ml}. Finally, we compare machine learning models for selecting a reference set to categorisation models from cognitive psychology, and discuss the benefits of this united view (Section~\ref{sec:comparison}). \section{Parallel between terminologies and modelling approaches} \label{sec:termi} \subsection{Basic terminology} Table~\ref{tab:terminology} shows the correspondence between different terms and concepts in cognitive psychology, machine learning, pattern recognition and mainstream statistics. Of course, there is a degree of permeation between the different fields' terminologies, especially between pattern recognition and machine learning.
\begin{table}[htb] \caption{Correspondence between terms used by the relevant areas of research.} \label{tab:terminology} \centering \bigskip \begin{tabular}{llll} Cognitive psychology& Machine Learning& Pattern Recognition& Statistics\\ \hline category, concept& class, hypothesis& class& category, class\\ categorisation& classification/& classification/ & classification/\\ &discrimination&discrimination&discrimination\\ perceptual space& data space& data space&data space\\ stimulus, exemplar& example, instance & object, data point& observation, data point\\ prototype&example, instance& object, data point& observation, data point\\ stimulus dimension& attribute& feature& variable, regressor\\ subject, learner &learner& classifier& predictor\\ model & learner& classification algorithm& model\\ fitted model & learned model& trained classifier& fitted model\\ case-based reasoning& lazy learners& nearest neighbour classifier&nearest neighbour classifier\\ \hline \end{tabular} \end{table} \subsection{Formal description of the modelling approaches} There are subtle but important differences between the approach to categorisation in machine learning (ML) and the modelling approach which has traditionally dominated research in cognitive psychology. Suppose that there exists a collection of $n$ numerical attributes describing each stimulus ${\bf x}$, and defining a space ${\mathbb R^n}$, such that all stimuli reside there, ${\bf x}\in {\mathbb R^n}$. In both approaches, we have a labelled (training) data set consisting of $N$ fully described stimuli and their respective category labels $y$, $X=\{({\bf x}_1,y_1),({\bf x}_2,y_2),\ldots,({\bf x}_N,y_N)\}$. The first difference between the two approaches is in what $X$ represents and how it is obtained. In ML, $X$ is sampled randomly from the categorisation problem.
As the problems are usually governed by a probability distribution, for any ${\bf x}\in {\mathbb R^n}$ there is a set of probabilities related to the category labels $\Omega = \{\omega_1,\ldots,\omega_c\}$. The prior probabilities, $P(\omega_i)$, determine how likely it is that an object from category $\omega_i$ will appear in the sample ($\sum_{i=1}^c P(\omega_i) = 1$). Having sampled $X$, $P(\omega_i|{\bf x})$, $i=1,\ldots,c$, is the posterior probability that the true category of a given ${\bf x}$ is $\omega_i$ ($\sum_{i=1}^c P(\omega_i|{\bf x}) = 1$). In cognitive psychology experiments, $X$ is designed or selected by the experimenter. The training set does not have to be representative of any real distribution. The category labels are fixed, and not subject to probabilistic uncertainty. The second difference is in what is meant by `fitting a model'. In ML, this is equivalent to `training a classifier' using the training data $X$. A trained classifier can assign a label to any ${\bf x}\in {\mathbb R^n}$, seen or unseen. Ideally, the classifier can produce good estimates of the posterior probabilities, $\hat P(\omega_i|{\bf x})$, $i=1,\ldots,c$. In cognitive psychology, the classification is done by the participants in the experiment and the model is fitted to their responses. Consider an experiment with $M$ participants. Upon presenting them with stimulus ${\bf x}\in {\mathbb R^n}$, $m_1$ chose category label $\omega_1$, $m_2$ chose category label $\omega_2$, and so on, so that $m_1+m_2+\cdots+m_c = M$. We define a set of proportions for this stimulus: $Q(\omega_i|{\bf x})=\frac{m_i}{M}$, $i=1,\ldots,c$. Fitting a model in the psychological literature amounts to defining a method which predicts these proportions as closely as possible, both for seen and unseen stimuli. The third difference lies in the model evaluation procedure.
In ML, the main performance measure is \emph{generalisation accuracy} (or \emph{testing accuracy}), estimated as the fraction of correctly classified objects in an unseen \emph{testing data set} drawn from the probability distribution of the categorisation problem. In psychology, the goal is to explain the obtained experimental results. Hence, the performance is often measured as goodness of fit of the model to the training sample. This carries the risk of {\em overfitting}, which means that the model is valid only for the data sample $X$ (and the participants in the experiment) but is unable to generalise beyond this. Even with these methodological differences, the two approaches to categorisation have a lot in common. Recent works in psychology begin to explore generalisation accuracy and overfitting, and adopt procedures similar to those in machine learning \citep{Briscoe2011,Smith2014}. More importantly, both approaches rely on: \begin{itemize} \item a representation space ${\mathbb R}^n$, \item a set $S$ (of stimuli or of data points), which we will call here {\em the reference set}, and \item a measure of similarity between two elements of ${\mathbb R}^n$. \end{itemize} Defining similarity and finding the dimensions of perceptual (or conceptual) spaces are topics in their own right~\citep{Jakel2008-1,Tversky1986}, and beyond the scope of this work. We take forward the task of relating the methods for understanding/designing $S$ from the machine learning and cognitive psychology perspectives. \section{Representing categories with prototypes or exemplars} \label{sec:representation} For the benefit of the machine learning reader, we give a brief account of the development of the concept of the reference set $S$ in the field of cognitive psychology. Historically, a category was conceptualised in philosophy as a group of objects possessing a common feature which can be described by necessary and sufficient conditions (a rule-based approach not using the reference set $S$).
This view was challenged by \citet{Wittgenstein2010}, who argued that some categories are defined through multiple overlapping similarities between members, but there is no single feature common to all of them. As an example he gave the natural category of ``games'': there are card games, board games, ball games, games like ring-a-ring-a-roses etc.,\ exhibiting multiple similarities, but without an obvious ``defining property''. Later psychological research gave rise to the concept of perceptual space \citep{Shepard1957,Tversky1977} and categorisation made through comparison of the similarity of objects expressed as distance in this space. This led to experimental studies on category structure and graded categorisation theories in which some objects are better matched to the category than others \citep{Posner1968, Rosch1975, Rosch1975-1}. Results of such experiments are modelled with computational models of categorisation. ``Central'' members of categories are called prototypes. Prototype theories of categorisation assume that the reference set $S$ consists only of prototypes. The key point is the abstraction made during the learning process: it is the information on the central tendency rather than on individual examples which is used for categorisation. Categorisation of a new stimulus is done by comparing its similarity to the prototypes of each category and choosing the most similar one. In formal models, usually each category is represented by exactly one prototype, which can be either the most central stimulus or, under more relaxed assumptions, any point compatible with the category~\citep{Nosofsky1992}. An alternative to prototype theories are exemplar theories of categorisation. They assume that all encountered examples from a category are stored in the reference set $S$, and no abstraction is made~\citep{Medin1978,Nosofsky1987}. Categorisation of a new stimulus (query) is done by retrieving remembered exemplars with probability derived from their similarity to the query.
Thus all remembered exemplars influence the result. The most famous exemplar model is the Generalised Context Model (GCM) by \citet{Nosofsky1987}. The question of plausible category representations spawned a long-lasting debate between proponents of the prototype theories and those of the exemplar theories. Proponents of exemplar models argued that if people remembered prototypes only, it would be impossible for them to learn sparse and complex category structures and categorise exceptions~\citep{Medin1978}. What is more, simple prototype models do not include information on the variability along a specific dimension, which may be important in categorisation (for example, the variability of the sizes of watermelons is larger than the variability of the sizes of basketballs, hence a sphere half-way in size between a watermelon and a basketball is more likely to be a watermelon)~\citep{Rips1989,Hampton2006}. In fact these problems were already considered by the pioneers of prototype theories, \citet{Posner1968}, who argued on that basis that some additional information has to be stored in memory besides the category prototypes. Exemplar models are able to account for all these effects. Finally, there were various experimental results favouring exemplar models~\citep{Medin1978,Medin1981,Medin1982,Nosofsky1992}. In those experiments, participants usually learned to distinguish between two categories -- A and B -- characterised by 2--4 binary traits, and were then asked to categorise both previously seen (training) and unseen (transfer) stimuli. It was repeatedly demonstrated that exemplar models achieved a better fit to the participants' responses than prototype models (although no generalisation error of the models was calculated in those experiments).
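The GCM categorisation rule mentioned above can be sketched as follows. This is a deliberately simplified version (city-block distance, exponential similarity decay, no attention weights or response biases), and the toy exemplar set is ours for illustration.

```python
import math

def gcm_probabilities(x, exemplars, c=1.0):
    """Simplified Generalised Context Model: similarity to each stored
    exemplar decays exponentially with distance, and the probability
    of a category is its summed similarity relative to the total over
    all categories. Attention weights and response biases are omitted."""
    def dist(a, b):
        return sum(abs(ai - bi) for ai, bi in zip(a, b))  # city-block metric
    sims = {}
    for e, label in exemplars:
        sims[label] = sims.get(label, 0.0) + math.exp(-c * dist(x, e))
    total = sum(sims.values())
    return {label: s / total for label, s in sims.items()}

# Toy exemplar set: every stored exemplar contributes to the decision.
X = [((0, 0), "A"), ((0, 1), "A"), ((1, 1), "B"), ((1, 0), "B")]
p = gcm_probabilities((0, 0), X, c=2.0)
assert p["A"] > p["B"]                    # query is nearer A's exemplars
assert abs(sum(p.values()) - 1.0) < 1e-9  # response probabilities sum to one
```

The key contrast with a prototype model is visible in the loop: every remembered exemplar contributes similarity, rather than a single per-category point.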
However, the results favouring the exemplar theories were challenged by \citet{Smith2000}, who argued that the artificial categories used in the experiments are very different from the natural categories which prototype models aim to explain, and do not allow broader conclusions to be drawn. They describe the famous 5--4 category structure used in more than 30 different experiments. It consists of 5 stimuli from category~A, 4 stimuli from category~B, and 7 transfer stimuli without assigned labels. Each stimulus is described by 4 binary features, so the 16 examples in the data set cover all possible values of the feature vector. In category A the mode of each feature is 1, and in category B, 0; in category A there is 1 ambiguous example (with two features with value 0), and in category B there are two ambiguous examples (with two features with value 1). The very small number of examples, the ambiguous structure, and the binary representation space make the experimental conditions very different from the natural categories that prototype theories aim to explain. \citet{Smith2000} also noted that this category structure is notoriously hard to learn (e.g.,\ in \citet{Medin1981} only 36 of 96 participants achieved one error-free run during 32 iterations through the 9 training stimuli). Exemplar models have also been criticised as inefficient and biologically implausible, since it is hard to expect that humans store in memory all encountered examples and use all of them in categorisation~\citep{Rosch1975}. Because exemplar models build no abstraction, their ability to generalise has also been questioned \citep{Smith2000}. Also, due to the large number of parameters, exemplar models may overfit the data, which calls for regularisation~\citep{Jakel2008}. Unfortunately, the long-lasting debate obscured the fact that, from a purely technical perspective, the two models are very similar.
There is no explicit difference between ``prototype'' and ``example'' within the mathematically-based areas concerned with learning from examples (see Table~\ref{tab:terminology}). Sometimes, ``prototype'' is used to denote the elements of the reference set $S$ for the nearest neighbour classifier, but both terms mean a point in the space of interest. From this perspective, the only difference between prototype and exemplar theories lies in the number of points used to represent a category. Such structural similarity suggests that there may be grounds to unify both theories~\citep{Anderson1991,Rosseel2002,Love2004,Vanpaemel2008}. Arguments for treating prototype- and exemplar-models as facets of the same concept can be found within neuropsychological studies. There are theories differentiating between explicit (rule-based) and implicit (similarity-based) categorisation, on the basis of the kind of memory involved~\citep{Ashby2005,Smith2008}. Both prototypes and exemplars belong to the implicit categorisation system, and they are likely to activate the same structures within the brain. \citet{Iordan2016} performed an extensive fMRI study in which participants were shown images representing objects from different categories ranked according to typicality. They observed a prototype effect in brain activation patterns: typical examples produced patterns similar to objects from the same category and dissimilar to objects from different categories. However, they also identified a brain region in which the opposite effect was present: there, it was the least typical examples that produced such consistent activations. This may be an argument for categorisation models with a hybrid representation where both typical and atypical examples are retained in the reference set $S$. \citet{Machery2011}, in his multiple systems theory of categorisation, also viewed prototypes and exemplars as complementary rather than mutually exclusive.
He recommended investigating specific factors mediating between the two models of categorisation. \citet{Briscoe2011} demonstrated that data complexity may be such a factor. They hypothesised that human learners control the complexity of their mental representation to match the complexity of the category structure. By doing so, they achieve generalisation accuracy rather than necessarily perfect training accuracy. \citet{Smith2014} also advocated analysing category representations in terms of generalisation accuracy, stressing its evolutionary importance for survival. Different categorisation models would be optimal in different environments (ecological context). This calls for flexible and adaptable methods of constructing the reference set $S$. \section{Reference set selection in machine learning} \label{sec:ml} For the benefit of the psychology reader, this section reviews the fundamentals of reference set selection in pattern recognition and machine learning. In pattern recognition, data editing aims at finding a small reference set $S$ (a set of prototypes/examples/instances labelled into the categories of interest) which ensures high classification accuracy~\citep{Garcia2012,Triguero2012,Olvera-Lopez2010,Wilson2000,Dasarathy1990}. Sometimes interpretability is also added to the list of criteria~\citep{Bien2011}. We shall refer to the content of the reference set as prototypes. Here we look into the question of {\em how} the prototypes are obtained. Prototypes can be selected from the available data set~\citep{Garcia2012} or extracted/replaced/generated as non-existing data points~\citep{Triguero2012}. The latter approach may produce prototypes which are physically impossible. This is not a cause for concern because all we are interested in is being able to discriminate between the given categories. The methods for obtaining the reference set are vastly different between the two approaches.
\subsection{Prototype Selection: Condensing} The two early approaches to prototype selection, called {\em condensing} and {\em (error) editing}, roughly correspond to the exemplar and the prototype views of category representation, respectively. Hart's Condensed Nearest Neighbour algorithm (CNN)~\citep{Hart1968} gave rise to a multitude of data editing algorithms solving the following problem. We are given a labelled data set $X=\{({\bf x}_1,y_1),\ldots, ({\bf x}_N,y_N)\}$, where ${\bf x}_i$ are the objects (stimuli) in the space of interest ${\mathbb R}^n$, equipped with a similarity measure, and $y_i$ are the class labels (categories) taking values in the set of labels $\Omega=\{\omega_1,\ldots,\omega_c\}$. A subset $S\subseteq X$ is called {\em consistent} if 1-NN using $S$ as the reference set classifies correctly all objects in $X$ (100\% training accuracy). The task is to find a {\em minimal consistent set}, that is, a consistent set with the minimal possible cardinality. CNN operates through the following steps: \begin{enumerate} \item Initialise set STORE = $\emptyset$ and set GRABBAG = $X$. Arrange GRABBAG in random order. Take the first element of GRABBAG and place it in STORE. Set a repeat-flag $f$ to TRUE. \item While $f$ \begin{enumerate} \item Set $f$ to FALSE. Arrange GRABBAG in random order. \item Check every element of GRABBAG. If it is misclassified by the current content of the reference set STORE, add it to STORE, remove it from GRABBAG, and set $f$ to TRUE. \end{enumerate} End while. \item Return STORE as the reference set. \end{enumerate} CNN tends to retain boundary objects (memorising exceptions), which likens it to the exemplar approach. CNN is not a deterministic approach. The content and cardinality of $S$ (STORE) depend on the order in which the objects from GRABBAG are submitted for evaluation in step 2(b).
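For the machine-learning-inclined reader, the steps above can be sketched in a few lines of Python (an illustrative sketch only; the Euclidean distance and the seeded random ordering are our assumptions, not part of Hart's specification):

```python
import math
import random

def nn_label(x, store):
    """Label assigned to x by 1-NN over the reference set `store`."""
    return min(store, key=lambda p: math.dist(x, p[0]))[1]

def cnn(data, seed=0):
    """Hart's CNN: return a consistent subset (STORE) of the labelled data."""
    rng = random.Random(seed)
    grabbag = list(data)
    rng.shuffle(grabbag)                  # step 1: random order
    store = [grabbag.pop(0)]              # move the first element to STORE
    f = True
    while f:                              # step 2: repeat until nothing is added
        f = False
        rng.shuffle(grabbag)
        remaining = []
        for x, y in grabbag:
            if nn_label(x, store) != y:   # misclassified by STORE ...
                store.append((x, y))      # ... so memorise it
                f = True
            else:
                remaining.append((x, y))
        grabbag = remaining
    return store                          # step 3
```

On a toy two-class data set, STORE ends up consistent with the whole of $X$, and typically contains mostly points near the class boundary.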
There are a large number of condensing methods developed over the years, varying from improvements on the cardinality of CNN to new incremental, decremental, and other iterative approaches for creating the reference set~\citep{Garcia2012}. The mechanisms of forming the reference set implemented by these methods are not meant to model human cognition. The result is a set of prototypes, which can be described as ``perfect selective memory'' with respect to the data seen thus far. \subsection{Prototype Selection: Error editing} Error editing serves a different purpose. The goal is no longer to recognise the training data without error but to clean potentially ``noisy'' objects. Such objects may misguide the classifier if selected as nearest neighbours. The pioneering algorithm due to \citet{Wilson1972}, sometimes abbreviated as ENN (edited nearest neighbour), follows the steps below: \begin{enumerate} \item For every object $({\bf x}_i,y_i)$ in $X$, find its $k$ nearest neighbours within the set $X\setminus \{({\bf x}_i,y_i)\}$. If the chosen object is misclassified by its $k$ nearest neighbours, mark it for deletion. \item Delete the marked objects and return the remaining ones as the reference set. \end{enumerate} Editing methods tend to retain objects which are deep within their class regions and to remove objects close to the class borders, which are usually prone to noise. This leads to smoothing the boundaries of the classification regions. Many further editing methods have been proposed~\citep{Garcia2012}, aimed at better generalisation performance as well as at eliminating redundancy among the retained prototypes. The editing approach can be thought of as abstract learning in that more ``typical'' exemplars are likely to be retained; however, they are not merged into a single prototype. And, again, just as for the condensing methods, the way the prototype set is formed is not meant to resemble a human cognitive process.
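Wilson's two steps translate into a short sketch (again illustrative; the Euclidean distance and a majority vote among the $k$ neighbours are our assumptions):

```python
import math
from collections import Counter

def wilson_enn(data, k=3):
    """Wilson's editing: drop every object misclassified by its k nearest
    neighbours in X without that object; return the remaining objects."""
    keep = []
    for i, (x, y) in enumerate(data):
        # neighbours of x, excluding x itself, sorted by distance
        others = sorted((p for j, p in enumerate(data) if j != i),
                        key=lambda p: math.dist(x, p[0]))
        votes = Counter(label for _, label in others[:k])
        if votes.most_common(1)[0][0] == y:   # correctly classified: keep it
            keep.append((x, y))
    return keep
```

Running this on two clean clusters with a single mislabelled point planted inside the opposite class removes the planted point and keeps everything else, illustrating the border-smoothing behaviour described above.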
\subsection{Prototype Selection: Hybrid and agnostic methods} Mirroring the debate between prototype and exemplar theories, it was realised early on that a pure strategy may be insufficient. Hybrid methods have been proposed where the two strategies are combined explicitly, for example distinguishing between edge objects and interior objects~\citep{Li2011}. One hybrid strategy is applying an editing algorithm first to clean up the boundary regions, followed by a condensing algorithm to thin down the prototype set. For example, Wilson's method followed by CNN can be thought of as hybrid because it consciously applies border cleaning first and redundancy reduction thereafter. Finally, we may call {\em agnostic} those editing methods which are criterion-driven. The objects are not explicitly regarded as borderline or interior, and no balance between the two types is being sought. The selection is guided by minimising a criterion function which is usually defined as \[ J(S) = \lambda\; E(S) +(1-\lambda)\; \frac{|S|}{N}, \] where $E(S)$ is the error of the 1-NN classifier on a designated testing or validation set using $S\subseteq X$ as the reference set, $|S|$ is the cardinality of $S$, $N$ is the cardinality of $X$, and $\lambda$, $0\leq \lambda\leq 1$, is a penalising constant balancing reduction and accuracy. An example of this group of methods is random editing~\citep{Kuncheva1998}. This method may work for small data sets with a small number of features. It consists of generating $T$ random subsets of $X$, where $T$ is a fixed constant. The subsets are evaluated as reference sets, and the best one is returned. \subsection{Prototype Replacement: Clustering} Suppose that we are not restricted to the training set $X$ for the choice of prototypes but can nominate any point in $\mathbb{R}^n$ as a prototype and choose its category.
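As a concrete illustration of the criterion-driven selection described above, a random-editing sketch might look as follows (the particular hold-out set used to estimate $E(S)$, the values of $T$ and $\lambda$, and the inclusion of the full set as a baseline candidate are all our assumptions):

```python
import math
import random

def nn_error(S, holdout):
    """E(S): 1-NN error on a hold-out set, with S as the reference set."""
    wrong = sum(min(S, key=lambda p: math.dist(x, p[0]))[1] != y
                for x, y in holdout)
    return wrong / len(holdout)

def random_editing(X, holdout, T=200, lam=0.9, seed=0):
    """Evaluate T random non-empty subsets of X by J(S); return the best."""
    rng = random.Random(seed)
    best = list(X)                          # baseline candidate: the full set
    best_J = lam * nn_error(best, holdout) + (1 - lam)
    for _ in range(T):
        S = rng.sample(X, rng.randint(1, len(X)))
        J = lam * nn_error(S, holdout) + (1 - lam) * len(S) / len(X)
        if J < best_J:
            best, best_J = S, J
    return best
```

With well-separated classes, the returned subset keeps the hold-out error at zero while the cardinality penalty drives $|S|$ well below $N$.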
The simplest, albeit outdated, such approach is clustering and electing the cluster centroids as prototypes~\citep{Bezdek2001,Kuncheva1999}. The categories may be considered one-by-one to obtain prototypes from each category, which are then pooled into a single set (the pre-supervised approach~\citep{Kuncheva1999}). Alternatively, the whole data set can be clustered, and the labels can be assigned later, based on the prevalent category in the respective cluster (the post-supervised approach). \subsection{Prototype Replacement: Learning Vector Quantisation (LVQ)} The Learning Vector Quantisation (LVQ) classifier \citep{Kohonen1990} {\em learns} the position of the prototypes in the space by small incremental shifts guided by the data used for training. While the prototypes trained through LVQ tend to position themselves in the modes of the probability distributions of the different categories, they may not be the most ``prototypical'' examples of these categories. If, for example, there are several prototypes accounting for the same cluster in the data, the prototypes will spread themselves in such a way that the whole cluster is covered fairly uniformly. While in this section we summarised the basic methods in each group, the reader should be aware that there are most probably more than 150 methods and variations thereof for selecting a reference set for the nearest neighbour classifier~\citep{Garcia2012,Triguero2012}. \section{Comparison of prototype selection methods and psychological models} \label{sec:comparison} This section contains the main contribution of our study. Here we link the methods for finding $S$ from the two perspectives. As a start, if the prototypes are defined as class centroids (the ``pure prototype'' model), and the category is assigned by similarity to the nearest centroid, we arrive at the {\em nearest mean classifier (also the minimum distance classifier)} in machine learning \citep{Tibshirani2002}.
If, on the other hand, the whole of the training set $X$ is used as the reference set $S$ (the ``pure exemplar'' model), the corresponding classifier is the standard one-nearest neighbour conceived in the 1950s~\citep{Fix1952}. Table~\ref{tab:model_analogies} shows the correspondence between the methods which we gathered from the two fields. The psychology counterparts of the machine learning methods are detailed thereafter. \begin{table}[htb] \caption{Correspondence between models of categorisation from psychological literature and prototype selection techniques from machine learning.} \label{tab:model_analogies} \centering \bigskip \begin{tabular}{rl} Machine Learning & Cognitive psychology\\ \hline CNN & SUSTAIN~\citep{Love2004} \\ LVQ & SUSTAIN~\citep{Love2004} \\ LVQ & RMC~\citep{Anderson1991} \\ Random editing & Rex Leopold~I~\citep{De_schryver2009} \\ Clustering-post-supervised & REX~\citep{Rosseel2002} \\ Clustering-post-supervised & RMC~\citep{Anderson1991} \\ Clustering-post-supervised & Category insensitive MMC~\citep{Rosseel2002} \\ Clustering-pre-supervised & k-means VAM~\citep{Verbeemen2007} \\ Clustering-pre-supervised & Category sensitive MMC~\citep{Rosseel2002} \\ EDITING (Wilson's method) & --\\ EDITING+CONDENSING (Wilson+CNN) & -- \\ \hline \end{tabular} \end{table} \medskip\noindent $\bullet$ The {\em Rational Model of Categorisation (RMC)}~\citep{Anderson1991} is one of the oldest models. It discovers the probabilistic structure of the environment in the language of Bayesian inference. The data is clustered in an incremental fashion by assigning each encountered object to the most similar cluster or, if its similarity to the existing clusters is too low, by creating a new cluster. The centre of a cluster is the mean of the examples it contains, and its label is taken as the majority class. The number of clusters is implicitly controlled by a \emph{coupling parameter}, which defines the similarity threshold used for creating new clusters.
In RMC, the category labels are treated as another attribute (dimension in perceptual space). This makes the method similar but not identical to LVQ and to both clustering approaches (pre- and post-supervised). The similarity to LVQ lies in the type of clustering process: it is iterative, and the results depend on the order in which the stimuli are presented. \medskip\noindent $\bullet$ The {\em Mixture Model of Categorisation (MMC)}~\citep{Rosseel2002} is a straightforward realisation of the clustering approach which employs fuzzy clustering through Gaussian mixture models (GMMs). The Gaussian components (clusters) can be either independent of the categories (Clustering-post-supervised) or category-specific (Clustering-pre-supervised). The number of components is selected {\em a priori}. \medskip\noindent $\bullet$ The {\em Reduced EXemplar model of categorisation (REX)}~\citep{Rosseel2002} was motivated by the hypothesis that only some exemplars are retained in memory, while exemplars too similar to previously encountered ones are either forgotten or merged. Memorising and forgetting link REX to the prototype selection group of methods. However, merging relates REX to the prototype replacement group. Clustering with k-means is used to replace some exemplars with cluster centroids. In this sense, REX is linked to the Clustering-post-supervised approach. The number of clusters has to be chosen \emph{a priori}. \medskip\noindent $\bullet$ The {\em SUSTAIN} model~\citep{Love2004} uses an iterative clustering process similar to RMC. The difference is that in its supervised version, new clusters are formed when an example is classified into the wrong category. In this fashion, the category structure is driven by prediction failures. The number of clusters is inferred dynamically. The outcome of this process depends on the order of presentation of the stimuli, which links SUSTAIN to both LVQ and CNN.
As in LVQ, the process is iterative and cluster centres are shifted after a successful classification. Contrary to LVQ, however, the number of clusters is not fixed in advance, and the clusters are formed dynamically in case of misclassification -- just as in the inner loop of the CNN algorithm, where misclassified examples are added to the reference set. \medskip\noindent $\bullet$ In the {\em Varying Abstraction Model (VAM)}~\citep{Vanpaemel2008}, the notion of clustering is extended to arbitrary partitioning -- even examples situated far from each other, and separated by many other examples, can be grouped together. Partitions are formed separately within each category. VAM is designed primarily as a tool to analyse experimental results and discover plausible representations, but it does not try to model the learning process. This is why the fitting procedure consists of an exhaustive search over all possible partitions. From a machine learning point of view, this casts some doubt on the generalisation capabilities of such a model, since it might be prone to overfitting. What is more, this method becomes very demanding computationally when the number of examples grows. The authors admit that this method of analysis is feasible only for very small data sets, such as the 5--4 category structure with 5 and 4 examples in the two categories. \citet{Verbeemen2007} were aware of the problems with the original VAM and proposed a version in which k-means clustering is used to determine clusters within each of the categories. This results in a straightforward implementation of the Clustering-pre-supervised approach. The number of clusters again has to be set {\em a priori}. \medskip\noindent $\bullet$ \citet{De_schryver2009} proposed another model, named {\em Rex Leopold I} in reference to the original REX model. The set of exemplars is chosen with an exhaustive search, as in the VAM model.
Notably, in the original publication the authors used a cross-validation procedure to calculate the generalisation error of the model and reduce the possibility of overfitting. This procedure can be seen as an implementation of an agnostic, criterion-based method of prototype selection. The criterion consists only of generalisation accuracy and does not include the reference set cardinality ($\lambda=1$ in our formula). The search procedure is an exhaustive search instead of the Monte Carlo sampling used in random editing. To our knowledge, there are no categorisation models proposed within cognitive psychology which operate according to a principle similar to Wilson's method: cleaning the noisy examples around the decision boundary. This is probably because research on psychological models focused primarily on accurate representation of the environment structure and not so much on generalisation accuracy. In the classic experiments, e.g.,\ with the 5--4 category structure \citep{Smith2000}, there was no place for `noisy' examples at all. However, more recent psychological experiments use category structures more similar to those used in ML \citep{Briscoe2011,Smith2014}, which may benefit from noise cleaning. Establishing links between psychological and ML models may help to better understand their properties and overcome their limitations. For example, many of the existing psychological models require the user (experimenter) to specify parameter values in advance, such as the number of clusters, the number of components of the Gaussian mixtures, or the smoothing parameter for the data neighbourhood. Some ML methods, on the other hand, choose the number of retained prototypes automatically. The generalisation success of such methods can be taken as support for the hypothesis that representations of the categories in a given problem are learned dynamically, based on the seen data.
More complex categories will require more prototypes to be stored compared to simpler categories, which can hardly be addressed by models with a pre-specified number of exemplars/prototypes. \section{Conclusion} We demonstrated how prototype selection techniques used in machine learning can be matched to categorisation models in cognitive psychology. We believe that this correspondence may enrich the repertoire of methods on both sides. Our study brought to the fore two issues that may be of interest to the psychology reader. First, the well-documented success of the edited 1-NN (based on prototype/instance selection) gives grounds to the theories which seek to reconcile prototype- and exemplar-models. This may indicate that a variable amount of abstraction is present in the human categorisation model. Second, testing the generalisation ability (or validity) of a categorisation model is often neglected but very important. On the other hand, machine learning and pattern recognition are often accused of operating as `black boxes' and not giving adequate explanation to earn the user's trust. Borrowing theoretical insights from cognitive psychology and integrating them in the methods and algorithms may go some way towards overcoming the user's scepticism. In addition, cross-disciplinary fertilisation has proven itself many times over. Whether interpretable or not, new, more successful pattern recognition and machine learning methods may result from bringing the two disciplines together. Our future research includes applying the described methods to data sets coming both from machine learning and from psychology. We plan to test their generalisation abilities and compare the results with experimental studies of human categorisation. \vspace{1em} \noindent{\bf Funding:} This work was partly supported by project 2015/16/T/ST6/00493 funded by the National Science Centre, Poland, and by project PR-2015-188 funded by The Leverhulme Trust, UK. \section*{References}
\section{Introduction} The analysis of multivariate data is often complicated by high dimensionality and complex inter-dependences between the observed variables. In order to identify patterns in such data it is therefore desirable and often necessary to separate different aspects of the data. In multivariate statistics, for example, principal component analysis~(PCA) is a common preprocessing step that decomposes the data into orthogonal principal components which are sorted according to how much variance of the original data each component explains. There are two important applications of this. Firstly, one can reduce the dimensionality of the data by projecting it onto the lower dimensional space spanned by the leading principal components which maximize the explained variance. Secondly, since the principal components are orthogonal, they separate in some sense different (uncorrelated) aspects of the data. In many situations this enables a better interpretation and representation. Often, however, PCA may not be sufficient to separate the data in a desirable way due to more complex inter-dependences in the multivariate data (see e.g., Section~1.3.3 in \citet{hyvarinen2001a} for an instructive example). This observation motivates the development of independent component analysis (ICA), formally introduced in its current form by \citet{cardoso89} and \citet{comon1994}. ICA is a widely used unsupervised blind source separation technique that aims at decomposing an observed mixture of independent source signals. More precisely, assuming that the observed data is a linear mixture of underlying independent variables, one seeks the unmixing matrix that maximizes the independence between the signals it extracts.
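The two uses of PCA mentioned above can be made concrete in a few lines (an illustrative sketch via eigendecomposition of the sample covariance; the function names are ours):

```python
import numpy as np

def pca(X):
    """Rows of X are observations. Returns the principal components (as rows)
    sorted by decreasing explained variance, and those variances."""
    Xc = X - X.mean(axis=0)                 # centre the data
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]         # sort by explained variance
    return evecs[:, order].T, evals[order]

def project(X, q):
    """Dimensionality reduction: project onto the q leading components."""
    components, _ = pca(X)
    return (X - X.mean(axis=0)) @ components[:q].T
```

The projected components are mutually uncorrelated, which is exactly the separation property that ICA strengthens from uncorrelatedness to independence.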
There has been a large amount of research on different types of ICA procedures and their interpretations, e.g.,\ \citet[Infomax]{bell1995} who maximize the entropy, \citet[fastICA]{hyvarinen1999} maximizing the kurtosis or \citet[SOBI]{belouchrani1997} who propose to minimize time-lagged dependences, to name only some of the widespread examples. ICA has applications in many fields, for example in finance \citep[e.g.,][]{back1997}, the study of functional magnetic resonance imaging (fMRI) data \citep[e.g.,][]{mckeown1998spatially,mckeown1998analysis,calhoun2003ica}, and notably in the analysis of electroencephalography (EEG) data \citep[e.g.,][]{makeig1995,makeig1997blind,delorme2004eeglab}. The latter is motivated by the common assumption that the signals recorded at EEG electrodes are a (linear) superposition of cortical dipole signals \citep{Nunez2006}. Indeed, ICA-based preprocessing has become the de facto standard for the analysis of EEG data. The extracted components are interpreted as corresponding to cortical sources \citep[e.g.,][]{ghahremani1996independent,zhukov2000independent,makeig2002dynamic} or used for artifact removal by dropping components that are dominated by ocular or muscular activity \citep[e.g.,][]{jung2000removing,delorme2007enhanced}. In many applications, the data at hand is heterogeneous and parts of the samples can be grouped by the different settings (or environments) under which the observations were taken. For example, we can group those samples of a multi-subject EEG recording that belong to the same subject. For the analysis and interpretation of such data across different groups, it is desirable to extract one set of common features or signals instead of obtaining individual ICA decompositions for each group of samples separately. Here, we present a novel, methodologically sound framework that extends the ordinary ICA model, respects the group structure and is robust by explicitly accounting for group-wise stationary confounding. 
More precisely, we consider a model of the form \begin{equation} \label{eq:basic_model} X_i = A\cdot S_i+H_i, \end{equation} where $i$ denotes the sample index, $A$ remains fixed across different groups, $S_i$ is a vector of independent source signals and $H_i$ is a vector of stationary confounding noise variables with fixed covariance within each group (an intuitive example where such a scenario may be encountered in practice is illustrated in Figure~\ref{fig:eegscenario}). Based on this extension of ordinary ICA, we construct a method and an easy-to-implement algorithm to extract one common set of sources that are robust against confounding within each group and can be used for across-group analyses. The unmixing also generalizes to previously unseen groups. \subsection{Relation to Existing Work} ICA is well-studied, with a tremendous amount of research related to various types of extensions and relaxations of the ordinary ICA model. In light of this, it is important to understand where our proposed procedure is positioned and why it is an interesting and useful extension. Here, we look at ICA research from three perspectives and illustrate how our proposed \coroICA methodology relates to existing work. First, in Section~\ref{sec:classical_ica} we compare our proposed methodology with other noisy ICA models. In Section~\ref{sec:ajd}, we review ICA procedures based on approximate joint matrix diagonalization. Finally, in Section~\ref{sec:grouped_data_icas} we summarize the existing literature on ICA procedures for grouped data and highlight the differences to \coroICA. \subsubsection{Noisy ICA Models}\label{sec:classical_ica} The ordinary ICA model assumes that the observed process $X$ is a linear mixture of independent source signals $S$ \emph{without} a confounding term $H$. Identifiability of the source signals $S$ is guaranteed by assumptions on $S$ such as non-Gaussianity or specific time structures.
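To make model~(\ref{eq:basic_model}) concrete, the following sketch simulates two groups sharing the mixing matrix $A$, with sources whose variances change across blocks of time and confounding $H$ whose covariance is fixed within each group (all dimensions, seeds and covariance choices below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 1000                        # number of signals, samples per group
A = rng.normal(size=(d, d))           # mixing matrix, fixed across groups

def simulate_group(noise_cov, rng):
    # sources S: independent, variance changes between two blocks of time
    scales = np.repeat(rng.uniform(0.5, 2.0, size=(2, d)), n // 2, axis=0)
    S = rng.normal(size=(n, d)) * scales
    # confounding H: covariance fixed within the group
    H = rng.multivariate_normal(np.zeros(d), noise_cov, size=n)
    return S @ A.T + H                # rows are X_i = A S_i + H_i

X1 = simulate_group(np.diag([0.1, 0.2, 0.3]), rng)       # group 1
B = rng.normal(size=(d, d))
X2 = simulate_group(0.5 * B @ B.T, rng)                  # group 2
```

The confounding covariance differs between the two groups while remaining fixed within each, which is the structure the proposed method exploits.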
For \coroICA we require---similar to other second-order based methods (cf.\ Section~\ref{sec:ajd})---that the source process $S$ is non-stationary. More precisely, we require that either the variance or the auto-covariance of $S$ changes across time. An important extension of the ordinary ICA model is known as noisy ICA \citep[e.g.,][]{moulines1997} in which the data generating process is assumed to be an ordinary ICA model with additional additive noise. In general, this leads to further identifiability issues. These can be resolved by assuming that the additive noise is Gaussian and the signal sources non-Gaussian \citep[e.g.,][]{hyvarinen1999fast}, which enables correct identification of the mixing matrix. Another possibility is to assume that the noise is independent over time, while the source signals are time-dependent\footnote{% Autocorrelated signals are time-dependent, while the absence of autocorrelation does not necessarily imply time-independence of the signal. We thus use the terms time-dependence and time-independence throughout this article.} \citep[e.g.,][]{choi2000b}. In contrast, our assumption on the noise term $H$ is much weaker, since we only require it to be stationary and hence in particular allow for time-dependent noise in \coroICA. As we show in our simulations in Section~\ref{sec:GARCH} this renders our method robust with respect to confounding noise: \coroICA is more robust against time-dependent noise while remaining competitive in the setting of time-independent noise. We refer to the book by \citet{hyvarinen2001a} for a review of most of the existing ICA models and the assumptions required for identifiability. \subsubsection{ICA based on Approximate Joint Diagonalization}\label{sec:ajd} As an extension of PCA, the concept of ICA is naturally connected to the notion of joint diagonalization of covariance-type matrices. 
One of the first procedures for ICA was FOBI, introduced by \citet{FOBI}, which aims to jointly diagonalize the covariance matrix and a fourth order cumulant matrix. Extending this idea, \citet{cardoso1993} introduced the method JADE, which improves on FOBI by diagonalizing several different fourth order cumulant matrices. Unlike FOBI, JADE uses a general joint matrix diagonalization algorithm, which is the de facto standard for all modern approaches. In fact, there is a still-active field that focuses on approximate joint matrix diagonalization, commonly restricted to positive semi-definite matrices, and often with the purpose of improving ICA procedures \citep[e.g.,][]{cardoso1996jacobi, ziehe2004, tichavsky2009,ablin2018beyond}. Both JADE and FOBI are based on the assumption that the signals are non-Gaussian. This ensures that the sources are identifiable given independent and identically distributed observations. A different stream of ICA research departs from this assumption and instead assumes that the data are a linear mixture of independent weakly stationary time-series. This model is often referred to as a second-order source-separation (SOS) model. The time structure in these models allows the sources to be identified by jointly diagonalizing the covariance and auto-covariance. The first method developed for this setting is AMUSE by \citet{tong1990}, who diagonalize the covariance matrix and the auto-covariance matrix for one fixed lag. The performance of AMUSE is, however, fragile with respect to the exact choice of the lag, which complicates practical application \citep{miettinen2012statistical}. Instead of only using a single lag, \citet{belouchrani1997} proposed the method SOBI, which uses all lags up to a certain order and jointly diagonalizes all the resulting auto-covariance matrices. SOBI is to date still one of the most commonly employed ICA methods, in particular in EEG analysis.
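The AMUSE procedure just described admits a compact sketch (illustrative only): whiten via the zero-lag covariance, then eigendecompose the symmetrised lag-$\tau$ covariance of the whitened data.

```python
import numpy as np

def amuse(X, lag=1):
    """X: (n, d) array of observations. Returns an unmixing matrix W,
    identified up to scaling, sign and permutation of the sources."""
    Xc = X - X.mean(axis=0)
    # 1) whitening transform from the zero-lag covariance
    evals, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
    V = np.diag(evals ** -0.5) @ E.T
    Z = Xc @ V.T
    # 2) symmetrised lagged covariance of the whitened signals
    M = Z[:-lag].T @ Z[lag:] / (len(Z) - lag)
    M = (M + M.T) / 2
    # 3) its eigenvectors provide the remaining rotation
    _, U = np.linalg.eigh(M)
    return U.T @ V
```

Distinct lagged auto-covariances of the sources are needed for the rotation to be identifiable, which is precisely the source of the fragility with respect to the choice of lag noted above.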
The SOS model is based on the assumption of weak stationarity of the sources, which implies that the signals have a fixed variance and auto-covariance structure across time. This assumption can be dropped, and the resulting models are often termed non-stationary source separation (NSS) models. The non-stationarity can be leveraged to boost the performance of ICA methods in various ways \citep[see][]{matsuoka1995, hyvarinen2001cov, choi2000a, choi2000b, choi2001, choi2001b, pham2001}. All the aforementioned methods make use of the non-stationarity by jointly diagonalizing different sets of covariance or auto-covariance matrices and differ mainly in how they perform the approximate joint matrix diagonalization. For example, the methods introduced by \citet{choi2000a, choi2000b, choi2001} make use of non-stationarity across time by separating the data into blocks and jointly diagonalizing either the covariance matrices, the auto-covariances or both across all blocks. For our experimental comparisons, we implemented all three of these methods with the slight modification that we use the recent uwedge approximate joint matrix diagonalization procedure due to \citet{tichavsky2009}. We denote the resulting three ICA variants as \begin{compactitem} \item choiICA\,(var): jointly diagonalize blocks of covariances, \item choiICA\,(TD): jointly diagonalize blocks of auto-covariances, \item choiICA\,(var\,\&\,TD): jointly diagonalize blocks of covariances and auto-covariances. \end{compactitem} Depending on the type of matrix which is diagonalized, each procedure detects different types of signals and behaves differently with respect to noise. \citet{choi2001b} suggest a modification of choiICA\,(TD) in which, instead of auto-covariance matrices, differences of auto-correlation matrices are diagonalized. The advantage is that it captures the non-stationarity of a signal more explicitly.
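For intuition, with only two blocks the joint diagonalization behind choiICA\,(var) reduces to a generalized eigenvalue problem and can be solved exactly (a deliberately simplified sketch of the idea; the uwedge procedure we actually use handles arbitrarily many matrices approximately):

```python
import numpy as np

def two_block_unmixing(X1, X2):
    """Rows of X1, X2 are samples from two time blocks. Returns W such that
    both W @ C1 @ W.T and W @ C2 @ W.T are diagonal (exact joint
    diagonalization of two covariance matrices)."""
    C1 = np.cov(X1, rowvar=False)
    C2 = np.cov(X2, rowvar=False)
    # eigenvectors of C2^{-1} C1 are C2-orthogonal, hence diagonalize both
    _, V = np.linalg.eig(np.linalg.solve(C2, C1))
    return np.real(V).T
```

With more than two blocks, an exact common diagonalizer does not exist in general, which is where approximate joint diagonalization algorithms such as uwedge come in.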
Our proposed method similarly aims to use this type of signal, but instead of considering the noise-free case, we explicitly formalize a model class that generalizes to noisy settings. Furthermore, we provide an identifiability theorem allowing for group-wise stationary confounding. Such a result has not been proven for the aforementioned method in the noise-free case. For a detailed description of both SOS- and NSS-based methods we refer the reader to the review by \citet{Nordhausen2014}, and for recent developments on leveraging non-stationarity for identifiability in non-linear ICA see \citet{hyvarinen2016unsupervised}. An exhaustive comparison of all methods is infeasible, on the one hand, due to the sheer number of different models and methods and, on the other hand, because appropriately maintained and easily adaptable code---for most methods---simply does not exist. Therefore, we focus our comparison on the following representative, modern methods that are most closely related to \coroICA: fastICA, SOBI, choiICA\,(TD), choiICA\,(var), choiICA\,(var\,\&\,TD). The methods and their respective assumptions on the source and noise characteristics are summarized in Table~\ref{table:ica_methods}. \begin{table}[t] \begin{tabular}{lll} \textbf{method} & \textbf{signal type} & \textbf{allowed noise} \\ \hline choiICA\,(TD) & varying time-dependence & time-independent \\ choiICA\,(var) & varying variance & none \\ choiICA\,(var\,\&\,TD) & varying variance and time-dependence & none\\ SOBI & fixed time-dependence & time-independent\\ fastICA\footnotemark & non-Gaussian & none \\ \hline \coroICA & varying time-dependence \emph{and/or} variance & group-wise stationary \end{tabular} \caption{Important ICA procedures and the signal types they require as well as the noise they can deal with.
\coroICA is a confounding-robust ICA variant and is the only method for which an identifiability result under time-dependent noise is available.} \label{table:ica_methods} \end{table} \footnotetext{The fastICA method can be extended to include Gaussian noise \citep[see][]{hyvarinen1999fast}.} \subsubsection{ICA Procedures for Grouped Data}\label{sec:grouped_data_icas} Applications in EEG and fMRI have motivated the development of a wide variety of blind source separation techniques which are capable of dealing with grouped data, e.g., where groups correspond to different subjects or recording sessions. A short review is given in \citet{hyvarinen2013} and a detailed exposition in the context of fMRI data is due to \citet{calhoun2003ica}. Suppose we are given $m$ groups $\{g_1,\dots,g_m\}$ and observe a corresponding data matrix $\mathbf{X}_{g_i}\in\mathbb{R}^{d\times n_i}$ for each group, where $d$ is the number of observed signals and $n_i$ the number of observations. Using this notation, all existing ICA procedures for grouped data can be related to one of three underlying models extending the classical mixing model $\mathbf{X}=A\cdot \mathbf{S}$. The first, often referred to as ``temporal concatenation'', assumes that the mixing remains the same while the sources are allowed to change across groups, leading to data of the form \begin{equation} \label{eq:time_conc} \left(\mathbf{X}_{g_1},\dots,\mathbf{X}_{g_m}\right)=A\cdot\left(\mathbf{S}_{g_1},\dots,\mathbf{S}_{g_m}\right). \end{equation} The second model, often referred to as ``spatial concatenation'', assumes the sources remain fixed ($n_1 = \dots = n_m$) while the mixing matrices are allowed to change, i.e., \begin{equation} \label{eq:spatial_conc} \begin{pmatrix} \mathbf{X}_{g_1}\\\vdots\\\mathbf{X}_{g_m} \end{pmatrix} = \begin{pmatrix} A_{g_1}\\\vdots\\A_{g_m} \end{pmatrix} \cdot\mathbf{S}.
\end{equation} Finally, the third model assumes that both the sources and the mixing remain fixed across groups, which implies that for all $k\in\{1,\dots,m\}$ it holds that \begin{equation} \label{eq:averaging} \mathbf{X}_{g_k}=A\cdot\mathbf{S}. \end{equation} In all three settings the baseline approach to ICA is to simply apply a classical ICA to the corresponding concatenated or averaged data, i.e., to apply the algorithm to the temporally/spatially concatenated data matrices on the left-hand side of the above equations or to the average over groups. These ad-hoc approaches are appealing, since they postulate straightforward procedures for solving the problem on grouped data and facilitate interpretability of the resulting estimates. It is these ad-hoc approaches that are implemented as the default behavior in toolboxes like the widely used \textsf{eeglab} for EEG analyses \citep{delorme2004eeglab}. Several procedures tailored to specific applications have been proposed that extend these baselines by employing additional assumptions. The most prominent such extensions are tensorial methods that have found popularity in fMRI analysis. They express the group index as an additional dimension (the data is thus viewed as an $\mathbb{R}^{d \times n \times m}$ tensor) and construct an estimated factorization of this tensor representation. Many of these procedures build on the so-called PARAFAC (parallel factor analysis) model \citep{harshman1970}. Recasting the tensor notation, this model is of the form \eqref{eq:spatial_conc} with $A_{g_k}=A\cdot D_{g_k}$ for all groups and for diagonal matrices $D_{g_1},\dots,D_{g_m}$. As can be seen from this representation, the PARAFAC model allows the mixing matrices to change across groups while constraining them to be equal up to column-wise rescaling (intuitively, across groups the source dimensions are allowed to project with different strengths onto the observed signal dimensions).
Given that the matrices $D_{g_1},\dots,D_{g_m}$ are sufficiently different, it is possible to estimate this model uniquely without further assumptions. However, in the case that some of these diagonal matrices are equal, identifiability is lost. In such cases, \citet{beckmann2005tensorial} suggest additionally requiring that the individual components of the sources be independent. This is comparable to the case where uncorrelatedness may not be sufficient for the separation of sources while independence is. The \coroICA procedure also allows for grouped data but aims at inferring a fixed mixing matrix $A$, i.e., a model as given in \eqref{eq:time_conc} is considered. In contrast to vanilla concatenation procedures, our methodology naturally incorporates changes across groups by allowing and adjusting for different stationary confounding noise in each group. We argue why this leads to a more robust procedure and also illustrate this in our simulations and real data experiments. More generally, our goal is to learn an unmixing that generalizes to new and previously unseen groups; think, for example, of learning an unmixing based on several different training subjects and extending it to new, so far unseen subjects. Such tasks appear in brain-computer interfacing applications and can also be of relevance more broadly in feature learning for classification tasks where classification models are to be transferred from one group/domain to another. Since our aim is to learn a fixed mixing matrix $A$ that is confounding-robust and readily applicable to new groups, \coroICA cannot naturally be compared to models that are based on spatial concatenation~\eqref{eq:spatial_conc} or fixed sources \emph{and} mixings~\eqref{eq:averaging}; these methods employ fundamentally different assumptions on the model underlying the data generating process, the crucial difference being that we allow the sources and their time courses to change between groups.
\subsection{Our Contribution} One strength of our methodology is that it explicates a statistical model that is sensible for data with group structure and can be estimated efficiently, while being supported by provable identification results. Furthermore, providing an explicit model with all required assumptions enables a constructive discussion about the appropriateness of such modeling decisions in specific application scenarios. The model itself is based on a notion of invariance against confounding structures across groups, an idea that is also related to invariance principles in causality \citep{haavelmo44,peters2016}; see also Section~\ref{sec:causalinterpretation} for a discussion of the relation to causality. We believe that \coroICA is a valuable contribution to the ICA literature on the following grounds: \begin{compactitem} \item We introduce a methodologically sound framework which extends ordinary ICA to settings with grouped data and confounding noise. \item We prove identifiability of the unmixing matrix under mild assumptions; importantly, we explicitly allow for time-dependent noise, thereby lessening the assumptions required by existing noisy ICA methods. \item We provide an easy-to-implement estimation procedure. \item We illustrate the usefulness, robustness, applicability, and limitations of our newly introduced \coroICA algorithm and characterize its advantage over existing ICAs: the source separation by \coroICA is more stable across groups since it explicitly accounts for group-wise stationary confounding. \item We provide an open-source scikit-learn compatible ready-to-use Python implementation, available as \coroICA from the Python Package Index repository, as well as R and Matlab implementations and an intuitive audible example, which is available at \oururl.
\end{compactitem} \section{Methodology}\label{sec:methodology} We consider a general noisy ICA model inspired by ideas employed in causality research (see Section~\ref{sec:causalinterpretation}). We argue below that it allows us to incorporate group structure and enables joint inference on multi-group data in a natural way. For the model description, let $S_i=(S^1_i,\dots,S^d_i)^\top\in\mathbb{R}^{d\times 1}$ and $H_i=(H^1_i,\dots,H^d_i)^\top\in\mathbb{R}^{d\times 1}$ be two independent vector-valued sequences of random variables where $i\in\{1,\dots, n\}$. The components $S^1_i,\dots,S^d_i$ are assumed to be mutually independent for each $i$ while, importantly, we allow for any weakly stationary noise $H$. Let $A\in\mathbb{R}^{d\times d}$ be an invertible matrix. The $d$-dimensional data process $(X_i)_{i\in\{1,\dots,n\}}$ is generated by the following noisy linear mixing model \begin{equation} \label{eq:mixing} X_i = A\cdot S_i+H_i, \quad \text{for all }i\in\{1,\dots,n\}. \end{equation} $X$ is a linear combination of source signals $S$ and confounding variables $H$. In this model, both $S$ and $H$ are unobserved. One aims at recovering the mixing matrix $A$ as well as the true source signals $S$ from observations of $X$. Without additional assumptions, the confounding $H$ makes it impossible to identify the mixing matrix $A$. Even with additional assumptions it remains a difficult task (see Section~\ref{sec:classical_ica} for an overview of related ICA models). Given the mixing matrix $A$, it is straightforward to recover the confounded source signals $\widetilde{S}_i=S_i+A^{-1}\cdot H_i$. Throughout this paper, we denote by $\mathbf{X}=(X_1,\dots,X_n)\in\mathbb{R}^{d\times n}$ the observed data matrix and similarly by $\mathbf{S}$ and $\mathbf{H}$ the corresponding (unobserved) source and confounding data matrices. For a finite data sample generated by this model we hence have \begin{equation*} \mathbf{X} = A\cdot \mathbf{S}+\mathbf{H}.
\end{equation*} In order to distinguish between the confounding $H$ and the source signals $S$ we assume that the two processes are sufficiently different. This can be achieved by assuming the existence of a group structure such that the covariance of the confounding $H$ remains stationary within a group and only changes across groups. \begin{assumption}[group-wise stationary confounding] \label{assumption:group_structure} There exists a collection of $m$ disjoint groups $\mathcal{G}=\{g_1,\dots,g_m\}$ with $g_k\subseteq\{1,\dots,n\}$ and $\cup_{k=1}^mg_k=\{1,\dots,n\}$ such that for all $g\in\mathcal{G}$ the process $(H_i)_{i\in g}$ is weakly stationary. \end{assumption} Under this assumption and given that the source signals change enough within groups, the mixing matrix $A$ is identifiable (see Section~\ref{sec:identifiable}). Similar to existing ICA methods discussed in Section~\ref{sec:ajd}, we propose to estimate the mixing matrix $A$ by jointly diagonalizing empirical estimates of dependence matrices. In contrast to existing methods, we explicitly allow and adjust for the confounding $H$. The process of finding a matrix $V$ that simultaneously diagonalizes a set of matrices is known as joint matrix diagonalization and has been studied extensively \citep[e.g.,][]{ziehe2004, tichavsky2009}. In Section~\ref{sec:estimation}, we show how to construct an estimator for $V$ based on approximate joint matrix diagonalization. The key step in adjusting for the confounding is to make use of the assumption that in contrast to the signals $S$ the confounding $H$ remains stationary within groups. Depending on the type of signal in the sources one can consider different sets of matrices. Here, we distinguish between two types of signals. \paragraph{Variance signal} In case of a variance signal, the variance process of each signal source $\operatorname{Var}(S^j_i)$ changes over time. 
These changes can be detected by examining the covariance matrix $\operatorname{Cov}(X_i)$ over time. For $V=A^{-1}$ and using \eqref{eq:mixing}, it holds for all $i\in\{1,\dots,n\}$ that \begin{equation*} V\operatorname{Cov}(X_i)V^{\top}=\operatorname{Cov}(S_i)+V\operatorname{Cov}(H_i)V^{\top}. \end{equation*} Since the source signal components $S_i^j$ are mutually independent, the covariance matrix $\operatorname{Cov}(S_i)$ is diagonal. Moreover, due to Assumption~\ref{assumption:group_structure}, the covariance matrix of the confounding $H$ is constant, though not necessarily diagonal, within each group. This implies for all groups $g\in\mathcal{G}$ and for all $k,l\in g$ that \begin{equation} \label{eq:jointdiag_a} V\left(\operatorname{Cov}(X_k)-\operatorname{Cov}(X_l)\right)V^{\top}=\operatorname{Cov}(S_k)-\operatorname{Cov}(S_l) \end{equation} is a diagonal matrix. \paragraph{Time-dependence signal} In case of a time-dependence signal, the time-dependence of each signal source $S^j_i$ changes over time, i.e., for fixed $\tau$, $\operatorname{Cov}(S_i^j,S_{i-\tau}^j)$ changes over time. These changes lead to changes in the auto-covariance matrices $\operatorname{Cov}(X_i,X_{i-\tau})$. Analogous to the variance signal, it holds for all $i\in\{\tau+1,\dots,n\}$ that \begin{equation*} V\operatorname{Cov}(X_i,X_{i-\tau})V^{\top}=\operatorname{Cov}(S_i,S_{i-\tau})+V\operatorname{Cov}(H_i,H_{i-\tau})V^{\top}. \end{equation*} Since the source signal components $S_i^j$ are mutually independent, the auto-covariance matrix $\operatorname{Cov}(S_i,S_{i-\tau})$ is diagonal, and due to the stationarity of $H$ (see Assumption~\ref{assumption:group_structure}) the auto-covariance $\operatorname{Cov}(H_i,H_{i-\tau})$ is constant within each group.
This implies for all groups $g\in\mathcal{G}$, for all $k,l\in g$, and for all $\tau$ that \begin{equation} \label{eq:jointdiag_b} V\left(\operatorname{Cov}(X_k,X_{k-\tau})-\operatorname{Cov}(X_l,X_{l-\tau})\right)V^{\top}=\operatorname{Cov}(S_k,S_{k-\tau})-\operatorname{Cov}(S_l,S_{l-\tau}) \end{equation} is a diagonal matrix. \-\\ For both signal types, we can identify $V$ by simultaneously diagonalizing differences of \mbox{(auto-)covariance} matrices. Details and identifiability results are given in Section~\ref{sec:estimation}. The two signal types considered differ both from the more classical setting of non-Gaussian time-independent signals, as considered for example by fastICA, and from the stationary signals with fixed time-dependence assumed by SOBI (cf.\ Table~\ref{table:ica_methods}). Owing to the non-stationarity of the signal, we can allow for more general forms of noise. \subsection{Motivating Examples} To get a better understanding of our proposed ICA model in \eqref{eq:mixing}, we illustrate two different aspects: the group structure and the noise model. \paragraph{Noise model} \coroICA can be viewed as a noisy ICA in which the noise is allowed to be group-wise non-stationary. This generalizes existing noisy ICA methods, which, to the best of our knowledge, all assume that the noise is independent over time with various additional assumptions. The following example illustrates the intuition behind our model via a toy application to natural images. \begin{example}[unmixing noisy images]\label{ex:image} We provide an illustration of how our proposed method compares to other ICA approaches in the presence of noise.
Four images, each $450\times 300$ pixels and with three RGB color channels, are used to construct four sources $S^1,S^2,S^3,S^4$ as follows.\footnote{The images are freely available from \citet{images}.} Every color channel is converted to a one-dimensional vector by cutting each image into $15\times 10$ equally sized patches (i.e., each patch consists of $30\times 30$ pixels) and concatenating the row-wise vectorized patches. This procedure preserves the local structure of the image. We concatenate the three color channels and consider them as separate groups for our model. Thus, each of the four sources $S^1,\dots,S^4$ consists of $n=3\cdot 450\cdot 300=405{,}000$ observations, that is, three groups of $135{,}000$ observations corresponding to the RGB color channels. Next, we construct locally dependent noise that differs across color channels. Here, locally dependent means that the added noise is similar (and dependent) for pixels which are close to each other. This results in four noise processes $H^1,\dots,H^4$. We combine the sources with the noise and apply a random mixing matrix $A$ to obtain the following observed data \begin{equation*} X=A\cdot S+H. \end{equation*} The recast noisy images $\widetilde{S}=S+A^{-1} H$ are illustrated in the first row and the recast observed mixtures $X$ in the second row of Figure~\ref{fig:image}. The last three rows are the resulting reconstructions of three different ICA procedures, \coroICATD, fastICA and choiICA\,(TD). As expected, fastICA, as a noise-free ICA method, appears frail to the noise in the images. While choiICA\,(TD) is able to adjust for independent noise, it is unable to properly adjust for the spatial dependence of the noise process and thus leads to undesired reconstruction results. In contrast, \coroICATD is able to recover the noisy images.
It is the noise and its characteristics that break the two competing ICA methods, since all three methods are able to unmix the images in the noise-free case (not shown here). \end{example} \begin{figure}[t] \centering \includegraphics{picture_example.pdf} \caption{Images accompanying Example~\ref{ex:image}. The top row shows noisy unmixed images, the second row shows mixed images, and the last three rows show unmixed and rescaled images resulting from an application of \coroICATD, choiICA\,(TD) and fastICA (cf.\ Table~\ref{table:ica_methods}). Here, only \coroICATD is able to correctly unmix the images and recover the original (noise-corrupted) images.} \label{fig:image} \end{figure} The noise model we employ is motivated by recent advances in causality research, where the group-wise stationary noise can be interpreted as unobserved confounding factors in linear causal feedback models. We describe this in more detail with an explicit example application to Antarctic ice core data in Section~\ref{sec:causalinterpretation}. \paragraph{Group structure} A key aspect of our model is that it aims to leverage group structure to improve the stability of the unmixing in the presence of group-wise confounding. Here we refer to the following notion of stability: a stable unmixing matrix extracts the same set of independent sources when applied to the different groups; it is robust against the confounding that varies across groups and introduces dependencies. A standard ICA method is not able to estimate the correct unmixing $V=A^{-1}$ if the data generating process follows our confounded ICA model in \eqref{eq:mixing}. Such methods extract signals that are not only corrupted by the group-wise confounding but are also mixtures of the independent sources, and are thus not stable in the aforementioned sense.
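The cancellation in \eqref{eq:jointdiag_a} that drives this stability can be checked numerically. The following self-contained Python sketch (dimensions, variances, and matrices are chosen purely for illustration) simulates model \eqref{eq:mixing} with a variance signal and stationary confounding within a single group:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 400_000

# sources with block-wise changing variance (a variance signal); the two
# halves play the role of two partition elements e, f of one group
sd = np.where(np.arange(n) < n // 2, [[1.0], [3.0]], [[2.5], [0.7]])
S = rng.standard_normal((d, n)) * sd

# group-wise stationary confounding with a fixed cross-covariance
L = 2.0 * np.array([[1.0, 0.0], [0.8, 0.6]])
H = L @ rng.standard_normal((d, n))

A = np.array([[2.0, 1.0], [1.0, -1.0]])  # mixing matrix
X = A @ S + H
V = np.linalg.inv(A)                     # true unmixing

C1 = np.cov(X[:, : n // 2])
C2 = np.cov(X[:, n // 2:])
M = V @ (C1 - C2) @ V.T  # confounding cancels: approximately diagonal
D = V @ C1 @ V.T         # confounding remains: clearly non-diagonal

max_offdiag = lambda B: np.abs(B - np.diag(np.diag(B))).max()
```

The true unmixing applied to a single within-group covariance leaves clearly visible off-diagonal terms caused by the confounding, whereas applied to a difference of two within-group covariances the confounding contribution cancels and the result is approximately diagonal.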
This is illustrated by the ``America's Got Talent Duet Problem'' (cf.\ Example~\ref{ex:toy_example}), an extension and alteration of the classical ``cocktail party problem''. \begin{example}[America's Got Talent Duet Problem] \label{ex:toy_example} Consider the problem of evaluating two singers at a duet audition individually. This requires listening to the two voices separately while the singers perform simultaneously. There are two sound sources in the audition room (the two singers) and additionally several noise sources which corrupt the recordings at the two microphones (or the jury member's two ears). A schematic of such a setting is illustrated in Supplement~\ref{sec:complementary}, Figure~\ref{fig:crowdedroom}. The additional noise comes from an audience and two open windows. One can assume that this noise satisfies our Assumption~\ref{assumption:group_structure} on a single group. The sound stemming from the audience can be seen as an average of many sounds, hence remaining approximately stationary over time. Typical sounds from an open window also satisfy this assumption, for example sound from a river or a busy road. Our methodology, however, also allows for more complicated settings in which the noise shifts at known points in time, for example if someone opens or closes a window or starts mowing the lawn outside. In such cases we use the known time blocks of stationary noise as groups and apply \coroICAvar to this grouped data. An example with artificial sound data related to this setting is available at \oururl. We show that \coroICAvar is able to recover useful sound signals with the two voices separated into different dimensions, which allows one to listen to them individually. In contrast, existing ICAs applied to the time-concatenated data fail to unmix the two singers. \end{example} \subsection{Identifiability}\label{sec:identifiable} Identifiability requires that the source signals $S$ change sufficiently strongly within groups.
The precise notion of a strong signal depends on the type of signal. As discussed previously, we consider two types of non-stationary signals (i) variance signals and (ii) time-dependence signals. Depending on the signal type we formalize two slightly different assumptions that characterize source signals that ensure identifiability. Firstly, in the case of a variance signal, we have the following assumption. \begin{assumption}[signals with independently changing variance] \label{assumption:var} For each pair of components $p, q\in\{1,\dots,d\}$ we require the existence of three (not necessarily unique) groups $g_1,g_2,g_3\in\mathcal{G}$ and three corresponding pairs $l_1,k_1\in g_{1}$, $l_2,k_2\in g_{2}$ and \mbox{$l_3,k_3\in g_{3}$} such that the two vectors {\renewcommand*{\arraystretch}{1.3} \begin{equation*} \begin{pmatrix} \operatorname{Var}\big(S^{p}_{l_1}\big)-\operatorname{Var}\big(S^{p}_{k_1}\big) \\ \operatorname{Var}\big(S^{p}_{l_2}\big)-\operatorname{Var}\big(S^{p}_{k_2}\big) \\ \operatorname{Var}\big(S^{p}_{l_3}\big)-\operatorname{Var}\big(S^{p}_{k_3}\big) \end{pmatrix} \text{ and } \begin{pmatrix} \operatorname{Var}\big(S^{q}_{l_1}\big)-\operatorname{Var}\big(S^{q}_{k_1}\big) \\ \operatorname{Var}\big(S^{q}_{l_2}\big)-\operatorname{Var}\big(S^{q}_{k_2}\big) \\ \operatorname{Var}\big(S^{q}_{l_3}\big)-\operatorname{Var}\big(S^{q}_{k_3}\big) \end{pmatrix} \end{equation*}}% are neither collinear nor equal to zero. \end{assumption} In case of time-dependence signals we have the analogous assumption. 
\begin{assumption}[signals with independently changing time-dependence] \label{assumption:time-dependence} For each pair of components $p, q\in\{1,\dots,d\}$ we require the existence of three (not necessarily unique) groups $g_1,g_2,g_3\in\mathcal{G}$ and three corresponding pairs $l_1,k_1\in g_{1}$, $l_2,k_2\in g_{2}$ and \mbox{$l_3,k_3\in g_{3}$} for which there exists $\tau\in\{1,\dots,n\}$ such that the two vectors {\renewcommand*{\arraystretch}{1.3} \begin{equation*} \begin{pmatrix} \operatorname{Cov}\big(S^{p}_{l_1},S^{p}_{l_1-\tau}\big)-\operatorname{Cov}\big(S^{p}_{k_1},S^{p}_{k_1-\tau}\big) \\ \operatorname{Cov}\big(S^{p}_{l_2},S^{p}_{l_2-\tau}\big)-\operatorname{Cov}\big(S^{p}_{k_2},S^{p}_{k_2-\tau}\big) \\ \operatorname{Cov}\big(S^{p}_{l_3},S^{p}_{l_3-\tau}\big)-\operatorname{Cov}\big(S^{p}_{k_3},S^{p}_{k_3-\tau}\big) \end{pmatrix} \text{ and } \begin{pmatrix} \operatorname{Cov}\big(S^{q}_{l_1},S^{q}_{l_1-\tau}\big)-\operatorname{Cov}\big(S^{q}_{k_1},S^{q}_{k_1-\tau}\big) \\ \operatorname{Cov}\big(S^{q}_{l_2},S^{q}_{l_2-\tau}\big)-\operatorname{Cov}\big(S^{q}_{k_2},S^{q}_{k_2-\tau}\big) \\ \operatorname{Cov}\big(S^{q}_{l_3},S^{q}_{l_3-\tau}\big)-\operatorname{Cov}\big(S^{q}_{k_3},S^{q}_{k_3-\tau}\big) \end{pmatrix} \end{equation*}}% are neither collinear nor equal to zero. \end{assumption} Intuitively, these assumptions ensure that the signals are not changing in exact synchrony across components, which removes degenerate types of signals. In particular, they are satisfied in the case that the variance or auto-covariance processes change pair-wise independently over time. Whenever one of these assumptions is satisfied, the mixing matrix $A$ is uniquely identifiable. \begin{theorem}[identifiability of the mixing matrix]\label{thm:mixing}\-\\ Assume the data process $(X_i)_{i\in\{1,\dots,n\}}$ satisfies the model in \eqref{eq:mixing} and Assumption~\ref{assumption:group_structure} holds. 
If additionally either Assumption~\ref{assumption:var} or Assumption~\ref{assumption:time-dependence} is satisfied, then $A$ is unique up to permutation and rescaling of its columns. \end{theorem} \begin{proof} A proof is given in Supplement~\ref{sec:proofs}. \end{proof} \subsection{Estimation}\label{sec:estimation} In order to estimate $V$ from a finite observed sample $\mathbf{X} \in \mathbb{R}^{d \times n}$, we first partition each group into subgroups. We then compute the empirical (auto-)covariance matrices on each subgroup. Finally, we estimate a matrix that simultaneously diagonalizes the differences of these empirical (auto-)covariance matrices using an approximate joint matrix diagonalization technique. This procedure results in three methods, depending on which type of matrices we diagonalize. Similar to our notation for the different versions of choiICA, we denote these methods by \coroICAvar if we diagonalize differences of covariances, \coroICATD if we diagonalize differences of auto-covariances, and \coroICAvarTD if we diagonalize differences of both covariances and auto-covariances. More precisely, for each group $g\in\mathcal{G}$, we first construct a partition $\mathcal{P}_g$ consisting of subsets of $g$, i.e., each $e\in\mathcal{P}_g$ satisfies $e\subseteq g$ and $\cup_{e\in\mathcal{P}_g}e=g$. This partition $\mathcal{P}_g$ should be granular enough to capture the changes in the signals described in Assumption~\ref{assumption:var} or~\ref{assumption:time-dependence}. We propose partitioning each group based on a grid such that the separation between grid points is large enough for a reasonable estimation of the covariance matrix and at the same time small enough to capture variations in the signals. In our experiments, we observed robustness with respect to the exact choice; only overly fine partitions should be avoided, since otherwise the procedure becomes fragile due to poorly estimated covariance matrices.
More details on the choice of the partition size are given in Remark~\ref{rmk:partition}. Depending on whether a variance or time-dependence signal or a hybrid thereof is considered, we fix time lags $\Tau \subset \mathbb{N}_0$. Next, for each group $g\in\mathcal{G}$, each distinct pair $e,f\in\mathcal{P}_g$, and each $\tau \in \Tau$ we define the matrix \begin{equation*} M^{g,\tau}_{e,f}\coloneqq \widehat{\operatorname{Cov}}_\tau(\mathbf{X}_e)-\widehat{\operatorname{Cov}}_\tau(\mathbf{X}_f), \end{equation*} where $\widehat{\operatorname{Cov}}_\tau(\cdot)$ denotes the empirical (auto-)covariance matrix for lag $\tau$ and $\mathbf{X}_e$ is the data matrix restricted to the columns corresponding to the subgroup $e$. Assumption~\ref{assumption:group_structure} ensures that $V M^{g,\tau}_{e,f} V^{\top}$ is approximately diagonal. We are therefore interested in finding an invertible matrix $V$ which approximately jointly diagonalizes the matrices in the set \begin{equation} \label{eq:comparison_set_all} \mathcal{M}^{\operatorname{all}} \coloneqq\big\{M^{g,\tau}_{e,f} \,\big\rvert\, g\in\mathcal{G} \text{ and } e,f\in\mathcal{P}_g \text{ and } \tau\in\Tau \big\}. \end{equation} The number of matrices in this set grows quadratically in the number of partitions. This can lead to large numbers of matrices to be diagonalized. 
Another option that reduces the computational load is to compare each partition element to its complement, which leads to the following set of matrices \begin{equation} \label{eq:comparison_set_comp} \mathcal{M}^{\operatorname{comp}} \coloneqq\big\{M^{g,\tau}_{e, \bar{e}} \,\big\rvert\, g\in\mathcal{G} \text{ and } e\in\mathcal{P}_g \,(\text{with }\bar{e}\coloneqq g\setminus e) \text{ and } \tau\in\Tau\big\} \end{equation} or to compare only neighboring partition elements as in \begin{equation} \label{eq:comparison_set_neighbor} \mathcal{M}^{\operatorname{neighbor}} \coloneqq\big\{M^{g,\tau}_{e,\operatorname{neighbor}(e)} \,\big\rvert\, g\in\mathcal{G} \text{ and } e\in\mathcal{P}_g \text{ and } \tau\in\Tau\big\}, \end{equation} where $\operatorname{neighbor}(e)$ is the partition element to the right of $e$. The task of jointly diagonalizing a set of matrices is a well-studied topic in the literature and is referred to as approximate joint matrix diagonalization. Many solutions have been proposed for different assumptions made on the matrices to be diagonalized. In this paper, we use the \textsf{uwedge} algorithm\footnote{As a byproduct of our work, we are able to provide a new stable open-source Python/R/Matlab implementation of the \textsf{uwedge} algorithm, which is also included in our respective \coroICA packages.} introduced by \citet{tichavsky2009}. The basic idea behind \textsf{uwedge} is to find a minimizer of a proxy for the loss function \begin{equation*} \ell(V)=\sum_{M\in\mathcal{M}^{*}}\left(\sum_{k\neq l}\big[V M V^{\top}\big]_{k,l}^2\right), \end{equation*} over the set of invertible matrices, where in our case $\mathcal{M}^* \in \{\mathcal{M}^{\operatorname{all}}, \mathcal{M}^{\operatorname{comp}},\mathcal{M}^{\operatorname{neighbor}}\}$.
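To make the estimation target concrete, the following Python sketch (simulated data with illustration-only sizes and a fixed mixing matrix; an actual fit would hand the matrix set to an approximate joint diagonalizer such as \textsf{uwedge}) constructs the neighboring-block difference matrices for a variance signal, i.e., the set $\mathcal{M}^{\operatorname{neighbor}}$ with lag $\tau=0$, and evaluates the off-diagonal loss $\ell$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_block, n_blocks, n_groups = 3, 30_000, 4, 2

def offdiag_sq(B):
    """Sum of squared off-diagonal entries of a square matrix."""
    return (B ** 2).sum() - (np.diag(B) ** 2).sum()

A = np.array([[2.0, 1.0, 0.5],   # fixed, well-conditioned mixing matrix
              [1.0, -1.0, 0.3],
              [0.0, 0.4, 1.0]])
V_true = np.linalg.inv(A)

M_set = []  # differences of neighboring block covariances, per group
for _ in range(n_groups):
    # sources with block-wise changing variances (a variance signal)
    sds = rng.uniform(0.5, 3.0, size=(n_blocks, d))
    S = np.concatenate([rng.standard_normal((d, n_block)) * sd[:, None]
                        for sd in sds], axis=1)
    # group-wise stationary confounding with a random fixed covariance
    L = rng.standard_normal((d, d))
    X = A @ S + L @ rng.standard_normal(S.shape)
    covs = [np.cov(X[:, i * n_block:(i + 1) * n_block])
            for i in range(n_blocks)]
    M_set += [covs[i] - covs[i + 1] for i in range(n_blocks - 1)]

def loss(V):
    """Off-diagonal loss ell(V) over the constructed matrix set."""
    return sum(offdiag_sq(V @ M @ V.T) for M in M_set)

ratio = loss(V_true) / loss(np.eye(d))  # close to zero for the true unmixing
```

The true unmixing attains a loss that is orders of magnitude smaller than that of, e.g., the identity matrix, which is exactly the gap the joint diagonalizer exploits.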
The full estimation procedure based on the set $\mathcal{M}^{\operatorname{neighbor}}$ defined in \eqref{eq:comparison_set_neighbor} is made explicit in the pseudo code in Algorithm~\ref{alg:coroICA} (where ApproximateJointDiagonalizer stands for a general approximate joint diagonalizer; here we use \textsf{uwedge}). \begin{remark}[choosing the partition and the lags] \label{rmk:partition} Whenever there is no obvious partition of the data, we propose to partition the data into equally sized blocks with a fixed partition size. The decision on how to choose a partition size should be driven by the type of non-stationary signal one expects and the dimensionality of the data. For example, in the case of a variance signal the partition should be fine enough to capture areas of high and low variance, while at the same time being coarse enough to allow for sufficiently good estimates of the covariance matrices. That said, for applications to real data sets the signals are often long enough that a whole range of partition sizes works well. In cases with few data points, it can then be useful to consider several grids with different partition sizes and diagonalize across all resulting differences simultaneously. This somewhat removes the dependence of the results on the exact choice of a partition size and increases the power of the procedure. We employ this approach in Section~\ref{ex:climate}. In general, the lags $\Tau$ should be chosen as $\Tau = \{0\}$, $\Tau \subset \mathbb{N}$, or $\Tau \subset \mathbb{N}_0$, depending on whether a variance signal, a time-dependence signal, or a hybrid thereof is considered. For time-dependence signals, we recommend determining up to which time lag the autocorrelation of the observed signals has sufficiently decayed, and using all lags up to that point.
\end{remark} \begin{algorithm}[h] \SetAlgoLined \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{data matrix $\mathbf{X}$\newline group index $\mathcal{G}$ (user selected)\newline group-wise partition $(\mathcal{P}_g)_{g\in\mathcal{G}}$ (user selected)\newline lags $\Tau \subset \mathbb{N}_0$ (user selected) } initialize empty list $\mathcal{M}$ \For{$g\in\mathcal{G}$}{ \For{$e\in\mathcal{P}_g$}{ \For{$\tau\in\Tau$}{ append $\widehat{\operatorname{Cov}}_\tau(\mathbf{X}_e)-\widehat{\operatorname{Cov}}_\tau(\mathbf{X}_{\operatorname{neighbor}(e)})$ to list $\mathcal{M}$ } } } $\widehat{V}\gets\operatorname{ApproximateJointDiagonalizer}(\mathcal{M})$ $\widehat{\mathbf{S}}\gets \widehat{V}\mathbf{X}$ \Output{unmixing matrix $\widehat{V}$\newline sources $\widehat{\mathbf{S}}$} \caption{\coroICA} \label{alg:coroICA} \end{algorithm} \subsection{Assessing the Quality of Recovered Sources}\label{sec:scores} Assessing the quality of the recovered sources in an ICA setting is inherently difficult, as is typical for unsupervised learning procedures: the scale and ordering of the sources are unidentifiable, and the choice of a performance measure is not clear-cut. Provided that ground truth is known, several scores have been proposed, most notably the Amari measure introduced by \citet{amari1996} and the minimum distance (MD) index due to \citet{ilmonen2010}. Here, we use the MD index, which is defined as \begin{equation*} \operatorname{MD}(\hat{V}, A) = \frac{1}{\sqrt{p-1}}\inf_{C\in\mathcal{C}}\norm{C\hat{V}A-\operatorname{Id}}, \end{equation*} where the set $\mathcal{C}$ consists of matrices for which each row and column has exactly one nonzero element. Intuitively, this score measures how close $\hat{V}A$ is to a rescaled and permuted version of the identity matrix. One appealing property of this score is that it can be computed efficiently by solving a linear sum assignment problem.
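To make this concrete, the following sketch computes the MD index via a closed-form row-wise rescaling combined with \texttt{scipy}'s linear sum assignment solver (a simplified illustration rather than a reference implementation; the Frobenius norm is assumed in the definition above):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def md_index(V_hat, A):
    """Minimum distance index: 0 iff V_hat @ A is a rescaled
    permutation of the identity; values lie in [0, 1]."""
    G = V_hat @ A
    p = G.shape[0]
    # Share of each row's squared mass per column; under the optimal
    # scaling, assigning row k to column j costs 1 - R[k, j].
    R = G ** 2 / np.sum(G ** 2, axis=1, keepdims=True)
    rows, cols = linear_sum_assignment(-R)  # best permutation
    return np.sqrt((p - R[rows, cols].sum()) / (p - 1))

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
# An unmixing correct up to permutation and scaling has MD index 0.
P = np.eye(5)[rng.permutation(5)]
D = np.diag(rng.uniform(0.5, 2.0, size=5))
print(md_index(D @ P @ np.linalg.inv(A), A))  # numerically ~0
```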
In contrast to the Amari measure, the MD index is affine invariant and has desirable theoretical properties \citep[see][]{ilmonen2010}. We require a different performance measure for our real data experiments, where the true unmixing matrix is unknown. Here, we check whether the desired independence (after adjustment for the constant confounding) is achieved by computing the following covariance instability score (CIS) matrix. It measures the instability of the covariance structure of the unmixed sources $\widehat{\mathbf{S}}$ and is defined for each group $g\in\mathcal{G}$ and a corresponding partition $\mathcal{P}_g$ (see Section~\ref{sec:estimation}) by \begin{equation*} \operatorname{CIS}(\widehat{\mathbf{S}},\mathcal{P}_g)\coloneqq \frac{1}{\abs{\mathcal{P}_g}} \sum_{e\in\mathcal{P}_g}\left(\frac{\widehat{\operatorname{Cov}}(\widehat{\mathbf{S}}_e)-\widehat{\operatorname{Cov}}(\widehat{\mathbf{S}}_{\operatorname{neighbor}(e)})}{\widehat{\sigma}_{\widehat{\mathbf{S}}_g}\cdot\widehat{\sigma}_{\widehat{\mathbf{S}}_g}^{\top}}\right)^2, \end{equation*} where $\widehat{\sigma}_{\widehat{\mathbf{S}}_g}\in\mathbb{R}^{d\times 1}$ is the empirical standard deviation of $\widehat{\mathbf{S}}_g$ and the fraction is taken element-wise. The CIS matrix is approximately diagonal whenever $\widehat{\mathbf{S}}$ can be written as the sum of independent source signals $\mathbf{S}$ and confounding $\mathbf{H}$ with fixed covariance. This is condensed into one scalar that reflects how stable the sources' covariance structure is by averaging the off-diagonal entries of the CIS matrix \begin{equation*} \operatorname{MCIS}(\widehat{\mathbf{S}}, \mathcal{P}_g)^2\coloneqq \frac{1}{d(d-1)}\sum_{\substack{i,j=1\\i\neq j}}^d\left[\operatorname{CIS}(\widehat{\mathbf{S}}, \mathcal{P}_g)\right]_{i,j}.
\end{equation*} The differences taken in the CIS score extract the variance signals, such that the mean covariance instability score (MCIS) can be understood as a measure of independence between the recovered variance signal processes. High values of MCIS imply strong dependences beyond stationary confounding between the signals; low values imply weak dependences. MCIS is a reasonable score whenever there is a \emph{variance signal} (as described in Section~\ref{sec:methodology}) in the sources and is a sensible evaluation metric for ICA procedures in such cases. In the case of a time-dependence signal (as described in Section~\ref{sec:methodology}), one can define an analogous score based on the auto-covariances. Here, we restrict ourselves to the variance signal case, as for all our applications this appeared to constitute the dominant part of the signal. In the case of \emph{variance signals} the MCIS appears natural and appropriate as an independence measure: It measures how well the individual variance signals (and hence the relevant information) are separated. To get a better intuition, let $A=(a_1,\dots,a_d)\in\mathbb{R}^{d\times d}$ denote the mixing and $V=(v_1,\dots,v_d)^\top\in\mathbb{R}^{d\times d}$ the corresponding unmixing matrix (i.e., $V=A^{-1}$, $a_i$ are columns of $A$ and $v_i$ are rows of $V$).
Then it holds that \begin{align} \label{eq:covXv} \nonumber \operatorname{Cov}(X_i)v_j^{\top}&=A\operatorname{Cov}(S_i)A^{\top}v_j^{\top}+A\operatorname{Cov}(H_i)A^{\top}v_j^{\top}\\ \nonumber &=A\operatorname{Cov}(S_i)e_j^{\top}+A\operatorname{Cov}(H_i)e_j^{\top}\\ &=a_j\operatorname{Var}(S_i^j)+A\operatorname{Cov}(H_i)e_j^{\top}. \end{align} Under our group-wise stationary confounding assumption (Assumption~\ref{assumption:group_structure}) this implies that within all groups $g\in\mathcal{G}$, it holds for all $l,k\in g$ that \begin{equation} \label{eq:stable_comp2} \left(\operatorname{Cov}(X_l)-\operatorname{Cov}(X_k)\right)v_j^{\top} =a_j\left(\operatorname{Var}(S^j_l)-\operatorname{Var}(S^j_k)\right). \end{equation} This equation also holds in the confounding-free case and it reflects the contribution of the signal (in terms of variance signal) of the $j$-th recovered source $S^j$ to the variance signal in all components of the observed multivariate data $X$. While in the population case the equality in \eqref{eq:stable_comp2} is satisfied exactly, this is no longer the case when the (un-)mixing matrix is estimated on finite data. Consider two subsets $e, f\in g$ for some group $g\in\mathcal{G}$, then using the notation from Section~\ref{sec:estimation} and denoting by $\widehat{v}_j$ and $\widehat{a}_j$ the estimates of $v_j$ and $a_j$, respectively, it holds that \begin{align} \label{eq:empcovXv} \nonumber M_{e,f}^g\widehat{v}_j^{\top} &=\big[\widehat{\operatorname{Cov}}(\mathbf{X}_e)-\widehat{\operatorname{Cov}}(\mathbf{X}_f)\big]\widehat{v}_j^{\top}\\ \nonumber &=\widehat{A}\big[\widehat{\operatorname{Cov}}(\widehat{S}_e)-\widehat{\operatorname{Cov}}(\widehat{S}_f)\big]\widehat{A}^{\top}\widehat{v}_j^{\top}\\ &=\widehat{A}\big[\widehat{\operatorname{Cov}}(\widehat{S}_e)-\widehat{\operatorname{Cov}}(\widehat{S}_f)\big]e_j^{\top}\nonumber\\ &\approx \widehat{a}_j\big(\operatorname{Var}(S_{e}^j)-\operatorname{Var}(S_{f}^j)\big).
\end{align} The approximation is close only if the empirical estimate $\widehat{V}$ correctly unmixes the $j$-th source. Essentially, MCIS measures the extent to which this approximation holds true for all components simultaneously across the subsets specified by the partition $\mathcal{P}_g$. It is also possible to consider individual components by assessing how closely the following proportionality is satisfied \begin{equation} \label{eq:component_instability} \sum_{M\in\mathcal{M}^*}\operatorname{sign}(\widehat{v}_jM\widehat{v}_j^{\top}) M\widehat{v}_j^{\top}\propto \widehat{a}_j. \end{equation} In EEG experiments, this can also be assessed visually by comparing the topographic maps corresponding to columns of $A$ with so-called activation maps corresponding to the left-hand side in \eqref{eq:component_instability}. More details on this are provided in Section~\ref{sec:topographic_maps}. \FloatBarrier \section{Causal Perspective}\label{sec:causalinterpretation} Our underlying noisy ICA model \eqref{eq:mixing} and the assumption on the noise (Assumption~\ref{assumption:group_structure}) are motivated by causal structure learning scenarios. ICA is closely linked to the problem of identifying structural causal models (SCMs) \citep[see][]{Pearl2009, Imbens2015, Peters2017}. \citet{shimizu2006} were the first to make this connection explicit and used ICA to infer causal structures. To make this more precise, consider the following linear SCM \begin{equation} \label{eq:linearSCM} X_i=B\cdot X_i +\widetilde{S}_i, \end{equation} where $X_i$ are observed covariates and $\widetilde{S}_i$ are noise terms. An SCM induces a corresponding causal graph over the involved variables by drawing an edge from each variable on the right-hand side to the variable on the left-hand side of \eqref{eq:linearSCM}. Moreover, we can define noise interventions \citep{Pearl2009} by allowing the distributions of the noise terms $\widetilde{S}_i$ to change for different $i$.
In the language of ICA, this means that the signals $\widetilde{S}_i$ encode the different interventions (over time) on the noise variables. Assuming that the matrix $\operatorname{Id}-B$ is invertible, we can rewrite \eqref{eq:linearSCM} as \begin{equation*} X_i=(\operatorname{Id}-B)^{-1} \widetilde{S}_i, \end{equation*} which can be viewed as an ICA model with mixing matrix $A=(\operatorname{Id}-B)^{-1}$. Instead of taking the noise terms $\widetilde{S}_i$ as independent noise sources, one can also consider $\widetilde{S}_i=S_i+H_i$. In that case the linear SCM in \eqref{eq:linearSCM} describes a causal model between the observed variables $X_i$ in which hidden confounding is allowed. This is illustrated in Figure~\ref{fig:SCMillustration}, which depicts a three-variable SCM with feedback loops and confounding. Learning a causal model as in \eqref{eq:linearSCM} with ICA is generally done by performing the following two steps. \begin{enumerate}[(i)] \item \textbf{(ICA)} The matrix $(\operatorname{Id}-B)$ is inferred by ICA up to an undefined scale and permutation of its rows by using an appropriate ICA procedure. This step is often infeasible in the presence of confounding $H$ since existing ICA methods only allow noise under restrictive assumptions (cf. Table~\ref{table:ica_methods}). \item \textbf{(identify $\mathbf{B}$)} There are essentially two assumptions one can make in order to identify $B$ from the (permuted and rescaled) ICA output. The first is to assume that the underlying causal model has an acyclic structure as in \citet{shimizu2006}. In that case the matrix $B$ can be permuted to a strictly triangular matrix. The second option is to allow for feedback loops in the causal model but restrict the types of feedback to exclude infinite loops as in \citet{hoyer2008} and \citet{rothenhausler2015}.
\end{enumerate} When performing step (i), two important modeling assumptions enter through the choice of the ICA procedure: (a) the type of allowed signals (types of interventions) and (b) the type of allowed confounding. For the classic ICA setting with non-Gaussian source signals and no noise this translates to the class of linear non-Gaussian models, such as Linear Non-Gaussian Acyclic Models (LiNGAMs) introduced by \citet{shimizu2006}. While such models are a sensible choice in a purely observational setting (i.e., no samples from interventional settings), they are somewhat misspecified in terms of (a) when data from different interventional settings or time-continuous intervention shifts are observed (see Remark~\ref{rmk:intervention}). In those settings, it is more natural to use ICA methods that are tailored to sequential shifts, such as choiICA or \coroICA. Moreover, most common ICA methods consider noise-free mixing, which from a causal perspective implies that no hidden confounding is allowed. While noisy ICA weakens this assumption, existing methods only allow for time-independent or even iid noise, which again greatly restricts the type of confounding. In contrast, our proposed \coroICA allows for any type of block-wise stationary confounding, hence greatly increasing the class of causal models which can be inferred. This is attractive for causal modeling as it is a priori unknown whether hidden confounding exists. Therefore, our proposed procedure allows for robust causal inference under general confounding settings. In Section~\ref{ex:climate}, we illustrate a potential application to climate science and how the choice of ICA can have a strong impact on the estimates of the causal parameters.
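To illustrate step (ii) in the acyclic case, consider the following toy sketch: given an unmixing matrix that equals $\operatorname{Id}-B$ only up to row permutation and scaling (the ambiguity left by step (i)), one permutes the rows so that the diagonal contains no (near-)zeros, rescales to a unit diagonal, and reads off $B$. The brute-force permutation search and the max-min criterion are simplifications suitable only for small $d$; practical LiNGAM implementations use more refined search strategies.

```python
import itertools
import numpy as np

def recover_B(W):
    """Recover B from W, where W equals Id - B up to an unknown
    row permutation and row scaling. Brute force over permutations:
    pick the one whose diagonal stays furthest from zero."""
    d = W.shape[0]
    best = max(itertools.permutations(range(d)),
               key=lambda p: np.min(np.abs(W[list(p), range(d)])))
    Wp = W[list(best), :]
    Wp = Wp / np.diag(Wp)[:, None]  # rescale rows to unit diagonal
    return np.eye(d) - Wp

# Acyclic toy model: strictly lower-triangular B.
B = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [-0.4, 0.5, 0.0]])
A = np.linalg.inv(np.eye(3) - B)
# Simulate the ambiguity left by step (i): random permutation/scaling.
rng = np.random.default_rng(2)
P = np.eye(3)[rng.permutation(3)]
D = np.diag(rng.uniform(0.5, 2.0, size=3))
B_hat = recover_B(D @ P @ np.linalg.inv(A))
print(np.round(B_hat, 6))  # recovers B
```

The permutation step works because, for an acyclic model, every wrong row order places a structural zero on the diagonal.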
\begin{remark}[relation between interventions and non-stationarity] \label{rmk:intervention} A causal model does not only describe the observational distribution but also the behavior of the data generating model under all of the allowed interventions. Here, we restrict the allowed interventions to distribution shifts in the source signals that either change the distribution block-wise (e.g., abruptly changing environmental conditions) or continuously (e.g., continuous shifts in the environmental conditions). Any such shifts are by definition synonymous with the process $S_i$ being non-stationary. In our proposed causal model \eqref{eq:linearSCM} the non-stationarity of the signal therefore corresponds to shifts in the environmental conditions, which can be utilized, using \coroICA, to infer the underlying causal structure. From this perspective, the causal inference procedure we propose here is a method based on interventional rather than purely observational data, even though the interventions are not exactly known.
\end{remark} \begin{figure}[t] \centering \hfill \begin{minipage}{0.5\textwidth} \scalebox{0.6}{ \begin{tikzpicture}[framed, background rectangle/.style={thick, draw=black, rounded corners}, scale=5] \tikzstyle{VertexStyle} = [shape = circle, minimum width = 3em,draw] \SetGraphUnit{2} \Vertex[Math,L=X^1,x=-0.6,y=0]{X1} \Vertex[Math,L=X^2,x=0.6,y=0]{X2} \Vertex[Math,L=X^3,x=0,y=0.6]{X3} \Vertex[Math,L=S^1,x=-1,y=0]{S1} \Vertex[Math,L=S^2,x=1,y=0]{S2} \Vertex[Math,L=S^3,x=0,y=1]{S3} \tikzstyle{VertexStyle} = [shape = circle, minimum width = 3em,draw,color=colA] \Vertex[Math,L=H^1,x=-0.6,y=0.6]{H1} \Vertex[Math,L=H^2,x=0.6,y=0.6]{H2} \tikzstyle{EdgeStyle} = [->,>=stealth',shorten > = 2pt] \Edge(S1)(X1) \Edge(S2)(X2) \Edge(S3)(X3) \Edge(X1)(X3) \Edge(X3)(X2) \tikzset{EdgeStyle/.append style = {->, bend left}} \Edge(X1)(X2) \Edge(X2)(X1) \tikzstyle{EdgeStyle} = [->,>=stealth',shorten > = 2pt, color=colA] \Edge(H1)(X1) \Edge(H1)(X3) \Edge(H2)(X2) \Edge(H2)(X3) \end{tikzpicture} } \end{minipage}% \hfill \begin{minipage}{0.5\textwidth} \begin{mdframed}[roundcorner=5pt] \begin{align*} X^1 &\leftarrow b_{1,2}X^2 &&+ S^1+\textcolor{colA}{H^1}\\ X^2 &\leftarrow b_{2,1}X^1 + b_{2,3}X^3 &&+ S^2+\textcolor{colA}{H^2}\\ X^3 &\leftarrow b_{3,1}X^1 &&+ S^3+\textcolor{colA}{H^1}+\textcolor{colA}{H^2} \end{align*} \phantom{1} \end{mdframed} \end{minipage} \hfill \caption{Illustration of an SCM with (including colored nodes \textcolor{colA}{$H^1$}, \textcolor{colA}{$H^2$}) and without (excluding colored nodes) confounding.} \label{fig:SCMillustration} \end{figure} \subsection{Application to Climate Science}\label{ex:climate} To motivate the foregoing causal model we consider a prominent example from climate science: the causal relationship between carbon dioxide concentration ($\text{CO}_2$) and temperature (T). 
More precisely, we consider Antarctic ice core data that consists of temperature and carbon dioxide measurements of the past 800,000 years due to \citet[carbon dioxide]{Bereiter2014} and \citet[temperature]{jouzel2007orbital}. We combined the temperature and carbon dioxide data and resampled measurements every 500 years by a cubic interpolation of the raw data. The data is shown in Figure~\ref{fig:climate_results} (right). Oversimplifying, one can model this data as an SCM with time-lags as follows \begin{align} \begin{pmatrix} \log(\operatorname{CO}_2)_t\\ \operatorname{T}_t \end{pmatrix} = \underbrace{\begin{pmatrix} 0 & \beta\\ \alpha & 0 \end{pmatrix}}_{=B_0} \begin{pmatrix} \log(\operatorname{CO}_2)_t\\ \operatorname{T}_t \end{pmatrix} + \sum_{k=1}^p B_k \begin{pmatrix} \log(\operatorname{CO}_2)_{t-k}\\ \operatorname{T}_{t-k} \end{pmatrix} + \widetilde{S}_t,\label{eq:SVAR_climate} \end{align} where $\widetilde{S}_t=S_t+H_t$ with $S_t$ component-wise independent non-stationary source signals and $H_t$ a stationary confounding process. Vector-valued linear time-series models of this type are referred to as structural vector autoregressive models (SVARs) \citep[see e.g.,][]{lutkepohl2005}. They have been previously analyzed in the confounding-free case by \citet{hyvarinen2010estimation}, using an ICA based causal inference approach. A graphical representation of such a model is shown in Supplement~\ref{sec:causal_appendix}, Figure~\ref{fig:graph}. In this example, we can think of the source signals $S_t$ as being two independent summaries of important factors that affect both temperature and carbon dioxide and vary over time, e.g., environmental catastrophes like volcano eruptions and large wildfires, sunspot activity or ice-coverage. These variations can be considered as changing environmental conditions or interventions (see Remark~\ref{rmk:intervention}).
On the other hand, the stationary confounding process $H_t$ can be thought of as factors which affect both temperature and carbon dioxide in a constant fashion over time; for example, this could be effects due to shifts in the earth's rotation axis. Assuming that this was the true underlying causal model, we could use it to predict what happens under interventions. From a climate science perspective an interesting intervention is given by doubling the concentration of $\text{CO}_2$ and determining the resulting instantaneous (faster than 1000 years) effect on the temperature. This effect is commonly referred to as equilibrium climate sensitivity (ECS) due to $\text{CO}_2$, which is loosely defined as the change in temperature associated with a doubling of the concentration of carbon dioxide in the earth's atmosphere. In the fifth assessment report of the United Nations Intergovernmental Panel on Climate Change it has been stated that ``there is high confidence that ECS is extremely unlikely less than 1~\degree C and medium confidence that the ECS is likely between 1.5~\degree C and 4.5~\degree C and very unlikely greater than 6~\degree C'' \citep[Chapter~10]{stocker2014climate}. Since the measurement frequency in our model is quite low (500 years) and we model the logarithm of carbon dioxide, the ECS corresponds to \begin{equation*} \text{ECS} = \log(2)\alpha. \end{equation*} Estimating the model in \eqref{eq:SVAR_climate} can be done by first fitting a vector auto-regressive model of the time lags using OLS, resulting in a vector of residuals \begin{equation*} R_t= \begin{pmatrix} \log(\operatorname{CO}_2)_t\\ \operatorname{T}_t \end{pmatrix} - \begin{pmatrix} \widehat{\log(\operatorname{CO}_2)_t}\\ \widehat{\operatorname{T}_t} \end{pmatrix}. \end{equation*} Then, one can apply the two-step causal inference procedure described in Section~\ref{sec:causalinterpretation} to \begin{equation*} R_t=B_0R_t+\widetilde{S}_t.
\end{equation*} Since we are in a two-dimensional setting, step (ii) (i.e., identifying the causal parameters $\alpha$ and $\beta$ from the estimated mixing matrix) only requires assuming that feedback loops do not blow up, which translates into $B_0$ having spectral norm less than one. Given that the signal is sufficiently strong (i.e., there are sufficient interventions on both $\text{CO}_2$ and $T$), it is possible to recover the causal parameters by trying both potential permutations of the sources with subsequent scaling and assessing whether the aforementioned condition is satisfied. We applied this procedure based on \coroICAvar to the data in order to estimate climate sensitivity and compared it with the results obtained when using fastICA or choiICA\,(var). The results are given in Figure~\ref{fig:climate_results}. \begin{figure} \centering \begin{minipage}{0.575\textwidth} \resizebox{\textwidth}{!}{ \includegraphics{climate_sensitivity.pdf}} \end{minipage}% \begin{minipage}{0.375\textwidth} \includegraphics{raw_climate.pdf} \end{minipage} \caption{(left) Estimated equilibrium climate sensitivity (ECS) for different ICAs depending on the number of lags included in the SVAR model. The light gray and dark gray overlay indicate likely and very likely value ranges, respectively, for the true value of climate sensitivity as per the fifth assessment report of the United Nations Intergovernmental Panel on Climate Change (cf.\ Section~\ref{ex:climate}). The differences across procedures illustrate that the choice of ICA has a large effect on the estimation. (right) Interpolated time-series data, which we model with an SVAR model.} \label{fig:climate_results} \end{figure} We believe the results illustrate two important aspects. Firstly, the choice of the number of lags has a strong effect on the estimation of the causal effect parameters, particularly for boundary cases. If it is chosen too small, the remaining time-dependence in the data can obscure the signal.
If it is chosen too large, part of the signal is removed. Choosing an appropriate number of lags is therefore crucial. One option would be to apply an information criterion (AIC or BIC) for this. Secondly, the results illustrate that the choice of ICA has a large impact on the estimated causal effect parameters. More specifically, both the assumed signal type as well as the assumed confounding have an impact on the estimation. Compare the results between fastICA (non-Gaussian signal) and choiICA/\coroICA (variance signal) for the former, and observe the differences between fastICA/choiICA (no confounding) and \coroICA (adjusted for stationary confounding) for the latter. The choice of the ICA algorithm should therefore be driven by the assumptions (both on signal type and confounding) one is willing to impose on the underlying model. Considering a variance signal and adjusting for confounding, \coroICA appears to lead to estimates of equilibrium climate sensitivity that are more closely in line with the likely bands previously identified by the United Nations Intergovernmental Panel on Climate Change. This observation is only indicative, as all three methods yield highly variable results and the panel's likely band also rests on certain assumptions that may be refuted at some later point. \coroICA can be considered a conservative choice if no assumptions on confounding can be made, while noise-free methods may outperform it if there is indeed no confounding. \section{Experiments} In this section, we analyze empirical properties of \coroICA. To this end, we first illustrate the performance of \coroICA as compared to time-concatenated versions of (noisy) ICA variants on simulated data with and without confounding. We also compare on real data and outline potential benefits of using our method when analyzing multi-subject EEG data.
\subsection{Competing Methods}\label{sec:competing_methods} In all of our numerical experiments, we apply \coroICA as outlined in Algorithm~\ref{alg:coroICA}, where we partition each group based on equally spaced grids and run a fixed number of $10\cdot 10^3$ iterations of the \textsf{uwedge} approximate joint diagonalizer. Unless specified otherwise, \coroICA refers to \coroICAvar (i.e., the variance signal based version) and we explicitly write \coroICAvar, \coroICATD and \coroICAvarTD whenever appropriate to avoid confusion. We compare with all of the methods in Table~\ref{table:ica_methods}. Since no Python implementation was publicly available, we implemented the choiICA variants and SOBI ourselves, likewise based on a fixed number of $10\cdot 10^3$ iterations of the \textsf{uwedge} approximate joint diagonalizer. For fastICA we use the implementation from the scikit-learn Python library due to \citet{scikit-learn} with its default parameters. For the simulation experiments in Section~\ref{sec:simulations}, we also compare to random projections of the sources, where the unmixing matrix is simply sampled with iid standard normal entries. The idea of this comparison is to give a baseline for the unmixing problem and enhance intuition about the scores' behavior. In order to illustrate the variance of this baseline, we generally sample $100$ random projections and show the results for each of them. A random mixing does not lead to interpretable sources, so we do not compare with random projections in the EEG experiments in Section~\ref{sec:eeg_exps}. \subsection{Simulations}\label{sec:simulations} In this section, we investigate empirical properties of \coroICA in well-controlled simulated scenarios.
First off, we show that we can recover the correct mixing matrix given that the data is generated according to our model \eqref{eq:mixing} and Assumptions~\ref{assumption:group_structure} and~\ref{assumption:var} hold, while the other ICAs necessarily fall short in this setting (cf.\ Section~\ref{sec:sim_confounding}). Moreover, in Section~\ref{sec:sim_robust} we show that even in the absence of any confounding (i.e., when the data follows the ordinary ICA model and $H\equiv 0$ in our model) we remain competitive with all competing ICAs. Finally, in Section~\ref{sec:GARCH} we analyze the performance of \coroICA for various types of signals and noise settings. Our first two simulation experiments are based on block-wise shifting variance signals, which we describe in \hyperlink{dat:datasimul}{Data~Set~1}, and our third simulation experiment is based on GARCH type models described in \hyperlink{dat:datasimul2}{Data~Set~2}. \subsubsection{Dependence on Confounding Strength}\label{sec:sim_confounding} For this simulation experiment, we sample data according to \hyperlink{dat:datasimul}{Data~Set~1} and choose to simulate $n=10^5$ (dimension $d=22$) samples from $m=10$ groups where each group contains $n/m=10^4$ observations. Within each group, we select a random partition consisting of $\abs{\mathcal{P}_g}=10$ subsets while ensuring that these have the same size on average. We fix the signal strength to $c_2=1$ and consider the behavior of \coroICA (trained on half of the groups with an equally spaced grid of $10$ partitions per group) for different confounding strengths $c_1\in\{0.125,0.25,0.5,1,1.5,2,2.5,3\}$. The results for $1000$ repetitions are shown in Figure~\ref{fig:sim_01}. To allow for a fair comparison we take the same partition size for choiICA\,(var).
\begin{mdframed}[roundcorner=5pt, frametitle={\hypertarget{dat:datasimul}{Data Set 1}: Block-wise shifting variance signals}] For our simulations we select $m$ equally sized groups $\mathcal{G}\coloneqq\{g_1,\dots, g_m\}$ of the data points $\{1,\dots, n\}$ and for each group $g\in\mathcal{G}$ construct a partition $\mathcal{P}_g$. Then, we sample a model of the form \begin{equation*} X_i=A\cdot\left(S_{i}+C\cdot H_{i}\right), \end{equation*} where the values on the right-hand side are sampled as follows: \begin{itemize} \item $A, C\in\mathbb{R}^{d\times d}$ are sampled with iid entries from $\mathcal{N}(0, 1)$ and $\mathcal{N}(0, \frac{1}{d})$, respectively. \item For each $g\in\mathcal{G}$ the variables $H_i\in\mathbb{R}^d$ are sampled from $\mathcal{N}(0, \sigma^2_g\operatorname{Id}_{d})$, where the $\sigma^2_g$ are sampled iid from $\operatorname{Unif}(0.1, b_1)$. \item For each $g\in\mathcal{G}$ and $e\in\mathcal{P}_g$ the variables $S_i\in\mathbb{R}^d$ are sampled from $\mathcal{N}(0, \eta^2_e\operatorname{Id}_{d})$, where the $\eta^2_e$ are sampled iid from $\operatorname{Unif}(0.1, b_2)$. \end{itemize} The parameters $b_1$ and $b_2$ are selected in such a way that the expected confounding strength $c_1\coloneqq\mathbb{E}(\sigma^2_g)$ and variance signal strength $c_2\coloneqq\mathbb{E}(\abs{\eta^2_e-\eta^2_f})$ are as dictated by the respective experiment. Due to the uniform distribution this reduces to \begin{equation*} b_1=2c_1-0.1\quad\text{and}\quad b_2=3c_2+0.1. \end{equation*} \end{mdframed} The results indicate that in terms of the MD index the competitors all become worse as the confounding strength increases. All competing ICAs systematically estimate an incorrect unmixing matrix. \coroICA on the other hand only shows a very small loss in precision as confounding increases; the small loss is expected due to the decreasing signal-to-noise ratio.
In terms of MCIS, the behavior is analogous but slightly less well resolved; with increasing confounding strength the unmixing estimation of all competing ICAs is systematically biased, resulting in a bad separation of the sources and high MCIS scores both out-of-sample and in-sample. \begin{figure}[h] \includegraphics{simulation_experiment1_unscaled_absolute_instability.pdf} \caption{Results of the simulation experiment described in Section~\ref{sec:sim_confounding}. Plot shows performance measures (MD: small implies close to truth; MCIS: small implies stable) for fixed signal strength and various confounding strengths. The difference between the competing ICAs and \coroICA is more prominent for higher confounding strengths where the estimates of the competing ICAs are increasingly different from the true unmixing matrix and the sources become increasingly unstable.}\label{fig:sim_01} \end{figure} \subsubsection{Efficiency in Absence of Group Confounding}\label{sec:sim_robust} For this simulation experiment, we sample data according to \hyperlink{dat:datasimul}{Data~Set~1} and choose to simulate $n=2\cdot 10^4$ (dimension $d=22$) samples from $m=10$ groups where each group contains $n/m=2\cdot 10^3$ observations. Within each group, we then select a random partition consisting of $\abs{\mathcal{P}_g}=10$ subsets while ensuring that these have the same size on average. This time, to illustrate performance in the absence of confounding, we fix the confounding strength to $c_1=0$ and consider the behavior of \coroICA (applied to half of the groups with an equally spaced grid of $10$ partitions per group) for different signal strengths $c_2\in\{0.025, 0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4\}$. The results for $1000$ repetitions are shown in Figure~\ref{fig:sim_02}. Again, choiICA\,(var) is applied with the same partition size. The results indicate that overall \coroICA performs competitively in the confounding-free case.
In particular, there is no drastic loss in the performance of \coroICA as compared to choiICA\,(var) in settings where the data follows the ordinary ICA model. The slight advantage compared to fastICA in this setting is due to the signal type, which favors ICA methods that focus on variance signals. \begin{figure} \includegraphics{simulation_experiment2_unscaled_absolute_instability.pdf} \caption{Results of the simulation experiment described in Section~\ref{sec:sim_robust}. Plot shows performance measures (MD: small implies close to truth; MCIS: small implies stability) for data generated without confounding and for various signal strengths. These results are reassuring, as they indicate that when applied to data that follows the ordinary ICA model, \coroICA still performs competitively with competing ICAs even though it allows for a richer model class.}\label{fig:sim_02} \end{figure} \subsubsection{Comparison with Other Noisy ICA Procedures}\label{sec:GARCH} To get a better understanding of how our proposed ICA performs for different signal and noise types, we compare it on simulated data as described in \hyperlink{dat:datasimul2}{Data~Set~2}. We illustrate the different behavior with respect to the different types of signal by applying all three of our proposed \coroICA procedures (\coroICAvar, \coroICATD and \coroICAvarTD) and compare them to the corresponding choiICA variants which do not adjust for confounding (choiICA\,(var), choiICA\,(TD) and choiICA\,(var\,\&\,TD)). While all \coroICA procedures can deal with any type of stationary noise, choiICA\,(TD) only works for time-independent noise and choiICA\,(var) and choiICA\,(var\,\&\,TD) cannot handle any type of noise at all (see Table~\ref{table:ica_methods}). Additionally, we also compare with fastICA to assess its performance in the various noise settings. The results are depicted in Figure~\ref{fig:GARCH}.
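For intuition about the changing-variance sources used in \hyperlink{dat:datasimul2}{Data~Set~2}, the following sketch simulates a univariate GARCH(1,1) process whose conditional variance evolves over time (the parameter values here are hypothetical placeholders; the settings actually used are given in Supplement~\ref{sec:appendix_simu}):

```python
import numpy as np

def garch11(n, omega=0.1, alpha=0.2, beta=0.7, rng=None):
    """GARCH(1,1) source: x_t = sigma_t * eps_t with
    sigma_t^2 = omega + alpha * x_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = rng if rng is not None else np.random.default_rng()
    x = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)  # start at stationary variance
    for t in range(n):
        x[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * x[t] ** 2 + beta * sigma2
    return x

x = garch11(10_000, rng=np.random.default_rng(3))
# The marginal variance stays near omega / (1 - alpha - beta) = 1,
# while the conditional variance fluctuates over time.
```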
\begin{mdframed}[roundcorner=5pt, frametitle={\hypertarget{dat:datasimul2}{Data Set 2}: GARCH simulation}] For this simulation we consider different settings of the confounded mixing model \begin{equation*} X_t=A S_t + H_t. \end{equation*} More precisely, we consider the following three different GARCH type signals: (i)~changing variance, (ii)~changing time-dependence, and (iii)~both changing variance and changing time-dependence. For each of these signal types we consider two types of confounding (noise) terms: (a) time-independent and (b) time-dependent auto-regressive noise. For both we construct $d$ independent processes $\tilde{H}^1,\dots,\tilde{H}^d$ and then combine them with a random mixing matrix $C$ as follows \begin{equation*} H_t=C\cdot\tilde{H}_t. \end{equation*} Full details are given in Supplement~\ref{sec:appendix_simu}. \end{mdframed} \begin{figure}[h] \centering \includegraphics{simulation_experiment345678.pdf} \caption{Results of the simulation experiment described in Section~\ref{sec:GARCH} and \protect\hyperlink{dat:datasimul2}{Data~Set~2}. Plots show performance (MD: small implies close to truth) for data generated with auto-regressive (AR) or iid noise and for var, TD, and var \& TD signal as described in \protect\hyperlink{dat:datasimul2}{Data~Set~2}. \coroICAvarTD is able to estimate the correct mixing in all of the considered settings, while others break whenever the more restrictive signal/noise assumptions are not met.} \label{fig:GARCH} \end{figure} In all settings the most general method \coroICAvarTD is able to estimate the correct mixing. The two signal specific methods \coroICATD and \coroICAvar are also able to accurately estimate the mixing in settings where a corresponding signal exists. It is also worth noting that they slightly outperform \coroICAvarTD in these settings. In contrast, when comparing with the choiICA variants, \coroICA is in general able to outperform the corresponding method. 
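The generative structure of \hyperlink{dat:datasimul2}{Data~Set~2} can be sketched in a few lines. The block below is a simplified illustration only: block-wise variance changes stand in for the actual GARCH recursions, and all dimensions and parameter values are placeholders rather than the exact settings of Supplement~\ref{sec:appendix_simu}.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 5000  # placeholder dimension and sample size

def variance_changing_signal(n, rng, block=500):
    """Toy variance signal: white noise whose variance switches between
    randomly drawn levels (a crude stand-in for a GARCH process)."""
    levels = rng.uniform(0.5, 4.0, size=n // block + 1)
    return np.sqrt(np.repeat(levels, block)[:n]) * rng.standard_normal(n)

def ar_noise(n, rng, phi=0.8):
    """Time-dependent auto-regressive confounding process."""
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = phi * h[t - 1] + rng.standard_normal()
    return h

S = np.stack([variance_changing_signal(n, rng) for _ in range(d)])  # sources
H_tilde = np.stack([ar_noise(n, rng) for _ in range(d)])  # indep. noise procs.
A = rng.standard_normal((d, d))  # mixing matrix
C = rng.standard_normal((d, d))  # confounding mixing matrix
X = A @ S + C @ H_tilde          # observed data: X_t = A S_t + H_t
```

Swapping `ar_noise` for plain `rng.standard_normal(n)` yields the time-independent noise variant (a) of the data set.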
Only in the setting of a changing time-dependence with time-independent noise is choiICA\,(TD) able to slightly outperform \coroICATD. \subsubsection{Summary of the Performance of \coroICA} In summary, \coroICA performs well on a larger model class consisting of both the group-wise confounded as well as the confounding-free case. An advantage over all competing ICAs is gained in confounded settings (as shown in Section~\ref{sec:sim_confounding}) while there is at most a small disadvantage in the unconfounded case (cf. Section~\ref{sec:sim_robust}). This suggests that whenever the data is expected to contain at least small amounts of stationary noise or confounding, one may be better off using \coroICA as the richer model class will guard against wrong results. The results in Section~\ref{sec:GARCH} further underline the robustness of our proposed method to various types of noise (and signals) for which other methods break. Again, even in settings that satisfy the assumptions of the more tailored methods, \coroICA remains competitive. \subsection{EEG Experiments}\label{sec:eeg_exps} ICA is often applied in the analysis of EEG data. Here, we illustrate the potential benefit and use of \coroICA for this task. Specifically, we consider a multi-subject EEG experiment as depicted in Figure~\ref{fig:eegscenario}. The goal is to find a single mixing matrix that separates the sources simultaneously on all subjects. Our proposed model allows the EEG recordings for each subject to have a different but stationary noise term $H$.
\begin{figure}[h] \begin{tikzpicture}[scale=0.95] \node (A) at(4.1,6) {\textbf{subject a}}; \node (Aform) at(4.1,5.5) {$X_a = AS_a + H_a$}; \node (Abrain) at(4.2,-.2){{\reflectbox{\includegraphics[keepaspectratio,width=4.5cm]{brain}}}}; \node[nodec] (As1) at(2.6,-.7) {$S^1_a$}; \node[nodec] (As2) at(4.1,.2) {$S^2_a$}; \node[nodec] (As3) at(5.9,-.4) {$S^3_a$}; \node[node] (Ax1) at(2.6,2.5) {$X^1_a$}; \node[node] (Ax2) at(4.25,2.5) {$X^2_a$}; \node[node] (Ax3) at(5.9,2.5) {$X^3_a$}; \node[nodeh] (Ah1) at(2.6,4.5) {$H^1_a$}; \node[nodeh] (Ah2) at(4.25,4.5) {$H^2_a$}; \node[nodeh] (Ah3) at(5.9,4.5) {$H^3_a$}; \draw[->] (Ah1) -- (Ax1); \draw[->] (Ah1) -- (Ax2); \draw[->] (Ah2) -- (Ax2); \draw[->] (Ah2) -- (Ax3); \draw[waves,segment angle=6] (As1) -- (Ax1); \draw[waves,segment angle=4.5] (As1) -- (Ax2); \draw[waves,segment angle=3] (As1) -- (Ax3); \draw[waves,segment angle=4.5] (As2) -- (Ax1); \draw[waves,segment angle=6] (As2) -- (Ax2); \draw[waves,segment angle=4.5] (As2) -- (Ax3); \draw[waves,segment angle=3] (As3) -- (Ax1); \draw[waves,segment angle=4.5] (As3) -- (Ax2); \draw[waves,segment angle=6] (As3) -- (Ax3); \node (B) at(10,6) {\textbf{subject b}}; \node (Bform) at(10,5.5) {$X_b = AS_b + H_b$}; \node (Bbrain) at(10.1,-.2){{\reflectbox{\includegraphics[keepaspectratio,width=4.5cm]{brain}}}}; \node[nodec] (Bs1) at(8.5,-.7) {$S^1_b$}; \node[nodec] (Bs2) at(10,.2) {$S^2_b$}; \node[nodec] (Bs3) at(11.8,-.4) {$S^3_b$}; \node[node] (Bx1) at(8.5,2.5) {$X^1_b$}; \node[node] (Bx2) at(10.15,2.5) {$X^2_b$}; \node[node] (Bx3) at(11.8,2.5) {$X^3_b$}; \node[nodeh] (Bh1) at(8.5,4.5) {$H^1_b$}; \node[nodeh] (Bh2) at(10.15,4.5) {$H^2_b$}; \node[nodeh] (Bh3) at(11.8,4.5) {$H^3_b$}; \draw[->] (Bh1) -- (Bx1); \draw[->] (Bh2) -- (Bx1); \draw[->] (Bh2) -- (Bx2); \draw[->] (Bh2) -- (Bx3); \draw[->] (Bh3) -- (Bx2); \draw[->] (Bh3) -- (Bx3); \draw[waves,segment angle=6] (Bs1) -- (Bx1); \draw[waves,segment angle=4.5] (Bs1) -- (Bx2); \draw[waves,segment angle=3] (Bs1) -- (Bx3); 
\draw[waves,segment angle=4.5] (Bs2) -- (Bx1); \draw[waves,segment angle=6] (Bs2) -- (Bx2); \draw[waves,segment angle=4.5] (Bs2) -- (Bx3); \draw[waves,segment angle=3] (Bs3) -- (Bx1); \draw[waves,segment angle=4.5] (Bs3) -- (Bx2); \draw[waves,segment angle=6] (Bs3) -- (Bx3); \node (Morebrain) at(15.1,-.2) {{\Huge $\cdots$}}; \node (coroICA) at(15.1,1.3) {{$\coroICA(X_a, X_b, ...) \approx A$}}; \end{tikzpicture} \caption{Illustration of a multi-subject EEG recording. For each subject, EEG signals $X$ are recorded which are assumed to be corrupted by subject-specific (but stationary) noise terms $H$. The goal is to recover a single mixing matrix $A$ that separates signals well across all subjects.}\label{fig:eegscenario} \end{figure} We illustrate the applicability of our method to this setting based on two publicly available EEG data sets. \begin{mdframed}[roundcorner=5pt, frametitle={\hypertarget{dat:covertattention}{Data Set 3}: CovertAttention data}] This data set is due to \citet{treder2011} and consists of EEG recordings of 8 subjects performing multiple trials of covertly shifting visual attention to one out of 6 cued directions. The data set contains recordings of \begin{itemize} \item 8 subjects, \item for each subject there exist 6 runs with 100 trials, \item each recording consists of 60 EEG channels recorded at 1000 Hz sampling frequency, while we work with the publicly available data that is downsampled to 200 Hz. \end{itemize} Since visual inspection of the data revealed data segments with large artifacts, and details about how the publicly available data was preprocessed were unavailable to us, we removed outliers and high-pass filtered the data at 0.5 Hz. In particular, along each dimension we set those values that deviate more than 10 times the median absolute distance from the median along that dimension to this median.
We further preprocess the data by re-referencing to common average reference (car) and projecting onto the orthogonal complement of the null component. For our unmixing estimations, we use the entire data, i.e., including intertrial breaks. For classification experiments (cf.\ Section~\ref{sec:EEG.classification}) we use, in line with \citet{treder2011}, the 8--12 Hz bandpass-filtered data during the 500--2000 ms window of each trial, and use the log-variance as bandpower feature~\citep{lotte2018review}. The classification analysis is restricted to valid trials (approximately 311 per subject) with the desired target latency as described in \citet{treder2011}. \end{mdframed} Results on the CovertAttention \hyperlink{dat:covertattention}{Data~Set~3} are presented here, while the results of the analogous experiments on the BCICompIV2a \hyperlink{dat:bcicomp}{Data~Set~4} are deferred to Supplement~\ref{sec:EEG_results}. For both data sets, we compare the recovered sources of \coroICA with those recovered by competing ICA methods. Since ground truth is unknown we report comparisons based on the following three criteria: \begin{description} \item[stability and independence]\ \\ We use MCIS (cf.\ Section~\ref{sec:scores}) to assess the stability and independence of the recovered sources both in- and out-of-sample. \item[classification accuracy]\ \\ For both data sets there is label information available that associates certain time windows of the EEG recordings with the task the subjects were performing at that time. Based on the recovered sources, we build a classification pipeline relying on feature extraction and classification techniques that are common in the field~\citep{lotte2018review}. The achieved classification accuracy serves as a proxy of how informative and suitable the extracted signals are. 
\item[topographies]\ \\ For a qualitative assessment, we inspect the topographic maps of the extracted sources, as well as the corresponding power spectra and a raw time-series chunk. This is used to illustrate that the sources recovered by \coroICA do not appear random or implausible for EEG recordings and are qualitatively similar to what is expected from other ICAs. Furthermore, we provide an overview of all components obtained on \hyperlink{dat:covertattention}{Data~Set~3} by SOBI, fastICA, and \coroICA in the Supplementary Section~\ref{sec:alltopos}, where components are well resolved when the corresponding topographic map and activation map are close to each other (cf.\ Section~\ref{sec:scores}). \end{description} \subsubsection{Stability and Independence}\label{sec:stability} We aim to probe stability not only in-sample but also to verify the expected increase in stability when applying the unmixing matrix to data of new unseen subjects, i.e., to new groups of samples with different confounding specific to that subject. In order to assess stability and independence of the recovered sources in terms of the MCIS both in- and out-of-sample and for different amounts of training samples, we proceed by repeatedly splitting the data into a training and a test data set. More precisely, we construct all possible splits into training and test subjects for any given number of training subjects. For each pair of training and test set, we fit an unmixing matrix using \coroICA and all competing methods described in Section~\ref{sec:competing_methods}. We then compute the MCIS on the training and test data for each method separately and collect the results of each training-test split for each number of training subjects.
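The splitting procedure just described can be summarized as follows; `fit_unmixing` and `mcis` are hypothetical stand-ins for the actual estimation method and the MCIS score of Section~\ref{sec:scores}, which are not reproduced here.

```python
from itertools import combinations

def stability_curve(subjects, fit_unmixing, mcis):
    """Evaluate all train/test subject splits for every training-set size.

    subjects: dict mapping subject id -> that subject's data.
    fit_unmixing(train_data) -> unmixing matrix (placeholder).
    mcis(V, data) -> instability score, small is stable (placeholder).
    """
    results = []
    ids = sorted(subjects)
    for k in range(1, len(ids)):               # number of training subjects
        for train in combinations(ids, k):     # all possible splits
            test = [s for s in ids if s not in train]
            V = fit_unmixing([subjects[s] for s in train])
            results.append({
                "n_train": k,
                "in_sample": sum(mcis(V, subjects[s]) for s in train) / k,
                "out_of_sample": sum(mcis(V, subjects[s]) for s in test)
                                 / len(test),
            })
    return results
```

Aggregating `results` by `n_train` yields exactly the curves shown in the stability figures.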
Results obtained on the CovertAttention data set (with equally spaced partitions of $\approx$15 seconds length) are given in Figure~\ref{fig:instability_covert} and the results for the BCICompIV2a data set (with equally spaced partitions of $\approx$15 seconds length) are shown in Supplement~\ref{sec:appendix_stability}, Figure~\ref{fig:instability_bci}. For both data sets the results are qualitatively similar and support the claim that the unmixing obtained by \coroICA is more stable when transferred to new unseen subjects. While for the competing ICAs the instability on held-out subjects does not follow a clear decreasing trend with increasing number of training subjects, \coroICA can successfully make use of additional training subjects to learn a more stable unmixing matrix. \begin{figure}[h] \centering \includegraphics{{scores_coroICA_base_clyde.False}.pdf} \caption{Experimental results for comparing the stability of sources (MCIS: small implies stable) trained on different numbers of training subjects (cf.\ Section~\ref{sec:stability}), here on the CovertAttention \protect\hyperlink{dat:covertattention}{Data~Set~3}, demonstrating that \coroICA, in contrast to the competing ICA methods, can successfully incorporate more training subjects to learn more stable unmixing matrices when applied to new unseen subjects.} \label{fig:instability_covert} \end{figure} \begin{figure}[h] \centering \includegraphics{{scores_coroICA_base_clyde.True}.pdf} \caption{Experimental results for comparing the stability of sources of the competing methods relative to the stability obtained by \coroICA (MCIS fraction: above $1$ implies less stable than \coroICA) trained on different numbers of training subjects (cf.\ Section~\ref{sec:stability}), here on the CovertAttention \protect\hyperlink{dat:covertattention}{Data~Set~3}, demonstrating that \coroICA can successfully incorporate more training subjects to learn more stable unmixing matrices when applied to new unseen subjects.} 
\label{fig:instability_fraction_covert} \end{figure} Due to the characteristics and low signal-to-noise ratio in EEG recordings, the evaluation based on the absolute MCIS score is less well resolved than what we have seen in the simulations before. For this reason we additionally provide a more focused evaluation by considering the MCIS fraction: the fraction of the MCIS achieved on a subject by the respective competitor method divided by the MCIS achieved on that subject by \coroICA when trained on the same subjects. Thus, this score compares MCIS on a per subject basis, where values greater than $1$ indicate that the respective competing ICA method performed worse than \coroICA. Figure~\ref{fig:instability_fraction_covert} shows the results on the CovertAttention \hyperlink{dat:covertattention}{Data~Set~3} confirming that \coroICA can successfully incorporate more training subjects to derive a better unmixing of signals. \subsubsection{Classification based on Recovered Sources}\label{sec:EEG.classification} While the results in the previous section indicate that \coroICA can lead to more stable separations of sources in EEG than the competing methods, in scenarios with an unknown ground truth the stability of the recovered sources cannot serve as the sole determining criterion for assessing the quality of recovered sources. In addition to asking whether the recovered sources are stable and independent variance signals, we hence also need to investigate whether the sources extracted by \coroICA are in fact reasonable or meaningful. In the ``America's Got Talent Duet Problem'' (cf.\ Example~\ref{ex:toy_example}) this means that each of the recovered sources should only contain the voice of one (independent) singer (plus some confounding noise that is not the other singer). For EEG data, this assessment is not as easy. 
Here, we approach this problem from two angles: (a) in this section we show that the recovered sources are informative and suitable for common EEG classification pipelines, (b) in Section~\ref{sec:topographic_maps} we qualitatively assess the extracted sources based on their power spectra and topographic maps. In both data sets there are labeled trials, i.e., segments of data during which the subject covertly shifts attention to one of six cues (cf.\ \hyperlink{dat:covertattention}{Data~Set~3}) or performs one of four motor imagery tasks (cf.\ \hyperlink{dat:bcicomp}{Data~Set~4}). Based on these, one can try to predict the trial label given the trial EEG data. To mimic a situation where the sources are transferred from other subjects, we assess the informativeness of the extracted sources in a leave-k-subjects-out fashion as follows. We estimate an unmixing matrix on data from all but $k$ subjects, compute bandpower features for each extracted signal and for each trial (as described in \hyperlink{dat:covertattention}{Data~Set~3} and~\hyperlink{dat:bcicomp}{Data~Set~4}), and on top of those we train an ensemble of $200$ bootstrapped shrinkage linear discriminant analysis classifiers where each is boosted by a random forest classifier on the wrongly classified trials. This pipeline (signal unmixing, bandpower-feature computation, trained ensemble classifier), is then used to predict the trials on the $k$ held-out subjects. The results are reported in Figure~\ref{fig:classification_covert} and Supplement~\ref{sec:appendix_EEG.classification}, Figure~\ref{fig:classification_bci} which show for each number of training subjects, the accuracies achieved on the respective held-out subjects when using the unmixing obtained on the remaining subjects by either \coroICA or one of the competitor methods. 
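The feature-extraction step of this pipeline (unmix each trial, bandpass to 8--12 Hz, take the log-variance per source) can be sketched in a few lines. The FFT-mask bandpass below is a simplified stand-in for the actual filtering, and the downstream boosted classifier ensemble is omitted.

```python
import numpy as np

def bandpower_features(trials, V, fs=200.0, band=(8.0, 12.0)):
    """Compute log-variance bandpower features.

    trials: array of shape (n_trials, n_channels, n_samples).
    V: unmixing matrix estimated on the training subjects.
    Returns an (n_trials, n_sources) feature matrix.
    """
    feats = []
    for X in trials:
        S = V @ X                                      # recovered sources
        Sf = np.fft.rfft(S, axis=-1)
        f = np.fft.rfftfreq(S.shape[-1], d=1.0 / fs)
        Sf[:, (f < band[0]) | (f > band[1])] = 0.0     # keep 8-12 Hz only
        S_band = np.fft.irfft(Sf, n=S.shape[-1], axis=-1)
        feats.append(np.log(np.var(S_band, axis=-1)))  # log bandpower
    return np.asarray(feats)
```

These features would then be fed to the classifier ensemble described above (shrinkage LDA base classifiers boosted by random forests), trained on the training subjects only.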
The results on both data sets support the claim that the sources recovered by \coroICA are not only stable but in addition also capture meaningful aspects of the data that enable competitive classification accuracies in fully-out-of-sample classification. The mean improvement in classification accuracy of \coroICA over the other methods increases with increasing number of training subjects. This behavior is expected since it is difficult to disambiguate signal from subject-specific confounding for few training subjects, while \coroICA is expected to learn an unmixing which better adjusts for the confounding with more training subjects. \begin{figure}[h] \includegraphics{{classification_data_.coroICA_base_clyde}.pdf} \caption{Classification accuracies on held-out subjects (cf.\ Section~\ref{sec:EEG.classification}), here on the CovertAttention \protect\hyperlink{dat:covertattention}{Data~Set~3}. Gray regions indicate a 95\% confidence interval of random guessing accuracies.} \label{fig:classification_covert} \end{figure} It is worth noting that these classification results depend heavily on the employed classification pipeline subsequent to the source separation. Here, our goal is only to show that \coroICA does indeed separate the data into informative sources. In practice, and when only classification accuracy matters, one might also consider using a label-informed source separation \citep{dahne2014spoc}, employ common spatial patterns \citep{koles1990spatial} or use decoding techniques based on Riemannian geometry \citep{barachant2012multiclass}. \subsubsection{Topographic Maps}\label{sec:topographic_maps} The components that \coroICA extracts from EEG signals are stable (cf.\ Section~\ref{sec:stability}) and meaningful in the sense that they contain information that enables classification of trial labels, which is a common task in EEG studies (cf.\ Section~\ref{sec:EEG.classification}). 
In this section, we complement the assessment of the recovered sources by demonstrating that the results obtained by \coroICA lead to topographies, activation maps, power spectra and raw time-series that are similar to what is commonly obtained during routine ICA analyses of EEG data when the plausibility and nature of ICA components is to be judged. Topographies are common in the EEG literature to depict the relative projection strength of extracted sources to the scalp sensors. More precisely, the column-vector $a_j$ of $A = V^{-1}$ that specifies the mixing of the $j$-th source component is visualized as follows. A sketched top view of the head is overlaid with a heatmap where the value at each electrode's position is given by the corresponding entry in $a_j$. These topographies are indicative of the nature of the extracted sources; for example, the dipolarity of source topographies is a criterion invoked to identify cortical sources~\citep{delorme2012}, or the topographies reveal that the source mainly picks up changes in the electromagnetic field induced by eye movements. Another way to visualize an extracted source is an activation map, which is commonly obtained by depicting the vector $\widehat{\operatorname{Cov}}(X)v_j^\top$ (where $v_j$ is the $j$-th row of the unmixing matrix $V$) and shows for each electrode how the signal observed at that electrode covaries with the signal extracted by $v_j$~\citep{haufe2014interpretation}. Besides inspecting the raw time-series data, another criterion invoked to separate cortical from muscular components is the log power spectrum.
For example, a monotonic increase in spectral power starting at around 20 Hz is understood to indicate muscular activity~\citep{goncharova2003emg} and peaks in typical EEG frequency ranges are used to identify brain-related components.\footnote{These are commonly employed criteria which are also advised in the eeglab tutorial \citep[\url{https://sccn.ucsd.edu/wiki/Chapter_09:_Decomposing_Data_Using_ICA}]{delorme2004eeglab} and the neurophysiological biomarker toolbox wiki \citep[\url{https://www.nbtwiki.net/doku.php?id=tutorial:how_to_use_ica_to_remove_artifacts}]{hardstone2012detrended}.} In Figure~\ref{fig:topomap}, we depict the aforementioned criteria for three exemplary components extracted by \coroICA on the CovertAttention \hyperlink{dat:covertattention}{Data~Set~3}. \begin{figure} \includegraphics{{Topoplots}.pdf} \caption{Visualization of exemplary EEG components recovered on the CovertAttention \protect\hyperlink{dat:covertattention}{Data~Set~3}. On the left, the topographies of three components are shown where the mixing matrix is the inverse of the unmixing matrix obtained by SOBI ($A_\text{SOBI}$), the unmixing matrix obtained by fastICA ($A_\text{fastICA}$) and that of \coroICAvar ($A_\text{coroICA}$). On the right we depict, for a randomly chosen subject, the activation maps (cf.\ Section~\ref{sec:topographic_maps} and \ref{sec:scores}), the log power spectra, and randomly chosen chunks of the raw time-series data corresponding to the respective \coroICAvar components.
Components extracted by \coroICAvar are qualitatively similar to those of the commonly employed ICA procedures; see Section~\ref{sec:topographic_maps} for details.} \label{fig:topomap} \end{figure} Following the discussion in Section~\ref{sec:scores} we show the activation maps as \[\operatorname{DiffX}(v_j^\top) = \sum_{M\in\mathcal{M}^*} \operatorname{sign}(v_j M v_j^\top)\, M v_j^\top,\] which captures variance-changing signal and allows one to assess the quality of a recovered source by comparison to the topographic map $a_j$ (cf.\ Equation~\ref{eq:activation_map}). Here, the idea is to demonstrate that \coroICA components are qualitatively similar to components extracted by commonly employed SOBI-ICA or fastICA. Therefore, we choose to display one example of an ocular component (2\textsuperscript{nd}, where the topography is indicative of eye movement), a cortical component (7\textsuperscript{th}, where the dipolar topography, the typical frequency peak at around 8--12 Hz, and the amplitude modulation visible in the raw time-series are indicative of the cortical nature), and an artifactual component (51\textsuperscript{st}, where the irregular topography and the high frequency components indicate an artifact). For comparison, we additionally show for each component the topographies of the components extracted by SOBI-ICA or fastICA by matching the recovered source which most strongly correlates with the one extracted by \coroICA. The components extracted by \coroICA closely resemble the results one would obtain from a commonly employed ICA analysis on EEG data. For completeness, we provide an overview of all components extracted on \hyperlink{dat:covertattention}{Data~Set~3} by SOBI, fastICA, and \coroICAvar in the Supplementary Section~\ref{sec:alltopos}.
Components are well resolved when the corresponding topographic map and activation map are close to each other (cf.\ Section~\ref{sec:scores}), which, by visual inspection, appears to be more often the case for \coroICA than for the competing methods. \section{Conclusion} In this paper, we propose a method for recovering independent sources corrupted by group-wise stationary confounding. It extends ordinary ICA to an easily interpretable model, which we believe is relevant for many practical problems as is demonstrated in Section~\ref{ex:climate} for climate data and Section~\ref{sec:eeg_exps} for EEG data. We give explicit assumptions under which the sources are identifiable in the population case (cf.\ Section~\ref{sec:identifiable}). Moreover, we introduce a straightforward algorithm for estimating the sources based on the well-understood concept of approximate joint matrix diagonalization. As illustrated in the simulations in Section~\ref{sec:simulations}, this estimation procedure performs competitively even for data from an ordinary ICA model, while additionally being robust and able to adjust for group-wise stationary confounding. For real data, we show that the \coroICA model indeed performs reasonably on EEG data and leads to improvements in comparison to commonly employed approaches, while at the same time preserving an enhanced interpretation of the recovered sources. \acks The authors thank Vinay Jayaram, Nicolai Meinshausen, Jonas Peters, Gian Thanei, the action editor Kenji Fukumizu, and anonymous reviewers for helpful discussions and constructive comments. NP and PB were partially supported by the European Research Commission grant 786461 CausalStats - ERC-2017-ADG. \newpage \renewcommand
\section{\label{}} In nature, an inversion of the occupation probabilities of quantum states is clearly quite artificial since the standard thermal situation leads to the opposite: a Boltzmann distribution over the energy levels, where lower levels are populated exponentially more strongly than higher ones. Typically such an inversion is achieved by pumping the system, e.g. electrically or optically. Once achieved, an inversion allows an initially very small field to be amplified by several orders of magnitude, resulting in lasing. This leads to the question: Could it be possible to use a temperature difference not only to drive a steam engine, but also as a source for lasing? In this letter, we tackle this question by considering a nanomachine that emits phonons and study whether applying a heat gradient results in an amplification of the emission, i.e., in phonon lasing. The concept of a nanomachine that emits phonons rather than photons has been on the minds of many researchers \cite{grudinin10, beardsley10, carmele17, vahala09, mendona10, kepesidis13, mahboob13}. In such a system it is interesting to exploit the possibility of amplification of the phonon wave, which, in analogy to lasing, could lead to the construction of a phonon-laser or saser \cite{zavtrak1997saser,tilstra2007optically}. Such a phonon-laser could be used for a new type of highly precise nondestructive measurement \cite{khurgin10}. Several proposals for phonon lasing have been put forward \cite{mahboob13,vahala09,sander12,kent10,vahala10} and first implementations of phonon lasing using quantum-well structures have been reported \cite{maryam2013dynamics}. In contrast to these studies, in our proposal we want to achieve phonon lasing in a nanomachine using a heat gradient. Our nanomachine is composed of three coupled quantum systems (QSs), which are subject to a heat gradient as displayed in Fig.~\ref{fig:model_system}(a). The active medium is given by the middle three-level system (QS~M).
The central quantum system interacts with two-level subsystems (QS~L/R) at each side, which act as energy filters. Such filtering is necessary because a direct coupling to the two heat baths with different temperatures would lead to a thermal occupation of the system and not to inversion. Each filter is coupled to a heat bath, where the left bath has a significantly higher temperature ($T_{\mathrm{H}}$) than the right one ($T_{\mathrm{C}}$). As a consequence of the temperature difference, a flow of excitations takes place. We will show that the exclusive thermalization of the resonant transitions may lead to the crucial inversion in the upper two levels of the central system for certain parameters. The inversion can then result in the emission of coherent phonons at the central quantum system. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Fig1.pdf} \caption{(a) Sketch of the system. (b) Occupations $n_i$ of the middle QS as a function of time for temperatures $T_{\mathrm{H}}=400$~K and $T_{\mathrm{C}}=100$~K as well as the inversion $I=n_3-n_2$. (c) Inversion as a function of energy mismatch $\Delta$. The dashed line marks the ideal inversion from Eq.~\eqref{eq:idealI}. \label{fig:model_system}} \end{figure} The Hilbert space of the nanomachine is spanned by the product of the basis states in the QSs as depicted in Fig.~\ref{fig:model_system}(a). The relative energies of the states in the two-level systems are parametrized by $\delta_{\mathrm{L/R}}$ and in the central three-level system by $\delta_{\mathrm{M_{+}/M_{-}}}$.
The system Hamiltonian reads \begin{equation} \label{ham_loc} \hat{H}_{\mathrm{sys}} = \hat{h}^{(\mathrm{L})} + \hat{h}^{(\mathrm{M})} + \hat{h}^{(\mathrm{R})}, \end{equation} where $\hat{h}^{(\mathrm{L})/(\mathrm{M})/(\mathrm{R})}$ are the Hamiltonians of the QSs with \begin{equation} \label{ham_L} \hat{h}^{\mathrm{(L)}} = \sum_{n}\epsilon_{n}^{(\mathrm{L})}\hat{P}_{nn}^{\mathrm{(L)}}\otimes \hat{1}^{\mathrm{(M)}}\otimes\hat{1}^{(\mathrm{R})} \,. \end{equation} Here, $\epsilon^{(\mathrm{L})}_{n}$ are the energies, $\hat{P}^{(\mathrm{L})}_{nm} = |n\rangle^{(\mathrm{L})} {}^{(\mathrm{L})} \langle m|$ is the projection operator and $\hat{1}^{(\mathrm{M})/(\mathrm{R})}$ denote unity operators in the space of the respective subsystem. The Hamiltonians $\hat{h}^{(\mathrm{M})}$ and $\hat{h}^{(\mathrm{R})}$ are constructed in the same way. The QSs are coupled such that excitations can be exchanged between adjacent sites as denoted in Fig.~\ref{fig:model_system}(a), i.e. the middle system interacts with both sides, while the interaction between the left and right system is suppressed. The corresponding Hamiltonian reads \begin{eqnarray} \label{ham_int} \hat{H}_{\mathrm{int}}= \lambda_{\mathrm{ML}}\bigg( \hat{h}^{(\mathrm{LM})}\otimes \hat{1}^{\mathrm{(R)}} \bigg) +\lambda_{\mathrm{MR}}\bigg(\hat{1}^{\mathrm{(L)}}\otimes \hat{h}^{(\mathrm{MR})} \bigg) \end{eqnarray} where $\lambda_{\mathrm{ML/MR}}$ are the coupling parameters. The coupling \begin{equation} \hat{h}^{(\mathrm{LM})} = \hat{P}_{21}^{\mathrm{(L)}} \otimes \bigg( \hat{P}_{12}^{\mathrm{(M)}} + \hat{P}_{13}^{\mathrm{(M)}} + \hat{P}_{23}^{\mathrm{(M)}} \bigg) + \mathrm{h.c.} \end{equation} is given by the respective projection operators and analogously for $\hat{h}^{(\mathrm{MR})}$. The coupling is taken to be weak such that the energy contribution of the interaction is small compared to the energy contained in the system. Each of the two edge QSs is coupled locally to a heat bath of different temperature.
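The tensor-product structure of Eqs.~\eqref{ham_loc} and \eqref{ham_L} can be made concrete with Kronecker products. The snippet below is a bare-bones sketch, not the full model: it builds only the local part $\hat{H}_{\mathrm{sys}}$, using the level energies quoted later in the text ($\delta_{\mathrm{L}}=\delta_{\mathrm{M_+}}=30$~meV, $\delta_{\mathrm{R}}=\delta_{\mathrm{M_-}}=25$~meV).

```python
import numpy as np

def proj(i, j, dim):
    """Projection operator |i><j| in a `dim`-dimensional space."""
    P = np.zeros((dim, dim))
    P[i, j] = 1.0
    return P

def embed(op, pos, dims):
    """Embed `op`, acting on subsystem `pos`, into the full tensor-product
    space of subsystems with dimensions `dims` (here QS L, M, R)."""
    out = np.array([[1.0]])
    for k, d in enumerate(dims):
        out = np.kron(out, op if k == pos else np.eye(d))
    return out

dims = (2, 3, 2)                 # QS L (2 levels), QS M (3), QS R (2)
dL, dMm, dMp = 30.0, 25.0, 30.0  # level energies in meV, as in the text

hL = embed(dL * proj(1, 1, 2), 0, dims)
hM = embed(dMm * proj(1, 1, 3) + dMp * proj(2, 2, 3), 1, dims)
hR = embed(dMm * proj(1, 1, 2), 2, dims)  # delta_R = delta_M-
H_sys = hL + hM + hR             # Eq. (ham_loc): sum of local terms
```

The interaction Hamiltonian of Eq.~\eqref{ham_int} is assembled analogously from `proj` operators on neighboring subsystems.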
To describe the coupling, we make use of a Quantum Master Equation within a Lindblad form, which accounts for the non-equilibrium situation in our system \cite{breuer02}. For this, we set up the equation of motion for the density matrix $\hat{\rho}$ via \begin{equation} \frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar}[\hat{H}_{\mathrm{sys}}+\hat{H}_{\mathrm{int}},\hat{\rho}] + \hat{D}_{\mathrm{H}}(\hat{\rho}) + \hat{D}_{\mathrm{C}}(\hat{\rho})\,. \end{equation} $\hat{D}_{\mathrm{H}}$ and $\hat{D}_{\mathrm{C}}$ are the dissipators to the hot (left) and cold (right) heat bath, respectively, \begin{equation} \hat{D}_{\mathrm{H}}(\hat{\rho}) = \sum_{k = 1}^{2}\Gamma_{k}(T_{\mathrm{H}})\bigg(\hat{L}^{(\mathrm{L})}_{k}\hat{\rho}\hat{L}^{(\mathrm{L})\dagger}_{k} - \frac{1}{2}[\hat{L}^{(\mathrm{L})\dagger}_{k}\hat{L}^{(\mathrm{L})}_{k}, \hat{\rho}]_{+} \bigg) \end{equation} with the Lindblad operators \begin{equation*} \hat{L}^{(\mathrm{L})}_{1} = \hat{P}^{(\mathrm{L})}_{21}\otimes\hat{1}^{(\mathrm{M,R})},\qquad \hat{L}^{(\mathrm{L})}_{2} = \hat{P}^{(\mathrm{L})}_{12}\otimes\hat{1}^{(\mathrm{M,R})} \, . \end{equation*} The effectiveness of the heat coupling is given by the rates $\Gamma_{k}$, chosen via a phenomenological ansatz for the spectral density of an environment of Ohmic kind \cite{breuer02} with \begin{equation} \Gamma_{k}(T_{\mathrm{H}}) = \frac{\gamma}{1+\exp \big\{(-1)^{k-1}\delta_{\mathrm{L}}/k_{\mathrm{B}}T_{\mathrm{H}}\big\}}, \end{equation} containing the distribution function. The dissipator describing the coupling to the cold (right) heat bath $\hat{D}_{\mathrm{C}}(\hat{\rho})$ is analogous. \\ Our goal is to achieve an inversion between states $|3\rangle ^{(\mathrm{M})}$ and $|2\rangle ^{(\mathrm{M})}$, which later will be coupled to a phonon mode. The energies of typical acoustic phonons lie in the range of a few meV.
Setting the energy of the lower state $|1\rangle^{(\mathrm{M})}$ to zero, we accordingly choose $\delta_{\mathrm{M_{+}}} = 30$~meV and $\delta_{\mathrm{M_{-}}} = 25$~meV, thus $\delta_{\mathrm{M_{+}}} - \delta_{\mathrm{M_{-}}}= 5$~meV. The energies of the edge states are set to $\delta_{\mathrm{L}} = \delta_{\mathrm{M_{+}}}=30$~meV and $\delta_{\mathrm{R}} = \delta_{\mathrm{M_{-}}}=25$~meV. If not stated otherwise, the parameters are set to $\lambda=\lambda_{\mathrm{ML}} = \lambda_{\mathrm{MR}} = 0.03$~meV and $\gamma=\gamma_{\mathrm{H}} = \gamma_{\mathrm{C}} = 3$~ps$^{-1}$. As initial condition we assume that the whole system is in its ground state $|1\rangle ^{(\mathrm{L})}\otimes |1\rangle ^{(\mathrm{M})} \otimes|1\rangle ^{(\mathrm{R})}$. By solving the equation of motion we calculate the occupations $n_i = \langle \hat{P}^{(\mathrm{M})}_{ii}\rangle $ of the three states in QS~M. The time evolution of the occupations for a hot bath with temperature $T_{\mathrm{H}}=400$~K and a cold bath at $T_{\mathrm{C}}=100$~K is shown in Fig.~\ref{fig:model_system}(b). When the heating gradient is switched on at $t=0$, the occupation $n_1$ decreases in favour of $n_2$ and $n_3$. After a few hundred picoseconds a stationary state is reached. Due to the heat gradient and the energy filtering, an inversion in the central system is achieved (red solid line). It is interesting to compare the achieved inversion to the ideal case, in which the occupations $n_{i}$ follow the Boltzmann distribution \begin{equation} \label{P3toP1} \frac{n_{3}}{n_{1}} = \exp\bigg\{-\frac{\delta_{\mathrm{M+}}}{k_{\mathrm{B}}T_{\mathrm{H}}}\bigg\}, \qquad \frac{n_{2}}{n_{1}} = \exp\bigg\{-\frac{\delta_{\mathrm{M-}}}{k_{\mathrm{B}}T_{\mathrm{C}}}\bigg\}.
\end{equation} The inversion in the ideal case is then \begin{equation} \label{eq:idealI} I=n_{3} - n_{2} = \frac{A-B}{1+A+B}, \end{equation} where $A = \exp\big\{-\frac{\delta_{\mathrm{M+}}}{k_{\mathrm{B}}T_{\mathrm{H}}}\big\}$ and $B = \exp\big\{-\frac{\delta_{\mathrm{M-}}}{k_{\mathrm{B}}T_{\mathrm{C}}}\big\}$. Inserting $T_{\mathrm{H}}=400$~K and $T_{\mathrm{C}}=100$~K into the equation, we obtain an inversion of $I^{\mathrm{ideal}}= 0.247$, which is slightly above the numerically calculated value of $I^{\mathrm{num}}=0.244$. Hence, we can conclude that, through the filters, the heat gradient applied at the edge systems leads to an inversion in the middle system. The creation of an inversion depends sensitively on the energy filters, as seen when introducing an energy mismatch $\Delta$ between the filter systems and the middle system in Fig.~\ref{fig:model_system}(c). Note that the mismatch is applied such that the energy of state $|3\rangle^{(\mathrm{M})}$ increases to $\delta_{\mathrm{M+}}+\Delta$. With increasing $\Delta$ the inversion decreases dramatically, and for $\Delta > 8$~meV it changes sign, returning to a normal occupation. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Fig2.pdf} \caption{(a) Inversion as a function of the temperature difference $\Delta T$ for a constant temperature of the colder bath $T_{\mathrm{C}}=100$, $200$ and $300$~K. (b) Color map of the inversion as a function of the temperature difference and the mean temperature of the system $T_{\mathrm{sys}} = (T_{\mathrm{H}}+T_{\mathrm{C}})/2$. The symbols (white circle, pentagon, and triangle) correspond to the curves from the upper panel. The yellow line depicts zero inversion.} \label{fig:inv_temps} \end{figure} Having found that a heat gradient is able to create a population inversion, given that appropriate filters are applied, we now consider in Fig.~\ref{fig:inv_temps} the range of temperatures for which a population inversion is possible.
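The ideal inversion of Eq.~\eqref{eq:idealI} can be evaluated directly; a short Python sketch (with $k_{\mathrm{B}}$ in meV/K) reproduces $I^{\mathrm{ideal}}\approx 0.247$ for the parameters above, and shows that in this ideal Boltzmann limit the sign of $I$ changes exactly at $T_{\mathrm{H}}/T_{\mathrm{C}} = \delta_{\mathrm{M+}}/\delta_{\mathrm{M-}} = 1.2$, independent of the absolute temperatures (the full numerical threshold discussed below differs slightly):

```python
import math

kB = 0.08617            # Boltzmann constant in meV/K
dMp, dMm = 30.0, 25.0   # energies of |3> and |2> above |1>, in meV

def inversion(T_H, T_C):
    A = math.exp(-dMp / (kB * T_H))   # n3/n1, set by the hot bath (left filter)
    B = math.exp(-dMm / (kB * T_C))   # n2/n1, set by the cold bath (right filter)
    return (A - B) / (1 + A + B)      # Eq. (idealI), using n1 + n2 + n3 = 1

print(round(inversion(400.0, 100.0), 3))  # -> 0.247

# I > 0  <=>  dMp/T_H < dMm/T_C  <=>  T_H/T_C > dMp/dMm = 1.2
for T_C in (100.0, 200.0, 300.0):
    assert inversion(1.19 * T_C, T_C) < 0 < inversion(1.21 * T_C, T_C)
```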
Figure~\ref{fig:inv_temps}(a) shows the inversion when keeping the temperature of the cold bath $T_{\mathrm{C}}$ fixed and increasing the temperature of the hot bath by $\Delta T$ to $T_{\mathrm{H}} = T_{\mathrm{C}}+ \Delta T$. As expected, the inversion increases for higher temperature differences; however, an inversion is only reached above a certain threshold given by the ratio $T_{\mathrm{H}}/T_{\mathrm{C}} > 1.2$. This threshold depends not only on the temperature difference, but also on the absolute temperatures of the baths. This is also summarized in Fig.~\ref{fig:inv_temps}(b), where we show a color plot of the inversion as a function of the temperature difference of the heat baths (for a wider temperature range) and the mean temperature of the system defined as $T_{\mathrm{sys}}=(T_{\mathrm{H}}+T_{\mathrm{C}})/2$. This figure underlines the fact that a minimal temperature difference is needed to achieve inversion. The inversion is also sensitive to other system parameters, such as the system coupling parameter $\lambda$ or the heat bath coupling parameter $\gamma$. This dependence is analyzed in Fig.~\ref{fig:inv_lambda}. In Fig.~\ref{fig:inv_lambda}(a) we show the stationary inversion as a function of $\lambda$, keeping all other parameters fixed. The temperatures are $T_{\mathrm{H}}=400$~K and $T_{\mathrm{C}}=100$~K. For small $\lambda$ the inversion increases, because an efficient coupling is needed to transfer the excitations to the central system. Then, for a large range of $\lambda$, the inversion stays constant. For high values, the inversion decreases again, because the interaction between the systems is no longer weak and the new eigenstates become superpositions of the uncoupled states. In Fig.~\ref{fig:inv_lambda}(b) we present the impact on the inversion of the coupling of each two-level system to its environment. The parameter $\gamma$ determines how fast the edge systems, and subsequently the middle system, are thermalised.
With higher values of $\gamma$ the thermalisation becomes faster and, as a consequence, the inversion nearly reaches the ideal value $I^{\mathrm{ideal}}$ marked by the black line (cf. Eq.~\eqref{eq:idealI}). \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Fig3} \caption{Stationary values of the inversion as a function of (a) the QS coupling parameter $\lambda$ and (b) the heat bath coupling constant $\gamma$. The bath temperatures are $T_{\mathrm{H}}=400$~K and $T_{\mathrm{C}}=100$~K. The thin black line marks the ideal inversion. \label{fig:inv_lambda} } \end{figure} Having seen that the heat gradient induces an inversion, we now study whether this inversion can be utilized to drive a phonon laser. For the phonons, we assume a single acoustic phonon mode, which is confined by a cavity. Phonon cavities have been realized in semiconductor structures by superlattices \cite{trigo2002con,lanzillotti2007pho,kabuss2012opt,esmann2018top}. Denoting by $\hat{b},\hat{b}^{\dagger}$ the bosonic operators of the phonon mode and by $\omega$ its frequency, the phonon Hamiltonian is composed of the free phonon system and the carrier-phonon coupling \begin{equation} \hat{H}_{\mathrm{ph}} = \hbar \omega \hat{b}^{\dagger}\hat{b}+ \hat{H}_{\mathrm{c-ph}} \,. \end{equation} The energy of the phonon is chosen to be resonant with the energy difference in QS~M, $\hbar\omega=5$~meV. The phonon mode is coupled only to the middle QS~M via \begin{equation} \hat{H}_{\mathrm{c-ph}} = \hat{1}^{(\mathrm{L})} \otimes \hat{h}^{(\mathrm{M})} \otimes \hat{1}^{(\mathrm{R})} \end{equation} with \begin{equation} \label{hm} \hat{h}^{(\mathrm{M})} = \hbar g(\hat{b}^{\dagger}\hat{P}^{(\mathrm{M})}_{23} + \hat{b}\hat{P}^{(\mathrm{M})}_{32}). \end{equation} Here $g$ is the real coupling constant between the QS and the phonon mode.
We assume that the system can relax from the state $|3\rangle^{(\mathrm{M})}$ to the state $|2\rangle^{(\mathrm{M})}$ by the emission of a phonon, while the reverse transition is possible by absorbing a phonon. A measurable quantity for the phonons is the lattice displacement, which is connected to the phonon operators via \begin{equation} \langle \hat{u} \rangle = u_{0} \left(\langle\hat{b}^{\dagger} \rangle +\langle \hat{b} \rangle \right) = u^{(+)} + u^{(-)}, \end{equation} where we defined $u^{(+)} = u_{0}\langle\hat{b}\rangle$ and $u^{(-)} = u_{0}\langle\hat{b}^{\dagger}\rangle$, as well as the single-phonon amplitude $u_{0}$. Introducing the phonon coupling, we extend the equation of motion to \begin{equation} \frac{d\hat{\rho}}{dt} = -\frac{i}{\hbar}[\hat{H}_{\mathrm{sys}}+\hat{H}_{\mathrm{int}}+\hat{H}_{\mathrm{ph}} ,\hat{\rho}] + \hat{D}_{\mathrm{H}}(\hat{\rho}) + \hat{D}_{\mathrm{C}}(\hat{\rho})\,. \end{equation} For the lattice displacement this leads to the rate equation \begin{equation} \frac{du}{dt} = -\Gamma u - iC \bigg(\rho_{23}^{(\mathrm{M})}(t) + \rho_{32}^{(\mathrm{M})}(t)\bigg), \end{equation} where we introduced the phonon dephasing rate $\Gamma$ and defined the coupling constant $C = u_{0}g$. For our simulations we assume $\Gamma = 2$~ps$^{-1}$, $g = 2.25$~ps$^{-1}$, and $u_{0} = 20$~pm. We further assume that a very small, but finite, displacement is always present. In Fig.~\ref{fig:lattice}(a), we present the evolution of the inversion (red) in comparison to the amplitude of the lattice displacement field (blue). The temperatures are set to $T_{\mathrm{H}}=400$~K and $T_{\mathrm{C}}=100$~K. For small times the inversion of the system builds up according to the thermalization of the middle QS~M. When the inversion is close to its maximum, the lattice displacement starts to increase. Then the inversion and the lattice displacement oscillate against each other; these are relaxation oscillations well known from lasing dynamics.
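The qualitative behavior just described can be illustrated with a minimal, scaled class-B laser rate-equation model (a Python sketch with purely illustrative parameters, not the full density-matrix dynamics of the paper): the field only grows once the inversion exceeds threshold, both quantities undergo damped relaxation oscillations, and a steady state is reached.

```python
# Minimal scaled laser rate equations: N is the inversion, a the (real) field
# amplitude. P, gamma_N, g, kappa are illustrative pump, inversion-decay,
# coupling, and field-decay rates -- not the parameters of the paper.
P, gamma_N, g, kappa = 0.2, 0.1, 1.0, 0.5

N, a = 0.0, 1e-3          # start uninverted, with a tiny seed displacement
dt, steps = 0.01, 30000   # forward-Euler integration up to t = 300
for _ in range(steps):
    dN = P - gamma_N * N - g * N * a * a   # pump, decay, stimulated emission
    da = (g * N / 2.0 - kappa) * a         # gain vs. field damping
    N += dN * dt
    a += da * dt

# Analytic steady state: N* = 2*kappa/g, a*^2 = (P - gamma_N*N*)/(g*N*)
print(round(N, 2), round(a, 2))  # -> 1.0 0.32
```

The long-time values match the analytic fixed point; during the transient, $N$ and $a$ oscillate against each other exactly as in the relaxation oscillations described above.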
After some time a steady state is reached, with a lattice displacement of $u_{\infty} =0.43$~pm. In this steady state a constant flow of coherent phonons occurs; we therefore conclude that the system exhibits phonon lasing. \begin{figure}[t!] \centering \includegraphics[width=\columnwidth]{Fig4} \caption{(a) Time evolution of the inversion (red) and the amplitude of the lattice displacement field (blue) for $T_{\mathrm{H}}=400$~K and $T_{\mathrm{C}}=100$~K. (b) Stationary amplitude of the lattice displacement field as a function of the temperature difference for $T_{\mathrm{C}}=100$~K (orange) and $T_{\mathrm{C}}=200$~K (brown). } \label{fig:lattice} \end{figure} In Fig.~\ref{fig:lattice}(b), we show the stationary amplitude as a function of the driving strength, which in our case is the temperature difference $\Delta T$. Keeping the cold bath fixed at $T_{\mathrm{C}}=100$~K (solid line), we increase the hot bath temperature. For a classical laser such an output curve should exhibit a characteristic onset of lasing at a given threshold \cite{bjork1994definition}. Indeed, we find a characteristic onset of phonon lasing at $\Delta T_{\mathrm{on}}=85$~K: for temperature differences below $\Delta T_{\mathrm{on}}$ no phonons are emitted, while the phonon amplitude rises significantly for temperature differences above $\Delta T_{\mathrm{on}}$, as expected for lasing. Like the inversion, the threshold depends not only on the temperature difference, but also on the absolute temperatures. If we fix the temperature of the cold bath to $T_{\mathrm{C}}=200$~K (dashed line), the threshold increases to $\Delta T'_{\mathrm{on}}=150$~K. In conclusion, we have shown that a novel pumping mechanism using a thermal gradient is able to lead to inversion and to lasing. Our system was composed of a central three-level quantum system interacting with two-level quantum subunits on each side acting as energy filters. We then imposed a heat gradient on the whole system.
Without any further interactions, for sufficiently high temperature differences between the heat baths, the system shows a positive inversion within the upper two levels of the central unit. This inversion can be utilized for the generation of coherent acoustic phonons in a phononic cavity. Our concept can readily be transferred to optomechanical systems, resulting in phonon lasing in nanomechanical oscillators \cite{kippenberg2007cavity,aspelmeyer2014cavity,anetsberger2009near}. Despite the opening of a strong heat-conducting channel between the hot and the cold reservoirs due to the phonon-lasing action, the system shows an amplification of the lattice amplitude. Thus, the novel pumping mechanism using a thermal gradient is indeed able to produce coherent phonons, demonstrating that, remarkably, a temperature difference can be used not only to drive a steam engine but also to generate coherent phonons in nanoscopic quantum systems.
\section*{Supplementary Material} \renewcommand{\thesubsection}{\Alph{subsection}} \subsection{Algorithmic Illustration of LCC } \label{app:alg} \begin{algorithm} \caption{LCC Encoding (Precomputation)}\label{euclid} \begin{algorithmic}[1] \Procedure{Encode}{$X_1,X_2,...,X_K,T$}\Comment{Encode input variables according to LCC} \State \textbf{generate} uniform random variables $Z_{K+1}, ..., Z_{K+T}$ \State \textbf{jointly compute} $ \tilde{X}_i\gets \sum_{j\in[K]}X_j\cdot \prod_{k\in [K+T]\setminus\{j\}}\frac{\alpha_i-\beta_k}{\beta_j-\beta_k}+ \sum_{j=K+1}^{K+T} Z_j\cdot \prod_{k\in [K+T]\setminus\{j\}}\frac{\alpha_i-\beta_k}{\beta_j-\beta_k}$ for $i=1,2,...,N$ using fast polynomial interpolation \State \textbf{return} $\tilde{X}_1,...,\tilde{X}_N$ \Comment{The coded variable assigned to worker $i$ is $\tilde{X}_i$} \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Computation Stage} \begin{algorithmic}[1] \Procedure{WorkerComputation}{$\tilde{X}$}\Comment{Each worker $i$ takes $\tilde{X}_i$ as input} \State \textbf{return} $f(\tilde{X})$\Comment{Compute as if no coding is taking place } \EndProcedure \end{algorithmic} \begin{algorithmic}[1] \Procedure{Decode}{$S,A$}\Comment{Executed by the master} \State \textbf{wait} for a subset of the fastest $N-S$ workers \State $\mathcal{N} \gets$ identities of the fastest workers \State $\{\tilde{Y}_i\}_{i\in\mathcal{N}} \gets$ results from the fastest workers \State \textbf{recover} $Y_1,...,Y_K$ from $\{\tilde{Y}_i\}_{i\in\mathcal{N}}$ using fast interpolation or Reed-Solomon decoding \Comment{See Appendix \ref{app:comp} } \State \textbf{return} $Y_1,...,Y_K$ \EndProcedure \end{algorithmic} \end{algorithm} $\beta_1,\ldots,\beta_{K+T}$ and $\alpha_1,...,\alpha_N$ are global constants in $\mathbb{F}$, satisfying\footnote{A variation of LCC is presented in Appendix \ref{app:ulcc}, obtained by selecting different values of the $\alpha_i$'s.} \begin{enumerate} \item $\beta_i$'s are distinct, \item $\alpha_i$'s are
distinct, \item $\{\alpha_i\}_{i\in[N]}\cap\{\beta_j\}_{j\in[K]}=\varnothing$ (this requirement is alleviated if~$T=0$). \end{enumerate} \subsection{Coding Complexities of LCC} \label{app:comp} By exploiting the algebraic structure of LCC, we can find efficient encoding and decoding algorithms with almost linear computational complexities. The encoding of LCC can be viewed as interpolating degree-$(K+T-1)$ polynomials and then evaluating them at $N$ points. It is known that both operations require only almost linear complexities: interpolating a polynomial of degree $k$ has a complexity of $O(k \log^2 k \log \log k)$, and evaluating it at any $k$ points requires the same \cite{kedlaya2011fast}. Hence, the total encoding complexity of LCC is at most $O(N \log^2 (K+T) \log \log (K+T) \dim \mathbb{V})$, which is almost linear in the output size of the encoder, $O(N \dim \mathbb{V})$. Similarly, when no security requirement is imposed on the system (i.e., $A=0$), the decoding of LCC can also be completed using polynomial interpolation and evaluation. An almost linear complexity $O(R \log^2 R \log \log R \dim \mathbb{U})$ can be achieved, where $R$ denotes the recovery threshold. A less trivial case is the decoding algorithm when $A>0$, where the goal is essentially to interpolate a polynomial with at most $A$ erroneous input evaluations, or equivalently, to decode a Reed-Solomon code. An almost linear time complexity can be achieved using additional techniques developed in \cite{Berlekamp:2006:NBD:2263266.2267601, 1054260,rational1,rational2}. Specifically, the following $2A$ \emph{syndrome variables} can be computed with a complexity of $O((N-S)\log^2 (N-S) \log\log (N-S) \dim \mathbb{U})$ using fast algorithms for polynomial evaluation and for transposed-Vandermonde-matrix multiplication \cite{pan2001matrix}.
\begin{align} S_k\triangleq \sum_{i\in{\mathcal{N}}} \frac{\tilde{Y}_i \alpha_i^k}{\prod_{j\in\mathcal{N}\backslash\{i\} }(\alpha_{i}-\alpha_j)} && \forall k\in \{0,1,...,2A-1\}. \end{align} According to \cite{Berlekamp:2006:NBD:2263266.2267601, 1054260}, the locations of the errors (i.e., the identities of the adversaries in LCC decoding) can be determined from these syndrome variables by computing their rational function approximation. Almost linear time algorithms for this operation are provided in \cite{rational1,rational2}, which require only a complexity of $O(A\log^2 A \log\log A \dim \mathbb{U})$. After identifying the adversaries, the final results can be computed as in the $A=0$ case. This approach achieves a total decoding complexity of $O((N-S)\log^2 (N-S) \log\log (N-S) \dim \mathbb{U})$, which is almost linear with respect to the input size of the decoder, $O((N-S)\dim \mathbb{U})$. Finally, note that the adversaries can only affect a \emph{fixed} subset of $A$ workers' results for all entries. The decoding time can therefore be further reduced by computing the final outputs entry-wise: in each iteration, ignore results from adversaries identified in earlier steps, and proceed to decode with the rest of the results. \subsection{The MDS property of $U^{bottom}$} \begin{lemma}\label{lemma:Uproperties} The matrix~$U^{bottom}$ is an MDS matrix. \end{lemma} \begin{proof} First, let~$V\in \mathbb{F}^{T\times N}$ be \begin{align*} V_{i,j}&=\prod_{\ell \in [T]\setminus\{i\}}\frac{\alpha_j-\beta_{\ell+K}}{\beta_{i+K}-\beta_{\ell+K}}. \end{align*} It follows from the resiliency property of LCC that by having~$(\tilde{X}_1,\ldots,\tilde{X}_N)=(X_1,\ldots,X_T)\cdot V$, the master can obtain the values of~$X_1,\ldots,X_T$ from any~$T$ of the~$\tilde{X}_i$'s. This is one of the alternative definitions of an MDS code, and hence $V$ is an MDS matrix.
To show that~$U^{bottom}$ is an MDS matrix, we show that~$U^{bottom}$ can be obtained from~$V$ by multiplying rows and columns by nonzero scalars. Let~$[K:T]\triangleq\{ K+1,K+2,\ldots,K+T \}$, and notice that for~$(s,r)\in[T]\times [N]$, entry~$(s,r)$ of~$U^{bottom}$ can be written as \begin{align*} \prod _{t\in [K+T]\setminus \{ s+K \} } \frac{\alpha_r-\beta_t}{\beta_{s+K}-\beta_t}&=\prod_{t\in[K]}\frac{\alpha_r-\beta_t}{\beta_{s+K}-\beta_t}\cdot\\ &~\prod_{t\in [K:T]\setminus \{s+K\}}\frac{\alpha_r-\beta_t}{\beta_{s+K}-\beta_t}. \end{align*} Hence,~$U^{bottom}$ can be written as \begin{align}\label{equation:Vmatrix} U^{bottom}=&\diag\left( \left( \prod_{t\in[K]}\frac{1}{\beta_{s+K}-\beta_t} \right)_{s\in[T]} \right) \cdot V \cdot\nonumber\\ &\diag \left( \left( \prod_{t\in[K]}(\alpha_r-\beta_t) \right)_{r\in [N]}\right), \end{align} where~$V$ is the~$T\times N$ matrix defined above with \begin{align*} V_{i,j}=\prod_{t\in [T]\setminus\{i\}}\frac{\alpha_{j}-\beta_{t+K}}{\beta_{i+K}-\beta_{t+K}}. \end{align*} Since~$\{ \beta_t \}_{t=1}^K\cap \{ \alpha_r \}_{r=1}^N=\varnothing$, and since all the~$\beta_i$'s are distinct, it follows from~\eqref{equation:Vmatrix} that~$U^{bottom}$ can be obtained from~$V$ by multiplying each row and each column by a nonzero element, and hence~$U^{bottom}$ is an MDS matrix as well.
\end{proof} \subsection{The Uncoded Version of LCC} \label{app:ulcc} \input{ULCC.tex} \subsection{Proof of Lemma \ref{lemma:rec}} \label{pl:rec} \input{lemma_rec.tex} \subsection{Optimality on the Resiliency-Security-Privacy Tradeoff for Multilinear Functions } \label{pl:security} \input{lemma_security.tex} \subsection{Optimality on the Resiliency-Privacy Tradeoff for General Multivariate Polynomials} \label{pl:resiliency} \input{lemma_resiliency.tex} \subsection{Proof of Lemma \ref{lemma:basic}} \label{pl:basic} \input{lemma_basic.tex} \input{new_lemma_randomness.tex} \subsection{Optimality of LCC for Linear Regression }\label{sec:regression_converse} In this section, we prove that the proposed LCC scheme achieves the minimum possible recovery threshold $R^*$ to within a factor of 2 for the linear regression problem discussed in Section~6. As a first step, we prove a lower bound on $R^*$ for linear regression. More specifically, we show that for any coded computation scheme, the master always needs to wait for at least $\lceil\frac{n}{r}\rceil$ workers to be able to decode the final result, i.e., $R^* \geq \lceil\frac{n}{r}\rceil$. Before starting the proof, we first note that since here we consider a more general scenario where workers can compute \emph{any} function on locally stored coded sub-matrices (not necessarily matrix-matrix multiplication), the converse result in Theorem~\ref{thm:opt} no longer holds. To prove the lower bound, it suffices to show that, for any coded computation scheme and any subset $\mathcal{N}$ of workers, if the master can recover $\vct{X}^\top\vct{X}\vct{w}$ given the results from workers in $\mathcal{N}$, then we must have $|\mathcal{N}|\geq \lceil\frac{n}{r}\rceil$. Suppose that this condition holds; then we can find encoding, computation, and decoding functions such that for any possible values of $\vct{X}$ and $\vct{w}$, the composition of these functions returns the correct output.
Note that within a GD iteration, each worker performs its local computation based only on its locally stored coded sub-matrices and the weight vector $\vct{w}$. Hence, if the master can decode the final output from the results of the workers in a subset $\mathcal{N}$, then the composition of the decoding function and the computation functions of these workers essentially computes $\vct{X}^\top\vct{X}\vct{w}$ using only the coded sub-matrices stored at these workers and the vector $\vct{w}$. Hence, if a class of input values $\vct{X}$ gives the same coded sub-matrices for each worker in $\mathcal{N}$, then the product $\vct{X}^\top\vct{X}\vct{w}$ must also be the same for any $\vct{w}$. Now we consider the class of input matrices $\vct{X}$ such that all coded sub-matrices stored at workers in $\mathcal{N}$ equal the values of the corresponding coded sub-matrices when $\vct{X}$ is zero. Since $\vct{0}^\top\vct{0}\vct{w}$ is zero for any $\vct{w}$, $\vct{X}^\top\vct{X}\vct{w}$ must also be zero for all matrices $\vct{X}$ in this class and any $\vct{w}$. However, for real matrices, $\vct{X}=\vct{0}$ is the only solution to that condition. Thus, the zero matrix must be the only input matrix that belongs to this class. Recall that all the encoding functions are assumed to be linear. We consider the collection of all encoding functions that are used by workers in $\mathcal{N}$, which is also a linear map. As we have just proved, the kernel of this linear map is $\{\vct{0}\}$. Hence, its rank must be at least the dimension of the input matrix, which is $dm$. On the other hand, its rank is upper bounded by the dimension of the output, to which each encoding function from a worker contributes at most $\frac{rdm}{n}$. Consequently, the number of workers in $\mathcal{N}$ must be at least $\lceil\frac{n}{r}\rceil$ to provide sufficient rank to support the computation.
Having proved that $R^* \geq \lceil\frac{n}{r}\rceil$, the factor-of-two characterization of LCC directly follows, since $R^* \leq R_{\textup{LCC}} = 2\lceil \tfrac{n}{r}\rceil -1<2\lceil\frac{n}{r}\rceil \leq 2R^*$. Note that the converse bound proved above applies to the most general computation model, i.e., there are no assumptions made on the encoding functions or the functions that each worker computes. If additional requirements are taken into account, we can show that LCC achieves the exact optimum recovery threshold (e.g., see \cite{yu2018straggler}). \subsection{Complete Experimental Results}\label{sec:experiments} In this section, we present the complete experimental results using the LCC scheme proposed in the paper, the gradient coding (GC) scheme~\cite{TandonLDK17} (the cyclic repetition scheme), the matrix-vector multiplication based (MVM) scheme~\cite{lee2015speeding}, and the uncoded scheme for which there is no data redundancy across workers, measured from running linear regression on Amazon EC2 clusters. In particular, experiments are performed for the following 3 scenarios. \begin{itemize}[leftmargin=*] \item Scenarios 1 \& 2: \# of input data points $m = 8000$, \# of features $d = 7000$. \item Scenario 3: \# of input data points $m = 160000$, \# of features $d = 500$. \end{itemize} In scenarios 2 and 3, we artificially introduce stragglers by imposing a $0.5$-second delay on each worker with probability $5\%$ in each iteration. We list the detailed breakdowns of the run-times in the 3 experiment scenarios in Tables~\ref{table:scenario one_supp}, \ref{table:scenario two_supp}, and \ref{table:scenario three_supp}, respectively. In particular, the computation (comp.) time is measured as the summation, over 100 iterations, of the maximum local processing time among all non-straggling workers. The communication (comm.) time is computed as the difference between the total run-time and the computation time.
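As a consistency check on the reported numbers (a small Python sketch; the run-times in seconds are copied from the tables below), the end-to-end speedup factors quoted in the main text can be recomputed directly from the total run-times:

```python
# Total run-times in seconds (scenarios 1-3) for uncoded, GC, MVM, and LCC,
# copied from Tables in this appendix
totals = {
    "uncoded": [24.362, 52.700, 41.994],
    "GC":      [8.464, 16.821, 11.589],
    "MVM":     [3.626,  5.607, 57.457],
    "LCC":     [3.587,  3.925,  4.541],
}

for scheme in ("uncoded", "GC", "MVM"):
    speedups = [t / l for t, l in zip(totals[scheme], totals["LCC"])]
    print(scheme, [round(s, 2) for s in speedups])
# uncoded -> [6.79, 13.43, 9.25]; GC -> [2.36, 4.29, 2.55]; MVM -> [1.01, 1.43, 12.65]
```

These reproduce the speedup ranges of $6.79\times$-$13.43\times$ over uncoded, $2.36\times$-$4.29\times$ over GC, and $1.01\times$-$12.65\times$ over MVM.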
\begin{table}[!htbp] \caption{Breakdowns of the run-times in scenario one.} \label{table:scenario one_supp} \centering \scalebox{0.72}{ \begin{tabular}{|c | c | c| c | c | c |} \hline \multirow{2}{*}{schemes} & \# batches/ & recovery & comm. &comp. & total \\ & worker ($r$) & threshold & time & time& run-time \\ \hline uncoded & 1 & 40 &24.125 s &0.237 s & 24.362 s\\ \hline GC & 10 &31&6.033 s & 2.431 s &8.464 s\\ \hline MVM Rd. 1 & 5 & 8&1.245 s & 0.561 s& 1.806 s\\ \hline MVM Rd. 2 & 5 & 8&1.340 s & 0.480 s& 1.820 s\\ \hline MVM total & 10 & -&2.585 s & 1.041 s& 3.626 s\\ \hline LCC & 10 & 7&1.719 s & 1.868 s& 3.587 s\\ \hline \end{tabular}} \end{table} \begin{table}[!htbp] \caption{Breakdowns of the run-times in scenario two.} \label{table:scenario two_supp} \centering \scalebox{0.72}{ \begin{tabular}{|c | c | c| c | c | c |} \hline \multirow{2}{*}{schemes} & \# batches/ & recovery & comm. &comp. & total \\ & worker ($r$) & threshold & time & time& run-time\\ \hline uncoded & 1 & 40 & 7.928 s & 44.772 s & 52.700 s\\ \hline GC & 10 & 31 & 14.42 s & 2.401 s & 16.821 s\\ \hline MVM Rd. 1 & 5 & 8 & 2.254 s & 0.475 s & 2.729 s\\ \hline MVM Rd. 2 & 5 & 8 & 2.292 s & 0.586 s & 2.878 s\\ \hline MVM total & 10 & - & 4.546 s & 1.061 s & 5.607 s\\ \hline LCC & 10 & 7 & 2.019 s & 1.906 s & 3.925 s\\ \hline \end{tabular}} \end{table} \begin{table}[!htbp] \caption{Breakdowns of the run-times in scenario three.} \label{table:scenario three_supp} \centering \scalebox{0.71}{ \begin{tabular}{|c | c | c| c | c | c |} \hline \multirow{2}{*}{schemes} & \# batches/ & recovery & comm. &comp. & total \\ &worker ($r$) & threshold & time & time& run-time\\ \hline uncoded & 1 & 40 & 0.229 s & 41.765 s & 41.994 s\\ \hline GC & 10 & 31 & 8.627 s & 2.962 s & 11.589 s\\ \hline MVM Rd. 1 & 5 & 8 & 3.807 s & 0.664 s& 4.471 s\\ \hline MVM Rd. 
2 & 5 & 8 & 52.232 s & 0.754 s& 52.986 s\\ \hline MVM total & 10 & - & 56.039 s & 1.418 s& 57.457 s\\ \hline LCC & 10 & 7 & 1.962 s & 2.597 s& 4.541 s\\ \hline \end{tabular}} \end{table} \section{Application to Linear Regression and Experiments on AWS EC2} \label{sec:regression} In this section, we demonstrate a practical application of LCC in accelerating distributed linear regression, whose gradient computation is a quadratic function of the input dataset and hence matches the LCC framework well. We also experimentally demonstrate its performance gain over the state of the art via experiments on AWS EC2 clusters. {\bf Applying LCC to linear regression.} Given a feature matrix $\vct{X} \in \mathbb{R}^{m\times d}$ containing $m$ data points of $d$ features, and a label vector $\vct{y} \in \mathbb{R}^m$, a linear regression problem aims to find the weight vector $\vct{w} \in \mathbb{R}^d$ that minimizes the loss $||\vct{X} \vct{w} - \vct{y}||^2$. Gradient descent (GD) solves this problem by iteratively moving the weight along the negative gradient direction, which in iteration $t$ is computed as $2 \vct{X}^\top(\vct{X}\vct{w}^{(t)}-\vct{y})$. To run GD distributedly over a system comprising a master node and $n$ worker nodes, we first partition $\vct{X} = [\vct{X}_1 \cdots \vct{X}_{n}]^\top$ into $n$ sub-matrices. Each worker stores $r$ coded sub-matrices generated from linearly combining the $\vct{X}_j$'s, for some parameter $1 \leq r \leq n$. Given the current weight $\vct{w}$, each worker performs computation using its local storage, and sends the result to the master. The master recovers $\vct{X}^\top\vct{X}\vct{w} = \sum_{j=1}^{n} \vct{X}_j \vct{X}_j^\top \vct{w}$ using the results from a subset of the fastest workers.\footnote{Since the value of $\vct{X}^\top \vct{y}$ does not vary across iterations, it only needs to be computed once. We assume that it is available at the master for weight updates.
} To measure the performance of any linear regression scheme, we consider the metric of \emph{recovery threshold} (denoted by $R$), defined as the minimum number of workers the master needs to wait for to guarantee decodability (i.e., tolerating the remaining stragglers). We cast this gradient computation into the computing model in Section~\ref{section:formulation} by grouping the sub-matrices into $K \!\!=\!\! \lceil\frac{n}{r}\rceil$ blocks such that $\vct{X} = [\bar{\vct{X}}_1 \cdots \bar{\vct{X}}_{K}]^\top$. Then computing $\vct{X}^\top \vct{X} \vct{w}$ reduces to computing the sum of a degree-$2$ polynomial $f(\bar{\vct{X}}_k) = \bar{\vct{X}}_k\bar{\vct{X}}_k^\top \vct{w}$, evaluated over $\bar{\vct{X}}_1,\ldots, \bar{\vct{X}}_K$. Now, we can use LCC to decide on the coded storage as in (\ref{equation:perWorkerEncoding}), and achieve a recovery threshold of $R_{\textup{LCC}} = 2(K-1)+1=2\lceil \tfrac{n}{r}\rceil -1$ (Theorem~\ref{thm:lcc}).\footnote{This recovery threshold is also optimal within a factor of $2$, as we prove in Appendix~\ref{sec:regression_converse}.} \noindent {\bf Comparisons with the state of the art.} The conventional uncoded scheme picks $r=1$ and has each worker $j$ compute $\vct{X}_j\vct{X}_j^\top\vct{w}$. The master needs the result from every worker, yielding a recovery threshold of $R_{\textup{uncoded}}=n$. By redundantly storing/processing $r>1$ \emph{uncoded} sub-matrices at each worker, the ``gradient coding'' (GC) methods~\cite{TandonLDK17,Halbawi,raviv2017gradient} code across partial gradients computed from uncoded data, and reduce the recovery threshold to $R_{\textup{GC}} = n-r+1$. An alternative ``matrix-vector multiplication based'' (MVM) approach~\cite{lee2015speeding} requires two rounds of computation. In the first round, an intermediate vector $\vct{z} = \vct{X}\vct{w}$ is computed distributedly, which is then re-distributed to the workers in the second round for them to collaboratively compute $\vct{X}^\top\vct{z}$.
Each worker stores coded data generated using MDS codes from $\vct{X}$ and $\vct{X}^\top$, respectively. MVM achieves a recovery threshold of $R_{\textup{MVM}} = \lceil \frac{2n}{r} \rceil$ in \emph{each} round when the storage is evenly split between the rounds. Compared with GC, LCC codes directly over the data and reduces the recovery threshold by a factor of about $r/2$. While the amount of computation and communication at each worker is the same for GC and LCC, LCC is expected to finish much faster due to its much smaller recovery threshold. Compared with MVM, LCC achieves a smaller recovery threshold than that of each round of MVM (assuming an even storage split). While each MVM worker performs less computation in each iteration, it sends two vectors whose sizes are proportional to $m$ and $d$, respectively, whereas each LCC worker only sends a single dimension-$d$ vector. We run linear regression on AWS EC2 using Nesterov's accelerated gradient descent, where all nodes are implemented on \texttt{t2.micro} instances. We generate synthetic datasets of $m$ data points by 1) randomly sampling a true weight $\vct{w}^{*}$, and 2) randomly sampling each input $\vct{x}_i$ of $d$ features and computing its output $y_i=\vct{x}_i^\top\vct{w}^{*}$. For each dataset, we run GD for $100$ iterations over $n=40$ workers. We consider different dimensions of the input matrix $\vct{X}$, as listed in the following scenarios. \begin{itemize}[leftmargin=*] \item Scenarios 1 \& 2: $(m,d)=(8000,7000)$. \item Scenario 3: $(m,d)=(160000,500)$. \end{itemize} We let the system run with naturally occurring stragglers in scenario 1. To mimic the effect of slow/failed workers, we artificially introduce stragglers in scenarios 2 and 3 by imposing a $0.5$-second delay on each worker with probability $5\%$ in each iteration. To implement LCC, we set the $\beta_i$ parameters to $1,...,\frac{n}{r}$, and the $\alpha_i$ parameters to $0,\ldots,n-1$.
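The recovery-threshold formulas above can be instantiated for this experimental setup ($n=40$ workers, redundancy $r=10$); a short Python sketch reproduces the thresholds listed in the tables of Appendix~\ref{sec:experiments}:

```python
from math import ceil

n, r = 40, 10  # number of workers and storage redundancy in the experiments

R_uncoded = n                    # wait for every worker
R_GC      = n - r + 1            # gradient coding
R_LCC     = 2 * ceil(n / r) - 1  # Lagrange coded computing
R_MVM     = ceil(2 * n / r)      # per round of matrix-vector multiplication

print(R_uncoded, R_GC, R_LCC, R_MVM)  # -> 40 31 7 8
```

LCC thus tolerates $40-7=33$ stragglers per iteration, versus $9$ for GC at the same storage overhead.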
To avoid numerical instability due to large entries of the decoding matrix, we can embed the input data into a large finite field, and apply LCC in it with exact computations. However, in all of our experiments the gradients are calculated correctly without carrying out this step. \pgfplotsset{compat=1.11, /pgfplots/ybar legend/.style={ /pgfplots/legend image code/.code={% \draw[##1,/tikz/.cd,yshift=-0.28em] (0cm,0cm) rectangle (3pt,0.8em);}, }, } \begin{figure}[t] \centering \begin{tikzpicture} \begin{axis}[ ybar, yticklabel style = {font=\tiny}, bar width=.25cm, width=8.7cm, height=.25\textwidth, enlarge x limits={abs=1cm}, legend style={at={(0.5,-0.25)}, anchor=north,legend columns=-1,/tikz/every even column/.append style={column sep=0.3cm}}, ylabel={total run-time, sec},style = {font=\tiny}, y label style={at={(axis description cs:-0.05,.5)},anchor=south}, xticklabels={scenario 1,scenario 2,scenario 3}, style = {font=\small}, xtick=data, ymin=0, ] \addplot coordinates {(1,24.362) (1.5,52.700)(2,41.994)}; \addplot coordinates {(1,8.464) (1.5,16.821)(2,11.589)}; \addplot coordinates {(1,3.626) (1.5,5.607)(2,57.457)}; \addplot coordinates {(1,3.587) (1.5,3.925)(2,4.541)}; \legend{uncoded, GC, MVM, LCC} \end{axis} \end{tikzpicture} \caption{{\small Run-time comparison of LCC with three other schemes: conventional uncoded, GC, and MVM. }} \label{fig:run-time} \end{figure} {\bf Results.} For GC and LCC, we optimize the total run-time over $r$, subject to the local memory size. For MVM, we further optimize the run-time over the storage assigned between the two rounds of matrix-vector multiplications. We plot the measured run-times in Figure~\ref{fig:run-time}, and list the detailed breakdowns of all scenarios in Appendix~\ref{sec:experiments}. We draw the following conclusions from the experiments. \begin{itemize}[leftmargin=*] \item LCC achieves the least run-time in all scenarios.
In particular, LCC speeds up the uncoded scheme by $6.79\times$-$13.43\times$, the GC scheme by $2.36\times$-$4.29\times$, and the MVM scheme by $1.01\times$-$12.65\times$. \item In scenarios 1 \& 2, where the number of inputs $m$ is close to the number of features $d$, LCC achieves a similar performance to MVM. However, when we have many more data points in scenario 3, LCC finishes substantially faster than MVM, by as much as $12.65\times$. The main reason for MVM's subpar performance is that it requires large amounts of data transfer from the workers to the master in the first round and from the master to the workers in the second round (both proportional to $m$). In contrast, the amount of communication from each worker or the master is proportional to $d$ for all other schemes, and $d$ is much smaller than $m$ in scenario 3. \end{itemize} \subsection{Illustrating Example}\label{section:illustrative} Consider the function $f(X_i)=X_i^2$, where the inputs $X_i$ are ~$\sqrt{M}\times \sqrt{M}$ square matrices for some perfect square~$M$. We demonstrate LCC in the scenario where the input data~$X$ is partitioned into~$K=2$ batches~$X_1$ and~$X_2$, and the computing system has~$N=8$ workers. In addition, the suggested scheme is {$1$-resilient, $1$-secure, and~$1$-private (i.e., achieves $(S,A,T)=(1,1,1)$)}. The gist of LCC is picking a uniformly random matrix~$Z$, and encoding~$(X_1,X_2, Z)$ using a Lagrange interpolation polynomial:\footnote{Assume that~$\mathbb{F}$ is a finite field with~$11$ elements.} \begin{alignat*}{3} u(z)&\triangleq&& X_1\cdot \frac{(z-2)(z-3)}{(1-2)(1-3)}+X_2\cdot \frac{(z-1)(z-3)}{(2-1)(2-3)}+\\ &~&&Z\cdot \frac{(z-1)(z-2)}{(3-1)(3-2)}. \end{alignat*} We then fix distinct~$\{\alpha_i\}_{i=1}^8$ in~$\mathbb{F}$ such that~$\{ \alpha_i \}_{i=1}^8\cap [2]=\varnothing$, and let workers~$1,\ldots,8$ store~$u(\alpha_1),\ldots,u(\alpha_8)$.
First, note that for every~$j\in[8]$, worker~$j$ sees~$\tilde{X}_j$, a linear combination of~$X_1$ and~$X_2$ that is masked by addition of~$\lambda\cdot Z$ for some nonzero~$\lambda\in\mathbb{F}_{11}$; since~$Z$ is uniformly random, this guarantees perfect privacy for~$T=1$. Next, note that worker~$j$ computes~$f(\tilde{X}_j)=f(u(\alpha_j))$, which is an evaluation of the composition polynomial~$f(u(z))$, whose degree is at most~$4$, at~$\alpha_j$. Normally, a polynomial of degree $4$ can be interpolated from $5$ evaluations at distinct points. However, the presence of~$A=1$ adversary and~$S=1$ straggler requires the master to employ a Reed-Solomon decoder, and have \textit{three} additional evaluations at distinct points (in general, two additional evaluations for every adversary and one for every straggler). Finally, after decoding polynomial~$f(u(z))$, the master can obtain~$f(X_1)$ and~$f(X_2)$ by evaluating it at~$z=1$ and~$z=2$. \subsection{General Description}\label{section:GeneralDescription} Similar to Subsection~\ref{section:illustrative}, we select any~$K+T$ distinct elements~$\beta_1,\ldots,\beta_{K+T}$ from $\mathbb{F}$, and find a polynomial $u:\mathbb{F}\rightarrow \mathbb{V}$ of degree {at most} $K+T-1$ such that $u(\beta_i)=X_i$ for any $i\in[K]$, and~$u(\beta_i)=Z_i$ for~$i\in\{K+1,\ldots,K+T\}$, where all~$Z_i$'s are chosen uniformly at random from~$\mathbb{V}$. This is simply accomplished by letting~$u$ be the \textit{Lagrange interpolation polynomial} \begin{align*} u(z)\triangleq \sum_{j\in[K]}X_j\cdot \prod_{k\in [K+T]\setminus\{j\}}\frac{z-\beta_k}{\beta_j-\beta_k}+\\ \sum_{j=K+1}^{K+T} Z_j\cdot \prod_{k\in [K+T]\setminus\{j\}}\frac{z-\beta_k}{\beta_j-\beta_k}. \end{align*} We then select~$N$ distinct elements~$\{\alpha_i\}_{i\in[N]}$ from $\mathbb{F}$ such that~$\{\alpha_i\}_{i\in[N]}\cap\{\beta_j\}_{j\in[K]}=\varnothing$ (this requirement is alleviated if~$T=0$), and let $\tilde{X}_i=u(\alpha_i)$ for any $i\in[N]$. 
That is, the input variables are encoded as \begin{align}\label{equation:perWorkerEncoding} \tilde{X}_i\!=\!u(\alpha_i)\!=\!(X_1,\ldots,X_K,Z_{K+1},\ldots,Z_{K+T})\cdot U_i, \end{align} where~$U\in\mathbb{F}_q^{(K+T)\times N}$ is the encoding matrix~$U_{i,j}\triangleq\prod_{\ell\in[K+T]\setminus \{i\}}\frac{\alpha_j-\beta_\ell}{\beta_i-\beta_\ell}$, and~$U_i$ is its~$i$'th column.\footnote{By selecting the values of $\alpha_i$'s differently, we can recover the uncoded repetition scheme, see Appendix \ref{app:ulcc}.} Following the above encoding, each worker~$i$ applies~$f$ on~$\tilde{X}_i$ and sends the result back to the master. Hence, the master obtains~$N-S$ evaluations, at most~$A$ of which are incorrect, of the polynomial~$f(u(z))$. Since $\deg(f(u(z)))\le\deg(f)\cdot (K+T-1)$, and~$N\ge (K+T-1)\deg(f)+S+2A+1$, the master can obtain all coefficients of~$f(u(z))$ by applying Reed-Solomon decoding. Having this polynomial, the master evaluates it at~$\beta_i$ for every~$i\in[K]$ to obtain~$f(u(\beta_i))=f(X_i)$, {and hence we have shown that the above scheme is $S$-resilient and $A$-secure.} As for the $T$-privacy guarantee of the above scheme, our proof relies on the fact that the bottom~$T\times N$ submatrix~$U^{bottom}$ of~$U$ is an MDS matrix (i.e., every~$T\times T$ submatrix of~$U^{bottom}$ is invertible, see Lemma~\ref{lemma:Uproperties} in the supplementary material). Hence, for a colluding set of workers~$\mathcal{T}\subseteq [N]$ of size~$T$, their encoded data~$\tilde{X}_\mathcal{T}$ satisfies~$\tilde{X}_\mathcal{T}=XU_\mathcal{T}^{top}+ZU_\mathcal{T}^{bottom}$, where~$Z\triangleq(Z_{K+1},\ldots,Z_{K+T})$, and~$U_\mathcal{T}^{top}\in \mathbb{F}_q^{K\times T}$, $U_\mathcal{T}^{bottom}\in \mathbb{F}_q^{T\times T}$ are the top and bottom submatrices which correspond to the columns in~$U$ that are indexed by~$\mathcal{T}$. 
Now, the fact that any~$U_{\mathcal{T}}^{bottom}$ is invertible implies that the random padding added for these colluding workers is uniformly random, which completely masks the coded data~$XU_\mathcal{T}^{top}$. This directly guarantees~$T$-privacy. \section{Introduction}\label{section:introduction} \input{Introduction_arxiv.tex} \section{Problem Formulation and Examples}\label{section:formulation} \input{ProblemFormulation.tex} \section{Main Results and Prior Works} \input{Previous.tex} \section{Lagrange Coded Computing}\label{section:Lagrange} \input{LagrangeCodedComputing.tex} \section{Optimality of LCC}\label{sec:converses} \input{Converse_new.tex} \input{Experiments.tex} \Removed{\section{Conclusion} \input{Conclusion.tex}} \section*{Acknowledgement} This material is based upon work supported by Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001117C0053, ARO award W911NF1810400, NSF grants CCF-1703575, ONR Award No. N00014-16-1-2189, and CCF-1763673. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. M. Soltanolkotabi is supported by the Packard Fellowship in Science and Engineering, a Sloan Research Fellowship in Mathematics, an NSF-CAREER under award \#1846369, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) under award \#FA9550-18-1-0078, an NSF-CIF award \#1813877, and a Google faculty research award. Qian Yu is supported by the Google PhD Fellowship. \subsection{Optimality in randomness}\label{app:pt_rand} In this appendix, we prove the optimality of LCC in terms of the amount of randomness needed in data encoding, which is formally stated in the following theorem. 
\begin{theorem} \label{lemma:optimalRandomness} (Optimal randomness) Any linear encoding scheme that universally achieves the same tradeoff point specified in Theorem \ref{thm:lcc} for all linear functions~$f$ (i.e., $(S,A,T)$ such that $K+T+S+2A= N$) must use an amount of randomness no less than that of LCC. \end{theorem} \begin{proof} The proof is taken almost verbatim from~\cite{WentaosThesis}, Chapter~3. In what follows, an~$(n,k,r,z)_{\mathbb{F}_q^t}$ \textit{secure RAID scheme} is a storage scheme over~$\mathbb{F}_q^t$ (where~$\mathbb{F}_q$ is a field with~$q$ elements) in which~$k$ message symbols are coded into~$n$ storage servers, such that the~$k$ message symbols are reconstructible from any~$n-r$ servers, and any~$z$ servers are information theoretically oblivious to the message symbols. Further, such a scheme is assumed to use~$v$ random entries as keys, and by~\cite{WentaosThesis}, Proposition~3.1.1, must satisfy~$n-r\ge k+z$. \begin{theorem}\cite{WentaosThesis}, Theorem~3.2.1. \label{theorem:Wentao} A linear rate-optimal~$(n,k,r,z)_{\mathbb{F}_q^t}$ secure RAID scheme uses at least~$zt$ keys over~$\mathbb{F}_q$ (i.e.,~$v\ge z$). \end{theorem} Clearly, in our scenario~$\mathbb{V}$ can be seen as~$\mathbb{F}_q^{\dim\mathbb{V}}$ for some~$q$. Further, by setting~$N=n$, $T=z$, and~$t=\dim\mathbb{V}$, it follows from Theorem~\ref{theorem:Wentao} that any encoding scheme which guarantees information theoretic privacy against sets of~$T$ colluding workers must use at least~$T$ random entries~$\{Z_i\}_{i\in[T]}$. \end{proof} \subsection{LCC vs. Prior Works}\label{sec:prior} The study of coding theoretic techniques for accelerating large scale distributed tasks (a.k.a. \emph{coded computing}) was initiated in {\cite{lee2015speeding,LMA_all,Li2018fundamental}}.
Subsequent works focused largely on matrix-vector and matrix-matrix multiplication (e.g.,~\cite{dutta2016short, NIPS2017_7027, dutta2018on, yu2018straggler}), gradient computation in gradient descent algorithms (e.g.,~\cite{TandonLDK17,raviv2017gradient,li2017near}), communication reduction via coding (e.g.,~\cite{li2017coded,ezzeldin2017communication,graphlong,2018arXiv180203049K}), and secure and private computing (e.g.,~\cite{MaddahAli,chen2018draco}). LCC recovers several previously studied results as special cases. For example, setting~$f$ to be the identity function and $\mathbb{V}=\mathbb{U}$ reduces to the well-studied case of \textit{distributed storage}, in which Theorem~\ref{thm:lcc} is well known (e.g., the Singleton bound~\cite[Thm.~4.1]{roth2006introduction}). Further, as previously mentioned, $f$ can correspond to matrix-vector and matrix-matrix multiplication, for which the special cases of Theorem~\ref{thm:lcc} are known as well~\cite{speeding,yu2018straggler}. More importantly, LCC improves and generalizes these works on coded computing in a few aspects: \textit{Generality}--LCC significantly generalizes prior works to go beyond the linear and bilinear computations that have so far been the main focus in this area, and can be applied to arbitrary multivariate polynomial computations that arise in machine learning applications. In fact, many specific computations considered in the past can be seen as special cases of polynomial computation. This includes matrix-vector multiplication, matrix-matrix multiplication, and gradient computation whenever the loss function at hand is a polynomial, or is approximated by one. \textit{Universality}--once the data has been coded, any polynomial up to a certain degree can be computed distributedly via LCC. In other words, the data encoding of LCC can be \emph{universally} used for any polynomial computation. This is in stark contrast to previous task-specific coding techniques in the literature.
Furthermore, workers apply the same computation as if no coding took place; a feature that reduces computational costs, and prevents ordinary servers from carrying the burden of outliers. \textit{Security and Privacy}--other than a handful of works discussed above, straggler mitigation (i.e., resiliency) has been the primary focus of the coded computing literature. This work extends the application of coded computing to secure and private computing for general polynomial computations. Providing security and privacy for \textit{multiparty computing} (MPC) and machine learning systems is an extensively studied topic that addresses a problem setting similar to that of LCC. To illustrate the significant role of LCC in secure and private computing, let us consider the celebrated BGW MPC scheme~\cite{ben1988completeness}.\footnote{Conventionally, the BGW scheme operates in a multi-round fashion, requiring significantly more communication overhead than one-shot approaches. For simplicity of comparison, we present a modified one-shot version of BGW. } Given inputs~$\{X_i\}_{i=1}^K$, BGW first uses Shamir's scheme \cite{Shamir:1979:SS:359168.359176} to encode the dataset in a privacy-preserving manner as~$P_{i}(z) = X_{i}+Z_{i,1}z+\ldots+Z_{i,T}z^T$ for every~$i\in[K]$, where the $Z_{i,j}$'s are i.i.d.\ uniformly random variables and $T$ is the number of colluding workers that should be tolerated. The key distinction between the data encoding of the BGW scheme and LCC is that we instead use Lagrange polynomials to encode the data. This results in a significant reduction in the amount of randomness needed in data encoding (BGW needs $KT$ random entries $Z_{i,j}$, while, as we describe in the next section, LCC needs only $T$). The BGW scheme then stores~$\{P_{i}(\alpha_\ell)\}_{i\in [K]}$ at worker~$\ell$ for every~$\ell\in[N]$, given some distinct values $\alpha_1,\ldots,\alpha_N$. The computation is then carried out by evaluating $f$ over \emph{all} stored coded data at the nodes.
In the LCC scheme, on the other hand, each worker $\ell$ only needs to store \emph{one} encoded data block ($\tilde{X}_{\ell}$) and compute $f(\tilde{X}_{\ell})$. This gives rise to the second key advantage of LCC, which is a factor of $K$ saving in storage overhead and computation complexity at each worker. After computation, each worker~$\ell$ in the BGW scheme {has essentially evaluated the polynomials $\{f(P_{i}(z))\}_{i=1}^K$} at $z=\alpha_\ell$, each of which has degree at most~$\deg(f)\cdot T$. Hence, if no straggler or adversary appears (i.e., $S=A=0$), the master can recover all required results $f(P_{i}(0))$ through polynomial interpolation, as long as $N\ge \deg(f)\cdot T+1$ workers participated in the computation\footnote{It is also possible to use the conventional multi-round BGW, which only requires $N\geq 2T+1$ workers to ensure $T$-privacy. However, multiple rounds of computation and communication ($\Omega(\log\deg(f))$ rounds) are needed, which further increases its communication overhead.}. Note that under the same condition, the LCC scheme requires $N\ge \deg(f)\cdot (K+T-1)+1$ workers, which is larger than that of the BGW scheme. Hence, in overall comparison with the BGW scheme, LCC results in a factor of $K$ reduction in the amount of randomness, storage overhead, and computation complexity, while requiring more workers to guarantee the same level of privacy. This is summarized in Table~\ref{table:BGW}.\footnote{A BGW scheme was also proposed in \cite{ben1988completeness} for secure MPC, albeit for a substantially different setting. Similarly, a comparison can be made by adapting it to our setting, leading to similar results, which we omit for brevity.
} \begin{table}[t] \centering \begin{tabular}{l|c|c|} \hhline{~--} & \cellcolor[HTML]{C0C0C0}BGW & \cellcolor[HTML]{C0C0C0}LCC \\ \hline \multicolumn{1}{|l|}{\cellcolor[HTML]{C0C0C0}\shortstack[l]{Complexity\\per worker}} & $K$ & $1$ \\ \hline \multicolumn{1}{|l|}{\cellcolor[HTML]{C0C0C0}\shortstack[l]{Frac. data\\per worker}} & $1$ & $1/K$ \\ \hline \multicolumn{1}{|l|}{\cellcolor[HTML]{C0C0C0}Randomness} & $ KT$ & $T$ \\ \hline \multicolumn{1}{|l|}{\cellcolor[HTML]{C0C0C0}\shortstack[l]{Min. num. \\of workers}} & {\footnotesize$\deg(f)\cdot T+1$} & {\footnotesize $\deg(f)(K+T-1)+1$} \\ \hline \end{tabular} \vspace{1mm} \caption{Comparison between BGW based designs and LCC. The computational complexity is normalized by that of evaluating $f$; randomness, which refers to the number of random entries used in encoding functions, is normalized by the length of $X_i$. } \vspace{-7mm} \label{table:BGW} \end{table} Recently,~\cite{MaddahAli} has also combined ideas from the BGW scheme and~\cite{NIPS2017_7027} to form \textit{polynomial sharing}, a {private} coded computation scheme for arbitrary matrix polynomials. However, polynomial sharing inherits the undesired BGW property of performing a communication round for \textit{every} bilinear operation in the polynomial; a feature that drastically increases communication overhead, and is circumvented by the one-shot approach of LCC. \textit{DRACO}~\cite{chen2018draco} was also recently proposed as a secure computation scheme for gradients. Yet, DRACO employs a blackbox approach, i.e., the resulting gradients are encoded rather than the data itself, and the inherent algebraic structure of the gradients is ignored. For this approach, \cite{chen2018draco} shows that a~$2A+1$ \textit{multiplicative} factor of redundant computations is necessary. In LCC, however, the blackbox approach is disregarded in favor of an algebraic one, and consequently, a~$2A$ \textit{additive} factor suffices.
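The resource comparison between one-shot BGW and LCC can be tallied numerically; a minimal Python sketch of the bookkeeping discussed above (it encodes no data, and the parameter values $K=10$, $T=2$, $\deg(f)=2$ are illustrative):

```python
# Resource counts for one-shot BGW vs. LCC with S = A = 0.
# K data blocks, T-privacy, polynomial degree degf.
def bgw(K, T, degf):
    return {"randomness": K * T,       # T fresh masks per input X_i
            "stored_blocks": K,        # every worker stores K coded shares
            "min_workers": degf * T + 1}   # interpolate f(P_i(z)), degree degf*T

def lcc(K, T, degf):
    return {"randomness": T,           # T masks total, appended to u(z)
            "stored_blocks": 1,        # one coded block per worker
            "min_workers": degf * (K + T - 1) + 1}

print(bgw(10, 2, 2))   # {'randomness': 20, 'stored_blocks': 10, 'min_workers': 5}
print(lcc(10, 2, 2))   # {'randomness': 2, 'stored_blocks': 1, 'min_workers': 23}
```

The factor-of-$K$ savings in randomness and per-worker storage, traded against a larger minimum number of workers, are visible directly in the counts.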
LCC has also been recently applied to several applications in which security and privacy in computations are critical. For example, in~\cite{li2018polyshard}, LCC has been applied to enable a scalable and secure approach to sharding in blockchain systems. Also, in~\cite{so2019codedprivateml}, a privacy-preserving approach for machine learning has been developed that leverages LCC to provide substantial speedups over cryptographic approaches that rely on MPC.
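As a self-contained illustration of the encoding, computation, and decoding steps of Subsection~\ref{section:GeneralDescription}, the following toy Python sketch runs LCC over $\mathbb{F}_{31}$ with scalar inputs and $f(x)=x^2$, for $K=2$, $T=1$, $S=1$, $A=0$ (with no adversaries, plain Lagrange interpolation stands in for Reed-Solomon decoding). All concrete values are illustrative.

```python
import random

p = 31                                   # small prime field GF(31)

def lagrange_eval(xs, ys, z):
    """Evaluate, at z, the unique polynomial of degree < len(xs)
    passing through the points (xs, ys), with arithmetic mod p."""
    total = 0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for k, xk in enumerate(xs):
            if k != j:
                num = num * (z - xk) % p
                den = den * (xj - xk) % p
        total = (total + yj * num * pow(den, p - 2, p)) % p   # modular inverse
    return total

K, T, S = 2, 1, 1
f = lambda x: x * x % p                  # deg(f) = 2
X = [5, 7]                               # data blocks X_1, X_2
Z = [random.randrange(p)]                # T = 1 uniformly random mask
betas  = [1, 2, 3]                       # u(beta_i) = X_i, u(beta_3) = Z_1
alphas = [4, 5, 6, 7, 8, 9]              # N = deg(f)(K+T-1) + S + 1 = 6 workers

# Encoding: worker i stores the single coded block u(alpha_i).
coded = [lagrange_eval(betas, X + Z, a) for a in alphas]

# Computation: each worker applies f; worker 0 straggles and is dropped.
answers = [(a, f(c)) for a, c in zip(alphas, coded)][1:]

# Decoding: f(u(z)) has degree at most 4, so 5 evaluations determine it;
# evaluating at beta_1, beta_2 recovers f(X_1), f(X_2).
xs, ys = zip(*answers)
recovered = [lagrange_eval(xs, ys, b) for b in betas[:K]]
print(recovered)   # [25, 18], i.e., [5**2 % 31, 7**2 % 31]
```

Because the random mask $Z_1$ drops out in decoding, the recovered values are exact regardless of its draw, while any single worker's stored block remains uniformly masked.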
\subsection{{HTM+PM} Basics} Hardware Transactional Memory, or HTM, was introduced in~\cite{htm} as a new, easy-to-use method for lock-free synchronization supported by hardware. The initial instructions for HTM included transactional load and store instructions in addition to transaction-management instructions. Most HTM implementations extend an underlying cache-coherency protocol to detect transactional conflicts during program execution. The hardware system performs a speculative execution on a demarcated region of code, similar to an atomic section. Independent transactions (those that do not write a shared variable) proceed unrestricted through their HTM sections. Transactions that access common variables concurrently in their HTM sections, with at least one transaction performing a write, are serialized by the HTM. That is, all but one of the transactions are aborted; an aborted transaction restarts its HTM code from the beginning. Updates made within the HTM section are hidden from other transactions and are prevented from being written to memory until the transaction successfully completes the HTM section. This mechanism provides atomicity (all-or-nothing) semantics for individual transactions with respect to visibility by other threads, and serialization of conflicting, dependent transactions. However, HTM was originally designed for volatile memory systems (rather than supporting database-style ACID transactions), and therefore any power failure leaves main memory in an unpredictable state relative to the actual values of the transaction variables. Persistent Memory, or {\small PM}, introduces a new method of persistence to the processor: PM, in the form of persistent DIMMs, resides on the main-memory bus alongside {\small DRAM}. Software can access persistent memory using the usual {\small LOAD} and {\small STORE} instructions used for {\small DRAM}.
Like other memory variables, PM~ variables are subject to forced and involuntary cache evictions and to other deferred memory operations performed by the processor. For Intel {\small CPU}s, the {\small CLWB} and {\small CLFLUSHOPT} instructions provide the ability to flush modified data (at cache-line granularity) out of the processor cache hierarchy. These instructions, however, are weakly ordered with respect to other store operations in the instruction stream. Intel has extended the semantics of SFENCE to cover such flushed store operations, so that software can issue SFENCE to prevent new stores from executing until previously flushed data has entered a power-safe domain; i.e., the data is guaranteed by hardware to reach its locations in the PM~ media. This guarantee also applies to data that is written to PM~ with instructions that bypass the processor caches. However, when executing within an HTM transaction, a CPU cannot exercise CLWB, CLFLUSHOPT, non-cacheable stores, or SFENCE instructions, since the stores by the CPU are considered speculative until the HTM transaction completes successfully. Even though HTM guarantees that transactional values are only visible on transaction completion, hardware manufacturers cannot simply utilize a non-volatile processor cache hierarchy or battery-backed flushing of the cache on failures to provide transactional atomicity. Transactions that do not complete before a software or hardware restart produce partial, and therefore inconsistent, updates in non-volatile memory, as there is no guarantee when a machine halt will occur. The halt may happen during XEND execution, leaving only partial updates in cache or write buffers, which can corrupt in-memory data structures.
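The partial-update hazard described above is what persistent transaction schemes guard against: a transaction's writes must become durable all-or-nothing. The following toy Python simulation sketches the generic redo-log discipline underlying such schemes (it is not the actual WrAP protocol; `pm_home`, `pm_log`, and the crash flag are hypothetical stand-ins for persistent locations):

```python
# Toy crash-atomicity simulation: transactional writes are first appended
# to a persistent redo log, and only replayed into "home" locations once a
# commit record has been logged. A crash at any point loses nothing
# committed and exposes nothing uncommitted.
pm_home = {"x": 0, "y": 0}     # persistent home locations
pm_log = []                    # persistent redo log

def run_txn(writes, crash_before_commit=False):
    for addr, val in writes.items():
        pm_log.append(("write", addr, val))    # log entry persisted first
    if crash_before_commit:
        return                                  # power failure: no commit record
    pm_log.append(("commit",))

def recover():
    """Replay only fully committed transactions, then clear the log."""
    pending = []
    for entry in pm_log:
        if entry[0] == "write":
            pending.append(entry[1:])
        else:                                   # commit record reached
            pm_home.update(dict(pending))
            pending = []
    pm_log.clear()                              # uncommitted tail is discarded

run_txn({"x": 1, "y": 2})                               # committed
run_txn({"x": 99, "y": 99}, crash_before_commit=True)   # torn by "crash"
recover()
print(pm_home)   # {'x': 1, 'y': 2} -- the torn transaction left no trace
```

The real mechanisms discussed in this paper must additionally order the log writes ahead of the home writes in hardware (e.g., via flushes and fences outside the HTM region), which this sketch abstracts away.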
\subsection{Benchmarks} The Scalable Synthetic Compact Applications for benchmarking High Productivity Computing Systems~\cite{ssca}, {\small SSCA2}, is part of the Stanford Transactional Applications for Multi-Processing~\cite{stamp}, or {\small STAMP}, benchmark suite. {\small SSCA2} uses a large memory area and has multiple kernels that construct a graph and perform operations on it. We executed the {\small SSCA2} benchmark with scale 20, which generates a graph with over 45 million edges. We increased the number of threads from 1 to 16 in powers of two and recorded the execution time of the kernel for each method. \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-30.png} \caption{{\small SSCA2} Benchmark Compute Graph Kernel Execution Time as a Function of the Number of Parallel Execution Threads} \label{fig:s30} \end{figure} \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-31.png} \caption{{\small SSCA2} Benchmark Compute Graph Kernel Speedup as a Function of the Number of Parallel Execution Threads} \label{fig:s31} \end{figure} \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-32.png} \caption{{\small SSCA2} Benchmark Compute Graph Kernel {\small HTM} Aborts as a Function of the Number of Parallel Execution Threads} \label{fig:s32} \end{figure} Figure~\ref{fig:s30} shows the execution time of each method for the Compute Kernel in the {\small SSCA2} benchmark as a function of the number of threads. Each method reduces the execution time with increasing numbers of threads. Our WrAP approach has execution time similar to {\small HTM} in the cache hierarchy with no persistence, and is over 2.25 times faster than the persistent {\small PTL-Eager} method writing to PM.
Figure~\ref{fig:s31} shows the speedup for each method as a function of the number of threads, measured against a single-threaded undo log for the persistence methods and against no persistence for the cache-only {\small HTM} method. Even though the {\small HTM} (cache-only) method does better in absolute terms, as we saw in Figure~\ref{fig:s30}, it proceeds from a higher baseline for single-threaded execution. {\small PTL-Eager} yields significantly weaker scalability due to the inherent costs of having to perform persistent flushes within its concurrent region. Figure~\ref{fig:s32} shows the number of hardware aborts for both our WrAP approach and cache-only {\small HTM}. Our approach introduces extra writes to log the write-set and, along with reading the system time stamp, extends the transaction time. However, as shown in the figure, this only slightly increases the number of hardware aborts. \begin{figure}[t] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-33.png} \caption{Vacation Benchmark Execution Time as a Function of the Number of Parallel Execution Threads} \label{fig:s33} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-34.png} \caption{Vacation Benchmark Execution Time with Various Persistent Memory Write Times for Four Threads} \label{fig:s34} \end{figure} We also evaluated the {\it Vacation} benchmark, which is part of the {\small STAMP} benchmark suite. The {\it Vacation} benchmark emulates database transactions for a travel reservation system. We executed the benchmark with the low option to emulate lower contention. Figure~\ref{fig:s33} shows the execution time for each method for the {\it Vacation} benchmark as a function of the number of threads. Each method reduces the execution time with increasing numbers of threads.
The WrAP approach follows trends similar to {\small HTM} in the cache hierarchy with no persistence, with both approaches flattening execution time after 4 threads. We examine the effect of $strict$ durability, WrAP-Strict in the figure, and show that $strict$ durability introduces only a small amount of overhead. For a single thread, it has the same performance as relaxed WrAP: the thread need not wait on other threads, as it is durable as soon as the transaction completes. Additionally, we examined the effect of increased Persistent Memory~ write times on the benchmark. Byte-addressable Persistent Memory can have longer write times than DRAM. To emulate the longer write times for PM, we insert a delay after non-temporal stores when writing to new cache lines and a delay after cache line flushes. The write delay can be tuned to emulate the effect of longer write times typical of PM. Figure~\ref{fig:s34} shows the {\it Vacation} benchmark execution time for various PM~ write times. The WrAP approach is less affected by increasing PM~ write times than the {\small PTL-Eager} approach due to several factors. WrAP performs write-combining for log entries on the foreground path for each thread, so writes to several transaction variables may be combined into fewer writes. Also, {\small PTL-Eager} transactionally persists an undo log on writes, causing a foreground delay. \subsection{Hash Table} Our next series of experiments shows how transaction size and high memory traffic affect overall performance. We create a 64 MB Hash Table Array of elements in main memory and transactionally perform a number of element updates. For each transaction, we generate a set of random numbers of a configurable size, compute their hashes, and write the values into the Hash Table Array.
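The per-transaction work in this microbenchmark can be sketched as follows; this is a simplified Python rendering for illustration only (no HTM region, no persistence logging, and a table scaled down from 64 MB):

```python
import random

TABLE_SLOTS = 1 << 20                  # scaled down from the 64 MB array
table = [0] * TABLE_SLOTS              # stands in for the PM-resident array
rng = random.Random(1)

def hash_txn(n_updates=10):
    """One transaction: hash n_updates random keys and store them in slots.
    In the real benchmark this body runs inside an HTM region, with the
    write set logged for persistence; here it is plain sequential code."""
    for _ in range(n_updates):
        key = rng.getrandbits(64)
        table[hash(key) % TABLE_SLOTS] = key

for _ in range(1000):                  # 1000 transactions of 10 updates each
    hash_txn()
```

Varying `n_updates` corresponds to the write-set sizes (10, 20, and 2-30 elements) swept in the experiments below.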
\begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-35.png} \caption{Millions of Transactions per Second for Hash Table Updates of 10 Elements versus Concurrent Number of Threads} \label{fig:s35} \end{figure} \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-35b.png} \caption{Millions of Transactions per Second for Hash Table Updates of 20 Elements versus Concurrent Number of Threads} \label{fig:s35b} \end{figure} First, we create transactions consisting of 10 atomic updates, vary the number of concurrent threads, and measure the maximum throughput. We perform 1 million updates, record the average throughput, and plot the results in Figure~\ref{fig:s35}. Our approach achieves roughly 3x the throughput of {\small PTL-Eager}. Figure~\ref{fig:s35b} shows that increasing the write set to 20 atomic updates yields similar performance. In both figures, adding $strict$ durability only slightly decreases the overall performance; threads wait additional time for dependencies on other transactions to clear before continuing to the next transaction. \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-36.png} \caption{Average Txps for Increasing Transaction Sizes of Atomic Hash Table Updates with 6 Concurrent Threads} \label{fig:s36} \end{figure} \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-36c.png} \caption{Average Txps for Write / Read Percentage of Atomic Hash Table Updates with 6 Concurrent Threads} \label{fig:s36c} \end{figure} The transaction write set was then varied from 2 to 30 elements with 6 concurrent threads. The average throughput was recorded and is shown in Figure~\ref{fig:s36}. Even with strict durability added, WrAP performs roughly three times faster than {\small PTL-Eager}. We then fixed the transaction size at ten elements and varied the write-to-read ratio with 6 concurrent threads.
The average throughput was recorded and is shown in Figure~\ref{fig:s36c}. Unlike transactional memory approaches, our approach does not require instrumenting read accesses and can therefore execute reads at cache speeds. \subsection {Red-Black Tree} \begin{figure}[th] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-38.png} \caption{Millions of Transactions per Second for Atomic Red-Black Tree Element Inserts versus Number of Concurrent Threads} \label{fig:s38} \end{figure} We use the transactional Red-Black tree from {\small STAMP}~\cite{stamp}, initialized with 1 million elements. We then perform insert operations on the Red-Black tree and record average transaction times and throughput over 200k additional inserts. Each transaction inserts an additional element into the Red-Black tree. Inserting an element into a Red-Black tree first requires finding the insertion point, which can take many read operations, and the insert can trigger many writes through rebalancing. In our experiments, we averaged 63 reads and 11 writes per transactional insert of one element into the Red-Black tree. We record the maximum throughput of inserts into the Red-Black tree per second for a varying number of threads in Figure~\ref{fig:s38}. As can be seen in the figure, WrAP achieves almost 9x higher throughput than {\small PTL-Eager}, and with strict durability it is still almost 6x faster. Our method can perform reads at the speed of the hardware, while {\small PTL-Eager} requires instrumenting reads through software to track dependencies on other concurrent transactions. \subsection {\PMCN~ Analysis} \label{sec:simulation} We investigated the required length of our {\small FIFO} in the Volatile Delay Buffer, and performance with respect to Persistent Memory~ write times, using an approach similar to that of~\cite{zhao13}. In the absence of readily available memory controllers, we modified the McSimA+ simulator~\cite{ahn2013}.
McSimA+ is a PIN~\cite{pin05} based simulator that decouples execution from simulation and tightly models out-of-order processor micro-architecture at the cycle level. We extended the simulator to support the notifications for opening and closing WrAPs, along with extended support for memory reads and writes. We added support for {\small DRAM}Sim2~\cite{dramsim2}, a cycle-accurate memory system and {\small DRAM} memory controller model library. Write-combining and store buffers were then added with multiple configuration options to allow fine tuning to match the system to be modeled. \begin{figure}[t] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-h1.png} \caption{Average Atomic Hash Table 10 Element Update with Various Persistent Memory~ Write Times with 8 Concurrent Threads} \label{fig:h1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-h3.png} \caption{Maximum {\small FIFO} Queue Length for Atomic Hash Table 10 Element Update with 4 Concurrent Threads} \label{fig:h3} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-h11.png} \caption{Average B-Tree Atomic Element Insert with Various Persistent Memory~ Write Times with 8 Concurrent Threads} \label{fig:h11} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.05in,height=2.44in]{fig-spaa-h13.png} \caption{Maximum {\small FIFO} Queue Length for B-Tree Atomic Element Insert with 8 Concurrent Threads} \label{fig:h13} \end{figure} To stress the Persistent Memory Controller, we executed an atomic hash table update without any thread contention by having each thread update elements on a separate portion of the table. In the simulation, we fill the cache with dirty cache lines so that each write by a thread in a transaction generates write-backs to main Persistent Memory. For 8 threads, we recorded the average atomic hash table update time for 10 elements in each transaction.
We then vary the Persistent Memory~ write time as a multiple of the {\small DRAM} write time. As shown in Figure~\ref{fig:h1}, WrAP is less affected by increasing write times when compared to {\small PTL-Eager}. Additionally, we record the maximum {\small FIFO} buffer size for various Persistent Memory~ write times and 4 concurrent threads, shown in Figure~\ref{fig:h3}. Initially, the buffer size decreases for an increasing PM~ write time, due to slower transaction throughput and fewer cache evictions into the buffer. As the write time increases further, the buffer length increases, but remains less than 1k cache lines, or 64{\small KB}. We performed a similar analysis using a B-Tree, where eight concurrent threads each atomically insert elements into their own copy of a B-Tree initialized with 128 elements. Each insert into the tree required, on average, over 5 times as many reads as writes. As shown in Figure~\ref{fig:h11}, our method is less affected by increasing PM~ write times, since {\small PTL-Eager} must instrument a large portion of the read operations. We also examined the {\small FIFO} buffer length in the {\small VDB} with 8 concurrent threads. As more reads than writes are generated for each atomic insert transaction, the {\small FIFO} buffer length remains small: Figure~\ref{fig:h13} shows the length was less than about 100 elements for each write speed, due to the large proportion of reads. \vspace{9pt} \subsection{Example} \label{sec:example} Figure~\ref{fig:example} shows an example set of four transactions, T1-T4, happening concurrently. In this example, we show T1-T4 split into states: specifically, concurrency ({\small COMPUTE}), shown with vertical lines, and {\small LOG}, depicted with slanted lines.
At certain time steps we show the contents of the Persistent Memory Controller's Volatile Delay Buffer, or {\small VDB}, which is a {\small FIFO} queue, and the Current Open Transactions~ or {\small COT}~ in Figure~\ref{table:nvm}. The recovery ordering is shown in Figure~\ref{table:recover} with the contents of the log: for each transaction, either only its start timestamp or its position in the persist timestamp order is shown; bold and underlined entries indicate that the log has been written, and a circled transaction is recoverable. First, at time $t1$, T1 opens, notifying the controller, and records its start timestamp safely in its log. The controller adds T1 to the bitmap {\small COT}~ of open transactions. At times $t2$ and $t3$, transactions T2 and T3 also open, notify the controller, and safely read and persist their start timestamps. At this point in time the \PMCN~ has a {\small COT}~ of \{1,1,1,0\} and only start timestamps have been recorded in the log. T2 then completes its concurrency section, persists its persist timestamp at time $t4$, and begins writing its log. In Figure~\ref{table:nvm}, we also show a random cache eviction of a cache line $X$ that is tagged with the {\small COT}~ of \{1,1,1,0\}. Transaction T4 starts at time $t5$, persisting its start timestamp, and is added to the set of open transactions on the Persistent Memory Controller, now \{1,1,1,1\}. At time $t6$ we illustrate several events. T3 completes its concurrency section and persists its persist timestamp. At this time, as shown in Figure~\ref{table:recover}, T3 is now ordered after T2 for recovery; however, neither has completed persisting its log. Also, we show a random cache eviction of cache line $Y$, which is placed at the back of the {\small VDB} on the \PMCN~ as shown in Figure~\ref{table:nvm}. At time $t7$, transaction T3 has completed writing its logs, is marked completed, and is removed from the dependency set in the controller and from the cache line dependencies for $X$ and $Y$.
However, as shown in Figure~\ref{table:recover}, T3 is not recoverable at this point since it is not first in the persist timestamp order: T3 is behind T2, and T3 also has a smaller persist timestamp than the start timestamp of T1, which hasn't written its persist timestamp yet. When T2 completes writing its logs at time $t8$, it is removed from the current and dependency sets in the queue of the \PMCN~ as shown in Figure~\ref{table:nvm}. Note that cache line $X$ still cannot be written back to Persistent Memory~ as it is still tagged as being dependent on T1, and $Y$ on both T1 and T4. At this time, T2 and T3 are also not recoverable, as shown in Figure~\ref{table:recover}: T2 has a persist timestamp that is greater than the start timestamp of T1, and a recovery process could not tell whether T1 had simply been delayed in persisting its log while T2 held transactional values dependent on T1. We illustrate two events at time $t9$. T1 finally completes its concurrency section and writes its persist timestamp. Since the persist timestamp of T1 is safely persisted and known at recovery time, transaction T2 is now fully recoverable, as shown circled in Figure~\ref{table:recover}. However, at this time, T3 is not fully recoverable since it is waiting on T4, which started before T3 completed its concurrency section and hasn't yet written its persist timestamp. Also at $t9$, in Figure~\ref{table:nvm} we illustrate the eviction of cache line $Z$, which is tagged with the set of open transactions {\small COT} \{1,0,0,1\}. At time $t10$, T4 writes its persist timestamp, and its order is now known to a recovery routine to be behind T3, which is now fully recoverable as shown with the circle in Figure~\ref{table:recover}. Note that T4 has a persist time before T1. In Figure~\ref{table:nvm}, we also illustrate the eviction of cache line $X$ again into the {\small VDB} of the \PMCN~ and tagged with the set of the two open transactions, T1 and T4.
Note that there are two copies of cache line $X$ in the controller. The one at the head of the queue has fewer dependencies (only on T1) than the recent eviction. Any subsequent read of cache line $X$ returns the most recent copy, the last entry in the {\small VDB}. Note how cache lines at the back of the queue have dependency set sizes that are greater than or equal to those of entries earlier in the queue. T1 completes log writing at $t11$, but is behind T4, which hasn't yet finished writing its logs, so neither is yet recoverable. The PM~ controller also removes T1 from its dependency set and from those in the {\small VDB}. The first copy of $X$ now has no dependencies in the queue and is safely written back to PM~ as shown in Figure~\ref{table:nvm}. At time $t12$, T4 completes writing its logs and both T4 and T1 are recoverable. Also, T4 is removed from the dependency sets in the controller, which allows $Y$, $Z$, and $X$ to flow to PM~. {\bf Strict Durability:~} Suppose a transaction requires $strict$ durability during its Commit Stage, ensuring that once it completes, the transactional writes will be reflected in PM~ if a failure were to occur. If T4 requires $strict$ durability, it is simply durable at the end, as there are no open transactions when it completes. However, T1, T2, and T3 have other constraints. A transaction requiring strict durability is only durable when it is fully recoverable. Figure~\ref{table:recover} indicates that a transaction is durable when it is circled. T1 must wait until step $t12$ if it requires strict durability, as it might have dependencies on T4. T2 is fully durable at time $t9$ when T1, which started earlier, writes its persist timestamp. At time $t10$, T3 is fully durable when T4, which started before T3 completed its concurrency section and could have introduced transactional dependencies, writes its persist timestamp, which indicates that T4's {\small HTM} section started later.
\subsection{System Time Stamp} \label{sec:impl:ts} We use the recent Intel instruction {\small RDTSCP}, or Read Time Stamp Counter and Processor ID, to obtain the timestamps in Algorithm~\ref{algo:SwWrapAlgo}. The {\small RDTSCP} instruction provides access to a global, monotonically increasing processor clock across processor sockets~\cite{spear13}, while serializing itself behind the instructions that precede it in program order. To prevent the reordering of an {{\small XE}nd} before the {\small RDTSCP} instruction, we save the resulting time stamp into a volatile memory address. Since all stores preceding an {{\small XE}nd} become visible after {{\small XE}nd}, and the store of the persist timestamp is the last store before {{\small XE}nd}, that store is neither reordered before the other stores nor after the end of the HTM transaction. We note that {\small RDTSCP} has also been used to order HTM transactions in novel transaction profiling~\cite{tsxprof} and memory version checkpointing~\cite{ismm17} tools.
\subsection{Software Library} \label{sec:impl:sw} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth ]{fig-wrap-detail.png} \caption{Flow of a transaction with implementation using HTM, cached write sets, timing, and logging.} \label{fig:fig-wrap-detail} \end{figure} \begin{algorithm}[h] \caption{Concurrent WrAP User Software Library} \BlankLine {\bf User Level Software WrAP Library:}\\ \BlankLine \BlankLine \FuncSty{OpenWrapC} \ArgSty{()} \\ \hspace{20pt} {\it // ------------ State: OPEN --------------}\\ \hspace{20pt} $wrapId$ = threadId\; \hspace{20pt} Notify Controller Open Wrap $wrapId$\; \hspace{20pt} $startTime$ = RDTSCP()\; \hspace{20pt} Log[$wrapId$].startTime = startTime\; \hspace{20pt} Log[$wrapId$].writeSet = \{\}\; \hspace{20pt} sfence()\; \hspace{20pt} CLFLUSH Log[$wrapId$]\; \hspace{20pt} sfence()\; \hspace{20pt} {\it // ------------ State: COMPUTE ---------}\\ \hspace{20pt} HTMBegin(); \hspace{5pt} {\it // XBegin}\\ \BlankLine \BlankLine \FuncSty{wrapStore} \ArgSty{({\bf addrVar}, {\bf Value})} \\ \hspace{20pt} Add $ \{addrVar, Value\} $ to {\it Log[$wrapId$].writeSet}\; \hspace{20pt} Normal store of {\it Value} to {\it addrVar}\; \BlankLine \BlankLine \FuncSty{CloseWrapC} \ArgSty{({\bf strictDurability})}\\ \hspace{20pt} Log[$wrapId$].persistTime = RDTSCP()\; \hspace{20pt} HTMEnd(); \hspace{5pt} {\it // XEnd}\\ \hspace{20pt} {\it // ------------ State: LOG ------------------}\\ \hspace{20pt} CLFLUSH Log[$wrapId$].persistTime\; \hspace{20pt} for num cachelines in Log[$wrapId$].writeSet\\ \hspace{40pt} CLFLUSH cacheline\; \hspace{20pt} if ($strictDurability$) \\ \hspace{40pt} $durabilityAddr$ = 0; // Reset Flag\; \hspace{40pt} $tAddr$ = $durabilityAddr$\; \hspace{20pt} else $tAddr$ = 0\; \hspace{20pt} sfence()\; \hspace{20pt} {\it // ------------ State: CLOSE ---------------}\\ \hspace{20pt} Notify Controller Closed ($wrapId$, $tAddr$)\; \hspace{20pt} {\it // ------------ State: COMMIT ------------}\\ \hspace{20pt} if 
($strictDurability$)\\ \hspace{40pt} // Wait for durable notification from controller\\ \hspace{40pt} Monitor $durabilityAddr$\; \label{algo:SwWrapAlgo} \vspace{0pt} \end{algorithm} For HTM we employ Intel's implementation of Restricted Transactional Memory, or RTM, which includes the instructions XBegin and XEnd. Aborting HTM transactions retry with exponential back-off a few times, and are then performed under a software lock. Our HTMBegin routine checks the status of the software lock both before and after an XBegin to correctly detect conflicts with the non-speculative paths, and acquires the software lock non-speculatively after having backed off. The HTMBegin and HTMEnd library routines themselves perform the acquire and release of the software lock for the fallback case. The remaining software library procedures are shown in Algorithm~\ref{algo:SwWrapAlgo}. Various events that arise in the course of a transaction are shown in Figure~\ref{fig:fig-wrap-detail}, which depicts the HTM concurrency section with vertical lines and the logging section with slanted lines. Not shown in Figure~\ref{fig:fig-wrap-detail} is a per-thread {\it durability address} location that we call {\it durabilityAddr} in Algorithm~\ref{algo:SwWrapAlgo}. A software thread may use it to set up Monitor-Mwait coordination, signaled via memory by the \PMCN~ (as described shortly), when the thread wants to wait until all updates from any non-conflicting transactions that may have raced with it are confirmed to be recoverable. This provision allows implementing strict durability for any transaction, because the logs of all other transactions that could possibly precede it in persistence order are then in PM~ -- which guarantees the replayability of its log.
By contrast, many other transactions that only need the consistency guarantee (correct log ordering) may continue without waiting (or defer waiting to a different point in the course of a higher level multi-transaction operation). The number of active HTM transactions at any given time is bounded by the number of CPUs; therefore, we use thread identifiers as wrapIds. In {\bf OpenWrapC} we notify the \PMCN~ that a wrap has started. We then read the start time with {\small RDTSCP} and persistently save it, along with an empty write set, into the transaction's log. The transaction is then started with the HTMBegin routine. During the course of a transactional computation, the stores are performed using the {\bf wrapStore} function. The stores are just ordinary (speculatively performed) store instructions, but are accompanied by (speculative) recording of the updates into the log locations, capturing an (address, value) pair for each update, to be committed into PM~ later during the logging phase (after XEnd). In {\bf CloseWrapC} we obtain and record the ending timestamp for an HTM transaction into the persistTime variable in its log. Its concurrency section is then terminated with the HTMEnd routine. At this point, the cached write set for the log and the ending persist timestamp become instantly visible in the cache. Next, we flush the transactional values and the persist timestamp to the log area, followed by a persistent memory fence. The transaction closure is then notified to the \PMCN~ with the wrapId, and along with it the durabilityAddr, if the thread has requested $strict$ durability (by passing a flag to {\bf CloseWrapC}) -- for which we use the efficient Monitor-Mwait construct to receive memory based signaling from the Persistent Memory Controller. If strict durability is not requested, then CloseWrapC can return immediately and let the thread proceed with $relaxed$ durability.
In many cases a thread performing a series of transactions may choose relaxed durability over all but the last and then request strict durability over the entire set by waiting for only the last one to be strictly durable. \subsection {PM Controller Implementation} \label{sec:impl:hw} \begin{algorithm}[t!] \caption{Hardware WrAP Implementation} \BlankLine {\bf \PMCN~:}\\ \BlankLine \BlankLine \FuncSty{Open Wrap Notification} \ArgSty{({\bf wrapId})} \\ \hspace{20pt} Add $wrapId$ to Current Open Transactions COT\; \BlankLine \BlankLine \FuncSty{Memory Write} \ArgSty{({\bf memoryAddr, data})}\\ \hspace{20pt} // Triggered from cache evict or stream store.\\ \hspace{20pt} if (($memoryAddr$ not in Pass-Through Log Area) and \\ \hspace{20pt} \hspace{5pt} (Current Open Transactions COT != \{\}))\\ \hspace{40pt} Add ($memoryAddr$, $data$, $COT$) to VDB\; \hspace{20pt} else\\ \hspace{40pt} // Normal memory write\\ \hspace{40pt} Memory[$memoryAddr$] = $data$\; \BlankLine \BlankLine \FuncSty{Memory Read} \ArgSty{({\bf memoryAddr})}\\ \hspace{20pt} if ($memoryAddr$ in Volatile Delay Buffer)\\ \hspace{40pt} return latest cacheline data from VDB\; \hspace{20pt} return Memory[$memoryAddr$]\; \BlankLine \BlankLine \FuncSty{Close WrAP Notification} \ArgSty{({\bf wrapId, durabilityAddr})}\\ \hspace{20pt} Remove $wrapId$ from Current Open Transactions COT\; \hspace{20pt} if ($durabilityAddr$)\\ \hspace{40pt} $durabilityDS$ = COT\; \hspace{40pt} // Add pair to Durability Wait Queue DWQ\; \hspace{40pt} Add ($durabilityDS$, $durabilityAddr$) to DWQ\; \hspace{20pt} Remove $wrapId$ from Volatile Delay Buffer elements\\ \hspace{20pt} if earliest VDB elements have empty DS\\ \hspace{40pt} // Write back entries to memory in FIFO order\\ \hspace{40pt} Memory[$memoryAddr$] = data\; \hspace{20pt} Remove $wrapId$ from Durability Wait Queue elements\\ \hspace{20pt} if earliest DWQ elements have empty DS\\ \hspace{40pt} // Notify waiting thread of durability\\ \hspace{40pt} Memory[$durabilityAddr$] 
= 1\; \label{algo:HwWrapAlgo} \vspace{0pt} \end{algorithm} The \PMCN~ provides for two needs: (1) holding back modified PM~ cachelines evicted from the processor caches at any time T, preventing them from flowing into PM~ until at least a time when all successful transactions that were active at time T are recoverable, and (2) tracking the ordering of dependencies among transactions so that only those that need strict durability guarantees are delayed pending the completion of the log phases of the transactions with which they overlap. It implements a Volatile Delay Buffer (VDB) as the transient storage for the first need, a Durability Wait Queue (DWQ) for the second need, and dependency set (DS) tracking logic across the open/close notifications to control the VDB and the DWQ, as described next. {\bf Volatile Delay Buffer ({\small VDB}):~~} The {\small VDB} consists of a {\small FIFO} queue and a hash table that points to entries in the {\small FIFO} queue. Each entry in the {\small FIFO} queue contains a tuple of {\it PM~ address, data,} and {\it dependency set}. On a PM~ write -- resulting from a cache eviction or streaming store -- to a memory address not in the log area or pass-through area, the PM~ address and data are added to the {\small FIFO} queue and tagged with a dependency set initialized to the {\small COT}. Additionally, the PM~ address is inserted into the hash table with a pointer to the {\small FIFO} queue entry. If the address already exists in the hash table, then it is updated to point to the new queue entry. On a memory read, the hash table is first consulted. If an entry is in the hash table, then the pointer is to the latest memory value for the address, and the data is retrieved from the queue. On a hash table miss, PM~ is read and the data is returned. As wraps close, the dependency set in each entry in the queue is updated to remove the dependency on the wrap.
Dependency sets become empty in {\small FIFO} order, and as they become empty, we perform three actions. First, we write back the data to the PM~ address. Next, we consult the hash table. If the hash table entry points to the current {\small FIFO} queue entry, we remove the entry from the hash table, since we know there are no later entries for the same memory address in the queue. Finally, we remove the entry from the {\small FIFO} queue. On inserting an entry at the back of the queue, we can also consult the head of the {\small FIFO} queue to check whether its dependency set is empty. If the head has an empty dependency set, we can perform the same actions, allowing for O(1) {\small VDB} management. {\bf Durability Wait Queue ({\small DWQ}):~~} Strict durability is handled by the \PMCN~ using the Durability Wait Queue, or {\small DWQ}, which tracks transactions waiting on others to complete and notifies a waiting transaction when it is safe to proceed. The {\small DWQ} is a {\small FIFO} queue similar to the {\small VDB}, with entries containing pairs of a {\it dependency set} and a {\it durability address}. When a thread notifies the \PMCN~ that it is closing a transaction (see steps below), it can request $strict$ durability by passing a $durability~address$. Dependencies on closing wraps are also removed from the $dependency~set$ of each entry in the {\small DWQ}. When a $dependency~set$ becomes empty, the controller writes to the corresponding $durability~address$ and removes the entry from the queue. Threads waiting on a write to the address can then proceed. {\bf Opening and Closing WrAPs:~~} As outlined in Algorithm~\ref{algo:HwWrapAlgo}, the controller supports two interfaces from software, namely those for {\it Open Wrap} and {\it Close Wrap} notifications, exercised from the user library as shown in Algorithm~\ref{algo:SwWrapAlgo}.
(Implementations of these notifications can vary: for example, one possible mechanism may consist of software writing to a designated set of control addresses.) The controller also implements hardware operations against the VDB from the processor caches: Memory Write, for handling modified cachelines evicted from the processor caches or non-temporal stores from CPUs, and Memory Read, for handling reads from PM~ issued by the processor caches. The Open Wrap notification simply adds the passed {\it wrapId} to a bit vector of open transactions. We call this bit vector the Current Open Transactions {\small COT}. When the controller receives a Memory Write (i.e., a processor cache eviction or a non-temporal/streaming/uncached write) it checks the {\small COT}: if the {\small COT} is empty, writes can flow into the PM. Writes that target the log range in PM~ can also flow into PM~ irrespective of the {\small COT}. For the non-log writes, if the {\small COT} is nonempty, the cache line is tagged with the {\small COT} and placed into the VDB. The {\bf Close Wrap} controller notification receives the {\it wrapId} and the durability address, $durabilityAddr$. The controller removes the $wrapId$ from the Current Open Transactions {\small COT} bit mask. If the transaction requires $strict$ durability, we save the resulting {\small COT} as the $durabilityDS$ and add the ($durabilityDS$, $durabilityAddr$) pair to the {\small DWQ}. The controller then removes the {\it wrapId} from all entries in the {\small VDB} and {\small DWQ}. This is performed by simply clearing the corresponding bit in the dependency set bit mask across the entire {\small FIFO VDB}. If the earliest entries in the queue are left with an empty dependency set, their cache line data is written back in {\small FIFO} order. Similarly, when the earliest entries in the Durability Wait Queue {\small DWQ} are left with an empty dependency set, the controller notifies the waiting threads of durability.
{\bf Software Based Strict Durability Alternative:} As an alternative to implementing strict durability in the controller, strict durability may be implemented entirely in the software library; we modify the software algorithm as follows. On a transaction start, each thread saves the start time and an open flag in a dedicated cache line for the thread. On transaction close, to ensure strict durability, it saves its end time in the same cache line as the start time and clears the open flag. It then waits until all prior open transactions have closed: it scans the set of all per-thread cache lines and compares the start and end times of any open transactions with its own end time. The thread may only continue, with ensured durability, once every other thread is either not in an open transaction or has a start or persist time greater than its persist time. \subsection{Challenges of Persistent HTM Transactions} Consider transactions $A$, $B$ and $C$ shown in Listings~\ref{listing:ta},~\ref{listing:tb} and~\ref{listing:tc}. Assume that {\bf w}, {\bf x}, {\bf y}, {\bf z} are persistent variables initialized to zero in their home locations in Persistent Memory~ (PM). The code section demarcated between the instructions {\bf {\small XB}egin} and {\bf {\small XE}nd} will be referred to as an {\it {\small HTM} transaction} or simply a transaction. The {\small HTM} mechanism ensures the {\it atomicity} of transaction execution. Within an {\small HTM} transaction, all updates are made to private locations in the cache, and the hardware guarantees that the updates are {\it not allowed} to propagate to their home locations in PM. After the {\bf {\small XE}nd} instruction completes, all of the cache lines updated in the transaction become instantaneously visible in the cache hierarchy. \smallskip \noindent {\bf Atomic Persistence}: The first challenge is to ensure that the transaction's updates that were made atomically in the cache are also persisted atomically in PM.
Following {\bf {\small XE}nd}, the transaction variables are once again subject to the normal cache operations like evictions and the use of cache write-back instructions. There are no guarantees regarding whether or when the transaction's updates actually get written to PM~ from the cache. This can create a problem if the machine crashes before all these updates are written back to PM. On a reboot, the values of these variables in PM~ will be inconsistent with the pre-crash transaction values. This leads to the first requirement: \noindent \begin{itemize} \item {\it Following crash recovery, ensure that all or none of the updates of an {\small HTM} transaction are stored in their PM~ home locations.} \end{itemize} A common solution is to log the transaction updates in a separate persistent storage area before allowing them to update their PM~ home locations. Should a crash interrupt the updating of the home locations, the saved log can be replayed. When transactions execute within an {\small HTM} there is a problem with this solution since the log cannot be written to PM~ within the transaction and can be done only after the {\bf {\small XE}nd}. At that time the transaction updates are also made visible in the cache hierarchy and are susceptible to uncontrolled cache evictions into PM. Hence there is no guarantee that the log has been persisted before transaction updates have percolated into PM. We describe our solution in Section~\ref{sec:solution}. 
\begin{figure} \noindent\begin{minipage}{.14\textwidth} \begin{lstlisting}[caption={ A},language=C,frame=tlrb,label={listing:ta}] A() { XBegin; w = 1; x = w; XEnd; } \end{lstlisting} \end{minipage}\hfill \begin{minipage}{.14\textwidth} \begin{lstlisting}[caption={B},language=C,frame=tlrb,label={listing:tb}] B() { XBegin; w = w+1; y = w; XEnd; } \end{lstlisting} \end{minipage}\hfill \begin{minipage}{.14\textwidth} \begin{lstlisting}[caption={C},language=C,frame=tlrb,label={listing:tc}] C() { XBegin; w = w+1; z = w; XEnd; } \end{lstlisting} \end{minipage}\hfill \vspace{-10pt} \end{figure} \smallskip \noindent {\bf Persistence Ordering}: The second problem deals with ensuring that the {\it execution order} of dependent {\small HTM} transactions is correctly reflected in PM~ following crash recovery. As an example, consider the dependent transactions $A, B, C$ in Listings 1, 2 \& 3. The {\small HTM} will serialize their execution in some order: say $A$, $B$ and $C$. The values of the transaction variables following the execution of $A$ are given by the vector $V_1$ = [{\bf w, x, y, z}] = [$1, 1, 0, 0$]; after the execution of $B$ the vector becomes $V_2$ = [$2, 1, 2, 0$] and finally following $C$ it is $V_3$ = [$3, 1, 2, 3$]. Under normal operation the write backs of variables to PM~ from different transactions may become arbitrarily interleaved. For instance suppose that $x$ is evicted immediately after $A$ completes, $w$ after $B$ completes, and $z$ after $C$ completes. The persistent memory state is then [$2, 1, 0, 3$]; should the machine crash, the PM~ will contain this meaningless combination of values on reboot. A consistent state would be either the initial vector [$0, 0, 0, 0$] or one of $V_1, V_2$ or $V_3$. 
This leads to the second requirement: \noindent \begin{itemize} \item {\it Following crash recovery, ensure that the persistent state of any sequence of dependent transactions is consistent with their execution order.} \end{itemize} If individual transactions satisfy atomic persistence, then it is sufficient to ensure that PM~ is updated in transaction execution order. With software concurrency control (using an {\small STM} or two-phase transaction locking), it is straightforward to correctly order the updates simply by saving a transaction's log {\it before} it commits and releases its locks. In case of a crash, the saved logs are simply replayed in the order they were saved, thereby reconstructing the persistent state to a correctly-ordered prefix of the executed transactions. When {\small HTM} is used for concurrency control, the logs can only be written to PM~ {\it after} the transaction's {\bf {\small XE}nd}. At that time other dependent transactions can execute and race with the completed transaction, perhaps writing out their logs before the first. Solutions like using an atomic counter within transactions to order them correctly are not practical, since the shared counter will result in {\small HTM}-induced aborts and the serialization of all transactions. Some papers have advocated that processor manufacturers alter {\small HTM} semantics and implementation to allow selective writes to PM~ from within an {\small HTM}~\cite{avni1,avni2,dudetm}. We describe our solution, which requires no such intrusive processor changes, in Section~\ref{sec:solution}. \smallskip \noindent {\bf Strict and Relaxed Durability}: In traditional {\small ACID} databases, a committed transaction is guaranteed to be durable since its log is made persistent before it commits. We refer to this property as {\it strict durability}. In {\small HTM} transactions the log is written to PM~ {\it after} the {\bf {\small XE}nd} instruction, some time before the transaction commits.
A natural question is when it is safe for a transaction requiring strict durability to commit. It is generally not safe to commit a transaction $Y$ at the time it completes persisting its log, for the same reason that it is difficult to ensure persistence ordering. Due to races in the code outside the {\small HTM}, it is possible for an earlier transaction $X$ (on which $Y$ depends) to have completed but not yet persisted its log. When recovering from a crash that occurs at this time, the log of $Y$ should not be replayed since the log of the earlier transaction $X$ cannot be replayed. This leads to the third requirement: \noindent \begin{itemize} \item {\it Following crash recovery, strict durability requires that every committed transaction is persistent in PM.} \end{itemize} We define a new property known as {\it relaxed durability} that allows an individual transaction to opt for an early commit immediately after it persists its log. Requiring relaxed or strict durability is a local choice made individually by a transaction based on the application requirements. Transactions choosing relaxed durability face a window of vulnerability after they commit, during which a crash may result in their transaction updates not being reflected in PM~ after recovery. The gain is potentially reduced transaction latency. However, irrespective of the choice of the durability model by individual transactions, the underlying persistent memory will always recover to a consistent state, reflecting an ordered atomic prefix of every sequence of dependent transactions.
\section{Introduction} \input{intro} \section{Overview} \input{basics} \input{overview} \section{Our Approach} \label{sec:solution} \input{solution} \section{Algorithm and Implementation} \label{sec:impl} \input{impl} \input{example} \section{Evaluation} \input{eval} \section{Related Work} \label{sec:previous} \input{related} \section{Summary} \input{summary} \bibliographystyle{IEEEtran} \subsection{Transaction Lifecycle} \label{sec:solution:lifecycle} A transaction can be viewed as progressing through five states: {\bf {\small OPEN}}, {\bf {\small COMPUTE}}, {\bf {\small LOG}}, {\bf {\small CLOSE}} and {\bf {\small COMMIT}}, as shown in Listing~\ref{listing:lifecycle}. When a transaction begins, it calls the library function {\it OpenWrapC} (see Algorithm~\ref{algo:SwWrapAlgo} in Section~\ref{sec:impl:sw}). This function invokes the \PMCN~ with a small unique integer ({\it wrapId}) that identifies the transaction. The controller adds {\it wrapId} to the set of currently open transactions (referred to as {\small COT}) that it maintains (see Algorithm~\ref{algo:HwWrapAlgo} in Section~\ref{sec:impl:hw}). The transaction then allocates and initializes space in PM~ for its log and updates the {\it startTime} record of the log. The {\it startTime} is obtained by reading a system-wide platform timer using the {\small RDTSCP} instruction (see Section~\ref{sec:impl:ts}). In addition to {\it startTime}, the log includes a second timestamp, {\it persistTime}, that will be set just prior to completing the {\small HTM} transaction. The {\it writeSet} is a sequence of ($address, value$) pairs in the log that will be filled with the locations and values of the updates made by the {\small HTM} transaction. The log with its recorded {\it startTime} is then persisted using cache line write-back instructions ({\small CLWB}) and {\small SFENCE}. The transaction then enters the {\small COMPUTE} state by executing {\bf {\small XB}egin} and entering the {\small HTM} code section.
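The {\small OPEN}-state steps (read the platform timer, then make the log header durable before {\bf {\small XB}egin}) can be sketched in C. This is an illustration of the mechanism, not the implementation of {\it OpenWrapC}: the helper names ({\it read\_platform\_counter}, {\it persist\_range}, {\it TxLog}) are ours, and the {\small RDTSCP}/{\small CLWB}/{\small SFENCE} intrinsics are guarded so the sketch also compiles on targets without them.

```c
#include <stdint.h>
#include <stddef.h>
#if defined(__x86_64__)
#include <x86intrin.h>   /* __rdtscp; _mm_clwb requires CLWB support */
#endif

#define CACHE_LINE 64

/* Read the system-wide platform timer (RDTSCP on x86-64; a logical
 * counter elsewhere so the sketch remains portable). */
static uint64_t read_platform_counter(void) {
#if defined(__x86_64__)
    unsigned int aux;
    return __rdtscp(&aux);
#else
    static uint64_t tick;
    return ++tick;
#endif
}

/* Flush a byte range cache line by cache line, then fence, so the
 * log header is durable before the HTM section begins. */
static void persist_range(const void *addr, size_t len) {
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)addr + len;
    for (; p < end; p += CACHE_LINE) {
#if defined(__x86_64__) && defined(__CLWB__)
        _mm_clwb((void *)p);   /* write back without invalidating */
#else
        (void)p;               /* no CLWB available on this target */
#endif
    }
#if defined(__x86_64__)
    _mm_sfence();              /* order flushes before later stores */
#endif
}

typedef struct {
    uint64_t startTime;   /* persisted before XBegin */
    uint64_t persistTime; /* set just before XEnd, persisted after */
} TxLog;

/* OPEN state: record startTime and make the log header durable. */
static void tx_open(TxLog *log) {
    log->startTime = read_platform_counter();
    log->persistTime = 0;
    persist_range(log, sizeof *log);
}
```

The same `persist_range` helper is what the {\small LOG} state would use to flush the {\it writeSet} records after {\bf {\small XE}nd}.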
Within the {\small HTM} section, the transaction appends to {\it writeSet} the persistent variables that it writes. Note that the {\it writeSet} records will be held in cache during the {\small COMPUTE} state, since the state occurs within an {\small HTM}, and cannot be propagated to PM~ until {\bf {\small XE}nd} completes. Immediately before {\bf {\small XE}nd} the transaction obtains the second timestamp, {\it persistTime}, that will be used to order the transactions correctly. This timestamp is also obtained using the {\small RDTSCP} instruction. After executing {\bf {\small XE}nd}, the transaction enters the {\small LOG} state. It flushes its log records from the cache hierarchy into PM~ using cache line write-back instructions ({\small CLWB} or {\small CLFLUSHOPT}), following the last of them with an {\small SFENCE}. This ensures that all the log records have been persisted. After the {\small SFENCE} following the log record flushes, the transaction enters the {\small CLOSE} state. \begin{figure}[tp] \begin{minipage}{\linewidth} \begin{lstlisting}[caption={Transaction Structure},language=C,frame=tlrb,label={listing:lifecycle}] Transaction Begin ---------- State: OPEN ---------------- 1 NotifyPMController(open); 2 Allocate a Log structure in PM; 3 nowTime = ReadPlatformCounter(); 4 Log.startTime = nowTime; 5 Log.writeSet = {}; 6 Persist Log in PM; ---------- State: COMPUTE ------------- XBegin // Transaction Body of HTM // All reads and writes to transaction // variables are performed and also // appended to Log.writeSet.
7 endTime = ReadPlatformCounter(); XEnd ---------- State: LOG ----------------- 8 Log.persistTime = endTime; 9 Persist Log in PM; ---------- State: CLOSE --------------- 10 NotifyPMController(close); ---------- State: COMMIT -------------- 11 If (strict durability requested) Wait for notification by controller; 12 Transaction End \end{lstlisting} \end{minipage} \end{figure} In the {\small CLOSE} state the transaction signals the \PMCN~ that its log has been persisted in PM. The controller removes the transaction from its set of currently open transactions {\small COT}. It also reflects the closing in the state of evicted cache lines in its FIFO buffer, as described below in Section~\ref{sec:solution:controller}. A transaction requiring strict durability informs the controller at this time; the controller will signal the transaction in due course, when it is safe to commit, {\it i.e.}, when its updates are guaranteed to be durable. The transaction is then complete and enters the {\small COMMIT} state. If it requires strict durability it waits until it is signaled by the controller. Otherwise, it immediately commits and leaves the system. \begin{figure}[tp] \includegraphics[width=0.5\textwidth]{fig-controller.png} \caption{PM Controller with Volatile Delay Buffer, Current Open Transactions Set, and Dependency Wait Queue} \label{fig:controller} \end{figure} \subsection{\PMCN~} \label{sec:solution:controller} The \PMCN~ is shown in Figure~\ref{fig:controller}. While superficially similar to an earlier design~\cite{spaa17, doshi16, cal16}, this controller includes enhancements to handle the subtleties of using {\small HTM} rather than locking for concurrency control, and makes significant simplifications by shifting some of the responsibility for the maintenance of PM~ state to the recovery protocol.
A crucial function of the controller is to prevent any transaction's updates from reaching PM~ until it is safe for them to do so, without requiring detailed book-keeping about which transactions, currently active or previously completed, generated a specific update. It does this by enforcing two requirements before allowing an evicted dirty cache line (representing a transaction's update) to proceed to PM: (i) ensuring that the log of the transaction has been made persistent, and (ii) guaranteeing that the saved log will be replayed during recovery. The first condition is necessary (but not sufficient) for atomic persistence, guarding against a failure that occurs after only a subset of a transaction's updates have been persisted in PM. The second requirement arises because transaction logs are persisted outside the {\small HTM}, so there is no relation between the order in which transactions execute their {\small HTM} sections and the order in which their logs are persisted. To maintain correct persistence ordering, the recovery routine may therefore have to skip the replay of a saved log. We illustrate the issue in the example below. \smallskip \noindent {\bf Example}: Consider two dependent transactions {\small A} and {\small B}, {\bf {\small A}}: \{w = 3; x=1;\} and {\bf {\small B}}: \{y = w+1; z = 1;\}. Assume that {\small HTM} transaction {\small A} executes before {\small B}, but that {\small B} persists its log and closes before {\small A}. Suppose there is a crash just after {\small B} closes. The recovery routine will replay neither {\small A} nor {\small B}, since the log of the earlier transaction {\small A} is not available. This behavior is correct. \noindent Now consider the situation where $y$ (value $4$) is evicted from the cache and then written to PM~ after {\small B} persists its log. Once again, following a crash at this time, neither log will be replayed. However, atomic persistence is now violated since {\small B}'s updates are partially reflected in PM.
Note that this violation occurred even though the write back of $y$ to PM~ happened after {\small B}'s log was persisted. The \PMCN~ protocol prevents a write back to PM~ unless it can also guarantee that the log of the transaction creating the update will be played back on recovery (see Lemmas C2 and C4 in Section~\ref{sec:solution:lemmas}). \smallskip The second function of the controller is to track when it is safe for a transaction requiring strict durability to commit. It is not sufficient to commit when a transaction's log is persistent in PM~ since, as seen in the example above, the recovery routine may not replay the log if that would violate persistence ordering. The controller protocol effectively delays a strict-durability transaction $\tau$~from committing until the earliest open transaction has a {\it startTime} greater than the {\it persistTime} of $\tau$. This is because the recovery protocol will replay a log (see Section~\ref{sec:solution:recovery}) if and only if all transactions with {\it startTime} less than its {\it persistTime} have closed. \smallskip \noindent {\bf Implementation Overview}: The controller tracks transactions by maintaining a {\bf {\small COT}} (currently open transactions) set $S$. When a transaction opens, its identifier is added to the {\small COT}, and when the transaction closes it is removed. The write into PM~ of a cache line $C$ evicted into the \PMCN~ is deferred by placing it at the tail of a {\small FIFO} queue maintained by the controller. The cache line is also assigned a tag called its {\it dependency set}, initialized with $S$, the value of the {\small COT} at the instant that $C$ entered the Persistent Memory Controller. The controller holds the evicted instance of $C$ in the {\small FIFO} until all transactions in its dependency set (i.e. $S$) have closed. When a transaction closes it is removed from both the {\small COT} and the dependency sets of all the {\small FIFO} entries.
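The {\small COT}/dependency-set bookkeeping just described can be modeled in a few lines of C. Everything below is a simplified software model for illustration, not the hardware design: dependency sets become bitmasks over at most 64 concurrent {\it wrapId}s, the {\small FIFO} is a bounded array, and "writing to PM" is recorded in a `flushed` list.

```c
#include <stdint.h>

#define FIFO_MAX 128

/* An evicted dirty cache line parked in the controller's delay buffer,
 * tagged with the transactions that were open when it arrived. */
typedef struct {
    uint64_t addr;   /* stand-in for the cache line address + data */
    uint64_t depset; /* bitmask of open wrapIds at eviction time */
} Entry;

typedef struct {
    uint64_t cot;               /* currently-open-transactions bitmask */
    Entry fifo[FIFO_MAX];
    int head, tail;
    uint64_t flushed[FIFO_MAX]; /* lines "written to PM", in order */
    int nflushed;
} Controller;

static void ctl_open(Controller *c, int wrapId) {
    c->cot |= 1ull << wrapId;
}

/* Drain head entries whose dependency sets have become empty;
 * only these lines may reach PM. */
static void ctl_drain(Controller *c) {
    while (c->head < c->tail && c->fifo[c->head].depset == 0)
        c->flushed[c->nflushed++] = c->fifo[c->head++].addr;
}

/* An evicted line is tagged with the current COT and parked at the
 * FIFO tail; it proceeds immediately only if no transaction is open. */
static void ctl_evict(Controller *c, uint64_t addr) {
    c->fifo[c->tail++] = (Entry){ addr, c->cot };
    ctl_drain(c);
}

/* Closing a transaction removes it from the COT and from every
 * parked entry's dependency set, possibly releasing head entries. */
static void ctl_close(Controller *c, int wrapId) {
    uint64_t bit = 1ull << wrapId;
    c->cot &= ~bit;
    for (int i = c->head; i < c->tail; i++)
        c->fifo[i].depset &= ~bit;
    ctl_drain(c);
}
```

Because a transaction still present in an older entry's dependency set is necessarily present in every younger entry's set, entries drain strictly in eviction order, matching the FIFO argument made below.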
When the dependency set of a cache line in the {\small FIFO} becomes empty, the line is eligible to be flushed to PM. One can see that the dependency sets become empty in the order in which the cache lines were evicted, since a transaction still in the dependency set of $C$ when a new eviction enters the {\small FIFO} will also be in the dependency set of the new entry. This simple protocol guarantees that all transactions that opened before cache line $C$ was evicted into the controller (which must include the transaction that last wrote $C$) have closed and persisted their logs by the time $C$ becomes eligible to be written to PM. This also implies that all transactions with {\it startTime} less than the {\it persistTime} of the transaction that last wrote $C$ have closed, satisfying the condition for log replay. Hence the cache line can be safely written to PM~ without violating atomic persistence. Note that the evicted cache lines intercepted by the controller do not hold any identifying transaction information, and evictions can occur at arbitrary times after the transaction leaves the {\small COMPUTE} state. A cache line could hold the update of a currently open transaction, or could be from a transaction that has completed or even committed and left the system. To guarantee safety, the controller must perforce assume that the first situation holds. The details of the controller implementation are presented in Section~\ref{sec:impl:hw}. \subsection{Recovery} \label{sec:solution:recovery} Each completed transaction saves its log in PM. The log holds the records {\it startTime} and {\it persistTime}, obtained by reading the platform timer using the {\small RDTSCP} instruction. We refer to these as the start and end timestamps of the transaction. The start timestamp is persisted {\it before} a transaction enters its {\small HTM}. This allows the recovery routine to identify a transaction that started but had not completed at the time of a failure.
Note that even though such a transaction has not completed, it could still have finished its {\small HTM} and fed values to a later dependent transaction that has completed and persisted its log. The end timestamp and the write set of the transaction are persisted {\it after} the transaction completes its {\small HTM} section, followed by an end-of-log marker. There can be an arbitrary delay between the end of the {\small HTM} and the time that its log is flushed from the caches into PM~ and persisted. The recovery procedure is invoked on reboot following a machine failure. The routine restores PM~ to a consistent state that satisfies persistence ordering, by copying values from the {\it writeSet}s of the logs of qualifying transactions to the specified addresses. A transaction $\tau$~qualifies for log replay if and only if all earlier transactions on which it depends (both directly and transitively) are also replayed. \smallskip \noindent {\bf Implementation Overview}: The recovery procedure first identifies the set of incomplete transactions ${\mathcal{I}}$, which have started (as indicated by the presence of a {\it startTime} record in their log) but have not completed (as indicated by the lack of a valid end-of-log marker). The remaining, complete transactions (set ${\mathcal {C}}$) are potential candidates for replay. Denote the smallest start timestamp of transactions in ${\mathcal{I}}$ by $T_{min}$. A transaction in ${\mathcal {C}}$ is valid (qualifies for replay) if its end timestamp ({\it persistTime}) is no more than $T_{min}$. All valid transactions are replayed in increasing order of their end timestamps ({\it persistTime}). \subsection{Protocol Properties} \label{sec:solution:lemmas} We now summarize the invariants maintained by our protocol. \medskip \noindent {\bf Definition}: The precedence set of a transaction $T$, denoted by {\bf prec}($T$), is the set of all dependent transactions that executed their {\small HTM} before $T$.
Since the {\small HTM} properly orders any two dependent transactions, the set is well defined. \medskip \noindent {\bf Lemma C1}: Consider a transaction $X$ with precedence set {\bf prec}($X$). For all transactions $Y$ in {\bf prec}($X$), {\it startTime}($Y$) $<$ {\it persistTime}($X$). \smallskip \noindent {\bf Proof Sketch}: Let $Y$ be a transaction in {\bf prec}($X$). First consider direct precedence, in which a cache line $C$ modified in $Y$ controls the ordering of $X$ with respect to $Y$; that is, $X$ either reads or writes the cache line $C$. Since $Y$ is in {\bf prec}($X$), the earliest time that $X$ accesses $C$ must be no earlier than the latest time that $Y$ accesses $C$, and thus {\it persistTime}($Y$) $<$ {\it persistTime}($X$). Next consider a chain of direct precedences, $Y$ $\rightarrow$ $Z$ $\rightarrow$ $W$ $\rightarrow \cdots X$, which puts $Y$ in {\bf prec}($X$); by transitivity, {\it persistTime}($Y$) $<$ {\it persistTime}($X$). Since {\it startTime}($Y$) $<$ {\it persistTime}($Y$), the lemma follows. \medskip \noindent {\bf Lemma C2}: Consider transactions $X$ and $Y$ with {\it startTime}($Y$) $<$ {\it persistTime}($X$). If a cache line $C$ that is updated by $X$ is written to PM~ by the controller at time $t$, then $Y$ must have closed and persisted its log before $t$. \smallskip \noindent {\bf Proof Sketch}: Suppose $C$ was evicted to the controller at time $t' \leq t$. Now $t'$ must be later than the time $X$ completed {\small HTM} execution and set {\it persistTime}($X$); by assumption this is after $Y$ set its {\it startTime}, at which time $Y$ must have been registered as an open transaction by the controller. Now, either $Y$ closed before $t'$ or it was still open at that time. In the latter case, $Y$ was added to the dependency set of $C$ at $t'$. Since $C$ can only be written to PM~ after its dependency set is empty, it follows that $Y$ must have closed and removed itself from the dependency set of $C$.
\medskip \noindent {\bf Lemma C3}: Any transaction $X$ that writes an update to PM~ and closes at time $t$ will be replayed by the recovery routine if there is a crash at any time after $t$. \smallskip \noindent {\bf Proof}: The recovery routine will replay a transaction $X$ if the only incomplete transactions (started but not closed) at the time of the crash started after $X$ completed; that is, if there is no incomplete transaction $Y$ with {\it startTime}($Y$) $\leq$ {\it persistTime}($X$). By Lemma C2 such an incomplete transaction cannot exist. \medskip \noindent {\bf Lemma C4}: Consider a transaction $X$ with precedence set {\bf prec}($X$). Then by the time $X$ closes and persists its log, one of the following must hold: (i) some update of $X$ has been written back to PM~ and all transactions $Y$ in {\bf prec}($X$) have persisted their logs; (ii) no update of $X$ has been written to PM~ and all transactions $Y$ in {\bf prec}($X$) have persisted their logs; (iii) no update of $X$ has been written to PM~ and some transactions $Y$ in {\bf prec}($X$) have not yet persisted their logs. \smallskip \noindent {\bf Proof Sketch}: From Lemmas C1 and C2, it is not possible for a transaction in {\bf prec}($X$) to still be open if an update from $X$ has been written to PM.
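The log-qualification rule used by the recovery routine, and appealed to in Lemmas C2 and C3, can be sketched as follows. The structure and function names are ours, and a zero {\it persistTime} stands in for a missing end-of-log marker; this is a model of the selection and ordering logic, not the recovery implementation.

```c
#include <stdint.h>

typedef struct {
    uint64_t startTime;   /* persisted before the HTM section */
    uint64_t persistTime; /* 0 => incomplete (no end-of-log marker) */
} LogRec;

/* Select and order logs for replay: a complete log qualifies iff its
 * persistTime is no more than T_min, the smallest startTime of any
 * incomplete log; qualifying logs replay in increasing persistTime.
 * Returns the number of qualifying logs; 'order' receives their
 * indices in replay order. */
static int recovery_plan(const LogRec *logs, int n, int *order) {
    uint64_t tmin = UINT64_MAX;
    for (int i = 0; i < n; i++)
        if (logs[i].persistTime == 0 && logs[i].startTime < tmin)
            tmin = logs[i].startTime;

    int k = 0;
    for (int i = 0; i < n; i++)
        if (logs[i].persistTime != 0 && logs[i].persistTime <= tmin)
            order[k++] = i;

    /* insertion sort by persistTime (few logs survive to recovery) */
    for (int i = 1; i < k; i++) {
        int v = order[i], j = i - 1;
        while (j >= 0 && logs[order[j]].persistTime > logs[v].persistTime) {
            order[j + 1] = order[j];
            j--;
        }
        order[j + 1] = v;
    }
    return k;
}
```

Replaying in increasing {\it persistTime} order reconstructs PM~ to an ordered atomic prefix of every dependent transaction sequence, as required by the second and third requirements above.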
\section{Introduction} Coronal jets, first discovered in soft X-rays by the Yohkoh satellite (Shibata et al. 1992), are transient, collimated plasma flows. Coronal jets represent a group of impulsive events on different scales that develop in different layers of the solar atmosphere, including spicules, H$\alpha$ surges, extreme-ultraviolet (EUV) jets, and X-ray jets. Coronal jets always involve open field lines in coronal holes (CHs) or large-scale closed loops (Shibata et al. 1992; Schmieder et al. 1995; Rachmeler et al. 2010; Pariat et al. 2015), and may contribute to coronal heating and solar wind acceleration (Innes et al. 1997; Cirtain et al. 2007; Shibata et al. 2007; Pariat et al. 2009; Tian et al. 2014). With the improvement of the temporal and spatial resolution of telescopes, more and more new morphological features and physical characteristics of solar jets have been revealed (see the recent review by, e.g., Raouafi et al. 2016, and references therein), such as the coexistence of cool and hot components (Zhang et al. 2000; Liu \& Kurokawa 2004; Jiang et al. 2007; Nishizuka et al. 2008), recurrence from the same arch-base (Cirtain et al. 2007; Pariat et al. 2010; Innes et al. 2011; Jiang et al. 2013; Tian et al. 2014; Zhang et al. 2014b; Chen et al. 2015), and untwisting motions around the spire (Patsourakos et al. 2008; Liu et al. 2011; Shen et al. 2011; Chen et al. 2012; Zhang et al. 2014a; Cheung et al. 2015; Lee et al. 2015; Liu et al. 2016). It is generally believed that coronal jets are formed by magnetic reconnection between emerging flux and ambient pre-existing magnetic fields (Shibata et al. 1994; Raouafi et al. 2016). According to the details of the magnetic reconnection process, coronal jets are classified into the standard model and the blowout model, the latter involving two magnetic reconnection processes (Moore et al. 2010, 2013; Sterling et al. 2015, 2016). On the other hand, Shibata et al.
(1994) proposed two types of coronal jets, namely the anemone type and the two-sided-loop type, based on the different configurations of the overlying magnetic fields. The former are single field-aligned jets that occur between emerging flux and vertical/oblique magnetic fields, while the latter are bi-directional jets or loop brightenings that form between the emerging flux and overlying horizontal magnetic fields (Yokoyama \& Shibata 1995, 1996). The scenario of two-sided-loop jets also predicts that plasmoids (magnetic islands or blobs) form and are ejected in one direction, accompanying the bi-directional jets, and that the plasmoids confine cool dense chromospheric plasma. Furthermore, when there is magnetic shear in the overlying fields, a vortex motion will appear around the plasmoids (Yokoyama \& Shibata 1996). To our knowledge, only a few two-sided-loop jets have been reported, and plasmoids have never been detected in those cases (Alexander \& Fletcher 1999; Jiang et al. 2013). Can the rare two-sided-loop jets be detected more frequently by observations with high temporal and spatial resolution? In this paper, we report an example of successive two-sided-loop jets simultaneously observed in the chromosphere, transition region, and corona. The paper is organized as follows. Section 2 describes the observations; the main results are presented in Section 3; conclusions and discussion are given in Section 4. \section{Observations and Data Analysis} The two-sided-loop jets emanated from an emerging flux region (EFR) in NOAA active region (AR) 12681 on 2017 September 23. To analyse the jet evolution, we employ the EUV observations from the Atmospheric Imaging Assembly (AIA: Lemen et al. 2012) onboard the Solar Dynamics Observatory (SDO: Pesnell et al. 2012), the narrow-band slit-jaw images (SJIs) from the Interface Region Imaging Spectrograph (IRIS: de Pontieu et al.
2014), and the H$\alpha$ filtergrams from the New Vacuum Solar Telescope (NVST; Liu et al. 2014; Xiang et al. 2016) at the Fuxian Solar Observatory of China. We mainly use three AIA EUV channels: 304~{\AA} (He II, $\sim$0.05 MK), 171~{\AA} (Fe IX, $\sim$0.6 MK), and 131~{\AA} (Fe VIII, $\sim$0.4 MK). Each AIA image covers the full disk (4096~$\times$~4096 pixels) of the Sun and up to 0.5 $R_{\odot}$ above the limb, with a pixel size of 0$\farcs$6 and a cadence of 12 seconds. The event was also scanned in the IRIS 1330 and 1400~{\AA} passbands from 07:30 to 10:40 UT on September 23. Each SJI has a field of view (FOV) of $120^{''}$$\times$$120^{''}$, and its time cadence and spatial resolution are 20 seconds and 0$\farcs$332, respectively. The NVST data used in this study were obtained in H$\alpha$ 6562.8~{\AA} from 07:21 to 09:00 UT on September 23. The H$\alpha$ filtergrams have a spatial resolution of 0$\farcs$3, a cadence of 27 s, and a FOV of $151^{''}$$\times$$151^{''}$, covering AR 12681. To check the magnetic field evolution of the EFR, we utilise magnetograms from the Helioseismic and Magnetic Imager (HMI: Scherrer et al. 2012), with a cadence of 45 seconds and a pixel scale of 0$\farcs$6. In addition, the jets were also detected in the soft X-ray images of the X-ray Telescope (XRT; Golub et al. 2007; Kano et al. 2008) onboard Hinode (Kosugi et al. 2007), and we mainly employ the Be\_thin images with a spatial resolution of $\sim$1$\farcs$0 pixel$^{-1}$ and a cadence of 90 seconds, though only part of the jets were captured due to the saturation of the XRT images. Finally, the images from the different instruments are aligned by carefully comparing the same obvious features (emerging loops, brightenings, and plasmoids), and all the images in each wavelength are de-rotated to a reference image at $\sim$08:00 UT on September 23 by the routine DROT\_MAP of SolarSoft/IDL.
\section{Results} \subsection{Magnetic flux emergence} An overview of AR 12681 prior to the jets is shown in Figure 1. The AR consisted of a leading sunspot of positive polarity and following negative polarities (green and blue arrows in panel a). The EFR we selected contained the emerging loops (dotted boxes), and was situated in the diffuse following polarities close to the polarity inversion line (PIL; the red dotted line in panel a) of the AR. Bundles of coronal loops at different heights and temperatures (arrows in panels b-c) straddled the PIL and covered the EFR. A filament (F; the yellow arrow in panel e) was suspended along the PIL, and its north and south ends were rooted in negative and positive polarities, respectively (blue and green arrows in panel e). In the close-up view, the middle part of F appeared as some faint slender threads (F1; the red arrow in panel f), connecting with dark portions on both sides (yellow arrows in panel f). In this paper, we mainly focus on F and F1, though another two filaments (F2 and F3) crossed over F (white and black arrows in panels d-f). Next, we focus on the evolution of the magnetic field in the EFR from one hour before the jet onset (Figure 2). In the EFR, the dominant negative polarity (N1) emerged conspicuously, and merged with other pre-existing negative polarities in the south (blue arrows in panels a-d). A minor positive polarity (P1; green arrows in panels b-d) was diffuse and faint, and enclosed one isolated negative polarity (N2; the black arrow in panel a). N2 faded after its emergence (panel d). The clear change in the sizes of P1, N1, and N2 in the magnetograms (panels a-d) likely indicates the emergence and cancellation of magnetic flux. Note that there was another negative polarity (N3; the white arrow in panel a) north of the EFR. As checked by wavelet analysis, there are periods of 300-400 s in the curves of the magnetic fluxes of P1, N1, and N2 (panels e-f).
These are likely consistent with the 5-minute fluctuations of the magnetic field due to the 5-minute p-mode oscillation in a sunspot (Lites et al. 1998). Owing to the fluctuations of the background magnetic fields, the weaker emergence and cancellation in the P1a-N2 region (blue dotted boxes in panels a-d) are not evident (panel e), but the strong emergence of N1 in the P1b-N1 region (white dotted boxes excluding blue dotted boxes in panels a-d) is very obvious (panel f). N1 (blue) kept growing, with a rapid increasing phase from $\sim$08:10 to $\sim$08:30 UT. The magnetic flux of N1 ($\sim$$10^{19}$ Mx) is stronger than that of N2 ($\sim$$10^{18}$ Mx) by one order of magnitude, which indicates the predominance of the emergence of N1 in the EFR. The jets occurred precisely during the rapid increasing phase of N1, and the times of the two major jets are indicated by vertical dotted lines in panel f. This close temporal relationship shows that the continuous emergence of magnetic flux (N1 and P1) is intimately associated with the subsequent jets. \subsection{Emerging loops} In response to the magnetic flux emergence, small coronal loops emerged into the solar atmosphere (Figure 3). In H$\alpha$, the part of F over the EFR was still nearly invisible at $\sim$08:03 UT, but some filament threads (F1) became apparent and slowly rose (red arrows in panels a-c). Meanwhile, small short loops (L1; white arrows in panels d-i) appeared in the IRIS SJIs and AIA EUV images. Superposing the contours of the magnetic fields shows that the south footpoints of L1 were clearly rooted at N1. The north footpoints of the short L1 were anchored in P1 (yellow dotted boxes in upper panels and Fig.2c) between N1 and N2 (blue and black arrows in panel b). Some minutes later, a new, longer L1 appeared (panels f and i), and its south footpoints were situated at the P1 (red dotted boxes in bottom panels and Fig.2c) north of N2.
Following its rise, the longer L1 developed a cusp shape in the IRIS SJIs, AIA EUV, and Hinode XRT images (yellow arrows in Fig.4b-d), and some brightenings appeared in the EFR in H$\alpha$ (the white arrow in Fig.4a). Subsequently, the cusp of L1 disrupted, and a bright loop-like jet was ejected from the top of L1 in IRIS 1400~{\AA}, AIA 171~{\AA}, and X-rays (green arrows in Figure 4f-h). The deformation of L1 and the occurrence of the loop-like jet are likely the result of the approach and interaction between L1 and F1, and the jet moved nearly along F1 (the red arrow in Fig.4e). \subsection{Coronal jets} Within a minute of the first loop-like jet, another new L1 emerged (Fig.5b-d). Then, the newly-emerged L1 again disrupted and spread into a bifurcated eruption that travelled horizontally in opposite directions as two-sided-loop jets (see the online animated version of this figure). The bi-directional jets were simultaneously seen in the chromosphere, transition region, and corona (white and yellow arrows in panels e-h). The northward jet moved along a small loop (L2; white arrows in panels f-g), shorter than $10^{''}$, in the form of loop brightenings. L2 connected P1 and N3 (green and black arrows in panels f-g). The southward jet was much more prominent, and emanated from N1 (blue arrows in panels f-g). The southward jet propagated along F1 (red arrows in left panels) to remote positive polarities (the green arrow in Fig.1e) over a longer distance of more than $30^{''}$. In X-rays, only the southward jet was distinguishable (yellow arrows in panels h and l), and L2 was occulted by the flaring core and overlying loops. Furthermore, the jets recurred intermittently, owing to the successive emergence of L1. Another strong pair of two-sided-loop jets happened at $\sim$08:24 UT as loop brightenings in the north and a southward jet (white and yellow arrows in panels j-k).
Because the loop brightenings were short and weak, we mainly focus on the strong southward jets. Accompanying the jets, some plasmoids/blobs escaped from the apex of the emerging L1, and five distinguishable plasmoids are shown in Figure 6 (blue arrows). The third plasmoid (panels c-f) was tracked for a longer period, while the other plasmoids were transient (panels a-b and h-i). The trajectory of the third plasmoid is superimposed on the H$\alpha$ filtergram (pluses in panel g), which clearly shows that the third plasmoid first moved eastward, then flowed southward, and finally disappeared near F1 (the red arrow in panel g). This is similar to the vortex motion in the simulation of Yokoyama \& Shibata (1996). Along a horizontal dashed line (S1; panel a), the evolution of the third plasmoid is clearly shown in the time-slice plots (panels j-l) in IRIS 1330~{\AA} and AIA 171 and 304~{\AA}. The initial eastward speed of the third plasmoid was $\sim$90 km s$^{-1}$ (blue dotted lines in panels j-l). In addition, it is also clear that the fast rising speed of L1 was $\sim$15.4 km s$^{-1}$ from $\sim$08:10 to $\sim$08:13 UT (the green dotted line in panel l), which is consistent with the emergence of the longer L1 in Fig.3i. The L1 emergence lasted from 08:00 to 08:30 UT, and the two intense patches (white arrows in panels j-l) are closely related to the two major jets at $\sim$08:14 and $\sim$08:24 UT (green vertical lines in panels j-l). The evolution of F1 during the jet propagation is shown in the original and running-difference H$\alpha$ filtergrams (Figure 7). Before the jet onset, the twist was evident (the red arrow in panel a). The disconnection of the southward jets (yellow arrows in panels f-h) possibly indicates that the southward jets moved with the untwisting F1 (panels f-i). As the jets successively occurred, more and more filament threads of F1/F experienced an untwisting motion and became loose and straight at $\sim$08:30 UT (red arrows in panels b-e).
The evolution of the southward jets and F1 is displayed in the time-slice plots in Figure 8. Along the jet path (S2 in Fig.7i), the two major jets occurred at $\sim$08:14 and $\sim$08:24 UT (dashed vertical lines), respectively, consistent with the two strong patches in Fig.6j-l. The first jet was much more prominent, and propagated over more than 35 Mm with a speed of $\sim$120 km s$^{-1}$ (green dotted lines in panels a-e). The second jet had a faster velocity of $\sim$250 km s$^{-1}$ (blue dotted lines in panels a-d). Across the southern F1 (S3 in Fig.7i), a rise speed of $\sim$6 km s$^{-1}$ (the yellow dotted line in panel f) indicates that the southern F1 began to lift slowly upward near the occurrence time of the jets (the dashed vertical line in panel f). The transverse speed of $\sim$13 km s$^{-1}$ (green and blue dotted lines in panel f) confirms the untwisting motion of the southern F1. In the direction perpendicular to F1 (S4 in Fig.7i) over the EFR, the rise speed of $\sim$6 km s$^{-1}$ (the green dotted line in panel g) indicates that F1 over the EFR rose slowly following the emergence of the short L1 at the early stage, before the jet onset (the dashed vertical line in panel g). \section{Conclusions and Discussion} Combining high-quality observations from NVST, IRIS, and SDO, we have reported two-sided-loop jets on 2017 September 23. Beneath an active region filament, continuous magnetic flux emergence in a small EFR produced successively emerging loops (L1). At the early stage, a short L1 appeared, and some overlying filament threads (F1) began to rise slowly at a speed of $\sim$6 km s$^{-1}$. Then, a longer L1 appeared with a rising velocity of $\sim$15 km s$^{-1}$. Soon after the deformation (cusp) of the longer L1, the jets occurred, and the southern F1 showed an upward expansion of $\sim$6 km s$^{-1}$ and a transverse motion of $\sim$13 km s$^{-1}$.
For the jets, the northward jets moved in newly-formed short loops (L2) as loop brightenings, and the vigorous southward jets propagated along the southern newly-formed filament loops. We analysed two major two-sided-loop jets; the speeds of the two southward jets were $\sim$120 and $\sim$250 km s$^{-1}$, respectively. The faster speed of the latter jet is possibly caused by the decrease of the magnetic field and the rarefaction of the plasma density after the propagation of the former one. The jets were also accompanied by plasmoids from the top of L1 and the untwisting motion of F1. These observations likely provide clear evidence of magnetic reconnection between the emerging loops (L1) and the twisted filament threads (F1). We suggest a scenario for the two-sided-loop jets in Figure 9. The loops (L1; pink) emanate from the EFR, and their south and north ends are rooted in the negative (N1) and positive (P1) polarities, respectively. L1 slowly rises and approaches the twisted filament threads (F1; blue), which induces magnetic reconnection between L1 and F1 (the yellow cross). As a result, the north end of L1 connects to the north end of F1, in the form of short loop brightenings (L2; orange); other newly-formed filament loops (L3; red) link the south end of L1 with the remote south end of F1. The powerful southward jets (the green arrow) move along the newly-formed L3. As another product of the magnetic reconnection, some plasmoids also escape eastward from the junction of L1 and F1 (the yellow cross), and then move southward guided by F1 (yellow arrows in panel b). The continuous emergence of magnetic flux drives the successive emergence of L1, and the resulting intermittent magnetic reconnection between L1 and F1 forms more L2 and L3 loops (panel c). The anemone jets and the two-sided-loop jets share a common nature of magnetic reconnection between the emerging flux and overlying fields.
When the overlying field is predominantly horizontal, the resulting jets move in two different newly-formed closed loop structures (Yokoyama \& Shibata 1995, 1996). Yokoyama \& Shibata (1995) suggested that two-sided-loop jets more likely appear as transient loop brightenings, not necessarily in jet form. In this paper, because the junction point is close to the north ends of F1, the northward ejection moved along a newly-formed short loop (L2) and appeared as loop brightenings, while the intense southward ejection propagated along the longer untwisting filament threads in the form of jets. Hence, the two-sided-loop jets we study here consist of pairs of loop brightenings and jets. To date, there are only a few observational examples of two-sided-loop jets (Alexander \& Fletcher 1999; Jiang et al. 2013; Tian et al. 2017). The two-sided-loop jets in Tian et al. (2017) were suggested to be caused by magnetic reconnection between adjacent filamentary threads, which is not consistent with the model of Yokoyama \& Shibata (1995, 1996). Alexander \& Fletcher (1999) reported two-sided-loop jets with observations from TRACE and Yohkoh, and Jiang et al. (2013) showed recurrent two-sided-loop jets that resulted from magnetic reconnection between an emerging bipole and overlying transequatorial loops. The simulation of two-sided-loop jets predicted the ejection of plasmoids (magnetic islands), and the plasmoids showed vortex motion when the overlying field had magnetic shear (Yokoyama \& Shibata 1996). To our knowledge, the plasmoids and overlying sheared fields in the two-sided-loop jet model have never been observed before. Here, the two-sided-loop jets were observed simultaneously in the chromosphere (NVST), transition region (IRIS), and corona (AIA and XRT), and were associated with sheared fields (twisted filament threads) and plasmoids that have a vortex-like motion (Fig.6g).
This is in agreement with the case of overlying sheared fields in the two-sided-loop jet model (Yokoyama \& Shibata 1995, 1996). Coronal jets often display untwisting motions of helical structures. The untwisting motion usually comes from the relaxation of the twist stored in the closed field, such as an emerging bipole or small-scale flux ropes (Canfield et al. 1996; Jibben \& Canfield 2004; Pariat et al. 2009, 2010). In this case, the signals in the AIA EUV images and IRIS SJIs are too weak to determine whether there is twist in the emerging L1. However, F1 had twist stored in the filament threads. After the magnetic reconnection, the southward jets moved along the newly-formed filament loop and followed the untwisting motion of the filament threads. Therefore, the untwisting motion of the jets in this event likely originates from the pre-existing overlying filament threads (F1). In addition, the untwisting velocity is nearly constant ($\sim$13 km s$^{-1}$) through the successive jets with different speeds (Fig.8f), which also confirms that the untwisting motion is an intrinsic property of the twisted filament threads. On the other hand, because there are bundles of coronal loops at different heights over F1 and L1 (Fig.1b-c), the jet mass just flowed horizontally along the newly-formed filament loops. Therefore, the successive jets are regarded as failed eruptions, due to the strong confinement of the overlying large-scale loops. The magnetic field strength at the south footpoints of L1 is $B \approx 68$ G, and the initial density in the ambient corona is assumed to be $\rho \approx 5 \times 10^{-19}$ g cm$^{-3}$. Hence, the Alfv{\'e}n velocity is calculated by $v_A = B/{\sqrt{4 \pi \rho}} \approx 260$ km s$^{-1}$.
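The Alfv{\'e}n-speed estimate and the simulation scalings applied next can be reproduced in a few lines. This is a hedged sketch: `alfven_speed_kms` implements $v_A = B/\sqrt{4\pi\rho}$ in Gaussian cgs units for arbitrary inputs, and the 0.27 and 0.7 factors are the Yokoyama \& Shibata (1996) scalings quoted in the text, applied to the quoted $v_A \approx 260$ km s$^{-1}$:

```python
import math

def alfven_speed_kms(B_gauss, rho_g_cm3):
    """v_A = B / sqrt(4 pi rho) in Gaussian cgs units, returned in km/s."""
    return B_gauss / math.sqrt(4.0 * math.pi * rho_g_cm3) / 1.0e5

# Scalings from Yokoyama & Shibata (1996), applied to the quoted v_A.
v_A = 260.0               # km/s, value quoted in the text
v_plasmoid = 0.27 * v_A   # predicted plasmoid speed, ~70 km/s
v_jet = 0.7 * v_A         # predicted jet speed, ~182 km/s
print(v_plasmoid, v_jet)  # 70.2 182.0
```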
According to the method of Yokoyama \& Shibata (1996), the predicted speeds of the plasmoids and jets are $v_{plasmoid} = 0.27 v_A \approx 70$ km s$^{-1}$ and $v_{jet} = 0.7 v_A \approx 182$ km s$^{-1}$, which are comparable to the observed values (90 and 120-250 km s$^{-1}$). Furthermore, the length of L1 was $\sim$15 Mm ($\sim$20$^{''}$), and that in the simulation of Yokoyama \& Shibata (1996) was $\sim$9.3 Mm. It seems that two-sided-loop jets are always associated with short loops in the early emergence phase. We suggest that two-sided-loop jets may occur as frequently as anemone jets, and that they probably just show unresolved brightenings in low-resolution observations. This is a possible reason for the rarity of two-sided-loop jets. Better observations will be necessary to detect more examples of two-sided-loop jets and probe their physical nature. \acknowledgments IRIS is a NASA Small Explorer mission developed and operated by LMSAL with mission operations executed at NASA ARC and major contributions to downlink communications funded by the NSC (Norway). SDO is a mission of NASA's Living With a Star Program, and SDO data and images are courtesy of NASA/SDO and the AIA and HMI science teams. The authors thank the NVST team for providing the data. This work is supported by grants NSFC 41274175, 41331068, 11303101, and 11603013, Shandong Province Natural Science Foundation ZR2016AQ16, and Young Scholars Program of Shandong University, Weihai, 2016WHWLJH07. H. Q. Song is supported by the Natural Science Foundation of Shandong Province JQ201710. This work is in memory of Dr. Yunchun Jiang, who led R. Zheng into the gate of Solar Physics.
\section{Introduction and basics of the Neumann-Poincar\'e operator}\label{sec:introduction} The Neumann-Poincar\'e operator $\mathcal K_{\Gamma}$ and its formal adjoint $\mathcal K_{\Gamma}^*$ are boundary-integral operators associated with the double-layer harmonic potential and the normal derivative of the single-layer harmonic potential for the boundary $\Gamma$ of a bounded domain in $\mathbb{R}^n$. When $\Gamma$ is of class $C^2$, these operators are compact, and thus their spectra consist only of eigenvalues converging to zero (and zero itself). For domains with Lipschitz boundary, they have essential spectrum, which depends critically on the function spaces in which they act. This work proves the existence of eigenvalues within the essential spectrum of $\mathcal K_{\Gamma}^*$ for certain Lipschitz curves $\Gamma$ in $\mathbb{R}^2$ in the Sobolev distribution space $H^{-1/2}(\Gamma)$, in which $\mathcal K_{\Gamma}^*$ is self-adjoint (Theorem~\ref{thm:main}). The theorem implies eigenvalues within the essential spectrum for $\mathcal K_{\Gamma}$ in $H^{1/2}(\Gamma)$, which has exactly the same spectrum as $\mathcal K_{\Gamma}^*$ in $H^{-1/2}(\Gamma)$. In $\mathbb{R}^2$, if $\Gamma$ is the boundary of a simply connected bounded domain, the Neumann-Poincar\'e operator applied to a function $\phi:\Gamma\to\mathbb{C}$ is \begin{equation} \mathcal K_{\Gamma}[\phi](x) \;=\; - \frac{1}{2\pi} \int_\Gamma \phi(y) \frac{x-y}{|x-y|^2}\cdot n_y\, ds(y), \end{equation} in which $x$ and $y$ are on $\Gamma$, $n_y$ is the outward-directed normal vector to $\Gamma$ at $y\in\Gamma$, and $ds(y)$ is the arclength measure on $\Gamma$.
The adjoint of $\mathcal K_{\Gamma}$ in $L^2(\Gamma)$, which we called the formal adjoint $\mathcal K_{\Gamma}^*$ above, is \begin{equation} \mathcal K_{\Gamma}^*[\phi](x) \;=\; \frac{1}{2\pi} \int_\Gamma \phi(y) \frac{x-y}{|x-y|^2}\cdot n_x\, ds(y). \end{equation} These operators are defined as legitimate integrals when $\Gamma$ and $\phi$ are smooth enough, and they are extended to different normed distributional spaces by continuity. The eigenvalues of $\mathcal K_{\Gamma}^*$ in $L^2(\Gamma)$ are real. This is because $\mathcal K_{\Gamma}^*$ is symmetric with respect to the inner product associated with a weaker norm defined through the boundary-integral operator $\mathcal{S}_\Gamma$ for the single-layer potential, \begin{equation}\label{eq:single} \mathcal{S}_\Gamma[\phi] (x) \,=\, - \frac{1}{2\pi}\int_{\Gamma} \log (\beta |x-y|)\, \phi(y)\, ds(y). \end{equation} For appropriately chosen $\beta>0$, this operator on $L^2(\Gamma)$ is strictly positive~\cite[Lemma~2.1]{KhavinsonPutinarShapiro2007} and not surjective since it is bounded and invertible from $H^{-1/2}(\Gamma)$ to $H^{1/2}(\Gamma)$~\cite{CostabelStephan1985,PerfektPutinar2014}. The Plemelj symmetrization principle \begin{equation} \mathcal K_{\Gamma}\mathcal{S}_\Gamma = \mathcal{S}_\Gamma\mathcal K_{\Gamma}^* \end{equation} in $L^2(\Gamma)$ implies the symmetry of $\mathcal K_{\Gamma}^*$ with respect to the inner product $\langle f, g \rangle_{\mathcal{S}_\Gamma} := (\mathcal{S}_\Gamma f,g)_{L^2(\Gamma)}$, \begin{equation}\label{eq:sinner} \langle \mathcal K_{\Gamma}^* f,g \rangle_{\mathcal{S}_\Gamma} = \langle f, \mathcal K_{\Gamma}^* g \rangle_{\mathcal{S}_\Gamma}\,. \end{equation} Perfekt and Putinar~\cite{PerfektPutinar2014} show that this theory persists even for Lipschitz curves~$\Gamma$.
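For densities in $L^2(\Gamma)$, the symmetry (\ref{eq:sinner}) is a one-line consequence of the Plemelj identity together with the fact that $\mathcal K_{\Gamma}^*$ is the $L^2(\Gamma)$-adjoint of $\mathcal K_{\Gamma}$:
\begin{align*}
\langle \mathcal K_{\Gamma}^* f, g \rangle_{\mathcal{S}_\Gamma}
  &= (\mathcal{S}_\Gamma \mathcal K_{\Gamma}^* f, g)_{L^2(\Gamma)}
   = (\mathcal K_{\Gamma} \mathcal{S}_\Gamma f, g)_{L^2(\Gamma)} \\
  &= (\mathcal{S}_\Gamma f, \mathcal K_{\Gamma}^* g)_{L^2(\Gamma)}
   = \langle f, \mathcal K_{\Gamma}^* g \rangle_{\mathcal{S}_\Gamma}.
\end{align*}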
By completing the vector space $L^2(\Gamma)$ with respect to the $\mathcal{S}$ norm \begin{equation} \label{eq:snorm} \|f\|^2_{\mathcal{S}_\Gamma}=\langle \mathcal{S}_\Gamma f, f \rangle_{L^2(\Gamma)}, \end{equation} $\mathcal K_{\Gamma}^*$ is extended by continuity to a self-adjoint operator. This completion space coincides with the Sobolev space $H^{-1/2}(\Gamma)$ of distributions~\cite[Lemma~3.2]{PerfektPutinar2014}, which is sometimes referred to as the ``energy space'' for~$\mathcal K_{\Gamma}^*$. In this article, $H^{-1/2}(\Gamma)$ will always refer to the Hilbert space with the $\mathcal{S}$ inner product $\langle f, g \rangle_{\mathcal{S}_\Gamma}$. The operator norm of $\mathcal K_{\Gamma}^*$, as a self-adjoint operator in $H^{-1/2}(\Gamma)$, is equal to $1/2$, and the spectrum is contained in the half-open interval $(-1/2,1/2]$, with $1/2$ being an eigenvalue. The eigenspace is spanned by the density for a single-layer potential that is constant in the interior domain of $\Gamma$~\cite{Kellogg1929}. The analogous space in which $\mathcal K_{\Gamma}$ is self-adjoint is~$H^{1/2}(\Gamma)\!\subset\!L^2(\Gamma)$ with respect to the norm $(\mathcal{S}_\Gamma^{-1} f,g)_{L^2(\Gamma)}$. Therefore, any eigenfunction of $\mathcal K_{\Gamma}$ corresponding to a non-real eigenvalue $\lambda$ cannot lie in $H^{1/2}(\Gamma)$. When $\Gamma$ is a curvilinear polygon, $\mathcal K_{\Gamma}$ does admit non-real eigenvalues with eigenfunctions in $L^2(\Gamma)$. Mitrea~\cite{Mitrea2002} proved that these eigenvalues fill the interior domains of bowtie-shaped curves in the complex plane that are symmetric about the real line, one for each corner. The curves themselves consist of essential spectrum.
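For a smooth curve, where $\mathcal K_{\Gamma}^*$ is compact, the eigenvalues can be checked numerically with a Nyström (trapezoid-rule) discretization of the kernel in the definition above, whose diagonal limit is $\kappa(x)/2$ with $\kappa$ the curvature. The following sketch uses an ellipse with semi-axes $a,b$; the classical closed form for the nonzero ellipse eigenvalue magnitudes, $\tfrac12\,q^n$ with $q=(a-b)/(a+b)$, is our assumption from the classical literature, not a claim of this paper:

```python
import numpy as np

a, b, N = 2.0, 1.0, 400
t = 2 * np.pi * np.arange(N) / N
x = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)        # boundary points
sp = np.sqrt(a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2)     # |x'(t)|
nrm = np.stack([b * np.cos(t), a * np.sin(t)], axis=1) / sp[:, None]  # outward unit normal
kappa = a * b / sp**3                                        # curvature

d = x[:, None, :] - x[None, :, :]           # x_i - x_j
r2 = np.einsum('ijk,ijk->ij', d, d)         # |x_i - x_j|^2
num = np.einsum('ijk,ik->ij', d, nrm)       # (x_i - x_j) . n_i
np.fill_diagonal(r2, 1.0)                   # placeholder; diagonal set below
K = num / r2
np.fill_diagonal(K, kappa / 2.0)            # smooth diagonal limit of the kernel
A = K * (sp[None, :] * (2 * np.pi / N)) / (2 * np.pi)  # Nystrom matrix for K*

ev = np.sort(np.linalg.eigvals(A).real)[::-1]
q = (a - b) / (a + b)
print(ev[:3])   # leading eigenvalue ~0.5; next magnitude ~q/2 = 1/6
```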
The operator~$\mathcal K_{\Gamma}^*$, on the other hand, being self-adjoint in~$H^{-1/2}(\Gamma)$ with respect to the inner product $\langle f, g \rangle_{\mathcal{S}_\Gamma}$, cannot have non-real eigenvalues with eigenfunctions in $L^2(\Gamma)\!\subset\!H^{-1/2}(\Gamma)$. This means that, for a non-real eigenvalue $\lambda$ of $\mathcal K_{\Gamma}$, the operator $\mathcal K_{\Gamma}^*-\bar\lambda I$ acting on $L^2(\Gamma)$ is injective and has range that is not dense in $L^2(\Gamma)$; such $\bar\lambda$ is in the residual spectrum of $\mathcal K_{\Gamma}^*$ as an operator on $L^2(\Gamma)$. Helsing and Perfekt~\cite{HelsingPerfekt2017} proved that, for a domain in $\mathbb{R}^3$ with a single conical point and continuous rotational symmetry, this spectrum consists of an infinite union of conjugate-symmetric domains in the complex plane corresponding to the Fourier components. In $H^{-1/2}(\Gamma)$, where $\mathcal K_{\Gamma}^*$ is self-adjoint, the essential spectrum of $\mathcal K_{\Gamma}^*$ for a curvilinear polygon consists of an interval in the real line that is symmetric about $0$~\cite{Carleman1916,PerfektPutinar2014,PerfektPutinar2017}. Each corner of~$\Gamma$ contributes an interval $[-b,b]$ to the essential spectrum, and $b$ varies monotonically between $0$ and $1/2$ as the corner becomes sharper, as described in section~\ref{sec:essential}. When the corner is outward-pointing and $\Gamma$ has reflectional symmetry about a line~$L$ with the tip of the corner on $L$, the interval $[-b,0]$ is the essential spectrum for the odd component of $\mathcal K_{\Gamma}^*$ and $[0,b]$ is the essential spectrum for the even component~\cite{KangLimYu2017}. When the corner is inward-pointing, this correspondence is switched. This separation of even and odd essential spectrum is critical in our proof of eigenvalues within the essential spectrum.
In his 1916 dissertation~\cite{Carleman1916}, Torsten Carleman considered eigenfunctions of the operator $\mathcal K_{\Gamma}^*$ that are orthonormal with respect to the $\mathcal{S}$ inner product (p.~157--178 and equation~(194)), as well as generalized eigenfunctions for a curve with corners. At the end of the work (p.~193), he writes a spectral representation for $\mathcal K_{\Gamma}^* g$ in terms of a sum over eigenfunctions plus an integral over generalized eigenfunctions, for functions $g$ that have finite $\mathcal{S}$ norm. The validity of this analysis for $\mathcal K_{\Gamma}^*$ in the space $H^{-1/2}(\Gamma)$ would establish the absolute continuity of the essential spectrum associated with the generalized eigenfunctions, which causes the eigenvalues of our Theorem~\ref{thm:main} to be embedded in the continuous spectrum. It is strongly believed, if not generally accepted, that the essential spectrum and the absolutely continuous spectrum coincide. There is numerical evidence of embedded eigenvalues for the Neumann-Poincar\'e operator. Helsing, Kang, and Lim~\cite{HelsingKangLim2016} numerically implement a rate-of-resonance criterion and illustrate two eigenvalues within the continuum for an ellipse with an attached corner. We will revisit this example in discussion point~(5) of section~\ref{sec:discussion}. For the rotationally symmetric domain in $\mathbb{R}^3$ with a conical point mentioned above~\cite[\S7.3.3,~Fig.~8]{HelsingPerfekt2017}, eigenvalues for certain Fourier components of the Neumann-Poincar\'e operator are computed, and these lie within the essential spectrum of other Fourier components. Our strategy for proving eigenvalues in the essential spectrum goes as follows. Start with a curve $\Gamma_{\!0}$ that is of class~$C^{2,\alpha}$ and that is reflectionally symmetric about a line $L$.
Let $\lambda$ be an eigenvalue of $\mathcal K_{\Gamma_{\!0}}^*$ that is, say, positive with eigenfunction that is, say, odd with respect to $L$. Then construct a symmetric perturbation $\Gamma$ of $\Gamma_{\!0}$ such that (1) $\mathcal K_{\Gamma}^*$ has a positive eigenvalue near $\lambda$ with odd eigenfunction and (2) the even component of $\mathcal K_{\Gamma}^*$ has essential spectrum that overlaps this eigenvalue. To accomplish the second requirement, $\Gamma$ is constructed by replacing a small segment of $\Gamma_{\!0}$ with a corner that connects smoothly to the rest of $\Gamma_{\!0}$, with the tip of the corner lying on $L$ and whose angle is such that $\lambda\in(0,b)$. To accomplish the first requirement, the replaced segment needs to be sufficiently small. The analysis of requirement (1) is remarkably subtle, and our proof relies on the deep fact that all eigenfunctions of $\mathcal K_{\Gamma_{\!0}}^*$ as an operator in $H^{-1/2}(\Gamma_{\!0})$ are actually in $L^2(\Gamma_{\!0})$. Perturbative spectral analysis of $\mathcal K_{\Gamma}^*$ in $H^{-1/2}(\Gamma)$ relies on the self-adjointness of the operators $\mathcal K_{\Gamma}^*$ in the $\mathcal{S}$ inner product. But the positive-definiteness of this inner product requires an appropriate choice of the constant $\beta$ in (\ref{eq:single}), and this depends on the domain $\Gamma$. As $\Gamma$ varies over a family of Lipschitz perturbations of a smooth curve, it must be guaranteed that $\mathcal{S}$ remain positive for all perturbations. Instead of controlling the number $\beta$, this inconvenience can be dealt with by restricting to the $\mathcal K_{\Gamma}^*$-invariant subspace $H^{-1/2}_0(\Gamma)$, on which $\langle \cdot,\,\cdot \rangle_{\mathcal S}$ remains positive.
The space $H^{-1/2}_0(\Gamma)$ consists of all distributions $\psi\in H^{-1/2}(\Gamma)$ such that $\langle \psi,1 \rangle=0$ in the $H^{-1/2}$-$H^{1/2}$ pairing. The $\mathcal{S}$-orthogonal complement of $H^{-1/2}_0(\Gamma)$ in $H^{-1/2}(\Gamma)$ is spanned by the eigenfunction of $\mathcal K_{\Gamma}^*$ corresponding to eigenvalue $1/2$. Some interesting aspects of the definiteness of the single-layer potential in two dimensions are investigated in~\cite{Zoalroshd2016}. \section{Approximate eigenfunction on a perturbed curve}\label{sec:perturbation} This section accomplishes the first step, which is to construct an approximate eigenfunction $\tilde\phi$ of $\mathcal K_{\Gamma}^*$ for a Lipschitz perturbation $\Gamma$ of a $C^{2,\alpha}$ curve~$\Gamma_{\!0}$. The strategy is as follows. Start with a curve $\Gamma_{\!0}$ of class $C^{2,\alpha}$ and an eigenfunction $\phi$ of $\mathcal K_{\Gamma_{\!0}}^*$ as an operator in $H^{-1/2}(\Gamma_{\!0})$, that is, $\mathcal K_{\Gamma_{\!0}}^*\phi = \lambda\phi$. Then construct a Lipschitz perturbation $\Gamma$ of $\Gamma_{\!0}$ by replacing a segment of $\Gamma_{\!0}$ by a curve with a corner so that the restriction $\tilde\phi$ of $\phi$ to the rest of the curve---which is common to both $\Gamma_{\!0}$ and $\Gamma$---is nearly an eigenfunction of~$\mathcal K_{\Gamma}^*$ in the sense that $ \|(\mathcal K_{\Gamma}^* - \lambda )\tilde\phi \|_{\mathcal{S}_\Gamma} \leq \epsilon\,\|\tilde\phi\|_{\mathcal{S}_\Gamma} $. This is the essence of the proof of Lemma~\ref{lemma:resolvent}, which concludes that the resolvent $(\mathcal K_{\Gamma}^*-\lambda)^{-1}$ can be made as large as desired by taking a fine enough perturbation $\Gamma$.
Our proof of Lemma~\ref{lemma:resolvent} relies on the fact that any eigenfunction of $\mathcal K_{\Gamma_{\!0}}^*:H^{-1/2}(\Gamma_{\!0})\to H^{-1/2}(\Gamma_{\!0})$ actually lies in $L^2(\Gamma_{\!0})$. This was observed by Khavinson, Putinar, and Shapiro~\cite{KhavinsonPutinarShapiro2007,Putinar2017}, in which a theory of M.\,Krein~\cite{Krein1947,Krein1998} on operators in the presence of two norms was brought to bear on the Neumann-Poincar\'e operator. Lemma~\ref{lemma:L2} is essentially Theorem~3 of~\cite{Krein1998}. We include a proof here. \begin{lemma}\label{lemma:L2} Let $\Gamma_{\!0}$ be a simple closed curve of class $C^2$ in $\mathbb{R}^2$. If $\phi\in H^{-1/2}(\Gamma_{\!0})$ satisfies $\mathcal K_{\Gamma_{\!0}}^*\phi=\lambda\phi$ for a nonzero real number $\lambda$, then $\phi\in L^2(\Gamma_{\!0})$. \end{lemma} \begin{proof} Let $\beta$ in the kernel of $\mathcal{S}_{\Gamma_{\!0}}$ (\ref{eq:single}) be chosen such that $\langle \cdot,\,\cdot \rangle_{\mathcal{S}_{\Gamma_{\!0}}}$ is positive definite on $H^{-1/2}(\Gamma_{\!0})$. Let $\lambda$ be a nonzero real number. Let $N$ denote the nullspace of $\mathcal K_{\Gamma_{\!0}}^*-\lambda I$ in $L^2(\Gamma_{\!0})$, and let $V$ denote its complement with respect to the inner product induced by the single-layer operator $\mathcal{S}_{\Gamma_{\!0}}$, \begin{align} N &:= \; \big\{ f\in L^2(\Gamma_{\!0}) : (\mathcal K_{\Gamma_{\!0}}^* \!-\! \lambda I)f = 0 \big\}, \\ V &:= \; \big\{ f\in L^2(\Gamma_{\!0}) : \langle f,\,g \rangle_{\mathcal{S}_{\Gamma_{\!0}}} = 0\; \forall\,g\in N \big\}. \end{align} The space $V$ is closed in $L^2(\Gamma_{\!0})$, and $L^2(\Gamma_{\!0})=N+V$ as an algebraic direct sum.
The operator $\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I$ is invariant on $V$ because of the symmetry of $\mathcal K_{\Gamma_{\!0}}^*$ with respect to $\langle \cdot,\cdot \rangle_{\mathcal{S}_{\Gamma_{\!0}}}$. Its restriction to $V$ is injective and $\mathcal K_{\Gamma_{\!0}}^*$ restricted to $V$ is compact in the $L^2(\Gamma_{\!0})$ norm because $\mathcal K_{\Gamma_{\!0}}^*$ is compact in $L^2(\Gamma_{\!0})$~\cite{CostabelStephan1985,PerfektPutinar2014}. This implies that $\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I$ is surjective on $V$, using the fact that the Fredholm index of $\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I$ on $V$ is zero. Therefore $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)^{-1} : V\to V$ exists as a bounded operator in the $L^2(\Gamma_{\!0})$ norm with $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)^{-1} (\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)$ being the identity operator on~$V$. The symmetry of $\mathcal K_{\Gamma_{\!0}}^*$ with respect to $\langle \cdot,\cdot \rangle_{\mathcal{S}_{\Gamma_{\!0}}}$ implies that $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)^{-1}$ is also symmetric with respect to this inner product. The key step of the proof is now an application of Theorem~1 in~\cite{Krein1998}. Since the $\mathcal{S}$~norm is weaker than the $L^2$ norm, this symmetry implies that $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)$ and $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)^{-1}$ are bounded when considered as operators in $V$, viewed as an incomplete normed linear space with respect to $\|\cdot\|_{\mathcal{S}_{\Gamma_{\!0}}}$. 
Since $\|\cdot\|_{\mathcal{S}_{\Gamma_{\!0}}}$ is equivalent to the $H^{-1/2}(\Gamma_{\!0})$ norm, $\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I$ and $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)^{-1}$ extend uniquely to the completion $\tilde V$ of $V$ in $H^{-1/2}(\Gamma_{\!0})$, and the composition $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)^{-1} (\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)|_V$ lifts to the identity operator on $\tilde V$~\cite[Theorem~2]{Krein1998}. Since $N$ is finite dimensional and $L^2(\Gamma_{\!0})=N+V$, one has $H^{-1/2}(\Gamma_{\!0})=N+\tilde V$. And since $\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I$ is invertible on $\tilde V$ and $(\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)[N]=\{0\}$, it follows that the nullspace of $\mathcal K_{\Gamma_{\!0}}^*-\lambda I$ in $H^{-1/2}(\Gamma_{\!0})$ is equal to~$N$, \begin{equation} \big\{ f\in H^{-1/2}(\Gamma_{\!0}) : (\mathcal K_{\Gamma_{\!0}}^*\!-\!\lambda I)f=0 \big\} \,=\, N. \end{equation} This implies that every eigenfunction of $\mathcal K_{\Gamma_{\!0}}^*$ that is in $H^{-1/2}(\Gamma_{\!0})$ also lies in $L^2(\Gamma_{\!0})$. \end{proof} If the curve $\Sigma$ (which could be either $\Gamma_{\!0}$ or $\Gamma$) admits reflection symmetry about a line $L$, one has a decomposition \begin{equation} H^{-1/2}(\Sigma) = H^{-1/2,e}(\Sigma) \oplus H^{-1/2,o}(\Sigma) \end{equation} into spaces of even and odd distributions with respect to $L$. This is an orthogonal direct sum with respect to the $\mathcal{S}$ inner product.
Since the operator $\mathcal K_{\Sigma}^*$ commutes with reflection symmetry, this decomposition of $H^{-1/2}(\Sigma)$ induces a decomposition of $\mathcal K_{\Sigma}^*$ onto the even and odd distribution spaces, on which it is invariant, \begin{equation}\label{eq:Kdecomposition} \mathcal K_{\Sigma}^* = \mathcal K_{\Sigma,e}^*\oplus\mathcal K_{\Sigma,o}^*. \end{equation} The Lipschitz perturbations of $\Gamma_{\!0}$ and near-eigenfunctions constructed in this section have to be controlled in a careful way. We therefore make a precise definition of the type of perturbation we will use. It is by no means the most general. The specific geometry of the corner is not important but serves to simplify the proofs; indeed, the invariance of the essential spectrum under smooth perturbations of a Lipschitz curve that preserve the angles of the corners is proved in~\cite[Lemma~4.3]{PerfektPutinar2014}. The perturbed curves $\Gamma$ constructed in Definition~\ref{def:Tperturbation} have corners that are locally identical to a corner of a prototypical simple closed Lipschitz curve featuring a desired half exterior angle $\theta$ with $0<\theta<\pi$. This curve is the boundary $\partial\Omega$ of a region $\Omega$ defined by two intersecting circles of the same radius, as illustrated in Fig.~\ref{fig:disks}. Explicit spectral analysis of these domains has been carried out by Kang, Lim, and Yu~\cite{KangLimYu2017} and will be used in the analysis in section~\ref{sec:essential}. \begin{figure} \centerline{\scalebox{0.33}{\includegraphics{fig_Tperturbation.pdf}}} \caption{\small A type $T$ perturbation of a curve $\Gamma_{\!0}$ of class $C^{2,\alpha}$, as described in Definition~\ref{def:Tperturbation}, with reflectional symmetry about the line $L$. The segment $B$ of $\Gamma_{\!0}$ that is contained in the disk $\Delta$ is replaced by a curve with a corner to obtain~$\Gamma$.
In the upper case where the half exterior angle satisfies $\pi/2<\theta<\pi$, the corner is pointing outward; and in the lower case where $0<\theta<\pi/2$, the corner is pointing inward. The curve $\Gamma_{\!0}$ is parameterized by the interval $[0,1]$ with $\Gamma_{\!0}(0)=\Gamma_{\!0}(1)=x_0$ and $0<t_1<t_2<s_2<s_1<1$.} \label{fig:Tperturbation} \end{figure} \begin{figure} \centerline{\scalebox{0.3}{\includegraphics{fig_disks.pdf}}} \caption{\small The boundary $\partial\Omega$ of a bounded domain $\Omega$ defined by two intersecting circles of the same radius is the prototype of a curvilinear polygon. On the left, the outward-pointing corner has half exterior angle $\theta: \pi/2<\theta<\pi$; and on the right, the inward-pointing corner has half exterior angle $\theta: 0<\theta<\pi/2$.} \label{fig:disks} \end{figure} \begin{definition}\label{def:Tperturbation} Let $\Gamma_{\!0}$ be a simple closed curve of class $C^{2,\alpha}$ ($\alpha>0$) in $\mathbb{R}^2$. A {\em type $T$ perturbation} of $\Gamma_{\!0}$ is a curve $\Gamma$ that has one corner with half exterior angle given arbitrarily by $\theta : 0<\theta<\pi$ and is otherwise of class $C^{2,\alpha}$, and that is equipped with the following structure. \smallskip (a) Let $x_0\in\Gamma_{\!0}$ be a reference point, and let $\Gamma_{\!0}$ be parameterized by the unit interval $[0,1]$ (using the notation $\Gamma_{\!0}(t)$ for $t\in[0,1]$) with $\Gamma_{\!0}(0)=\Gamma_{\!0}(1)=x_0$. \smallskip (b) Let $\Delta=\left\{ x : | x - x_0 | \leq \delta \right\}$ be a disk that intersects $\Gamma_{\!0}$ in a connected segment $B$ of $\Gamma_{\!0}$ about $x_0$, that is, such that for some numbers $t_1$ and $s_1$ with $0<t_1<s_1<1$, \begin{equation} B \;:=\; \Delta \cap \Gamma_{\!0} \;=\; \left\{\, \Gamma_{\!0}(t) : t\in [0,t_1] \cup [s_1,1] \,\right\}. 
\end{equation} Denote the complementary connected component of $\Gamma_{\!0}$ by $A=\Gamma_{\!0}[(t_1,s_1)]$, so that \begin{equation} \Gamma_{\!0} \,=\, A\, \mathring\cup\, B. \end{equation} (c) Let numbers $t_2$ and $s_2$ in $[0,1]$ such that $0<t_1<t_2<s_2<s_1<1$ be given, so that $\Gamma_{\!0}(t_2)$ and $\Gamma_{\!0}(s_2)$ lie in $A$. Let $A'$ denote the subsegment of $A$ equal to $\Gamma_{\!0}[(t_2,s_2)]$. \smallskip (d) A type $T$ perturbation $\Gamma$ of $\Gamma_{\!0}$ is obtained by replacing $B$ by a simple Lipschitz perturbation curve $D$ which connects in a $C^2$ manner to the boundary points $\Gamma_{\!0}(t_1)$ and $\Gamma_{\!0}(s_1)$ of $A$ and which is otherwise contained in the interior of~$\Delta$. $D$ is $C^{2,\alpha}$ except at one interior point $x'_0$ of $D$. An open subset of $D$ containing $x'_0$ coincides with a translation-rotation of the intersection of a disk $\Delta'$ of radius $\delta'<\delta$ with a corner of a curve $\partial\Omega$ obtained from two intersecting circles of the same radius, oriented such that the exterior angle is equal to $2\theta$, as described in Fig.~\ref{fig:disks}. \end{definition} Lemma~\ref{lemma:resolvent} is the workhorse of the main theorem on eigenvalues in the essential spectrum (Theorem~\ref{thm:main}). The type $T$ perturbations $\Gamma$ will be required to satisfy a certain Lipschitz condition that will ensure, according to Lemma~\ref{lemma:Lipschitz}, that $\mathcal{S}_\Gamma$ is uniformly controlled in the $L^2(\Gamma)$ norm. For the construction of such perturbations in Lemma~\ref{lemma:existence}, the Lipschitz constant $M$ will depend on the angle $\theta$ of the corner. \begin{condition}[Lipschitz condition]\label{cond:Lipschitz} Let $\Gamma_{\!0}$ be a simple closed curve of class $C^2$ in $\mathbb{R}^2$. 
Let a triple $(U,\Delta_0,M)$ for $\Gamma_0$ be given, in which $\Delta_0$ is a closed disk contained in an open subset $\,U$ of $\,\mathbb{R}^2$, such that $\Delta_0\cap\Gamma_{\!0}$ is a simple curve of nonzero length, $M$ is a positive real number, and $U\cap\Gamma_{\!0}$ is the graph of a function in some rotated coordinate system for $\mathbb{R}^2$ with Lipschitz constant less than $M$. A perturbation curve $\Gamma$ of $\Gamma_{\!0}$ satisfies the {\em Lipschitz condition subject to the triple $(U,\Delta_0,M)$}, if the perturbation is confined to $\Delta_0$, that is, $\Gamma_{\!0}\setminus\Delta_0=\Gamma\setminus\Delta_0$, and $U\!\cap\Gamma$ is the graph of a function in some rotated coordinate system for $\mathbb{R}^2$ with Lipschitz constant less than $M$. \end{condition} \begin{lemma}\label{lemma:Lipschitz} Let $\Gamma_{\!0}$ be a simple closed curve of class $C^2$ in $\mathbb{R}^2$, and let $(U,\Delta_0,M)$ be a triple for $\Gamma_{\!0}$ as described in Condition~\ref{cond:Lipschitz}. There exists a constant $C_{\hspace{-1pt}\mathcal{S}}>0$ such that, for each perturbation $\Gamma$ of $\Gamma_{\!0}$ that satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M)$, \begin{equation} \left| (\mathcal{S}_\Gamma\psi,\psi)_{L^2(\Gamma)} \right| \,\leq\, C_{\hspace{-1pt}\mathcal{S}}^2 (\psi,\psi)_{L^2(\Gamma)} \qquad \forall\psi\in L^2(\Gamma). \end{equation} \end{lemma} \begin{proof} In (\ref{eq:single}), we assume $\beta=1$; the proof is similar for general $\beta>0$. It will first be proved that there exists a constant $C$ such that for every curve $\Gamma$ satisfying the conditions of Lemma~\ref{lemma:Lipschitz}, \begin{equation}\label{eq:intbd} \sup_{x\in \Gamma} \frac{1}{2\pi} \int_{\Gamma}\big| \log|x-y| \big| d\sigma_y \;\leq\; C. \end{equation} Suppose $\Gamma$ is any such curve. The constant $C$ obtained by the following analysis will not depend on the particular choice of $\Gamma$. 
The conditions in Lemma \ref{lemma:Lipschitz} guarantee that there exists a collection $\{U^i\}_{i=0}^N$ of open subsets of $\mathbb{R}^2$, independent of $\Gamma$, and rotated coordinate systems $\{(\xi^i,\eta^i)\}_{i=0}^N$ for $\mathbb{R}^2$ such that $U^0=U$, $U^i \cap \Delta_0 = \emptyset$ for $i=1,\dots,N$, $\{U^i\}_{i=0}^N$ covers $\Gamma$, and for $i=0,\dots,N$, the intersection $U^i\!\cap\Gamma$ is the graph $\eta^i=f^i(\xi^i)$ of a Lipschitz function $f^i$ on an interval $(\xi^i_1,\xi^i_2)$. The collection $\{U^i\}_{i=1}^N$ can be taken to be fine enough so that all the functions $f^i$ have Lipschitz constant bounded by $M$. Denote the arclength of any curve $\gamma$ by $\mathrm{len}(\gamma)$. For the cover $\{U^i\}_{i=0}^N$, there exists a number $r_0$ with $0<r_0< 1$ such that for every $x\in \Gamma$, there exist an integer $i_x$ with $0 \leq i_x \leq N$ and a segment $\gamma_x$ of $\Gamma$ such that $x\in\gamma_x \subset U^{i_x}$ and $\mathrm{len}(\gamma_x)=2r_0$, with $x$ located at the center of $\gamma_x$ with respect to arclength. Inside the chart $U^{i_x}$, $\gamma_x$ is parameterized by $\eta^{i_x} = f^{i_x}(\xi^{i_x})$ for $\xi^{i_x}\in(\xi^{i_x}_1,\xi^{i_x}_2)$. With $x$ equal to the point $(\xi^{i_x}_0,f^{i_x}(\xi^{i_x}_0))$, it follows that $|\xi^{i_x}_1-\xi^{i_x}_0|\leq r_0$ and $|\xi^{i_x}_2-\xi^{i_x}_0|\leq r_0$. The number $r_0$ can be taken to be independent of the choice of $\Gamma$ satisfying the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M)$ because $\Gamma$ differs from $\Gamma_{\!0}$ only within the disk $\Delta_0$. The integral in \eqref{eq:intbd} can be split into two parts, \begin{equation}\label{split} \int_{\Gamma} \big| \log|x-y| \big| d\sigma_y \;=\; \int_{\gamma_x} \big| \log|x-y| \big| d\sigma_y \,+\, \int_{\Gamma\setminus\gamma_x} \big| \log|x-y| \big| d\sigma_y.
\end{equation} The first term is bounded by \begin{align} \int_{\gamma_x} \big| \log|x-y| \big| d\sigma_y &\;=\; \int_{ (\xi^{i_x}_1, \xi^{i_x}_2)} \left| \log \sqrt{(\xi^{i_x}_0 - \xi^{i_x})^2 +(f^{i_x}(\xi^{i_x}_0) - f^{i_x}(\xi^{i_x}))^2\,}\right| d\sigma(\xi^{i_x}) \label{hi} \\ &\;\leq\; \int_{ (\xi^{i_x}_1, \xi^{i_x}_2)} \left| \log | \xi^{i_x}_0 - \xi^{i_x} | \right| \sqrt{M^2 +1}\, d\xi^{i_x} \\ &\;\leq\; \int_{ (-r_0, r_0)} \big| \log |r| \big| \sqrt{M^2 +1}\, dr \;=\; C', \end{align} where $C'$ is a finite constant. This constant depends only on $r_0$ and $M$ and is therefore independent of the choice of $\Gamma$ satisfying the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M)$. The first inequality comes from $r_0<1$, which makes the argument of the logarithm of~(\ref{hi}) less than~$1$. The second term of~(\ref{split}) is bounded~by \begin{equation}\label{bound2} \int_{\Gamma\setminus\gamma_x} \big| \log|x-y| \big| d\sigma_y \;\leq\; \mathrm{len}(\Gamma)\max\left(\big|\log|r_1(\Gamma)| \big|, \big|\log|r_2(\Gamma)| \big|\right), \end{equation} where $r_1(\Gamma):= \inf_{x\in \Gamma} \text{dist}(x, \Gamma\setminus\gamma_x)$ and $r_2(\Gamma)$ is the diameter of $\Gamma$. For $\Gamma$ satisfying the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M)$, $\mathrm{len}(\Gamma)$ is uniformly bounded from above and both $r_1(\Gamma)$ and $r_2(\Gamma)$ are uniformly bounded from above and below by positive numbers. Therefore, the right-hand side of (\ref{bound2}) is bounded by a constant $C''$ that does not depend on this choice of $\Gamma$. With the constant $C=(C'+C'')/(2\pi)$, the bound $\eqref{eq:intbd}$ is proved for all curves $\Gamma$ satisfying the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M)$.
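The mechanism by which \eqref{eq:intbd} yields an operator bound is the classical Schur test; we sketch it here for the symmetric kernel $k(x,y)=\frac{1}{2\pi}\big|\log|x-y|\big|$, whose symmetry in $x$ and $y$ is what allows a single supremum to suffice. By the Cauchy--Schwarz inequality with weight $k$, for $\psi\in L^2(\Gamma)$,
\begin{equation*}
\Big( \int_\Gamma k(x,y)\,|\psi(y)|\, d\sigma_y \Big)^{\!2} \;\leq\; \Big( \int_\Gamma k(x,y)\, d\sigma_y \Big) \int_\Gamma k(x,y)\,|\psi(y)|^2\, d\sigma_y \;\leq\; C \int_\Gamma k(x,y)\,|\psi(y)|^2\, d\sigma_y\,,
\end{equation*}
and integrating in $x$, using $k(x,y)=k(y,x)$ and \eqref{eq:intbd} once more, yields $\|\mathcal{S}_\Gamma\psi\|_{L^2(\Gamma)}\leq C\,\|\psi\|_{L^2(\Gamma)}$.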
By Young's generalized inequality~\cite[Theorem~0.10]{Folland1995}, \eqref{eq:intbd} implies that \begin{equation} \left\| \mathcal{S}_\Gamma\psi \right\|_{L^2(\Gamma)} \leq C \left\| \psi \right\|_{L^2(\Gamma)} \end{equation} for all such curves $\Gamma$. Thus the conclusion of Lemma \ref{lemma:Lipschitz} holds for $C_{\hspace{-1pt}\mathcal{S}} = \sqrt{C}$. \end{proof} For the proof of Lemma \ref{lemma:resolvent}, we will work within the spaces $H^{-1/2}_0(\Gamma)$ to ensure that $\langle \cdot,\,\cdot \rangle_{\mathcal{S}_\Gamma}$ remains positive definite. In $H^{-1/2}_0(\Gamma)$, the $\mathcal{S}$ inner product is independent of the choice of $\beta>0$ in the single-layer potential operator~(\ref{eq:single}). We set $\beta=1$. \begin{lemma}\label{lemma:resolvent} Let a simple closed curve $\Gamma_{\!0}$ of class $C^{2,\alpha}$ ($\alpha>0$) in $\mathbb{R}^2$, an eigenvalue $\lambda\not\in\left\{ 0, \frac{1}{2} \right\}$ of $\mathcal K_{\Gamma_{\!0}}^*$, and a number $\epsilon>0$ be given; and let a triple $(U,\Delta_0,M)$ for $\Gamma_{\!0}$ be given as in Condition~\ref{cond:Lipschitz}. (1) There exist numbers $r>0$ and $\rho>0$ such that, for each type $T$ perturbation $\Gamma$ of $\Gamma_{\!0}$ that satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to $(U,\Delta_0,M)$, and the condition \begin{equation}\label{condition1} 0<t_2<r, \qquad 0<1-s_2<r, \end{equation} and the condition \begin{equation}\label{condition2} \frac{\sqrt{\mathrm{len}(D)\,}}{\mathrm{dist}(A', D)} \;<\; \rho\,, \end{equation} ($\mathrm{len}(D)$ is the arclength of the curve $D$), there exists $\psi \in H^{-1/2}_0(\Gamma)$ satisfying \begin{equation}\label{Sbound0} \langle (\mathcal K_{\Gamma}^* - \lambda )\psi, (\mathcal K_{\Gamma}^* - \lambda )\psi \rangle_{\mathcal{S}_\Gamma} \;\leq\; \epsilon^2\, \langle \psi, \psi \rangle_{\mathcal{S}_\Gamma}\,.
\end{equation} Thus, either $\lambda\in\sigma(\mathcal K_{\Gamma}^*)$ or \begin{equation}\label{eq:resolventlowerbound} \left\| (\mathcal K_{\Gamma}^*-\lambda)^{-1} \right\|_{\mathcal{S}_\Gamma} > \epsilon^{-1} \end{equation} where $\mathcal K_{\Gamma}^*$ is considered as an operator in $H^{-1/2}_0(\Gamma)$. (2) If $\Gamma_{\!0}$ has reflectional symmetry about a line $L$ and $\Delta_0$ contains an intersection point of $L$ and $\Gamma_{\!0}$ and $\lambda$ is an eigenvalue of the even component $\mathcal K_{\Gamma_{\!0},e}^*$ of $\mathcal K_{\Gamma_{\!0}}^*$ (or the odd component $\mathcal K_{\Gamma_{\!0},o}^*$), then~(\ref{eq:resolventlowerbound}) can be replaced~by \begin{equation}\label{eq:resolventlowerboundeven} \left\| (\mathcal K_{\Gamma,e}^*-\lambda)^{-1} \right\|_{\mathcal{S}_\Gamma} > \epsilon^{-1} \qquad\big(\text{or}\;\; \left\| (\mathcal K_{\Gamma,o}^*-\lambda)^{-1} \right\|_{\mathcal{S}_\Gamma} > \epsilon^{-1} \,\big) \end{equation} (considered as an operator in the even (odd) subspace of $H^{-1/2}_0(\Gamma)$) for each type $T$ perturbation $\Gamma$ of $\Gamma_{\!0}$ that has reflectional symmetry about $L$, satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to $(U,\Delta_0,M)$, and satisfies~(\ref{condition1}) and~(\ref{condition2}). \end{lemma} \begin{proof} Let $\lambda \notin \left\{ 0, \frac{1}{2} \right\}$ be an eigenvalue of $\mathcal K_{\Gamma_{\!0}}^*:H^{-1/2}(\Gamma_{\!0})\to H^{-1/2}(\Gamma_{\!0})$ with eigenfunction $\phi$, \begin{equation} (\mathcal K_{\Gamma_{\!0}}^* - \lambda )\phi = 0\,. \end{equation} We may assume that $\phi$ is real-valued since the kernel of $\mathcal K_{\Gamma_{\!0}}^*$ is real. By Lemma~\ref{lemma:L2}, $\phi \in L^2(\Gamma_{\!0})$.
By Theorem~3.6 of~\cite{ColtonKress1998}, $\mathcal K_{\Gamma_{\!0}}^*$ maps $L^2(\Gamma_{\!0})$ into $H^1(\Gamma_{\!0})$ because $\Gamma_{\!0}$ is of class $C^{2,\alpha}$; thus $\phi$ is (up to modification on a null set) an absolutely continuous function. Since $\phi$ is not in the one-dimensional eigenspace for the eigenvalue $1/2$ of $\mathcal K_{\Gamma_{\!0}}^*$, it must lie in the $\mathcal{S}_{\Gamma_{\!0}}$-complement $H^{-1/2}_0(\Gamma_{\!0})$ of that eigenspace, that is, $\phi\in H^{-1/2}_0(\Gamma_{\!0})$. Recall that $\langle \cdot,\,\cdot \rangle_{\mathcal{S}_{\Gamma_{\!0}}}$ is positive definite on $H^{-1/2}_0(\Gamma_{\!0})$ and the corresponding norm is denoted by $\|\cdot\|_{\mathcal{S}_{\Gamma_{\!0}}}$. Let $(U,\Delta_0,M)$ be a triple for $\Gamma_{\!0}$ as in Condition~\ref{cond:Lipschitz}, and let $C_{\hspace{-1pt}\mathcal{S}}$ be the constant provided by Lemma~\ref{lemma:Lipschitz}. Let $\Gamma$ be a type $T$ perturbation of $\Gamma_{\!0}$, with all notation from Definition~\ref{def:Tperturbation} pertaining to it, that satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to $(U,\Delta_0,M)$. By Lemma~\ref{lemma:Lipschitz}, \begin{equation}\label{uniformbound} \left\| \psi \right\|_{\mathcal{S}_\Gamma}^2 \;=\; (\psi,\psi)_{\mathcal{S}_\Gamma} \;:=\; (\mathcal{S}_\Gamma\psi,\psi)_{L^2(\Gamma)} \,\leq\, C_{\hspace{-1pt}\mathcal{S}}^2 (\psi,\psi)_{L^2(\Gamma)} \qquad \forall\psi\in L_0^2(\Gamma), \end{equation} in which $L_0^2(\Gamma)$ denotes the space of all $f\in L^2(\Gamma)$ such that $\int_\Gamma f ds=0$. This uniform bound will not be used until inequality~(\ref{Sbound}). Let $x_1 \in \Gamma_{\!0}$ be a point other than $x_0$, such that $|\phi(x_1)|>\frac{3}{4}\max_{y\in\Gamma_{\!0}}|\phi(y)|$. Let $J$ be a subarc of $\Gamma_{\!0}$ containing $x_1$.
There exists a number $d>0$ such that, whenever $\mathrm{len}(J)<d$, $\phi$ does not change sign on $J$ and $|\phi(x)| > \frac{1}{2}\max_{y\in\Gamma_{\!0}}|\phi(y)|$ for $x\in J$; moreover, $J\subset A'$ whenever $\mathrm{len}(\Gamma_{\!0} \backslash A') < d$. For every choice of $t_2$ and $s_2$ such that $\mathrm{len}(\Gamma_{\!0} \backslash A') < d$, let $\mathrm{len}(J) = \mathrm{len} (\Gamma_{\!0} \backslash A')$. Then one can choose a constant $a$ with $-2<a<2$ in the function \begin{equation} \chi(x) = \begin{cases} 1,\quad x\in A'\backslash J\\ a, \quad x\in J\\ 0, \quad \text{otherwise} \end{cases} \end{equation} such that $\chi\phi \in L_0^2(\Gamma_{\!0})\subset H^{-1/2}_0(\Gamma_{\!0})$. Since $\chi\phi$ is supported in $A'$, which is a subarc of both $\Gamma$ and $\Gamma_{\!0}$, $\chi\phi$ can also be considered to lie in $H^{-1/2}_0(\Gamma)$. Let $C_0$ be a bound for $\mathcal{S}_{\Gamma_{\!0}}$ in $L^2(\Gamma_{\!0})$, \begin{equation} \left\| \mathcal{S}_{\Gamma_{\!0}}\psi \right\|_{L^2(\Gamma_{\!0})} \;\leq\; C_0 \left\| \psi \right\|_{L^2(\Gamma_{\!0})} \qquad \forall\psi\in L^2(\Gamma_{\!0}). \end{equation} Thus \begin{equation} \begin{split} \left| \left( \chi\phi, \chi\phi \right)_{\mathcal{S}_\Gamma} - \left( \phi, \phi \right)_{\mathcal{S}_{\Gamma_{\!0}}} \right| &\;=\; \left| \left( \chi\phi, \chi\phi \right)_{\mathcal{S}_{\Gamma_{\!0}}} - \left( \phi, \phi \right)_{\mathcal{S}_{\Gamma_{\!0}}} \right| \;=\; \left| \left( \chi\phi-\phi, \chi\phi \right)_{\mathcal{S}_{\Gamma_{\!0}}} + \left( \phi, \chi\phi-\phi \right)_{\mathcal{S}_{\Gamma_{\!0}}} \right| \\ &\;\leq\; C_0 \left( \left\| \chi\phi \right\|_{L^2(\Gamma_{\!0})} + \left\| \phi \right\|_{L^2(\Gamma_{\!0})} \right) \left\| \chi\phi-\phi \right\|_{L^2(\Gamma_{\!0})} \\ &\;\leq\; 3C_0\left\| \phi \right\|_{L^2(\Gamma_{\!0})} \left\| (1-\chi)\phi \right\|_{L^2(\Gamma_{\!0})}.
\end{split} \end{equation} As $t_2$ and $1-s_2$ tend to zero simultaneously, the measure of the support of $1-\chi$ on $\Gamma_{\!0}$ tends to zero, and therefore $\left\| (1-\chi)\phi \right\|_{L^2(\Gamma_{\!0})}$ converges to zero. Thus, $\left( \chi\phi, \chi\phi \right)_{\mathcal{S}_\Gamma}$ converges to $\left( \phi, \phi \right)_{\mathcal{S}_{\Gamma_{\!0}}}$; equivalently, $\|\chi\phi\|_{\mathcal{S}_\Gamma}$ converges to $\|\phi\|_{\mathcal{S}_{\Gamma_{\!0}}}$ as $t_2$ and $1-s_2$ tend to zero. The number $C_{\hspace{-1pt}\phi}:=\|\phi\|_{\mathcal{S}_{\Gamma_{\!0}}}/2$\, is positive because $\mathcal{S}_{\Gamma_{\!0}}$ is a positive operator and $\phi$ is nonzero in $L_0^2(\Gamma_{\!0})$. Therefore, \begin{equation}\label{Cphi} \left\| \chi\phi \right\|_{\mathcal{S}_\Gamma} \;>\; C_{\hspace{-1pt}\phi} \end{equation} whenever $t_2$ and $1-s_2$ are sufficiently small. We next seek to bound the $L^2$ norm $\|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(\Gamma)}$ (see (\ref{mainbound}) below). The domains $A$ and $D$ can be treated separately since \begin{equation} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(\Gamma)} \,\leq\, \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(A)} + \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(D)}\,. \end{equation} For the set $A$, one uses the eigenvalue condition $(\mathcal K_{\Gamma_{\!0}}^* - \lambda )\phi = 0$ and $\mathcal K_{\Gamma}^*(\chi \phi)|_A = \mathcal K_{\Gamma_{\!0}}^*(\chi \phi)|_A $ to obtain \begin{equation}\label{hello} \begin{split} [(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi)]\big|_A &= [(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) - (\mathcal K_{\Gamma_{\!0}}^* - \lambda )\phi ]\big|_A \\ &= [(\mathcal K_{\Gamma_{\!0}}^* - \lambda )(\chi \phi) - (\mathcal K_{\Gamma_{\!0}}^* - \lambda )\phi ]\big|_A \\ &= [\mathcal K_{\Gamma_{\!0}}^* \left((\chi - 1) \phi\right) + \lambda (1 - \chi) \phi ]\big|_A\,. 
\end{split} \end{equation} Denote the kernel of the adjoint Neumann-Poincar\'e operator by \begin{equation}\label{kernel} K_{\Sigma}^*(x,y) = \frac{1}{2\pi}\frac{x-y}{|x-y|^2}\cdot n_x \qquad (\Sigma = \Gamma_{\!0} \text{ or } \Gamma). \end{equation} The first term in the last expression of (\ref{hello}) is bounded pointwise due to the pointwise bound $2\pi |K_{\Gamma_{\!0}}^*(x,y)|<C_{\Gamma_{\!0}}$, which holds since $\Gamma_{\!0}$ is of class $C^2$~\cite[Theorem~2.2]{ColtonKress1983}, \begin{equation} \begin{split} 2\pi |\mathcal K_{\Gamma_{\!0}}^*((\chi-1)\phi)(x) | &= \left| \int_{\Gamma_{\!0}} 2\pi K_{\Gamma_{\!0}}^*(x,y) (\chi(y)-1) \phi(y)\, d\sigma(y) \right| \\ &\leq 3\, C_{\Gamma_{\!0}} \int_{(\Gamma_{\!0} \setminus A') \cup J} |\phi(y)|\, d\sigma(y) \\ &\leq 3\, C_{\Gamma_{\!0}} \| \phi \|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(\Gamma_{\!0} \!\setminus\! A') + \mathrm{len}(J)} \\ &= 3\sqrt{2}\, C_{\Gamma_{\!0}} \| \phi \|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(\Gamma_{\!0} \!\setminus\! A')} \;, \qquad \forall\,x\in A, \end{split} \end{equation} since $\mathrm{len}(J) = \mathrm{len} (\Gamma_{\!0} \backslash A')$, and the second term is bounded in norm by \begin{equation} \| \lambda (1 - \chi) \phi\|_{L^2(A)} \,\leq\, 3\, |\lambda| \left( \int_{(\Gamma_{\!0}\setminus A') \cup J} |\phi|^2 \right)^{1/2} \,\leq\, 3\, |\lambda|\, C(2\,\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')), \end{equation} in which $C(\mu)>0$ is a number that decreases to zero as $\mu\to0$.
Together, these two bounds yield \begin{equation}\label{eq:A} \begin{split} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(A)} &\,\leq\, \|\mathcal K_{\Gamma_{\!0}}^* \left((\chi - 1) \phi\right)\|_{L^2(A)} + \| \lambda (1 - \chi) \phi\|_{L^2(A)}\\ &\,\leq\, \frac{ 3 C_{\Gamma_{\!0}}}{\sqrt{2}\pi} \| \phi \|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(A)}\sqrt{\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')} \,+\, 3\, |\lambda|\, C(2\,\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')) \\ &\,\leq\, C'(\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')) \,, \end{split} \end{equation} in which $C'(\mu)>0$ is a number that decreases to zero as $\mu\to0$. On the set $D$, $\chi\phi$ vanishes, so that \begin{equation}\label{K-lambda} (\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) |_D = \mathcal K_{\Gamma}^* (\chi \phi) |_D. \end{equation} Since $\Gamma$ has a corner, the kernel of $\mathcal K_{\Gamma}^*$ does not enjoy a uniform pointwise bound, but \eqref{kernel} does provide \begin{equation}\label{eq:bd} |K_{\Gamma}^*(x,y)| \leq \frac{1}{2\pi} \frac{1}{|x-y|} \qquad \forall\, x, y\in \Gamma. \end{equation} Using this, the bound $|\chi|<2$, and the inclusion $\mathrm{supp}(\chi)\subset A'$, one obtains a pointwise bound for $x\in D$, \begin{equation} \begin{split} \big| \mathcal K_{\Gamma}^*(\chi\phi)(x) \big| &= \left| \int_\Gamma K_{\Gamma}^*(x,y) \chi(y)\phi(y) d\sigma(y) \right| \leq 2 \int_{A'} \big| K_{\Gamma}^*(x,y) \big|\, |\phi(y)|\, d\sigma(y) \\ &\leq\; \frac{1}{\pi\,\text{dist}(A', D)} \int_{A'} |\phi(y)| d\sigma(y) \;\leq\; \frac{1}{\pi\,\text{dist}(A', D)} \| \phi \|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(\Gamma_{\!0})} \qquad \forall\,x\in D. \end{split} \end{equation} This bound together with (\ref{K-lambda}) yields \begin{align}\label{eq:D} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(D)} \;\leq\; \frac{1}{\pi\,\text{dist}(A', D)} \| \phi \|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(\Gamma_{\!0})\,\mathrm{len}(D)}\,.
\end{align} Combining \eqref{eq:A} and \eqref{eq:D} produces the bound \begin{equation}\label{mainbound} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(\Gamma)} \;\leq\; C'(\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')) \,+ \left(\frac{ \|\phi\|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(\Gamma_{\!0})}}{\pi} \frac{\sqrt{\mathrm{len}(D)}}{\mathrm{dist}(A', D)}\right). \end{equation} Both of these bounding terms can be made arbitrarily small simultaneously. Consider the first term: $\Gamma_{\!0}\!\setminus\!A'$ is the part of $\Gamma_{\!0}$ about $x_0$ between $\Gamma_{\!0}(t_2)$ and $\Gamma_{\!0}(s_2)$. Therefore, by taking $t_2$ and $1\!-\!s_2$ sufficiently small, $\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')$ can be made arbitrarily small, and one obtains \begin{equation}\label{len} C'(\mathrm{len}(\Gamma_{\!0}\!\setminus\!A')) \to 0 \quad\text{as}\quad \max\left\{ t_2, 1\!-\!s_2 \right\} \to 0\,. \end{equation} Let $\epsilon>0$ be given arbitrarily. The convergence~(\ref{len}) implies that there exists $r>0$ such that, if $0<t_2<r$ and $0<1-s_2<r$, then $C'(\mathrm{len}(\Gamma_{\!0}\!\setminus\!A'))<\epsilon\,C_{\hspace{-1pt}\phi}/(2C_{\hspace{-1pt}\mathcal{S}})$. Assume that $r$ is small enough so that also (\ref{Cphi}) holds. Then with $\rho=\epsilon\pi C_{\hspace{-1pt}\phi}/(2C_{\hspace{-1pt}\mathcal{S}}\|\phi\|_{L^2(\Gamma_{\!0})} \sqrt{\mathrm{len}(\Gamma_{\!0})}\,)$, the second term of (\ref{mainbound}) is less than $\epsilon\,C_{\hspace{-1pt}\phi}/(2C_{\hspace{-1pt}\mathcal{S}})$ whenever $\sqrt{\mathrm{len}(D)}/\mathrm{dist}(A', D)<\rho$. These two bounds together yield \begin{equation} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(\Gamma)} \;\leq\; \frac{C_{\hspace{-1pt}\phi}}{C_{\hspace{-1pt}\mathcal{S}}}\,\epsilon\,.
\end{equation} Combining this bound with (\ref{uniformbound}) and (\ref{Cphi}) provides the desired bound \begin{equation}\label{Sbound} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{\mathcal{S}_\Gamma} \;\leq\; C_{\hspace{-1pt}\mathcal{S}} \|(\mathcal K_{\Gamma}^* - \lambda )(\chi \phi) \|_{L^2(\Gamma)} \;\leq\; \epsilon\,C_{\hspace{-1pt}\phi} \;\leq\; \epsilon\,\|\chi\phi\|_{\mathcal{S}_\Gamma}. \end{equation} If $\lambda \notin \sigma(\mathcal K_{\Gamma}^*)$, that is, if $\lambda$ is a regular point of the operator $\mathcal K_{\Gamma}^*$, this implies that \begin{equation} \|(\mathcal K_{\Gamma}^* - \lambda )^{-1} \|_{\mathcal{S}_\Gamma} > \epsilon^{-1}, \end{equation} in which $\mathcal K_{\Gamma}^*$ is considered as an operator in $H^{-1/2}_0(\Gamma)$, as claimed in the first part of the lemma. These arguments also prove the second part of the lemma for a curve $\Gamma_{\!0}$ that is symmetric about a line~$L$ if (1) the reference point $x_0$ is taken to be on $L$, (2) $J$ consists of two segments that are symmetric about $L$, (3) one takes $\chi$ to be even ($\Gamma_{\!0}(s_2)$ is the reflection of $\Gamma_{\!0}(t_2)$ about~$L$) so that if $\phi$ is even (or odd) $\chi\phi$ will also be even (or odd), and (4) the replacement curve $D$ is taken to be symmetric about~$L$. Then in every occurrence of $\mathcal K_{\Gamma_{\!0}}^*$ or $\mathcal K_{\Gamma}^*$ in the arguments, the operator is acting on an even (or odd) distribution, and thus may be replaced by $\mathcal K_{\Gamma_{\!0},e}^*$ or $\mathcal K_{\Gamma,e}^*$ (or $\mathcal K_{\Gamma_{\!0},o}^*$ or $\mathcal K_{\Gamma,o}^*$). \end{proof} It is geometrically straightforward, even if somewhat technical analytically, to demonstrate that Lipschitz perturbations of type $T$ as required in Lemma~\ref{lemma:resolvent} are plentiful. The following lemma will suffice.
Essentially, it says that one can always construct a perturbation $\Gamma$ with a desired corner angle $\theta$ for which the lower bound on the resolvent of $\mathcal K_{\Gamma}^*$ in Lemma~\ref{lemma:resolvent} holds. To do this, one must find an appropriate Lipschitz constant $M$ for the given $\theta$ (sharper angles require larger $M$) and then construct a type $T$ perturbation that satisfies the requirements of Lemma~\ref{lemma:resolvent}. \begin{lemma}\label{lemma:existence} Let a simple closed curve $\Gamma_{\!0}$ of class $C^2$ and a number $\theta$ with $0<\theta<\pi$ be given. There exists a triple $(U,\Delta_0,M)$ for $\Gamma_{\!0}$ as in Condition~\ref{cond:Lipschitz} such that, for all positive numbers $r$ and $\rho$, there exists a perturbation $\Gamma$ of $\Gamma_{\!0}$ of type $T$ such that: $\Gamma$ satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to $(U,\Delta_0,M)$; conditions (\ref{condition1}) and~(\ref{condition2}) of Lemma~\ref{lemma:resolvent} are satisfied; and the half exterior angle of the corner of $\Gamma$ is equal to $\theta$. If\, $\Gamma_{\!0}$ is symmetric about a line~$L$, then $\Gamma$ can be taken to be symmetric about $L$ with the tip of the corner lying on $L$. \end{lemma} \begin{proof} Given $\theta\in(0,\pi)$, let $g(\xi)$, for $\xi$ in some interval, be a function whose graph describes a rotated corner of a type $T$ perturbation as described in part (d) of Definition~\ref{def:Tperturbation} (a neighborhood of a corner of the intersection of two circles as in Fig.~\ref{fig:disks}) such that the tip occurs at $\xi=0$ and points upward for $\theta>\pi/2$ and downward for $\theta<\pi/2$; and let $M_1$ and $M_2$ be positive numbers such that $M_1<|g'(\xi)|<M_2$ for $\xi\not=0$. Let a simple closed curve $\Gamma_{\!0}$ of class $C^2$ be parameterized such that $\Gamma_{\!0}(0)=\Gamma_{\!0}(1)=x_0$.
Choose an open set $U\subset\mathbb{R}^2$ and rotated and translated coordinates $(\xi,\eta)$ for $\mathbb{R}^2$ such that $x_0\in U$ and $\Gamma_{\!0}\cap U$ is the graph $\eta=f(\xi)$ of a $C^2$ function $f$, with $x_0=(0,f(0))$ and $|f'(\xi)|<\min\{1,M_1\}$, and such that the part of $U$ that lies below the graph is in the interior domain of $\Gamma_{\!0}$. Choose a closed disk $\Delta_0\subset U$ centered at $x_0$. Each closed circle centered at $x_0$ contained in $\Delta_0$ intersects $\Gamma_{\!0}$ at exactly two points. There are no more than two intersection points because $|f'(\xi)|<1$. Let $\Delta$ be any closed disk centered at $x_0$ and contained in $\Delta_0$. Define $\tilde g(\xi)=g(\xi)+\eta_0$ with $\eta_0$ chosen such that the graph $\eta=\tilde g(\xi)$ intersects $\Gamma_{\!0}$ in exactly two points in the interior of $\Delta$---call them $x_1=(\xi_1,f(\xi_1))$ and $x_2=(\xi_2,f(\xi_2))$---and such that the graph of $\tilde g$ between these two points lies in the interior of $\Delta$. This is possible because $|\tilde g'(\xi)|>M_1$ and $|f'(\xi)|<M_1$. Set $\tilde f(\xi)=f(\xi)$ for $\xi\not\in[\xi_1,\xi_2]$ and $\tilde f(\xi)=\tilde g(\xi)$ for $\xi\in[\xi_1,\xi_2]$, and observe that the tip of the corner occurs at the point $(0,\tilde f(0))$. Then let $\tilde{\tilde f}(\xi)$ be a function that is of class $C^2$ except at $\xi=0$ and that is equal to $\tilde f(\xi)$ except in two nonintersecting intervals, one about $\xi_1$ and one about $\xi_2$; these intervals can be taken small enough so that the graphs of $\tilde{\tilde f}$ and $f$ coincide outside of $\Delta$. The smoothing $\tilde{\tilde f}$ can also be arranged so that $\big|\tilde{\tilde f}'(\xi)\big|<M_2$; this is because $|\tilde f'(\xi)|<M_2$ except at $\xi_1$, $0$, and $\xi_2$, where $\tilde f$ is continuous but not differentiable. 
It follows that the length of the graph of $\tilde{\tilde f}$ inside $\Delta$, which is called $D$ in part (d) of Definition~\ref{def:Tperturbation}, is bounded by \begin{equation} \mathrm{len}(D) \leq 2\sqrt{1+M_2^2\,}\,\mathrm{rad}(\Delta). \end{equation} The curve $\Gamma$ resulting from replacing the segment of $\Gamma_{\!0}$ described by $\eta=f(\xi)$ by the curve $\eta=\tilde{\tilde f}(\xi)$ is a type~$T$ perturbation of $\Gamma_{\!0}$ that satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M_2)$, and its corner has half exterior angle equal to $\theta$. Let $r>0$ and $\rho>0$ be given. Choose numbers $t_2$ and $s_2$ in Definition~\ref{def:Tperturbation} so that condition~(\ref{condition1}) is satisfied, that is, $0<t_2<r$ and $0<1-s_2<r$. For these fixed values of $t_2$ and $s_2$, \begin{equation} \frac{\sqrt{\mathrm{len}(D)}}{\mathrm{dist}(A',D)} \leq \frac{\sqrt{2\sqrt{1+M_2^2\,}\,\mathrm{rad}(\Delta)}}{\mathrm{dist}(A',\Delta)} \to 0 \qquad \text{as } \; \mathrm{rad}(\Delta)\to0. \end{equation} Therefore, $\mathrm{rad}(\Delta)$ can be taken to be small enough in this construction of $\Gamma$ so that \begin{equation} \frac{\sqrt{\mathrm{len}(D)}}{\mathrm{dist}(A',D)} < \rho, \end{equation} which is condition~(\ref{condition2}). In the symmetric case, $x_0\in L$ and $t_2$ and $s_2$ can be chosen such that $\Gamma_{\!0}(s_2)$ is the reflection of $\Gamma_{\!0}(t_2)$ about $L$, and $D$ can be arranged to be symmetric about~$L$. \end{proof} \section{Reflection symmetry and essential spectrum}\label{sec:essential} For all of the curves in this section, assume that $\beta$ in (\ref{eq:single}) is chosen such that $\mathcal{S}$ is a positive operator for all the curves under consideration. Consider a curve $\Gamma_{\!0}$ of class $C^2$ and perturbations $\Gamma$ of type $T$ that are symmetric with respect to a line $L$.
Recall that, in this case, the operators $\mathcal K_{\Gamma_{\!0}}^*$ and $\mathcal K_{\Gamma}^*$ admit decompositions onto the even and odd distributional spaces, as stated in~(\ref{eq:Kdecomposition}), \begin{equation} \mathcal K_{\Gamma_{\!0}}^* = \mathcal K_{\Gamma_{\!0},e}^*\oplus\mathcal K_{\Gamma_{\!0},o}^*\,, \quad \mathcal K_{\Gamma}^* = \mathcal K_{\Gamma,e}^*\oplus\mathcal K_{\Gamma,o}^*\,. \end{equation} The prototypical curvilinear polygons $\partial\Omega$ described in section~\ref{sec:perturbation} (Fig.~\ref{fig:disks}) are themselves symmetric about a line through the two corner points. The spectral resolution of the Neumann-Poincar\'e operator on $\partial \Omega$ is explicitly computed in~\cite{KangLimYu2017} through conformal mapping and Fourier transformation. Recall that $\theta$ is half the angle of the corner measured in the exterior of the curve. It is shown that \begin{equation} \sigma_\text{\hspace{-1pt}ac}(\mathcal K_{\partial\Omega}^*) = [-b,b], \quad \sigma_\text{\hspace{-1pt}sc}(\mathcal K_{\partial\Omega}^*) = \emptyset, \quad\sigma_\text{p}(\mathcal K_{\partial\Omega}^*) = \{1/2\}, \end{equation} where $b = |\frac{1}{2} - \frac{\theta}{\pi}|$ depends on the angle, $\sigma_\text{\hspace{-1pt}ac}$ refers to absolutely continuous spectrum, $\sigma_\text{\hspace{-1pt}sc}$ refers to singular continuous spectrum, and $\sigma_\text{p}$ refers to pure point spectrum. Therefore, $\sigma_\text{\hspace{-1pt}ac}(\mathcal K_{\partial\Omega}^*)=\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega}^*)$. 
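For example, a corner with half exterior angle $\theta=\pi/4$ gives
\begin{equation*}
b \;=\; \Big| \tfrac{1}{2} - \tfrac{\theta}{\pi} \Big| \;=\; \Big| \tfrac{1}{2} - \tfrac{1}{4} \Big| \;=\; \tfrac{1}{4}, \qquad \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega}^*) = \big[-\tfrac{1}{4}, \tfrac{1}{4}\big].
\end{equation*}
As $\theta\to\pi/2$ the corner flattens and $b\to0$, consistent with the compactness of the Neumann-Poincar\'e operator on curves of class $C^2$, while $b\to\frac{1}{2}$ as $\theta\to0$ or $\theta\to\pi$.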
Furthermore, it is shown in~\cite{KangLimYu2017} that the essential spectra of the even and odd components of $\mathcal K_{\partial\Omega}^*$ intersect only in $\{0\}$, \begin{equation} \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial \Omega,o}^*) = [-b, 0], \quad \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial \Omega,e}^*) = [0, b] \quad \text{for} \quad \pi/2<\theta<\pi \end{equation} for outward-pointing corners and \begin{equation} \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial \Omega,o}^*) = [0,b], \quad \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial \Omega,e}^*) = [-b,0] \quad \text{for} \quad 0<\theta < \pi/2 \end{equation} for inward-pointing corners. Our proof of eigenvalues in the essential spectrum requires that this disjointness persist for the perturbation $\Gamma$, and this is the content of the following proposition. The proof of Proposition~\ref{prop:evenodd} invokes the local nature of the essential spectrum of $\mathcal K_{\Gamma}^*$, which is captured by the essential spectrum $\sigma_{\text{ea}}(\mathcal K_{\Gamma}^*)$ in the approximate eigenvalue sense~\cite{PerfektPutinar2017}. For an operator $T: H \rightarrow H$, $\lambda \in \sigma_{\text{ea}}(T)$ if and only if there is a bounded sequence $\{f_n\}_{n=1}^{\infty} \subset H$ having no convergent subsequence, such that $(T - \lambda)f_n \rightarrow 0$ in $H$. One calls $\{f_n\}_{n=1}^{\infty}$ a singular sequence. When $T$ is self-adjoint, $ \sigma_\text{\hspace{-1pt}ess} (T) = \sigma_{\text{ea}}(T)$. If an operator $S: H \rightarrow H$ is such that $S-T$ is compact, then $ \sigma_{\text{ea}} (S) = \sigma_{\text{ea}}(T)$.
\begin{proposition}\label{prop:evenodd} The essential spectra of the even and odd components of $\mathcal K_{\Gamma}^*$ for a reflectionally symmetric perturbation curve $\Gamma$ of type~$T$ coincide with the essential spectra of the even and odd components of $\mathcal K_{\partial\Omega}^*$ for the prototypical curvilinear polygon $\partial\Omega$ (Fig.~\ref{fig:disks}) having corners with the same exterior angle as $\Gamma$, \begin{eqnarray} && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*) \;=\; \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,e}^*), \\ && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}^*) \;=\; \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,o}^*). \end{eqnarray} \end{proposition} \begin{proof} This proof essentially follows~\cite{PerfektPutinar2017}. Let $\Sigma$ be a simple closed Lipschitz curve that is piecewise of class $C^2$ and has $n$ corners. Let $\{\rho_j\}_{j=1}^n$ be cutoff functions on $\Sigma$ that have mutually disjoint supports and such that $\rho_j$ is equal to $1$ in a neighborhood of the $j$-th corner and is of class $C^2$ otherwise, and set $\rho_0 = 1-\sum_{j=1}^n \rho_j$. Denote by $M_{\rho}$ the operator of multiplication by $\rho$. In the decomposition \begin{equation} \mathcal K_{\Sigma} \;=\; \sum\limits_{0\leq i,j \leq n} M_{\rho_i} \mathcal K_{\Sigma} M_{\rho_j}, \end{equation} each term is compact unless $i=j\not=0$.
This implies the second equality in \begin{equation}\label{ess} \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Sigma}) \;= \sigma_{\text{ea}}(\mathcal K_{\Sigma}) = \; \sigma_{\text{ea}} \left(\textstyle\sum\limits_{j=1}^n M_{\rho_j} \mathcal K_{\Sigma} M_{\rho_j} \right) = \; \textstyle\bigcup\limits_{j=1}^n \sigma_{\text{ea}} \left( M_{\rho_j} \mathcal K_{\Sigma} M_{\rho_j} \right), \end{equation} where the first equality follows from the self-adjointness of $\mathcal K_{\Sigma}: H^{1/2}(\Sigma)\to H^{1/2}(\Sigma)$ with respect to the $\mathcal{S}_\Sigma^{-1}$ inner product, and the last equality is proved in \cite[Lemma 9]{PerfektPutinar2017}. Now suppose that $\Sigma$ is reflectionally symmetric about a line $L$ and that $\Sigma$ has either one or two corners (so that $n=1$ or $n=2$) with vertex on $L$ and that the cutoff functions $\rho_j$ are chosen to be even so that the operators $M_{\rho_j}$ commute with the reflection. Because of this, one has orthogonal decompositions \begin{equation} M_{\rho_i} \mathcal K_{\Sigma} M_{\rho_j} \;=\; M_{\rho_i} \mathcal K_{\Sigma,e} M_{\rho_j} \,\oplus\, M_{\rho_i} \mathcal K_{\Sigma,o} M_{\rho_j}, \end{equation} and therefore the compactness of $M_{\rho_i} \mathcal K_{\Sigma} M_{\rho_j}$ (unless $i=j\not=0$) implies the compactness of the even and odd components on the right-hand side.
Using this with the decomposition \begin{equation} \mathcal K_{\Sigma,e} \;=\; \sum\limits_{0\leq i,j \leq n} M_{\rho_i} \mathcal K_{\Sigma,e} M_{\rho_j} \end{equation} and the analogous decomposition of $\mathcal K_{\Sigma,o}$ yields \begin{eqnarray} && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Sigma,e}) \;=\; \textstyle\bigcup\limits_{j=1}^n \sigma_{\text{ea}} \left( M_{\rho_j} \mathcal K_{\Sigma,e} M_{\rho_j} \right), \\ && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Sigma,o}) \;=\; \textstyle\bigcup\limits_{j=1}^n \sigma_{\text{ea}}\left( M_{\rho_j} \mathcal K_{\Sigma,o} M_{\rho_j} \right).\end{eqnarray} Apply this result to $\partial\Omega$, which has two corners ($n=2$), and to the type $T$ perturbation $\Gamma$ of $\Gamma_{\!0}$, which has only one corner ($n=1$), and use $\tilde\rho_1$ for $\Gamma$ to distinguish it from $\rho_1$ for $\partial\Omega$, \begin{equation} \begin{aligned} \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,e}) &\;=\; \sigma_{\text{ea}} \left( M_{\rho_1} \mathcal K_{\partial\Omega,e} M_{\rho_1} \right) \cup \sigma_{\text{ea}} \left( M_{\rho_2} \mathcal K_{\partial\Omega,e} M_{\rho_2} \right), \\ \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}) &\;=\; \sigma_{\text{ea}} \left( M_{\tilde\rho_1} \mathcal K_{\Gamma,e} M_{\tilde\rho_1} \right). \end{aligned} \end{equation} Since a neighborhood of the corner of $\Gamma$ coincides after translation and rotation with a neighborhood of either corner of $\partial\Omega$, and since $\partial\Omega$ has symmetry about a vertical line (Fig.~\ref{fig:disks}), the function $\rho_1+\rho_2$ can be chosen to be symmetric with respect to both reflections. Furthermore, $\tilde\rho_1$ and $\rho_1$ can be chosen so that $\mathrm{supp}\,\tilde\rho_1 \cap \Gamma$ and $\mathrm{supp}\,\rho_1 \cap \partial\Omega$ as well as the functions $\tilde\rho_1$ and $\rho_1$ on their supports coincide after translation and rotation. 
Under these conditions, $M_{\rho_1} \mathcal K_{\partial\Omega,e} M_{\rho_1}$ and $M_{\rho_2} \mathcal K_{\partial\Omega,e} M_{\rho_2}$ are unitarily similar operators, thus \begin{eqnarray} \sigma_{\text{ea}} \left( M_{\rho_1} \mathcal K_{\partial\Omega,e} M_{\rho_1} \right) = \sigma_{\text{ea}}\left( M_{\rho_2} \mathcal K_{\partial\Omega,e} M_{\rho_2} \right) = \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,e}). \end{eqnarray} Since $\sigma_{\text{ea}} \left( M_{\rho_j} \mathcal K_{\Sigma,e} M_{\rho_j} \right)$ is characterized by functions localized at the $j$-th corner, we obtain \begin{eqnarray} \sigma_{\text{ea}} \left( M_{\tilde\rho_1} \mathcal K_{\Gamma,e} M_{\tilde\rho_1} \right) = \sigma_{\text{ea}}\left( M_{\rho_1} \mathcal K_{\partial\Omega,e} M_{\rho_1} \right). \end{eqnarray} Therefore \begin{eqnarray} && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}) \;=\; \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,e}), \\ && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}) \;=\; \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,o}), \end{eqnarray} and the equation for the odd component is obtained in the same manner. The proposition now follows from $\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*) = \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e})$ and $\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,e}^*) = \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\partial\Omega,e})$ and the analogous equalities for the odd components of these operators, where $\mathcal K_{\Gamma}^*$ and $\mathcal K_{\partial\Omega}^*$ are considered on $H^{-1/2}(\Gamma)$ and $H^{-1/2}(\partial\Omega)$. \end{proof} Equation (\ref{ess}) expresses the local manner in which the corners of a curvilinear polygon contribute to the essential spectrum of the Neumann-Poincar\'e operator. 
How this happens for an individual corner is illuminated by the explicit construction of Weyl sequences associated to each $\lambda\in\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma}^*)$, which is carried out by Bonnetier and Zhang~\cite{BonnetierZhang2017}. \section{Eigenvalues in the essential spectrum}\label{sec:embedded} The strategy to construct eigenvalues in the essential spectrum for the Neumann-Poincar\'e operator is to obtain a spectral-vicinity result of the form \begin{equation}\label{eq:dist} \mathrm{dist}(\lambda,\sigma(\mathcal K_{\Gamma,e}^*)) < \epsilon\,, \end{equation} in which $\lambda$ is an eigenvalue of $\mathcal K_{\Gamma_{\!0},e}^*$ and $\Gamma$ is a type $T$ perturbation of $\Gamma_{\!0}$, by applying Lemma~\ref{lemma:resolvent}. The angle of the corner of $\Gamma$ is chosen so that $\lambda$ does not lie within the essential spectrum of $\mathcal K_{\Gamma,e}^*$ but does lie inside the essential spectrum of $\mathcal K_{\Gamma,o}^*$. This will guarantee that $\mathcal K_{\Gamma,e}^*$ has an eigenvalue near $\lambda$ and that this eigenvalue lies in the essential spectrum of $\mathcal K_{\Gamma,o}^*$. An analogous procedure applies to eigenvalues of $\mathcal K_{\Gamma_{\!0},o}^*$. In fact, $\Gamma$ can be chosen so that several eigenvalues of $\mathcal K_{\Gamma_{\!0}}^*$ are perturbed into eigenvalues of $\mathcal K_{\Gamma}^*$ that lie within the essential spectrum. Our proof is only able to guarantee a finite number of eigenvalues in the essential spectrum for a given perturbation $\Gamma$. This is because the perturbation $\Gamma$ depends on the eigenfunction and on $\epsilon$ (smaller $\epsilon$ requires a corner of smaller arclength), and no uniform $\epsilon$ can be chosen to guarantee infinitely many distinct perturbed eigenvalues of the same sign. \begin{theorem}\label{thm:main} Let $\Gamma_{\!0}$ be a simple closed curve of class $C^{2,\alpha}$ in $\mathbb{R}^2$ that is symmetric about a line $L$. 
\smallskip \noindent (a) Suppose that the adjoint Neumann-Poincar\'e operator $\mathcal K_{\Gamma_{\!0}}^*$ has $m$ even eigenfunctions corresponding to eigenvalues $\lambda^e_j$ and $n$ odd eigenfunctions corresponding to eigenvalues $\lambda^o_j$ such that \begin{equation} \lambda^e_m<\dots<\lambda^e_1<0<\lambda^o_1<\dots<\lambda^o_n\,. \end{equation} There exists a Lipschitz-continuous perturbation $\Gamma$ of $\Gamma_{\!0}$ with the following properties: $\Gamma$ is symmetric about~$L$; $\Gamma$ possesses an outward-pointing corner and is otherwise of class $C^{2,\alpha}$; the associated operator $\mathcal K_{\Gamma}^*$ has $m$ even eigenfunctions corresponding to eigenvalues $\tilde\lambda^e_j$ and $n$ odd eigenfunctions corresponding to eigenvalues $\tilde\lambda^o_j$ such that \begin{equation}\label{perturbedevals1} \tilde\lambda^e_m<\dots<\tilde\lambda^e_1<0<\tilde\lambda^o_1<\dots<\tilde\lambda^o_n\,; \end{equation} these eigenvalues lie within the essential spectrum of $\mathcal K_{\Gamma}^*$. \smallskip \noindent (b) Suppose that the adjoint Neumann-Poincar\'e operator $\mathcal K_{\Gamma_{\!0}}^*$ has $m$ odd eigenfunctions corresponding to eigenvalues $\lambda^o_j$ and $n$ even eigenfunctions corresponding to eigenvalues $\lambda^e_j<1/2$ such that \begin{equation} \lambda^o_m<\dots<\lambda^o_1<0<\lambda^e_1<\dots<\lambda^e_n\,. 
\end{equation} There exists a Lipschitz-continuous perturbation $\Gamma$ of $\Gamma_{\!0}$ with the following properties: $\Gamma$ is symmetric about~$L$; $\Gamma$ possesses an inward-pointing corner and is otherwise of class $C^{2,\alpha}$; the associated operator $\mathcal K_{\Gamma}^*$ has $m$ odd eigenfunctions corresponding to eigenvalues $\tilde\lambda^o_j$ and $n$ even eigenfunctions corresponding to eigenvalues $\tilde\lambda^e_j$ such that \begin{equation}\label{perturbedevals2} \tilde\lambda^o_m<\dots<\tilde\lambda^o_1<0<\tilde\lambda^e_1<\dots<\tilde\lambda^e_n\,; \end{equation} these eigenvalues lie within the essential spectrum of $\mathcal K_{\Gamma}^*$. \end{theorem} \begin{proof} For part (a), observe that $-\lambda^e_m$ and $\lambda^o_n$ are less than $1/2$ because $\sigma(\mathcal K_{\Gamma}^*)$ is contained in the interval $(-1/2,1/2)$ except for the eigenvalue $1/2$. The eigenfunction for $1/2$ is even because it corresponds to the single-layer potential that is constant on $\Gamma$. Choose a real number $b$ such that $-b<\lambda^e_m<\lambda^o_n<b<1/2$, and let $\theta$ be the number such that $b=\theta/\pi - 1/2$, so that $\pi/2<\theta<\pi$. Let $\epsilon>0$ be given such that \begin{equation} \epsilon < \min\left\{ \textstyle\frac{1}{2} |\lambda^e_i - \lambda^e_{i+1} |,\, \frac{1}{2} |\lambda^o_j - \lambda^o_{j+1} |,\, |\lambda^e_1| ,\, |\lambda^o_1|,\, b-\lambda^o_n,\, b+\lambda^e_m \right\}, \quad i=1,\dots,m-1,\;\; j=1,\dots,n-1. \end{equation} Let $(U,\Delta_0,M)$ be a triple for $\Gamma_0$ guaranteed by Lemma~\ref{lemma:existence} for the given value of $\theta$. For this triple $(U,\Delta_0,M)$ and $\epsilon$, let $r(\lambda)$ and $\rho(\lambda)$ be the numbers stipulated in Lemma~\ref{lemma:resolvent} for $\lambda\in\{ \lambda^e_1,\dots,\lambda^e_m, \lambda^o_1,\dots,\lambda^o_n \}$, and let $r$ be the minimum of $r(\lambda)$ and $\rho$ be the minimum of $\rho(\lambda)$ over all these eigenvalues. 
Lemma~\ref{lemma:existence} provides a perturbation~$\Gamma$ of type $T$ such that (i) $\Gamma$ satisfies the Lipschitz Condition~\ref{cond:Lipschitz} subject to the triple $(U,\Delta_0,M)$, (ii) its corner has exterior angle $2\theta$, (iii) the conditions (\ref{condition1}) and~(\ref{condition2}) of Lemma~\ref{lemma:resolvent} are satisfied, (iv) $\Gamma$ is symmetric about $L$. For this Lipschitz curve $\Gamma$, Lemma~\ref{lemma:resolvent} guarantees that \begin{equation} \|(\mathcal K_{\Gamma}^* - \lambda )^{-1} \|_{\mathcal{S}_\Gamma} > \epsilon^{-1} \qquad \forall\, \lambda\in\{ \lambda^e_1,\dots,\lambda^e_m, \lambda^o_1,\dots,\lambda^o_n \}, \end{equation} in which $\mathcal K_{\Gamma}^*$ is considered as an operator in $H^{-1/2}_0(\Gamma)$. As $\mathcal K_{\Gamma}^*$ is self-adjoint in $H^{-1/2}_0(\Gamma)$ with respect to the $\mathcal{S}_\Gamma$ inner product, one obtains \begin{equation} \mathrm{dist}( \lambda,\, \sigma(\mathcal K_{\Gamma}^*)) < \epsilon \qquad \forall\, \lambda\in\{ \lambda^e_1,\dots,\lambda^e_m, \lambda^o_1,\dots,\lambda^o_n \}. \end{equation} Because of part (2) of Lemma~\ref{lemma:resolvent}, this inequality holds for the spectrum of the even and odd components of~$\mathcal K_{\Gamma}^*$, \begin{align} \mathrm{dist}( \lambda^e_j,\, \sigma(\mathcal K_{\Gamma,e}^*)) < \epsilon & \quad\text{for }\, j=1,\dots,m, \label{s1} \\ \mathrm{dist}( \lambda^o_j,\, \sigma(\mathcal K_{\Gamma,o}^*)) < \epsilon & \quad\text{for }\, j=1,\dots,n. \label{s2} \end{align} By Proposition~\ref{prop:evenodd} and the discussion preceding it, the essential spectra of these operators are \begin{eqnarray} && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*) = [0,b], \label{s3}\\ && \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}^*) = [-b,0], \label{s4} \end{eqnarray} with $b=\frac{\theta}{\pi}-\frac{1}{2}$. 
Because of (\ref{s1},\ref{s3}), the choice of $\epsilon$, and the self-adjointness of $\mathcal K_{\Gamma,e}^*$, there exist eigenvalues $\tilde\lambda^e_j$ for $j=1,\dots,m$ that satisfy~(\ref{perturbedevals1}). Similarly, because of (\ref{s2},\ref{s4}), there exist eigenvalues $\tilde\lambda^o_j$ for $j=1,\dots,n$ that satisfy~(\ref{perturbedevals1}). Because of the choices of $b$ and $\epsilon$, one has \begin{eqnarray} && \tilde\lambda^e_j \in \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}^*), \\ && \tilde\lambda^o_j \in \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*). \end{eqnarray} Since $\pi/2<\theta<\pi$, the corner is outward-pointing. Part (b) is proven analogously. In this case, $b=-\theta/\pi + 1/2$, so that $0<\theta<\pi/2$, and the corner is therefore inward-pointing. \end{proof} For any reflectionally symmetric curve of class $C^{2,\alpha}$ except for a circle, Theorem~\ref{thm:main} allows one to create arbitrarily many eigenvalues in the essential spectrum by appropriate Lipschitz perturbations. \begin{corollary} Let $\Gamma_{\!0}$ be a simple closed curve of class $C^{2,\alpha}$ in $\mathbb{R}^2$ that is symmetric about a line $L$ but that is not a circle. For any positive integer $n$, there exists a perturbation $\Gamma$ of type~$T$, also symmetric about $L$, such that $\mathcal K_{\Gamma}^*$ admits $n$ negative and $n$ positive eigenvalues that lie within the essential spectrum of $\mathcal K_{\Gamma}^*$. \end{corollary} \begin{proof} We begin with two facts. (1) Except for when $\Gamma_{\!0}$ is a circle, the operator $\mathcal K_{\Gamma_{\!0}}^*$ is always of infinite rank~\cite[\S7.3--7.4]{Shapiro1992}. (2) For each nonzero eigenvalue $\lambda$ of $\mathcal K_{\Gamma_{\!0}}^*$ corresponding to an even (odd) eigenfunction, $-\lambda$ is an eigenvalue of $\mathcal K_{\Gamma_{\!0}}^*$ corresponding to an odd (even) eigenfunction. 
The symmetry of the point spectrum is proved in~\cite[Theorem~2.1]{HelsingKangLim2016}, and the statement about the parities of the eigenfunctions can be obtained by augmenting the proof of that theorem, using the assumption that the eigenfunction corresponding to $\lambda$ is even (odd). Assume that $\Gamma_{\!0}$ is not a circle. Facts (1) and (2) together imply that both $\mathcal K_{\Gamma_{\!0},o}^*$ and $\mathcal K_{\Gamma_{\!0},e}^*$ are of infinite rank. This means that $\mathcal K_{\Gamma_{\!0},o}^*$ has infinitely many negative eigenvalues or infinitely many positive eigenvalues. Suppose the former case holds. Then by (2), $\mathcal K_{\Gamma_{\!0},e}^*$ has infinitely many positive eigenvalues. Thus, for any integer $n$, the hypotheses of part (b) of Theorem~\ref{thm:main} are satisfied. In the other case, the hypotheses of part (a) are satisfied. In either case, the conclusion of the corollary follows from the theorem. \end{proof} \noindent{\bfseries Example: A perturbed ellipse.}\hspace{0.5em} Consider the Neumann-Poincar\'e operator for an ellipse, whose eigenvalues and eigenfunctions are known explicitly~\cite[\S3]{ChungKangKimLee2014}. They take simple forms in the elliptic coordinates $(\varrho, \omega)$, which are related to the Cartesian coordinates $x = (x_1, x_2)$ by \begin{equation} x_1 = R \cos \omega \cosh \varrho, \quad x_2 = R \sin \omega \sinh \varrho, \quad \varrho > 0, \quad 0 \leq \omega \leq 2\pi. \end{equation} The set $E = \{ (\varrho, \omega) : \varrho = \varrho_0 \}$ is an ellipse with foci $( \pm R, 0)$. 
The non-one-half eigenvalues of the operator $\mathcal K_E^*$ are $\alpha_n$ and $-\alpha_n$ and the corresponding eigenfunctions are \begin{equation} \phi_n^+ := \Xi(\varrho_0, \omega)^{-1}\cos n\omega, \qquad \phi_n^- := \Xi(\varrho_0, \omega)^{-1}\sin n\omega \qquad (n \geq 1), \end{equation} in which \begin{equation}\label{alpha} \quad \alpha_n = \frac{1}{2e^{2n\varrho_0}}, \qquad \Xi (\varrho_0, \omega) = R \sqrt{\sinh^2 \varrho_0 + \sin^2 \omega\,} \qquad (n \geq 1). \end{equation} We make two observations. First, $\phi_n^{\pm}$ are in $L^2(E)$, as guaranteed by Lemma~\ref{lemma:L2}. Second, $\phi_n^+$ are even about the major axis of the ellipse, $\phi_n^-$ are odd about the major axis, $\phi_{2k+1}^+$ and $\phi_{2k}^-$ are odd about the minor axis, and $\phi_{2k}^+$ and $\phi_{2k+1}^-$ are even about the minor axis. That is to say, all eigenfunctions corresponding to positive (negative) eigenvalues are even (odd) with respect to the major axis, and they alternate between odd and even with respect to the minor axis. Let $L$ be the major axis of an ellipse $\Gamma_{\!0}=E$. The hypotheses of part (b) of Theorem~\ref{thm:main} are satisfied for any integers $m$ and $n$, and therefore one can perturb $\Gamma_{\!0}$ to a domain $\Gamma$ by attaching an inward-pointing corner with its tip on $L$ (according to Definition~\ref{def:Tperturbation}) that is small enough so that $\mathcal K_{\Gamma}^*$ has eigenvalues within the essential spectrum as described in the conclusion of part~(b). Now let $L$ be the minor axis of an ellipse $\Gamma_{\!0}=E$. Either of the hypotheses of parts (a) and (b) of the theorem can be satisfied for any $m$ and $n$, and thereby eigenvalues within the essential spectrum can be created for $\mathcal K_{\Gamma}^*$ according to the theorem. \section{Discussion}\label{sec:discussion} We end this article with some questions and observations. 
\smallskip {\bfseries 1.} Can $\mathcal K_{\Gamma}^*$ have infinitely many embedded eigenvalues, and might this actually occur typically? Our proof guarantees only a finite number of eigenvalues within the essential spectrum for a given Lipschitz type $T$ perturbation $\Gamma$ of $\Gamma_{\!0}$ because it establishes merely that the perturbation of an eigenvalue tends to zero as the size of the attached corner tends to zero. One requires tighter control over the variation of the eigenvalues in order to guarantee that an infinite sequence of eigenvalues tending to zero is retained, with the same sign, when passing from $\Gamma_{\!0}$ to $\Gamma$. A desirable result would be to prove that, for a symmetric curve $\Gamma$ with an outward-pointing corner, the positive part of $\mathcal K_{\Gamma,o}^*$ is compact and has infinite rank. This may not be unreasonable, seeing that $\mathcal K_{\Gamma,o}^*$ has non-positive essential spectrum. Such a result would guarantee an infinite sequence of positive eigenvalues of $\mathcal K_{\Gamma,o}^*$ which would overlap with the essential spectrum of $\mathcal K_{\Gamma,e}^*$. \smallskip {\bfseries 2.} What happens when the essential spectrum of $\mathcal K_{\Gamma,e}^*$ overlaps eigenvalues of $\mathcal K_{\Gamma_{\!0},e}^*$? We expect that such eigenvalues of $\mathcal K_{\Gamma_{\!0},e}^*$ would not be perturbed to eigenvalues of $\mathcal K_{\Gamma,e}^*$ but rather would do the generic thing and become resonances, which are poles of the analytic continuation of the resolvent of $\mathcal K_{\Gamma,e}^*$ onto another Riemann sheet. This type of resonance is demonstrated numerically in~\cite[Fig.~6]{HelsingKangLim2016}, where one observes resonances around the spectral values $\pm0.08$; this example is discussed in more detail in point~5 below. \smallskip {\bfseries 3.} Can one construct embedded eigenvalues of the Neumann-Poincar\'e operator in the absence of reflectional symmetry? 
\smallskip {\bfseries 4.} The technique of perturbing a reflectionally symmetric $C^{2,\alpha}$ curve by attaching corners to create embedded eigenvalues is not extensible to a curve that admits a different group of symmetries, at least not in a straightforward manner. Consider a curve $\Gamma$ with a finite cyclic rotational symmetry group $C_r$ of order~$r$. The Neumann-Poincar\'e operator is decomposed on the $r$ orthogonal eigenspaces of the action of $C_r$ on $H^{-1/2}(\Gamma)$, that is, the Hilbert-space decomposition \begin{equation}\label{Cr1} H^{-1/2}(\Gamma) = H^{-1/2,0}(\Gamma) \oplus \cdots \oplus H^{-1/2,r-1}(\Gamma) \end{equation} into eigenspaces of $C_r$ induces a decomposition \begin{equation}\label{Cr2} \mathcal K_{\Gamma}^* = {\mathcal K_{\Gamma,0}^*} \oplus \cdots \oplus {\mathcal K_{\Gamma,r-1}^*}\,. \end{equation} If $\Gamma$ has exactly $r$ small corners that are cyclically permuted under $C_r$, the essential spectrum of each of these component operators is a symmetric interval $[-b,b]$. This is in contrast to the case of reflectional symmetry, as was seen earlier, where $\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}^*)=[-b,0]$ and $\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*)=[0,b]$ (for an outward-pointing corner); and in contrast to the rotationally invariant surface with a conical point in $\mathbb{R}^3$ investigated by Helsing and Perfekt~\cite[Theorem~3.8,\,Fig.~5]{HelsingPerfekt2017}, in which different Fourier components of the Neumann-Poincar\'e operator have different essential spectra. \smallskip {\bfseries 5.} What if a corner is attached to a smooth curve without smoothing out the points of attachment? The additional corners at the attachment points will contribute to the essential spectrum of the Neumann-Poincar\'e operator of the perturbed domain. A nice example of this is provided by a numerical computation of Helsing, Kang and Lim in~\cite[Fig.~6]{HelsingKangLim2016}. 
There, the $C^{2,\alpha}$ curve is an ellipse $\Gamma_{\!0}$, to which an outward corner is attached symmetrically with respect to the minor axis $L$ of symmetry of the ellipse to create a perturbed Lipschitz curve $\Gamma$, illustrated in Fig.~\ref{fig:ellipse}. Two additional inward corners not lying on $L$ are created by this attachment, and these two corners are positioned symmetrically about $L$. The computation in~\cite{HelsingKangLim2016} demonstrates exactly one positive embedded eigenvalue with odd eigenfunction and exactly one negative embedded eigenvalue with even eigenfunction. In fact, this is expected based on the eigenvalues of $\mathcal K_{\Gamma_{\!0}}^*$ and the essential spectrum of~$\mathcal K_{\Gamma}^*$. Specifically, we will show that (i)~the essential spectrum of the even and odd components of $\mathcal K_{\Gamma}^*$, created by the three corners, are \begin{equation}\label{ellipseessential} \renewcommand{\arraystretch}{1.1} \left. \begin{array}{lll} \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}^*) &=& \textstyle[-\frac{1}{4},\frac{1}{8}-\eta], \\ \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*) &=& \textstyle[-\frac{1}{8}+\eta,\frac{1}{4}], \end{array} \right. \end{equation} in which $\eta$ is a tiny number with $0<\eta<1/8$, (ii)~the largest four eigenvalues (see~\ref{alpha}) of $\mathcal K_{\Gamma_{\!0}}^*$ are equal to $\pm\alpha_1=\pm 1/5$ and $\pm\alpha_2=\pm2/25$, and (iii)~the eigenfunction for eigenvalue $1/5$ is odd, and that for $-1/5$ is even. Theorem~\ref{thm:main} and the supporting lemmas can be modified to handle this example, in which the perturbed part of the curve has more than one corner. 
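The specific numbers appearing in items (i)--(iii) can be double-checked directly from the eigenvalue formula~(\ref{alpha}) and the corner rule $b=\theta/\pi-1/2$. The following short Python sketch is purely illustrative; it only re-derives values already quoted in the text:

```python
import math

# Ellipse NP eigenvalues (equation (alpha)): alpha_n = 1 / (2 e^{2 n rho_0}),
# with rho_0 = artanh(3/7) for the ellipse used in this example.
rho0 = math.atanh(3.0 / 7.0)

def alpha(n):
    return 1.0 / (2.0 * math.exp(2.0 * n * rho0))

# Item (ii): the largest eigenvalue pairs are +-1/5 and +-2/25.
assert abs(alpha(1) - 1.0 / 5.0) < 1e-12
assert abs(alpha(2) - 2.0 / 25.0) < 1e-12

# Item (i): a corner with half exterior angle theta contributes the endpoint
# b = theta/pi - 1/2; the outward corner has theta_1 = 3*pi/4, giving b = 1/4.
def b(theta):
    return theta / math.pi - 0.5

assert abs(b(3.0 * math.pi / 4.0) - 0.25) < 1e-12
```

In particular, $e^{2\varrho_0}=\frac{1+3/7}{1-3/7}=\frac52$, so $\alpha_1=\frac15$ and $\alpha_2=\frac{1}{2\cdot(5/2)^2}=\frac{2}{25}$.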
By making the corner attachment small enough so that $\mathcal K_{\Gamma,o}^*$ has a (nonembedded) eigenvalue sufficiently near $1/5$ and $\mathcal K_{\Gamma,e}^*$ has a (nonembedded) eigenvalue sufficiently near $-1/5$, these eigenvalues of~$\mathcal K_{\Gamma}^*$ are contained within the essential spectrum of~$\mathcal K_{\Gamma}^*$ in view of~(\ref{ellipseessential}). And the corner attachment can be made small enough such that $\alpha_2=2/25<1/8-\eta$, so that the next eigenvalues in the sequence lie within the essential spectra of both $\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,o}^*)$ and $\sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}^*)$ and thus are not expected to be perturbed to eigenvalues of $\mathcal K_{\Gamma}^*$. Items (ii) and (iii) are results of the discussion on ellipses at the end of section~\ref{sec:embedded}, using $\varrho_0=\tanh^{-1}(3/7)$. Item~(i) can be proved as follows. Modify the proof of Proposition~\ref{prop:evenodd} by letting the cutoff function $\rho_1$ be localized about the one outward corner lying on~$L$ and letting $\rho_2=\rho_2^+ + \rho_2^-$ be a sum of two cutoff functions, one localized about each of the two inward corners not lying on~$L$. As before, one has \begin{equation} \mathcal K_{\Gamma,e} \;=\; \sum\limits_{0\leq i,j \leq n} M_{\rho_i} \mathcal K_{\Gamma,e} M_{\rho_j}, \end{equation} with $\rho_0+\rho_1+\rho_2=1$, and the essential spectrum is \begin{equation} \sigma_\text{\hspace{-1pt}ess}(\mathcal K_{\Gamma,e}) \;=\; \sigma_{\text{ea}} \left( M_{\rho_1} \mathcal K_{\Gamma,e} M_{\rho_1} \right) \cup \sigma_{\text{ea}} \left( M_{\rho_2} \mathcal K_{\Gamma,e} M_{\rho_2} \right). \end{equation} The half exterior angle of the outward corner is $\theta_1=3\pi/4$, and thus $\sigma_{\text{ea}} \left( M_{\rho_1} \mathcal K_{\Gamma,e} M_{\rho_1} \right)$ is equal to the positive interval $[0,1/4]$ since this operator acts on functions that are even with respect to~$L$. 
The operator $M_{\rho_2} \mathcal K_{\Gamma,e} M_{\rho_2}$ also acts on functions that are even with respect to~$L$, but since the inward corners do not lie on~$L$, the symmetry of a function about $L$ does not restrict the function near either of the inward corners. Thus the contribution to the essential spectrum coming from the inward corners is the full interval $[-b,b]$, with $b=1/8-\eta>0$ since the half exterior angle is a little bigger than~$3\pi/8$; that is to say, \begin{equation} \sigma_{\text{ea}} \left( M_{\rho_2} \mathcal K_{\Gamma,e} M_{\rho_2} \right) = \sigma_{\text{ea}} \left( M_{\rho_2^+} \mathcal K_{\Gamma} M_{\rho_2^+} \right) = [-\textstyle\frac{1}{8}+\eta,\textstyle\frac{1}{8}-\eta]. \end{equation} Likewise, $\sigma_{\text{ea}} \left( M_{\rho_2} \mathcal K_{\Gamma,o} M_{\rho_2} \right)=[-\textstyle\frac{1}{8}+\eta,\textstyle\frac{1}{8}-\eta]$. To make rigorous the assumption above that the eigenvalues $\pm\alpha_1$ of $\mathcal K_{\Gamma_{\!0}}^*$ are perturbed into eigenvalues of $\mathcal K_{\Gamma}^*$, notice that Lemma~\ref{lemma:resolvent} does not rely on the smoothness of the attachment of the replacement curve $D$ to $\Gamma_{\!0}$, so the resolvent bound established by that Lemma holds for this example. In view of the essential spectra~(\ref{ellipseessential}) of the even and odd components, one can establish the existence of the perturbed eigenvalues in a manner following the proof of Theorem~\ref{thm:main}. \begin{figure} \centerline{\scalebox{0.33}{\includegraphics{fig_ellipse.pdf}}} \caption{\small This is the Lipschitz perturbation $\Gamma$ of an ellipse treated numerically in~\cite[Fig.~6]{HelsingKangLim2016}. An outward-pointing corner replaces a small section of the ellipse centered around its minor axis $L$. The points at which the corner attaches to the ellipse introduce two inward-pointing corners. 
The lines $L^{\!-}$ and $L^{\!+}$ bisect these two corners.} \label{fig:ellipse} \end{figure} \bigskip \noindent{\bfseries Acknowledgement.} This material is based upon work supported by the National Science Foundation under Grant No. DMS-1814902.
\section{Introduction} Popular recurrent neural networks in NLP, such as the Gated Recurrent Unit \citep{gru} and Long Short-Term Memory \citep{lstm}, compute sentence representations by reading their words in a sequence. In contrast, the Tree-LSTM architecture \citep{tree_lstm} processes words according to an input parse tree, and manages to achieve improved performance on a number of linguistic tasks. Recently, \citet{yogatama_rl_16}, \citet{maillard}, and \citet{choi} all proposed sentence embedding models which work similarly to a Tree-LSTM, but do not require any parse trees as input. These models function without the assistance of an external automatic parser, and without ever being given any syntactic information as supervision. Rather, they induce parse trees by training on a downstream task such as natural language inference. At the heart of these models is a mechanism to assign trees to sentences -- effectively, a natural language parser. \citet{williams} have recently investigated the tree structures induced by two of these models, trained for a natural language inference task. Their analysis showed that \citet{yogatama_rl_16} learns mostly trivial left-branching trees, and has inconsistent performance; while \citet{choi} outperforms all baselines (including those using trees from conventional parsers), but learns trees that do not correspond to those of conventional treebanks. In this paper, we propose a new latent tree learning model. Similarly to \citet{yogatama_rl_16}, we base our approach on shift-reduce parsing. Unlike their work, our model is trained via standard backpropagation, which is made possible by exploiting beam search to obtain an approximate gradient. We show that this model performs well compared to baselines, and induces trees that are not as trivial as those learned by the Yogatama et al. model in the experiments of \citet{williams}. 
This paper also presents an analysis of the trees learned by our model, in the style of \citet{williams}. We further analyse the trees learned by the model of \citet{maillard}, which had not yet been done, and perform evaluations on both the SNLI data \citep{snli} and the MultiNLI data \citep{mnli}. The former corpus had not been used for the evaluation of trees of \citet{williams}, and we find that it leads to more consistent induced trees. \section{Related work} \label{sec:related_work} The first neural model which learns to both parse a sentence and embed it for a downstream task is by \citet{rae}. The authors train the model's parsing component on an auxiliary task, based on recursive autoencoders, while the rest of the model is trained for sentiment analysis. \citet{bowman_spinn_16} propose the ``Stack-augmented Parser-Interpreter Neural Network'', a model which obtains syntax trees using an integrated shift-reduce parser (trained on gold-standard trees), and uses the resulting structure to drive composition with Tree-LSTMs. \citet{yogatama_rl_16} is the first model to jointly train its parsing and sentence embedding components. They base their model on shift-reduce parsing. Their parser is not differentiable, so they rely on reinforcement learning for training. \citet{maillard} propose an alternative approach, inspired by CKY parsing. The algorithm is made differentiable by using a soft-gating approach, which approximates discrete candidate selection by a probabilistic mixture of the constituents available in a given cell of the chart. This makes it possible to train with backpropagation. \citet{choi} use an approach similar to easy-first parsing. The parsing decisions are discrete, but the authors use the Straight-Through Gumbel-Softmax estimator \citep{gumbel} to obtain an approximate gradient and are thus able to train with backpropagation. 
\citet{williams} investigate the trees produced by \citet{yogatama_rl_16} and \citet{choi} when trained on two natural language inference corpora, and analyse the results. They find that the former model induces almost entirely left-branching trees, while the latter performs well but has inconsistent trees across re-runs with different parameter initializations. A number of other neural models have also been proposed which create a tree encoding during parsing, but unlike the above architectures rely on traditional parse trees. \citet{le_forest_15} propose a sentence embedding model based on CKY, taking as input a parse forest from an automatic parser. \citet{dyer_rnng_16} propose RNNG, a probabilistic model of phrase-structure trees and sentences, with an integrated parser that is trained on gold standard trees. \section{Models} \label{sec:models} \paragraph{CKY} The model of \citet{maillard} is based on chart parsing, and effectively works like a CKY parser \cite{C,K,Y} using a grammar with a single nonterminal $A$ with rules $A \to A~A$ and $A \to \alpha$, where $\alpha$ is any terminal. The parse chart is built bottom-up incrementally, like in a standard CKY parser. When ambiguity arises, due to the multiple ways to form a constituent, all options are computed using a Tree-LSTM, and scored. The constituent is then represented as a weighted sum of all possible options, using the normalised scores as weights. In order for this weighted sum to approximate a discrete selection, a temperature hyperparameter is used in the softmax. This process is repeated for the whole chart, and the sentence representation is given by the topmost cell. We noticed in our experiments that the weighted sum still occasionally assigned non-trivial weight to more than one option. The model was thus able to utilize multiple inferred trees, rather than a single one, which would have potentially given it an advantage over other latent tree models. 
Hence for fairness, in our experiments we replace the softmax-with-temperature of \citet{maillard} with a softmax followed by a straight-through estimator \citep{ste}. In the forward pass, this approach is equivalent to an argmax function, while in the backward pass it is equivalent to a softmax. Effectively, this means that a single tree is selected during forward evaluation, but the training signal can still propagate to every path during backpropagation. This change did not noticeably affect performance on development data. \paragraph{Beam Search Shift-Reduce} We propose a model based on beam search shift-reduce parsing (BSSR). The parser works with a queue, which holds the embeddings for the nodes representing individual words which are still to be processed; and a stack, which holds the embeddings of the nodes which have already been computed. A standard binary Tree-LSTM function \citep{tree_lstm} is used to compute the $d$-dimensional embeddings of nodes: \noindent \begin{align*} \begin{bmatrix}\textstyle\bm{i}\\ \textstyle\bm{f}_L\\ \textstyle\bm{f}_R\\ \textstyle\bm{u}\\ \textstyle\bm{o}\end{bmatrix} &= \mathbf{W}\bm{w} + \mathbf{U} \bm{h}_L + \mathbf{V}\bm{h}_R + \bm{b},\\ \bm{c}\quad &= \bm{c}_L \odot \sigma (\bm{f}_L) + \bm{c}_R \odot \sigma (\bm{f}_R)\\ &\qquad+ \tanh (\bm{u}) \odot \sigma (\bm{i}),\\ \bm{h}\quad &= \sigma(\bm{o}) \odot \tanh ( \bm{c}), \end{align*} where $\mathbf{W},\mathbf{U},\mathbf{V}$ are learned $5d\times d$ matrices, and $\bm{b}$ is a learned $5d$ vector. The $d$-dimensional vectors $\sigma(\bm{i}), \sigma(\bm{f}_L), \sigma(\bm{f}_R)$ are known as the \emph{input} gate and the \emph{left-} and \emph{right-forget} gates, respectively. $\sigma(\bm{o})$ and $\tanh(\bm{u})$ are known as the \emph{output} gate and \emph{candidate update}. The vector $\bm{w}$ is a word embedding, while $\boldsymbol{h}_L,\boldsymbol{h}_R$ and $\boldsymbol{c}_L,\boldsymbol{c}_R$ are the children's $\boldsymbol{h}$- and $\boldsymbol{c}$-states.
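The Tree-LSTM cell can be transcribed almost literally from these equations. A minimal NumPy sketch, with a hypothetical dimension $d=4$ and random weights:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                   # embedding size (hypothetical)
W, U, V = (rng.standard_normal((5 * d, d)) * 0.1 for _ in range(3))
b = np.zeros(5 * d)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tree_lstm(w, hL, cL, hR, cR):
    """Binary Tree-LSTM cell, following the displayed equations."""
    z = W @ w + U @ hL + V @ hR + b
    i, fL, fR, u, o = np.split(z, 5)    # input, left/right forget, update, output
    c = cL * sigmoid(fL) + cR * sigmoid(fR) + np.tanh(u) * sigmoid(i)
    h = sigmoid(o) * np.tanh(c)
    return h, c

zero = np.zeros(d)
# Leaves: w is a word embedding, child states are zero (as in the model description).
hA, cA = tree_lstm(rng.standard_normal(d), zero, zero, zero, zero)
hB, cB = tree_lstm(rng.standard_normal(d), zero, zero, zero, zero)
# Internal node: w = 0, children are the two leaves.
h, c = tree_lstm(zero, hA, cA, hB, cB)
```

Since $\bm{h}=\sigma(\bm{o})\odot\tanh(\bm{c})$, every component of the node embedding is bounded in $(-1,1)$.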
At the beginning, the queue contains embeddings for the nodes corresponding to single words. These are obtained by computing the Tree-LSTM with $\bm{w}$ set to the word embedding, and $\bm{h}_{L/R},\bm{c}_{L/R}$ set to zero. When a \textsc{shift} action is performed, the topmost element of the queue is popped, and pushed onto the stack. When a \textsc{reduce} action is performed, the top two elements of the stack are popped. A new node is then computed as their parent, by passing the children through the Tree-LSTM, with $\bm{w}=0$. The new node is then pushed onto the stack. Parsing actions are scored with a simple multi-layer perceptron, which looks at the top two stack elements and the top queue element: \begin{align*} \bm{r} &= \mathbf{W}_{s1}\cdot\bm{h}_{s1} + \mathbf{W}_{s2}\cdot\bm{h}_{s2} + \mathbf{W}_{q}\cdot\bm{h}_{q1},\\ \bm{p} &= \operatorname{softmax}\,(\bm{a} + \mathbf{A} \cdot \tanh \bm{r}), \end{align*} where $\bm{h}_{s1},\bm{h}_{s2},\bm{h}_{q1}$ are the $\bm{h}$-states of the top two elements of the stack and the top element of the queue, respectively. The three $\mathbf{W}$ matrices have dimensions $d\times d$ and are learned; $\bm{a}$ is a learned 2-dimensional vector; and $\mathbf{A}$ is a learned $2\times d$ matrix. The final scores are given by $\log \bm{p}$, and the best action is greedily selected at every time step. The sentence representation is given by the $\bm{h}$-state of the top element of the stack after $2n-1$ steps. In order to make this model trainable with gradient descent, we use beam search to select the $b$ best action sequences, where the score of a sequence of actions is given by the sum of the scores of the individual actions. The final sentence representation is then a weighted sum of the sentence representations from the elements of the beam. The weights are given by the respective scores of the action sequences, normalised by a softmax and passed through a straight-through estimator.
This is equivalent to having an argmax on the forward pass, which discretely selects the top-scoring beam element, and a softmax in the backward pass. \begin{table}[t!] \small \centering \begin{tabular}{lrr} \toprule \textbf{Model} & \textbf{SNLI} & \textbf{MultiNLI+} \\ \midrule \multicolumn{3}{c}{Prior work: Baselines} \\ \midrule 100D LSTM (Yogatama) & 80.2 & ---\\ 300D LSTM (Williams) & 82.6 & 69.1 \\ 100D Tree-LSTM (Yogatama) & 78.5 & --- \\ 300D SPINN (Williams) & 82.2 & 67.5 \\ \midrule \multicolumn{3}{c}{Prior work: Latent Tree Models} \\ \midrule 100D ST-Gumbel (Choi) & 81.9 & --- \\ 300D ST-Gumbel (Williams) & 83.3 & \textbf{69.5} \\ 300D ST-Gumbel$^{\dagger}$ (Williams) & \textbf{83.7} & 67.5 \\ 100D CKY (Maillard) & 81.6 & --- \\ 100D RL-SPINN (Yogatama) & 80.5 & --- \\ 300D RL-SPINN$^{\dagger}$ (Williams) & 82.3 & 67.4 \\ \midrule \multicolumn{3}{c}{This work: Latent Tree Models} \\ \midrule 100D CKY (Ours) & 82.2 & 69.1 \\ 100D BSSR (Ours) & 83.0 & 69.0 \\ \bottomrule \end{tabular} \caption{SNLI and MultiNLI (matched) test set accuracy. $\dagger$: results are for the model variant without the leaf RNN transformation.} \label{tbl:acc} \end{table} \begin{table*}[t!]
\small \centering \begin{tabular}{clcrrrrrr} \toprule & & & \multicolumn{6}{c}{\textbf{F1 w.r.t.}} \\ & & & \multicolumn{2}{c}{\textbf{Left Branching}} & \multicolumn{2}{c}{\textbf{Right Branching}} & \multicolumn{2}{c}{\textbf{Stanford Parser}} \\ \textbf{Dataset} &\textbf{Model} & \textbf{Self-F1} & \multicolumn{1}{c}{$\bm{\mu}$ ($\bm{\sigma}$)} & \textbf{max} & \multicolumn{1}{c}{$\bm{\mu}$ ($\bm{\sigma}$)} & \textbf{max} & \multicolumn{1}{c}{$\bm{\mu}$ ($\bm{\sigma}$)} & \textbf{max}\\ \midrule MultiNLI+ & 300D SPINN (Williams) & 71.5 & 19.3 (0.4) & 19.8 & 36.9 (3.4) & \textbf{42.6} & \textbf{70.2} (3.6) & \textbf{74.5} \\ MultiNLI+ & 300D ST-Gumbel (Williams) & 49.9 & 32.6 (2.0) & 35.6 & \textbf{37.5} (2.4) & {40.3} & 23.7 (0.9) & 25.2 \\ MultiNLI+ & 300D ST-Gumbel$^\dagger$ (Williams) & 41.2 & 30.8 (1.2) & 32.3 & 35.6 (3.3) & 39.9 & 27.5 (1.0) & 29.0 \\ MultiNLI+ & 300D RL-SPINN$^\dagger$ (Williams) & \textbf{98.5} & \textbf{99.1} (0.6) & \textbf{99.8} & 10.7 (0.2) & 11.1 & 18.1 (0.1) & 18.2 \\ MultiNLI+ & 100D CKY (Ours) & 45.9 & 32.9 (1.9) & 35.1 & 31.5 (2.3) & 35.1 & 23.7 (1.1) & 25.0\\ MultiNLI+ & 100D BSSR (Ours) & 46.6 & 40.6 (6.5) & 47.6 & 24.2 (6.0) & 27.7 & 23.5 (1.8) & 26.2\\ MultiNLI+ & \emph{Random Trees} (Williams) & 32.6 & 27.9 (0.1) & 27.9 & 28.0 (0.1) & 28.1 & 27.0 (0.1) & 27.1\\ \midrule SNLI & 100D RL-SPINN (Yogatama) & --- & \multicolumn{1}{c}{---} & 41.4 & \multicolumn{1}{c}{---} & 19.9 & \multicolumn{1}{c}{---} & \textbf{41.7} \\ SNLI & 100D CKY (Ours) & 59.2 & 43.9 (2.2) & 46.9 & \textbf{33.7} (2.6) & \textbf{36.7} & 30.3 (1.1) & 32.1 \\ SNLI & 100D BSSR (Ours) & \textbf{60.0} & \textbf{48.8} (5.2) & \textbf{53.9} & 26.5 (6.9) & 34.0 & \textbf{32.8} (3.5) & 36.4\\ SNLI & \emph{Random Trees} (Ours) & 35.9 & 32.3 (0.1) & 32.4 & 32.5 (0.1) & 32.6 & 32.3 (0.1) & 32.5\\ \bottomrule \end{tabular} \caption{Unlabelled F1 scores of the trees induced by various models against: other runs of the same model, fully left- and right-branching trees, 
and Stanford Parser trees provided with the datasets. The baseline results on MultiNLI are from \citet{williams}. $\dagger$: results are for the model variant without the leaf RNN transformation.} \label{tbl:f1} \end{table*} \section{Experimental Setup} \label{sec:setup} \paragraph{Data} To match the settings of \citet{maillard}, we run experiments with the SNLI corpus \citep{snli}. We additionally run a second set of experiments with the MultiNLI data \citep{mnli}, and to match \citet{williams} we augment the MultiNLI training data with the SNLI training data. We call this augmented training set \emph{MultiNLI+}. For the MultiNLI+ experiments, we use the \emph{matched} versions of the development and test sets. We use pre-trained 100D GloVe embeddings\footnote{\scriptsize \url{https://nlp.stanford.edu/projects/glove/}} \citep{glove} for performance reasons, and fine-tune them during training. Unlike \citet{williams}, we do not use a bidirectional leaf transformation. Models are optimised with Adam \citep{adam}, and we train five instances of every model. For BSSR, we use a beam size of 50, and let it linearly decrease to its final size of 5 over the first two epochs. \paragraph{Setup} To assign the labels of \emph{entails}, \emph{contradicts}, or \emph{neutral} to the pairs of sentences, we follow \citet{yogatama_rl_16} and concatenate the two sentence embeddings, their element-wise product, and their squared Euclidean distance into a vector $\bm{v}$. We then calculate $\bm{q} = \operatorname{ReLU}\,(\mathbf{C}\cdot\bm{v}+\bm{c})$, where $\mathbf{C}$ is a $200\times 4d$ learned matrix and $\bm{c}$ a 200-dimensional learned bias; and finally predict $p(y=c\mid\bm{q}) \propto \operatorname{exp}\,(\mathbf{B}\cdot\bm{q}+\bm{b})$ where $\mathbf{B}$ is a $3\times 200$ matrix and $\bm{b}$ is 3-dimensional. 
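The classification head can be sketched directly from this description. Note that reading the ``squared Euclidean distance'' term element-wise makes $\bm{v}$ a $4d$-dimensional vector, consistent with the stated $200\times 4d$ shape of $\mathbf{C}$; that reading is our assumption, and the dimensions and random weights below are for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 100                                  # sentence embedding size
C = rng.standard_normal((200, 4 * d)) * 0.01
c = np.zeros(200)
B = rng.standard_normal((3, 200)) * 0.01
bb = np.zeros(3)

def predict(u1, u2):
    # Feature vector: concatenation, element-wise product, and element-wise
    # squared difference, so that v has dimension 4d (matching C's 200 x 4d).
    v = np.concatenate([u1, u2, u1 * u2, (u1 - u2) ** 2])
    q = np.maximum(0.0, C @ v + c)       # ReLU hidden layer
    logits = B @ q + bb
    p = np.exp(logits - logits.max())
    return p / p.sum()                   # p(y | q) over {entails, contradicts, neutral}

probs = predict(rng.standard_normal(d), rng.standard_normal(d))
```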
\section{Experiments} \label{sec:experiments} For each model and dataset, we train five instances using different random initialisations, for a total of $2\times2\times5=20$ instances. \paragraph{NLI Accuracy} We measure SNLI and MultiNLI test set accuracy for CKY and BSSR. The aim is to ensure that they perform reasonably, and are in line with other latent tree learning models of a similar size and complexity. Results for the best models, chosen based on development set performance, are reported in Table \ref{tbl:acc}. While our models do not reach the state of the art, they perform at least as well as other latent tree models using 100D embeddings, and are competitive with some 300D models. They also outperform the 100D Tree-LSTM of \citet{yogatama_rl_16}, which is given syntax trees, and match or outperform 300D SPINN, which is explicitly trained to parse. \paragraph{Self-consistency} Next, we examine the consistency of the trees produced for the development sets. Adapting the code of \citet{williams}, we measure the models' \emph{self F1}, defined as the unlabelled F1 between the trees produced by two instances of the same model (given by different random initialisations), averaged over all possible pairs. Results are shown in Table \ref{tbl:f1}. In order to test whether BSSR and CKY learn similar grammars, we calculate the \emph{inter-model F1}, defined as the unlabelled F1 between instances of BSSR and CKY trained on the same data, averaged over all possible pairs. We find an average F1 of 42.6 for MultiNLI+ and 55.0 for SNLI, both above the random baseline. Our self F1 results are all above the baseline of random trees. For MultiNLI+, they are in line with ST-Gumbel. Remarkably, the models trained on SNLI are noticeably more self-consistent. This shows that the specifics of the training data play an important role, even when the downstream task is the same.
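The unlabelled F1 used in these comparisons can be computed from constituent spans. A minimal sketch, assuming trees are represented as nested tuples of tokens:

```python
def spans(tree, start=0):
    """Return (end, set of (start, end) spans of the constituents) for a
    binary tree given as nested tuples, with leaves being single tokens."""
    if not isinstance(tree, tuple):          # leaf: a single token
        return start + 1, set()
    end, left = spans(tree[0], start)
    end, right = spans(tree[1], end)
    return end, left | right | {(start, end)}

def unlabelled_f1(t1, t2):
    """Unlabelled F1 between the constituent span sets of two trees."""
    _, s1 = spans(t1)
    _, s2 = spans(t2)
    return 2 * len(s1 & s2) / (len(s1) + len(s2))

# Toy example: a fully left-branching vs a fully right-branching tree
# over the same four tokens share only the whole-sentence span.
left_branching = ((("a", "b"), "c"), "d")
right_branching = ("a", ("b", ("c", "d")))
f1 = unlabelled_f1(left_branching, right_branching)
```

In this toy case each tree has three constituent spans, of which only the full span $(0,4)$ is shared, giving an F1 of $1/3$; self F1 then averages this quantity over pairs of model instances.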
A possible explanation is that MultiNLI has longer sentences, as well as multiple genres, including telephone conversations which often do not constitute full sentences \citep{mnli}. This would require the models to learn how to parse a wide variety of styles of data. It is also interesting to note that the inter-model F1 scores are not much lower than the self F1 scores. This shows that, given the same training data, the grammars learned by the two different models are not much more different than the grammars learned by two instances of the same model. \paragraph{F1 Scores} Finally, we investigate whether these models learn grammars that are recognisably left-branching, right-branching, or similar to the trees produced by the Stanford Parser, which are included in both datasets. We report the unlabelled F1 between these and the trees from our models in Table \ref{tbl:f1}, averaged over the five model instances. We show mean, standard deviation, and maximum. We find a slight preference from BSSR and the SNLI-trained CKY towards left-branching structures. Our models do not learn anything that resembles the trees from the Stanford Parser, and have an F1 score with them which is at or below the random baseline. Our results match those of \citet{williams}, which show that whatever these models learn, it does not resemble PTB grammar. \section{Conclusions} First, we proposed a new latent tree learning model based on a shift-reduce parser. Unlike a previous model based on the same parsing technique, we showed that our approach does not learn trivial trees, and performs competitively on the downstream task. Second, we analysed the trees induced by our shift-reduce model and a latent tree model based on chart parsing. Our results confirmed those of previous work on different models, showing that the learned grammars do not resemble PTB-style trees \citep{williams}.
Remarkably, we saw that the two different models tend to learn grammars which are not much more different than those learned by two instances of the same model. Finally, our experiments highlight the importance of the choice of training data used for latent tree learning models, even when the downstream task is the same. Our results suggest that MultiNLI, which has on average longer sentences coming from different genres, might be hindering the current models' ability to learn consistent grammars. For future work investigating this phenomenon, it may be interesting to train models using only the written genres of MultiNLI, or MultiNLI without the SNLI corpus. \section*{Acknowledgments} We are grateful to Chris Dyer for several productive discussions. We would like to thank the anonymous reviewers for their helpful comments. \bibliographystyle{acl_natbib}
\section{Introduction} { Random walks on a plane, whether simple, biased, or correlated, have a long history of being employed by ecologists to model the movement of animals, micro-organisms, and cells on a small time scale. By the functional Central Limit Theorem, from an appropriate distance any random walk (under some mild regularity conditions) looks like a Brownian Motion (BM). So, it is not surprising that recently diffusions are often used to model animal movement on a large time scale \citep[e.g.,][]{Prei:etal:mode:2004, Tilles:Petr:2016}. An excellent review on applications of random walks and diffusions in this area of research can be found in \citet{Codling:etal:2008}. \citet{Horn:etal:2007} introduced the Brownian bridge movement model (BBMM) that, in essence, assumes that animal movement is perpetual and described by a BM\@. Pauses in animal movement (on a small time scale) were first introduced in \citet{Othmer:1988} where the dispersal of cells or organisms is modeled by a process that comprises a sequence of alternating pauses and jumps. The moving-resting process introduced in \citet{Yan:etal:2014} and further investigated in \citet{Pozd:etal:2017} allows an animal to have two states, moving and resting. In the moving state, the motion is characterized by a BM; in the resting state, there is no movement. The duration in either moving or resting states is assumed to be exponentially distributed. Properties and fitting of the moving-resting model are based on results for telegraph processes (the alternating renewal process or the on-off process) that were obtained in \citet{Perr:Stad:Zack:firs:1999}, \citet{DiCrescenzo:2001}, \citet{Stad:Zack:tele:2004}, and \citet{Zack:gene:2004}. The distribution of total time spent in a state plays a critical role in applications driven by a telegraph process \citep{Zacks:2012}. 
In particular, a BM governed by a telegraph process is an active area of research such as being recently employed in continuous-time option pricing theory \citep[e.g.,][]{DiCrescenzo:Pellerey:2002, Kolesnik:Ratanov:2013, DiCrescenzo:etal:2014, DiCrescenzo:Zhacs:2015}. In animal movement ecology, it is reasonable to assume that there are very different explanations for why a predator is not moving. For example, an animal might spend time resting (as in \citet{Yan:etal:2014}), consuming a prey item, or denning. Resting can be assumed to not last even a single day. However, some predators that can kill a (relatively) large prey item evolved highly elastic guts, and they consume the kill by repeatedly gorging and digesting over a prolonged period called {\it handling}. For example, mountain lions ({\it Puma concolor}) might remain at a kill for days. Both resting and handling are periodic in the time scales of this model but denning is not, and it is inapplicable to male mountain lions in any case. Therefore, this model concerns only two non-moving activities, resting and handling, and it is clear that their durations must be different. This observation motivates our model. In the new model we have one moving state and two motionless states. From a motionless state one always switches to the moving state. Nonetheless, when moving ends, the motionless state type is chosen randomly. For tractability, all the durations (or holding times) are exponentially distributed. We will call this continuous-time process a {\it moving-resting-handling process}, or {\it MRH process}. An extension of the telegraph process to an alternating process with three states is studied in \citet{Bshouty:etal:2012}. The difference is that in \citet{Bshouty:etal:2012} three states alternate deterministically within a renewal cycle. In our case we have only two states within a renewal cycle but one of the motionless states is chosen at random. 
} In practice, an MRH process is typically observed at discrete, possibly irregularly spaced time points. Estimation of the MRH process parameters is challenging because the states are unobserved, and the observed sequence is not Markov. Our estimation procedure uses techniques developed for the hidden Markov model (HMM). More specifically, the dynamic programming, or forward, algorithm for HMMs is employed to construct the true likelihood \citep[e.g.][]{Capp:etal:Infe:2005}. As will be seen, the key to this problem is the distribution of the time that the MRH process spends in the moving state. Our methodology differs from the standard approach to the occupation time distribution in a continuous-time Markov chain \citep{Seri:2000}. The method is general so that it remains valid when the holding times are not exponentially distributed, in which case the state process is semi-Markov; see the discussion in Section~\ref{sec:conc}. { An implementation of the methods in this paper is publicly available in the R package \texttt{smam} \citep{Rpkg:smam}. } \section{Formal Description of MRH Process} Let $S(t)$, $t \geq 0$, be a continuous-time Markov Chain with the state space $\{0,1,2\}$ and the transition rate matrix \begin{equation}\label{transition*rate*matrix} {\mathbf{Q}}=\begin{pmatrix} -\lambda_0 & \lambda_0 \, p_1 & \lambda_0 \, p_2 \\ \lambda_1 & -\lambda_1 & 0 \\ \lambda_2 & 0 & -\lambda_2 \\ \end{pmatrix} \end{equation} where $p_1, p_2,\lambda_0,\lambda_1,\lambda_2>0$ and $p_1+p_2=1$. The zero entries in the matrix mean that states~1 and~2 do not transit between themselves; only a transition to state~0 is allowed from either of them. In animal movement modeling, the mean durations in states~0, 1,~and~2 are, respectively, $1/\lambda_0$, $1/\lambda_1$, and $1/\lambda_2$.
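As a quick numerical sanity check, the rate matrix can be assembled directly; the parameter values below are hypothetical. Every row of a valid rate matrix sums to zero, and the mean holding times are the reciprocals of the diagonal rates:

```python
import numpy as np

# Hypothetical parameter values for illustration only.
lam0, lam1, lam2, p1 = 1.0, 2.0, 0.5, 0.3
p2 = 1.0 - p1

Q = np.array([
    [-lam0,  lam0 * p1,  lam0 * p2],
    [ lam1, -lam1,       0.0      ],
    [ lam2,  0.0,       -lam2     ],
])

row_sums = Q.sum(axis=1)            # a rate matrix has zero row sums
mean_durations = -1.0 / np.diag(Q)  # 1/lambda_i for states 0, 1, 2
```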
We assume that the initial distribution $\nu_0$ of $S(0)$ is stationary, that is, \begin{equation}\label{stationary*distribution} \nu_0={\boldsymbol\pi}=(\pi_0,\pi_1,\pi_2)= \frac{1}{1/\lambda_0+p_1/\lambda_1+p_2/\lambda_2} \left(\frac{1}{\lambda_0}, \frac{p_1}{\lambda_1}, \frac{p_2}{\lambda_2}\right). \end{equation} Recall that ${\boldsymbol\pi}$ has to satisfy $0={\boldsymbol\pi}{\mathbf{Q}}$. Let $B(t)$ be the standard BM independent of $S(t)$. Then the MRH process is given by \begin{equation}\label{MRH*process} X(t)=\sigma \int_0^t1_{\{S(s)=0\}}\mathrm{d} B(s), \end{equation} where $\sigma>0$ is an infinitesimal standard deviation. Estimation of the MRH process parameters ${\boldsymbol\theta}=(\lambda_0,\lambda_1,\lambda_2,p_1,\sigma)$ is based on observations at discrete, possibly irregularly spaced time points. The observed data are represented by the vector of observed changes in location \begin{equation*} {\mathbf{X}} = \big(X(t_1)-X(0), X(t_2)-X(t_1), \dots, X(t_n)-X(t_{n-1})\big), \end{equation*} where $0<t_1<\dots<t_n$ are the time points of the observations. As mentioned earlier, the difficulty is that the MRH process itself is not Markov. However, the location-state process $\{X(t), S(t)\}$ is Markov. So, our first objective is to derive formulas for the transition probabilities of the location-state process. The key random variable here is the total time spent in state 0 in the time interval $[0,t]$: \begin{equation}\label{time*spent*in*moving} M(t)=\int_0^t1_{\{S(s)=0\}}\mathrm{d} s. \end{equation} We can also call this random variable the {\it 0-state occupation time} by time~$t$. A continuous-time Markov Chain can be alternatively described by representing the process $S(t)$ as a combination of a discrete time Markov Chain, holding times, and an initial distribution~$\nu$. More specifically, let $p_{ij}$ be the probability of switching to state~$j$ at the next jump given that we are currently in state~$i$.
The matrix \begin{equation*} {\mathbf{P}}=(p_{ij})=\begin{pmatrix} 0 & p_1 & p_2 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \\ \end{pmatrix} \end{equation*} is a stochastic matrix, and it is the transition matrix of the embedded (discrete time) Markov Chain of the process $S(t)$. The time spent in a particular state $i$ between two consecutive jumps is called the holding time. The holding time has an exponential distribution with rate $\lambda_i$. For our task this representation (via an embedded Markov Chain and holding times) is a bit more convenient. Note also that in the case of the standard telegraph process the associated stochastic matrix of the embedded Markov chain is \begin{equation*} \begin{pmatrix} 0 & 1\\ 1 & 0 \\ \end{pmatrix}. \end{equation*} Our technique is different from the general approach to the distribution of occupation times in homogeneous finite-state Markov processes \citep[e.g.,][]{Seri:2000}. To develop a computationally efficient estimation procedure, we exploit the specific structure of our Markov chain. More specifically, a telegraph process can be associated with $S(t)$ if we collapse states~1 and~2 into one state. For this new state the holding time is distributed as a mixture of two exponential distributions. As a consequence, the resulting telegraph process is not Markov. This makes computing the likelihood function for ${\mathbf{X}}$ challenging, because algorithms like the forward algorithm are not applicable. That is, results for telegraph processes cannot be directly employed, because we do need to distinguish states 1 and 2. We use a certain periodicity of the Markov Chain and extend the technique developed in \citet{DiCrescenzo:2001} for telegraph processes to obtain the joint distribution of $M(t)$ and $S(t)$.
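The stationary distribution \eqref{stationary*distribution} can be verified numerically against the balance condition $0={\boldsymbol\pi}{\mathbf{Q}}$; the parameter values below are hypothetical:

```python
import numpy as np

# Hypothetical parameter values for illustration only.
lam0, lam1, lam2, p1 = 1.0, 2.0, 0.5, 0.3
p2 = 1.0 - p1

Q = np.array([
    [-lam0,  lam0 * p1,  lam0 * p2],
    [ lam1, -lam1,       0.0      ],
    [ lam2,  0.0,       -lam2     ],
])

# Stationary distribution pi from the closed-form expression:
# proportional to (1/lam0, p1/lam1, p2/lam2), normalised to sum to one.
norm = 1.0 / lam0 + p1 / lam1 + p2 / lam2
pi = np.array([1.0 / lam0, p1 / lam1, p2 / lam2]) / norm

residual = pi @ Q   # should be the zero vector: 0 = pi Q
```

The residual vanishes identically: for instance, the first component is $-\lambda_0/\lambda_0 + \lambda_1 p_1/\lambda_1 + \lambda_2 p_2/\lambda_2 = -1 + p_1 + p_2 = 0$.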
{ An alternative approach can be developed by extending the method presented in \citet{Zacks:2012}.} \section{Distribution of Occupation Time $M(t)$ Given $S(0)=0$}\label{M(t)*when*S(0)=0*section} To simulate the process $S(t)$ starting with $S(0)=0$, we need the following independent sequences of random variables: \begin{enumerate} \item $\{M_k\}_{k\geq 1}$ are independent identically distributed (iid) random variables with the ${\rm Exp}(\lambda_0)$ distribution, \item $\{R_k\}_{k\geq 1}$ are iid random variables with ${\rm Exp}(\lambda_1)$, \item $\{H_k\}_{k\geq 1}$ are iid random variables with ${\rm Exp}(\lambda_2)$, \item $\{\xi_k\}_{k\geq 1}$ are iid random variables with $P(\xi_k=1)=p_1$ and $P(\xi_k=0)=p_2$. \end{enumerate} Having defined these sequences, we can proceed as follows. To generate a particular realization of $S(t)$, first generate $M_1$, the time duration the process spends in state~0. Then generate $\xi_1$ to decide whether the process jumps to state~1 or~2. Depending on $\xi_1$, generate the duration $R_1$ or $H_1$. After that, switch back to state~0, and so on. Let us introduce some auxiliary random variables. Let $U_k=\xi_kR_k+(1-\xi_k)H_k$, $C_k=M_k+U_k$, and \begin{equation*} N(t)=\sup\{n\geq 0:\sum_{k=1}^nC_k\leq t\}. \end{equation*} Here and everywhere in the text, by convention, a summation over an empty set is 0; for instance, $\sum_{k=1}^0C_k=0$. The random variable $N(t)$ is the number of full cycles $C_k$ by time $t$. First, we consider the distribution of the occupation time $M(t)$ when $S(t)=0$. Denote $P_i(\cdot)=P(\cdot|S(0)=i)$, where $i=0,1,2$. With probability~1 the random variable $M(t)\in [0,t]$, and it has an atom at $t$ in the following sense: \begin{equation*} P_0(M(t)=t,S(t)=0)=P_0(M(t)=t)=P(M_1>t)=e^{-\lambda_0t}. \end{equation*} Now, fix $0<s<t$.
Then we have \begin{align*} P_0(M(t)\in \mathrm{d} s, S(t)=0)&=\sum_{n=0}^\infty P_0(M(t)\in \mathrm{d} s,S(t)=0,N(t)=n)\\ &=\sum_{n=1}^\infty P_0(M(t)\in \mathrm{d} s,S(t)=0,N(t)=n), \end{align*} because $S(0)=0$, $S(t)=0$ and $N(t)=0$ implies $M(t)=t$. \begin{figure}[tbp] \begin{center} \begin{picture}(420, 100) \put(10,50){\vector(1,0){50}} \put(10,50){\circle*{3}} \put(10,30){$0$} \put(60,50){\vector(-1,0){50}} \put(30,60){$M_1$} \put(60,50){\vector(1,0){50}} \put(110,50){\vector(-1,0){50}} \put(80,60){$U_1$} \put(60,50){\oval(100,20)[b]} \put(60,25){$C_1$} \put(110,50){\vector(1,0){50}} \put(160,50){\vector(-1,0){50}} \put(130,60){$M_2$} \put(160,50){\vector(1,0){50}} \put(210,50){\vector(-1,0){50}} \put(180,60){$U_2$} \put(160,50){\oval(100,20)[b]} \put(160,25){$C_2$} \multiput(210,50)(5,0){20} {\line(1,0){3}} \put(260,50){\vector(1,0){50}} \put(310,50){\vector(-1,0){50}} \put(280,60){$M_n$} \put(310,50){\vector(1,0){50}} \put(360,50){\vector(-1,0){50}} \put(330,60){$U_n$} \put(310,50){\oval(100,20)[b]} \put(310,25){$C_n$} \put(390,50){\circle*{3}} \put(390,30){$t$} \put(410,50){\vector(-1,0){50}} \put(380,60){$M_{n+1}$} \end{picture} \end{center} \caption{$S(t)=0$ and $N(t)=n$ given that $S(0)=0$.} \label{fig:pic1} \end{figure} Next, for $n\geq 1$ we get, from Figure~\ref{fig:pic1}, that \begin{align*} &\quad P_0(M(t)\in \mathrm{d} s,S(t)=0,N(t)=n)\\ &=P\left(\sum_{k=1}^nM_k+\sum_{k=1}^nU_k\leq t, \sum_{k=1}^{n+1}M_k+\sum_{k=1}^nU_k> t,t-\sum_{k=1}^nU_k\in \mathrm{d} s\right)\\ &=P\left(\sum_{k=1}^nM_k\leq s, \sum_{k=1}^{n+1}M_k> s,\sum_{k=1}^nU_k\in t- \mathrm{d} s\right)\\ &=P\left(\sum_{k=1}^nM_k\leq s, \sum_{k=1}^{n+1}M_k> s\right)P\left(\sum_{k=1}^nU_k\in t- \mathrm{d} s\right)\\ &=\left[P\left(\sum_{k=1}^nM_k\leq s\right)- P\left(\sum_{k=1}^{n+1}M_k\leq s\right)\right]P\left(\sum_{k=1}^nU_k\in t- \mathrm{d} s\right). \end{align*} Here we use independence of $\{M_k\}_{k\geq 1}$ and $\{U_k\}_{k\geq 1}$. 
The sums $\sum_{k=1}^nM_k$ and $\sum_{k=1}^{n+1}M_k$ have gamma distributions, ${\rm Gamma}(n,\lambda_0)$ and ${\rm Gamma}(n+1,\lambda_0)$, respectively. The distribution of $\sum_{k=1}^nU_k$ can be expressed in terms of the convolution of gamma distributions. More specifically, by conditioning on $\{\xi_k\}_{1\leq k\leq n}$ one can show that \begin{equation*} P\left(\sum_{k=1}^nU_k\leq s\right)=\sum_{k=0}^nP\left(\sum_{j=1}^kR_j+\sum_{j=1}^{n-k}H_j\leq s\right) {n \choose k} p_1^kp_2^{n-k}. \end{equation*} Random variables $\sum_{j=1}^kR_j$ and $\sum_{j=1}^{n-k}H_j$ are independent, and they have ${\rm Gamma}(k,\lambda_1)$ and ${\rm Gamma}(n-k,\lambda_2)$ distributions, respectively. For the convolution of gamma distributions, we refer the reader to \citet{Math:1982} and \citet{Mosc:1985}. Next, let us work out the case when $S(t)=1$. Again, the random variable $M(t)\in [0,t]$, but now it has no atoms. For any $0<s<t$, we have \begin{align*} P_0(M(t)\in \mathrm{d} s, S(t)=1)&=\sum_{n=0}^\infty P_0(M(t)\in \mathrm{d} s,S(t)=1,N(t)=n) \end{align*} \begin{figure}[tbp] \begin{center} \begin{picture}(370, 100) \put(10,50){\vector(1,0){50}} \put(10,50){\circle*{3}} \put(10,30){$0$} \put(60,50){\vector(-1,0){50}} \put(30,60){$M_1$} \put(60,50){\vector(1,0){50}} \put(110,50){\vector(-1,0){50}} \put(80,60){$U_1$} \put(60,50){\oval(100,20)[b]} \put(60,25){$C_1$} \multiput(110,50)(5,0){20} {\line(1,0){3}} \put(160,50){\vector(1,0){50}} \put(210,50){\vector(-1,0){50}} \put(180,60){$M_n$} \put(210,50){\vector(1,0){50}} \put(260,50){\vector(-1,0){50}} \put(230,60){$U_n$} \put(210,50){\oval(100,20)[b]} \put(210,25){$C_n$} \put(260,50){\vector(1,0){50}} \put(310,50){\vector(-1,0){50}} \put(280,60){$M_{n+1}$} \put(340,50){\circle*{3}} \put(340,30){$t$} \put(360,50){\vector(-1,0){50}} \put(330,60){$R_{n+1}$} \end{picture} \end{center} \caption{$S(t)=1$ and $N(t)=n$ given that $S(0)=0$.} \label{fig:pic2} \end{figure} Then for $n\geq 0$ we get, from Figure~\ref{fig:pic2}, that 
\begin{align*} &\quad P_0(M(t)\in \mathrm{d} s, S(t)=1,N(t)=n)\\ &=P\left(\sum_{k=1}^{n+1}M_k+\sum_{k=1}^nU_k\leq t, \sum_{k=1}^{n+1}M_k+\sum_{k=1}^nU_k+R_{n+1}> t,\sum_{k=1}^{n+1}M_k\in \mathrm{d} s, \xi_{n+1}=1\right)\\ &=P\left(\sum_{k=1}^nU_k\leq t-s, \sum_{k=1}^{n}U_k+R_{n+1}> t-s,\sum_{k=1}^{n+1}M_k\in \mathrm{d} s,\xi_{n+1}=1\right)\\ &=p_1P\left(\sum_{k=1}^nU_k\leq t-s, \sum_{k=1}^{n}U_k+R_{n+1}> t-s\right)P\left(\sum_{k=1}^{n+1}M_k\in \mathrm{d} s\right)\\ &=p_1\left[P\left(\sum_{k=1}^nU_k\leq t-s\right)- P\left(\sum_{k=1}^{n}U_k+R_{n+1}\leq t-s\right)\right]P\left(\sum_{k=1}^{n+1}M_k\in \mathrm{d} s\right). \end{align*} Random variable $\sum_{k=1}^{n+1}M_k$ has ${\rm Gamma}(n+1,\lambda_0)$ distribution. As before, \begin{equation*} P\left(\sum_{k=1}^nU_k\leq s\right)=\sum_{k=0}^nP\left(\sum_{j=1}^kR_j+\sum_{j=1}^{n-k}H_j\leq s\right) {n \choose k} p_1^kp_2^{n-k}, \end{equation*} and \begin{equation*} P\left(\sum_{k=1}^nU_k+R_{n+1}\leq s\right)=\sum_{k=0}^nP\left(\sum_{j=1}^{k+1}R_j+\sum_{j=1}^{n-k}H_j\leq s\right) {n \choose k} p_1^kp_2^{n-k}. 
\end{equation*} To summarize our findings let us first introduce the following notation: \begin{enumerate} \item $G(x,\alpha,\beta)$, where $\alpha\geq 0,\beta>0$, is the cdf of ${\rm Gamma}(\alpha,\beta)$ distribution; by convention, ${\rm Gamma}(0,\beta)$ distribution is the degenerate distribution with atom 1 at 0; \item $g(x,\alpha,\beta)$, where $\alpha,\beta>0$, is the pdf of ${\rm Gamma}(\alpha,\beta)$ distribution; \item $F(x,\alpha_1,\beta_1,\alpha_2,\beta_2)$, where $\alpha_1,\alpha_2\geq 0,\beta_1,\beta_2>0$, is the cdf of the convolution of ${\rm Gamma}(\alpha_1,\beta_1)$ and ${\rm Gamma}(\alpha_2,\beta_2)$; note that, for example, $F(x,0,\beta_1,\alpha_2,\beta_2)\equiv G(x,\alpha_2,\beta_2)$; \item $f(x,\alpha_1,\beta_1,\alpha_2,\beta_2)$, where $\beta_1,\beta_2>0$, $\alpha_1,\alpha_2\geq 0$, and $\alpha_1+\alpha_2>0$, is the pdf of $F(x,\alpha_1,\beta_1,\alpha_2,\beta_2)$; \item $H(x, \alpha_1, \beta_1, \alpha_2, \beta_2) = F(x, \alpha_1, \beta_1, \alpha_2, \beta_2) - F(x, \alpha_1 + 1, \beta_1, \alpha_2, \beta_2)$, where $\beta_1,\beta_2>0$, $\alpha_1,\alpha_2\geq 0$, and $\alpha_1+\alpha_2>0$, is the difference in cdf with parameters only differing by $\alpha_1$ versus $\alpha_1 + 1$. \end{enumerate} Finally, let us denote the (defective) densities of $M(t)$ as \begin{equation}\label{M(t)*density} p_{ij}(s,t)=P_i(M(t)\in \mathrm{d} s,S(t)=j)/\mathrm{d} s, \end{equation} where $t\geq 0$, $0<s<t$, $i,j=0,1,2$. Here is the main result of the section. \begin{Theorem}\label{thm:S0=0} Let $t\geq 0$ and $0<s<t$. 
Then \begin{equation}\label{P0} P_0(M(t)=t,S(t)=0)=e^{-\lambda_0t}, \end{equation} and the densities are given by \begin{equation}\label{p00} p_{00}(s,t)=\sum_{n=1}^\infty\left[ G(s,n,\lambda_0)-G(s,n+1,\lambda_0)\right]\sum_{k=0}^nf(t-s,k,\lambda_1,n-k,\lambda_2) {n\choose k}p_1^kp_2^{n-k}, \end{equation} \begin{equation}\label{p01} p_{01}(s,t)=\sum_{n=0}^\infty p_1g(s,n+1,\lambda_0)\sum_{k=0}^n H(t-s,k,\lambda_1,n-k,\lambda_2) {n\choose k}p_1^kp_2^{n-k}, \end{equation} and \begin{equation}\label{p02} p_{02}(s,t)=\sum_{n=0}^\infty p_2g(s,n+1,\lambda_0)\sum_{k=0}^n H(t-s,k,\lambda_2,n-k,\lambda_1) {n\choose k}p_2^kp_1^{n-k}. \end{equation} \end{Theorem} Note that the last formula of Theorem~\ref{thm:S0=0} can be obtained from the previous one by interchanging states~1 and~2. \section{Distribution of Occupation Time $M(t)$ Given $S(0)=1$}\label{M(t)*when*S(0)=1*section} Let $\{M_k\}_{k\geq 1}$, $\{R_k\}_{k\geq 1}$, $\{H_k\}_{k\geq 1}$, $\{\xi_k\}_{k\geq 1}$, and $\{U_k\}_{k\geq 1}$ be the same sequences of random variables as in Section~\ref{M(t)*when*S(0)=0*section}. Let $R_0$ be an independent-of-everything random variable with the ${\rm Exp}(\lambda_1)$ distribution. When $S(0)=1$, the sequence of holding times starts from $R_0$; that is, we have: $R_0,M_1,U_1,M_2,U_2,\dots$. This requires us to modify the definition of cycles. Now $C_1=R_0+M_1$, and $C_k=U_{k-1}+M_k$ for $k\geq 2$. As before, the random variable $N(t)$ is the number of cycles in the time interval $[0,t]$: \begin{equation*} N(t)=\sup\{n\geq 0:\sum_{k=1}^nC_k\leq t\}. \end{equation*} Let us first consider the distribution of $M(t)$ when $S(t)=1$. Again, in this case there is an atom, but now the atom is at $s=0$: \begin{equation*} P_1(M(t)=0,S(t)=1)=P_1(M(t)=0)=P(R_0>t)=e^{-\lambda_1t}. \end{equation*} Fix $0<s<t$.
First note that $S(0)=1$, $S(t)=1$, and $N(t)=0$ imply that $M(t)=0$; therefore, \begin{align*} P_1(M(t)\in \mathrm{d} s, S(t)=1)&=\sum_{n=0}^\infty P_1(M(t)\in \mathrm{d} s,S(t)=1,N(t)=n)\\ &=\sum_{n=1}^\infty P_1(M(t)\in \mathrm{d} s,S(t)=1,N(t)=n). \end{align*} \begin{figure}[tbp] \centering \begin{picture}(420, 100) \put(10,50){\vector(1,0){50}} \put(10,50){\circle*{3}} \put(10,30){$0$} \put(60,50){\vector(-1,0){50}} \put(30,60){$R_0$} \put(60,50){\vector(1,0){50}} \put(110,50){\vector(-1,0){50}} \put(80,60){$M_1$} \put(60,50){\oval(100,20)[b]} \put(60,25){$C_1$} \put(110,50){\vector(1,0){50}} \put(160,50){\vector(-1,0){50}} \put(130,60){$U_1$} \put(160,50){\vector(1,0){50}} \put(210,50){\vector(-1,0){50}} \put(180,60){$M_2$} \put(160,50){\oval(100,20)[b]} \put(160,25){$C_2$} \multiput(210,50)(5,0){20} {\line(1,0){3}} \put(260,50){\vector(1,0){50}} \put(310,50){\vector(-1,0){50}} \put(280,60){$U_{n-1}$} \put(310,50){\vector(1,0){50}} \put(360,50){\vector(-1,0){50}} \put(330,60){$M_n$} \put(310,50){\oval(100,20)[b]} \put(310,25){$C_n$} \put(390,50){\circle*{3}} \put(390,30){$t$} \put(410,50){\vector(-1,0){50}} \put(380,60){$R_{n}$} \end{picture} \caption{$S(t)=1$ and $N(t)=n$ given that $S(0)=1$.} \label{fig:pic3} \end{figure} Finally, for $n\geq 1$ one can show (see Figure~\ref{fig:pic3}) that \begin{align*} &\quad P_1(M(t)\in \mathrm{d} s,S(t)=1,N(t)=n)\\ &=P\left(R_0+\sum_{k=1}^{n}M_k+\sum_{k=1}^{n-1}U_k\leq t, R_0+\sum_{k=1}^{n}M_k+\sum_{k=1}^{n-1}U_k+R_n> t,\sum_{k=1}^{n}M_k\in \mathrm{d} s, \xi_{n}=1\right)\\ &=P\left(R_0+\sum_{k=1}^{n-1}U_k\leq t-s, R_0+\sum_{k=1}^{n-1}U_k+R_n> t-s,\sum_{k=1}^{n}M_k\in \mathrm{d} s, \xi_{n}=1\right)\\ &=p_1P\left(R_0+\sum_{k=1}^{n-1}U_k\leq t-s, R_0+\sum_{k=1}^{n-1}U_k+R_n> t-s\right)P\left(\sum_{k=1}^{n}M_k\in \mathrm{d} s\right)\\ &=p_1\left[P\left(R_0+\sum_{k=1}^{n-1}U_k\leq t-s\right)-P\left(R_0+\sum_{k=1}^{n-1}U_k+R_n\leq t-s\right)\right]P\left(\sum_{k=1}^{n}M_k\in \mathrm{d} s\right)\\ &=p_1g(s,n,\lambda_0)\\
&\times\sum_{k=0}^{n-1}\left[F(t-s,k+1,\lambda_1,n-1-k,\lambda_2)-F(t-s,k+2,\lambda_1,n-1-k,\lambda_2)\right] {n-1\choose k}p_1^kp_2^{n-1-k}ds\\ &=p_1g(s,n,\lambda_0) \times\sum_{k=0}^{n-1} H(t-s,k+1,\lambda_1,n-1-k,\lambda_2) {n-1\choose k}p_1^kp_2^{n-1-k}ds. \end{align*} The next case is when $S(t)=2$. In this situation $M(t)$ does not have atoms, because we cannot switch from state~1 to state~2 without visiting state~0. Since the event $\{S(0)=1$, $S(t)=2$, and $N(t)=0\}$ is impossible, for $0<s<t$ we have \begin{align*} P_1(M(t)\in \mathrm{d} s, S(t)=2)&=\sum_{n=0}^\infty P_1(M(t)\in \mathrm{d} s,S(t)=2,N(t)=n)\\ &=\sum_{n=1}^\infty P_1(M(t)\in \mathrm{d} s,S(t)=2,N(t)=n), \end{align*} and for $n\geq 1$ \begin{align*} &\ P_1(M(t)\in \mathrm{d} s,S(t)=2,N(t)=n)\\ =&\ P\left(R_0+\sum_{k=1}^{n}M_k+\sum_{k=1}^{n-1}U_k\leq t, R_0+\sum_{k=1}^{n}M_k+\sum_{k=1}^{n-1}U_k+H_n> t,\sum_{k=1}^{n}M_k\in \mathrm{d} s, \xi_{n}=0\right)\\ =&\ P\left(R_0+\sum_{k=1}^{n-1}U_k\leq t-s, R_0+\sum_{k=1}^{n-1}U_k+H_n> t-s,\sum_{k=1}^{n}M_k\in \mathrm{d} s, \xi_{n}=0\right)\\ =&\ p_2P\left(R_0+\sum_{k=1}^{n-1}U_k\leq t-s, R_0+\sum_{k=1}^{n-1}U_k+H_n> t-s\right)P\left(\sum_{k=1}^{n}M_k\in \mathrm{d} s\right)\\ =&\ p_2\left[P\left(R_0+\sum_{k=1}^{n-1}U_k\leq t-s\right)-P\left(R_0+\sum_{k=1}^{n-1}U_k+H_n\leq t-s\right)\right]P\left(\sum_{k=1}^{n}M_k\in \mathrm{d} s\right)\\ =&\ p_2g(s,n,\lambda_0)\\ &\times\sum_{k=0}^{n-1}\left[F(t-s,k+1,\lambda_1,n-1-k,\lambda_2)-F(t-s,k+1,\lambda_1,n-k,\lambda_2)\right] {n-1\choose k}p_1^kp_2^{n-1-k}ds\\ =&\ p_2g(s,n,\lambda_0) \times\sum_{k=0}^{n-1} H(t-s, n-1-k,\lambda_2, k+1,\lambda_1) {n-1\choose k}p_1^kp_2^{n-1-k}ds. \end{align*} Finally, let us consider the case $S(t)=0$. Again, there are no atoms.
For $0<s<t$ \begin{align*} P_1(M(t)\in \mathrm{d} s, S(t)=0)&=\sum_{n=0}^\infty P_1(M(t)\in \mathrm{d} s,S(t)=0,N(t)=n), \end{align*} and for $n\geq 0$ \begin{align*} P_1(M(t)&\in \mathrm{d} s,S(t)=0,N(t)=n)\\ &=P\left(R_0+\sum_{k=1}^{n}M_k+\sum_{k=1}^{n}U_k\leq t, R_0+\sum_{k=1}^{n+1}M_k+\sum_{k=1}^{n}U_k> t,t-\sum_{k=1}^{n}U_k-R_0\in \mathrm{d} s\right)\\ &=P\left(\sum_{k=1}^{n}M_k\leq s, \sum_{k=1}^{n+1}M_k>s,\sum_{k=1}^{n}U_k+R_0\in t- \mathrm{d} s\right)\\ &=P\left(\sum_{k=1}^{n}M_k\leq s, \sum_{k=1}^{n+1}M_k>s\right)P\left(\sum_{k=1}^{n}U_k+R_0\in t- \mathrm{d} s\right)\\ &=\left[P\left(\sum_{k=1}^{n}M_k\leq s\right)-P\left(\sum_{k=1}^{n+1}M_k\leq s\right)\right]P\left(\sum_{k=1}^{n}U_k+R_0\in t- \mathrm{d} s\right)\\ &=\left[G(s,n,\lambda_0)-G(s,n+1,\lambda_0)\right]\sum_{k=0}^{n}\left[f(t-s,k+1,\lambda_1,n-k,\lambda_2)\right] {n\choose k}p_1^kp_2^{n-k}ds. \end{align*} Thus, we have the following result. \begin{Theorem}\label{thm:S0=1} Let $t\geq 0$ and $0<s<t$. Then \begin{equation}\label{P1} P_1(M(t)=0,S(t)=1)=e^{-\lambda_1t}, \end{equation} and the densities are given by \begin{equation}\label{p10} p_{10}(s,t)=\sum_{n=0}^\infty\left[G(s,n,\lambda_0)-G(s,n+1,\lambda_0)\right]\sum_{k=0}^{n}\left[f(t-s,k+1,\lambda_1,n-k,\lambda_2)\right] {n\choose k}p_1^kp_2^{n-k}, \end{equation} \begin{equation}\label{p11} p_{11}(s,t)=\sum_{n=1}^\infty p_1g(s,n,\lambda_0) \times\sum_{k=0}^{n-1} H(t-s,k+1,\lambda_1,n-1-k,\lambda_2) {n-1\choose k}p_1^kp_2^{n-1-k}, \end{equation} and \begin{equation}\label{p12} p_{12}(s,t)=\sum_{n=1}^\infty p_2g(s,n,\lambda_0) \times\sum_{k=0}^{n-1} H(t-s,n-1-k,\lambda_2,k+1,\lambda_1) {n-1\choose k}p_1^kp_2^{n-1-k}. \end{equation} \end{Theorem} In order to get densities $p_{2j}(s,t)$, $j=0,1,2$ we simply need to interchange state~1 and state~2 in all the formulas of Theorem~\ref{thm:S0=1}. Also let us note that Theorems~\ref{thm:S0=0} and~\ref{thm:S0=1} can be easily extended to the case when there are more than two motionless states. 
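The series in Theorems~\ref{thm:S0=0} and~\ref{thm:S0=1} are straightforward to evaluate numerically once the gamma convolutions are available. The following sketch (our own illustration, not the implementation used for the paper; the parameter values and the truncation level \texttt{NMAX} are arbitrary choices) truncates the sums for the case $S(0)=0$ and checks that the atom $e^{-\lambda_0 t}$ together with the integrated defective densities accounts for total probability one:

```python
import numpy as np
from math import comb
from scipy.special import gammainc, gammaln

# Illustrative parameters and truncation level (not the values analyzed in the paper)
lam0, lam1, lam2, p1, t = 1.0, 1.0, 0.5, 0.6, 2.0
p2, NMAX = 1.0 - p1, 25

# Gauss-Legendre nodes/weights rescaled to (0, 1), used for all integrals
y, w = np.polynomial.legendre.leggauss(60)
y, w = 0.5 * (y + 1.0), 0.5 * w

def G(x, a, b):
    """cdf of Gamma(a, b) (rate b); Gamma(0, b) is a point mass at 0."""
    return np.where(np.asarray(x) >= 0, 1.0, 0.0) if a == 0 else gammainc(a, b * np.asarray(x))

def g(x, a, b):
    """pdf of Gamma(a, b) for a > 0, x > 0."""
    x = np.asarray(x, dtype=float)
    return np.exp(a * np.log(b) + (a - 1) * np.log(x) - b * x - gammaln(a))

def F(x, a1, b1, a2, b2):
    """cdf of the convolution Gamma(a1, b1) + Gamma(a2, b2)."""
    if a1 == 0:
        return G(x, a2, b2)
    if a2 == 0:
        return G(x, a1, b1)
    u = x * y
    return x * np.sum(w * g(u, a1, b1) * G(x - u, a2, b2))

def fconv(x, a1, b1, a2, b2):
    """pdf of the convolution Gamma(a1, b1) + Gamma(a2, b2), a1 + a2 > 0."""
    if a1 == 0:
        return g(x, a2, b2)
    if a2 == 0:
        return g(x, a1, b1)
    u = x * y
    return x * np.sum(w * g(u, a1, b1) * g(x - u, a2, b2))

def H(x, a1, b1, a2, b2):
    return F(x, a1, b1, a2, b2) - F(x, a1 + 1, b1, a2, b2)

def p00(s):
    return sum((G(s, n, lam0) - G(s, n + 1, lam0))
               * sum(fconv(t - s, k, lam1, n - k, lam2) * comb(n, k) * p1**k * p2**(n - k)
                     for k in range(n + 1))
               for n in range(1, NMAX + 1))

def p01(s):
    return sum(p1 * g(s, n + 1, lam0)
               * sum(H(t - s, k, lam1, n - k, lam2) * comb(n, k) * p1**k * p2**(n - k)
                     for k in range(n + 1))
               for n in range(NMAX + 1))

def p02(s):
    return sum(p2 * g(s, n + 1, lam0)
               * sum(H(t - s, k, lam2, n - k, lam1) * comb(n, k) * p2**k * p1**(n - k)
                     for k in range(n + 1))
               for n in range(NMAX + 1))

# atom at s = t plus the integrated defective densities should sum to one
s_nodes = t * y
mass = np.exp(-lam0 * t) + t * np.sum(w * np.array([p00(s) + p01(s) + p02(s) for s in s_nodes]))
print(mass)  # numerically close to 1
```

The same pattern applies to the densities $p_{1j}$ and $p_{2j}$; only the leading factors and the arguments of $H$ change.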
\section{Numerical Verification} { \begin{figure}[tbp] \centering \includegraphics[angle=0]{mrhplot.pdf} \caption{Defective densities $p_{ij}(s,t)$: Theorem~\ref{thm:S0=0} and Theorem~\ref{thm:S0=1}} \label{fig:mrh_density} \end{figure} Figure~\ref{fig:mrh_density} presents the defective densities $p_{ij}(s,t)$ for the two cases when the Markov Chain starts in state~0 and in state~1. The following model parameters are used: $\lambda_0=4$, $\lambda_1=.5$, $\lambda_2=.1$, $p_1=.8$, and $t=10$. Note that the total probability in both cases is slightly less than 1. When the Markov Chain starts in state~0, the total probability adds up to $1-e^{-40}$, because $M(10)$ has an atom at $s=10$. When the Markov Chain starts in state~1, the probability of the atom at $s=0$ is relatively large: $e^{-5}$. Because the densities $p_{0j}(s,t)$ correspond to the case when at time~0 the state process is in state~0, these occupation times are longer on average than those of $p_{1j}(s,t)$. Application of the formulas in practice depends on how accurately the infinite sums can be implemented. To check the accuracy of the implementation and to verify that our formulas in Theorem~\ref{thm:S0=0} and Theorem~\ref{thm:S0=1} are free of errors or typos, we simulated 1,000,000 realizations of the Markov chain $S(\cdot)$ for each theorem. The empirical densities follow the theoretical ones extremely closely (not shown). We also performed another check. There are two cases when the MRH process collapses to the moving-resting process investigated in \citet{Yan:etal:2014}. If $p_1$ is equal to 0 or 1, then the MRH process (after the first visit to the moving state) will alternate only between two states. The other case is when $\lambda_1=\lambda_2$. The MRH process will hit all three states, but state 1 and state 2 are indistinguishable. This can be verified analytically. For example, one can show that our formula~\eqref{p00} will simplify to the first term of (2.3) in \citet{Zack:gene:2004}.
For different sets of parameters, we checked numerically that in these two cases our formulas are consistent with the formulas based on modified Bessel functions derived in \citet{Zack:gene:2004}. } \section{Joint Distribution of $X(t)$ and $S(t)$} \label{sec:jointXS} Let us first work out the details of the formula for $P_0(X(t)\in dx, S(t)=0)$. Fix $0<s<t$. Given $M(t)=s$, the random variable $X(t)$ has a normal distribution with mean 0 and variance $\sigma^2s$, because the Markov Chain $S(\cdot)$ and the Brownian Motion $B(\cdot)$ are independent processes. Let $\phi(\cdot,\sigma^2)$ denote the pdf of a normal random variable with mean zero and variance $\sigma^2$. Then we get that \begin{equation*} P_0(X(t)\in dx,S(t)=0,M(t)\in \mathrm{d} s)=\phi(x,\sigma^2s)p_{00}(s,t)dxds. \end{equation*} Now, recall also that given $S(0)=0$, the random variable $M(t)$ has an atom (with weight $e^{-\lambda_0t}$) at $s=t$. Therefore, when we integrate $s$ out of the joint distribution of $X(t)$, $S(t)$ and $M(t)$, we get that \begin{equation}\label{h00} h_{00}(x,t)=P_0(X(t)\in dx,S(t)=0)/dx=e^{-\lambda_0t}\phi(x,\sigma^2t)+\int_0^t \phi(x,\sigma^2s)p_{00}(s,t)ds. \end{equation} In a similar fashion, one can show that for $i=1,2$ \begin{equation}\label{h0i} h_{0i}(x,t)=P_0(X(t)\in dx,S(t)=i)/dx=\int_0^t \phi(x,\sigma^2s)p_{0i}(s,t)ds. \end{equation} When $S(0)=1$, the distribution of the random variable $X(t)$ has an atom at $x=0$ (if $R_0>t$, that is, the Markov chain stays in state~1 till time $t$). Taking this into account, we have the following formulas: \begin{equation}\label{h1i} h_{1i}(x,t)=P_1(X(t)\in dx,S(t)=i)/dx=\int_0^t \phi(x,\sigma^2s)p_{1i}(s,t)ds,\quad \mbox{ if } x\neq 0 \mbox{ and } i=0,1,2, \end{equation} and \begin{equation*} P_1(X(t)=0,S(t)=1)=e^{-\lambda_1t}.
\end{equation*} Similarly, \begin{equation}\label{h2i} h_{2i}(x,t)=P_2(X(t)\in dx,S(t)=i)/dx=\int_0^t \phi(x,\sigma^2s)p_{2i}(s,t)ds,\quad \mbox{ if } x\neq 0 \mbox{ and } i=0,1,2, \end{equation} and \begin{equation*} P_2(X(t)=0,S(t)=2)=e^{-\lambda_2t}. \end{equation*} { It is not essential to use one-dimensional Brownian Motion for these derivations, but it simplifies the notation of our presentation. If one does want to consider a $d$-dimensional Brownian Motion, then all we need to do is to substitute the one-dimensional normal pdf in formulas (\ref{h00})--(\ref{h2i}) by the $d$-dimensional normal density with mean zero and covariance matrix $\sigma^2 I_d$, where $I_d$ is the $d$-dimensional identity matrix. Of course, in this case $x$ is a vector in the $d$-dimensional space, not a scalar. In fact, later when we run simulations and analyze real-world data we will use the two-dimensional setup. } \section{Likelihood Estimation with Forward Algorithm} \label{sec:like} Assume that we observe the MRH process $X(t)$ at times $0=t_0<t_1<\dots<t_n$. Let ${\mathbf{X}} = \big(X_1, X_2, \dots, X_n\big)$, where $X_i=X(t_i)-X(t_{i-1})$, $i=1,\dots,n$, are the observed increments of the MRH process. Let ${\mathbf{S}}=\big(S(0),S(t_1),\dots,S(t_n)\big)$ be the corresponding states of the continuous-time Markov Chain, and $\Delta_i=t_i-t_{i-1}$, $i=1,\dots,n$. The location-state process $\{X(t), S(t)\}$ is Markov, so the likelihood function of $({\mathbf{X}},{\mathbf{S}})$ is available in closed form.
More specifically, it is given by \begin{equation}\label{Full*L} L({\mathbf{X}},{\mathbf{S}},{\boldsymbol\theta}) = \nu(S(0))\prod_{i=1}^n f\big( X_i, S(t_i)| S(t_{i-1}), \Delta_i, {\boldsymbol\theta}\big), \end{equation} where \begin{equation}\label{eq:f} f\big( x, u | v, t, {\boldsymbol\theta}\big) = \begin{cases} 0 & v\neq u,\ x=0,\\ 0 & v = u = 0,\ x=0,\\ e^{-\lambda_1 t} & v = u = 1,\ x=0,\\ e^{-\lambda_2 t} & v = u = 2,\ x=0,\\ h_{ij}\big( x, t\big) & v = i,\ u = j,\ x\neq 0, \end{cases} \end{equation} $x\in\mathbf{R}$, $u,v=0,1,2$, $t>0$, and ${\boldsymbol\theta}=(\lambda_0,\lambda_1,\lambda_2,p_1,\sigma)$. The distribution of the increments of the MRH process is a mixture of absolutely continuous and discrete distributions. Therefore, in order to construct the likelihood function we have to use the Radon--Nikodym derivative of the probability distribution relative to a dominating measure that includes an atom at $x=0$. That explains the special set of formulas for the case $x=0$. Now, if the state vector ${\mathbf{S}}$ is not observed, then obviously the likelihood of the increment vector ${\mathbf{X}}$ can be computed using \begin{equation*} L({\mathbf{X}},{\boldsymbol\theta}) = \sum_{s_0,\dots, s_n}L({\mathbf{X}},(s_0,\dots,s_n),{\boldsymbol\theta}), \end{equation*} where the summation is taken over all possible trajectories of ${\mathbf{S}}$. However, this formula is not practical since the number of trajectories grows exponentially with the sample size $n$. This difficulty is addressed with the help of the forward algorithm. First, we need to introduce forward variables: \begin{equation}\label{alpha*k} \alpha({\mathbf{X}}_k,s_k,{\boldsymbol\theta}) = \sum_{s_0,\dots, s_{k-1}}\nu(s_0)\prod_{i=1}^k f\big( X_i, s_i | s_{i-1}, \Delta_i, {\boldsymbol\theta}\big), \end{equation} where ${\mathbf{X}}_k =\big(X_1,X_2,\dots,X_k\big)$, and $1\leq k\leq n$.
Then one can show that \begin{equation}\label{alpha*k*formula} \alpha({\mathbf{X}}_{k+1},s_{k+1},{\boldsymbol\theta})= \sum_{s_{k}}f\big( X_{k+1}, s_{k+1} | s_{k}, \Delta_{k+1}, {\boldsymbol\theta}\big)\alpha({\mathbf{X}}_k,s_k,{\boldsymbol\theta}). \end{equation} { That is, for every $k$ we have three forward variables. To get one $(k+1)$st forward variable we need to calculate three transitional values in~\eqref{eq:f}, multiply each $k$th forward variable by an appropriate transitional value, and finally sum up these three quantities. The bottom line is that the transition from $\alpha({\mathbf{X}}_{k},s_{k},{\boldsymbol\theta})$ to $\alpha({\mathbf{X}}_{k+1},s_{k+1},{\boldsymbol\theta})$ for each $k$ requires a constant (independent of $k$) number of operations. Since \begin{equation*} L({\mathbf{X}},{\boldsymbol\theta}) = \sum_{s_n}\alpha({\mathbf{X}}_n,s_n,{\boldsymbol\theta}), \end{equation*} we get an algorithm that finds $L({\mathbf{X}},{\boldsymbol\theta})$ with computational complexity that is linear with respect to the sample size $n$. } The next step is to modify the forward variables to address the underflow problem. The problem is that for large $k$ the forward variables $\alpha({\mathbf{X}}_{k},s_{k},{\boldsymbol\theta})$ might be numerically indistinguishable from zero. To resolve this issue, the following normalized forward variables are employed: \begin{equation}\label{normalized*alpha*k*formula} \bar{\alpha}({\mathbf{X}}_k,s_k,{\boldsymbol\theta})=\frac{\alpha({\mathbf{X}}_k,s_k,{\boldsymbol\theta})}{L({\mathbf{X}}_k,{\boldsymbol\theta})}, \end{equation} where $L({\mathbf{X}}_k,{\boldsymbol\theta})=\sum_{s_k} \alpha({\mathbf{X}}_k,s_k,{\boldsymbol\theta})$, the likelihood of vector ${\mathbf{X}}_k$.
Then (\ref{alpha*k*formula}) immediately implies that the normalized forward variables satisfy the following equation: \begin{equation*}\label{alpha*bar*k} \bar{\alpha}({\mathbf{X}}_{k+1},s_{k+1},{\boldsymbol\theta})=\frac{L({\mathbf{X}}_{k},{\boldsymbol\theta})}{L({\mathbf{X}}_{k+1},{\boldsymbol\theta})} \sum_{s_{k}}f\big( X_{k+1}, s_{k+1} | s_{k},\Delta_{k+1},{\boldsymbol\theta}\big)\bar{\alpha}({\mathbf{X}}_k,s_k,{\boldsymbol\theta}). \end{equation*} If for $0\leq k\leq n-1$ we define \begin{equation*} d({\mathbf{X}}_{k+1},{\boldsymbol\theta})=\frac{L({\mathbf{X}}_{k+1},{\boldsymbol\theta})}{L({\mathbf{X}}_k,{\boldsymbol\theta})}, \end{equation*} then one can easily verify that \begin{equation*}\label{d*k} d({\mathbf{X}}_{k+1},{\boldsymbol\theta})=\sum_{s_{k+1}}\sum_{s_{k}}f\big( X_{k+1}, s_{k+1} | s_{k}, \Delta_{k+1},{\boldsymbol\theta}\big)\bar{\alpha}({\mathbf{X}}_k,s_k,{\boldsymbol\theta}). \end{equation*} Here is the normalized version of the forward algorithm. \begin{enumerate} \item For observed ${\mathbf{X}}$ and given parameter vector ${\boldsymbol\theta}$, compute $f\big( X_{k+1}, s_{k+1} | s_{k},\Delta_{k+1},{\boldsymbol\theta}\big)$ for all possible pairs $(s_k, s_{k+1})$, $k=0,\dots,n-1$. \item Base case: $\bar{\alpha}({\mathbf{X}}_0,s_0,{\boldsymbol\theta}) = \nu(s_0)$, where $s_0=0,1,2$. \item Induction: for $s_{k+1}=0,1,2$ compute $\bar{\alpha}({\mathbf{X}}_{k+1},s_{k+1},{\boldsymbol\theta})$ using \begin{equation*} \bar{\alpha}({\mathbf{X}}_{k+1},s_{k+1},{\boldsymbol\theta})=\frac{1}{d({\mathbf{X}}_{k+1},{\boldsymbol\theta})} \sum_{s_{k}}f\big( X_{k+1}, s_{k+1} | s_{k},\Delta_{k+1},{\boldsymbol\theta}\big)\bar{\alpha}({\mathbf{X}}_k,s_k,{\boldsymbol\theta}) \end{equation*} and \begin{equation*} d({\mathbf{X}}_{k+1},{\boldsymbol\theta})=\sum_{s_{k+1}}\sum_{s_{k}}f\big( X_{k+1}, s_{k+1} | s_{k}, \Delta_{k+1},{\boldsymbol\theta}\big)\bar{\alpha}({\mathbf{X}}_k,s_k,{\boldsymbol\theta}).
\end{equation*} \item Termination: $\log L({\mathbf{X}},{\boldsymbol\theta}) = \sum_{k=1}^n \log d({\mathbf{X}}_{k},{\boldsymbol\theta})$. \end{enumerate} { This algorithm can be easily adapted to situations where some states are completely or partially observed. For example, accelerometer data might be used to infer when an animal is moving or not, and direct inspection of a kill site can confirm handling. If state $s_{k}$ is known, then first calculate the three $k$th forward variables as usual. Next, set the two forward variables corresponding to the excluded states to zero. After that, just continue the forward algorithm in the normal fashion until the next location where additional information on the state is available. If at the $k$th location only one state is excluded, then we have to set only one forward variable to zero. } \section{Simulation and Data Analysis}\label{sec:sim} \begin{figure}[tbp] \centering \includegraphics[angle=-90, scale=.5]{violinplot.pdf} \caption{Violin plots of the maximum likelihood estimates from 49 replicates using the forward algorithm. The horizontal bar in each panel is the true parameter value.} \label{fig:mle} \end{figure} { We ran a small simulation to demonstrate that the forward algorithm successfully recovers the model parameters. The true parameter values were set to be $\lambda_0 = 4$, $\lambda_1 = 0.5$, $\lambda_2 = 0.1$, $p_1 = 0.8$, and $\sigma = 25$. The simulation was small because the computation of the maximum likelihood estimator is very demanding. Evaluation of the terms in Theorems~\ref{thm:S0=0}--\ref{thm:S0=1} involves infinite series that are computationally intensive; evaluation of the terms in the likelihood in Section~\ref{sec:jointXS} is very expensive because the functions in~\eqref{eq:f} are numerical integrals of $p_{ij}(s,t)$. We generated $49$ two-dimensional datasets on a time grid from~0 to~4000, with increment 20, so the resulting series ${\mathbf{X}}$ is of length~200.
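The data-generating step can be sketched as follows (a minimal illustration under our own naming; this is not the authors' code). States are drawn from the chain started in the moving state, and each observed increment is a centered normal whose variance is $\sigma^2$ times the moving time accumulated in the corresponding interval:

```python
import numpy as np

rng = np.random.default_rng(1)

# true parameter values of the simulation study above
lam0, lam1, lam2, p1, sigma = 4.0, 0.5, 0.1, 0.8, 25.0

def simulate_mrh(T, dt, rng):
    """One realization: the chain started in state 0 (moving) and the
    two-dimensional location X on the observation grid 0, dt, ..., T."""
    times, states = [0.0], [0]        # states[j] holds on [times[j], times[j+1])
    state, clock = 0, 0.0
    while clock < T:
        if state == 0:
            hold = rng.exponential(1.0 / lam0)
            nxt = 1 if rng.random() < p1 else 2
        elif state == 1:
            hold, nxt = rng.exponential(1.0 / lam1), 0
        else:
            hold, nxt = rng.exponential(1.0 / lam2), 0
        clock += hold
        times.append(clock)
        states.append(nxt)
        state = nxt
    grid = np.arange(0.0, T + dt / 2, dt)
    X = np.zeros((len(grid), 2))
    for i in range(1, len(grid)):
        a, b = grid[i - 1], grid[i]
        # moving (state-0) occupation time within the interval (a, b]
        moving = sum(max(0.0, min(b, times[j + 1]) - max(a, times[j]))
                     for j in range(len(times) - 1) if states[j] == 0)
        X[i] = X[i - 1] + rng.normal(0.0, sigma * np.sqrt(moving), size=2)
    return grid, X

grid, X = simulate_mrh(4000.0, 20.0, rng)  # 200 increments, as in the study
```

With $T=4000$ and increment 20 this produces the 200 increments described above; fitting then maximizes the forward-algorithm likelihood over the five parameters.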
Figure~\ref{fig:mle} presents the violin plots of the maximum likelihood estimates of the 49~replicates in comparison to the true values of the five parameters. Violin plots are similar to box plots but with a rotated kernel density plot on each side, which shows more information about the data than box plots. The horizontal bars in the panels are the true parameter values. For each parameter, the true value lies in the bulk part of the violin plot, indicating that the true parameters are recovered well by the likelihood estimates in this small-scale simulation study. We next applied the proposed model to the data from the same mountain lion analyzed by \citet{Yan:etal:2014} and \citet{Pozd:etal:2017}. This mountain lion was a mature female in the Gros Ventre Mountain Range near Jackson, Wyoming, tracked with a GPS collar from 2009 to 2012. The collar was designed to collect a fix every 8 hours, but the actual sampling times were irregular, with sampling intervals ranging from 0.5 to 120~hours (standard deviation 6.45~hours). Mountain lions behave differently in the summer and in the winter, so we focused on the summer of 2012, a total of 389 observations spanning from June~1 to August~31, which makes our results not directly comparable to the existing analyses \citep{Yan:etal:2014, Pozd:etal:2017}. Field personnel determined that some of the sites were places where the mountain lion consumed a prey item. She typically remained within 250~m of a kill site while she was considered to be ``handling'', which is different from the shorter resting periods. To allow for GPS measurement error, we rounded the locations to the nearest 100~meters. The maximum likelihood estimates of the MRH model parameters are: $\hat\lambda_0 = 9.25$/hour, $\hat\lambda_1 = 2.49$/hour, $\hat\lambda_2 = 0.19$/hour, $\hat\sigma = 1.28$km/hour$^{1/2}$, and $\hat p = 0.70$. That is, on average, the mountain lion stays for 0.11, 0.40, and 5.1 hours in the moving, resting, and handling states.
When moving, the mobility parameter is 1.28km/hour$^{1/2}$. This means that if the mountain lion moves without stopping for one hour, the standard deviation of the displacement from the initial position in the northing and easting directions is 1.28~km. When she stopped moving, she went into resting with probability 0.70 and into handling with probability 0.30. For comparison, we also fitted the moving-resting process to the same data, and the maximum likelihood estimates of the parameters are $\hat\lambda_0 = 0.66$/hour, $\hat\lambda_1 = 0.27$/hour, and $\hat\sigma = 0.59$km/hour$^{1/2}$. Because there is no handling state, the average durations in both moving and resting are estimated to be longer. Consequently, the mobility parameter estimate is much lower, almost halved, because the animal was assumed to be moving longer. We also fitted the BBMM of \citet{Horn:etal:2007} with the original, non-rounded data and the GPS measurement error standard deviation fixed at 0.02km. The BM mobility parameter estimate is even lower, 0.42km/hour$^{1/2}$, as the animal was assumed to be always moving. } \section{Concluding Remarks}\label{sec:conc} { The results on occupation times obtained in the paper are of independent value and can be used in other applications, such as quality control. Indeed, the continuous-time Markov Chain $S(t)$ can be viewed as a telegraph process with two off states. These two states will correspond to two different types of breakdown that require different repair times. The results in Theorems~\ref{thm:S0=0} and~\ref{thm:S0=1} can be easily generalized to cover $k$~motionless states instead of just two. The only difference is that, instead of a binomial distribution and convolutions of two gamma distributions, we will have a multinomial distribution and convolutions of $k$ gamma distributions.
The methodology developed in Sections~\ref{M(t)*when*S(0)=0*section} and \ref{M(t)*when*S(0)=1*section} works even if the holding times are not exponentially distributed, which is an advantage of our approach. If we want to keep the Markov property, then all holding times {\it must} have exponential distributions. The memoryless distribution might not be appropriate for some species that follow a cyclic daily routine. Nonetheless, if animals under observation do not exhibit a daily periodic behavior (like mountain lions), then using an exponential distribution is acceptable. The behavior of these animals is subject to interruptions that can cut short their time spent in a particular activity. For example, handling might be interrupted by a more dominant predator that drives the lion off her kill before she is finished with it. A distribution different from the exponential should be used for species with a periodic routine. One interesting possibility is to employ stable distributions (for example, the L\'{e}vy distribution). Because a linear combination of two independent random variables with a stable distribution has the same distribution, up to location and scale parameters, the formulas in Theorems~\ref{thm:S0=0} and~\ref{thm:S0=1} will be even nicer. The drawback is that the state process is then semi-Markov, and, as a result, likelihood inference via standard HMM tools is not available. Nevertheless, this still might be of interest for practitioners in ecological science, because estimation can be done via alternative methods such as composite likelihood estimation \citep{Lind:comp:1988}. } \bibliographystyle{mcap}
{ "timestamp": "2018-06-05T02:12:06", "yymm": "1806", "arxiv_id": "1806.00849", "language": "en", "url": "https://arxiv.org/abs/1806.00849" }
\section{Introduction} We study periodic solutions to the Navier-Stokes equations. Such solutions appear in the laminar regime for the flow around an obstacle, but they can also be induced by a periodic forcing. Here we investigate the second case, which is easier, as the frequency of oscillation is known from the problem data. In the following we will consider the incompressible Navier-Stokes equations on a domain $\Omega\subset\mathds{R}^d$ ($d=2,3$) \begin{equation}\label{NS} \div\,\mathbf{v} = 0,\quad \partial_t\mathbf{v}+(\mathbf{v}\cdot\nabla)\mathbf{v} - \nu \Delta \mathbf{v}+\nabla p = \mathbf{f}\text{ in }\mathds{R}_+\times \Omega,\quad \mathbf{v}=\mathbf{v}_0\text{ on }\{0\}\times \Omega,\quad \mathbf{v}=0\text{ on }\mathds{R}_+\times \partial\Omega, \end{equation} where $\mathbf{v}$ is the velocity, $p$ the pressure, $\nu>0$ the viscosity and $\mathbf{f}$ the right hand side, which we assume to be $P$-periodic in time \begin{equation}\label{RHS:PER} \mathbf{f}(t+P) = \mathbf{f}(t), \end{equation} where $P>0$ is a fixed period. We are looking for periodic solutions to~(\ref{NS}) that satisfy $\mathbf{v}(t+P)=\mathbf{v}(t)$ and $p(t+P)=p(t)$. In general, two strategies for identifying such periodic solutions exist: starting from an arbitrary initial value $\mathbf{v}_0$, one lets the system run into the periodic state for $t\to\infty$, or one tries to identify the correct initial value $\mathbf{v}_0$ that directly gives the periodic state. The necessity to compute such cyclic states of the Navier-Stokes equations arises e.g. in the context of temporal multi-scale schemes, where oscillatory periodic short-scale solutions guide efficient time-stepping schemes that govern the long-scale dynamics~\cite{CrouchOskay2015,SandersVerhulstMurdock2007,FreiRichterWick2016,FreiRichter2019}.
Another application is found in optimization and steering processes, such as simulated moving bed processes in chemical engineering~\cite{PlatteKuzminFredebeulTurek2005,LuebkeSeidelMorgensternTobiska2007,ZuyevSeidelMorgensternBenner2016}. The transition phase of a dynamic Navier-Stokes solution to the periodic state can be long, and its duration depends, usually exponentially, on the domain size, on problem parameters like the viscosity, and on discretization parameters like the temporal and spatial resolution. Several acceleration methods exist. One approach is based on a space-time framework for directly computing the periodic state~\cite{PlatteKuzminFredebeulTurek2005,SteihUrban2012}. This transforms the problem into a higher-dimensional one with substantial numerical overhead. Another possibility is to find efficient ways of identifying the correct initial value $\mathbf{v}_0$ for the velocity, $\mathbf{v}(0)=\mathbf{v}_0$, that gives a temporally periodic solution of period $P>0$ with $\mathbf{v}(t+P)=\mathbf{v}(t)$ for all $t\ge 0$. The classical approach is the shooting method, which has been demonstrated for systems of ordinary differential equations~\cite{RooseLustChampneysSpence1995,JianBieglerFox2003}. Without special adaptations, as discussed by these authors, the shooting method requires the solution of a high-dimensional problem, which comes at significant cost if partial differential equations are considered. Yet another approach casts the task of finding a periodic solution as the identification of the initial value $\mathbf{v}_0$ that solves the nonlinear problem $F(\mathbf{v}_0)=0$, where $F(\mathbf{v}_0):= \mathbf{v}_{\mathbf{v}_0}(P)-\mathbf{v}_0$ and $\mathbf{v}_{\mathbf{v}_0}(t)$ is the dynamic solution starting with $\mathbf{v}(0)=\mathbf{v}_0$. Applying Newton's method to this problem yields a structure similar to the shooting method.
The authors of~\cite{PotschkaMommerSchloederBock2012,HanteMommerPotschka2015} derived special preconditioners to avoid the large effort of high-dimensional problems within the Newton scheme. Nonlinear parabolic problems are analyzed in~\cite{Pao2001}, and monotone iterative schemes are presented for converging to periodic solutions. Finding a zero of $F$ can also be tackled as an optimization problem, minimizing $\|F(\mathbf{v}_0)\|^2$. The authors of~\cite{AmbroseWilkening2010,RichterWollner2018} apply different optimization schemes to accelerate this problem. This approach requires the solution of backward-in-time adjoint problems. Finally, the authors of~\cite{LuebkeSeidelMorgensternTobiska2007} propose an acceleration tool for the forward simulation based on a cascadic multilevel method. While this approach requires the forward simulation only, it might still suffer from a long transient phase. Here, an accelerated scheme for the forward simulation is presented that is based on updating the initial values using the solution of a stationary auxiliary problem. For the Stokes equations we give a proof of the robust convergence of the scheme with a rate that does not depend on problem or discretization parameters. We numerically demonstrate the efficiency of this scheme for the nonlinear Navier-Stokes case. In the following section we briefly introduce the Navier-Stokes equations and the notation required for specifying cyclic states. Section~\ref{sec:projection} presents the averaging algorithm for accelerating convergence to such periodic states. We give a complete analysis for the linear case of the Stokes equations and some hints on treating the nonlinear Navier-Stokes equations. In Section~\ref{sec:num} we present different numerical test cases describing the robustness and efficiency of the suggested scheme. We conclude in Section~\ref{sec:conclusion} with a short outlook on open problems.
\section{Periodic-in-time flow problems}\label{sec:periodic} For the following, let $\Omega\subset\mathds{R}^d$ be a domain ($d=2$ or $d=3$). By $H^s(\Omega)$, for $s\in\mathds{N}$ with $s\ge 1$, we denote the Sobolev space of $L^2$ functions on $\Omega$ with $s$-th weak derivative in $L^2$. By $H^1_0(\Omega)$ we denote the space of $H^1$-functions with trace zero on the boundary $\partial\Omega$. By $\|\cdot\|$ we denote the spatial $L^2$-norm. Further norms are specified by a corresponding index. We start by discussing the linear Stokes equations. \begin{problem}[Stokes]\label{problem:stokes} Let $\Omega\subset\mathds{R}^d$ for $d=2,3$ be a domain with a boundary that is either smooth ($C^2$-parametrization) or polygonal with convex corners. Let $P>0$ be the fixed period and \[ \mathbf{f}\in L^\infty(\mathds{R};H^{-1}(\Omega)^d) \] with $\mathbf{f}(t,\cdot)=\mathbf{f}(t+P,\cdot)$. For $\mathbf{v}_0\in L^2(\Omega)$ let \begin{equation}\label{ST} \div\,\mathbf{v}=0,\quad \partial_t \mathbf{v}-\nu \Delta \mathbf{v}+\nabla p = \mathbf{f} \text{ in }\mathds{R}_+\times \Omega,\quad \mathbf{v}=\mathbf{v}_0\text{ on }\{0\}\times \Omega,\quad \mathbf{v}=0\text{ on }\mathds{R}_+\times \partial\Omega \end{equation} be the solution to the Stokes equations for $\nu>0$. \end{problem} Standard existence and regularity results~\cite{Temam2000} give a unique solution to Problem~\ref{problem:stokes} that satisfies \begin{equation}\label{energy} \|\mathbf{v}(t)\|^2 + \int_0^t\nu \|\nabla\mathbf{v}(s)\|^2\,\text{d}s \le \|\mathbf{v}_0\|^2 + \int_0^t \nu^{-1}\|\mathbf{f}(s)\|_{H^{-1}}^2\,\text{d}s\quad\forall t\in \mathds{R}_+.
\end{equation} By Poincar\'e's inequality $\|\mathbf{v}\|\le c_p \|\nabla \mathbf{v}\|$ we obtain an integral inequality for $\|\mathbf{v}(t)\|^2$ \begin{equation}\label{energy1} \|\mathbf{v}(t)\|^2+\int_0^t\frac{\nu}{c_p^2}\|\mathbf{v}(s)\|^2\,\text{d}s \le \|\mathbf{v}_0\|^2 + \int_0^t\frac{1}{\nu} \|\mathbf{f}(s)\|_{-1}^2\,\text{d}s, \end{equation} which shows that the solution is bounded for all $t\ge 0$ \[ \|\mathbf{v}(t)\|^2 \le \|\mathbf{v}_0\|^2\exp\left(-\frac{\nu}{c_p^2}t\right) +\frac{c_p^2}{\nu^2}\sup_{s\in [0,t]} \|\mathbf{f}\|_{-1}^2. \] Next, let $\mathbf{w}(t):=\mathbf{v}(t+P)-\mathbf{v}(t)$. This function satisfies the homogeneous version of equations~(\ref{energy}) and~(\ref{energy1}), so that it holds \begin{equation}\label{perioddecay} \|\mathbf{w}(t)\|^2 \le \|\mathbf{v}(P)-\mathbf{v}(0)\|^2 \exp\left(-\frac{\nu}{c_p^2}t\right) \xrightarrow[t\to\infty]{} 0. \end{equation} Hence, the solution $\mathbf{v}(t)$ runs into a unique periodic-in-time state for $t\to\infty$. Further, (\ref{perioddecay}) shows the potentially slow decay to the periodic solution depending on the viscosity $\nu$ and the Poincar\'e constant $c_p$. These simple results do not carry over to the Navier-Stokes equations. This is mainly due to the small-data assumption that is required for the existence of global solutions and the uniqueness of stationary solutions, which is also of relevance for identifying periodic-in-time states. \begin{problem}[Navier-Stokes]\label{problem:navierstokes} Let $\Omega\subset\mathds{R}^d$ for $d=2,3$ be a domain with a boundary that is either smooth ($C^2$-parametrization) or polygonal with convex corners. Let $P>0$ be the fixed period and \[ \mathbf{f}\in L^\infty(\mathds{R};H^{-1}(\Omega)^d) \] with $\mathbf{f}(t,\cdot)=\mathbf{f}(t+P,\cdot)$.
For $\mathbf{v}_0\in L^2(\Omega)$ let
\begin{equation}\label{NST}
\div\,\mathbf{v}=0,\quad \partial_t \mathbf{v}+(\mathbf{v}\cdot\nabla)\mathbf{v}-\nu \Delta \mathbf{v}+\nabla p = \mathbf{f} \text{ in }\mathds{R}_+\times \Omega,\quad \mathbf{v}=\mathbf{v}_0\text{ on }\{0\}\times \Omega,\quad \mathbf{v}=0\text{ on }\mathds{R}_+\times \partial\Omega
\end{equation}
be the solution to the Navier-Stokes equations for $\nu>0$.
\end{problem}

It has been shown by Kyed and Galdi~\cite{Kyed2012,GaldiKyed2016} that the solution runs into a periodic-in-time state $(\mathbf{v}^\pi,p^\pi)$ if the data is sufficiently small, i.e. if the right hand side $\mathbf{f}$, the initial value $\mathbf{v}_0$, the non-homogeneous Dirichlet data $\mathbf{v}^D$ as well as the period length $P$ are small and the viscosity $\nu$ is sufficiently large.

If the proper initial value $\mathbf{v}_0^\pi$ is known, one cycle of the Stokes or Navier-Stokes equations on $[0,P]$ will directly give the periodic solution. If, however, the exact initial value is not given, it might take a tremendous number of cycles (of length $P$) to sufficiently reduce the periodicity error $\|\mathbf{v}^\pi(t)-\mathbf{v}(t)\|$. Here we describe an acceleration approach that is based on projecting the approximated solution to one that already satisfies a correct temporal average condition. While the analysis is rigorous for the linear Stokes equations, it remains a heuristic computational acceleration scheme in the nonlinear case. In contrast to the various approaches presented in the previous section, our averaging scheme only requires the solution of one linear stationary problem in each cycle as computational overhead.

\section{Averaging scheme for identifying periodic solutions}\label{sec:projection}

A very slow decay to the periodic solution must be expected in the general case.
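The decay predicted by~(\ref{perioddecay}) can be reproduced on a scalar surrogate of a single Stokes mode, $v'(t) + \nu\lambda\, v(t) = f(t)$ with $P$-periodic forcing, where the periodicity error contracts by $\exp(-\nu\lambda P)$ per cycle. The following sketch illustrates this; all parameter values are illustrative assumptions, not taken from the test cases below.

```python
import numpy as np

# Scalar surrogate of one Stokes mode: v'(t) + nu*lam*v(t) = f(t) with
# P-periodic forcing f(t) = sin(2*pi*t/P); parameters are illustrative.
nu, lam, P = 0.1, 1.0, 1.0

def one_cycle(v0, n_steps=1000):
    """Integrate one period [0, P] with the Crank-Nicolson scheme."""
    k = P / n_steps
    v = v0
    for n in range(n_steps):
        f_avg = 0.5 * (np.sin(2 * np.pi * n * k / P)
                       + np.sin(2 * np.pi * (n + 1) * k / P))
        v = ((1 - 0.5 * k * nu * lam) * v + k * f_avg) / (1 + 0.5 * k * nu * lam)
    return v

v, errs = 1.0, []                  # arbitrary initial value
for _ in range(5):
    v_next = one_cycle(v)
    errs.append(abs(v_next - v))   # periodicity error |v((l+1)P) - v(lP)|
    v = v_next
rates = [errs[i + 1] / errs[i] for i in range(4)]
print(rates)                       # each rate is close to exp(-nu*lam*P)
```

For small $\nu\lambda P$ the per-cycle contraction $\exp(-\nu\lambda P)$ approaches one; this is exactly the slow convergence that the averaging scheme derived below is designed to overcome.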
Convergence rates
\begin{equation}\label{decay}
\|\mathbf{v}(t)-\mathbf{v}^\pi(t)\| \le C \exp\left(-\frac{\nu}{c_p^2}t\right) \|\mathbf{v}_0-\mathbf{v}^\pi_0\|
\end{equation}
are also numerically observed, both for the Stokes and the Navier-Stokes equations; see Section~\ref{sec:num} for a numerical illustration. In this section we will derive an averaging scheme for accelerating the convergence to the periodic solution $\mathbf{v}^\pi(t)$ for arbitrary initial values $\mathbf{v}_0$. This averaging scheme is based on a splitting of the solution into its average and the fluctuations. On the interval $I=[0,P]$ we introduce
\[
\mathbf{v}(t):=\bar \mathbf{v}+\tilde\mathbf{v}(t),\quad \bar\mathbf{v}:=\frac{1}{P}\int_0^P \mathbf{v}(s)\,\text{d}s,\quad \tilde\mathbf{v}(t):=\mathbf{v}(t)-\bar\mathbf{v}.
\]
To speed up the process of finding the periodic solution, we try to quickly adapt the average $\bar\mathbf{v}$. Averaging the Navier-Stokes equation, Problem~\ref{problem:navierstokes}, gives
\begin{equation}\label{separate}
\begin{aligned}
\frac{\mathbf{v}(P)-\mathbf{v}(0)}{P} +(\bar\mathbf{v}\cdot\nabla)\bar\mathbf{v} +\frac{1}{P}\int_0^P (\tilde\mathbf{v}(s)\cdot\nabla)\tilde\mathbf{v}(s)\,\text{d}s -\nu \Delta\bar\mathbf{v} +\nabla\bar p &= \bar \mathbf{f},\quad\div\,\bar\mathbf{v}=0\\
%
\partial_t \tilde\mathbf{v}(t) +(\tilde\mathbf{v}(t)\cdot\nabla)\bar\mathbf{v} + (\bar\mathbf{v}\cdot\nabla)\tilde\mathbf{v}(t) -\nu \Delta\tilde\mathbf{v}(t) +\nabla\tilde p(t) &= \tilde \mathbf{f}(t),\quad \div\,\tilde\mathbf{v}(t)=0,
\end{aligned}
\end{equation}
and reveals that average and fluctuation are coupled and cannot be computed separately.
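For a single scalar mode $v' + a v = f$, the first (averaged) equation reduces to the identity $(v(P)-v(0))/P + a\bar v = \bar f$. With trapezoidal averaging this identity even holds exactly for the Crank-Nicolson discretization, as the following short check illustrates; all parameter values are illustrative assumptions.

```python
import numpy as np

# Check the time-averaged identity (v(P) - v(0))/P + a*vbar = fbar for a
# scalar mode v' + a*v = f(t); a, P and v(0) are illustrative assumptions.
a, P, n = 0.3, 1.0, 2000
k = P / n
t = np.linspace(0.0, P, n + 1)
f = np.sin(2 * np.pi * t / P) + 0.5

v = np.empty(n + 1)
v[0] = 0.7
for j in range(n):       # Crank-Nicolson time stepping
    v[j + 1] = ((1 - 0.5 * k * a) * v[j]
                + 0.5 * k * (f[j] + f[j + 1])) / (1 + 0.5 * k * a)

def time_avg(y):         # trapezoidal rule = Crank-Nicolson average
    return (0.5 * (y[0] + y[-1]) + y[1:-1].sum()) * k / P

residual = (v[-1] - v[0]) / P + a * time_avg(v) - time_avg(f)
print(residual)          # zero up to rounding: the averaged identity is exact
```

The same telescoping argument is what makes the discrete averaging operator introduced later consistent with the time stepping scheme.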
Assuming that $\mathbf{v}$ is the periodic solution $\mathbf{v}^\pi$ with $\mathbf{v}^\pi(t)=\mathbf{v}^\pi(t+P)$, it holds
\[
\begin{aligned}
(\bar\mathbf{v}^\pi\cdot\nabla)\bar\mathbf{v}^\pi + \frac{1}{P}\int_0^P(\tilde\mathbf{v}^\pi(s)\cdot\nabla)\tilde\mathbf{v}^\pi(s)\,\text{d}s -\nu \Delta\bar\mathbf{v}^\pi +\nabla\bar p^\pi &= \bar \mathbf{f},\quad\div\,\bar\mathbf{v}^\pi=0\\
%
\partial_t \tilde\mathbf{v}^\pi(t) +(\tilde\mathbf{v}^\pi(t)\cdot\nabla)\bar\mathbf{v}^\pi + (\bar\mathbf{v}^\pi\cdot\nabla)\tilde\mathbf{v}^\pi(t) -\nu \Delta\tilde\mathbf{v}^\pi(t) +\nabla\tilde p^\pi(t) &= \tilde \mathbf{f}(t),\quad \div\,\tilde\mathbf{v}^\pi(t)=0.
\end{aligned}
\]
In the averaging scheme we aim at correcting the initial value such that the correct average $\bar\mathbf{v}^\pi$ is well approximated. For this, we introduce a problem for the correction $(\bar\mathbf{w},\bar q):=(\bar\mathbf{v}^\pi-\bar\mathbf{v},\bar p^\pi-\bar p)$, i.e. for the difference between the periodic solution and the current approximation
\begin{multline}\label{eq:update}
(\bar\mathbf{w}\cdot\nabla)\bar\mathbf{w} +(\bar\mathbf{w}\cdot\nabla)\bar\mathbf{v} +(\bar\mathbf{v}\cdot\nabla)\bar\mathbf{w} -\nu \Delta\bar\mathbf{w} + \nabla\bar q\\
=\frac{\mathbf{v}(P)-\mathbf{v}(0)}{P}+ \frac{1}{P}\int_0^P\Big( (\tilde\mathbf{v}(s)\cdot\nabla)\tilde\mathbf{v}(s) - (\tilde\mathbf{v}^\pi(s)\cdot\nabla)\tilde\mathbf{v}^\pi(s)\Big)\,\text{d}s,
%
\quad \div\,\bar\mathbf{w}=0.
\end{multline}
Naturally, the fluctuation term on the right hand side cannot be computed without knowledge of the periodic fluctuations $\tilde\mathbf{v}^\pi(s)$. However, since $\tilde\mathbf{v}$ shall eventually converge to $\tilde\mathbf{v}^\pi$, and since adequate initial values will be required anyway, we simplify the equation by dropping this term. Likewise, we drop the quadratic term $(\bar\mathbf{w}\cdot\nabla)\bar\mathbf{w}$, since it is second order in $\bar\mathbf{w}$, which will converge to zero.
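For a single linear scalar mode $v' + \nu\lambda v = f$ the simplified update amounts to solving $\nu\lambda\,\bar w = (v(P)-v(0))/P$ and restarting from $v(P) + \bar w$. The following sketch runs this iteration; all parameter values are illustrative assumptions, and the reference periodic initial value is obtained from the variation-of-constants formula.

```python
import numpy as np

# Averaging iteration for one scalar mode v' + nu*lam*v = sin(2*pi*t/P);
# all parameter values are illustrative assumptions.
nu, lam, P, n = 0.05, 1.0, 1.0, 4000
a, k = nu * lam, P / n
t = np.linspace(0.0, P, n + 1)
f = np.sin(2 * np.pi * t / P)

def cycle(v0):
    v = np.empty(n + 1)
    v[0] = v0
    for j in range(n):   # Crank-Nicolson step
        v[j + 1] = ((1 - 0.5 * k * a) * v[j]
                    + 0.5 * k * (f[j] + f[j + 1])) / (1 + 0.5 * k * a)
    return v

# reference: periodic initial value from variation of constants,
# v0_pi = int_0^P f(s) exp(-a(P-s)) ds / (1 - exp(-a P))
s = np.linspace(0.0, P, 100001)
g = np.sin(2 * np.pi * s / P) * np.exp(-a * (P - s))
h = s[1] - s[0]
integral = (0.5 * (g[0] + g[-1]) + g[1:-1].sum()) * h
v0_pi = integral / (1.0 - np.exp(-a * P))

v0, errs = 1.0, []
for _ in range(3):
    v = cycle(v0)
    w_bar = (v[-1] - v[0]) / (a * P)   # stationary update
    v0 = v[-1] + w_bar                 # corrected initial value
    errs.append(abs(v0 - v0_pi))
print(errs)   # contracts far faster than the plain cycle rate exp(-a*P)
```

Plain cycling would contract the error only by $\exp(-aP)\approx 0.95$ per period here, while the corrected restart reduces it by more than an order of magnitude per iteration, in line with the analysis of the next subsection.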
With these preparations we can formulate the approximated averaging scheme for finding periodic-in-time solutions to the Navier-Stokes equations.

\begin{algorithm}[Averaging Scheme for the Navier-Stokes equations]
\label{algo:navierstokes}
Let $\mathbf{v}_0^{(0)}\in L^2(\Omega)^d$ be an initial value. For $l=1,2,\dots$ iterate
\begin{enumerate}
\item Solve the Navier-Stokes equations on $[0,P]$
\[
\partial_t \mathbf{v}^{(l)}(t) + (\mathbf{v}^{(l)}\cdot\nabla)\mathbf{v}^{(l)} -\nu \Delta\mathbf{v}^{(l)} +\nabla p^{(l)} = \mathbf{f},\quad \div\,\mathbf{v}^{(l)}=0,\quad \mathbf{v}^{(l)}(0) = \mathbf{v}_0^{(l-1)}.
\]
\item Compute the average in time
\[
\bar \mathbf{v}^{(l)}:=\frac{1}{P}\int_0^P \mathbf{v}^{(l)}(s)\,\text{d}s.
\]
\item Solve the approximated stationary update problem
\[
(\bar\mathbf{w}^{(l)}\cdot\nabla)\bar\mathbf{v}^{(l)} +(\bar\mathbf{v}^{(l)}\cdot\nabla)\bar\mathbf{w}^{(l)} -\nu\Delta\bar\mathbf{w}^{(l)} + \nabla \bar q^{(l)} =\frac{\mathbf{v}^{(l)}(P)-\mathbf{v}^{(l)}(0)}{P},\quad \div\,\bar\mathbf{w}^{(l)}=0.
\]
\item Update the initial value
\[
\mathbf{v}^{(l)}_0:= \mathbf{v}^{(l)}(P) + \bar\mathbf{w}^{(l)}.
\]
\end{enumerate}
\end{algorithm}

The main effort of this scheme is still in the computation of the dynamic problems in Step 1. We will, however, observe a significant reduction of the number of cycles required to approximate the periodic solution.

\subsection{Analysis for the Stokes equations}

The application of the averaging scheme to the Stokes equations significantly simplifies the setting. First, without the convective terms, average and fluctuations decouple in equation~(\ref{separate}). This allows us to compute the update problem~(\ref{eq:update}) exactly, without further simplifications. Step 3 of Algorithm~\ref{algo:navierstokes} can be replaced by solving
\begin{equation}\label{eq:update:stokes}
-\nu\Delta\bar\mathbf{w}^{(l)} + \nabla\bar q^{(l)} = \frac{\mathbf{v}^{(l)}(P)-\mathbf{v}^{(l)}(0)}{P},\quad \div\,\bar\mathbf{w}^{(l)} =0.
\end{equation}
Second, the symmetric Stokes operator allows for a very simple analysis based on diagonalization. To be precise, under the assumptions of Problems~\ref{problem:stokes} and~\ref{problem:navierstokes} there exists an orthonormal basis of weakly divergence free eigenfunctions $\boldsymbol{\omega}_i\in H^1_0(\Omega)^d$ with pressures $\zeta_i\in L^2(\Omega)/\mathds{R}$ and corresponding eigenvalues $\lambda_i\in\mathds{R}$ for $i\in\mathds{N}$ such that (see~\cite{Temam2000})
\begin{equation}\label{eigenvalue:stokes}
\begin{aligned}
(\nabla \boldsymbol{\omega}_i,\nabla\boldsymbol{\phi})-(\zeta_i,\div\,\boldsymbol{\phi}) + (\div\,\boldsymbol{\omega}_i,\xi) &= \lambda_i(\boldsymbol{\omega}_i,\boldsymbol{\phi})\quad\forall \boldsymbol{\phi}\in H^1_0(\Omega)^d,\; \xi\in L^2(\Omega)/\mathds{R}\\
%
\text{with }(\boldsymbol{\omega}_i,\boldsymbol{\omega}_j)_\Omega = \delta_{ij}& \text{ and } 0<\lambda_1 \le\lambda_2\le \lambda_3\le \cdots
\end{aligned}
\end{equation}
for all $i,j\in\mathds{N}$. Given $\mathbf{v}(t)$, we denote the expansion in the eigenfunctions as
\[
\mathbf{v}(t) = \sum_{i\ge 1} v_i(t)\boldsymbol{\omega}_i,\quad v_i(t) = (\mathbf{v}(t),\boldsymbol{\omega}_i)_\Omega.
\]
A corresponding notation is used for further functions like the right hand side $\mathbf{f}(t)$, the periodic solution $\mathbf{v}^\pi(t)$, the update $\mathbf{w}(t)$ or the initial data $\mathbf{v}_0$. Hereby the Stokes equation can be diagonalized and a simple decoupled system of ODEs results
\begin{equation}\label{ODE}
\partial_t v_i(t) + \nu \lambda_i v_i(t) = f_i(t),\quad i=1,2,\dots.
\end{equation}
Given the coefficients $v_i(0)=v_{i,0}\in\mathds{R}$ of the initial value, the solution to~(\ref{ODE}) is given by
\begin{equation}\label{SOL}
v_i(t) = \exp\big(-\lambda_i\nu t\big) v_i(0) + \int_0^t f_i(s)\exp\big( -\lambda_i\nu(t-s)\big)\,\text{d}s.
\end{equation}

\begin{lemma}[Averaging Scheme for the Stokes equation]\label{lemma:ode}
Let $\Omega\subset\mathds{R}^d$ for $d=2$ or $d=3$ be a domain with either convex polygonal or smooth ($C^2$-parametrization) boundary, $\mathbf{f}\in L^\infty([0,P],H^{-1}(\Omega)^d)$ and $\mathbf{v}_0\in L^2(\Omega)$. The averaging scheme applied to the Stokes equations converges like
\[
\|\mathbf{v}^{(l)}_0 - \mathbf{v}^\pi_0\|\le 0.3\cdot \|\mathbf{v}^{(l-1)}_0-\mathbf{v}^\pi_0\|,
\]
where $\mathbf{v}^\pi_0$ is the initial value that yields the periodic-in-time solution.
\end{lemma}

\begin{proof}
Let $v_{i,0}$ be the coefficients of the initial value $\mathbf{v}_0$; by $v_i(t)$ we denote the coefficient functions of the dynamic solution on $I=[0,P]$ and by $v_i^\pi(t)$ the coefficient functions of the unknown periodic-in-time solution. We analyze one iteration of Algorithm~\ref{algo:navierstokes} applied to the Stokes equations. Hence, $l=1$ and we will drop this index. For the error $w_i(t):=v_i^\pi(t)-v_i(t)$, equation~(\ref{SOL}) gives
\begin{equation}\label{S1}
v_i^\pi(t)-v_i(t) = \exp\big(-\lambda_i\nu t\big) (v_{i,0}^\pi-v_{i,0}).
\end{equation}
Step 2 can be omitted since the average does not enter the update equation in the linear case, compare~(\ref{eq:update:stokes}). The solution to the stationary update equation of Step 3 is given by
\begin{equation}\label{S2}
\lambda_i \nu \bar w_i = \frac{v_i(P)-v_i(0)}{P} \quad\Rightarrow\quad \bar w_i = \frac{(v_i^\pi(0)-v_i(0))-(v_i^\pi(P)-v_i(P))}{\lambda_i\nu P},
\end{equation}
where we used that $v_i^\pi(0)=v_i^\pi(P)$.
With~(\ref{S1}) this gives
\begin{equation}\label{S3}
\bar w_i =\frac{1}{\lambda_i\nu P}\Big( 1- \exp\big(-\lambda_i\nu P\big)\Big) \big(v_i^\pi(0)-v_i(0)\big).
\end{equation}
Then, the newly computed initial value from Step 4 in the algorithm satisfies
\begin{equation}\label{S4}
v_{i,0}^\pi- \big(v_i(P) + \bar w_i\big) = \big(v_i^\pi(P)-v_i(P)\big) - \bar w_i = \Big(\exp\big(-\lambda_i \nu P\big) \left(1+\frac{1}{\lambda_i\nu P}\right) -\frac{1}{\lambda_i\nu P}\Big) \big(v_i^\pi(0)-v_i(0)\big),
\end{equation}
where we used~(\ref{S3}) and~(\ref{S1}). Let $s=\lambda_i\nu P>0$. We study the function
\begin{equation}\label{S5}
\rho(s) = \exp(-s)\left(1+\frac{1}{s}\right)-\frac{1}{s} =\frac{\exp(-s)}{s}\big(1+s-\exp(s)\big),
\end{equation}
which is negative for $s\in\mathds{R}_+$ since $\exp(s)\ge 1+s$. Further it holds $\rho(0)=0$ and $\rho(s)\to 0$ for $s\to\infty$. $\rho(s)$ has only one extreme point in $\mathds{R}_+$ that is easily found numerically and gives the bound $|\rho(s)|<0.299$ for $s\in\mathds{R}_+$.
\end{proof}

The error reduction factor of this averaging scheme is thus at most $0.3$ per iteration and, in particular, does not depend on the stiffness rate $\lambda_i\nu>0$ or the period length $P>0$.

\section{Discrete setting}\label{disc}

For the discretization of the dynamic Navier-Stokes and Stokes equations we employ standard techniques, which we summarize briefly. Details are found in~\cite[Sections 4.1 and 4.2]{Richter2017}. All implementations are done in Gascoigne 3D~\cite{Gascoigne3D}.

In time we use the $\theta$-time-stepping scheme, which can be considered a variant of the Crank-Nicolson scheme~\cite{LuskinRannacher1982,HeywoodRannacher1990} with better smoothing properties, capable of giving robust long-time solutions. Let $k=P/N$ be the time step size and $t_n = nk$ for $n=0,1,\dots$ be the uniform partitioning of $[0,P]$. By $\mathbf{v}^n\approx \mathbf{v}(t_n)$ and $p^n\approx p(t_n)$ we denote the approximation to the solution at time $t_n$.
Likewise, $\mathbf{f}^n:=\mathbf{f}(t_n)$ is the right hand side evaluated at time $t_n$. Let $V_h\times Q_h\subset H^1_0(\Omega)^d \times L^2(\Omega)$ be a suitable finite element pair. For simplicity assume that $V_h\times Q_h$ is an inf-sup stable velocity-pressure pair like the Taylor-Hood element~\cite{Brezzi1991}. Alternatively, stabilized equal order finite elements, e.g. based on the local projection stabilization scheme~\cite{BeckerBraack2001}, can be used. This is indeed the standard setting of our implementation in Gascoigne 3D~\cite{Gascoigne3D}. The complete space-time discrete formulation of the incompressible Navier-Stokes equations is given by
\begin{multline}\label{fd}
\mathbf{v}^n\in V_h,\quad p^n\in Q_h,\quad \mathbf{v}^0:=\mathbf{v}_0,\quad \text{for }n=1,2,\ldots\\
\Big(\mathbf{v}^n-\mathbf{v}^{n-1},\phi\Big) + k\Big( (1-\theta)(\mathbf{v}^{n-1}\cdot\nabla)\mathbf{v}^{n-1} + \theta (\mathbf{v}^n\cdot\nabla)\mathbf{v}^n,\phi\Big)\\
+ \nu k\left( \nabla \left( (1-\theta)\mathbf{v}^{n-1} + \theta\mathbf{v}^n\right),\nabla\phi\right) -k\Big(p^n,\nabla\cdot\phi\Big)\\
+k \Big(\nabla\cdot \mathbf{v}^n,\xi\Big) = k\Big( (1-\theta)\mathbf{f}^{n-1}+\theta\mathbf{f}^n,\phi\Big)\quad\forall (\phi,\xi)\in V_h\times Q_h.
\end{multline}
The parameter $\theta$ is chosen in $[1/2,1]$. For $\theta=1$ this scheme corresponds to the backward Euler method, for $\theta=1/2$ to the Crank-Nicolson scheme and for $\theta>1/2$ to the shifted Crank-Nicolson scheme, which has better smoothing properties. For $\theta=1/2+{\mathcal O}(k)$ we get global stability and still have second order convergence, see~\cite{LuskinRannacher1982,Rannacher1984,RichterWick2015_time}. In addition to~(\ref{fd}) we also consider the discretization of the Stokes equations, which is realized by skipping the convective terms.
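For one diagonalized mode $v' + a v = f$, the $\theta$-scheme reads $(v^{n+1}-v^n)/k + a\big((1-\theta)v^n+\theta v^{n+1}\big) = (1-\theta)f^n + \theta f^{n+1}$, and the stated convergence orders are easy to verify numerically. The following sketch uses a manufactured solution; the setup is purely illustrative.

```python
import numpy as np

# theta-scheme for a scalar mode v' + a*v = f(t); the right hand side is
# manufactured so that v(t) = sin(t) is exact (illustrative setup).
a = 1.0
f = lambda t: np.cos(t) + a * np.sin(t)
v_exact = np.sin(1.0)      # exact value at final time T = 1

def final_error(N, theta):
    k, v = 1.0 / N, 0.0    # v(0) = 0
    for n in range(N):
        v = ((1 - (1 - theta) * k * a) * v
             + k * ((1 - theta) * f(n * k) + theta * f((n + 1) * k))) \
            / (1 + theta * k * a)
    return abs(v - v_exact)

for theta in (0.5, 1.0):
    ratio = final_error(100, theta) / final_error(200, theta)
    print(theta, ratio)    # error ratio under step halving: ~4 for CN, ~2 for BE
```

Halving the step size reduces the error by a factor close to four for $\theta=1/2$ (second order) and close to two for $\theta=1$ (first order), matching the statements above.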
To transfer the averaging scheme to the discrete setting, we first derive a discrete counterpart of the averaging operator that conforms to the discretization by the $\theta$-scheme. We sum~(\ref{fd}) over all time steps, using the same pair of test functions $(\phi,\xi)$ for all steps, and divide by the period $P$
\begin{equation}\label{summed}
\frac{1}{P}\Big(\mathbf{v}^N-\mathbf{v}^{0},\phi\Big) + \Big( \overline{(\mathbf{v}\cdot\nabla)\mathbf{v}}^k ,\phi\Big) + \nu \left( \nabla \overline{\mathbf{v}}^k,\nabla\phi\right) -\Big(\overline{p}^{k,0},\nabla\cdot\phi\Big) + \Big(\nabla\cdot \overline{\mathbf{v}}^{k},\xi\Big) = \Big( \overline{\mathbf{f}}^k,\phi\Big)
\end{equation}
with the discrete averaging operators
\begin{equation}\label{avgdic}
\overline{\mathbf{v}}^k:=\frac{k}{P}\sum_{n=1}^N \Big((1-\theta)\mathbf{v}^{n-1} + \theta\mathbf{v}^n\Big),\quad \overline{p}^{k,0}:=\frac{k}{P}\sum_{n=1}^{N} p^n.
\end{equation}
Directly summing the divergence term in~(\ref{fd}) yields the average $(\nabla\cdot \overline{\mathbf{v}}^{k,0},\xi)=0$; this, however, is equivalent to $(\nabla\cdot \overline{\mathbf{v}}^{k},\xi)=0$. The discrete periodic solution $(\mathbf{v}^\pi,p^\pi)$ satisfies the equation
\begin{equation}\label{summed:avg}
\Big( \overline{(\mathbf{v}^\pi\cdot\nabla)\mathbf{v}^\pi}^k ,\phi\Big) + \nu \left( \nabla \overline{\mathbf{v}^\pi}^k,\nabla\phi\right) -\Big(\overline{p^\pi}^{k,0},\nabla\cdot\phi\Big) + \Big(\nabla\cdot \overline{\mathbf{v}^\pi}^{k},\xi\Big) = \Big( \overline{\mathbf{f}}^k,\phi\Big),
\end{equation}
such that the approximated discrete update equation, i.e.
the equation for approximating the difference $(\overline{\mathbf{w}}^k,\overline{q}^{k,0}) \approx (\overline{\mathbf{v}^\pi}^{k}-\overline{\mathbf{v}}^k,\overline{p^\pi}^{k,0} - \overline{p}^{k,0})$, is given by
\begin{equation}\label{summed:up}
\Big( (\overline{\mathbf{w}}^k\cdot\nabla)\overline{\mathbf{v}}^k +(\overline{\mathbf{v}}^k\cdot\nabla)\overline{\mathbf{w}}^k ,\phi\Big) + \nu \left( \nabla \overline{\mathbf{w}}^k,\nabla\phi\right) -\Big(\overline{q}^{k,0},\nabla\cdot\phi\Big) + \Big(\nabla\cdot \overline{\mathbf{w}}^{k},\xi\Big) = \frac{1}{P}\Big(\mathbf{v}^N-\mathbf{v}^0,\phi\Big),
\end{equation}
where the same approximations are applied as in the continuous case: we neglect the averaging error $\overline{(\mathbf{w}\cdot\nabla)\mathbf{w}}^k- (\overline{\mathbf{w}}^k\cdot\nabla)\overline{\mathbf{w}}^k$ as well as the quadratic nonlinearity $(\overline{\mathbf{w}}^k\cdot\nabla)\overline{\mathbf{w}}^k$.

\begin{algorithm}[Discrete averaging Scheme for the Navier-Stokes equations]
\label{algo:disc}
Let $\mathbf{v}_0^{(0)}\in V_h$ be an initial value. For $l=1,2,\dots$ iterate
\begin{enumerate}
\item Solve the Navier-Stokes equations~(\ref{fd}) on $[0,P]$ for $(\mathbf{v}^{(l)}_n,p^{(l)}_n)$, $n=1,\dots,N$ with $\mathbf{v}^{(l)}_0=\mathbf{v}_0^{(l-1)}$.
\item Compute $\overline{\mathbf{v}^{(l)}}^k$, the discrete average in time, by~(\ref{avgdic}).
\item Solve the approximated (stationary) update problem~(\ref{summed:up}) for $(\overline{\mathbf{w}^{(l)}}^k,\overline{q^{(l)}}^{k,0})$.
\item Update the initial value
\[
\mathbf{v}^{(l)}_0:= \mathbf{v}^{(l)}_N + \overline{\mathbf{w}^{(l)}}^k.
\]
\end{enumerate}
\end{algorithm}

Since we consider conforming finite element discretizations, the proof for the linear case can be transferred to the discrete setting. Some care is required to obtain sufficient stability of the time stepping scheme.
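Restricted to a single discretely diagonalized mode, Algorithm~\ref{algo:disc} can be written in a few lines: the scalar version of the stationary update reads $\nu\lambda\,\overline{w} = (v^N - v^0)/P$. The sketch below iterates cycles of the $\theta$-scheme with $\theta_N = 1/2 + 1/(2N)$; all parameter values are illustrative assumptions.

```python
import numpy as np

# Discrete averaging scheme for a single scalar mode; N steps per cycle,
# theta_N = 1/2 + 1/(2N).  All parameter values are illustrative.
nu, lam, P, N = 0.1, 5.0, 1.0, 8
a, k = nu * lam, P / N
theta = 0.5 + 0.5 / N
t = np.linspace(0.0, P, N + 1)
f = np.sin(2 * np.pi * t / P)

def cycle(v0):
    v = v0
    for n in range(N):   # theta-scheme step
        v = ((1 - (1 - theta) * k * a) * v
             + k * ((1 - theta) * f[n] + theta * f[n + 1])) / (1 + theta * k * a)
    return v

# discrete periodic initial value: fixed point of the affine cycle map
q = (1 - (1 - theta) * k * a) / (1 + theta * k * a)  # per-step factor
v0_pi = cycle(0.0) / (1.0 - q ** N)

v0, errs = 1.0, []
for _ in range(5):
    vN = cycle(v0)
    w_bar = (vN - v0) / (a * P)   # Step 3: scalar stationary update
    v0 = vN + w_bar               # Step 4: corrected initial value
    errs.append(abs(v0 - v0_pi))
rates = [errs[i + 1] / errs[i] for i in range(4)]
print(rates)   # reduction rates per iteration, well below 0.42
```

Even with only $N=8$ steps per cycle the observed per-iteration reduction stays well below the bound $0.42$ established next.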
\begin{lemma}[Discrete averaging Scheme for the Stokes equation]\label{lemma:ode:disc}
Let $\Omega\subset\mathds{R}^d$ for $d=2$ or $d=3$ be a domain with either convex polygonal or smooth ($C^2$-parametrization) boundary, $\mathbf{f}\in L^\infty([0,P],H^{-1}(\Omega)^d)$ and $\mathbf{v}_0\in L^2(\Omega)$. Let $V_h\times Q_h\subset H^1_0(\Omega)^d\times L^2(\Omega)$ be an inf-sup stable finite element space. Let $I=[0,P]$ be discretized in $N\ge 4$ time steps of size $k=P/N$ with time-scheme parameter
\[
\theta_N = \frac{1}{2}+\frac{1}{2N}.
\]
Then, the discrete averaging Algorithm~\ref{algo:disc} applied to the Stokes equations converges like
\[
\|\mathbf{v}^{(l)}_0 - \mathbf{v}^\pi_0\|\le 0.42\cdot \|\mathbf{v}^{(l-1)}_0-\mathbf{v}^\pi_0\|.
\]
\end{lemma}

\begin{proof}
Let $(\boldsymbol{\omega}_i,\mu_i,\lambda_i)\in V_h\times Q_h\times \mathds{R}$ for $i=1,\dots,N_h$, where $N_h$ is the dimension of the discretely divergence free subspace of $V_h$, be a system of $L^2$-orthonormal, discretely divergence free eigenfunctions and eigenvalues of the discrete Stokes operator. By $v_{i,0}, f_i(t)\in \mathds{R}$ and $v_i(t)\in\mathds{R}$ we denote the coefficients of the initial value $\mathbf{v}_0$, of the right hand side and of the solution $\mathbf{v}(t)$ with respect to this basis. As in the continuous case, we analyze one single step of the discrete averaging scheme given in Algorithm~\ref{algo:disc}.

\medskip\noindent \emph{(i)} The discretized Stokes equation can be decoupled into a system of $N_h$ difference equations
\[
v^n_i-v^{n-1}_i + \nu k \lambda_i \big( (1-\theta)v^{n-1}_i + \theta v^n_i\big) = k \big( (1-\theta)f^{n-1}_i + \theta f^n_i\big),\quad i=1,\dots,N_h.
\]
We measure the difference $v^{\pi,n}_i-v_i^n$ between the solutions to the correct initial value $v^\pi_{0,i}$ and to an arbitrary starting value $v_{0,i}$
\begin{equation}\label{decay:disc}
v^{\pi,n}_i-v^n_i=\underbrace{\frac{1-\nu k (1-\theta)\lambda_i}{1+\nu k \theta \lambda_i}}_{=:q_i} \big(v^{\pi,n-1}_i-v^{n-1}_i\big) = q_i^n (v^\pi_{0,i}-v_{0,i}).
\end{equation}
The coefficients of the stationary update equation in Step 3 of Algorithm~\ref{algo:disc} are determined by
\[
\overline{w_i}^k =\frac{v_i^N-v_i^0}{\nu\lambda_i P}
\]
and with $v^{\pi,N}_i=v^{\pi,0}_i=v^\pi_{0,i}$ the new initial value computed in Step 4 of Algorithm~\ref{algo:disc} carries the error
\[
v^\pi_{0,i}-\left(v^N_i + \overline{w_i}^k\right) =v^{\pi,N}_i - v^N_i + \frac{v_i^{\pi,N}-v_i^N -(v_i^{\pi,0}-v_i^0)}{\nu\lambda_i P},
\]
which, together with~(\ref{decay:disc}), gives
\[
v^\pi_{0,i}-\left(v^N_i + \overline{w_i}^k\right) =\left(q_i^N\left(1+\frac{1}{\nu\lambda_i P}\right)-\frac{1}{\nu\lambda_i P}\right) (v^\pi_{0,i}-v_{0,i}).
\]
With $s_i:=\nu\lambda_i P\in [s_1,s_{N_h}]\subset (0,\infty)$ we identify the reduction factor
\begin{equation}\label{disc:1}
\rho^N(s):=\left(1-\frac{s}{N+\theta s}\right)^N\left(1+\frac{1}{s}\right) -\frac{1}{s},
\end{equation}
which for $N\to \infty$ converges to the reduction rate $\rho(s)$ of the continuous case, see~(\ref{S5}).

\medskip\noindent \emph{(ii)} For estimating~(\ref{disc:1}) we consider two cases. First, let $0\le s\le N$. Then, it holds
\begin{equation}\label{disc:2}
0\le 1-\frac{s}{N}\le 1-\frac{s}{N+\theta s}\le 1-\frac{s}{N(1+\theta)}
\end{equation}
and hereby, we can estimate $\rho^N(s)$ to both sides by
\[
\begin{aligned}
\left(1-\frac{s}{N}\right)^N\left(1+\frac{1}{s}\right) -\frac{1}{s} &\le \rho^N(s) \le \left(1-\frac{s}{N(1+\theta)}\right)^N\left(1+\frac{1}{s}\right) -\frac{1}{s}\\
\Leftrightarrow\qquad \exp\left(-s\right)\left(1+\frac{1}{s}\right) -\frac{1}{s} &\le \rho^N(s) \le \exp\left(-\frac{s}{1+\theta}\right)\left(1+\frac{1}{s}\right) -\frac{1}{s}.
\end{aligned}
\]
The lower bound is $\rho(s)$, see Eq.~(\ref{S5}), with $-0.29<\rho(s)$ for all $s\ge 0$.
The upper bound takes its maximum at $s\to 0$
\[
\exp\left(-\frac{s}{1+\theta}\right)\left(1+\frac{1}{s}\right) -\frac{1}{s} \xrightarrow[s\to 0]{} \frac{\theta}{1+\theta}
\]
such that it holds
\[
-0.3 \le \rho^N(s) \le \frac{\theta}{1+\theta},\quad \forall 0\le s\le N.
\]
For $\theta=\theta_N$ and $N\ge 4$ this gives the bound
\[
\left|\rho^N(s)\right| \le 0.39\quad \forall 0\le s\le N.
\]
%
Next, let $s\ge N$. The function
\[
q(s):=1-\frac{s}{N+\theta s}
\]
is monotonically decreasing, such that for $s\ge N$ it holds
\[
1-\frac{1}{\theta}\le q(s)= 1-\frac{s}{N+\theta s} \le 1-\frac{1}{1+\theta} \quad\Rightarrow\quad |q(s)| \le \frac{1}{\theta}-1.
\]
Choosing $\theta=\theta_N$ and $N\ge 2$ gives
\[
|q(s)^N| \le \left|\frac{N-1}{N+1}\right|^N= \left|1-\frac{2}{N+1}\right|^N\le \exp(-2).
\]
Altogether, for $s\ge N$ with $N\ge 4$, we get the bound
\[
\left|\rho^N(s)\right| \le \exp(-2)\left(1+\frac{1}{N}\right) + \frac{1}{N} \le 1.25 \exp(-2)+0.25 \le 0.42.
\]
\end{proof}

\begin{remark}
Numerical results show that this estimate is not sharp; convergence rates close to $0.3$ are observed, which is the reduction rate in the continuous setting. The specific choice of
\[
\theta_N = \frac{1}{2}+\frac{1}{2N}
\]
corresponds to a slightly shifted version of the Crank-Nicolson scheme. It is of second order and has improved stability properties. The choice of $\theta_N$ corresponds to $\theta = \frac{1}{2}+\frac{k}{2P}$ and it can be generalized to $\theta=\frac{1}{2}+\alpha k$ for $\alpha>0$.
%
In the numerical test cases we observe no problems with the standard Crank-Nicolson scheme $\theta=1/2$.
\end{remark}

\section{Numerical test cases}\label{sec:num}

We discuss different numerical test cases to highlight the efficiency and robustness of the averaging scheme for the computation of cyclic states. We directly consider the Navier-Stokes equations but include problems at very low Reynolds numbers.
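Before turning to the PDE test cases, the constants in Lemma~\ref{lemma:ode} and Lemma~\ref{lemma:ode:disc} can be confirmed by directly sampling the reduction factors on a fine grid; the grid range below is chosen arbitrarily.

```python
import numpy as np

# Sample the continuous and discrete reduction factors rho(s) and
# rho^N(s) (with theta = theta_N) to confirm the bounds 0.3 and 0.42.
s = np.linspace(1e-4, 200.0, 400001)

rho = np.exp(-s) * (1 + 1 / s) - 1 / s
max_rho = np.abs(rho).max()        # ~0.298, attained near s ~ 1.8

def rho_N(s, N):
    theta = 0.5 + 0.5 / N          # shifted Crank-Nicolson parameter
    return (1 - s / (N + theta * s)) ** N * (1 + 1 / s) - 1 / s

max_rho_N = {N: np.abs(rho_N(s, N)).max() for N in (4, 8, 16, 64)}
print(max_rho, max_rho_N)
```

On this grid the discrete maxima stay below $0.42$ and approach the continuous value of roughly $0.298$ as $N$ grows, in line with the remark that the bound $0.42$ is not sharp.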
The linear Stokes problem gives comparable results. Here, perfect robustness of the averaging scheme under arbitrary variations of the viscosity, the domain size, the velocity, etc. is found. Before presenting the specific test cases we briefly describe the computational setting implemented in the finite element software library \emph{Gascoigne 3D}~\cite{Gascoigne3D}: Discretization (in 2d) is based on quadrilateral meshes. To cope with the saddle-point structure we utilize stabilized quadratic equal-order elements. Stabilization is based on the local projection scheme~\cite{BeckerBraack2001}. As the appearing Reynolds numbers are very moderate, we do not require any stabilization of convective terms. Nonlinear problems are approximated with a Newton scheme using analytic Jacobians. The equal-order setup allows us to use an efficient geometric multigrid solver for all linear problems, see~\cite{BeckerBraack2000a} for the general setup and~\cite{KimmritzRichter2010} for the efficient implementation. Although the analysis shows superior robustness for a shifted version of the Crank-Nicolson scheme with $\theta>1/2$, we do not observe any difficulties with the choice $\theta=1/2$, which will be used throughout this section.

In~\cite{RichterMizerski2020}, an application of the averaging scheme to the efficient simulation of temporal multiscale problems is given, including a study of a three dimensional test case. Those results are in perfect agreement with the following two dimensional cases.

\subsection{Robustness of the averaging scheme}\label{sec:num:stokes}

For $L\in\mathds{R}_+$ let $\Omega = (-L,L)^2$ and $I=[0,P]$ for a given $P\in\mathds{R}_+$.
We solve \[ \begin{aligned} \nabla\cdot\mathbf{v} =0,\quad \partial_t \mathbf{v}+(\mathbf{v}\cdot\nabla)\mathbf{v} -\nu \Delta \mathbf{v} + \nabla p &= \mathbf{f}&& \text{ in }I\times \Omega\\ \mathbf{v}=\mathbf{v}_0\text{ on } \{0\}\times \Omega,\quad \mathbf{v}&=0&&\text{ on } I\times \partial\Omega, \end{aligned} \] and try to identify a time-periodic solution $\mathbf{v}(P)=\mathbf{v}(0)$. The forcing $\mathbf{f}$ is $P$-periodic and given by \[ \mathbf{f}(x,y,t) = \frac{\tanh\big(y\big)}{LP} \sin\left(\frac{2\pi t}{P}\right)\begin{pmatrix}1\\ 0 \end{pmatrix}. \] \begin{figure}[t] \setlength{\figureheight}{0.3\textwidth} \setlength{\figurewidth}{0.4\textwidth} \begin{center} \begin{tabular}{cc} \parbox{0.48\textwidth} \begin{tikzpicture} \begin{semilogyaxis}[ width=\figurewidth, height=\figureheight, scale only axis, x label style={anchor=north, below=-12mm}, xlabel=cycles, mark options={solid}, title style={font=\bfseries}, legend style={at={(\figurewidth,0.95)},xshift=-0.2cm,anchor=north east,nodes=right}] \addplot[color=blue,solid,line width=0.5mm] table[row sep=crcr]{ 2 0.0497606 \\ 3 0.0293591 \\ 4 0.00358413 \\ 5 0.00100257 \\ 6 0.000283001 \\ 7 8.00906e-05 \\ 8 2.28839e-05 \\ 9 6.56557e-06 \\ 10 1.88392e-06 \\ 11 5.41209e-07 \\ 12 1.55537e-07 \\ 13 4.46971e-08 \\ 14 1.28441e-08 \\ 15 3.69068e-09 \\ }; \addlegendentry{$L=1$} \addplot[solid,line width=0.5mm] table[row sep=crcr]{ 2 0.040718 \\ 3 0.0396963 \\ 4 0.0024437 \\ 5 0.000657732 \\ 6 0.000184053 \\ 7 5.26298e-05 \\ 8 1.4954e-05 \\ 9 4.21247e-06 \\ 10 1.18558e-06 \\ 11 3.29752e-07 \\ 12 9.18535e-08 \\ 13 2.53591e-08 \\ 14 7.00737e-09 \\ }; \addlegendentry{$L=2$} \addplot[solid,color=red,line width=0.5mm] table[row sep=crcr]{ 2 0.0276215 \\ 3 0.0228166 \\ 4 0.00477496 \\ 5 0.00111102 \\ 6 0.000161514 \\ 7 4.17968e-05 \\ 8 1.1146e-05 \\ 9 3.03335e-06 \\ 10 8.33905e-07 \\ 11 2.33751e-07 \\ 12 6.5562e-08 \\ 13 1.83917e-08 \\ 14 5.16003e-09 \\ }; \addlegendentry{$L=4$} \addplot[mark=*,color=blue,line 
width=0.2mm,mark size=0.5mm] table[row sep=crcr]{ 2 0.0497606 \\ 3 0.0110588 \\ 4 0.00299763 \\ 5 0.000810536 \\ 6 0.000219065 \\ 7 5.92054e-05 \\ 8 1.6001e-05 \\ 9 4.32449e-06 \\ 10 1.16875e-06 \\ 11 3.15871e-07 \\ 12 8.53682e-08 \\ 13 2.30719e-08 \\ 14 6.23548e-09 \\ }; \addlegendentry{$L=1$} \addplot[mark=+,mark size=0.5mm,line width=0.2mm] table[row sep=crcr]{ 2 0.040718 \\ 3 0.0124829 \\ 4 0.00731993 \\ 5 0.00496711 \\ 6 0.0035285 \\ 7 0.00253523 \\ 8 0.00182434 \\ 9 0.00131688 \\ 10 0.000950305 \\ 11 0.000685452 \\ 12 0.000494303 \\ 13 0.000356425 \\ 14 0.000256996 \\ 15 0.000185298 \\ 16 0.000133599 \\ 17 9.63241e-05 \\ 18 6.94486e-05 \\ 19 5.00716e-05 \\ 20 3.61009e-05 \\ 21 2.60283e-05 \\ 22 1.8766e-05 \\ 23 1.353e-05 \\ 24 9.75496e-06 \\ 25 7.03319e-06 \\ 26 5.07083e-06 \\ 27 3.656e-06 \\ 28 2.63592e-06 \\ 29 1.90046e-06 \\ 30 1.37021e-06 \\ 31 9.87899e-07 \\ 32 7.12261e-07 \\ 33 5.1353e-07 \\ 34 3.70248e-07 \\ 35 2.66943e-07 \\ 36 1.92462e-07 \\ 37 1.38763e-07 \\ 38 1.00046e-07 \\ 39 7.21316e-08 \\ 40 5.20059e-08 \\ 41 3.74955e-08 \\ 42 2.70337e-08 \\ 43 1.94909e-08 \\ 44 1.40527e-08 \\ 45 1.01318e-08 \\ 46 7.30487e-09 \\ }; \addlegendentry{$L=2$} \addplot[mark=diamond,mark size=0.5mm,line width=0.2mm,color=red] table[row sep=crcr]{ 2 0.0276215 \\ 3 0.00870959 \\ 4 0.00505705 \\ 5 0.00362721 \\ 6 0.00276177 \\ 7 0.00227139 \\ 8 0.00189834 \\ 9 0.00162561 \\ 10 0.00142849 \\ 11 0.00126236 \\ 12 0.00112151 \\ 13 0.00101213 \\ 14 0.000918037 \\ 15 0.00083425 \\ 16 0.000759451 \\ 17 0.000692535 \\ 18 0.000636588 \\ 19 0.000585304 \\ 20 0.000538287 \\ 21 0.000495167 \\ 22 0.000455602 \\ 23 0.000419283 \\ 24 0.000385928 \\ 25 0.000355282 \\ 26 0.000327115 \\ 27 0.000301215 \\ 28 0.00027749 \\ 29 0.000255762 \\ 30 0.000235725 \\ 31 0.00021725 \\ 32 0.000200217 \\ 33 0.000184515 \\ 34 0.000170041 \\ 35 0.000156699 \\ 36 0.000144403 \\ 37 0.00013307 \\ 38 0.000122625 \\ 39 0.000112999 \\ 40 0.000104129 \\ 41 9.59537e-05 \\ 42 8.84203e-05 \\ 43 8.14787e-05 \\ 44 
7.50836e-05 \\ 45 6.919e-05 \\ 46 6.37586e-05 \\ 47 5.87533e-05 \\ 48 5.41407e-05 \\ 49 4.989e-05 \\ 50 4.5973e-05 \\ }; \addlegendentry{$L=4$} \end{semilogyaxis} \end{tikzpicture}} & \parbox{0.48\textwidth} \begin{tikzpicture} \begin{semilogyaxis}[ width=\figurewidth, height=\figureheight, scale only axis, x label style={anchor=north, below=-12mm}, xlabel=cycles, mark options={solid}, title style={font=\bfseries}, legend style={at={(\figurewidth,0.95)},xshift=-0.2cm,anchor=north east,nodes=right}] \addplot[color=blue,solid,line width=0.5mm] table[row sep=crcr]{ 2 0.0403474 \\ 3 0.00512544 \\ 4 0.000550745 \\ 5 7.3483e-05 \\ 6 9.52856e-06 \\ 7 1.25059e-06 \\ 8 1.63562e-07 \\ 9 2.1416e-08 \\ 10 2.80313e-09 \\ }; \addlegendentry{$P=4$} \addplot[solid,line width=0.5mm] table[row sep=crcr]{ 2 0.0403596 \\ 3 0.0159469 \\ 4 0.000616497 \\ 5 9.7356e-05 \\ 6 1.19473e-05 \\ 7 2.03077e-06 \\ 8 2.80606e-07 \\ 9 4.51702e-08 \\ 10 6.63728e-09 \\ }; \addlegendentry{$P=2$} \addplot[solid,color=red,line width=0.5mm] table[row sep=crcr]{ 2 0.040718 \\ 3 0.0396963 \\ 4 0.0024437 \\ 5 0.000657732 \\ 6 0.000184053 \\ 7 5.26298e-05 \\ 8 1.4954e-05 \\ 9 4.21247e-06 \\ 10 1.18558e-06 \\ 11 3.29752e-07 \\ 12 9.18535e-08 \\ 13 2.53591e-08 \\ 14 7.00737e-09 \\ }; \addlegendentry{$P=1$} \addplot[mark=*,color=blue,line width=0.2mm,mark size=0.5mm] table[row sep=crcr]{ 2 0.0403474 \\ 3 0.00872483 \\ 4 0.00236091 \\ 5 0.000638271 \\ 6 0.000172501 \\ 7 4.66196e-05 \\ 8 1.25992e-05 \\ 9 3.40501e-06 \\ 10 9.20224e-07 \\ 11 2.48696e-07 \\ 12 6.72115e-08 \\ 13 1.81643e-08 \\ 14 4.90901e-09 \\ }; \addlegendentry{$P=4$} \addplot[mark=+,mark size=0.5mm,line width=0.2mm] table[row sep=crcr]{ 2 0.0403596 \\ 3 0.0120208 \\ 4 0.00598796 \\ 5 0.00310265 \\ 6 0.00161608 \\ 7 0.000840482 \\ 8 0.000436977 \\ 9 0.000227165 \\ 10 0.000118089 \\ 11 6.13871e-05 \\ 12 3.19111e-05 \\ 13 1.65885e-05 \\ 14 8.62328e-06 \\ 15 4.48268e-06 \\ 16 2.33025e-06 \\ 17 1.21134e-06 \\ 18 6.29699e-07 \\ 19 3.27339e-07 \\ 20 
1.70162e-07 \\ 21 8.84562e-08 \\ 22 4.59826e-08 \\ 23 2.39033e-08 \\ 24 1.24258e-08 \\ 25 6.45934e-09 \\ }; \addlegendentry{$P=2$} \addplot[mark=diamond,mark size=0.5mm,line width=0.2mm,color=red] table[row sep=crcr]{ 2 0.040718 \\ 3 0.0124829 \\ 4 0.00731993 \\ 5 0.00496711 \\ 6 0.0035285 \\ 7 0.00253523 \\ 8 0.00182434 \\ 9 0.00131688 \\ 10 0.000950305 \\ 11 0.000685452 \\ 12 0.000494303 \\ 13 0.000356425 \\ 14 0.000256996 \\ 15 0.000185298 \\ 16 0.000133599 \\ 17 9.63241e-05 \\ 18 6.94486e-05 \\ 19 5.00716e-05 \\ 20 3.61009e-05 \\ 21 2.60283e-05 \\ 22 1.8766e-05 \\ 23 1.353e-05 \\ 24 9.75496e-06 \\ 25 7.03319e-06 \\ 26 5.07083e-06 \\ 27 3.656e-06 \\ 28 2.63592e-06 \\ 29 1.90046e-06 \\ 30 1.37021e-06 \\ 31 9.87899e-07 \\ 32 7.12261e-07 \\ 33 5.1353e-07 \\ 34 3.70248e-07 \\ 35 2.66943e-07 \\ 36 1.92462e-07 \\ 37 1.38763e-07 \\ 38 1.00046e-07 \\ 39 7.21316e-08 \\ 40 5.20059e-08 \\ 41 3.74955e-08 \\ 42 2.70337e-08 \\ 43 1.94909e-08 \\ 44 1.40527e-08 \\ 45 1.01318e-08 \\ 46 7.30487e-09 \\ }; \addlegendentry{$P=1$} \end{semilogyaxis} \end{tikzpicture}} \\ \parbox{0.48\textwidth} \begin{tikzpicture} \begin{semilogyaxis}[ width=\figurewidth, height=\figureheight, scale only axis, x label style={anchor=north, below=-12mm}, xlabel=cycles, mark options={solid}, title style={font=\bfseries}, legend style={at={(\figurewidth,0.95)},xshift=-0.2cm,anchor=north east,nodes=right}] \addplot[color=blue,solid,line width=0.5mm] table[row sep=crcr]{ 1 0.0403662 2 0.0192276 \\ 3 0.00290778 \\ 4 0.000810289 \\ 5 0.000228534 \\ 6 6.4629e-05 \\ 7 1.83877e-05 \\ 8 5.27484e-06 \\ 9 1.51367e-06 \\ 10 4.34493e-07 \\ 11 1.24758e-07 \\ 12 3.58313e-08 \\ 13 1.02932e-08 \\ 14 2.95828e-09 \\ }; \addlegendentry{$\nu=0.4$} \addplot[solid,line width=0.5mm] table[row sep=crcr]{ 1 0.0407366 \\ 2 0.039687 \\ 3 0.00244442 \\ 4 0.000658136 \\ 5 0.000184287 \\ 6 5.2731e-05 \\ 7 1.49925e-05 \\ 8 4.22616e-06 \\ 9 1.19021e-06 \\ 10 3.31274e-07 \\ 11 9.23423e-08 \\ 12 2.55124e-08 \\ 13 7.05465e-09 \\ }; 
\addlegendentry{$\nu=0.1$} \addplot[solid,color=red,line width=0.5mm] table[row sep=crcr]{ 1 0.0391445 2 0.0424874 \\ 3 0.00351014 \\ 4 0.00492496 \\ 5 0.000425011 \\ 6 0.000155808 \\ 7 3.34941e-05 \\ 8 1.17075e-05 \\ 9 2.97373e-06 \\ 10 9.19202e-07 \\ 11 2.08771e-07 \\ 12 6.20127e-08 \\ 13 1.72378e-08 \\ 14 6.08189e-09 \\ }; \addlegendentry{$\nu=0.025$} \addplot[mark=*,color=blue,line width=0.2mm,mark size=0.5mm] table[row sep=crcr]{ 2 0.0403662 \\ 3 0.00872641 \\ 4 0.00236024 \\ 5 0.000637834 \\ 6 0.000172317 \\ 7 4.65524e-05 \\ 8 1.25764e-05 \\ 9 3.39757e-06 \\ 10 9.17871e-07 \\ 11 2.47967e-07 \\ 12 6.69896e-08 \\ 13 1.80976e-08 \\ 14 4.88915e-09 \\ }; \addlegendentry{$\nu=0.4$} \addplot[mark=+,mark size=0.5mm,line width=0.2mm] table[row sep=crcr]{ 2 0.0407366 \\ 3 0.0124839 \\ 4 0.00732031 \\ 5 0.00496733 \\ 6 0.00352864 \\ 7 0.00253527 \\ 8 0.00182431 \\ 9 0.00131684 \\ 10 0.000950246 \\ 11 0.00068539 \\ 12 0.000494245 \\ 13 0.000356374 \\ 14 0.000256953 \\ 15 0.000185261 \\ 16 0.000133569 \\ 17 9.62999e-05 \\ 18 6.94293e-05 \\ 19 5.00563e-05 \\ 20 3.6089e-05 \\ 21 2.60189e-05 \\ 22 1.87588e-05 \\ 23 1.35245e-05 \\ 24 9.75068e-06 \\ 25 7.02992e-06 \\ 26 5.06833e-06 \\ 27 3.6541e-06 \\ 28 2.63448e-06 \\ 29 1.89937e-06 \\ 30 1.36938e-06 \\ 31 9.8728e-07 \\ 32 7.11796e-07 \\ 33 5.13181e-07 \\ 34 3.69986e-07 \\ 35 2.66748e-07 \\ 36 1.92316e-07 \\ 37 1.38654e-07 \\ 38 9.99645e-08 \\ 39 7.20711e-08 \\ 40 5.19608e-08 \\ 41 3.7462e-08 \\ 42 2.70089e-08 \\ 43 1.94725e-08 \\ 44 1.4039e-08 \\ 45 1.01217e-08 \\ 46 7.29737e-09 \\ }; \addlegendentry{$\nu=0.1$} \addplot[mark=diamond,mark size=0.5mm,line width=0.2mm,color=red] table[row sep=crcr]{ 2 0.0391445 \\ 3 0.0124471 \\ 4 0.00735842 \\ 5 0.00530438 \\ 6 0.00410882 \\ 7 0.0033978 \\ 8 0.00285528 \\ 9 0.00247513 \\ 10 0.00218369 \\ 11 0.00193666 \\ 12 0.00172894 \\ 13 0.00156918 \\ 14 0.00142644 \\ 15 0.00129879 \\ 16 0.00118438 \\ 17 0.00108546 \\ 18 0.000998977 \\ 19 0.000919474 \\ 20 0.000846397 \\ 21 0.000779223 \\ 
22 0.000717465 \\ 23 0.000660673 \\ 24 0.000608435 \\ 25 0.000560376 \\ 26 0.000516152 \\ 27 0.000475448 \\ 28 0.000438172 \\ 29 0.000403993 \\ 30 0.000372506 \\ 31 0.000343439 \\ 32 0.000316613 \\ 33 0.000291862 \\ 34 0.000269029 \\ 35 0.00024797 \\ 36 0.00022855 \\ 37 0.000210643 \\ 38 0.000194134 \\ 39 0.000178914 \\ 40 0.000164884 \\ 41 0.000151952 \\ 42 0.000140032 \\ 43 0.000129045 \\ 44 0.000118919 \\ 45 0.000109587 \\ 46 0.000100987 \\ 47 9.30606e-05 \\ 48 8.57562e-05 \\ 49 7.90247e-05 \\ 50 7.28214e-05 \\ }; \addlegendentry{$\nu=0.025$} \end{semilogyaxis} \end{tikzpicture}} & \parbox{0.4\textwidth}{ \textbf{bold solid lines:} averaging scheme\\ \textbf{lines with marks:} forward simulation\\ Number of cycles $n$ required to reduce the periodicity error to \[ \|\mathbf{v}\big(nP\big)-\mathbf{v}\big((n-1)P\big)\|<10^{-8} \] or 50 cycles exceeded. Parameters that are not varied are set to $L=2$, $\nu=0.1$, $P=1$. } \end{tabular} \end{center} \caption{Robustness of the forward simulation and the averaging scheme with respect to different parameters: the domain size $L$ (top/left), the period $P$ (top/right), the viscosity $\nu$ (bottom/left).
No effects of variations in the mesh size $h$ or time step size $k=P/N$ are observed.} \label{fig:stokes} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{cc|cc|cc} \toprule \multicolumn{6}{l}{Forward Simulation}\\ \midrule $L$ & $\sigma$ &$P$ & $\sigma$ & $\nu$& $\sigma$ \\ \midrule 1& 0.27 &1 & 0.72 & 0.1 & 0.72 \\ 2& 0.72 &2 & 0.52 & 0.05 & 0.85 \\ 4& 0.92 &4 & 0.27 & 0.025& 0.92 \\ \bottomrule \end{tabular} \hspace{1cm} % \begin{tabular}{cc|cc|cc} \toprule \multicolumn{6}{l}{Averaging Scheme}\\\midrule $L$ & $\sigma$ &$P$ & $\sigma$ & $\nu$& $\sigma$ \\ \midrule 1& 0.29 & 1 & 0.28 & 0.1 &0.28 \\ 2& 0.28 & 2 & 0.15 & 0.05& 0.29 \\ 4& 0.28 & 4 & 0.13 & 0.025 &0.29 \\ \bottomrule \end{tabular} \end{center} \caption{Convergence rate $\sigma = \|\mathbf{v}^{(l)}_0-\mathbf{v}^{(l-1)}_0\|/ \|\mathbf{v}^{(l-1)}_0-\mathbf{v}^{(l-2)}_0\|$. Left: forward simulation. Right: averaging scheme. Variation of the domain size $L$, the period $P$ and the viscosity $\nu$. } \label{tab:1} \end{table} We start by analyzing the dependency of the two approaches ``forward simulation'' \textbf{(F)} and ``averaging scheme'' \textbf{(A)} on the domain size $L$, which enters the Poincar\'e constant and the smallest eigenvalue, on the viscosity $\nu$, which directly influences the reduction rate~(\ref{S5}), and on the period length $P$. The results are shown in Fig.~\ref{fig:stokes}. We also tested the robustness with respect to changes in the temporal step size $k=P/N$ and the mesh size $h$. These parameters, however, had no influence on the iteration counts, neither for the forward simulation nor for the averaging scheme. Enlarging the domain has a dramatic effect on the forward simulation \textbf{(F)}, as shown in the upper/left plot of Figure~\ref{fig:stokes}: the rate of convergence deteriorates strongly. For $L=4$ or $L=8$ the required tolerance of $10^{-8}$ could not be reached within the allowed 50 cycles.
In contrast, the averaging scheme~\textbf{(A)} is very robust. We see only a slight dependency of its convergence rate on the parameter $P$. In Table~\ref{tab:1} we report the convergence rates of the forward simulation and the averaging scheme as functions of the parameters, i.e., the experimental reduction rate $\sigma$ defined by \[ \sigma^{(l)}:=\frac{\|\mathbf{v}^{(l)}_0-\mathbf{v}^{(l-1)}_0\|}{ \|\mathbf{v}^{(l-1)}_0-\mathbf{v}^{(l-2)}_0\|}. \] The rate of the averaging scheme is very robust. All test cases show $\sigma<0.29$, which is the limit established for the continuous case in Lemma~\ref{lemma:ode}. In contrast, the simple forward iteration shows deteriorating convergence rates for smaller viscosities and period lengths and for larger domains, and behaves like \[ \sigma = {\cal O}\left(\exp\left(-\frac{\nu P}{L^2}\right)\right), \] as expected from the theoretical analysis, see~(\ref{perioddecay}). The results for the averaging scheme suggest that the analysis of the discrete case in Lemma~\ref{lemma:ode:disc} is not sharp. The bound $0.29$ for the convergence rate identified in the continuous case in Lemma~\ref{lemma:ode} also appears to be valid in the discrete setting. \subsection{Robustness with regard to the Reynolds number}\label{sec:num:ns} As a second test case we study the Navier-Stokes flow in an annulus with outer radius $R=5$ and inner radius $r=0.5$, see Fig.~\ref{fig:ns:1}. On both rings we drive the flow by a Dirichlet condition. While the outer ring is rotating, an inflow/outflow condition with an oscillating direction is prescribed on the inner ring. The maximum velocity on the boundary reaches $|\mathbf{v}|=0.5$. The period is chosen as $P=1$ and we use $N=20$ time steps for each cycle. Biquadratic finite elements on an isoparametric mesh with a total of $N_{dofs} = 49\,920$ degrees of freedom are used.
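The experimental reduction rate $\sigma^{(l)}$ reported in Table~\ref{tab:1} is computed from three consecutive iterates. A minimal Python sketch on a synthetic geometric iteration (the iterate values are artificial stand-ins for the quantities $\mathbf{v}^{(l)}_0$, not data from the paper):

```python
def reduction_rates(iterates):
    """Experimental reduction rates sigma^(l) = |v_l - v_{l-1}| / |v_{l-1} - v_{l-2}|
    for a sequence of scalar iterates (stand-ins for the norms of the updates)."""
    diffs = [abs(b - a) for a, b in zip(iterates, iterates[1:])]
    return [d1 / d0 for d0, d1 in zip(diffs, diffs[1:])]

# A geometrically converging iteration v_l = v* + c * sigma^l has constant rate sigma.
sigma = 0.28
iterates = [1.0 + 0.5 * sigma**l for l in range(8)]
rates = reduction_rates(iterates)  # every entry is close to 0.28
```

For a real simulation one would feed in the cycle-to-cycle norms $\|\mathbf{v}^{(l)}_0-\mathbf{v}^{(l-1)}_0\|$ instead of the synthetic scalars.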
As in the previous test case, we could not identify any dependency of the convergence rates on the temporal step size $k=1/N$ or the number of spatial unknowns $N_{dofs}$. In Figure~\ref{fig:ns:2} we show the required number of cycles for different viscosities $\nu$ in order to investigate the influence of the nonlinearity. With $|\mathbf{v}|=1/2$ and $\operatorname{diam}(\Omega)=10$ we compute the Reynolds number as \[ Re = \frac{|\mathbf{v}|\operatorname{diam}(\Omega)}{\nu} = \frac{5}{\nu}. \] Variation of the viscosity from $\nu=1$ to $\nu\approx 0.0078$ corresponds to a range of Reynolds numbers from $Re=5$ to $Re=640$. For smaller viscosities we could not identify any periodic solution. For the largest viscosity, $\nu=1$, the averaging scheme \textbf{(A)} brings no benefit. \begin{figure}[t] \begin{center} \begin{minipage}{0.26\textwidth} \includegraphics[width=\textwidth]{ns-config.pdf} \end{minipage} \begin{minipage}{0.56\textwidth}\small \[ \begin{aligned} \mathbf{v}_r &= \frac{1}{2} \begin{pmatrix} \cos(2\pi t/P)\\ \sin(2\pi t/P) \end{pmatrix}\\ \\ \mathbf{v}_R &= \frac{\sin(2\pi t/P)}{2 R} \begin{pmatrix} -y\\ x \end{pmatrix}\\ \\ r&=\frac{1}{2},\quad R=5,\quad P=1\\ \end{aligned} \] \end{minipage} \end{center} \caption{Configuration of the second Navier-Stokes test case. The flow is driven by periodic Dirichlet conditions on both boundaries, the inner circle $\Gamma_r$ with radius $r=0.5$ and the outer circle $\Gamma_R$ with radius $R=5$. } \label{fig:ns:1} \end{figure} Increasing the Reynolds number causes a significant increase of the required iterations for the forward simulation \textbf{(F)}, while the number of iterations for the averaging scheme \textbf{(A)} stays almost constant until we leave the regime of periodic solutions. In between, the savings are substantial.
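The viscosity sweep of this experiment translates into Reynolds numbers as follows; a short sketch (note that the stated value $5/\nu$ corresponds to taking the diameter $2R=10$ as the characteristic length, with $|\mathbf{v}|=1/2$):

```python
# Reynolds numbers for the viscosity sweep of the annulus test case:
# boundary velocity |v| = 1/2, characteristic length diam(Omega) = 2R = 10.
v_max, diam = 0.5, 10.0
viscosities = [2.0 ** (-k) for k in range(8)]       # nu = 1, 1/2, ..., 1/128
reynolds = [v_max * diam / nu for nu in viscosities]
# -> [5.0, 10.0, 20.0, 40.0, 80.0, 160.0, 320.0, 640.0]
```

These are exactly the Reynolds numbers tabulated in Figure~\ref{fig:ns:2}.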
\begin{figure}[t] \begin{center} \begin{minipage}{0.6\textwidth} \setlength{\figureheight}{0.5\textwidth} \setlength{\figurewidth}{0.8\textwidth} \begin{tikzpicture} \begin{semilogxaxis}[ width=\figurewidth, height=\figureheight, scale only axis, x label style={anchor=north, below=-12mm}, xlabel={Reynolds number}, ylabel=cycles, mark options={solid}, title style={font=\bfseries}, ymin=-30, legend style={at={(0,0.95)},xshift=0.2cm,anchor=north west,nodes=right}] \addplot[color=black,mark=*,solid,line width=0.5mm] table[row sep=crcr]{ 5 22 \\ 10 21 \\ 20 27 \\ 40 43 \\ 80 73 \\ 160 122\\ 320 198\\ 640 295\\ }; \addlegendentry{forward scheme \textbf{(F)}} \addplot[color=black,mark=triangle,solid,line width=0.5mm] table[row sep=crcr]{ 5 22\\ 10 20\\ 20 16\\ 40 11\\ 80 8\\ 160 8\\ 320 10\\ 640 15\\ }; \addlegendentry{averaging scheme \textbf{(A)}} \end{semilogxaxis} \end{tikzpicture} \end{minipage}\hspace{0.05\textwidth} \begin{minipage}{0.29\textwidth}\small ~\vspace{0.4cm} \begin{tabular}{cccc} \toprule $\nu$ & $Re$ & \textbf{(F)} & \textbf{(A)}\\ \midrule $2^{-0}$&5&22 &22\\ $2^{-1}$&10&21 &20\\ $2^{-2}$&20&27 &16\\ $2^{-3}$&40&43 &11\\ $2^{-4}$&80&73 &8\\ $2^{-5}$&160&122&8\\ $2^{-6}$&320&198&10\\ $2^{-7}$&640&295&15\\ \bottomrule \end{tabular} \end{minipage} \end{center} \caption{Required number of cycles for the averaging scheme and the forward iteration to reach the error tolerance $10^{-8}$, depending on the Reynolds number $Re$. } \label{fig:ns:2} \end{figure} \section{Conclusion}\label{sec:conclusion} We have presented an acceleration scheme for the computation of periodic solutions to the Stokes and Navier-Stokes equations. In the linear Stokes case we could show robust convergence of the scheme with a rate that does not depend on the problem parameters or the eigenvalues of the Stokes operator. Applied to the Navier-Stokes equations, numerical tests indicate the same robustness and efficiency of the algorithm.
The only numerical overhead of the proposed algorithm is the computation of one stationary averaged problem in every cycle of the dynamic process. Depending on the problem data, which strongly affects the decay rate of direct forward simulations, we obtain a significant speed-up. The proof of convergence is based on the linearity and symmetry of the Stokes operator. The extension to the nonlinear Navier-Stokes equations will call for a different approach. An interesting but open extension of the averaging scheme is the application to problems with unknown periodicity. A possible application is the laminar vortex shedding of the flow around an obstacle. Here, predictions of the frequency are available via the Strouhal number; the exact value, however, depends on the specific configuration, in particular on the geometry. To tackle such problems we aim at combining the averaging scheme for obtaining initial values $\mathbf{v}_0$ with an optimization approach to identify the period length $P$. The difficulty of such settings is the higher Reynolds number that is usually involved: there is only a small regime where the solution is nonstationary with a clear periodic-in-time behavior. Finally, the proposed scheme makes it possible to accelerate several problems where the computation of cyclic states is an algorithmic sub-task, such as temporal multi-scale problems~\cite{FreiRichterWick2016,FreiRichter2019,RichterMizerski2020}. \section*{Acknowledgement} The author acknowledges support by the Federal Ministry of Education and Research of Germany (project number 05M16NMA), as well as funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 314838170, GRK 2297 MathCoRe and project 411046898.
\section{Path Probability Importance Weights} \label{sec:ppiw} Given the \textit{Probability Density Function} (PDF) $p_a$ of each sampler $a$ and the number of samples $n_a$ drawn from it, the \textit{Balance Heuristic} is \begin{equation} w_a = \frac{n_a p_a}{\sum_b n_b p_b} = \frac{1}{1+\sum_{b\neq a} \frac{n_b p_b}{n_a p_a}}.\label{eq:weight} \end{equation} According to Elvira et al. \cite{elvira_generalized_2017} it is the best known strategy for sampling a mixture of PDFs. Also, Veach \cite{veach_optimally_1995} states that it is the optimal choice when sampling PDF mixtures with random decisions between PDFs (one-sample model). To obtain a variance-optimal combination, it is also necessary to choose the number of samples $n_a$ optimally (multi-sample model) and then use a weighting without the $n_a$ \cite{sbert_variance_2016}. The second form of Eq. \eqref{eq:weight} is used in practice to avoid computing the numerically challenging probability sums. However, in light transport simulation we do not know the PDF $p_b$ of all possible samplers when sampling a vertex of a path. The difficulty is that paths from the adjoint quantity (importance $\leftrightarrow$ light) depend on the scene globally. That is, the PDF of paths which randomly reach the current vertex is not given by local properties. A solution was given by Veach \cite{veach_bidirectional_1995} who used the probability density per unit area instead. For each segment of the path this measure is \begin{equation} p(\mvec{x}_i \rightarrow \mvec{x}_{i\addone}) = \frac{\pdfi{i} \cdot \vert\langle\mvec{n}_{i\addone},\dirr{i}\rangle\vert}{\lVert\mvec{x}_i - \mvec{x}_{i\addone}\rVert^2} \label{eq:psegment} \end{equation} where $\pdfi{i}$ is the sampling PDF at the source vertex, $\langle\mvec{n}_{i\addone},\dirr{i}\rangle$ is the $\cos\theta$ at the target vertex and $\lVert\mvec{x}_i - \mvec{x}_{i\addone}\rVert^2$ is the squared distance between the two surfaces.
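The balance heuristic of Eq.~\eqref{eq:weight} can be evaluated directly from sampler PDFs and sample counts; a minimal Python sketch (the PDF values and counts are made-up toy numbers, not tied to any renderer):

```python
def balance_heuristic(pdfs, counts):
    """Balance-heuristic weights w_a = n_a * p_a / sum_b n_b * p_b,
    where pdfs[a] is the path PDF of sampler a and counts[a] its sample count."""
    total = sum(n * p for n, p in zip(counts, pdfs))
    return [n * p / total for n, p in zip(counts, pdfs)]

# Two samplers, one sample each: weights are proportional to the PDFs
# and always sum to one (partition of unity).
w = balance_heuristic([0.3, 0.1], [1, 1])  # -> approximately [0.75, 0.25]
```

In production code one would rather use the second form of Eq.~\eqref{eq:weight}, accumulating the ratios $n_b p_b / (n_a p_a)$ incrementally instead of the full sums.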
Analogously, the probability in the inverse direction $p(\mvec{x}_i \leftarrow \mvec{x}_{i\addone})$ is defined by inverting all directional quantities on the right side and replacing the indices of $\pdfi{}$ and $\mvec{n}$. The probability of a path $\mathcal{X}$ is given by the product of all its segment probabilities and the probabilities of sampling the first and the last vertex. \begin{equation} p(\mathcal{X}) = p(\mvec{x}_0) \cdot \!\prod_{i=0}^{s\subtwo} p(\mvec{x}_i \rightarrow \mvec{x}_{i\addone}) \cdot \!\!\!\!\prod_{i=s}^{s+t\subtwo}\!\! p(\mvec{x}_i \leftarrow \mvec{x}_{i\addone}) \cdot p(\mvec{x}_{s+t\subone}).\label{eq:ppath} \end{equation} Whenever two sub-paths, one from the observer and one from the light source, are connected, the probability measure for that sampler is the product of the two parts as defined by Eq. \eqref{eq:ppath}. The measures for all other possible samplers are obtained by replacing forward and backward direction for the other path segments recursively. In the unidirectional case either the left or the right half of the product vanishes, including the sampling probability of the end vertex. \begin{figure} \input{figures/pathprob.tex} \end{figure} Figure \ref{fig:pathprob} shows some of the possibilities for sampling a certain path. For example, to obtain the probability of the second path from that of the first, one must multiply by \begin{equation*} \frac{p(\mvec{x}_{2}\leftarrow\mvec{x}_{3})}{p(\mvec{x}_{1}\rightarrow\mvec{x}_{2})}. \end{equation*} Hence, the ratio between two path probabilities has the form \begin{equation} \frac{p(\mathcal{X}_b)}{p(\mathcal{X}_a)} = \prod_{k=b}^{a\subone} \frac{p(\mvec{x}_k\leftarrow\mvec{x}_{k\addone})}{p(\mvec{x}_{k\subone}\rightarrow\mvec{x}_k)}\label{eq:propRatio} \end{equation} where $k$ ranges from the end vertex of the connection $s_b=b$ in $\mathcal{X}_b$ to the start vertex of the connection $(s_a=a)\subone$ in $\mathcal{X}_a$. Here, w.l.o.g.
the connection appears earlier on the path in $\mathcal{X}_b$ than in $\mathcal{X}_a$. Otherwise, the fraction must be inverted. \section{Path Throughput Importance Weights} \label{sec:ptiw} To calculate the sample value (radiance throughput) $S(\mathcal{X}_a)$ of a sampled path we have \begin{equation} S(\mathcal{X}_a) = \frac{W\!(\mvec{x}_0)}{p(\mvec{x}_0)} \cdot \prod_{i=1}^{s\subtwo} \frac{\overrightarrow{\btf_i}}{\pdfi{i}} \cdot \frac{\overrightarrow{\btf_{s\subone}}\cdot\overleftarrow{\btf_s}} {\lVert\mvec{x}_s-\mvec{x}_{s\subone}\rVert^2} \cdot \!\!\!\prod_{i=s+1}^{s+t\subtwo} \frac{\overleftarrow{\btf_i}}{\pdfr{i}} \cdot \frac{L(\mvec{x}_{s+t\subone})}{p(\mvec{x}_{s+t\subone})} \label{eq:mu}. \end{equation} It depends on the sensor weight $W$, the emitted radiance $L$, multiple sampling events in the form $\btf/p$ and the transport evaluation of the connection. Here $\btf$ is the BxDF $\rho$ multiplied with the outgoing cosine $\overrightarrow{\btf_i} = \rho_i \cdot\vert\langle\mvec{n}_{i},\dirr{i}\rangle\vert$ and $\overleftarrow{\btf_i} = \rho_i \cdot\vert\langle\mvec{n}_{i},\diri{i}\rangle\vert$. \begin{comment} The true radiance estimate for a path $\mathcal{X}$ is then \begin{equation} E[\mu(\mathcal{X})] = \sum_{a} p(a)\mu(\mathcal{X}_a), \end{equation} where $p(a)$ is the probability (not a density) to produce the path with sampler $a$ where $\sum_a p(a) = 1$. \end{comment} The true radiance estimate of a pixel $L(\mvec{x}_0,\diri{0})$ is then \begin{equation*} E[S(\mathcal{X})] = \lim\limits_{N\rightarrow\infty} \frac{1}{\sum_{k=1}^N w_k} \sum_{k=1}^N w_k S(\mathcal{X}_k), \end{equation*} where $N$ is the total number of samples (increases with iterations) and $\mathcal{X}_k$ are different sampled paths. The weight $w_k$ is artificially introduced and can be any weight like the \textit{Balance Heuristic} from Eq. \eqref{eq:weight}. Our goal is to minimize the variance $\var{S(\mathcal{X})} = E[S(\mathcal{X})^2] - E[S(\mathcal{X})]^2$. 
Since $E[S(\mathcal{X})]$ is the fixed (true) value we only need to minimize $E[S(\mathcal{X})^2]$. Hence, preferring samplers with a small $S$ must lead to a smaller variance. Therefore, we require \begin{equation*} \hat{w}_a \propto S(\mathcal{X}_a)^{\sm1}. \end{equation*} This leads us to an equivalent formulation of the balance heuristic \begin{equation} \hat{w}_a = \frac{n_a \cdot S(\mathcal{X}_a)^{\sm1}}{\sum_b n_b \cdot S(\mathcal{X}_b)^{\sm1}} = \frac{1}{1+\sum_{b\neq a} \frac{n_b S(\mathcal{X}_a)}{n_a S(\mathcal{X}_b)}}.\label{eq:ptiw} \end{equation} \begin{comment} While many works (\cite{sbert_variance_2016, elvira_generalized_2017} and older) argue about the variance optimality of the balance heuristic given in Eq. \eqref{eq:weight}, none questions the use of Eq. \eqref{eq:ppath} in this context. Since we try to minimize the variance of the throughput $\mu$ of sampled paths $\mathcal{X}_i$, it seems naturally to use \begin{equation} \hat{w}_i \propto \mu(\mathcal{X}_i)^{\sm1}.\label{eq:ptiw1} \end{equation} This means that choosing the samplers with the smallest throughputs reduce the variance. If we look at $\var{\mu} = E[\mu^2] - \expect{\mu}^2$, where $\expect{\mu}$ is the true value we want to estimate, the quantity we need to minimize is $E[\mu^2]$. Hence, preferring samplers with a small $\mu$ must lead to a smaller variance. This leads us to an equivalent formulation of the balance heuristic \begin{equation*} \hat{w}_i = \frac{n_i \cdot \mu(\mathcal{X}_i)^{\sm1}}{\sum_j n_j \cdot \mu(\mathcal{X}_j)^{\sm1}} = \frac{1}{1+\sum_{j\neq i} \frac{n_j \mu(\mathcal{X}_i)}{n_i \mu(\mathcal{X}_j)}}. \end{equation*} \end{comment} \subsection{Equivalence to Probability Weights} \begin{theorem} The balance heuristic $w_a$ using path probabilities Eq.\eqref{eq:weight} is equivalent to the heuristic $\hat{w}_a$ using inverse throughputs Eq.\eqref{eq:ptiw}. 
\end{theorem} \begin{proof} In the calculation of $\hat{w}_a$ we need ratios of sampled throughputs, which are: \begin{align*} \frac{S(\mathcal{X}_a)}{S(\mathcal{X}_b)} &= \frac{\lVert\mvec{x}_{b\subone} - \mvec{x}_{b}\rVert^2} {\btfi{b\subone}\cdot\btfr{b}} \cdot \frac{\btfi{a\subone}\cdot\btfr{a}} {\lVert\mvec{x}_{a\subone} - \mvec{x}_{a}\rVert^2} \cdot \frac{\prod_{b\subone}^{a\subtwo} \btfi{k} / \pdfi{k}} {\prod^{a}_{b\addone} \btfr{k} / \pdfr{k}}\\ &= \frac{\lVert\mvec{x}_{b\subone} - \mvec{x}_{b}\rVert^2} {\lVert\mvec{x}_{a\subone} - \mvec{x}_{a}\rVert^2} \cdot \frac{\prod_{b}^{a\subone} \btfi{k}} {\prod^{a\subone}_{b} \btfr{k}} \cdot \frac{\prod^{a}_{b\addone} \pdfr{k}} {\prod_{b\subone}^{a\subtwo} \pdfi{k}}\\ &= \frac{\lVert\mvec{x}_{b\subone} - \mvec{x}_{b}\rVert^2} {\lVert\mvec{x}_{a\subone} - \mvec{x}_{a}\rVert^2} \cdot \prod_{k=b}^{a\subone} \left\vert \frac{\langle\mvec{n}_{k},\dirr{k}\rangle} {\langle\mvec{n}_{k},\diri{k}\rangle} \right\vert \cdot \prod_{k=b}^{a\subone} \frac{\pdfr{k\addone}} {\pdfi{k\subone}} \end{align*} using Eq. \eqref{eq:mu}. In the first step we split the products into a BxDF part and a probability part and then moved the BxDFs into the products, adjusting the indices. In the second step we canceled the reciprocal BxDFs, arriving at a product of cosines, and substituted the indices in the probabilities to unify the product range. We get a similar transformation for Eq. \eqref{eq:propRatio} by inserting Eq.
\eqref{eq:psegment}: \begin{align*} \frac{p(\mathcal{X}_b)}{p(\mathcal{X}_a)} &= \left( \prod_{k=b}^{a\subone} \frac{\pdfr{k\addone} \vert\langle\mvec{n}_k,\diri{k\addone}\rangle\vert} {\lVert\mvec{x}_k - \mvec{x}_{k\addone}\rVert^2} \frac{\lVert\mvec{x}_{k\subone} - \mvec{x}_k\rVert^2} {\pdfi{k\subone} \vert\langle\mvec{n}_k,\dirr{k\subone}\rangle\vert} \right)\\ &= \prod_{k=b}^{a\subone} \frac{\lVert\mvec{x}_{k\subone} - \mvec{x}_k\rVert^2} {\lVert\mvec{x}_k - \mvec{x}_{k\addone}\rVert^2} \cdot \prod_{k=b}^{a\subone} \left\vert \frac{\langle\mvec{n}_k,\dirr{k}\rangle} {\langle\mvec{n}_{k},\diri{k}\rangle} \right\vert \frac{\pdfr{k\addone}}{\pdfi{k\subone}}\\ &= \frac{\lVert\mvec{x}_{b\subone} - \mvec{x}_{b}\rVert^2} {\lVert\mvec{x}_{a\subone} - \mvec{x}_{a}\rVert^2} \cdot \prod_{k=b}^{a\subone} \left\vert \frac{\langle\mvec{n}_k,\dirr{k}\rangle} {\langle\mvec{n}_{k},\diri{k}\rangle} \right\vert \frac{\pdfr{k\addone}}{\pdfi{k\subone}} \end{align*} In the first step we used $\diri{k\addone}=-\dirr{k}$ and reordered the terms. In the second step, the first product was expanded and equal terms were canceled out. Comparing the last line of each ratio we see their equivalence. This applies to unidirectional cases ($s=0$ or $t=0$) in the same way. \end{proof} \section{Implementation Consequences} \label{sec:conseqences} In practice, the terms in Eq. \eqref{eq:propRatio} are computed recursively, which allows storing intermediate results along the sub-paths and yields higher robustness. With this approach both weights lead to an identical implementation, because expanding the ratios leads to the same expression as shown in the proof. Instead, path throughput weights $\hat{w}_a$ are a tool to check whether the computed weights are correct or optimal. They show that all modifications to the estimated radiance need to be accounted for in the weight computation.
To name some of them: \begin{description} \item[Russian Roulette] To increase the efficiency of a renderer it is common to randomly terminate paths. These termination probabilities must be included in the segment probabilities. \item[Shading Normal Correction] Shading normals\footnote{Smooth interpolated normals on a triangle or bump mapping modified normals}, which are different from geometric normals, need a correction factor for a reciprocal light transport (see Veach's thesis \cite{veach_robust_1997} chapter 5.3). It must be applied inversely to the probability densities for the MIS computation, too. \item[Random Connection] The most used implementation of BPT traces one view-path for every light-path and connects them deterministically in all possible ways. However, in GPU implementations often a single random decision is made \cite{davidovic_progressive_2014}. Then, a connection has a probability of $1/\bar{l}$ with $\bar{l}$ being the average path length of paths which can be chosen by the process. \end{description} While some of these events are included in other renderers (e.g. Russian Roulette in the VCM implementation, see the supplemental of \cite{georgiev_light_2012}), we have not seen the shading normal correction included in the MIS computation. Using $\hat{w}$ as a tool can also help in unifying different sampling approaches. For example, combining BPT with Vertex Merging (i.e. photon gathering) is more intuitive than with the probability-based approach. It took several years and multiple authors \cite{georgiev_light_2012, hachisuka_path_2012} to derive a unified weight computation which allowed the combination of BPT with merges. In essence, the required modification is to multiply the path probability by the query area $\pi r^2$. In the path throughput this quantity is contained naturally. Thus, comparing connection paths with merge paths using $S$ unifies the two sampling approaches trivially.
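The equivalence between the probability-based weights $w_a$ and the throughput-based weights $\hat{w}_a$ can also be checked numerically: whenever every sampler produces the same unweighted path contribution $f$ divided by its own path PDF, the two formulations must agree exactly. A minimal Python sketch (all values are made up; \texttt{w\_prob} and \texttt{w\_throughput} are hypothetical helper names, not from any renderer):

```python
import random

def w_prob(pdfs, counts, a):
    """Balance heuristic from path probabilities (eq:weight)."""
    return counts[a] * pdfs[a] / sum(n * p for n, p in zip(counts, pdfs))

def w_throughput(throughputs, counts, a):
    """The same heuristic from inverse sampled throughputs (eq:ptiw)."""
    inv = [1.0 / s for s in throughputs]
    return counts[a] * inv[a] / sum(n * i for n, i in zip(counts, inv))

# Hypothetical path with unweighted contribution f; sampler a yields
# S(X_a) = f / p(X_a), so both weightings must coincide.
random.seed(1)
f = 0.42
pdfs = [random.uniform(0.1, 2.0) for _ in range(4)]
counts = [1, 2, 1, 3]
throughputs = [f / p for p in pdfs]
weights = [w_prob(pdfs, counts, a) for a in range(4)]
```

This reproduces the statement of the theorem in the one setting that is easy to set up outside a renderer.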
However, there are also cases where our formulation does not help. For example, another possibility of using random connections was explored by Popov et al. \cite{popov_probabilistic_2015}. They connected each view sub-path to multiple light sub-paths to increase the path reuse. The problem here is the correlation between paths due to the choice of connections. The authors derived an upper bound for the variances to obtain a more robust MIS calculation. This problem remains equally hard regardless of the MIS formulation. \section{Introduction} Beginning with \textit{Path Tracing} \cite{kajiya_rendering_1986}, different solutions to the light transport simulation problem have been developed. In all cases the integral equation (\textit{Rendering Equation}) \begin{equation*} L(\mvec{x}_i, \diri{i}) = \int_\Omega L(\mvec{x}_{i\addone}, \dirr{i}) \rho(\mvec{x}_i, \diri{i}, \dirr{i})\langle\mvec{n}_i,\dirr{i}\rangle \text{d} \dirr{i} \end{equation*} is solved numerically by sampling. For the notation please refer to Table 1. Depending on the direction of tracing we speak of light transport (paths starting at the light source/photons) or importance transport (paths starting at the observer). Both define a different sampler for the same paths. Since each direction covers different light effects more successfully, their combination into \textit{Bidirectional Path Tracing} (BPT) by Veach \cite{veach_bidirectional_1995} gives a more robust solution. The key idea in BPT is to weight each path from each sampler using \textit{Multiple Importance Sampling} (MIS). The weights form a partition of unity, such that the weighted sum of all samplers is again an unbiased estimate of the \textit{Rendering Equation}. The goal of these weights is to find a minimal-variance solution. That is, if one of the samplers has a lower variance than the others, it should be preferred; otherwise, an average of multiple equal samplers will also result in a lower variance due to the higher sample count.
The state-of-the-art weight function is called the \textit{Balance Heuristic} (introduced by Veach \cite{veach_bidirectional_1995}) and is explained in Section \ref{sec:ppiw}. We introduce a new way to think of the \textit{Balance Heuristic} in Section \ref{sec:ptiw}. Our approach is to use the path throughput (the sampled quantity) instead of probabilities. This new perspective helps in understanding the implications of modifications to the renderer and allows more intuitive extensions towards other samplers (see Section \ref{sec:conseqences}). For an already existing and correct implementation there is nothing to be changed. An example of such an extension is the combination of Photon Mapping \cite{jensen_global_1996} with BPT. In photon mapping two sub-paths are merged by searching end points in a local neighborhood at one path end, instead of connecting sub-paths only. The difficulty here is to find a compatible probability description for both methods to be able to compute the MIS. The solution was discovered by \cite{georgiev_light_2012} and \cite{hachisuka_path_2012} in parallel. Using our perspective the solution becomes trivial. A further family of samplers are Markov Chain Monte Carlo (MCMC) methods, which conditionally exchange sub-paths (light, importance or both) to sample an arbitrary target function. In \cite{sik_robust_2016} MCMC was combined with BPT using MIS, too. For one chain their approach uses the unmodified path sampling probability, regardless of its optimality, since computing the true probability is infeasible. Our approach suggests that parts of the throughput calculation (like the acceptance probability) should be included in the MIS computation. \section{Results} We derived a new form of the \textit{Balance Heuristic} (Eq. \eqref{eq:ptiw}) which opens new perspectives on the path combination problem. By using path throughputs, many effects that must be included in the MIS weight become more obvious than before.
Whenever a new quantity is multiplied into the path throughput, the path probability should be divided by this quantity. In the future, the new perspective may also help in finding better combination heuristics for old and new sampling techniques such as MCMC.
\section{Introduction} The R\'enyi entropy of the probability density $\rho(\vec{r}), \vec{r} = (x_1 , \ldots , x_D),$ which characterizes the quantum state of a $D$-dimensional physical system is defined \cite{renyi1,renyi2} as \begin{equation} \label{eq:renentrop} R_{q}[\rho] = \frac{1}{1-q}\log W_{q}[\rho], \quad 0<q<\infty, \,\, q \neq 1, \end{equation} where the symbol $W_{q}[\rho]$ denotes the frequency or entropic moment of order $q$ of the density given by \begin{equation} \label{eq:entropmom2} W_{q}[\rho] = \int_{\mathbb{R}^D} [\rho(\vec{r})]^{q}\, d\vec{r}. \end{equation} These quantities completely characterize the density $\rho(\vec{r})$ \cite{romera_01,jizba2016} under certain conditions. They quantify numerous facets of the spreading of the quantum probability density $\rho(\vec{r})$, which include the intrinsic randomness (uncertainty) and the geometrical profile of the quantum system. The R\'enyi entropies are closely related to the Tsallis entropies \cite{tsallis} $T_{p}[\rho] = \frac{1}{p-1}(1-W_{p}[\rho]), 0<p<\infty,\, p\neq1$ by $T_{p}[\rho] = \frac{1}{1-p}[e^{(1-p)R_{p}[\rho]}-1]$. Moreover, for the special cases $q=0,1,2,$ and $\infty$, the R\'enyi entropic power, $N_{q}[\rho]=e^{R_{q}[\rho]}$, is equal to the length of the support, $e^{-\langle \ln \rho \rangle}$, $\langle \rho \rangle^{-1}$ and $\rho_{max}^{-1}$, respectively. Therefore, these $q$-entropies include the Shannon entropy \cite{shannon}, $S[\rho] = \lim_{p\rightarrow 1} R_{p}[\rho] = \lim_{p\rightarrow 1} T_{p}[\rho]$, and the disequilibrium, $\langle\rho\rangle =\exp(- R_{2}[\rho])$, as two important particular cases. The use of R\'enyi, Shannon and Tsallis entropies as measures of uncertainty allows a wider quantitative range of applicability than the Heisenberg-like measures, which are based on the moments around the origin (including the standard or root-mean-square deviation).
This permits, for example, a quantitative discussion of quantum uncertainty relations well beyond the conventional Heisenberg-like uncertainty relations \cite{hall,dehesa_sen12,bialynicki2,vignat,zozor2008,guerrero11,puertas2017}. The properties of the R\'enyi entropies and their applications have been widely analyzed; see e.g. \cite{aczel,leonenko,bialynicki3} and the reviews \cite{dehesa_sen12,jizba,jizba_2004b}. In general, the R\'enyi entropies of quantum systems cannot be determined exactly, basically because the associated wave equation is generally not solvable in an analytical way. Even when the time-independent Schr\"{o}dinger equation is solvable, which happens only for a small set of elementary potentials (zero-range, harmonic, Coulomb) \cite{albeverio,dong2011}, the exact determination of the R\'enyi entropies is a formidable task, mainly because they are integral functionals of some special functions of applied mathematics \cite{nikiforov} (e.g., orthogonal polynomials, hypergeometric functions, Bessel functions,...) which control the wavefunctions of the stationary states of the quantum system. These integral functionals have not yet been solved for harmonic (i.e., oscillator-like) systems except for a few lowest-lying states (where the calculation is trivial) and, most recently, for the extreme Rydberg (i.e., highest-lying) \cite{aptekarev2016,dehesa2017,tor2016b} and pseudoclassical (i.e., highest-dimensional) \cite{tor2017b,puertas2017,temme2017} states of harmonic and Coulomb systems by means of sophisticated asymptotic techniques of orthogonal polynomials.
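As an elementary illustration of definitions (\ref{eq:renentrop}) and (\ref{eq:entropmom2}), the R\'enyi entropy of a one-dimensional Gaussian density admits a closed form that can be verified by direct quadrature. A short Python sketch (the Gaussian example is ours, chosen only because both routes are easy to compare; for excited oscillator states the quadrature route degrades, as discussed below):

```python
import math

def renyi_gaussian(q, sigma):
    """Closed-form Renyi entropy R_q of a 1D Gaussian with width sigma:
    W_q = (2*pi*sigma^2)^((1-q)/2) * q^(-1/2), hence
    R_q = 0.5*log(2*pi*sigma^2) + log(q) / (2*(q-1))."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + math.log(q) / (2 * (q - 1))

def renyi_numeric(q, sigma, n=20001, span=12.0):
    """Brute-force W_q by the trapezoidal rule on [-span*sigma, span*sigma],
    then R_q = log(W_q) / (1 - q)."""
    h = 2 * span * sigma / (n - 1)
    xs = [-span * sigma + i * h for i in range(n)]
    rho = [math.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
           for x in xs]
    wq = h * (sum(r**q for r in rho) - 0.5 * (rho[0]**q + rho[-1]**q))
    return math.log(wq) / (1 - q)
```

For $q \to 1$ the closed form tends to the Shannon entropy $\frac{1}{2}\ln(2\pi e\sigma^2)$, consistent with the limit stated above.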
This gap is striking because harmonicity is the most frequent and useful approximation in the study of quantum many-body systems, and because the other two basic classes of uncertainty measures, the Heisenberg-like measures \cite{ray,drake,hey,assche,andrae,tarasov,zozor,suslov} and the Fisher information \cite{romera2005}, have already been calculated for all stationary states of the multidimensional harmonic system.\\ In this work we determine the exact values of the R\'enyi uncertainty measures of the $D$-dimensional harmonic system (i.e., a particle moving under the action of a quadratic potential) for all ground and excited quantum states, directly in terms of $D$, the potential strength and the hyperquantum numbers which characterize the states. This is a far more difficult problem than the Heisenberg-like and Fisher information cases, both analytically and numerically. The latter is basically because a naive numerical evaluation by quadratures is not practical, due to the growing number of integrable singularities as the principal hyperquantum number increases, which spoils any attempt to achieve reasonable accuracy even for rather small hyperquantum numbers \cite{buyarov}. The structure of the manuscript is the following. In section \ref{sec:basics} the wavefunctions and the probability densities of the stationary states of the $D$-dimensional harmonic (oscillator-like) system are briefly described in both position and momentum spaces. In section \ref{sec:renyi} the R\'enyi entropies for all the ground and excited states of this system are determined in an analytical way by use of a recently developed methodology \cite{amc2013}. Finally, some conclusions and open problems are given.
\section{The $D$-dimensional harmonic problem} \label{sec:basics} In this section we summarize the quantum-mechanical $D$-dimensional problem corresponding to the harmonic oscillator potential \begin{equation} V(r) = \frac{1}{2}k(x_{1}^{2}+\ldots + x_{D}^{2}) = \frac{1}{2}kr^{2}, \end{equation} and we give the probability densities of the stationary quantum states of the system in both position and momentum spaces. The stationary bound states of the system, which are the physical solutions of the Schr\"{o}dinger equation \begin{equation}\label{schrodinger} \left( -\frac{1}{2} \vec{\nabla}^{2}_{D} + V(r)\right) \Psi \left( \vec{r} \right) = E \Psi \left(\vec{r} \right), \end{equation} (we use atomic units throughout the paper) where $\vec{\nabla}_{D}$ denotes the $D$-dimensional gradient operator, are well known \cite{gallup,louck60a,yanez1994} to be characterized by the energies \begin{equation} \label{HOEL} E_{N} = \left(N + \frac{D}{2}\right) \omega \end{equation} where \[ \omega = \sqrt{k}, \quad N = \sum_{i=1}^{D}n_{i} \quad \text{with} \quad n_{i}=0,1,2,\ldots \] The corresponding eigenfunctions can be expressed as \begin{equation} \label{HOEF} \psi_{N}(\vec{r}) = \mathcal{N} e^{-\frac{1}{2}\alpha(x_{1}^{2}+\ldots+x_{D}^{2})}H_{n_{1}}(\sqrt{\alpha}\, x_{1})\cdots H_{n_{D}}(\sqrt{\alpha}\, x_{D}), \quad \alpha = k^{\frac{1}{2}} = \omega, \end{equation} where $\vec r\in\mathbb R^D$ and $\mathcal{N}$ stands for the normalization constant \[ \mathcal{N} = \frac{1}{\sqrt{2^{N}n_{1}!n_{2}!\cdots n_{D}!
}}\left(\frac{\alpha}{\pi}\right)^{D/4}, \] and $H_{n}(x)$ denotes the Hermite polynomial of degree $n$, orthogonal with respect to the weight function $e^{-x^{2}}$ on $(-\infty, \infty)$.\\ Then, the associated quantum probability density in position space is given by \begin{equation} \label{HOPD} \rho_{N}(\vec{r}) = |\psi_{N}(\vec{r})|^{2} = \mathcal{N}^{2} e^{-\alpha(x_{1}^{2}+\ldots+x_{D}^{2})}H_{n_{1}}^{2}(\sqrt{\alpha}\, x_{1})\cdots H_{n_{D}}^{2}(\sqrt{\alpha}\, x_{D}), \end{equation} and the density function in momentum space is obtained as the squared modulus of the Fourier transform of the position wavefunction, which yields \begin{align} \label{HOMPD} \gamma_{N}(\vec{p}) & = \mathcal{\tilde{N}}^{2} e^{-\frac{1}{\alpha}(p_{1}^{2}+\ldots+p_{D}^{2})}H_{n_{1}}^{2}\left(\frac{ p_{1}}{\sqrt{\alpha}}\right)\cdots H_{n_{D}}^{2}\left(\frac{ p_{D}}{\sqrt{\alpha}}\right) = \alpha^{-D}\rho_{N}\left(\frac{\vec{p}}{\alpha}\right), \end{align} where $\vec p\in\mathbb R^D$ and the normalization constant is \[ \mathcal{\tilde{N}} = \frac{1}{\sqrt{2^{N}n_{1}!\cdots n_{D}! }}\left(\frac{1}{\pi\alpha}\right)^{D/4}. \] \section{R\'enyi entropies of the harmonic system} \label{sec:renyi} Let us now determine the R\'enyi entropy of the $D$-dimensional harmonic system, which according to Eqs. \eqref{eq:renentrop}--\eqref{eq:entropmom2} is given by \begin{align} \label{HORE} R_{q}[\rho_{N}] &= \frac{1}{1-q}\log \int_{-\infty}^{\infty} dx_{1}\ldots \int_{-\infty}^{\infty} dx_{D} \, [\rho_{N}(\vec{r})]^{q} \nonumber \\ & = \frac{1}{1-q}\log\left( \mathcal{N}^{2q}\int_{-\infty}^{\infty} e^{-\alpha q x_{1}^{2}}|H_{n_{1}}(\sqrt{\alpha}\, x_{1})|^{2q} \, dx_{1} \cdots \int_{-\infty}^{\infty} e^{-\alpha q x_{D}^{2}}|H_{n_{D}}(\sqrt{\alpha}\, x_{D})|^{2q}\, dx_{D} \right), \end{align} where we have used Eq. \eqref{HOPD}.
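Since both densities factorize into one-dimensional pieces, their basic properties are straightforward to verify numerically. The following Python sketch is an editorial illustration (the helper names `rho_1d`, `gamma_1d`, `integrate` are our own); it checks the normalization of the factors of \eqref{HOPD} and the rescaling relation of \eqref{HOMPD}, with Hermite polynomials generated by the standard three-term recurrence:

```python
import math

def hermite(n, x):
    # physicists' Hermite polynomial via H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def rho_1d(n, x, alpha):
    # one-dimensional factor of the position density (HOPD)
    norm = math.sqrt(alpha / math.pi) / (2 ** n * math.factorial(n))
    return norm * math.exp(-alpha * x * x) * hermite(n, math.sqrt(alpha) * x) ** 2

def gamma_1d(n, p, alpha):
    # one-dimensional factor of the momentum density (HOMPD)
    norm = math.sqrt(1 / (math.pi * alpha)) / (2 ** n * math.factorial(n))
    return norm * math.exp(-p * p / alpha) * hermite(n, p / math.sqrt(alpha)) ** 2

def integrate(f, L=15.0, m=3000):
    # Simpson rule on [-L, L]
    h = 2 * L / m
    s = sum((1 if i in (0, m) else 4 if i % 2 else 2) * f(-L + i * h)
            for i in range(m + 1))
    return s * h / 3

alpha = 2.0
for n in (0, 1, 3):
    # each 1D factor is a normalized probability density
    assert abs(integrate(lambda x: rho_1d(n, x, alpha)) - 1) < 1e-8
    # gamma_N(p) = alpha^{-1} rho_N(p / alpha) in each dimension
    for p in (-1.3, 0.4, 2.0):
        assert abs(gamma_1d(n, p, alpha) - rho_1d(n, p / alpha, alpha) / alpha) < 1e-12
```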
To calculate these $D$ integral functionals of Hermite polynomials we will follow a recently developed technique (valid only for natural $q\ge 2$) \cite{srivastava,niukkanen,amc2013} for the evaluation of similar integral functionals of hypergeometric orthogonal polynomials by means of multivariate special functions. To do so, first we express the Hermite polynomials in terms of the Laguerre polynomials (see e.g., \cite{olver}) as \begin{eqnarray} \label{HfL} H_{2n}(x) &=& (-1)^{n} 2^{2n}n!L_{n}^{-\frac{1}{2}}(x^{2}), \nonumber \\ H_{2n+1}(x) &=& (-1)^{n} 2^{2n+1}n!xL_{n}^{\frac{1}{2}}(x^{2}), \end{eqnarray} which allows us to write \begin{equation} \label{HpfL} \left[H_{n}(\sqrt{\alpha}x)\right]^{2q} = A_{n,q}(\nu)\, \alpha^{q\nu}x^{2q\nu} \left[L_{\frac{n-\nu}{2}}^{(\nu-\frac{1}{2})}(\alpha x^{2})\right]^{2q}, \end{equation} with the constant \[ A_{n,q} (\nu) = 2^{2qn}\left[\Gamma\left(\frac{n-\nu}{2}+1\right) \right]^{2q} \] and the parameter $\nu=0\,(1)$ for even (odd) $n$; that is, $\nu=\frac12\left(1-(-1)^{n}\right).$ \\ Following the same steps as in \cite{amc2013}, after the change of variable $t_{i}=\alpha q x_{i}^{2}$ in \eqref{HORE}, one obtains the following linearization relation for the $(2q)$-th power of the Hermite polynomials \begin{equation} \label{linforH2} \left[H_{n}\left(\sqrt \alpha x\right)\right]^{2q} = A_{n,q}(\nu)q^{-q\nu}\sum_{j=0}^{\infty}\frac{1}{(-1)^{j}2^{2j} j!}c_{j}\left(q\nu,2q,\frac{1}{q},\frac{n-\nu}{2},\nu-\frac{1}{2},-\frac{1}{2} \right) H_{2j}(\sqrt{\alpha q}x), \end{equation} with \begin{align} \hspace{-3cm}c_{j}&\left( q\nu,2q,\frac{1}{q},\frac{n-\nu}{2},\nu-\frac{1}{2},-\frac{1}{2} \right)= \nonumber \\ & = \left(\frac{1}{2}\right)_{q\nu} \binom{\frac{n+\nu-1 }{2}}{\frac{n-\nu}{2}}^{2q} F_{A}^{(2q+1)}\left( \begin{array}{cc} q\nu+\frac{1}{2} ; \overbrace{\frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2}}^{2q}, -j & \\[-3.5em] &; \underbrace{\frac{1}{q}, \ldots, \frac{1}{q}}_{2q},1\\[-3.5em] \underbrace{\nu + \frac{1}{2}, \ldots, \nu+\frac{1}{2}}_{2q},\frac{1}{2} & \\
\end{array}\right),\nonumber\\ \end{align} where $(z)_a = \frac{\Gamma(z+a)}{\Gamma(z)}$ is the well-known Pochhammer symbol and $F_{A}^{(2q+1)}(\frac{1}{q}, \ldots, \frac{1}{q},1)$ is the Lauricella function of type A of $2q+1$ variables given by \begin{align} \label{HOLF} F_{A}^{(2q+1)}\left( \begin{array}{cc} q\nu+\frac{1}{2} ; \frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2},-j & \\[-3.5em] &; \frac{1}{q}, \ldots, \frac{1}{q},1\\[-3.5em] \nu+ \frac{1}{2}, \ldots, \nu+\frac{1}{2},\frac{1}{2} & \\ \end{array}\right) &=\nonumber\\ &\hspace{-7cm}= \sum_{k_{1}, \ldots, k_{2q}, k_{2q+1}=0 }^{\infty} \frac{\left(q\nu+\frac{1}{2}\right)_{k_{1}+\ldots+k_{2q}+ k_{2q+1}} (\frac{\nu-n }{2})_{k_{1}} \cdots (\frac{\nu-n }{2})_{k_{2q}}(-j)_{ k_{2q+1}} }{(\nu+ \frac{1}{2})_{k_{1}} \cdots (\nu + \frac{1}{2})_{k_{2q}}\left(\frac{1}{2}\right)_{ k_{2q+1}} } \frac{\left(\frac{1}{q}\right)^{k_{1}} \cdots \left(\frac{1}{q}\right)^{k_{2q}}}{k_{1}!\cdots k_{2q}! k_{2q+1}!}. \end{align} Now, the combination of Eqs.
\eqref{HORE} and \eqref{linforH2}, together with the orthogonality condition of the Hermite polynomials $H_{n}(x)$ (with which one realizes that all the summation terms vanish except the one with $j=0$), allows one to write the exact R\'enyi entropy of the harmonic system as \begin{align} \label{HORE1} R_{q}[\rho_{N}] &=\frac{1}{1-q}\log \left[\mathcal{N}^{2q} \left(\frac{\pi}{\alpha}\right)^{\frac{D}{2}}q^{-\frac{D}{2}}\prod_{i=1}^{D}q^{-q\nu_{i}}A_{n_{i},q}(\nu_{i})\, c_{0}\left( q\nu_{i},2q,\frac{1}{q},\frac{n_{i}-\nu_{i}}{2},\nu_{i}-\frac{1}{2},-\frac{1}{2} \right)\right] \nonumber \\ &\hspace{-1cm}= \frac D2 \log\left[\frac{\pi}{\alpha}\right]+\frac{1}{q-1}\log \left[2^{qN} q^\frac D2 \right] +\frac{1}{1-q}\sum_{i=1}^{D}\log\left[\frac{A_{n_{i},q}(\nu_{i})}{q^{q\nu_{i}}\Gamma(n_i+1)^q}\, c_{0}\left( q\nu_{i},2q,\frac{1}{q},\frac{n_{i}-\nu_{i}}{2},\nu_{i}-\frac{1}{2},-\frac{1}{2} \right) \right] \end{align} with \begin{align} c_{0}&\left( q\nu,2q,\frac{1}{q},\frac{n-\nu}{2},\nu-\frac{1}{2},-\frac{1}{2} \right) = \left(\frac{1}{2}\right)_{q\nu} \binom{\frac{n+\nu-1 }{2}}{\frac{n-\nu}{2}}^{2q}\, \mathfrak F_q(n),\nonumber\\ \end{align} where the symbol $\mathfrak F_q(n)$ denotes the following Lauricella function of $2q$ variables \begin{align} \label{HOLF2} \mathfrak F_q(n)&\equiv F_{A}^{(2q+1)}\left( \begin{array}{cc} q\nu+\frac{1}{2} ; \frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2},0 & \\[-3.5em] &; \frac{1}{q}, \ldots, \frac{1}{q},1\\[-3.5em] \nu+ \frac{1}{2}, \ldots, \nu+\frac{1}{2},\frac{1}{2} & \\ \end{array}\right) = F_{A}^{(2q)}\left( \begin{array}{cc} q\nu+\frac{1}{2} ; \frac{\nu-n }{2}, \ldots, \frac{\nu - n}{2} & \\[-3.5em] &; \frac{1}{q}, \ldots, \frac{1}{q}\\[-3.5em] \nu + \frac{1}{2}, \ldots, \nu +\frac{1}{2} & \\ \end{array}\right) &\nonumber\\ &= \sum_{j_{1}, \ldots, j_{2q}=0 }^{\infty} \frac{\left(q\nu+\frac{1}{2}\right)_{j_{1}+\ldots+j_{2q}} (\frac{\nu-n }{2})_{j_{1}} \cdots (\frac{\nu-n }{2})_{j_{2q}} }{(\nu + \frac{1}{2})_{j_{1}} \cdots (\nu+
\frac{1}{2})_{j_{2q}} } \frac{\left(\frac{1}{q}\right)^{j_{1}} \cdots \left(\frac{1}{q}\right)^{j_{2q}}}{j_{1}!\cdots j_{2q}! }& \nonumber\\ &= \sum_{j_{1}, \ldots, j_{2q}=0 }^{\frac{n-\nu}2} \frac{\left(q\nu+\frac{1}{2}\right)_{j_{1}+\ldots+j_{2q}} (\frac{\nu-n }{2})_{j_{1}} \cdots (\frac{\nu-n }{2})_{j_{2q}} }{(\nu + \frac{1}{2})_{j_{1}} \cdots (\nu + \frac{1}{2})_{j_{2q}} } \frac{\left(\frac{1}{q}\right)^{j_{1}} \cdots \left(\frac{1}{q}\right)^{j_{2q}}}{j_{1}!\cdots j_{2q}! }. \end{align} Note that, as $\frac{\nu-n}{2}$ is always a non-positive integer, the Lauricella function simplifies to a finite sum. In the following, for convenience, we use the notation $N_O=\sum_{i=1}^D\nu_i$ for the number of odd hyperquantum numbers $n_i$; thus, $N_E=D-N_O$ gives the number of even ones. Then simple algebraic manipulations allow us to rewrite Eq. \eqref{HORE1} as \begin{eqnarray} \label{HORE3}\nonumber R_{q}[\rho_{N}] &=& -\frac D2 \log\left[\alpha\right]+\mathcal K_q\,D+\overline{\mathcal K}_q\,N_O+\frac {q}{q-1}\sum_{i=1}^D(-1)^{n_i}\log\left[\left(\frac{n_i+1}{2}\right)_{\frac12} \right]+\frac{1}{1-q}\sum_{i=1}^D\log\left[\mathfrak F_q(n_i)\right], \nonumber\\ \end{eqnarray} where $\mathcal K_q=\frac{\log[\pi^{q-\frac12}\,q^\frac12]}{q-1}$ and $\overline{\mathcal K}_q=\frac{1 }{1-q}\log \left[\frac{4^{q}\,\Gamma\left(\frac12+q\right)}{\pi^{\frac{1}{2}}\,q^{q}}\right]$. This expression allows for the analytical determination of the R\'enyi entropies (with positive integer values of $q$) of any arbitrary state of the multidimensional harmonic system. Finally, for the ground state (i.e., $n_i=0,\,i=1,\cdots, D$; so, $N=0$) the general Eq. \eqref{HORE3} boils down to \begin{equation} R_q[\rho_N]=\frac D2\log\left[\frac {\pi\, q^{\frac1{q-1}}}{\alpha}\right]. \end{equation} In fact, this ground-state R\'enyi entropy holds for any $q>0$, as one can derive directly from Eq. \eqref{HORE}.
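As an editorial cross-check (not part of the derivation), the closed form of Eq. \eqref{HORE3} can be compared against a direct quadrature of Eq. \eqref{HORE} for a few low-lying states. The Python sketch below (all names are our own) implements the finite Lauricella sum and the ground-state formula with $\alpha=1$:

```python
import math
from itertools import product

def hermite(n, x):
    # physicists' Hermite polynomial via three-term recurrence
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def rising(z, j):
    # Pochhammer symbol (z)_j for integer j >= 0
    out = 1.0
    for i in range(j):
        out *= z + i
    return out

def lauricella(q, n):
    # finite sum F_q(n) of Eq. (HOLF2); nu is the parity of n
    nu = n % 2
    top = (n - nu) // 2
    total = 0.0
    for js in product(range(top + 1), repeat=2 * q):
        term = rising(q * nu + 0.5, sum(js))
        for j in js:
            term *= rising((nu - n) / 2, j) / rising(nu + 0.5, j)
            term *= (1.0 / q) ** j / math.factorial(j)
        total += term
    return total

def renyi_closed(q, ns, alpha=1.0):
    # Eq. (HORE3) for hyperquantum numbers ns = (n_1, ..., n_D), integer q >= 2
    D, NO = len(ns), sum(n % 2 for n in ns)
    K = math.log(math.pi ** (q - 0.5) * math.sqrt(q)) / (q - 1)
    Kbar = math.log(4 ** q * math.gamma(q + 0.5) / (math.sqrt(math.pi) * q ** q)) / (1 - q)
    R = -0.5 * D * math.log(alpha) + K * D + Kbar * NO
    for n in ns:
        half_poch = math.gamma((n + 1) / 2 + 0.5) / math.gamma((n + 1) / 2)  # ((n+1)/2)_{1/2}
        R += q / (q - 1) * (-1) ** n * math.log(half_poch)
        R += math.log(lauricella(q, n)) / (1 - q)
    return R

def renyi_quadrature(q, ns, alpha=1.0, L=12.0, m=4000):
    # direct Simpson quadrature of Eq. (HORE); the integral factorizes over dimensions
    logW = 0.0
    for n in ns:
        norm = math.sqrt(alpha / math.pi) / (2 ** n * math.factorial(n))
        h = 2 * L / m
        s = 0.0
        for i in range(m + 1):
            x = -L + i * h
            rho = norm * math.exp(-alpha * x * x) * hermite(n, math.sqrt(alpha) * x) ** 2
            s += (1 if i in (0, m) else 4 if i % 2 else 2) * rho ** q
        logW += math.log(s * h / 3)
    return logW / (1 - q)

# ground state (all n_i = 0): R_q = (D/2) log(pi q^{1/(q-1)} / alpha)
assert abs(renyi_closed(2, (0, 0, 0)) - 1.5 * math.log(2 * math.pi)) < 1e-10
# excited states: closed form vs direct quadrature
for q, ns in [(2, (1,)), (3, (1,)), (2, (2, 1)), (2, (3, 0))]:
    assert abs(renyi_closed(q, ns) - renyi_quadrature(q, ns)) < 1e-6
```

For instance, for the 1D first excited state and $q=2$ both routes give $R_2=\log\big(4\sqrt{2\pi}/3\big)$, which can also be obtained by hand from $W_2=\frac{3}{4\sqrt{2\pi}}$.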
Taking into account that the momentum density is a re-scaled form of the position density, we obtain the following expression for the associated momentum R\'enyi entropy, \begin{eqnarray} \label{Remsp} R_{\tilde q}[\gamma_N] &=& \frac D2 \log\left[\alpha\right]+\mathcal K_{\tilde q}\,D+\overline{\mathcal K}_{\tilde q}\,N_O+\frac {\tilde q}{\tilde q-1}\sum_{i=1}^D(-1)^{n_i}\log\left[\left(\frac{n_i+1}{2}\right)_{\frac12} \right]+\frac{1}{1-\tilde q}\sum_{i=1}^D\log\left[\mathfrak F_{\tilde q}(n_i)\right],\nonumber\\ \end{eqnarray} ($\tilde q\in \mathbb{N}$). Although Eqs. \eqref{HORE3} and \eqref{Remsp} rigorously hold for integer $q\not=1$ only, it seems reasonable to conjecture their general validity for any $q>0, \,q\not=1$, provided the formal existence of a suitably generalized function $\mathfrak F_q(n)$. If so, we obtain the general expression for the position-momentum R\'enyi entropic uncertainty sum \begin{eqnarray}\nonumber R_{q}[\rho_N]+R_{\tilde q}[\gamma_N] &=& (\mathcal K_{q}+\mathcal K_{\tilde q})\,D+(\overline{\mathcal K}_{q}+\overline{\mathcal K}_{\tilde q})\,N_O+\left(\frac {q}{ q-1}+\frac {\tilde q}{\tilde q-1}\right)\sum_{i=1}^D(-1)^{n_i}\log\left[\left(\frac{n_i+1}{2}\right)_{\frac12} \right] \\ &+&\frac{1}{1- q}\sum_{i=1}^D\log\left[\mathfrak F_{q}(n_i)\right]+\frac{1}{1-\tilde q}\sum_{i=1}^D\log\left[\mathfrak F_{\tilde q}(n_i)\right], \end{eqnarray} which verifies the R\'enyi-entropy-based uncertainty relation of Zozor--Portesi--Vignat \cite{zozor2008}, valid for arbitrary quantum systems when $\frac1q+\frac1{\tilde q}\ge2$. In the conjugated case $\tilde q=q^*$ such that $\frac1q+\frac1{q^*}=2$, one obtains \begin{eqnarray}\nonumber R_{q}[\rho_N]+R_{q^*}[\gamma_N] &=& D\log\left(\pi q^{\frac1{2q-2}}{q^*}^{\frac{1}{2q^*-2}}\right)+(\overline{\mathcal K}_{q}+\overline{\mathcal K}_{ q^*})\,N_O\\ &+&\frac{1}{1- q}\sum_{i=1}^D\log\left[\mathfrak F_{q}(n_i)\right]+\frac{1}{1- q^*}\sum_{i=1}^D\log\left[\mathfrak F_{ q^*}(n_i)\right].
\end{eqnarray} Let us finally remark that the first term corresponds to the sharp bound of the general R\'enyi-entropy-based uncertainty relation with conjugated parameters, \begin{equation}\nonumber R_{q}[\rho_N]+R_{q^*}[\gamma_N] \ge D\log\left(\pi q^{\frac1{2q-2}}{q^*}^{\frac{1}{2q^*-2}}\right), \end{equation} due to Bialynicki-Birula \cite{bialynicki2} and Zozor--Vignat \cite{vignat}. \section{Conclusions} In this work we have explicitly calculated the R\'enyi entropies, $R_q [\rho_N]$ ($q\in \mathbb{N}$), for all the quantum-mechanically allowed harmonic states in terms of the R\'enyi index $q$, the spatial dimension $D$, the oscillator strength $\alpha$, as well as the hyperquantum numbers, $\{n_{i}\}_{i=1}^{D}$, which characterize the corresponding state's wavefunction. To do that we have used the harmonic wavefunctions in Cartesian coordinates, which are expressed as a product of $D$ Hermite polynomials and Gaussian factors. Thus, the R\'enyi entropies of the quantum states boil down to $D$ entropy-like integral functionals of Hermite polynomials. Then we have determined these integral functionals by exploiting the close connection between the Hermite and the Laguerre polynomials and the Srivastava--Niukkanen linearization method for powers of Laguerre polynomials. The final analytical expression of the R\'enyi entropies with positive integer index $q$, in both position and momentum spaces, is given in a compact way by means of a Lauricella function of type A. The extension of this result to R\'enyi entropies with arbitrary real values of the parameter $q$ remains an open problem; it requires a completely different approach which, to the best of our knowledge, is still unknown. \section*{Acknowledgments} This work has been partially supported by the Project FQM-207 of the Junta de Andaluc\'ia and the MINECO-FEDER grants FIS2014-54497P and FIS2014-59311P. I. V. Toranzo acknowledges the support of ME under the program FPU.
\\ Author contribution statement: all authors have contributed equally to the paper.
\section{Introduction} The famous Banach--Mazur game was invented by Mazur in 1935. For the history of the game and related facts the reader is referred to the survey \cite{Telgarsky}. Let $X$ be a topological space and let $X=A\cup B$ be any given decomposition of $X$ into two disjoint sets. The game $BM(X,A,B)$ is played as follows: two players, named $A$ and $B$, alternately choose nonempty open sets with $U_0\supseteq V_0\supseteq U_1\supseteq V_1\supseteq\cdots$. $\begin{array}{lccccr} A\;&U_0& &U_1& &\\ & & & & &\cdots\\ B\;& &V_0& &V_1& \end{array} $ Player $A$ wins this game if $A\cap\bigcap_{n\in\omega}U_n\neq\emptyset$. Otherwise $B$ wins. We study a well-known modification of this game, considered by Choquet in 1958 and known as the Banach--Mazur game or the Choquet game. Players $\alpha$ and $\beta$ alternately choose nonempty open sets with $U_0\supseteq V_0\supseteq U_1\supseteq V_1\supseteq\cdots$. $\begin{array}{lccccr} \beta\;&U_0& &U_1& &\\ & & & & &\cdots\\ \alpha\;& &V_0& &V_1& \end{array} $ Player $\alpha$ wins this play if $\bigcap \{V_n: n\in \omega\}\neq\emptyset$. Otherwise $\beta$ wins. Denote this game by $BM(X).$ Every finite sequence of sets $(U_0,\ldots ,U_n)$ obtained in the first steps of this game is called \textit{legal $n$ moves} of $\beta$ (or a partial play). A \textit{strategy} for the player $\alpha$ in the game $BM(X)$ is a map $s$ that assigns to each legal $n$ moves of $\beta$ a nonempty open set $V_n\subseteq U_n$. The strategy $s$ is called a \textit{winning strategy} for the player $\alpha$ if for every play in which $V_{n} =s(U_0,\ldots ,U_n)$ for all $n$, we have $\bigcap_{n\in\omega} V_n\neq\emptyset$. The space $X$ is called \textit{weakly $\alpha$-favorable} (see \cite{White}) if $X$ admits a winning strategy for the player $\alpha$ in the game $BM(X)$. We say that $(W_0,\ldots,W_k)$ is \textit{stronger} than $(U_0,\ldots, U_m)$ if $m\leq k$ and $U_0=W_0,\ldots,U_m=W_m$.
Notice that if $(W_0,\ldots,W_k)$ is stronger than $(U_0,\ldots, U_m)$, then $s(W_0,\ldots,W_k)\subseteq s(U_0,\ldots, U_m)$; we denote this by $(U_0,\ldots, U_m)\preceq (W_0,\ldots,W_k)$. The \textit{strong Choquet} game is defined as follows: $\begin{array}{lccccr} \beta\;&U_0\ni x_0& &U_1\ni x_1& &\\ & & & & &\cdots\\ \alpha\;& &V_0& &V_1& \end{array} $ Players $\beta$ and $\alpha$ take turns playing nonempty open subsets, similarly to the Banach--Mazur game. In the first round, player $\beta$ starts by choosing a point $x_0$ and an open set $U_0$ containing $x_0$; then player $\alpha$ responds with an open set $V_0$ such that $x_0\in V_0\subseteq U_0$. In the $n$-th round, player $\beta$ selects a point $x_n$ and an open set $U_n$ such that $x_n\in U_n\subseteq V_{n-1}$, and $\alpha$ responds with an open set $V_n$ such that $x_n\in V_n\subseteq U_n$. Player $\alpha$ wins if $\bigcap\{V_n: n\in \omega\}\ne\emptyset$. Otherwise $\beta$ wins. We say that $(W_0,x_0,\ldots,W_k,x_k)$ is \textit{stronger} than $(U_0,y_0,\ldots, U_m,y_m)$ if $m\leq k\mbox{ and }U_0=W_0,\ldots,U_m=W_m\mbox{ and }x_0=y_0,\ldots,x_m=y_m.$ We denote it by $(U_0,y_0,\ldots, U_m,y_m)\preceq (W_0,x_0,\ldots, W_k,x_k)$, and we denote a sequence $(W_0,x_0,\ldots, W_k,x_k)$ by $(\overrightarrow{x}\circ \overrightarrow{W})$. A topological space $X$ is called \textit{Choquet complete} if the player $\alpha$ has a winning strategy in the strong Choquet game; we denote this game by $Ch(X)$. For a topological space $X$, let $\tau(X)$ (resp. $\tau^*(X)$) denote the topology on the set $X$ (resp. the family of nonempty elements of $\tau(X)$). A family $\mathcal P$ of nonempty open sets is called a \textit{$\pi$-base} if for every nonempty open set $U$ there is $P\in\mathcal P$ such that $P\subseteq U.$ A domain is a continuous directed complete partial order. This notion was introduced by D. Scott as a model for the $\lambda$-calculus; for more information see \cite{arab}, \cite{scott}.
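For intuition (this toy illustration is ours and does not appear in the paper), the completeness of $\mathbb{R}$ gives $\alpha$ a winning strategy in the strong Choquet game: reply to $(x_n, U_n)$ with an interval around $x_n$ whose closure lies in $U_n$ and whose length shrinks to $0$; the nested closures then intersect in a point belonging to every $V_n$. A Python sketch with open intervals (the names `alpha_move`, `plays` are hypothetical):

```python
import random

def alpha_move(x, U, n):
    # alpha's winning reply on the real line: an interval around x whose
    # closure sits inside U = (a, b) and whose radius is at most 2**-n
    a, b = U
    r = min((x - a) / 2, (b - x) / 2, 2.0 ** -n)
    return (x - r, x + r)

random.seed(1)
plays = []
V = (0.0, 1.0)  # an initial open set
for n in range(8):
    a, b = V
    # beta plays an arbitrary open U inside V and a point x in U
    a2 = random.uniform(a, (a + b) / 2)
    b2 = random.uniform((a2 + b) / 2, b)
    x = random.uniform(a2, b2)
    V = alpha_move(x, (a2, b2), n)
    assert a2 < V[0] < x < V[1] < b2  # cl(V) inside U, and x in V
    plays.append(V)

# nested intervals with lengths -> 0: a point of the last V lies in all V_n,
# so the intersection of alpha's moves is nonempty and alpha wins this play
point = (plays[-1][0] + plays[-1][1]) / 2
assert all(a < point < b for (a, b) in plays)
```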
Domain representable topological spaces were introduced by H. R. Bennett and D. J. Lutzer \cite{benluz}. We say that a topological space is domain representable if it is homeomorphic to the space of maximal elements of some continuous directed complete partial order topologized with the Scott topology. In 2013 W. Fleissner and L. Yengulalp \cite{FY13} introduced an equivalent definition of a \textit{domain representable space} for $T_1$ topological spaces. We do not assume the antisymmetry condition on the relation $\ll$. As suggested by S.~\"{O}nal and \c{C}.~Vural in \cite{turcy}, if the additional antisymmetry property is needed, one may consider the equivalence relation $E$ on the set $Q$ defined by ``$pEq$ if and only if ($p\ll q$ and $q\ll p$) or $p=q$''. We do not assume any separation axioms unless explicitly stated. We say that a topological space $X$ is \textit{F-Y} (\textit{Fleissner--Yengulalp}) \textit{countably domain representable} if there is a triple $(Q,\ll, B)$ such that \begin{enumerate} \item[(D1)] $B: Q\to\tau^*(X)$ and $\{B(q):q\in Q\}$ is a base for $\tau(X)$, \item[(D2)] $\ll$ is a transitive relation on $Q$, \item[(D3)] for all $p,q\in Q$, $p\ll q$ implies $B(p)\supseteq B(q)$, \item[(D4)] for all $x\in X$, the set $\{q\in Q:x\in B(q)\}$ is upward directed by $\ll$ (every pair of elements has an upper bound), \item[(D5$_{\omega_1}$)] if $D\subseteq Q$ and $(D,\ll)$ is countable and upward directed, then $\bigcap\{B(q): q\in D\}\ne\emptyset$. \end{enumerate} If the conditions (D1)--(D4) and the condition \begin{enumerate} \item[(D5)] if $D\subseteq Q$ and $(D,\ll)$ is upward directed, then $\bigcap\{B(q): q\in D\}\ne\emptyset$ \end{enumerate} are satisfied, we say that the space $X$ is \textit{F-Y domain representable}. W. Fleissner and L. Yengulalp in \cite{FY15} introduced the notion of a \textit{$\pi$-domain representable space}, which is analogous to the notion of a domain representable space.
We say that a topological space $X$ is \textit{F-Y} (\textit{Fleissner--Yengulalp}) \textit{countably $\pi$-domain representable} if there is a triple $(Q,\ll, B)$ such that \begin{enumerate} \item[($\pi$D1)] $B: Q\to\tau^*(X)$ and $\{B(q):q\in Q\}$ is a $\pi$-base for $\tau(X)$, \item[($\pi$D2)] $\ll$ is a transitive relation on $Q$, \item[($\pi$D3)] for all $p,q\in Q$, $p\ll q$ implies $B(p)\supseteq B(q)$, \item[($\pi$D4)] if $q,p\in Q$ satisfy $B(q)\cap B(p)\ne\emptyset$, there exists $r\in Q$ satisfying $p,q\ll r$, \item[($\pi$D5$_{\omega_1}$)] if $D\subseteq Q$ and $(D,\ll)$ is countable and upward directed, then $\bigcap\{B(q): q\in D\}\ne\emptyset$. \end{enumerate} If the conditions ($\pi$D1)--($\pi$D4) and the condition \begin{enumerate} \item[($\pi$D5)] if $D\subseteq Q$ and $(D,\ll)$ is upward directed, then $\bigcap\{B(q): q\in D\}\ne\emptyset$ \end{enumerate} are satisfied, we say that the space $X$ is \textit{F-Y $\pi$-domain representable}. \section{$\pi$-domain representable spaces} P. S. Kenderov and J. P. Revalski \cite{ken-reval} showed that the condition that the set $E=\{f\in C(X):f\text{ attains its minimum in }X\}$ contains a dense $G_\delta$ subset of $C(X)$ is equivalent to the existence of a winning strategy for the player $\alpha$ in the Banach--Mazur game. Oxtoby \cite{ox} showed that if $X$ is a metrizable space then the player $\alpha$ has a winning strategy in $BM(X)$ if and only if $X$ contains a dense completely metrizable subspace. A. Krawczyk and W. Kubi\'{s} \cite{kra-kub} characterized the existence of winning strategies for the player $\alpha$ in the abstract Banach--Mazur game played with finitely generated structures instead of open sets. A version of the Banach--Mazur game played on a partially ordered set was presented in \cite{kub}. We give a characterization of the existence of a winning strategy for the player $\alpha$ in the Banach--Mazur game using the notion of a ``$\pi$-domain representable space'' introduced by W. Fleissner and L. Yengulalp.
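To make the triple $(Q,\ll,B)$ concrete (our illustration, not from the paper): for $X=\mathbb{R}$ one may take $Q$ to be the bounded open intervals with rational endpoints, $B$ the identity, and $p\ll q$ iff the closure of $B(q)$ is contained in $B(p)$; then nested closed bounded intervals have nonempty intersection, which is the content of the ($\pi$D5)-type conditions. A Python sketch (names `ll`, `upper_bound` are ours) checking ($\pi$D2)--($\pi$D4) on samples and ($\pi$D5$_{\omega_1}$) on a countable chain:

```python
from fractions import Fraction as F

# Q: bounded open intervals with rational endpoints, B = identity
def ll(p, q):
    # p << q  iff  cl(B(q)) = [q.a, q.b] is contained in B(p) = (p.a, p.b)
    return p[0] < q[0] and q[1] < p[1]

def upper_bound(p, q):
    # (piD4): if B(p) and B(q) meet, produce r with p << r and q << r
    a, b = max(p[0], q[0]), min(p[1], q[1])
    assert a < b  # the intervals meet
    third = (b - a) / 3
    return (a + third, b - third)

sample = [(F(0), F(1)), (F(1, 2), F(3)), (F(1, 4), F(2, 3)), (F(-1), F(5))]

# (piD2): transitivity, verified on the sample
for p in sample:
    for q in sample:
        for r in sample:
            if ll(p, q) and ll(q, r):
                assert ll(p, r)

# (piD4): common refinements for overlapping sample intervals
for p in sample:
    for q in sample:
        if max(p[0], q[0]) < min(p[1], q[1]):
            r = upper_bound(p, q)
            assert ll(p, r) and ll(q, r)

# (piD5_omega1) on a countable chain: D = {(-1/n, 1/n) : n >= 1} is upward
# directed and its closures are nested compact sets, so 0 lies in every B(q)
chain = [(F(-1, n), F(1, n)) for n in range(1, 50)]
for p, q in zip(chain, chain[1:]):
    assert ll(p, q)
assert all(p[0] < 0 < p[1] for p in chain)
```

Exact rational arithmetic (`fractions.Fraction`) keeps the containment checks free of rounding issues.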
\begin{theorem}\label{twierdz1} A topological space $X$ is weakly $\alpha$-favorable if and only if $X$ is F-Y countably $\pi$-domain representable. \end{theorem} \begin{proof} If $X$ is F-Y countably $\pi$-domain representable, then it is easy to show that $X$ is weakly $\alpha$-favorable. Assume that $X$ is weakly $\alpha$-favorable. We shall show that $X$ is F-Y countably $\pi$-domain representable. Let $s$ be a winning strategy for the player $\alpha$ in $BM(X)$. We consider the family $Q$ consisting of all finite sets $\{s(\overrightarrow{U}_0),\ldots,s(\overrightarrow{U}_i)\}$, where $\overrightarrow{U}_m=(U^m_0,\ldots, U^m_{j_m}),\; m\leq i$ is a partial play, i.e., $$U^m_0\supseteq s(U^m_0)\supseteq U^m_1\supseteq s(U^m_0,U^m_1)\supseteq\ldots\supseteq U^m_{j_m} \supseteq s(U^m_0,\ldots,U^m_{j_m})$$ and $s(\overrightarrow{U}_0)\supseteq\ldots \supseteq s(\overrightarrow{U}_i)$. Let us define a relation $\ll$ on the family $Q$: \begin{gather*} \{s(\overrightarrow{U}_0),\ldots ,s(\overrightarrow{U}_i)\} \ll \{s(\overrightarrow{W}_0),\ldots ,s(\overrightarrow{W}_k)\} \text{ iff } s(\overrightarrow{U}_i)\supseteq s(\overrightarrow{W}_0) \ \& \\ i\leq k \;\& \;\forall_{j\leq i}\;\exists_{r\leq k} \;\overrightarrow{U}_j\preceq \overrightarrow{W}_r. \end{gather*} Since $\preceq$ is transitive, $\ll$ is transitive. Let us define a map $B:Q\to\tau^*(X)$ by the formula $$B(\{s(\overrightarrow{U}_0),\ldots ,s(\overrightarrow{U}_i)\})=s(\overrightarrow{U}_i),$$ for $\{s(\overrightarrow{U}_0),\ldots ,s(\overrightarrow{U}_i)\}\in Q$. Since $\{s(V):V\in\tau^*(X)\}$ is a $\pi$-base, $\{B(q):q\in Q\}$ is a $\pi$-base for $\tau(X)$. It is easy to see that the map $B$ satisfies the condition ($\pi$D3).
Towards item ($\pi$D4), let $p,q\in Q$ be such that $B(q)\cap B(p)\ne\emptyset$ and $p=\{s(\overrightarrow{U}_0),\ldots ,s(\overrightarrow{U}_i)\}$, $q=\{s(\overrightarrow{W}_0),\ldots ,s(\overrightarrow{W}_k)\}$ and $s(\overrightarrow{U}_0)\supseteq\ldots\supseteq s(\overrightarrow{U}_i)$ and $s(\overrightarrow{W}_0)\supseteq\ldots\supseteq s(\overrightarrow{W}_k)$. Since $V=B(p)\cap B(q)\subseteq s(\overrightarrow{U}_0)$ and $s$ is a winning strategy, we find an element $\overrightarrow U'_0$ stronger than $\overrightarrow{U}_0$ such that $s(\overrightarrow U'_0)\subseteq V$. Step by step we find a partial play $\overrightarrow U'_j$ such that $\overrightarrow{U}_j\preceq \overrightarrow U'_j$ and $s(\overrightarrow U'_j)\subseteq s(\overrightarrow U'_{j-1})$ for $j\le i$. Since $s(\overrightarrow U'_i)\subseteq s(\overrightarrow{W}_{0})$, we find a partial play $\overrightarrow W'_0$ such that $\overrightarrow{W}_0 \preceq \overrightarrow W'_0$ and $s(\overrightarrow W'_0)\subseteq s(\overrightarrow U'_i)$. Similarly, as for the sequence $p$, for the sequence $q$, we define $s(\overrightarrow W'_l)$ with $\overrightarrow{W}_l\preceq \overrightarrow W'_l$ and $s(\overrightarrow W'_l)\subseteq s(\overrightarrow W'_{l-1})$ for all $l\le k$. Continuing in this way we get an element $r=\{s(\overrightarrow U'_0),\ldots ,s(\overrightarrow U'_i), s(\overrightarrow W'_0), \ldots ,s(\overrightarrow W'_k)\}$ such that $p,q\ll r$. Now we show the condition ($\pi$D5$_{\omega_1}$). Let $D\subseteq Q$ be a countable upward directed set and let $D=\{p_n: n\in \omega\}$. We define a chain $\{q_n: n\in \omega\}\subseteq D\subseteq Q$ such that $p_n\ll q_n$ for $n\in \omega$. By the condition ($\pi$D3), we get $\bigcap\{B(q_n):n\in \omega\}\subseteq\bigcap \{B(p):p\in D\}$. 
Each $q_n\in Q$ is of the form $q_n=\{s(\overrightarrow{W^n_0}),\ldots, s( \overrightarrow{W^n_{k_n}})\}.$ Since $q_0\ll q_1$, there is $j\leq k_1$ such that $\overrightarrow{W^0_0}\preceq \overrightarrow{W}^1_j$. We have $$s(\overrightarrow{W}^0_0)\supseteq B(q_0)=s(\overrightarrow{W}^0_{k_0})\supseteq s(\overrightarrow{W}^1_j)\supseteq B(q_1)=s(\overrightarrow{W}^1_{k_1}) .$$ Let $s(\overrightarrow U'_0)=s(\overrightarrow{W}^0_0)$ and $s(\overrightarrow U'_1)=s(\overrightarrow{W}^1_j)$. Inductively we can choose a sequence $\{s(\overrightarrow U'_n):n\in\omega\}$ such that $\overrightarrow{U}_n' \preceq \overrightarrow U'_{n+1}$ and $$B(q_{n}) \supseteq s(\overrightarrow U'_{n+1})\supseteq B(q_{n+1}).$$ Since $s$ is a winning strategy for the player $\alpha$, we have $$\emptyset\ne\bigcap \{s(\overrightarrow U'_n):n\in\omega\}=\bigcap\{B(q_n): n\in \omega\}\subseteq\bigcap\{B(p):p\in D\}.$$ \end{proof} We now give an example of a space which is F-Y countably domain representable but not F-Y $\pi$-domain representable. Note that this space is then also F-Y countably $\pi$-domain representable and not F-Y domain representable. \begin{example} We consider the space $$X=\sigma\big(\{0,1\}^{\omega_1}\big)=\big\{x\in \{0,1\}^{\omega_1}: |\operatorname{supp} x|\le \omega\big\},$$ where $\operatorname{supp} x=\{\alpha \in A:x(\alpha)=1\}$ for $x\in \{0,1\}^A$, with the topology (the $\omega_1$-box topology) generated by the base $$\mathcal{B}=\big\{\operatorname{pr}_A^{-1}(x): A\in [\omega_1]^{\le \omega}, x\in \{0,1\}^A\big \} ,$$ where $\operatorname{pr}_A: \sigma(\{0,1\}^{\omega_1})\to \{0,1\}^A$ is the projection. We shall define a triple $(Q,\ll, B)$. Let $Q=\mathcal{B}$ and let the map $B:Q\to Q$ be the identity.
Define a relation $\ll$ in the following way: $$\operatorname{pr}^{-1}_A(x_A)\ll\operatorname{pr}^{-1}_B(x_B) \Leftrightarrow \operatorname{pr}^{-1}_A(x_A)\supseteq \operatorname{pr}^{-1}_B(x_B),$$ for any $\operatorname{pr}^{-1}_A(x_A), \operatorname{pr}^{-1}_B(x_B)\in \mathcal{B}$. It is easy to see that the relation $\ll$ is transitive and satisfies the condition (D3). Now, we shall prove the condition (D4). Let $x\in X$ and $\operatorname{pr}^{-1}_{A_1}(x_{A_1}), \operatorname{pr}^{-1}_{A_2}(x_{A_2}) \in \{\operatorname{pr}^{-1}_A(x_A)\in \mathcal{B}:x\in \operatorname{pr}^{-1}_A(x_A)\}$. Since $x\in \operatorname{pr}^{-1}_{A_1}(x_{A_1})\cap \operatorname{pr}^{-1}_{A_2}(x_{A_2})$, we get $x_{A_1}\upharpoonright A_2=x_{A_2}\upharpoonright A_1$. Set $A_3=A_1\cup A_2$ and let $x_{A_3}\in \{0,1\}^{A_3}$ be such that $x_{A_3}\upharpoonright A_2=x_{A_2}$ and $x_{A_3}\upharpoonright A_1=x_{A_1}$. Then $x\in \operatorname{pr}^{-1}_{A_3}(x_{A_3})\subseteq \operatorname{pr}^{-1}_{A_1}(x_{A_1})\cap \operatorname{pr}^{-1}_{A_2}(x_{A_2})$. Hence $\operatorname{pr}^{-1}_{A_1}(x_{A_1}),\operatorname{pr}^{-1}_{A_2}(x_{A_2}) \ll \operatorname{pr}^{-1}_{A_3}(x_{A_3})$. We shall prove the condition (D5$_{\omega_1}$). Let $D\subseteq \mathcal{B}$ be a countable upward directed family. We can construct a chain $\{\operatorname{pr}^{-1}_{A_{n}}(x_{A_{n}}): n\in \omega\}\subseteq D$ such that for each set $\operatorname{pr}^{-1}_{A}(x_{A})\in D$ there exists $n\in\omega$ with $\operatorname{pr}^{-1}_{A}(x_{A})\ll\operatorname{pr}^{-1}_{A_{n}}(x_{A_{n}})$.
Let $B=\bigcup\{A_n:n\in \omega\}$ and let $x_B\in \{0,1\}^B$ satisfy $x_B\upharpoonright A_n=x_{A_n}$ for $n\in \omega$; then $$\bigcap\{\operatorname{pr}^{-1}_{A_n}(x_{A_n}): n\in \omega\}=\operatorname{pr}^{-1}_B(x_B)\in \mathcal{B},$$ and $\operatorname{pr}^{-1}_B(x_B)\subseteq \bigcap D$, which completes the proof that the space $\sigma\big(\{0,1\}^{\omega_1}\big)$ is F-Y countably domain representable. Now we shall show that $X=\sigma\big(\{0,1\}^{\omega_1}\big)$ is not F-Y $\pi$-domain representable. Suppose that there exists a triple $(Q,\ll, B)$ satisfying the conditions ($\pi$D1)--($\pi$D5). The family $\Pee =\{B(q):q\in Q\}$ is a $\pi$-base. By induction we define a sequence $\{Q_\alpha: \alpha<\omega_1\}$ such that the following conditions are satisfied: \begin{enumerate} \item $Q_\alpha\in [Q]^{\le \omega}$ and $Q_\alpha$ is upward directed, for $\alpha<\omega_1$, \item $\bigcap \{B(q):q\in Q_\alpha\}=\operatorname{pr}^{-1}_{A_\alpha}(x_{A_\alpha}) \in \mathcal{B}$ for some $A_\alpha\in[\omega_1]^{\le \omega}$ and some $x_{A_\alpha}\in \{0,1\}^{A_\alpha}$, for $\alpha<\omega_1$, \item $Q_\alpha\subseteq Q_\beta,$ for $\alpha<\beta<\omega_1$, \item if $\bigcap \{B(q):q\in Q_\alpha\}=\operatorname{pr}^{-1}_{A_\alpha}(x_{A_\alpha})$ and $\bigcap \{B(q):q\in Q_\beta\}=\operatorname{pr}^{-1}_{A_\beta}(x_{A_\beta})$ for some $A_\alpha,A_\beta \in[\omega_1]^{\le \omega}$ and $x_{A_\alpha}\in \{0,1\}^{A_\alpha}$ and $x_{A_\beta}\in \{0,1\}^{A_\beta}$, then $\operatorname{supp} x_{A_\alpha}\varsubsetneq \operatorname{supp} x_{A_\beta}$, for $\alpha<\beta<\omega_1$. \end{enumerate} We first define the set $Q_0$. Take any $r_0\in Q$. There exist a set $A_0\in [\omega_1]^{\le \omega}$ and $x_{A_0}\in \{0,1\}^{A_0}$ such that $\operatorname{pr}_{A_0}^{-1}(x_{A_0})\subseteq B(r_0)$.
By conditions $(\pi D1), (\pi D3),(\pi D4)$ there exists $r_1\in Q$ such that $r_0\ll r_1$ and $B(r_1)\subseteq \operatorname{pr}_{A_0}^{-1}(x_{A_0}).$ Assume that we have defined $r_0\ll\ldots\ll r_n$, a chain $\{A_i:i\leq n\}\subseteq[\omega_1]^{\le \omega}$ and $x_{A_i}\in\{0,1\}^{A_i}$ such that $$\operatorname{pr}^{-1}_{A_{i-1}}(x_{A_{i-1}})\supseteq B(r_i)\supseteq \operatorname{pr}_{A_i}^{-1}(x_{A_i}) \text{ for } i\le n.$$ By conditions ($\pi$D1), ($\pi$D3), ($\pi$D4) there exists $r_{n+1}\in Q$ such that $r_n\ll r_{n+1}$ and $B(r_{n+1})\subseteq \operatorname{pr}_{A_{n}}^{-1}(x_{A_{n}}).$ There exist a set $A_{n+1}\in [\omega_1]^{\le \omega}$ and $x_{A_{n+1}}\in \{0,1\}^{A_{n+1}}$ such that $\operatorname{pr}_{A_{n+1}}^{-1}(x_{A_{n+1}})\subseteq B(r_{n+1})$. Let $Q_0=\{r_n:n\in\omega\}$. Then $\bigcap\{B(q):q\in Q_0\}=\bigcap\{\operatorname{pr}_{A_{n}}^{-1}(x_{A_{n}}):n\in\omega\}=\operatorname{pr}_{A}^{-1}(x_{A})$, where $A=\bigcup\{A_n:n\in\omega\}$ and $x_A$ is the extension of the chain $\{x_{A_n}:n\in\omega\}$. Assume that we have defined $\{Q_\alpha: \alpha< \beta\}$ satisfying the conditions (1)--(4). Let $\Raa _\beta= \bigcup \{Q_\alpha:\alpha< \beta\}.$ The set $\Raa _\beta$ is upward directed by conditions $(1)$ and $(3)$. Let $\Raa _\beta=\{p_n:n\in \omega\}$. By $(2)$ and $(3)$ we get $\bigcap\big\{B(p_n): n\in \omega \big\}\in\mathcal B$, hence there exist a set $A_\beta\in [\omega_1]^{\le \omega}$ and $x_{A_\beta}\in \{0,1\}^{A_\beta}$ such that $\bigcap\{B(p_n):n\in\omega\}=\operatorname{pr}_{A_\beta}^{-1}(x_{A_\beta})$. There exist a set $A\in [\omega_1]^{\le \omega}$ and $x_{A}\in \{0,1\}^{A}$ such that $\operatorname{pr}_{A}^{-1}(x_{A})\varsubsetneq \operatorname{pr}_{A_\beta}^{-1}(x_{A_\beta})$ and $\operatorname{supp} x_{A_\beta}\varsubsetneq \operatorname{supp} x_{A}$. Since $\Pee $ is a $\pi$-base, we can find $r_\beta\in Q$ such that $B(r_\beta)\subseteq \operatorname{pr}_{A}^{-1}(x_{A})$. 
Inductively we can define a sequence $\{q_n:n\in\omega\}\subseteq Q$, a chain $\{A_n:n\in \omega\}\subseteq[\omega_1]^{\le \omega}$ and a sequence $\{x_{A_n}\in\{0,1\}^{A_n}:n\in\omega\}$ such that $r_\beta, p_0\ll q_0$ and $q_{n-1},p_n\ll q_n$ and $$B(q_n)\supseteq \operatorname{pr}_{A_n}^{-1}(x_{A_n})\supseteq B(q_{n+1}) \text{ for } n\in\omega.$$ Let $Q_\beta=\bigcup \{Q_\alpha:\alpha< \beta\}\cup \{q_n:n\in \omega\}$. The set $Q_\beta$ satisfies conditions $(1)$--$(4)$, which completes the induction. The set $\bigcup \{Q_\alpha:\alpha< \omega_1\}$ is upward directed. By conditions $(2)$ and $(3)$ we have \begin{gather*} \bigcap\big\{B(q):q\in \bigcup\{Q_\alpha:\alpha< \omega_1\}\big\}=\bigcap\{\operatorname{pr}_{A_\alpha}^{-1}(x_{A_\alpha}): \alpha <\omega_1\}=\\ =\pi_A^{-1}(x_A), \text{ for some } A \subseteq \omega_1\text{ and } x_A\in \{0,1\}^A, \end{gather*} where $\pi_A:\{0,1\}^{\omega_1}\to\{0,1\}^A$ is the projection. By condition $(4)$ we get $|\operatorname{supp} x_A|=\omega_1$. Hence $\pi_A^{-1}(x_A)\cap\sigma\big(\{0,1\}^{\omega_1}\big)=\emptyset$, a contradiction. \begin{flushright} $\Box$ \end{flushright} \end{example} Note that by the proof of Proposition 8.3 in \cite{FY15}, it follows that if there exists a triple $(Q, \ll, B)$ which F-Y countably $\pi$-represents a space $X$ and $|\bigcap\{B(q): q\in D\}|=1$ for every countable upward directed set $D\subseteq Q$, then this triple F-Y $\pi$-represents the space $X$. \begin{theorem} The Cartesian product of any family of F-Y countably $\pi$-domain representable spaces is F-Y countably $\pi$-domain representable. \end{theorem} \begin{proof} Let $X$ be the product of a family $\{X_a:a\in A\}$ of F-Y countably $\pi$-domain representable spaces. For each $a\in A$, let $(Q_a,\ll_a, B_a)$ be a triple which satisfies conditions ($\pi$D1)--($\pi$D4) and ($\pi$D5$_{\omega_1}$). 
Any basic nonempty open subset $U$ of $X$ is of the form $U=\prod\{U_a:a\in A\}$, where $U_a$ is a nonempty open subset of $X_a$ and $U_a=X_a$ for all but finitely many $a$. We may assume that $0\in Q_a$ is the least element of $Q_a$ and $B_a(0)=X_a$ for each $a\in A$. Put $$Q=\left\{p\in\prod\{Q_a:a\in A\}:|\{a\in A:p(a)\ne 0\}|<\omega\right\}.$$ Define a relation $\ll$ on $Q$ by the formula $$p\ll q\Longleftrightarrow p(a)\ll_a q(a) \text{ for all } a\in A,$$ where $p,q\in Q$. Let us define a map $B:Q\to\tau^*(X)$ by $B(p)=\prod\{B_a(p(a)):a\in A\}$, where $p\in Q$. It is easy to check that the triple $(Q,\ll,B)$ countably $\pi$-domain represents $X$. \end{proof} \section{Domain representable spaces} In 2003 K. Martin \cite{m03} showed that if a space is domain representable, then the player $\alpha$ has a winning strategy in the strong Choquet game. In 2015 W. Fleissner and L. Yengulalp \cite{FY15} showed that it is already sufficient that the space is countably domain representable. Now we shall show that the property of being countably domain representable is also necessary. \begin{theorem} A topological space $X$ is Choquet complete if and only if it is F-Y countably domain representable. \end{theorem} \begin{proof} By Theorem 4.3 (3) in \cite{FY15} (see also \cite{m03}) it suffices to prove that if $X$ is Choquet complete, then $X$ is F-Y countably domain representable. Assume that $X$ is Choquet complete. Let $s$ be a winning strategy for player $\alpha$. 
We consider a family $Q$ consisting of all finite sets $\{s(\overrightarrow{x_0}\circ\overrightarrow{U}_0),\ldots ,s(\overrightarrow{x_i}\circ\overrightarrow{U}_i)\}$, where $\overrightarrow{x_m}\circ\overrightarrow{U}_m=(U^m_0,x^m_0,\ldots, U^m_{j_m},x^m_{j_m}),\; m\leq i$, is a partial play in the strong Choquet game, i.e., \begin{gather*} U^m_0\supseteq s(U^m_0,x^m_0)\supseteq U^m_1\supseteq s(U^m_0,x^m_0,U^m_1,x^m_1)\supseteq\ldots\supseteq U^m_{j_m}\\ \supseteq s(U^m_0,x^m_0,\ldots,U^m_{j_m},x^m_{j_m}) \end{gather*} and $s(\overrightarrow{x_0}\circ\overrightarrow{U}_0)\supseteq\ldots \supseteq s(\overrightarrow{x_i}\circ\overrightarrow{U}_i)$. Let us define a relation $\ll$ on the family $Q$, \begin{gather*} \{s(\overrightarrow{x_0}\circ\overrightarrow{U}_0),\ldots ,s(\overrightarrow{x_i}\circ\overrightarrow{U}_i)\}\ll \{s(\overrightarrow{y_0}\circ\overrightarrow{W}_0),\ldots ,s(\overrightarrow{y_k}\circ\overrightarrow{W}_k)\} \text{ iff }\\ s(\overrightarrow{x_i}\circ\overrightarrow{U}_i)\supseteq s(\overrightarrow{y_0}\circ\overrightarrow{W}_0) \; \& \; i\leq k \; \& \;\forall_{j\leq i}\;\exists_{r\leq k} \;(\overrightarrow{x_j}\circ\overrightarrow{U}_j)\preceq (\overrightarrow{y_r}\circ\overrightarrow{W}_r). \end{gather*} We define a map $B:Q\to\tau^*$ by the formula $$B\{s(\overrightarrow{x_0}\circ\overrightarrow{U}_0),\ldots ,s(\overrightarrow{x_i}\circ \overrightarrow{U}_i)\}=s(\overrightarrow{x_i}\circ\overrightarrow{U}_i),$$ with $s(\overrightarrow{x_i} \circ \overrightarrow{U}_i)\subseteq s(\overrightarrow{x_j}\circ \overrightarrow{U}_j)$ for $j\le i$, for each $\{s(\overrightarrow{x_0}\circ \overrightarrow{U}_0),\ldots ,s(\overrightarrow{x_i}\circ\overrightarrow{U}_i)\}\in Q$. The rest of the proof is similar to the proof of Theorem \ref{twierdz1}. \end{proof}
\section{Conclusions}\label{section:discussion} We propose \emph{iFair}, a generic and versatile, unsupervised framework to perform a probabilistic transformation of data into individually fair representations. Our approach accommodates two important criteria. First, we take an application-agnostic view of fairness, which allows us to incorporate it in a wide variety of tasks, including general classifiers and regression for learning-to-rank. Second, we treat individual fairness as a property of the dataset (in some sense, like privacy), which can be achieved by pre-processing the data into a transformed representation. This stage does not need access to protected attributes. If desired, we can also post-process the learned representations and enforce group fairness criteria such as statistical parity. We applied our model to five real-world datasets, empirically demonstrating that utility and individual fairness can be reconciled to a large degree. Applying classifiers and regression models to \emph{iFair} representations leads to algorithmic decisions that are substantially more consistent than the decisions made on the original data. Our approach is the first method to compute individually fair results in learning-to-rank tasks. For classification tasks, it outperforms the state-of-the-art prior work. \comment{ In this paper, we formulated fairness in machine learning as a data pre-processing approach of learning a fair representation of the data that aims to retain as much non-protected information as possible while forgetting protected information about the individuals. We do so while being agnostic to the underlying machine learning task for which the learned representations will be used. To this end, we proposed a generic and versatile framework to perform a probabilistic transformation of data into individually fair representations. 
Our model can be applied to input of all data types, including multiple, multivariate, non-binary protected attributes, and even in settings where access to protected attributes is restricted due to privacy concerns. An important feature of our fair representations of data is that they remain unaffected by changes in the distribution of protected attribute values. Concretely, this implies that machine learning algorithms trained on our fair representations will probabilistically have no possibility of discriminating between two individuals who are deemed similar with respect to non-protected attribute values. Hence we prevent arbitrary differences in outcomes of individuals caused by their membership in protected groups. We applied our model to two real-world datasets in order to learn their fair representations. Applying standard classifiers and regression to the transformed representation leads to algorithmic decisions that are substantially fairer -- both individually as well as group-wise -- than the decisions made on the original data. Inevitably, the gain in fairness comes at the expense of a small loss in utility. Our approach is the first method to compute individually fair results in learning-to-rank tasks. For classification tasks, it outperforms the state-of-the-art prior work on individual fairness. } \section{Properties of the {\em iFair} Model} We discuss properties of {\em iFair} representations and empirically compare {\em iFair} to the \emph{LFR} model. We are interested in the general behavior of methods for learned representations, to what extent they can reconcile utility and individual fairness at all, and how they relate to group fairness criteria (although {\em iFair} does not consider these in its optimization). To this end, we generate {\em synthetic data} with systematic parameter variation as follows. We restrict ourselves to the case of a binary classifier. \begin{figure*}[tbh!] 
\begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_data_random_nocontour.pdf} \caption{original data (random)} \label{fig:synthetic-data-a} \vspace{1mm} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_IFR_random.pdf} \caption{Learned representation via {\em iFair}} \label{fig:synthetic-data-b} \vspace{1mm} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_zemel_random.pdf} \caption{Learned representation via \emph{LFR}} \label{fig:synthetic-data-c} \vspace{1mm} \end{subfigure} \newline \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_data_up_nocontour.pdf} \caption{original data ($X1 \leq 3$)} \label{fig:synthetic-data-d} \vspace{1mm} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_IFR_up.pdf} \caption{Learned representation via {\em iFair}} \label{fig:synthetic-data-e} \vspace{1mm} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_zemel_up.pdf} \caption{Learned representation via \emph{LFR}} \label{fig:synthetic-data-f} \vspace{1mm} \end{subfigure} \newline \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_data_left_nocontour.pdf} \caption{original data ($X2 \leq 3$)} \label{fig:synthetic-data-g} \vspace{1mm} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_IFR_left.pdf} \caption{Learned representation via {\em iFair}} \label{fig:synthetic-data-h} \vspace{1mm} \end{subfigure} \begin{subfigure}{0.33\linewidth} \centering \includegraphics[scale=0.48]{synthetic_zemel_left.pdf} \caption{Learned representation via \emph{LFR}} \label{fig:synthetic-data-i} \vspace{1mm} \end{subfigure} \caption{Illustration of properties of data representations on synthetic data. (left: original data, center: \emph{iFair}, right: \emph{LFR}). 
Output class labels: o for Y=0 and + for Y=1. Membership in protected group: blue for A=0 and orange for A=1. Solid lines are the decision boundaries of the respective classifiers. \emph{iFair} outperforms \emph{LFR} on all metrics except for statistical parity.} \label{fig:synthetic-data} \vspace{-6mm} \end{figure*} We generate 100 data points with three attributes: two real-valued non-sensitive attributes $X1$ and $X2$, and one binary attribute $A$ which serves as the protected attribute. We first draw two-dimensional datapoints from a mixture of Gaussians with two components: (i) an isotropic Gaussian with unit variance and (ii) a correlated Gaussian with covariance 0.95 between the two attributes and variance 1 for each attribute. To study the influence of membership in the protected group (i.e., $A$ set to 1), we generate three variants of this data: \squishlist \item Random: $A$ is set to 1 with probability $0.3$ at random. \item Correlation with $X1$: $A$ is set to $1$ if $X1 \leq 3$. \item Correlation with $X2$: $A$ is set to $1$ if $X2 \leq 3$. \end{list} So the three synthetic datasets have the same values for the non-sensitive attributes $X1$ and $X2$ as well as for the outcome variable $Y$. The datapoints differ only in membership in the protected group and its distribution across the output classes $Y$. Figure \ref{fig:synthetic-data} shows these three cases row-wise: subfigures a-c, d-f, g-i, respectively. The left column of the figure displays the original data, with the two labels for output $Y$ depicted by markers: ``o'' for $Y=0$ and ``+'' for $Y=1$, and membership in the protected group by color: orange for $A=1$ and blue for $A=0$. The middle column of Figure \ref{fig:synthetic-data} shows the learned {\em iFair} representations, and the right column shows the representations based on \emph{LFR}. 
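The generation procedure described above can be sketched in a few lines of NumPy. This is a minimal reconstruction for illustration, not the authors' code: the component means and the 50/50 class split are our assumptions (the text specifies only the covariances, the sample size, and the three variants of the protected attribute).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_synthetic(variant="random", n=100):
    """Sketch of the synthetic data described above.

    Half the points come from an isotropic unit-variance Gaussian, half
    from a Gaussian with covariance 0.95 between X1 and X2; the mixture
    component doubles as the class label Y.  Component means are assumed.
    """
    n1 = n // 2
    mean_a, mean_b = [2.0, 2.0], [4.0, 4.0]           # assumed means
    cov_iso = np.eye(2)
    cov_corr = np.array([[1.0, 0.95], [0.95, 1.0]])
    X = np.vstack([rng.multivariate_normal(mean_a, cov_iso, n1),
                   rng.multivariate_normal(mean_b, cov_corr, n - n1)])
    Y = np.concatenate([np.zeros(n1), np.ones(n - n1)]).astype(int)
    if variant == "random":        # A=1 with probability 0.3
        A = (rng.random(n) < 0.3).astype(int)
    elif variant == "x1":          # A correlated with X1
        A = (X[:, 0] <= 3).astype(int)
    elif variant == "x2":          # A correlated with X2
        A = (X[:, 1] <= 3).astype(int)
    else:
        raise ValueError(variant)
    return X, Y, A
```

The three variants share identical $(X1, X2, Y)$ values per seed and differ only in how $A$ is assigned, mirroring the construction in the text.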
Note that what matters in Figure \ref{fig:synthetic-data} (middle and right columns) is the position of the data points in the two-dimensional latent space and the decision boundary of the classifier (solid line). The color of the datapoints and the markers (o and +) depict the true class and true group membership, not the learned values. They are visualized to aid the reader in relating the original data to the transformed representations. Furthermore, small differences in the learned representation are expected due to random initializations of model parameters. The solid lines denote the decision boundaries of the classifiers trained on the learned representations. Hyper-parameters for both {\em iFair} and \emph{LFR} are chosen by performing a grid search over the set $\{0, 0.05, 0.1, 1, 10, 100\}$ for optimal individual fairness of the classifier. For each of the nine cases, we indicate the resulting classifier accuracy $Acc$, individual fairness in terms of consistency $yNN$ with regard to the $k=10$ nearest neighbors \cite{zemel2013learning} (formal definition given in Section \ref{section:fairness-measures}), the statistical parity $Parity$ with regard to the protected group $A=1$, and the equality-of-opportunity notion $EqOpp$ \cite{hardt2016equality} of group fairness. \noindent{\bf Main findings:} Two major insights from this study are: (i) representations learned via \emph{iFair} remain nearly the same irrespective of changes in group membership, and (ii) \emph{iFair} significantly outperforms \emph{LFR} on accuracy, consistency and equality of opportunity, whereas \emph{LFR} wins on statistical parity. In the following we further discuss these findings and their implications. \noindent{\bf Influence of Protected Group:} The middle column in Figure \ref{fig:synthetic-data} shows that the {\em iFair} representation remains largely unaffected by the changes in the group memberships of the datapoints. 
In other words, changing the value of the protected attribute of a datapoint, with all other attribute values remaining the same, has hardly any influence on its learned representation; consequently, it has nearly no influence on the decisions made by algorithms trained on these representations. This is an important and interesting characteristic of a fair representation, as it directly relates to the definition of \emph{individual fairness}. In contrast, membership in the protected group has a pronounced influence on the learned representation of the \emph{LFR} model (see Figure \ref{fig:synthetic-data}, right column). Recall that the color of the datapoints as well as the markers (o and +) are taken from the original data. They depict the true class and group membership of the datapoints, and are visualized to aid the reader. \noindent{\bf Tension in Objective Function:} The optimization via \emph{LFR} \cite{zemel2013learning} has three components: classifier accuracy as utility metric, individual fairness in terms of data loss, and group fairness in terms of statistical parity. We observe that by pursuing group fairness and individual fairness together, the tension with utility is very pronounced. The learned representations are stretched across a compromise over all three goals, ultimately sacrificing utility. In contrast, {\em iFair} pursues only utility and individual fairness, and disregards group fairness. This helps to make the multi-objective optimization more tractable. {\em iFair} clearly outperforms {\em LFR} not only in accuracy, with better decision boundaries, but also in individual fairness. This shows that the tension between utility and individual fairness is lower than between utility and group fairness. 
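The invariance described above can be made concrete with a toy computation. The sketch below follows the standard prototype-based construction of Zemel et al.\ that iFair builds on (a record is mapped to a softmax-weighted combination of prototype vectors under an attribute-weighted distance); the prototype values, dimensions, and weights here are invented for illustration and are not the paper's actual parameters. With near-zero weight on the protected attribute, flipping its value barely moves the learned representation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def map_to_prototypes(x, V, alpha):
    """Map record x onto K prototypes V[k] via softmax over the
    attribute-weighted squared distance (LFR-style sketch)."""
    d = np.array([np.sum(alpha * (x - v) ** 2) for v in V])
    u = softmax(-d)          # membership probabilities
    return u @ V             # low-rank representation x~

rng = np.random.default_rng(1)
V = rng.random((3, 4))                    # K=3 prototypes, M=4 attributes
alpha = np.array([1.0, 1.0, 1.0, 1e-6])   # protected attribute downweighted
x0 = np.array([0.2, 0.7, 0.1, 0.0])
x1 = np.array([0.2, 0.7, 0.1, 1.0])       # same individual, protected bit flipped
gap = np.linalg.norm(map_to_prototypes(x0, V, alpha)
                     - map_to_prototypes(x1, V, alpha))
```

Here `gap` is tiny (on the order of the protected attribute's weight), which is exactly the behavior visible in the middle column of Figure 1: representations, and hence downstream decisions, are insensitive to the protected bit.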
\noindent{\bf Trade-off between Utility and Individual Fairness:} The improvement that {\em iFair} achieves in individual fairness comes at the expense of a small drop in utility. The trade-off is caused by the loss of information in learning representative prototypes. The choice of the mapping function in Equation \ref{eq-prototype-mapping} and the pairwise distance function $d(.)$ in Definition \ref{eq:distance-function} affects the ability to learn prototypes. Our framework is flexible and easily supports other kernels and distance functions. Exploring these influence factors is a direction for future work. \section{Experiments}\label{section:experiments} The key hypothesis that we test in the experimental evaluation is whether {\em iFair} can indeed reconcile the two goals of {\em individual fairness} and {\em utility} reasonably well. As {\em iFair} is designed as an application-agnostic representation, we test its versatility by studying both classifier and learning-to-rank use cases, in Subsections \ref{subsec:classification} and \ref{subsec:ranking}, respectively. We compare {\em iFair} to a variety of baselines including LFR \cite{zemel2013learning} for classification and FA*IR \cite{zehlike2017fa} for ranking. Although group fairness and its underlying legal and political constraints are not among the design goals of our approach, we include group fairness measures in reporting on our experiments -- shedding light on this aspect from an empirical perspective. 
\subsection{Datasets} \label{section:groundtuth} \noindent We apply the {\em iFair} framework to five real-world, publicly available datasets, previously used in the literature on algorithmic fairness. \begin{itemize} \item \textbf{ProPublica's COMPAS} recidivism dataset \cite{angwin2016machine}, a widely used test case for fairness in machine learning and algorithmic decision making. We set \emph{race} as a {protected attribute}, and use the binary indicator of recidivism as the outcome variable $Y$. \item \textbf{Census Income} dataset consists of survey results of income of 48,842 adults in the US \cite{Dua:2017}. We use gender as the protected attribute and the binary indicator variable of \emph{income $> 50 K$} as the outcome variable Y. \item \textbf{German Credit} data has $1000$ instances of credit risk assessment records \cite{Dua:2017}. Following the literature, we set \emph{age} as the sensitive attribute, and \emph{credit worthiness} as the outcome variable. \item \textbf{Airbnb} data consists of house listings from five major cities in the US, collected from \url{http://insideairbnb.com/get-the-data.html} (June 2018). After appropriate data cleaning, there are $27,597$ records. For experiments, we choose a subset of 22 informative attributes (categorical and numerical) and infer host gender from \emph{host name}, using lists of common first names. We use \emph{gender} of the host as the protected attribute and \emph{rating/price} as the ranking variable. \item \textbf{Xing} is a popular job search portal in Germany (similar to LinkedIn). We use the anonymized data given by \cite{zehlike2017fa}, consisting of top $40$ profiles returned for $57$ job queries. For each candidate we collect information about job category, work experience, education experience, number of views of the person's profile, and gender. We set \emph{gender} as the protected attribute. 
We use a weighted sum of work experience, education experience and number of profile views as a score that serves as the ranking variable. \end{itemize} The Compas, Census and Credit datasets are used for experiments on classification, and the Xing and Airbnb datasets are used for experiments on learning-to-rank regression. Table \ref{tbl:dataset-statistics} gives details of experimental settings and statistics for each dataset, including base-rate (fraction of samples belonging to the positive class, for both the protected group and its complement), and dimensionality M (after unfolding categorical attributes). We choose the protected attributes and outcome variables to be in line with the literature. In practice, however, such decisions would be made by domain experts and according to official policies and regulations. The flexibility of our framework allows for multiple protected attributes, multivariate outcome variables, as well as inputs of all data types. \begin{table}[tbh!] \center \setlength\tabcolsep{1.75 pt} \noindent\small\resizebox{\linewidth}{!}{% \begin{tabular}{lccccll} \toprule Dataset & Base-rate & Base-rate & N & M & Outcome & Protected\\ & protected & unprotected & & \\ \midrule Compas & 0.52 & 0.40 & 6901 & 431 & recidivism & race\\ Census & 0.12 & 0.31 & 48842 & 101 & income & gender\\ Credit & 0.67 & 0.72 & 1000 & 67 & loan default & age\\ Airbnb & - & - & 27597 & 33 & rating/price & gender\\ Xing & - & - & 2240 & 59 & work + education & gender\\ \bottomrule \end{tabular} } \caption{Experimental settings and statistics of the datasets.} \label{tbl:dataset-statistics} \vspace{-5mm} \end{table} \subsection{Setup and Baselines} \noindent In each dataset, categorical attributes are transformed using one-hot encoding, and all feature vectors are normalized to have unit variance. We randomly split the datasets into three parts. 
We use one part to train the model to learn model parameters, the second part as a validation set to choose hyper-parameters by performing a grid search (details follow), and the third part as a test set. We use the same data split to compare all methods. We evaluate all data representations -- {\em iFair} against various baselines -- by comparing the results of a standard classifier (\emph{logistic regression}) and a learning-to-rank regression model (\emph{linear regression}) applied to \begin{itemize} \item {\bf Full Data}: the original dataset. \item {\bf Masked Data}: the original dataset without protected attributes. \item {\bf SVD}: transformed data by performing dimensionality reduction via singular value decomposition (SVD) \cite{halko2009finding}, with two variants of data: (a) full data and (b) masked data. We name these variants \emph{\bf SVD} and \emph{\bf SVD-masked}, respectively. \item \emph{\bf LFR}: the learned representation by the method of \citet{zemel2013learning}. \item \emph{\bf FA*IR}: this baseline does not produce any data representation. FA*IR \cite{zehlike2017fa} is a ranking method which expects as input a set of candidates ranked by their \emph{deserved scores} and returns a ranked permutation which satisfies group fairness at every prefix of the ranking. We extended the code shared by \citet{zehlike2017fa} to make it suitable for comparison (see Section \ref{section:ranking-experiments}). \item \emph{\bf iFair}: the representation learned by our model. We perform experiments with two kinds of initializations for the model parameter $\alpha$ (attribute weight vector): (a) random initialization in $(0,1)$ and (b) initializing protected attributes to (near-)zero values, to reflect the intuition that protected attributes should be discounted in the distance-preservation of individual fairness (and avoiding zero values to allow slack for the numerical computations in learning the model). 
We call these two methods \emph{\bf iFair-a} and \emph{\bf iFair-b}, respectively. \end{itemize} \noindent {\bf Model Parameters:} All the methods were trained in the same way. We initialize model parameters ($v_k$ vectors and the $\alpha$ vector) to random values from a uniform distribution in $(0,1)$ (unless specified otherwise, for the {\em iFair-b} method). To compensate for variations caused by the initialization of model parameters, for each method and at each setting, we report the results from the best of $3$ runs. \noindent{\bf Hyper-Parameters:} As for hyper-parameters (e.g., $\lambda$ and $\mu$ in Equation \ref{eq:objective} of {\em iFair}), including the dimensionality $K$ of the low-rank representations, we perform a grid search over the set $\{0, 0.05, 0.1, 1, 10, 100\}$ for mixture coefficients and the set $\{10,20,30\}$ for the dimensionality $K$. Recall that the input data is pre-processed with categorical attributes unfolded into binary attributes; hence the choices for $K$. The mixture coefficients ($\lambda, \mu, \dots$) control the trade-off between different objectives: utility, individual fairness, group fairness (when applicable). Since it is far from straightforward to decide which of the multiple objectives is more important, we choose these hyper-parameters based on different choices for the optimization goal (e.g., maximize utility alone or maximize a combination of utility and individual fairness). Thus, our evaluation results report multiple observations for each model, depending on the goal for tuning the hyper-parameters. When possible, we identify Pareto-optimal choices with respect to multiple objectives; that is, choices that are not outperformed by another choice on all objectives. 
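The Pareto-optimality criterion used for model selection above can be sketched as a small helper: a hyper-parameter choice survives if no other choice is at least as good on both objectives and strictly better overall. The function and the example values below are ours, for illustration only:

```python
def pareto_optimal(points):
    """Return indices of points (e.g. (AUC, yNN) pairs per model)
    that are not dominated, where higher is better in each objective.
    A point is dominated if some other point is >= in both objectives
    and differs from it."""
    idx = []
    for i, (u_i, f_i) in enumerate(points):
        dominated = any(u_j >= u_i and f_j >= f_i and (u_j, f_j) != (u_i, f_i)
                        for j, (u_j, f_j) in enumerate(points) if j != i)
        if not dominated:
            idx.append(i)
    return idx

# Illustrative (utility, fairness) pairs: the last point is dominated.
candidates = [(0.84, 0.84), (0.78, 0.91), (0.80, 0.88), (0.76, 0.85)]
# pareto_optimal(candidates) -> [0, 1, 2]
```

This brute-force $O(n^2)$ scan is perfectly adequate for the handful of grid-search configurations considered per model.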
\subsection{Evaluation Measures}\label{section:fairness-measures} \squishlist \item \textbf{Utility:} measured as accuracy (Acc) and the area under the ROC curve (AUC) for the classification task, and as Kendall's Tau (KT) and mean average precision at 10 (MAP) for the learning-to-rank task. \item \textbf{Individual Fairness:} measured as the \emph{consistency} of the outcome $\hat{y}_i$ of an individual with the outcomes of his/her k=10 nearest neighbors. This metric has been introduced in \cite{zemel2013learning}\footnote{Our version slightly differs from that in \cite{zemel2013learning} by fixing a minor bug in the formula.} and captures the intuition that similar individuals should be treated similarly. Note that the nearest neighbors of an individual, $kNN(x_i)$, are computed on the original attribute values of $x_i$ excluding protected attributes, whereas the predicted response variable $\hat{y}_i$ is computed on the output of the learned representations $\tilde{x_i}$: \begin{equation*} \text{yNN} = 1 - \frac{1}{N\cdot k} \sum_{i = 1}^{N}\, \sum_{j \in kNN(x_i)} \norm{\hat{y}_i - \hat{y}_j}, \end{equation*} where $N$ is the number of test records. \item \textbf{Group Fairness:} measured as \squishlist \item[-] \emph{Equality of Opportunity} (\emph{EqOpp}) \cite{hardt2016equality}: the difference in the \emph{True Positive} rates between the protected group $X^+$ and the non-protected group $X^-$; \item[-] \emph{Statistical Parity} defined as: \begin{equation*} Parity = 1 - \norm{\frac{1}{\norm{X^+}} \sum_{i \in X^+} \hat{y}_i - \frac{1}{\norm{X^-}}\sum_{j \in X^-}{\hat{y}_j}} \end{equation*} \end{list} We use the modern notion of \emph{EqOpp} as our primary metric of group fairness, but report the traditional measure of \emph{Parity} as well. \end{list} \subsection{Evaluation on Classification Task} \label{subsec:classification} \begin{figure*}[t!] \centering \includegraphics[scale=0.45]{AUC_yNN.pdf} \caption{Plots of utility vs. 
individual fairness trade-off for the classification task. Dashed lines represent Pareto-optimal points. } \label{fig:compas_paretoPlot} \vspace{-2mm} \end{figure*} \begin{table*}[t!] \center \setlength\tabcolsep{4pt} \noindent\small\resizebox{\linewidth}{!}{% \begin{tabular}{c|l|rrrrr|rrrrr|rrrrr} \toprule Tuning & Method & \multicolumn{5}{c}{Compas} & \multicolumn{5}{c}{Census} & \multicolumn{5}{c}{Credit} \\ & & Acc & AUC & EqOpp & Parity & yNN & Acc & AUC & EqOpp & Parity & yNN & Acc & AUC & EqOpp & Parity & yNN \\ \midrule Baseline & Full Data & 0.66 & 0.65 & 0.70 & 0.72 & 0.84 & 0.84 & 0.77 & 0.90 & 0.81 & 0.90 & 0.74 & 0.66 & 0.82 & 0.81 & 0.78 \\ \midrule Max & LFR & \bf 0.60 & \bf 0.59 & 0.60 & 0.62 & 0.79 & \bf 0.81 & \bf 0.77 & 0.81 & 0.75 & 0.90 & 0.71 & \bf 0.64 & 0.78 & 0.77 & 0.77 \\ Utility & iFair-a & \bf 0.60 & 0.58 & \bf 0.91 & \bf 0.91 & 0.87 & 0.78 & 0.63 & \bf 0.96 & \bf 0.91 & 0.91 & 0.69 & 0.61 & 0.84 & 0.86 & 0.74 \\ (a) & iFair-b & 0.59 & 0.58 & 0.84 & 0.84 & \bf 0.88 & 0.78 & 0.65 & 0.78 & 0.85 & \bf 0.93 & \bf 0.73 & 0.59 & \bf 0.97 & \bf 0.98 & \bf 0.85 \\ \midrule Max & LFR & 0.54 & 0.51 & \bf 0.99 & 0.99 & \bf 0.97 & \bf 0.76 & 0.51 & \bf 1.00 & 0.99 & \bf 1.00 & 0.72 & 0.51 & \bf 0.99 & 0.98 & 0.98 \\ Fairness & iFair-a & \bf 0.56 & \bf 0.53 & 0.97 & 0.99 & 0.95 & \bf 0.76 & 0.51 & 0.95 & \bf 1.00 & 0.99 & \bf 0.73 & \bf 0.53 & \bf 0.99 & 0.98 & 0.97 \\ (b) & iFair-b & 0.55 & 0.52 & 0.98 & \bf 1.00 & \bf 0.97 & \bf 0.76 & \bf 0.52 & 0.98 & 0.99 & 0.99 & 0.72 & 0.51 & \bf 0.99 & \bf 1.00 & \bf 0.99 \\ \midrule & LFR & 0.59 & 0.57 & 0.72 & 0.77 & 0.88 & \bf 0.78 & \bf 0.76 & \bf 0.94 & 0.74 & 0.92 & 0.71 & \bf 0.64 & 0.78 & 0.77 & 0.77 \\ Optimal & iFair-a & \bf 0.60 & \bf 0.58 & \bf 0.91 & \bf 0.91 & 0.87 & 0.77 & 0.63 & 0.93 & \bf 0.90 & 0.92 & \bf 0.73 & 0.57 & 0.94 & 0.94 & \bf 0.90 \\ (c) & iFair-b & 0.59 & \bf 0.58 & 0.83 & 0.84 & \bf 0.89 & \bf 0.78 & 0.65 & 0.78 & 0.85 & \bf 0.93 & \bf 0.73 & 0.59 & \bf 0.97 & \bf 0.98 & 0.85 \\ 
\bottomrule \end{tabular} } \caption{ Comparison of LFR vs.\ iFair for the classification task, with hyper-parameter tuning for criteria (a) max utility: best AUC, (b) max individual fairness: best consistency, and (c) ``Optimal'': best harmonic mean of AUC and consistency.} \label{tbl:classification_task} \vspace{-3mm} \end{table*} This section evaluates the effectiveness of {\em iFair} and its competitors on a classification task. We focus on the trade-off between utility and individual fairness that learned representations must balance when used to train classifiers. For all methods, wherever applicable, hyper-parameters were tuned via grid search. Specifically, we chose the models that were Pareto-optimal with regard to AUC and yNN. \noindent{\bf Results:} Figure \ref{fig:compas_paretoPlot} shows the results for all methods and datasets, plotting utility (AUC) against individual fairness (yNN). The dashed lines show models that are Pareto-optimal with regard to AUC and yNN. We observe that there is a considerable amount of unfairness in the original dataset, which is reflected in the results of \emph{Full Data} in Figure \ref{fig:compas_paretoPlot}. \emph{Masked Data} and the two SVD variants show an improvement in fairness; however, there is still substantial unfairness hidden in the data in the form of correlated attributes. For the Compas dataset, which is the most difficult of the three datasets due to its dimensionality, SVD completely fails. The representations learned by \emph{LFR} and {\em iFair} dominate all other methods in coping with the trade-off. \emph{iFair-b} is the overall winner: it is consistently Pareto-optimal for all three datasets and all but the degenerate extreme points. At the extreme points of the trade-off spectrum, no method can achieve near-perfect utility without substantially losing fairness, and no method can be near-perfectly fair without substantially losing utility. 
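For concreteness, the consistency measure yNN used throughout this evaluation can be computed in a few lines. The sketch below is a minimal NumPy version; the Euclidean neighbor search over the non-protected attributes follows the setup described above, while the function and variable names are ours:

```python
import numpy as np

def ynn_consistency(X_star, y_hat, k=10):
    """Consistency yNN = 1 - (1/(M*k)) * sum_i sum_{j in kNN(x_i*)} |y_i - y_j|.

    X_star : (M, l) array of non-protected attributes (the kNN space).
    y_hat  : (M,)   array of predicted outcomes in [0, 1].
    """
    M = len(X_star)
    # pairwise squared Euclidean distances in the non-protected space
    d = ((X_star[:, None, :] - X_star[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # exclude each record from its own neighbor set
    knn = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest neighbors
    diffs = np.abs(y_hat[:, None] - y_hat[knn]).sum(axis=1)
    return 1.0 - diffs.sum() / (M * k)
```

As in the definition, the neighbor sets are computed on the original non-protected attributes, while the outcomes $\hat{y}_i$ come from a model trained on the learned representations.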
Table \ref{tbl:classification_task} shows detailed results for three choices of tuning hyper-parameters (via grid search): (a) considering utility (AUC) only, (b) considering individual fairness (yNN) only, (c) using the harmonic mean of utility and individual fairness as tuning target. Here we focus on the \emph{LFR} and \emph{iFair} methods, as the other baselines do not have hyper-parameters to control trade-offs and are good only at extreme points of the objective space anyway. The results confirm and further illustrate the findings of Figure \ref{fig:compas_paretoPlot}. The two \emph{iFair} methods, tuned for the combination of utility and individual fairness (case (c)), achieve the best overall results: iFair-b shows an improvement of 6\% in consistency for a drop of 10\% in accuracy on the Compas dataset (+3.3\% and $-$7\% for Census, and +9\% and $-$1.3\% for Credit). Both variants of \emph{iFair} outperform \emph{LFR} by achieving significantly better individual fairness, with on-par or better values for utility. \subsection{Evaluation on Learning-to-Rank Task} \label{section:ranking-experiments} \label{subsec:ranking} This section evaluates the effectiveness of {\em iFair} on a regression task for ranking people on the Xing and Airbnb datasets. We report ranking utility in terms of Kendall's Tau (KT) and average precision (AP), individual fairness in terms of consistency (yNN), and group fairness in terms of the fraction of protected candidates in the top-10 ranks (the statistical-parity equivalent for the ranking task). To evaluate models in a real-world setting, we constructed, for each dataset, multiple queries and corresponding ground-truth rankings. For the Xing dataset, we follow \citet{zehlike2017fa} and use the 57 job search queries. For the Airbnb dataset, we generated a set of queries based on attribute values for \emph{city}, \emph{neighborhood} and \emph{home type}. After filtering for queries with at least 10 listings, we were left with 43 queries. 
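Ranking utility above is reported as Kendall's Tau between a predicted ordering and the ground-truth ordering. A self-contained sketch of the pairwise computation (the Tau-a variant with ties left uncounted, a simplification of the exact measure used):

```python
from itertools import combinations

def kendalls_tau(scores_true, scores_pred):
    """Kendall's Tau over two score lists for the same candidates:
    (concordant - discordant pairs) / total pairs. Ties count as
    neither concordant nor discordant (a simplification)."""
    n = len(scores_true)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        a = scores_true[i] - scores_true[j]   # sign of the true ordering
        b = scores_pred[i] - scores_pred[j]   # sign of the predicted ordering
        if a * b > 0:
            concordant += 1
        elif a * b < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Identical orderings yield $+1$, fully reversed orderings $-1$.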
As stated in Section \ref{section:groundtuth}, for the Xing dataset, the deserved score is a weighted sum of the true qualifications of an individual, that is, work experience, education experience and number of profile views. To test the sensitivity of our results to different choices of weights, we varied the weights over a grid of values in $\{0.0, 0.25, 0.5, 0.75, 1.0\}$. We observe that the choice of weights has no significant effect on the measures of interest. Table \ref{tbl:ad-hoc_score_test} shows details. For the remainder of this section, the reported results correspond to uniform weights. \begin{table}[tbh!] \center \setlength\tabcolsep{4pt} \noindent\small\resizebox{\linewidth}{!}{% \begin{tabular}{ccc |c|cccc} \toprule \multicolumn{3}{c}{Weights} & Base-rate & MAP & KT & yNN & \% Protected \\ $\alpha_{work}$ & $\alpha_{edu}$ & $\alpha_{views}$ & Protected & & & & in output\\ \midrule 0.00 & 0.50 & 1.00 & 33.57 & 0.76 & 0.58 & 1.00 & 31.07 \\ 0.25 & 0.75 & 0.00 & 33.57 & 0.83 & 0.69 & 0.95 & 35.54 \\ 0.50 & 1.00 & 0.25 & 32.68 & 0.74 & 0.56 & 1.00 & 31.07 \\ 0.75 & 0.00 & 0.50 & 32.68 & 0.75 & 0.55 & 1.00 & 31.07 \\ 0.75 & 0.25 & 0.00 & 31.25 & 0.84 & 0.74 & 0.96 & 33.57 \\ 1.00 & 0.25 & 0.75 & 32.86 & 0.75 & 0.56 & 1.00 & 31.07 \\ 1.00 & 1.00 & 1.00 & 32.68 & 0.76 & 0.57 & 1.00 & 31.07 \\ \bottomrule \end{tabular} } \caption{Experimental results on sensitivity of iFair to weights in ranking scores for Xing dataset.} \label{tbl:ad-hoc_score_test} \vspace{-3mm} \end{table} Note that the baseline \emph{LFR} used for the classification experiment is not geared for regression tasks and is thus omitted here. Instead, we compare {\em iFair} against the \emph{FA*IR} method of \citet{zehlike2017fa}, which is specifically designed to incorporate group fairness into rankings. 
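The weighted deserved score described above amounts to a simple linear combination of the qualification signals. A sketch, where the min-max normalization of each signal is our assumption (the paper's exact scaling may differ) and uniform weights correspond to the setting reported in the text:

```python
import numpy as np

def deserved_score(work, edu, views, a_work=1.0, a_edu=1.0, a_views=1.0):
    """Weighted sum of (min-max normalized) qualification signals:
    work experience, education experience, number of profile views."""
    def norm(v):
        # scale each signal into [0, 1]; constant columns map to 0
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return a_work * norm(work) + a_edu * norm(edu) + a_views * norm(views)
```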
\noindent \textbf{Baseline FA*IR:} This ranking method takes as input a set of candidates ranked according to a precomputed score, and returns a ranked permutation which satisfies group fairness without making any changes to the scores of the candidates. Since one cannot measure consistency directly on rankings, we make a minor modification to FA*IR such that it also returns fair scores along with a fair ranking. To this end, we feed masked data to a linear regression model and compute a score for each candidate. FA*IR operates on two priority queues (sorted by previously computed scores): $P_0$ for non-protected candidates and $P_1$ for protected candidates. For each rank $k$, it computes the minimum number of protected candidates required to satisfy statistical parity (via significance tests) at position $k$. If the parity constraint is satisfied, it chooses the best candidate and its score from $P_0 \cup P_1$. If the constraint is not satisfied, it chooses the best candidate from $P_1$ for the next rank and leaves a placeholder for the score. Our extension linearly interpolates the scores to fill the placeholders, and thus returns a ranked list along with ``fair scores''. \noindent{\bf Results:} Table \ref{tbl:Xing_comparison} shows a comparison of experimental results for the ranking task for all methods across all datasets. We report mean values of average precision (MAP), Kendall's Tau (KT) and consistency (yNN) over all 57 job search queries for Xing and 43 house listing queries for Airbnb. Similar to the classification task, \emph{Full Data} and \emph{Masked Data} have the best utility (MAP and KT), whereas iFair has the best individual fairness (yNN). iFair clearly outperforms both variants of SVD by achieving significantly better individual fairness (yNN) for comparable values of utility. 
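The two-queue construction with score interpolation used in our FA*IR extension can be sketched as follows. For brevity, the sketch replaces FA*IR's binomial significance test by a simple proportional floor $\lfloor p \cdot k \rfloor$ on the number of protected candidates per prefix, and fills each placeholder score from its nearest known-rank neighbors; this is a simplified illustration, not the exact algorithm:

```python
import math

def fair_rerank(candidates, p):
    """candidates: list of (score, is_protected) pairs, any order.
    Returns a re-ranked list of (score, is_protected) where every prefix of
    length k contains at least floor(p*k) protected candidates, and scores of
    candidates taken out of score order are interpolated between neighbors."""
    prot = sorted([c for c in candidates if c[1]], reverse=True)
    nonprot = sorted([c for c in candidates if not c[1]], reverse=True)
    out, scores, n_prot = [], [], 0
    for k in range(1, len(candidates) + 1):
        need = math.floor(p * k)             # minimum protected count at rank k
        if n_prot < need and prot:
            c = prot.pop(0)                  # forced pick: score becomes a placeholder
            out.append(c)
            scores.append(None)
        else:
            # otherwise take the best remaining candidate overall
            if prot and (not nonprot or prot[0] >= nonprot[0]):
                c = prot.pop(0)
            else:
                c = nonprot.pop(0)
            out.append(c)
            scores.append(c[0])
        n_prot += int(c[1])
    # fill placeholders from the nearest known scores above and below
    for i, s in enumerate(scores):
        if s is None:
            lo = next((x for x in reversed(scores[:i]) if x is not None), None)
            hi = next((x for x in scores[i + 1:] if x is not None), None)
            if lo is not None and hi is not None:
                scores[i] = (lo + hi) / 2
            else:
                scores[i] = lo if lo is not None else hi
    return list(zip(scores, (c[1] for c in out)))
```

The returned list thus carries ``fair scores'' usable for consistency measurements.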
As expected, \emph{FA*IR}, which optimizes to satisfy statistical parity across groups, has the highest fraction of protected candidates in the top 10 ranks, but does not achieve any gains on individual fairness. This is not surprising, though, given its design goals. It also underlines our strategic point that individual fairness needs to be explicitly taken care of as a first-order objective. Between {\em FA*IR} and {\em iFair}, there is no clear winner, given their different objectives. We note, though, that the good utility that {\em FA*IR} achieves in some configurations critically hinges on the choice of the value for its parameter $p$. \begin{table}[tbh!] \center \setlength\tabcolsep{2pt} \noindent\tiny\resizebox{\columnwidth}{!}{% \begin{tabular}{clcccc} \toprule Dataset & Method & MAP & KT & yNN & \% Protected\\ & & (AP@10) & (mean) & (mean) & in top $10$ \\ \midrule & Full Data & \textbf{1.00} & \textbf{1.00} & 0.93 & 32.50 \\ & Masked Data & \textbf{1.00} & \textbf{1.00} & 0.93 & 32.68 \\ & SVD & 0.74 & 0.59 & 0.81 & 31.79 \\ Xing & SVD-masked & 0.67 & 0.50 & 0.78 & 32.86 \\ (57 queries)& FA*IR (p = 0.5) & 0.93 & 0.94 & 0.92 & 38.21 \\ & FA*IR (p = 0.9) & 0.78 & 0.78 & 0.85 & \textbf{48.57} \\ & iFair-b & 0.76 & 0.57 & \textbf{1.00} & 31.07 \\ \midrule & Full Data & \textbf{0.68} & \textbf{0.53} & 0.72 & 47.44 \\ & Masked Data & 0.67 & 0.53 & 0.72 & 47.44 \\ & SVD & 0.66 & 0.49 & 0.73 & 48.37 \\ Airbnb & SVD-masked & 0.66 & 0.49 & 0.73 & 48.37 \\ (43 queries) & FA*IR (p = 0.5) & 0.67 & 0.52 & 0.72 & 48.60 \\ & FA*IR (p = 0.6) & 0.65 & 0.51 & 0.73 & \textbf{51.16} \\ & iFair-b & 0.60 & 0.45 & \textbf{0.80} & 49.07 \\ \bottomrule \end{tabular} } \caption{Experimental results for ranking task. Reported values are means over multiple query rankings for the criterion ``Optimal'': best harmonic mean of MAP and yNN.} \label{tbl:Xing_comparison} \vspace{-3mm} \end{table} \subsection{Information Obfuscation \& Relation to Group Fairness} \begin{figure}[tbh!] 
\centering \includegraphics[scale=0.48]{yAdvAcc.pdf} \vspace{-1mm} \caption{Adversarial accuracy of predicting protected group membership. The lower the better.} \label{fig:adversary} \vspace{-2mm} \end{figure} \begin{figure}[tbh!] \begin{subfigure}{0.98\columnwidth} \centering \includegraphics[scale=0.46]{IFR_FAIR_Xing_split.pdf} \vspace{-9mm} \subcaption{Xing} \label{fig:IFR_FAIR_Xing} \vspace{1mm} \end{subfigure} \newline \begin{subfigure}{0.98\columnwidth} \centering \includegraphics[scale=0.46]{IFR_FAIR_Airbnb_split.pdf} \vspace{-9mm} \subcaption{Airbnb} \label{fig:IFR_FAIR_Airbnb} \vspace{1mm} \end{subfigure} \caption{Applying the FA*IR algorithm to \emph{iFair} representations.} \label{fig:IFR_FAIR} \vspace{-5mm} \end{figure} We also investigate the ability of our model to obfuscate information about protected attributes. A reasonable proxy for the extent to which protected information is still retained in the {\em iFair} representations is the ability to predict the value of the protected attribute from the learned representations. We trained a logistic-regression classifier to predict the protected group membership from: (i) Masked Data, (ii) learned representations via \emph{LFR}, and (iii) learned representations via \emph{iFair-b}. \spara{Results:} Figure \ref{fig:adversary} shows the adversarial accuracy of predicting the protected group membership for all 5 datasets (with LFR not applicable to Xing and Airbnb). For all datasets, \emph{iFair} manages to substantially reduce the adversarial accuracy. This signifies that its learned representations contain little information on protected attributes, despite the presence of correlated attributes. In contrast, \emph{Masked Data} still reveals enough implicit information on protected groups and cannot prevent the adversarial classifier from achieving fairly good accuracy. \spara{Relation to Group Fairness:} Consider the notions of group fairness defined in Section \ref{section:fairness-measures}. 
Statistical parity requires the probability of predicting a positive outcome to be independent of the protected attribute: $P(\hat{Y} = 1| S = 1) = P(\hat{Y} = 1| S = 0)$. Equality of opportunity requires this probability to be independent of the protected attribute conditioned on the true outcome $Y$: $P(\hat{Y} = 1| S = 1, Y = 1) = P(\hat{Y} = 1| S = 0, Y = 1)$. Thus, forgetting information about the protected attribute indirectly helps improve group fairness, since algorithms trained on the individually fair representations carry largely reduced information on protected attributes. This argument is supported by our empirical results on group fairness for all datasets. In Table \ref{tbl:classification_task}, although group fairness is not an explicit goal, we observe substantial improvements of more than 10 percentage points; the performance for the other datasets is similar. However, the extent to which {\em iFair} also benefits group fairness criteria depends on the base rates $P(Y = 1| S = 1)$ and $P(Y = 1| S = 0)$ of the underlying data. Therefore, in applications where statistical parity is a legal requirement, additional steps are needed, as discussed next. \spara{Enforcing parity:} By its application-agnostic design, it is fairly straightforward to enhance {\em iFair} by post-processing steps that enforce statistical parity, if needed. Obviously, this requires access to the values of protected attributes, but this is the case for all group fairness methods. We demonstrate the extensibility of our framework by applying the \emph{FA*IR} \cite{zehlike2017fa} technique as a post-processing step to the {\em iFair} representations of the Xing and Airbnb data. For each dataset, we generate top-$k$ rankings by varying the target minimum fraction of protected candidates (parameter $p$ of the FA*IR algorithm). 
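Both group-fairness probabilities can be estimated directly from test-set predictions. A minimal sketch (mirroring our tables, we report one minus the absolute difference, so that 1.0 means perfectly fair; the function name is ours):

```python
import numpy as np

def group_fairness(y_hat, y_true, s):
    """Empirical statistical parity and equality of opportunity.
    y_hat, y_true in {0, 1}; s = 1 marks the protected group."""
    y_hat, y_true, s = map(np.asarray, (y_hat, y_true, s))
    # parity: P(Y_hat=1 | S=1) vs. P(Y_hat=1 | S=0)
    parity = 1 - abs(y_hat[s == 1].mean() - y_hat[s == 0].mean())
    # equality of opportunity: true-positive rates per group,
    # i.e., P(Y_hat=1 | S=s, Y=1)
    tpr_prot = y_hat[(s == 1) & (y_true == 1)].mean()
    tpr_nonprot = y_hat[(s == 0) & (y_true == 1)].mean()
    eq_opp = 1 - abs(tpr_prot - tpr_nonprot)
    return parity, eq_opp
```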
Figure \ref{fig:IFR_FAIR} reports ranking utility (MAP), the percentage of protected candidates in the top 10 positions, and individual fairness (yNN) for increasing values of the FA*IR parameter $p$. The key observation is that the combined model {\em iFair + FA*IR} can indeed achieve whatever share of protected group members is required, in addition to the individual fairness property of the learned representation. \section{Introduction} \noindent {\bf Motivation:} People are rated, ranked and selected or not selected in an increasing number of online applications, with algorithmic decisions based on machine learning models. Examples are approvals or denials of loans or visas, predicting recidivism for law enforcement, or rankings in job portals. As algorithmic decision making becomes pervasive in all aspects of our daily life, societal and ethical concerns \cite{angwin2016machine,crawford2016there} are rapidly growing. A basic approach is to establish policies that disallow the inclusion of potentially discriminating attributes such as gender or race, and ensure that classifiers and rankings operate solely on task-relevant attributes such as job qualifications. The problem has garnered significant attention in the data-mining and machine-learning communities. Most of this work considers so-called {\em group fairness} models, most notably, the statistical parity of outcomes in binary classification tasks, as a notion of fairness. Typically, classifiers are extended to incorporate demographic groups in their loss functions, or include constraints on the fractions of groups in the accepted class \cite{calders2009building,kamiran2010discrimination, kamishima2012considerations,pedreshi2008discrimination, feldman2015certifying,fish2016confidence} to reflect legal boundary conditions and regulatory policies. For example, computing a shortlist of people invited for job interviews should have a gender mix that is proportional to the base population of job applicants. 
The classifier objective is faced with a fundamental trade-off between utility (typically accuracy) and fairness, and needs to aim for a good compromise. Other definitions of group fairness have been proposed \cite{hardt2016equality,zafar2017fairness,zhang2016identifying}, and variants of group fairness have been applied to learning-to-rank tasks \cite{zehlike2017fa,yang2017measuring,singh2018fairness}. In all these cases, fair classifiers or regression models need an explicit specification of sensitive attributes such as gender, and often the identification of a specific {\em protected (attribute-value) group} such as gender equals female. \noindent {\bf The Case for Individual Fairness:} \comment{ \begin{table*}[t!] \centering \noindent\resizebox{\linewidth}{!}{ \begin{tabular}{@{}lllllll@{}} \toprule Candidate & Job & Work Experience (months) & Education Experience (months)& Profile Views & Xing Rank & Minority Group \\ \midrule male & Field Engineer & 301 & 49 & 374 & 4 & female \\ female & Field Engineer & 319 & 51 & 857 & 15 & female \\ \midrule male & Front End Developer & 30 & 0 & 154 & 4 & female \\ female & Front End Developer & 30 & 25 & 128 & 18 & female \\ \midrule female & Daycare & 333 & 83 & 780 & 13 & male \\ male & Daycare & 541 & 89 & 1027 & 33 & male \\ \bottomrule \end{tabular}} \vspace{0.2cm} \caption{Ranking results from \url{www.xing.com} (Jan 2017) for a variety of job search queries.} \label{tbl-ranking-example} \vspace{2mm} \end{table*} \citet{dwork2012fairness} argued that group fairness, while appropriate for policies regarding demographic groups, does not capture the goal of treating individual people in a fair manner. This led to the definition of {\em individual fairness}: similar individuals should be treated similarly. For binary classifiers, this means that individuals who are similar on the task-relevant attributes (e.g., job qualifications) should have nearly the same probability of being accepted by the classifier. 
This kind of fairness is intuitive and captures aspects that group fairness does not handle. Most importantly, it addresses potential discrimination of people by disparate treatment despite the same or similar qualifications (e.g., for loan requests, visa applications or job offers), and it can mitigate such risks. \noindent{\bf Problem Statement:} Unfortunately, the rationale for capturing individual fairness has not received much follow-up work -- the most notable exception being \cite{zemel2013learning} as discussed below. The current paper advances the approach of individual fairness in its practical viability, and specifically addresses the key problem of coping with the critical trade-off between fairness and utility: {\em How can a data-driven system provide a high degree of individual fairness while also keeping the utility of classifiers and rankings high?} Is this possible in an application-agnostic manner, so that arbitrary downstream applications are supported? Can the system handle situations where sensitive attributes are not explicitly specified at all or become known only at decision-making time (i.e., after the system was trained and deployed)? Simple approaches, like removing all sensitive attributes from the data and then performing a standard clustering technique, do not reconcile these two conflicting goals: standard clustering may lose too much utility, and individual fairness needs to consider attribute correlations beyond merely masking the explicitly protected ones. Moreover, the additional goal of generality, in terms of supporting arbitrary downstream applications, mandates that cases without explicitly sensitive attributes or with sensitive attributes being known only at decision-making time be gracefully handled as well. 
The following example illustrates the points that a) individual fairness addresses situations that group fairness does not properly handle, and b) individual fairness must be carefully traded off against the utility of classifiers and rankings. \comment{ Table \ref{tbl-ranking-example} presents a real-world example showcasing the issue of unfairness for individual people. We randomly select 3 (of 57) popular job search queries on a well-known job search engine in Germany named \emph{Xing} (\url{www.xing.com}); that data was originally used in \citet{zehlike2017fa}. For each query we randomly select a user who belongs to a ``protected'' minority group, for example, ``female'' for query ``Field Engineer'' and ``male'' for query ``Daycare''. We then compute the $k$ nearest neighbors set ($k = 10$) of this individual, in terms of job qualifications and disregarding the gender attribute. Table \ref{tbl-ranking-example} presents the attributes of randomly selected individuals from this candidate set. We do not have knowledge about the workings of the proprietary algorithm that has produced the ranking. However, we observe that (nearly) equally qualified candidates have largely dissimilar outcomes. This situation cannot be resolved by any notion of group fairness, but calls for individual fairness. \noindent {\bf Example:} Table \ref{tbl-xing-ranking-statistical-parity} shows a real-world example for the issue of unfairness to individual people. Consider the ranked results for an employer's query ``Brand Strategist'' on the German job portal \emph{Xing}; that data was originally used in \citet{zehlike2017fa}. The top-10 results satisfy group fairness with regard to gender, as defined by \citet{zehlike2017fa} where a top-$k$ ranking $\tau$ is fair if for every prefix $\tau|_i = \langle\tau(1), \tau(2), \cdots, \tau(i)\rangle$ ($1 \leq i \leq k$) the set $\tau|_i$ satisfies statistical parity with statistical significance $\alpha$. 
However, the outcomes in Table \ref{tbl-xing-ranking-statistical-parity} are far from being fair for the individual users: people with very similar qualifications, such as work experience and education experience, ended up on ranks that are far apart (e.g., ranks 5 and 30). Due to the \emph{position bias} \cite{JoachimsR07} that arises when searchers browse result lists, this treats the low-ranked people quite unfairly. This demonstrates that applications can satisfy group-fairness policies, while still being unfair to individuals. \begin{table}[t!] \centering \tiny \noindent\resizebox{0.9\columnwidth}{!}{% \setlength\tabcolsep{1.5pt} \begin{tabular}{@{}ccccc@{}} \toprule Search Query & Work & Education & Candidate & Xing \\ & Experience & Experience & & Ranking \\ \midrule Brand Strategist & 146 & 57 & male & 1 \\ Brand Strategist & 327 & 0 & female & 2 \\ Brand Strategist & 502 & 74 & male & 3 \\ Brand Strategist & 444 & 56 & female & 4 \\ \rowcolor[gray]{.8} Brand Strategist & 139 & 25 & male & 5 \\ Brand Strategist & 110 & 65 & female & 6 \\ Brand Strategist & 12 & 73 & male & 7 \\ Brand Strategist & 99 & 41 & male & 8 \\ Brand Strategist & 42 & 51 & female & 9 \\ Brand Strategist & 220 & 102 & female & 10 \\ & & $\cdots$ & & \\ Brand Strategist & 3 & 107 & female & 20 \\ \rowcolor[gray]{.8} Brand Strategist & 123 & 56 & female & 30 \\ Brand Strategist & 3 & 3 & male & 40 \\ \bottomrule \end{tabular} } \caption{Top k results on \protect\url{www.xing.com} (Jan 2017) for an employer's job search query ``Brand Strategist''.} \label{tbl-xing-ranking-statistical-parity} \vspace{-7mm} \end{table} \noindent{\bf State of the Art and its Limitations:} Prior work on fairness for ranking tasks has exclusively focused on group fairness \cite{zehlike2017fa,yang2017measuring,singh2018fairness}, disregarding the dimension of individual fairness. For the restricted setting of binary classifiers, the most notable work on individual fairness is \cite{zemel2013learning}. 
That work addresses the fundamental trade-off between utility and fairness by defining a combined loss function to learn a low-rank data representation. The loss function reflects a weighted sum of classifier accuracy, statistical parity for a single pre-specified protected group, and individual fairness in terms of the reconstruction loss of the data. This model, called {\em LFR}, is powerful and elegant, but has major limitations: \squishlist \item It is geared for binary classifiers and does not generalize to a wider class of machine-learning tasks, ruling out regression models and hence learning-to-rank tasks. \item Its data representation is tied to a specific use case with a single protected group that needs to be specified upfront. Once learned, the representation cannot be dynamically adjusted to different settings later. \item Its objective function strives for a compromise over three components: application utility (i.e., classifier accuracy), group fairness and individual fairness. This tends to burden the learning with too many aspects that cannot be reconciled. \end{list} Our approach overcomes these limitations by developing a model for representation learning that focuses on individual fairness and offers greater flexibility and versatility. \noindent {\bf Approach and Contribution:} The approach that we put forward in this paper, called {\em iFair}, is to learn a generalized data representation that preserves the fairness-aware similarity between individual records while also aiming to minimize or bound the data loss. This way, we aim to reconcile individual fairness and application utility, and we intentionally disregard group fairness as an explicit criterion. iFair resembles the model of \cite{zemel2013learning} in that we also learn a representation via probabilistic clustering, using a form of gradient descent for optimization. 
However, our approach differs from \cite{zemel2013learning} on a number of major aspects: \squishlist \item iFair learns flexible and versatile representations, instead of committing to a specific downstream application like binary classifiers. This way, we open up applicability to arbitrary classifiers and support regression tasks (e.g., rating and ranking people) as well. \item iFair does not depend on a pre-specified protected group. Instead, it supports multiple sensitive attributes where the ``protected values'' are known only at run-time after the application is deployed. For example, we can easily handle situations where the critical value for gender is female for some ranking queries and male for others. \item iFair does not consider any notion of group fairness in its objective function. This design choice relaxes the optimization problem, and we achieve much better utility with very good fairness in both classification and ranking tasks. Hard group-fairness constraints, based on legal requirements, can be enforced post-hoc by adjusting the outputs of iFair-based classifiers or rankings. \end{list} \begin{figure*}[thb!] \centering \includegraphics[scale=0.35]{flowchart.pdf} \caption{Overview of decision-making pipeline.} \label{fig:flowchart} \vspace{-2mm} \end{figure*} The novel contributions of iFair are: 1) the first method, to the best of our knowledge, that provides individual fairness for learning-to-rank tasks; 2) an application-agnostic framework for learning low-rank data representations that reconcile individual fairness and utility such that application-specific choices on sensitive attributes and values do not require learning another representation; 3) experimental studies with classification and regression tasks for downstream applications, empirically showing that iFair can indeed reconcile strong individual fairness with high utility. The overall decision-making pipeline is illustrated in Figure \ref{fig:flowchart}. 
\iffalse \section{Introduction} \label{section:introduction} Previous work has focused narrowly on binary classification problems. The resulting fairness definitions therefore do not generalize well to other machine learning tasks. However, algorithmic fairness is a concern for all machine learning tasks. For instance, in a learning-to-rank model such as a people search engine (e.g., LinkedIn), the ranked order of individuals can negatively affect their exposure, and consequently the opportunities an individual receives. In our approach we optimize for utility and fairness independent of the downstream application in which the data might be used. As such, the representations learned are generic and can be applied to train any machine learning task. \cite{zemel2013learning} (\emph{LFR}) is closest to ours in that it also aims to learn a fair representation while accounting for utility. However, there are three major differences. First, individual fairness is less directly manifested in their model in that distances are captured in terms of prototypes in the transformed space. However, the triangle inequality loses some information about real distances between data points, thus affecting individual fairness. Second, their group fairness (parity) objective directly conflicts with individual fairness and utility. As such, it restricts the feasible solution space. Finally, utility in their model is tied to a specific task (binary classification), whereas our model more generally captures utility independent of the underlying ML task. A crucial shortcoming of \emph{LFR} is that learning fair representations requires access to the protected attributes of the user records. To elaborate, given input data $X$, \emph{LFR} separates the data into user records belonging to the sensitive group ($X^+$) and the non-sensitive group ($X^-$). It then learns different sets of model parameter values for the two groups. 
This can be a critical bottleneck for its usability in practice, for multiple reasons. For one, due to privacy and discrimination concerns, applications might not have access to this information. Even if we decouple the representation learning phase, and assume that during this phase the data sanitizers have access to the protected attributes, it would remain a concern for applying the model in future to new, unseen data (which is a common requirement amongst applications). Another shortcoming of \emph{LFR} is the use of the terms ``protected group'' and ``non-protected group'', and their treatment. Rather than calling an attribute a protected attribute (e.g., gender), they call one attribute value ``protected'' (e.g., female) and the other ``non-protected'' (e.g., male). However, in practice, which gender value belongs to the ``minority'' class might change from application to application. In fact, even within the same application, the roles might interchange. For instance, imagine a people search website used for recruitment (e.g., LinkedIn). Depending on the job that the recruiter is searching for, the candidates who belong to the minority would change. For an engineering job search query, ``female'' candidates might be a minority, whereas for an ``administrative'' job ``male'' might be a minority candidate. Further, it assumes data to have a single binary protected attribute. In reality, though, data is complex. It is common to have multiple protected attributes, including numerical (age) and categorical (race) data types. Our definition of fairness requires that an algorithm cannot have different probability distributions of outcomes over individuals of similar qualifications (non-protected feature representations). More concretely, two individuals having (nearly) the same attribute values, differing only on protected features, should have the same outcome. 
In other words, changing the value of protected attributes, all other attribute values remaining (nearly) the same, should not change the outcome. Note that this definition does not contradict utility: an algorithm satisfying our definition of \emph{fairness} can never probabilistically favor a less qualified over a better qualified individual. Thus, there exists no inherent fairness-accuracy trade-off as long as the ground truth is unbiased (except that incurred due to data loss in the model, which we shall discuss in detail later). Equally, an algorithm satisfying our definition of fairness cannot probabilistically favor one population group over another if outcomes over either population have similar utility. \fi \section{Acknowledgment} This research was supported by the ERC Synergy Grant ``imPACT'' (No. 610150) and ERC Advanced Grant ``Foundations for Fair Social Computing'' (No. 789373). \section{Model} \label{section:methodology} We consider user records that are fed into a learning algorithm towards algorithmic decision making. A fair algorithm should make its decisions solely based on non-sensitive attributes (e.g., technical qualification or education) and should disregard sensitive attributes that bear the risk of discriminating users (e.g., ethnicity/race). This dichotomy of attributes is specified upfront by domain experts and follows legal regulations and policies. Ideally, one should also consider strong correlations (e.g., geo-area correlated with ethnicity/race), but this is usually beyond the scope of the specification. We start by introducing preliminary notations and definitions. \spara{Input Data:} The input data for $M$ users with $N$ attributes is an $M \times N$ matrix $X$ with binary or numerical values (i.e., after unfolding or encoding categorical attributes). Without loss of generality, we assume that the attributes $1~..~l$ are non-protected and the attributes $l+1~..~N$ are protected. 
We denote the $i$-th user record consisting of all attributes as $x_i$ and only non-protected attributes as $x_{i}^{\ast}$. Note that, unlike in prior works, the set of protected attributes is allowed to be empty (i.e., $l=N$). Also, we do not assume any upfront specification of which attribute values form a protected group. So a downstream application can flexibly decide on the critical values (e.g., male vs. female or certain choices of citizenships) on a case-by-case basis. \spara{Output Data:} The goal is to transform the input records $x_i$ into representations $\tilde{x_i}$ that are directly usable by downstream applications and have better properties regarding fairness. Analogously to the input data, we can write the entire output of $\tilde{x_i}$ records as an $M \times N$ matrix $\tilde{X}$. \spara{Individually Fair Representation:} Inspired by the \citet{dwork2012fairness} notion of individual fairness, \textit{``individuals who are similar should be treated similarly''}, we propose the following definition for individual fairness: \begin{defn-eqn}{Individual Fairness}{Given a distance function $d$ in the $N-$dimensional data space, a mapping $\phi$ of input records $x_i$ into output records $\tilde{x_i}$ is individually fair if for every pair $x_i,x_j$ we have \smallskip \begin{equation}\label{defn-individualFairness} \norm{d(\phi(x_i),\phi(x_j)) - d(x_{i}^*,x_{j}^*)} \leq \epsilon \end{equation}} \end{defn-eqn} The definition requires that individuals who are (nearly) indistinguishable on their non-sensitive attributes in $X$ should also be (nearly) indistinguishable in their transformed representations $\tilde{X}$. For example, two people with (almost) the same technical qualifications for a certain job should have (almost) the same low-rank representation, regardless of whether they differ on protected attributes such as gender, religion or ethnic group. 
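To make the definition concrete, the following is a minimal sketch (not part of the original model) of a check for the individual-fairness condition under the Euclidean distance; the function name, the exhaustive pairwise loop, and the default tolerance are illustrative choices of ours.

```python
import numpy as np

def individually_fair(phi, X, protected_idx, eps=0.05):
    """Check the individual-fairness condition for a mapping phi on data X.

    phi: callable mapping an input record to its transformed representation.
    protected_idx: column indices of the protected attributes.
    Distances are Euclidean; eps is the tolerance from the definition.
    This is an illustrative sketch, not the authors' implementation.
    """
    protected = set(protected_idx)
    nonprot = [j for j in range(X.shape[1]) if j not in protected]
    X_star = X[:, nonprot]                # records restricted to x_i^*
    Z = np.array([phi(x) for x in X])     # transformed records phi(x_i)
    M = X.shape[0]
    for i in range(M):
        for j in range(i + 1, M):
            d_out = np.linalg.norm(Z[i] - Z[j])
            d_in = np.linalg.norm(X_star[i] - X_star[j])
            if abs(d_out - d_in) > eps:   # condition violated for this pair
                return False
    return True
```

For example, on two records that agree on all non-protected attributes, the identity mapping violates the condition (it lets the protected attribute change the distance), whereas a mapping that simply drops the protected attribute satisfies it.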
In more technical terms, a distance measure between user records should be preserved in the transformed space. Note that this definition intentionally deviates from the original definition of \emph{individual fairness} of \cite{dwork2012fairness} in that with $x_i^*, x_j^*$ we consider only the non-protected attributes of the original user records, as protected attributes should not play a role in the decision outcomes of an individual.
\subsection{Problem Formulation: Probabilistic Clustering}
As individual fairness needs to preserve similarities between records $x_i, x_j$, we cast the goal of computing good representations $\tilde{x_i},\tilde{x_j}$ into a formal problem of {\em probabilistic clustering}. We aim for $K$ clusters, each given in the form of a {\em prototype vector} $v_k$ ($k=1..K$), such that records $x_i$ are assigned to clusters by a record-specific probability distribution that reflects the distances of the records from the prototypes. This can be viewed as a low-rank representation of the input matrix $X$ with $K < M$, so that we reduce attribute values to a more compact form. As always with soft clustering, $K$ is a hyper-parameter.
\begin{defn-eqn}{Transformed Representation}{ The fair representation $\tilde{X}$, an $M \times N$ matrix of row-wise output vectors $\tilde{x_i}$, consists of \squishlist \item[(i)] $K < M$ prototype vectors $v_k$, each of dimensionality $N$, \item[(ii)] a probability distribution $u_i$, of dimensionality $K$, for each input record $x_i$, where $u_{ik}$ is the probability of $x_i$ belonging to the cluster of prototype $v_k$. \end{list} The representation $\tilde{x_i}$ is given by \vspace{2mm} \smallskip \begin{equation}\label{eq:transformed-representation} \tilde{x_i} = \sum_{k=1..K} u_{ik} \cdot v_k \vspace{2mm} \end{equation} or equivalently in matrix form: $\tilde{X} = U \times V^T$, where the rows of $U$ are the per-record probability distributions and the columns of $V^T$ are the prototype vectors.
} \end{defn-eqn} \label{defn:transformed-representation} \begin{defn-eqn}{Transformation Mapping}{ We denote the mapping $x_i \rightarrow \tilde{x_i}$ as $\phi$; that is, \smallskip \begin{equation}\label{eq:transformed-mapping} \phi(x_i) = \tilde{x_i} = \sum_{k=1..K} u_{ik} \cdot v_k \end{equation} using Equation \ref{eq:transformed-representation}. } \end{defn-eqn} \comment{ We avoid making assumptions on the learning algorithm and downstream application, striving for a generic and versatile model. We aim to learn a low-rank representation with individual fairness as an optimization goal while, at the same time, keeping utility as high as possible. \begin{defn-eqn}{Low-Rank Representation}{A low-rank representation of input data $X$ is a factorization given by \begin{equation} X \approx U V^T =: \tilde{X} \end{equation}} \end{defn-eqn} where $U$ is an $M \times K$ matrix and $V^T$ is a $K \times N$ matrix, both of rank $K$ with $K < N$ and $K < M$. The $K$ dimensions of the low-rank representation are latent, and the columns of the resulting $\tilde{X}$ may not be directly interpretable. The column vectors $v_k \in V$ act as prototype vectors in the same $N$-dimensional space as $x$. Each entry $U_{m,k}$ in the matrix $U$ gives the probability that datapoint $x_m$ maps to the $k$-th prototype $v_k$. Thus $U$ is an $M \times K$ matrix consisting of mixtures of these probabilities. \spara{\emph{Utility Objective:}} Without making any assumptions on the downstream application, the best way of ensuring high utility is to minimize the data loss induced by $\phi$.
\begin{defn-eqn}{Data Loss}{The reconstruction loss between $X$ and $\tilde{X}$ is the sum of squared errors \smallskip \begin{equation} L_{util}(X,\tilde{X}) = \sum\limits_{i = 1}^{M} ||x_{i} - \tilde{x}_{i}||_2^2 = \sum\limits_{i = 1}^{M} \sum\limits_{j = 1}^{N} (x_{ij} - \tilde{x}_{ij})^2 \end{equation}} \end{defn-eqn} \spara{\emph{Individual Fairness Objective:}} Following the rationale for Definition \ref{defn-individualFairness}, the desired transformation $\phi$ should preserve pairwise distances between data records on non-protected attributes. \begin{defn-eqn}{Fairness Loss}{ For input data $X$, with row-wise data records $x_i$, and its transformed representation $\tilde{X}$ with row-wise $\tilde{x_i}$, the fairness loss $L_{fair}$ is \smallskip \begin{equation} L_{fair}(X,\tilde{X}) = \sum_{i,j=1..M} \left(d(\tilde{x_i},\tilde{x_j}) - d(x^*_i,x^*_j)\right)^2 \end{equation}} \end{defn-eqn} \comment{ Following our definition of \emph{individual fairness}, we aim to learn a low-rank representation $\tilde{X}$ such that pairwise distances on non-protected attributes are preserved in the transformed space.
\noindent \begin{defn-eqn}{Pairwise-Distance Matrix}{For an $M \times N$ matrix $X$ and a distance measure $d$ between rows of the matrix, the pairwise-distance matrix $D_X$ is a symmetric $M \times M$ matrix with} \begin{equation*} \label{defn-pairwise-distance} D_X(i,j) = d(x_{i},x_{j}),\quad \forall x_i, x_j \in X \end{equation*} \end{defn-eqn} \begin{defn-eqn}{Fairness-Aware Distance Matrix}{For an $M \times N$ matrix $X$ with protected attributes $l+1~..~N$ and distance measure $d$, the fairness-aware distance matrix $D^f_X$ is the pairwise-distance matrix with distances computed only over columns $1~..~l$ (disregarding columns $l+1~..~N$) \smallskip \begin{equation*} \label{defn-fair-pairwise-distance} D^f_X(i,j) = d(x_{i}^\ast,x_{j}^\ast),\quad \forall x_i, x_j \in X \end{equation*}} \end{defn-eqn} \begin{defn-eqn}{Fairness Loss}{ For input data matrix $X$ and its low-rank representation $\tilde{X}$, the fairness loss $L_{fair}$ is the squared Frobenius norm given by \smallskip \begin{equation} L_{fair}(X,\tilde{X}) = \normF{D^f_X - D_{\tilde{X}}}_F^2 \end{equation}} \end{defn-eqn} \spara{\emph{Overall Objective Function:}} Combining the data loss and the fairness loss yields our final objective function that the learned representation should aim to minimize. \begin{defn-eqn}{Objective Function}{ The combined objective function is given by \begin{equation}\label{eq:objective} L = \lambda \cdot L_{util}(X,\tilde{X}) + \mu \cdot L_{fair}(X,\tilde{X}) \end{equation} where $\lambda$ and $\mu$ are hyper-parameters. } \end{defn-eqn}\label{defn:objective-function} \subsection{Probabilistic Prototype Learning} So far we have left the choice of the distance function $d$ open. Our methodology is general and can incorporate a wide suite of distance measures. However, for the actual optimization, we need to make a specific choice for $d$. In this paper, we focus on the family of {\em Minkowski p-metrics}, which is indeed a metric for $p \geq 1$.
A common choice is $p=2$, which yields a weighted Euclidean distance. \begin{defn-eqn}{Distance Function}{ The distance between two data records $x_i,x_j$ is \begin{equation}\label{eq:distance-function} d(x_i,x_j) = \big[\sum\limits_{n = 1}^{N} \alpha_n |x_{i,n} - x_{j,n}|^p\big]^{1/p} \end{equation} where $\alpha$ is an $N$-dimensional vector of tunable or learnable weights for the different data attributes. } \end{defn-eqn} This distance function $d$ is applicable to original data records $x_i$, transformed vectors $\tilde{x_i}$ and prototype vectors $v_k$ alike. In our model, we avoid the quadratic number of comparisons for all record pairs, and instead consider distances only between records and prototype vectors (cf. also \cite{zemel2013learning}). These distances can then be used to define the probability vectors $u_i$ that hold the probabilities of record $x_i$ belonging to the cluster with prototype $v_k$ (for $k=1..K$). To this end, we apply a softmax function to the distances between record and prototype vectors. \begin{defn-eqn}{Probability Vector}{ The probability vector $u_i$ for record $x_i$ is \begin{equation}\label{eq-U_m_k} u_{i,k} = \frac{\exp(-d(x_i,v_k))}{\sum\limits_{j=1}^K \exp(-d(x_i,v_j))} \end{equation} } \end{defn-eqn} The mapping $\phi$ that transforms $x_i$ into $\tilde{x_i}$ can be written as \begin{equation}\label{eq-prototype-mapping} \phi(x_i) =\sum\limits_{k = 1}^{K} \frac{\exp(-d(x_i,v_k))}{\sum\limits_{j=1}^K \exp(-d(x_i,v_j))} \cdot v_k \end{equation} With these definitions in place, the task of learning fair representations $\tilde{x_i}$ amounts to computing the $K$ prototype vectors $v_k$ and the $N$-dimensional weight vector $\alpha$ in $d$ such that the overall loss function $L$ is minimized.
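As an illustrative aside, the softmax-over-distances mapping above can be sketched in a few lines of Python. This is a hedged reconstruction under our own naming (the helpers `minkowski` and `transform` are not from the original work); it computes the weighted Minkowski distance of each record to each prototype, the per-record probability vector, and the transformed representation $\tilde{X} = U V^T$.

```python
import numpy as np

def minkowski(x, y, alpha, p=2):
    """Weighted Minkowski p-distance between two records (illustrative)."""
    return np.sum(alpha * np.abs(x - y) ** p) ** (1.0 / p)

def transform(X, V, alpha, p=2):
    """Map each record x_i to its representation sum_k u_ik * v_k.

    X: (M, N) input records; V: (K, N) prototype vectors;
    alpha: (N,) attribute weights. Returns (U, X_tilde).
    """
    # Distance of every record to every prototype.
    D = np.array([[minkowski(x, v, alpha, p) for v in V] for x in X])
    # Softmax over prototypes: rows of U are probability distributions.
    E = np.exp(-D)
    U = E / E.sum(axis=1, keepdims=True)
    # Transformed representation, in matrix form U V^T.
    X_tilde = U @ V
    return U, X_tilde
```

A record that coincides with one prototype and lies far from the others receives almost all of its probability mass on that prototype, so its transformed representation is close to the prototype itself.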
\begin{defn-eqn}{Optimization Objective}{ The optimization objective is to compute $v_k$ ($k=1..K$) and $\alpha_n$ ($n=1..N$) as the argmin of the loss function \bigskip \begin{align*} L& = \lambda \cdot L_{util}(X,\tilde{X}) ~~+~~ \mu \cdot L_{fair}(X,\tilde{X}) \\ & = \lambda \cdot \sum\limits_{i = 1}^{M} \sum\limits_{j = 1}^{N} (x_{ij} - \tilde{x}_{ij})^2 + \mu \cdot \smashoperator{\sum\limits_{i,j=1..M}} \left(d(\tilde{x_i},\tilde{x_j}) - d(x^*_i,x^*_j)\right)^2 \end{align*} where $\tilde{x}_{ij}$ and $d$ are substituted using Equations \ref{eq-prototype-mapping} and \ref{eq:distance-function}. } \end{defn-eqn} The $N$-dimensional weight vector $\alpha$ controls the influence of each attribute. Given our definition of individual fairness (which intentionally deviates from the original definition in \citet{dwork2012fairness}), a natural setting is to give no weight to the protected attributes, as these should not play any role in the similarity of (the qualifications of) users. In our experiments, we observe that giving (near-)zero weights to the protected attributes increases the fairness of the learned data representations (see Section \ref{section:experiments}). \subsection{Gradient Descent Optimization} Given this setup, the learning system minimizes the combined objective function \smallskip \begin{equation}\label{eq:objective} L = \lambda \cdot L_{util}(X,\tilde{X}) + \mu \cdot L_{fair}(X,\tilde{X}) \vspace{2mm} \end{equation} where $L_{util}$ is the data loss, $L_{fair}$ is the loss in individual fairness, and $\lambda$ and $\mu$ are hyper-parameters. We have two sets of {\em model parameters} to learn: \begin{enumerate} \item[(i)] $v_k$ ($k=1..K$), the $N$-dimensional prototype vectors, \item[(ii)] $\alpha$, the $N$-dimensional weight vector of the distance function in Equation \ref{eq:distance-function}.
\end{enumerate} \noindent We apply the L-BFGS algorithm \cite{liu1989limited}, a quasi-Newton method, to minimize Equation \ref{eq:objective} and learn the model parameters. \section{Related work}\label{section:related-work} \spara{Fairness Definitions and Measures:} Much of the work in algorithmic fairness has focused on supervised machine learning, specifically on the case of binary classification tasks. Several notions of \emph{group fairness} have been proposed in the literature. The most widely used criterion is {\em statistical parity} and its variants \cite{calders2009building,kamiran2010discrimination,kamishima2012considerations,pedreshi2008discrimination,feldman2015certifying,fish2016confidence}. Statistical parity states that the predictions $\hat{Y}$ of a classifier are fair if members of sensitive subgroups, such as people of certain nationalities or ethnic backgrounds, have an acceptance likelihood proportional to their share in the entire data population. This is equivalent to requiring that a priori knowledge of the classification outcome of an individual provides no information about her membership in such subgroups. However, for many applications, such as risk assessment for credit worthiness, statistical parity is neither feasible nor desirable. Alternative notions of group fairness have therefore been defined. \citet{hardt2016equality} proposed {\em equal odds}, which requires that the rates of true positives and false positives be the same across groups. This punishes classifiers that perform well only on specific groups. \citet{hardt2016equality} also proposed a relaxed version of equal odds called {\em equal opportunity}, which demands only the equality of true positive rates. Other definitions of group fairness include \emph{calibration} \cite{flores2016false, kleinberg_et_al:LIPIcs:2017:8156}, \emph{disparate mistreatment} \cite{zafar2017fairness}, and \emph{counterfactual fairness} \cite{DBLP:conf/nips/KusnerLRS17}.
Recent work highlights the inherent incompatibility between several notions of group fairness and the impossibility of achieving them simultaneously \cite{kleinberg_et_al:LIPIcs:2017:8156,chouldechova2017fair,friedler2016possibility,corbett2017algorithmic}. \citet{dwork2012fairness} gave the first definition of \emph{individual fairness}, arguing for the fairness of outcomes for individuals and not merely as a group statistic. Individual fairness mandates that \emph{similar individuals should be treated similarly}. \citet{dwork2012fairness} further develop a theoretical framework for mapping individuals to a probability distribution over outcomes which satisfies the Lipschitz property (i.e., distance preservation) in the mapping. In this paper, we follow up on this definition of individual fairness and present a generalized framework for learning individually fair representations of the data. \spara{Fairness in Machine Learning:} A parallel line of work in the area of algorithmic fairness uses a specific definition of fairness in order to design models that achieve fair outcomes. To this end, there are two general strategies. The first strategy consists of {\em de-biasing} the input data by appropriate preprocessing \cite{kamiran2010discrimination,pedreshi2008discrimination,feldman2015certifying}. This typically involves data perturbation, such as modifying the values of sensitive attributes or class labels in the training data to satisfy certain fairness conditions, such as an equal proportion of positive (negative) class labels in both protected and non-protected groups. The second strategy consists of designing fair algorithmic models based on \emph{constrained optimization} \cite{calders2009building,kamishima2012considerations,hardt2016equality,zafar2017fairness}. Here, fairness constraints are usually introduced as regularization terms in the objective function.
\spara{Fairness in IR:} Recently, definitions of group fairness have been extended to learning-to-rank tasks. \citet{yang2017measuring} introduced statistical parity in rankings. \citet{zehlike2017fa} built on \cite{yang2017measuring} and proposed to ensure statistical parity at all top-$k$ prefixes of the ranked results. \citet{singh2018fairness} proposed a generalized fairness framework for a larger class of group fairness definitions (e.g., disparate treatment and disparate impact). However, all this prior work has focused on group fairness alone. It implicitly assumes that individual fairness is taken care of by the ranking quality, disregarding situations where trade-offs arise between these two dimensions. The recent work of \citet{Biega:SIGIR2018} addresses individual fairness in rankings from the perspective of giving fair exposure to items over a series of rankings, thus mitigating the position bias in click probabilities. In their approach, they explicitly assume access to scores that are already individually fair. As such, their work is complementary to ours, as they do not address how such an individually fair score can be computed. \spara{Representation Learning:} The work of \citet{zemel2013learning} is the closest to ours in that it also learns low-rank representations by probabilistic mapping of data records. However, the method deviates from ours in important ways. First, its fair representations are tied to a particular classifier by assuming a binary classification problem with a pre-specified labeling target attribute and a single protected group. In contrast, the representations learned by {\em iFair} are agnostic to the downstream learning tasks and thus easily deployable for new applications. Second, the optimization in \cite{zemel2013learning} aims to combine three competing objectives: classifier accuracy, statistical parity, and data loss (as a proxy for individual fairness).
The {\em iFair} approach, on the other hand, addresses a more streamlined objective function by focusing on utility and individual fairness. Approaches similar to \cite{zemel2013learning} have been applied to learn censored representations for fair classifiers via adversarial training \cite{louizos2015variational,edwards2015censoring}, where group fairness criteria are optimized in the presence of an adversary. These approaches do not consider individual fairness at all.